WorldWideScience

Sample records for subgrid averaged models

  1. Recursive renormalization group theory based subgrid modeling

    Science.gov (United States)

    Zhou, YE

    1991-01-01

    Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed include studies of subgrid models to understand the effects of unresolved small-scale dynamics on the large-scale motion, which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulations.

  2. Analysis and modeling of subgrid scalar mixing using numerical data

    Science.gov (United States)

    Girimaji, Sharath S.; Zhou, YE

    1995-01-01

    Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze, and subsequently model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to the interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.

  3. A Lagrangian dynamic subgrid-scale model of turbulence

    Science.gov (United States)

    Meneveau, C.; Lund, T. S.; Cabot, W.

    1994-01-01

    A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.
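
    The pathline averaging described above can be sketched in a few lines: the Germano-identity contractions are relaxed toward their instantaneous values with the first-order Euler step mentioned in the abstract. The function below is a minimal, hedged illustration; the variable names, the relaxation-time choice, and the toy inputs are assumptions, not the authors' implementation.

        def lagrangian_dynamic_update(ilm, imm, LM, MM, dt, T):
            """One Euler step of the Lagrangian-averaged Germano-identity terms.

            ilm, imm : running pathline averages of L_ij M_ij and M_ij M_ij
            LM, MM   : instantaneous contractions at the interpolated particle position
            dt       : time step
            T        : assumed Lagrangian relaxation time scale
            """
            eps = (dt / T) / (1.0 + dt / T)          # blending weight of the relaxation step
            ilm_new = eps * LM + (1.0 - eps) * ilm
            imm_new = eps * MM + (1.0 - eps) * imm
            # Clip the numerator so the model stays purely dissipative (C_s^2 >= 0).
            ilm_new = max(ilm_new, 0.0)
            cs2 = ilm_new / imm_new if imm_new > 0.0 else 0.0
            return ilm_new, imm_new, cs2

        # Toy usage with made-up numbers:
        ilm, imm = 0.02, 1.0
        for _ in range(10):
            ilm, imm, cs2 = lagrangian_dynamic_update(ilm, imm, LM=0.03, MM=1.2, dt=1e-3, T=5e-2)
        print(cs2)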

  4. High-resolution subgrid models: background, grid generation, and implementation

    Science.gov (United States)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows a detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals including a few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment. Based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with the comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC-like hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
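
    To make the phrase "correct representation of the volume up to measurement accuracy" concrete, the sketch below tabulates the wet volume and wetted area of a single coarse cell from subgrid bathymetry for a given water level; the bathymetry array, pixel size, and water level are hypothetical, and this is only the bookkeeping idea behind the approach, not the Casulli algorithm itself.

        import numpy as np

        def wet_volume_and_area(z_bed, eta, pixel_area):
            """Wet volume and wetted area of one coarse cell from subgrid bathymetry.

            z_bed      : 2-D array of subgrid bed elevations inside the coarse cell [m]
            eta        : free-surface elevation in the cell [m]
            pixel_area : area of one subgrid pixel [m^2]
            """
            depth = np.maximum(eta - z_bed, 0.0)           # partial wetting: dry pixels contribute zero
            volume = depth.sum() * pixel_area              # [m^3]
            wet_area = np.count_nonzero(depth) * pixel_area
            return volume, wet_area

        # Hypothetical 10 m x 10 m cell resolved by 1 m bathymetry pixels
        rng = np.random.default_rng(0)
        z_bed = rng.uniform(-2.0, 1.0, size=(10, 10))
        print(wet_volume_and_area(z_bed, eta=0.5, pixel_area=1.0))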

  5. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (which is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization.
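
    For readers unfamiliar with how the model coefficient is "evaluated dynamically", the sketch below shows a Germano/Lilly-style least-squares estimate on a one-dimensional periodic field; the box test filter, the 1-D surrogate, and the sample velocity are illustrative assumptions and not the GUST implementation, which works with the full three-dimensional strain-rate tensor.

        import numpy as np

        def box_filter(f, width):
            """Top-hat filter of odd width on a periodic 1-D field."""
            return sum(np.roll(f, s) for s in range(-(width // 2), width // 2 + 1)) / width

        def dynamic_coefficient(u, dx, alpha=2):
            """Least-squares (Lilly) estimate of the Smagorinsky coefficient C = C_s**2
            for a 1-D velocity field, with a test filter alpha times wider than the grid filter."""
            test = lambda f: box_filter(f, 2 * alpha + 1)
            S = np.gradient(u, dx)
            St = np.gradient(test(u), dx)
            L = test(u * u) - test(u) * test(u)                    # resolved (Leonard-type) stress
            M = -2.0 * ((alpha * dx) ** 2 * np.abs(St) * St
                        - dx ** 2 * test(np.abs(S) * S))           # model-difference term
            # Note: the raw estimate can be negative; production codes average or clip it.
            return np.mean(L * M) / np.mean(M * M)

        x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
        u = np.sin(x) + 0.3 * np.sin(7.0 * x + 1.0)
        print(dynamic_coefficient(u, dx=x[1] - x[0]))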

  6. A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows

    International Nuclear Information System (INIS)

    Singh, Satbir; You, Donghyun

    2013-01-01

    Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► Overcomes some of the difficulties associated with eddy-viscosity closures. ► Does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming a scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations.
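
    A schematic one-dimensional sketch of the decomposition described above is given below: the modified Leonard term is computed explicitly by refiltering the resolved field, while the cross and subgrid-scale Reynolds stresses are lumped into an eddy-viscosity term with a fixed coefficient. The dynamic global-equilibrium determination of that coefficient is omitted, and the filter, coefficient value, and sample field are assumptions for illustration only, not the formulation of the paper.

        import numpy as np

        def box_filter(f, width=5):
            """Top-hat filter on a periodic 1-D field (explicit refiltering of the resolved field)."""
            return sum(np.roll(f, s) for s in range(-(width // 2), width // 2 + 1)) / width

        def mixed_model_stress(u, dx, c_global=0.02, width=5):
            """Schematic 1-D mixed model: explicit modified-Leonard term plus an
            eddy-viscosity term standing in for the unresolved (cross + Reynolds) part."""
            f = lambda g: box_filter(g, width)
            leonard = f(u * u) - f(u) * f(u)                  # scale-similarity (modified Leonard) stress
            dudx = np.gradient(u, dx)
            eddy_visc = -2.0 * c_global * (width * dx) ** 2 * np.abs(dudx) * dudx
            return leonard + eddy_visc

        x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
        u = np.sin(x) + 0.2 * np.cos(5.0 * x)
        print(mixed_model_stress(u, dx=x[1] - x[0])[:4])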

  7. Subgrid Modeling of AGN-driven Turbulence in Galaxy Clusters

    Science.gov (United States)

    Scannapieco, Evan; Brüggen, Marcus

    2008-10-01

    Hot, underdense bubbles powered by active galactic nuclei (AGNs) are likely to play a key role in halting catastrophic cooling in the centers of cool-core galaxy clusters. We present three-dimensional simulations that capture the evolution of such bubbles, using an adaptive mesh hydrodynamic code, FLASH3, to which we have added a subgrid model of turbulence and mixing. While pure hydro simulations indicate that AGN bubbles are disrupted into resolution-dependent pockets of underdense gas, proper modeling of subgrid turbulence indicates that this is a poor approximation to a turbulent cascade that continues far beyond the resolution limit. Instead, Rayleigh-Taylor instabilities act to effectively mix the heated region with its surroundings, while at the same time preserving it as a coherent structure, consistent with observations. Thus, bubbles are transformed into hot clouds of mixed material as they move outward in the hydrostatic intracluster medium (ICM), much as large airbursts lead to a distinctive "mushroom cloud" structure as they rise in the hydrostatic atmosphere of Earth. Properly capturing the evolution of such clouds has important implications for many ICM properties. In particular, it significantly changes the impact of AGNs on the distribution of entropy and metals in cool-core clusters such as Perseus.

  8. Subgrid models for mass and thermal diffusion in turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, David H [Los Alamos National Laboratory]; Lim, Hyunkyung [Stony Brook Univ.]; Li, Xiao-Lin [Stony Brook Univ.]; Glimm, James G [Stony Brook Univ.]

    2008-01-01

    We are concerned with the chaotic flow fields of turbulent mixing. Chaotic flow is found in an extreme form in multiply shocked Richtmyer-Meshkov unstable flows. The goal of a converged simulation for this problem is twofold: to obtain converged solutions for macro solution features, such as the trajectories of the principal shock waves, mixing zone edges, and mean densities and velocities within each phase, and also for such micro solution features as the joint probability distributions of the temperature and species concentration. We introduce parameterized subgrid models of mass and thermal diffusion, to define large eddy simulations (LES) that replicate the micro features observed in the direct numerical simulation (DNS). The Schmidt numbers and Prandtl numbers are chosen to represent typical liquid, gas and plasma parameter values. Our main result is to explore the variation of the Schmidt, Prandtl and Reynolds numbers by three orders of magnitude, and the mesh by a factor of 8 per linear dimension (up to 3200 cells per dimension), to allow exploration of both DNS and LES regimes and verification of the simulations for both macro and micro observables. We find mesh convergence for key properties describing the molecular level of mixing, including chemical reaction rates between the distinct fluid species. We find results nearly independent of Reynolds number for Re = 300, 6000, 600K. Methodologically, the results are also new. In common with the shock capturing community, we allow and maintain sharp solution gradients, and we enhance these gradients through use of front tracking. In common with the turbulence modeling community, we include subgrid scale models with no adjustable parameters for LES. To the authors' knowledge, these two methodologies have not been previously combined. In contrast to both of these methodologies, our use of Front Tracking, with DNS or LES resolution of the momentum equation at or near the Kolmogorov scale, but without

  9. Subgrid models for mass and thermal diffusion in turbulent mixing

    International Nuclear Information System (INIS)

    Lim, H; Yu, Y; Glimm, J; Li, X-L; Sharp, D H

    2010-01-01

    We propose a new method for the large eddy simulation (LES) of turbulent mixing flows. The method yields convergent probability distribution functions (PDFs) for temperature and concentration and a chemical reaction rate when applied to reshocked Richtmyer-Meshkov (RM) unstable flows. Because such a mesh convergence is an unusual and perhaps original capability for LES of RM flows, we review previous validation studies of the principal components of the algorithm. The components are (i) a front tracking code, FronTier, to control numerical mass diffusion and (ii) dynamic subgrid scale (SGS) models to compensate for unresolved scales in the LES. We also review the relevant code comparison studies. We compare our results to a simple model based on 1D diffusion, taking place in the geometry defined statistically by the interface (the 50% isoconcentration surface between the two fluids). Several conclusions important to physics could be drawn from our study. We model chemical reactions with no closure approximations beyond those in the LES of the fluid variables itself, and as with dynamic SGS models, these closures contain no adjustable parameters. The chemical reaction rate is specified by the joint PDF for temperature and concentration. We observe a bimodal distribution for the PDF and we observe significant dependence on fluid transport parameters.

  10. A moving subgrid model for simulation of reflood heat transfer

    International Nuclear Information System (INIS)

    Frepoli, Cesare; Mahaffy, John H.; Hochreiter, Lawrence E.

    2003-01-01

    In the quench front and froth region, the thermal-hydraulic parameters experience a sharp axial variation. The heat transfer regime changes from single-phase liquid, to nucleate boiling, to transition boiling and finally to film boiling in a small axial distance. One of the major limitations of all the current best-estimate codes is that a relatively coarse mesh is used to solve the complex fluid flow and heat transfer problem in the proximity of the quench front during reflood. The use of a fine axial mesh for the entire core becomes prohibitive because of the large computational costs involved. Moreover, as the mesh size decreases, the standard numerical methods based on a semi-implicit scheme tend to become unstable. A subgrid model was developed to resolve the complex thermal-hydraulic problem at the quench front and froth region. This model is a Fine Hydraulic Moving Grid (FHMG) that overlies a coarse Eulerian mesh in the proximity of the quench front and froth region. The fine mesh moves in the core and follows the quench front as it advances in the core while the rods cool and quench. The FHMG software package was developed and implemented into the COBRA-TF computer code. This paper presents the model and discusses preliminary results obtained with the COBRA-TF/FHMG computer code.

  11. Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD

    Science.gov (United States)

    Agostinelli, Giulia; Baglietto, Emilio

    2017-11-01

    The development of first-principle-based constitutive equations for the Eulerian-Eulerian CFD modeling of annular flow is a major priority to extend the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film and their deposition. Here we focus first on deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. While many integral correlations for deposition have been proposed for lumped-parameter applications, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, thereby negating the scope of CFD application. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.

  12. On the Representation of Subgrid Microtopography Effects in Process-based Hydrologic Models

    Science.gov (United States)

    Jan, A.; Painter, S. L.; Coon, E. T.

    2017-12-01

    The increased availability of high-resolution digital elevation data is enabling process-based hydrologic modeling on finer and finer scales. However, spatial variability in surface elevation (microtopography) exists below the scale of a typical hyper-resolution grid cell and has the potential to play a significant role in water retention, runoff, and surface/subsurface interactions. Though the concept of microtopographic features (depressions, obstructions) and the associated implications on flow and discharge are well established, representing those effects in watershed-scale integrated surface/subsurface hydrology models remains a challenge. Using the complex and coupled hydrologic environment of the Arctic polygonal tundra as an example, we study the effects of submeter topography and present a subgrid model parameterized by small-scale spatial heterogeneities for use in hyper-resolution models with polygons at a scale of 15-20 meters forming the surface cells. The subgrid model alters the flow and storage terms in the diffusion wave equation for surface flow. We compare our results against sub-meter scale simulations (which act as a benchmark for our simulations) and hyper-resolution models without the subgrid representation. The initiation of runoff in the fine-scale simulations is delayed and the recession curve is slowed relative to simulated runoff using the hyper-resolution model with no subgrid representation. Our subgrid modeling approach improves the representation of runoff and water retention relative to models that ignore subgrid topography. We evaluate different strategies for parameterizing the subgrid model and present a classification-based method to efficiently move forward to larger landscapes. This work was supported by the Interoperable Design of Extreme-scale Application Software (IDEAS) project and the Next-Generation Ecosystem Experiments-Arctic (NGEE Arctic) project. NGEE-Arctic is supported by the Office of Biological and Environmental Research in the

  13. Subgrid Parameterization of the Soil Moisture Storage Capacity for a Distributed Rainfall-Runoff Model

    Directory of Open Access Journals (Sweden)

    Weijian Guo

    2015-05-01

    Spatial variability plays an important role in nonlinear hydrologic processes. Due to the limitation of computational efficiency and data resolution, subgrid variability is usually assumed to be uniform for most grid-based rainfall-runoff models, which leads to the scale-dependence of model performances. In this paper, the scale effect on the Grid-Xinanjiang model was examined. The bias of the estimation of precipitation, runoff, evapotranspiration and soil moisture at the different grid scales, along with the scale-dependence of the effective parameters, highlights the importance of well representing the subgrid variability. This paper presents a subgrid parameterization method to incorporate the subgrid variability of the soil storage capacity, which is a key variable that controls runoff generation and partitioning in the Grid-Xinanjiang model. In light of the similar spatial pattern and physical basis, the soil storage capacity is correlated with the topographic index, whose spatial distribution can more readily be measured. A beta distribution is introduced to represent the spatial distribution of the soil storage capacity within the grid. The results derived from the Yanduhe Basin show that the proposed subgrid parameterization method can effectively correct the watershed soil storage capacity curve. Compared to the original Grid-Xinanjiang model, the model performances are quite consistent at the different grid scales when the subgrid variability is incorporated. This subgrid parameterization method reduces the recalibration necessity when the Digital Elevation Model (DEM) resolution is changed. Moreover, it improves the potential for the application of the distributed model in ungauged basins.
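
    As a concrete illustration of the beta-distributed storage capacity idea, the sketch below samples point capacities from a beta distribution and computes the saturation-excess runoff and saturated area fraction for one rainfall event; the shape parameters, rainfall depth, and initial storage are hypothetical, and the uniform initial point storage is a simplification rather than the Grid-Xinanjiang formulation.

        import numpy as np

        def saturation_excess_runoff(rain, w0, wm_mean, a=2.0, b=3.0, n=100_000, seed=0):
            """Saturation-excess runoff from a grid cell whose point storage capacity
            is beta-distributed with shape (a, b), given an areal-mean capacity wm_mean,
            an assumed initial storage w0, and a rainfall depth `rain` (all in mm)."""
            rng = np.random.default_rng(seed)
            wm_max = wm_mean * (a + b) / a                   # scale so the areal mean equals wm_mean
            wm = rng.beta(a, b, size=n) * wm_max             # point storage capacities
            w = np.minimum(w0, wm)                           # simplified initial point storage
            runoff = np.maximum(w + rain - wm, 0.0)          # excess above local capacity runs off
            return runoff.mean(), np.mean(w + rain >= wm)    # areal runoff depth, saturated fraction

        print(saturation_excess_runoff(rain=30.0, w0=50.0, wm_mean=100.0))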

  14. Sub-Grid Modeling of Electrokinetic Effects in Micro Flows

    Science.gov (United States)

    Chen, C. P.

    2005-01-01

    Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (Lab-on-Chip devices). To successfully design and operate microfluidic systems, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nanometer dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomaly (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows poses a significant numerical challenge due to the fact that the sizes of the double layer (10 nm up to microns) are very thin compared to the channel width (which can be up to hundreds of micrometers). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-D geometries. Existing approaches using "slip" wall velocity and augmented double layer are difficult to use when the flow geometry is complicated, e.g. flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can be accounted for at the same time. With this
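
    The scale separation that motivates the sub-grid treatment can be checked with the standard Debye-length formula for a dilute symmetric electrolyte (a textbook relation, not part of the sub-grid method itself); the 1 mM concentration below is an example value.

        import math

        def debye_length(c_molar, T=298.15, z=1, eps_r=78.5):
            """Debye length [m] of a symmetric z:z aqueous electrolyte of molarity c_molar."""
            e = 1.602176634e-19        # elementary charge [C]
            kB = 1.380649e-23          # Boltzmann constant [J/K]
            eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]
            NA = 6.02214076e23         # Avogadro constant [1/mol]
            n0 = c_molar * 1.0e3 * NA  # number density of each ion species [1/m^3]
            return math.sqrt(eps_r * eps0 * kB * T / (2.0 * n0 * z**2 * e**2))

        # Roughly 10 nm for a 1 mM solution versus a channel width of ~100 um:
        # about four orders of magnitude of scale separation to bridge.
        print(debye_length(1.0e-3))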

  15. Enhancing the representation of subgrid land surface characteristics in land surface models

    Directory of Open Access Journals (Sweden)

    Y. Ke

    2013-09-01

    Land surface heterogeneity has long been recognized as important to represent in land surface models. In most existing land surface models, the spatial variability of surface cover is represented as subgrid composition of multiple surface cover types, although subgrid topography also has major controls on surface processes. In this study, we developed a new subgrid classification method (SGC) that accounts for variability of both topography and vegetation cover. Each model grid cell was represented with a variable number of elevation classes and each elevation class was further described by a variable number of vegetation types optimized for each model grid given a predetermined total number of land response units (LRUs). The subgrid structure of the Community Land Model (CLM) was used to illustrate the newly developed method in this study. Although the new method increases the computational burden in the model simulation compared to the CLM subgrid vegetation representation, it greatly reduced the variations of elevation within each subgrid class and is able to explain at least 80% of the total subgrid plant functional types (PFTs). The new method was also evaluated against two other subgrid methods (SGC1 and SGC2) that assigned fixed numbers of elevation and vegetation classes for each model grid (SGC1: M elevation bands–N PFTs method; SGC2: N PFTs–M elevation bands method). Implemented at five model resolutions (0.1°, 0.25°, 0.5°, 1.0° and 2.0°) with three maximum-allowed total numbers of LRUs (i.e., NLRU) of 24, 18 and 12 over North America (NA), the new method yielded more computationally efficient subgrid representation compared to SGC1 and SGC2, particularly at coarser model resolutions and moderate computational intensity (NLRU = 18). It also explained the most PFT and elevation variability that is more homogeneously distributed spatially. The SGC method will be implemented in CLM over the NA continent to assess its impacts on
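
    The sketch below illustrates the general flavor of such a classification: subgrid pixels are grouped into elevation bands and, within each band, only the most frequent vegetation types are kept until a land-response-unit budget is met. The band count, the even split of the budget, and the synthetic pixel data are illustrative assumptions, not the SGC algorithm as implemented for CLM.

        import numpy as np
        from collections import Counter

        def classify_subgrid(elev, pft, n_bands=4, max_lru=18):
            """Group subgrid pixels into elevation bands x dominant PFTs, limited to max_lru units.

            elev, pft : 1-D arrays of subgrid pixel elevation [m] and PFT index
            Returns a list of (band, pft, area_fraction) land response units.
            """
            edges = np.quantile(elev, np.linspace(0.0, 1.0, n_bands + 1))
            band = np.clip(np.searchsorted(edges, elev, side='right') - 1, 0, n_bands - 1)
            per_band = max(max_lru // n_bands, 1)          # simple even split of the LRU budget
            units = []
            for b in range(n_bands):
                counts = Counter(pft[band == b])
                kept = counts.most_common(per_band)        # keep the dominant PFTs in this band
                total = sum(counts.values())
                units += [(b, p, c / total) for p, c in kept]
            return units

        rng = np.random.default_rng(1)
        elev = rng.normal(500.0, 120.0, size=5000)
        pft = rng.integers(0, 10, size=5000)
        for unit in classify_subgrid(elev, pft)[:6]:
            print(unit)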

  16. On the TFNS Subgrid Models for Liquid-Fueled Turbulent Combustion

    Science.gov (United States)

    Liu, Nan-Suey; Wey, Thomas

    2014-01-01

    This paper describes the time-filtered Navier-Stokes (TFNS) approach capable of capturing unsteady flow structures important for turbulent mixing in the combustion chamber and two different subgrid models used to emulate the major processes occurring in the turbulence-chemistry interaction. These two subgrid models are termed as LEM-like model and EUPDF-like model (Eulerian probability density function), respectively. Two-phase turbulent combustion in a single-element lean-direct-injection (LDI) combustor is calculated by employing the TFNS/LEM-like approach as well as the TFNS/EUPDF-like approach. Results obtained from the TFNS approach employing these two different subgrid models are compared with each other, along with the experimental data, followed by more detailed comparison between the results of an updated calculation using the TFNS/LEM-like model and the experimental data.

  17. Parameterizing Subgrid-Scale Orographic Drag in the High-Resolution Rapid Refresh (HRRR) Atmospheric Model

    Science.gov (United States)

    Toy, M. D.; Olson, J.; Kenyon, J.; Smirnova, T. G.; Brown, J. M.

    2017-12-01

    The accuracy of wind forecasts in numerical weather prediction (NWP) models is improved when the drag forces imparted on atmospheric flow by subgrid-scale orography are included. Without such parameterizations, only the terrain resolved by the model grid, along with the small-scale obstacles parameterized by the roughness lengths, can have an effect on the flow. This neglects the impacts of subgrid-scale terrain variations, which typically leads to wind speeds that are too strong. Using statistical information about the subgrid-scale orography, such as the mean and variance of the topographic height within a grid cell, the drag forces due to flow blocking, gravity wave drag, and turbulent form drag are estimated and distributed vertically throughout the grid cell column. We recently implemented the small-scale gravity wave drag parameterization of Steeneveld et al. (2008) and Tsiringakis et al. (2017) for stable planetary boundary layers, and the turbulent form drag parameterization of Beljaars et al. (2004) in the High-Resolution Rapid Refresh (HRRR) NWP model developed at the National Oceanic and Atmospheric Administration (NOAA). As a result, a high surface wind speed bias in the model has been reduced and a small improvement in the maintenance of stable layers has also been found. We present the results of experiments with the subgrid-scale orographic drag parameterization for the regional HRRR model, as well as for a global model in development at NOAA, showing the direct and indirect impacts.
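
    Purely as a schematic illustration of how such a tendency can be assembled, the sketch below scales a drag term with the subgrid orography variance and an assumed vertical decay; the coefficient, decay length, and exponent are placeholders and do not reproduce the Beljaars et al. (2004) or Steeneveld et al. (2008) formulations used in the HRRR.

        import numpy as np

        def form_drag_tendency(u, v, z, sigma_oro, c_drag=5.0e-9, z_decay=1500.0):
            """Schematic subgrid orographic form-drag tendency (du/dt, dv/dt) [m/s^2].

            u, v      : wind components on model levels [m/s]
            z         : height of the levels above ground [m]
            sigma_oro : standard deviation of subgrid terrain height in the cell [m]
            The drag grows with sigma_oro**2 and |V|*V and decays with height
            (all constants here are placeholders, not published values).
            """
            speed = np.hypot(u, v)
            decay = np.exp(-(z / z_decay) ** 1.5)          # assumed vertical decay profile
            coef = -c_drag * sigma_oro**2 * decay * speed
            return coef * u, coef * v

        z = np.array([10.0, 50.0, 150.0, 400.0, 1000.0, 2500.0])
        u = np.full_like(z, 10.0)
        v = np.zeros_like(z)
        print(form_drag_tendency(u, v, z, sigma_oro=80.0)[0])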

  18. Structure-Preserving Variational Multiscale Modeling of Turbulent Incompressible Flow with Subgrid Vortices

    Science.gov (United States)

    Evans, John; Coley, Christopher; Aronson, Ryan; Nelson, Corey

    2017-11-01

    In this talk, a large eddy simulation methodology for turbulent incompressible flow will be presented which combines the best features of divergence-conforming discretizations and the residual-based variational multiscale approach to large eddy simulation. In this method, the resolved motion is represented using a divergence-conforming discretization, that is, a discretization that preserves the incompressibility constraint in a pointwise manner, and the unresolved fluid motion is explicitly modeled by subgrid vortices that lie within individual grid cells. The evolution of the subgrid vortices is governed by dynamical model equations driven by the residual of the resolved motion. Consequently, the subgrid vortices appropriately vanish for laminar flow and fully resolved turbulent flow. As the resolved velocity field and subgrid vortices are both divergence-free, the methodology conserves mass in a pointwise sense and admits discrete balance laws for energy, enstrophy, and helicity. Numerical results demonstrate the methodology yields improved results versus state-of-the-art eddy viscosity models in the context of transitional, wall-bounded, and rotational flow when a divergence-conforming B-spline discretization is utilized to represent the resolved motion.

  19. Resolving terrestrial ecosystem processes along a subgrid topographic gradient for an earth-system model

    Science.gov (United States)

    Subin, Z M; Milly, Paul C.D.; Sulman, B N; Malyshev, Sergey; Shevliakova, E

    2014-01-01

    Soil moisture is a crucial control on surface water and energy fluxes, vegetation, and soil carbon cycling. Earth-system models (ESMs) generally represent an areal-average soil-moisture state in gridcells at scales of 50–200 km and as a result are not able to capture the nonlinear effects of topographically-controlled subgrid heterogeneity in soil moisture, in particular where wetlands are present. We addressed this deficiency by building a subgrid representation of hillslope-scale topographic gradients, TiHy (Tiled-hillslope Hydrology), into the Geophysical Fluid Dynamics Laboratory (GFDL) land model (LM3). LM3-TiHy models one or more representative hillslope geometries for each gridcell by discretizing them into land model tiles hydrologically coupled along an upland-to-lowland gradient. Each tile has its own surface fluxes, vegetation, and vertically-resolved state variables for soil physics and biogeochemistry. LM3-TiHy simulates a gradient in soil moisture and water-table depth between uplands and lowlands in each gridcell. Three hillslope hydrological regimes appear in non-permafrost regions in the model: wet and poorly-drained, wet and well-drained, and dry; with large, small, and zero wetland area predicted, respectively. Compared to the untiled LM3 in stand-alone experiments, LM3-TiHy simulates similar surface energy and water fluxes in the gridcell-mean. However, in marginally wet regions around the globe, LM3-TiHy simulates shallow groundwater in lowlands, leading to higher evapotranspiration, lower surface temperature, and higher leaf area compared to uplands in the same gridcells. Moreover, more than four-fold larger soil carbon concentrations are simulated globally in lowlands as compared with uplands. We compared water-table depths to those simulated by a recent global model-observational synthesis, and we compared wetland and inundated areas diagnosed from the model to observational datasets. The comparisons demonstrate that LM3-TiHy has the

  20. A simple dynamic subgrid-scale model for LES of particle-laden turbulence

    Science.gov (United States)

    Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz

    2017-04-01

    In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
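
    The elliptic differential filter at the heart of the model can be illustrated on a one-dimensional periodic signal: solving (1 - delta^2 d^2/dx^2) u_bar = u in Fourier space gives the filtered field, and u - u_bar serves as an estimate of the subgrid-scale velocity. The spectral solve, the filter width, and the test signal below are assumptions for illustration, not the solver or the dynamic width-selection procedure of the paper.

        import numpy as np

        def differential_filter(u, dx, delta):
            """Elliptic differential filter on a periodic 1-D field:
            solve (1 - delta**2 d^2/dx^2) u_bar = u in Fourier space."""
            k = 2.0 * np.pi * np.fft.fftfreq(len(u), d=dx)
            u_hat = np.fft.fft(u)
            u_bar_hat = u_hat / (1.0 + (delta * k) ** 2)
            return np.real(np.fft.ifft(u_bar_hat))

        x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
        u = np.sin(x) + 0.2 * np.sin(25.0 * x) + 0.05 * np.sin(90.0 * x)
        u_bar = differential_filter(u, dx=x[1] - x[0], delta=0.05)
        u_sgs = u - u_bar          # estimated subgrid-scale velocity seen by the particles
        print(np.std(u_sgs))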

  1. Unsteady Flame Embedding (UFE) Subgrid Model for Turbulent Premixed Combustion Simulations

    KAUST Repository

    El-Asrag, Hossam

    2010-01-04

    We present a formulation for an unsteady subgrid model for premixed combustion in the flamelet regime. Since chemistry occurs at the unresolvable scales, it is necessary to introduce a subgrid model that accounts for the multi-scale nature of the problem using the information available on the resolved scales. Most of the current models are based on the laminar flamelet concept, and often neglect the unsteady effects. The proposed model's primary objective is to encompass many of the unsteady features and history effects of flame/turbulence interactions. In addition, it provides a dynamic and accurate approach for computing the subgrid flame propagation velocity. The unsteady flame embedding approach (UFE) treats the flame as an ensemble of locally one-dimensional flames. A set of elemental one-dimensional flames is used to describe the turbulent flame structure at the subgrid level. The stretched flame calculations are performed on the stagnation line of a strained flame using the unsteady filtered strain rate computed from the resolved grid. The flame iso-surface is tracked using an accurate high-order level set formulation to propagate the flame interface at the coarse resolution with minimum numerical diffusion. In this paper, the solver and the model components are introduced and used to investigate two unsteady flames with different Lewis numbers in the thin reaction zone regime. The results show that the UFE model captures the unsteady flame-turbulence interactions and the flame propagation speed reasonably well. Higher propagation speed is observed for the lower-than-unity Lewis number flame because of the impact of differential diffusion.

  2. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.

  3. Dynamic subgrid scale model used in a deep bundle turbulence prediction using the large eddy simulation method

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    1996-01-01

    Turbulence is one of the most commonly occurring phenomena of engineering interest in the field of fluid mechanics. Since most flows are turbulent, there is a significant payoff for improved predictive models of turbulence. One area of concern is the turbulent buffeting forces experienced by the tubes in steam generators of nuclear power plants. Although the Navier-Stokes equations are able to describe turbulent flow fields, the large number of scales of turbulence limits practical flow field calculations with current computing power. The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (Smagorinsky, 1963), which is used as the base model for the dynamic subgrid scale model, and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with experimental data. Satisfactory turbulence characteristics are observed through flow visualization.

  4. Advanced subgrid modeling for Multiphase CFD in CASL VERA tools

    International Nuclear Information System (INIS)

    Baglietto, Emilio; Gilman, Lindsey; Sugrue, Rosie

    2014-01-01

    This work introduces advanced modeling capabilities that are being developed to improve the accuracy and extend the applicability of Multiphase CFD. Specifics of the advanced and hardened boiling closure model are described in this work. The development has been driven by new physical understanding, derived from the innovative experimental techniques available at MIT. A new experimentally based mechanistic approach to heat partitioning is proposed. The model introduces a new description of the bubble evaporation, sliding and interaction on the heated surface to accurately capture the evaporation occurring at the heated surface, while also tracking the local surface conditions. The model is being assembled to cover an extended application area, up to Critical Heat Flux (CHF). The accurate description of the bubble interaction, effective microlayer and dry surface area are considered to be the enabling quantities for innovative CHF-capturing methodologies. Further, improved mechanistic force-balance models for bubble departure predictions and lift-off diameter predictions are implemented in the model. Studies demonstrate the influence of the newly implemented partitioning components. Finally, the development work towards a more consistent and integrated hydrodynamic closure is presented. The main objective here is to develop a set of robust momentum closure relations which focus on the specific application to PWR conditions, but will facilitate the application to other geometries, void fractions, and flow regimes. The innovative approach considers local flow conditions on a cell-by-cell basis to ensure robustness. Closure relations of interest initially include drag, lift, and turbulence dispersion, with near-wall corrections applied for both drag and lift. (author)

  5. QUANTIFYING SUBGRID POLLUTANT VARIABILITY IN EULERIAN AIR QUALITY MODELS

    Science.gov (United States)

    In order to properly assess human risk due to exposure to hazardous air pollutants or air toxics, detailed information is needed on the location and magnitude of ambient air toxic concentrations. Regional scale Eulerian air quality models are typically limited to relatively coar...

  6. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    Science.gov (United States)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N ~ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The
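
    A quick check of the grid-point estimate quoted above, N ~ Re^(9/4), for the two Reynolds numbers given; only the arithmetic is shown.

        # DNS grid-point estimate N ~ Re**(9/4) for the two regimes quoted above
        for name, Re in [("planetary boundary layer", 1.0e8), ("solar interior", 1.0e14)]:
            N = Re ** 2.25
            print(f"{name}: Re = {Re:.0e} -> N ~ {N:.1e} grid points")
        # ~1e18 and ~3e31 points respectively, far beyond any computer,
        # hence the LES+SGS or ensemble-average alternatives discussed above.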

  7. Simulations of mixing in Inertial Confinement Fusion with front tracking and sub-grid scale models

    Science.gov (United States)

    Rana, Verinder; Lim, Hyunkyung; Melvin, Jeremy; Cheng, Baolian; Glimm, James; Sharp, David

    2015-11-01

    We present two related results. The first discusses the Richtmyer-Meshkov instability (RMI) and the Rayleigh-Taylor instability (RTI) and their evolution in Inertial Confinement Fusion simulations. We show the evolution of the RMI to the late-time RTI under transport effects and tracking. The sub-grid scale models help capture the interaction of turbulence with diffusive processes. The second assesses the effects of concentration on the physics model and examines the mixing properties in the low Reynolds number hot spot. We discuss the effect of concentration on the Schmidt number. The simulation results are produced using the University of Chicago code FLASH and Stony Brook University's front tracking algorithm.

  8. Application of a Steady Meandering River with Piers Using a Lattice Boltzmann Sub-Grid Model in Curvilinear Coordinate Grid

    Directory of Open Access Journals (Sweden)

    Liping Chen

    2018-05-01

    A sub-grid multiple relaxation time (MRT) lattice Boltzmann model with curvilinear coordinates is applied to simulate an artificial meandering river. The method is based on the D2Q9 model, and the standard Smagorinsky sub-grid scale (SGS) model is introduced to simulate meandering flows. The interpolation supplemented lattice Boltzmann method (ISLBM) and the non-equilibrium extrapolation method are used for second-order accuracy and boundary conditions. The proposed model was validated by a meandering channel with a 180° bend and applied to a steady curved river with piers. Excellent agreement between the simulated results and previous computational and experimental data was found, showing that the MRT-LBM coupled with a Smagorinsky sub-grid scale (SGS) model on a curvilinear coordinate grid is capable of simulating practical meandering flows.
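
    For orientation, the usual way a Smagorinsky SGS model enters a lattice Boltzmann solver is through an eddy viscosity added to the relaxation time. The sketch below shows that step for a single-relaxation-time (BGK) scheme in lattice units, which is simpler than the MRT scheme of the paper; the parameter values are illustrative only.

        def smagorinsky_tau(tau0, strain_mag, cs_smag=0.1, delta=1.0):
            """Effective BGK relaxation time (lattice units, c_s^2 = 1/3) once a Smagorinsky
            eddy viscosity is added to the molecular viscosity.

            tau0       : molecular relaxation time, nu0 = (tau0 - 0.5) / 3
            strain_mag : resolved strain-rate magnitude |S| (e.g. from finite differences
                         or from the non-equilibrium moments of the distribution)
            """
            nu0 = (tau0 - 0.5) / 3.0
            nu_t = (cs_smag * delta) ** 2 * strain_mag     # Smagorinsky eddy viscosity
            return 3.0 * (nu0 + nu_t) + 0.5

        print(smagorinsky_tau(tau0=0.6, strain_mag=0.02))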

  9. Large eddy simulation of new subgrid scale model for three-dimensional bundle flows

    International Nuclear Information System (INIS)

    Barsamian, H.R.; Hassan, Y.A.

    2004-01-01

    Having led to increased inefficiencies and power plant shutdowns, fluid-flow-induced vibrations within heat exchangers are of great concern due to tube fretting-wear or fatigue failures. Historically, scaling law and measurement accuracy problems were encountered for experimental analysis at considerable effort and expense. However, supercomputers and accurate numerical methods have provided reliable results and a substantial decrease in cost. In this investigation, Large Eddy Simulation has been successfully used to simulate turbulent flow by the numerical solution of the incompressible, isothermal, single-phase Navier-Stokes equations. The eddy viscosity model and a new subgrid scale model have been utilized to model the smaller eddies in the flow domain. A triangular array flow field was considered and numerical simulations were performed in two- and three-dimensional fields, and were compared to experimental findings. Results show good agreement between the numerical findings and the experimental data, and solutions obtained with the new subgrid scale model represent better energy dissipation for the smaller eddies. (author)

  10. Effect of grid resolution and subgrid assumptions on the model prediction of a reactive buoyant plume under convective conditions

    International Nuclear Information System (INIS)

    Chock, D.P.; Winkler, S.L.; Pu Sun

    2002-01-01

    We have introduced a new and elaborate approach to understand the impact of grid resolution and subgrid chemistry assumption on the grid-model prediction of species concentrations for a system with highly non-homogeneous chemistry - a reactive buoyant plume immediately downwind of the stack in a convective boundary layer. The Parcel-Grid approach was used to describe both the air parcel turbulent transport and chemistry. This approach allows an identical transport process for all simulations. It also allows a description of subgrid chemistry. The ambient and plume parcel transport follows the description of Luhar and Britter (Atmos. Environ, 23 (1989) 1911, 26A (1992) 1283). The chemistry follows that of the Carbon-Bond mechanism. Three different grid sizes were considered: fine, medium and coarse, together with three different subgrid chemistry assumptions: micro-scale or individual parcel, tagged-parcel (plume and ambient parcels treated separately), and untagged-parcel (plume and ambient parcels treated indiscriminately). Reducing the subgrid information is not necessarily similar to increasing the model grid size. In our example, increasing the grid size leads to a reduction in the suppression of ozone in the presence of a high-NOx stack plume, and a reduction in the effectiveness of the NOx-inhibition effect. On the other hand, reducing the subgrid information (by using the untagged-parcel assumption) leads to an increase in ozone reduction and an enhancement of the NOx-inhibition effect insofar as the ozone extremum is concerned. (author)
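
    The sensitivity to the subgrid chemistry assumption stems from the nonlinearity of the rate expressions: averaging concentrations before reacting (the untagged-parcel limit) differs from averaging the reaction rates of individual parcels (the tagged limit). The toy second-order rate comparison below, with made-up concentrations and rate constant, illustrates that difference; it is not the Carbon-Bond mechanism.

        import numpy as np

        # Hypothetical plume and ambient parcels inside one coarse grid cell (mixing ratios, ppb)
        no2 = np.array([80.0, 60.0, 2.0, 1.5, 1.0])     # first two parcels are stack-plume parcels
        oh  = np.array([0.005, 0.01, 0.2, 0.25, 0.3])
        k   = 2.6e-2                                     # illustrative second-order rate constant

        rate_tagged   = np.mean(k * no2 * oh)            # react each parcel, then average (tagged)
        rate_untagged = k * np.mean(no2) * np.mean(oh)   # average first, then react (untagged)

        print(rate_tagged, rate_untagged)                # the untagged estimate is markedly larger here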

  11. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)

    2016-12-09

    In order to shed light on understanding the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if there are multiple stationary restrictions in a modelling, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using one-dimensional nonlinear advection equation are performed to validate this criterion.

  12. Rotating Turbulent Flow Simulation with LES and Vreman Subgrid-Scale Models in Complex Geometries

    Directory of Open Access Journals (Sweden)

    Tao Guo

    2014-07-01

    The large eddy simulation (LES) method, based on the Vreman subgrid-scale model and the SIMPLEC algorithm, was applied to accurately capture the flow character in a Francis turbine passage under the small-opening condition. The proposed methodology is effective for understanding the flow structure. It overcomes the limitation of the eddy-viscosity model, which is excessively dissipative. Distributions of pressure, velocity, and vorticity as well as some special flow structures in the guide vane near-wall zones and blade passage were obtained. The results show that the tangential velocity component of the fluid dominates under the small-opening condition. This situation aggravates the impact between the wake vortices shed from the guide vanes. The critical influence of the spiral vortex in the blade passage and of the nonuniform flow around the guide vanes on the balance of the unit, combined with the transmission of stress waves, has been confirmed.

  13. A regional scale model for ozone in the United States with subgrid representation of urban and power plant plumes

    International Nuclear Information System (INIS)

    Sillman, S.; Logan, J.A.; Wofsy, S.C.

    1990-01-01

    A new approach to modeling regional air chemistry is presented for application to industrialized regions such as the continental US. Rural chemistry and transport are simulated using a coarse grid, while chemistry and transport in urban and power plant plumes are represented by detailed subgrid models. Emissions from urban and power plant sources are processed in generalized plumes where chemistry and dilution proceed for 8-12 hours before mixing with air in a large resolution element. A realistic fraction of pollutants reacts under high-NOx conditions, and NOx is removed significantly before dispersal. Results from this model are compared with results from grid models that do not distinguish plumes and with observational data defining regional ozone distributions. Grid models with coarse resolution are found to artificially disperse NOx over rural areas, therefore overestimating rural levels of both NOx and O3. Regional net ozone production is too high in coarse grid models, because production of O3 is more efficient per molecule of NOx in the low-concentration regime of rural areas than in heavily polluted plumes from major emission sources. Ozone levels simulated by this model are shown to agree with observations in urban plumes and in rural regions. The model accurately reproduces average regional and peak ozone concentrations observed during a 4-day ozone episode. Computational costs for the model are reduced 25- to 100-fold as compared to fine-mesh models.

  14. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of the optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator allows one to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance so that the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database but over-estimates the mixing process in that case.
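
    A minimal sketch of the training step described above is given below: a small neural network is fit to predict a subgrid scalar-flux-like target from resolved-scale inputs. The synthetic surrogate data and the use of scikit-learn's MLPRegressor are stand-ins for illustration; the actual input set in the paper is chosen with the optimal estimator, which is not reproduced here.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        # Synthetic surrogate for a filtered-DNS database: resolved inputs -> SGS scalar-flux target
        rng = np.random.default_rng(0)
        n = 20000
        grad_c = rng.normal(size=n)                 # resolved scalar gradient (placeholder)
        strain = rng.lognormal(sigma=0.5, size=n)   # resolved strain magnitude (placeholder)
        delta = rng.uniform(0.5, 2.0, size=n)       # filter width (placeholder)
        target = -0.1 * delta**2 * strain * grad_c + 0.02 * rng.normal(size=n)   # synthetic "truth"

        X = np.column_stack([grad_c, strain, delta])
        X_tr, X_te, y_tr, y_te = train_test_split(X, target, test_size=0.25, random_state=0)

        ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
        ann.fit(X_tr, y_tr)
        print("R^2 on held-out data:", ann.score(X_te, y_te))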

  15. Study of subgrid-scale velocity models for reacting and nonreacting flows

    Science.gov (United States)

    Langella, I.; Doan, N. A. K.; Swaminathan, N.; Pope, S. B.

    2018-05-01

    A study is conducted to identify advantages and limitations of existing large-eddy simulation (LES) closures for the subgrid-scale (SGS) kinetic energy using a database of direct numerical simulations (DNS). The analysis is conducted for both reacting and nonreacting flows, different turbulence conditions, and various filter sizes. A model based on the dissipation and diffusion of momentum (LD-D model) is proposed in this paper, informed by the observed behavior of four existing models. Our model shows the best overall agreement with DNS statistics. Two main investigations are conducted for both reacting and nonreacting flows: (i) an investigation on the robustness of the model constants, showing that commonly used constants lead to a severe underestimation of the SGS kinetic energy and highlighting their dependence on Reynolds number and filter size; and (ii) an investigation on the statistical behavior of the SGS closures, which suggests that the dissipation of momentum is the key parameter to be considered in such closures and that the dilatation effect is important and must be captured correctly in reacting flows. Additional properties of SGS kinetic energy modeling are identified and discussed.

  16. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    Science.gov (United States)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  17. An investigation of the sub-grid variability of trace gases and aerosols for global climate modeling

    Directory of Open Access Journals (Sweden)

    Y. Qian

    2010-07-01

    One fundamental property and limitation of grid-based models is their inability to identify spatial details smaller than the grid cell size. While decades of work have gone into developing sub-grid treatments for clouds and land surface processes in climate models, the quantitative understanding of sub-grid processes and variability for aerosols and their precursors is much poorer. In this study, WRF-Chem is used to simulate the trace gases and aerosols over central Mexico during the 2006 MILAGRO field campaign, with multiple spatial resolutions and emission/terrain scenarios. Our analysis focuses on quantifying the sub-grid variability (SGV) of trace gases and aerosols within a typical global climate model grid cell, i.e. 75×75 km².

    Our results suggest that a simulation with 3-km horizontal grid spacing adequately reproduces the overall transport and mixing of trace gases and aerosols downwind of Mexico City, while 75-km horizontal grid spacing is insufficient to represent local emission and terrain-induced flows along the mountain ridge, subsequently affecting the transport and mixing of plumes from nearby sources. Therefore, the coarse model grid cell average may not correctly represent aerosol properties measured over polluted areas. Probability density functions (PDFs for trace gases and aerosols show that secondary trace gases and aerosols, such as O3, sulfate, ammonium, and nitrate, are more likely to have a relatively uniform probability distribution (i.e. smaller SGV over a narrow range of concentration values. Mostly inert and long-lived trace gases and aerosols, such as CO and BC, are more likely to have broad and skewed distributions (i.e. larger SGV over polluted regions. Over remote areas, all trace gases and aerosols are more uniformly distributed compared to polluted areas. Both CO and O3 SGV vertical profiles are nearly constant within the PBL during daytime, indicating that trace gases
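
    A minimal sketch of how such sub-grid variability can be quantified offline is given below: a fine-resolution tracer field (standing in for 3-km WRF-Chem output) is aggregated into coarse blocks representing one climate-model grid cell, and the within-block standard deviation is reported alongside the block mean. The field and block size are synthetic placeholders.

```python
import numpy as np

def subgrid_variability(fine_field, block=25):
    """Split a fine-resolution 2-D field (e.g. a 3-km tracer) into coarse blocks
    (block x block fine cells, roughly one 75-km climate-model cell) and return
    the coarse-cell mean and the sub-grid standard deviation in each block."""
    ny, nx = fine_field.shape
    ny_c, nx_c = ny // block, nx // block
    blocks = fine_field[:ny_c * block, :nx_c * block].reshape(ny_c, block, nx_c, block)
    mean = blocks.mean(axis=(1, 3))   # what a coarse grid cell would carry
    sgv = blocks.std(axis=(1, 3))     # variability the coarse cell cannot see
    return mean, sgv

# Synthetic, skewed field standing in for a pollutant concentration
field = np.random.lognormal(mean=0.0, sigma=0.5, size=(100, 100))
mean, sgv = subgrid_variability(field)
print(mean.shape, sgv.shape)  # (4, 4) coarse cells
```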

  18. The Storm Surge and Sub-Grid Inundation Modeling in New York City during Hurricane Sandy

    Directory of Open Access Journals (Sweden)

    Harry V. Wang

    2014-03-01

    Full Text Available Hurricane Sandy inflicted heavy damage in New York City and on the New Jersey coast as the second costliest storm in history. A large-scale, unstructured-grid storm tide model, the Semi-implicit Eulerian Lagrangian Finite Element (SELFE) model, was used to hindcast water level variation during Hurricane Sandy in the mid-Atlantic portion of the U.S. East Coast. The model was forced by eight tidal constituents at the model’s open boundary, 1500 km away from the coast, and by the wind and pressure fields from the atmospheric model Regional Atmospheric Modeling System (RAMS) provided by Weatherflow Inc. Comparisons of the modeled storm tide with NOAA gauge stations from Montauk, NY, and Long Island Sound, encompassing New York Harbor and Atlantic City, NJ, to Duck, NC, showed good agreement, with an overall root mean square error and relative error on the order of 15–20 cm and 5%–7%, respectively. Furthermore, using the large-scale model outputs as boundary conditions, a separate sub-grid model that incorporates LIDAR data for the major portion of New York City was also set up to investigate the detailed inundation process. The model results compared favorably with the USGS Hurricane Sandy Mapper database in terms of timing, local inundation area, and depth of the flooding water. Street-level inundation with water bypassing the city buildings was simulated, and the maximum extent of horizontal inundation was calculated, which was within 30 m of the data-derived estimate by USGS.

  19. Use of fundamental condensation heat transfer experiments for the development of a sub-grid liquid jet condensation model

    Energy Technology Data Exchange (ETDEWEB)

    Buschman, Francis X., E-mail: Francis.Buschman@unnpp.gov; Aumiller, David L.

    2017-02-15

    Highlights: • Direct contact condensation data on liquid jets up to 1.7 MPa in pure steam and in the presence of noncondensable gas. • Identified a pressure effect on the impact of noncondensables to suppress condensation heat transfer not captured in existing data or correlations. • Pure steam data is used to develop a new correlation for condensation heat transfer on subcooled liquid jets. • Noncondensable data used to develop a modification to the renewal time estimate used in the Young and Bajorek correlation for condensation suppression in the presence of noncondensables. • A jet injection boundary condition, using a sub-grid jet condensation model, is developed for COBRA-IE which provides a more detailed estimate of the condensation rate on the liquid jet and allows the use of jet specific closure relationships. - Abstract: Condensation on liquid jets is an important phenomenon for many different facets of nuclear power plant transients and analyses such as containment spray cooling. An experimental facility constructed at the Pennsylvania State University, the High Pressure Liquid Jet Condensation Heat Transfer facility (HPLJCHT), has been used to perform steady-state condensation heat transfer experiments in which the temperature of the liquid jet is measured at different axial locations allowing the condensation rate to be determined over the jet length. Test data have been obtained in a pure steam environment and with varying concentrations of noncondensable gas. This data extends the available jet condensation data from near atmospheric pressure up to a pressure of 1.7 MPa. An empirical correlation for the liquid side condensation heat transfer coefficient has been developed based on the data obtained in pure steam. The data obtained with noncondensable gas were used to develop a correlation for the renewal time as used in the condensation suppression model developed by Young and Bajorek. This paper describes a new sub-grid liquid jet

  20. Use of fundamental condensation heat transfer experiments for the development of a sub-grid liquid jet condensation model

    International Nuclear Information System (INIS)

    Buschman, Francis X.; Aumiller, David L.

    2017-01-01

    Highlights: • Direct contact condensation data on liquid jets up to 1.7 MPa in pure steam and in the presence of noncondensable gas. • Identified a pressure effect on the impact of noncondensables to suppress condensation heat transfer not captured in existing data or correlations. • Pure steam data is used to develop a new correlation for condensation heat transfer on subcooled liquid jets. • Noncondensable data used to develop a modification to the renewal time estimate used in the Young and Bajorek correlation for condensation suppression in the presence of noncondensables. • A jet injection boundary condition, using a sub-grid jet condensation model, is developed for COBRA-IE which provides a more detailed estimate of the condensation rate on the liquid jet and allows the use of jet specific closure relationships. - Abstract: Condensation on liquid jets is an important phenomenon for many different facets of nuclear power plant transients and analyses such as containment spray cooling. An experimental facility constructed at the Pennsylvania State University, the High Pressure Liquid Jet Condensation Heat Transfer facility (HPLJCHT), has been used to perform steady-state condensation heat transfer experiments in which the temperature of the liquid jet is measured at different axial locations allowing the condensation rate to be determined over the jet length. Test data have been obtained in a pure steam environment and with varying concentrations of noncondensable gas. This data extends the available jet condensation data from near atmospheric pressure up to a pressure of 1.7 MPa. An empirical correlation for the liquid side condensation heat transfer coefficient has been developed based on the data obtained in pure steam. The data obtained with noncondensable gas were used to develop a correlation for the renewal time as used in the condensation suppression model developed by Young and Bajorek. This paper describes a new sub-grid liquid jet

  1. Large eddy simulation of transitional flow in an idealized stenotic blood vessel: evaluation of subgrid scale models.

    Science.gov (United States)

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H

    2014-07-01

    In the present study, we performed large eddy simulation (LES) of 75% stenosed axisymmetric and eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost-point immersed boundary method (IBM) (Mark and van Wachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries, is used for the simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the Open source field operation and manipulation (OpenFOAM) solver ("OpenFOAM," http://www.openfoam.org/), and the results are in line with those obtained with WenoHemo.
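
    For reference, the classical constant-coefficient Smagorinsky closure evaluated in the study computes an eddy viscosity from the resolved strain rate; a minimal sketch (with an assumed constant C_s and an illustrative velocity gradient, not the paper's setup) is given below.

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Classical Smagorinsky eddy viscosity: nu_t = (C_s * Delta)**2 * |S|,
    with |S| = sqrt(2 S_ij S_ij) built from the resolved velocity gradient.
    grad_u: array of shape (3, 3, ...) holding du_i/dx_j."""
    s_ij = 0.5 * (grad_u + np.swapaxes(grad_u, 0, 1))        # strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s_ij * s_ij, axis=(0, 1)))  # |S|
    return (c_s * delta) ** 2 * s_mag

# Single-point example with a hypothetical shear du/dy = 10 1/s and Delta = 1 mm
grad_u = np.array([[0.0, 10.0, 0.0],
                   [0.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])
print(smagorinsky_nu_t(grad_u, delta=1e-3))
```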

  2. From Detailed Description of Chemical Reacting Carbon Particles to Subgrid Models for CFD

    Directory of Open Access Journals (Sweden)

    Schulze S.

    2013-04-01

    Full Text Available This work is devoted to the development and validation of a sub-model for the partial oxidation of a spherical char particle moving in an air/steam atmosphere. The particle diameter is 2 mm. The coal particle is represented by moisture- and ash-free nonporous carbon while the coal rank is implemented using semi-global reaction rate expressions taken from the literature. The submodel includes six gaseous chemical species (O2, CO2, CO, H2O, H2, N2). Three heterogeneous reactions are employed, along with two homogeneous semi-global reactions, namely carbon monoxide oxidation and the water-gas-shift reaction. The distinguishing feature of the subgrid model is that it takes into account the influence of homogeneous reactions on integral characteristics such as carbon combustion rates and particle temperature. The sub-model was validated by comparing its results with a comprehensive CFD-based model resolving the issues of bulk flow and boundary layer around the particle. In this model, the Navier-Stokes equations coupled with the energy and species conservation equations were used to solve the problem by means of the pseudo-steady state approach. At the surface of the particle, the balance of mass, energy and species concentration was applied including the effect of the Stefan flow and heat loss due to radiation at the surface of the particle. Good agreement was achieved between the sub-model and the CFD-based model. Additionally, the CFD-based model was verified against experimental data published in the literature (Makino et al. (2003) Combust. Flame 132, 743-753). Good agreement was achieved between numerically predicted and experimentally obtained data for input conditions corresponding to the kinetically controlled regime. The maximal discrepancy (10%) between the experiments and the numerical results was observed in the diffusion-controlled regime. Finally, we discuss the influence of the Reynolds number, the ambient O2 mass fraction and the ambient

  3. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    Science.gov (United States)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term that accounts for interactions between resolved scales and unresolved scales, and the Reynolds term that accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models that better conform to the physics of the true SGS stress, while remaining stable, are thus desirable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term: in terms of dissipation spectra, and in terms of the probability density function (pdf) of dissipation in physical space: positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross-term part which is built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also provides backscatter. It also has an eddy-viscosity multiscale model part that
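
    A minimal sketch of the scale-similarity idea underlying the cross-term part of such a mixed model is shown below, using a simple box test filter; locally negative values of the resulting stress correspond to backscatter. This is a generic Bardina-type construction, not the specific model of the paper (which also adds a pseudo-Leonard correction and an eddy-viscosity multiscale part).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def scale_similarity_stress(u, v, size=3):
    """Bardina-type scale-similarity estimate of one SGS stress component:
    tau_uv ~ filter(u*v) - filter(u)*filter(v), using a box test filter."""
    return (uniform_filter(u * v, size=size)
            - uniform_filter(u, size=size) * uniform_filter(v, size=size))

# Synthetic resolved velocity fields
rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64))
v = rng.standard_normal((64, 64))
tau_uv = scale_similarity_stress(u, v)
print(tau_uv.mean(), tau_uv.std())  # locally negative values allow backscatter
```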

  4. Analysis of isotropic turbulence using a public database and the Web service model, and applications to study subgrid models

    Science.gov (United States)

    Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory

    2008-11-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time samples spanning about one large-scale turnover timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request the desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.

  5. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
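
    The sketch below illustrates, under strong simplifications, the averaging rule and a least-squares estimation step of the kind R-Average automates: the response is the weight-averaged scale value of the presented attributes, and the weights are fitted to observed responses. The scale values, designs and data are hypothetical, and the real procedure also estimates scale values and compares models with multiple information criteria.

```python
import numpy as np
from scipy.optimize import minimize

def averaging_prediction(weights, scale_values):
    """Anderson's averaging rule: response = sum(w_i * s_i) / sum(w_i)."""
    w, s = np.asarray(weights), np.asarray(scale_values)
    return np.sum(w * s) / np.sum(w)

def fit_averaging_model(responses, designs, n_levels):
    """Least-squares estimation of one weight per attribute level
    (scale values assumed known here for brevity)."""
    scale_values = np.linspace(0.0, 1.0, n_levels)  # assumed psychophysical scale

    def loss(log_w):
        w = np.exp(log_w)  # keep weights positive
        pred = np.array([averaging_prediction(w[d], scale_values[d]) for d in designs])
        return np.sum((responses - pred) ** 2)

    res = minimize(loss, x0=np.zeros(n_levels), method="Nelder-Mead")
    return np.exp(res.x)

# Toy experiment: each trial combines two attribute levels (indices into the scale)
designs = [np.array([0, 2]), np.array([1, 2]), np.array([0, 1])]
responses = np.array([0.55, 0.80, 0.30])
print(fit_averaging_model(responses, designs, n_levels=3))
```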

  6. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time-dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved-scale simulations with the volume fractions (cf, af mix). In unresolved-scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i (u_i − u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved-scale simulations and for unresolved-scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.
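
    A toy sketch of the two-step conversion described above is given below: unmixed volume fraction (cf) is first converted to a heterogeneously mixed sub-grid fraction (pf) and then, by diffusion, to the atomically mixed fraction (af); in resolved-scale runs the first step is bypassed. The first-order rate coefficients are illustrative assumptions, not the model's closures.

```python
def advance_mix_fractions(cf, pf, af, dt, k_turb, k_diff, resolved=False):
    """One explicit step of the two-step conversion of volume fractions:
    unmixed (cf) -> heterogeneously mixed sub-grid fraction (pf) -> atomically
    mixed (af). In resolved-scale runs cf converts to af by diffusion directly.
    k_turb and k_diff are assumed first-order rate coefficients."""
    if resolved:
        d_cf_af = k_diff * cf * dt
        return cf - d_cf_af, pf, af + d_cf_af
    d_cf_pf = k_turb * cf * dt      # turbulent / multi-phase stirring
    d_pf_af = k_diff * pf * dt      # molecular diffusion to atomic mixedness
    return cf - d_cf_pf, pf + d_cf_pf - d_pf_af, af + d_pf_af

cf, pf, af = 1.0, 0.0, 0.0
for _ in range(1000):
    cf, pf, af = advance_mix_fractions(cf, pf, af, dt=1e-3, k_turb=5.0, k_diff=1.0)
print(cf + pf + af, af)  # total volume fraction is conserved; af grows in time
```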

  7. An improved anisotropy-resolving subgrid-scale model for flows in laminar–turbulent transition region

    International Nuclear Information System (INIS)

    Inagaki, Masahide; Abe, Ken-ichi

    2017-01-01

    Highlights: • An anisotropy-resolving subgrid-scale model, covering a wide range of grid resolutions, is improved. • The new model enhances its applicability to flows in the laminar-turbulent transition region. • A mixed-timescale subgrid-scale model is used as the eddy viscosity model. • The proposed model successfully predicts the channel flows at transitional Reynolds numbers. • The influence of the definition of the grid-filter width is also investigated. - Abstract: Some types of mixed subgrid-scale (SGS) models combining an isotropic eddy-viscosity model and a scale-similarity model can be used to effectively improve the accuracy of large eddy simulation (LES) in predicting wall turbulence. Abe (2013) has recently proposed a stabilized mixed model that maintains its computational stability through a unique procedure that prevents the energy transfer between the grid-scale (GS) and SGS components induced by the scale-similarity term. At the same time, since this model can successfully predict the anisotropy of the SGS stress, the predictive performance, particularly at coarse grid resolutions, is remarkably improved in comparison with other mixed models. However, since the stabilized anisotropy-resolving SGS model includes a transport equation of the SGS turbulence energy, k_SGS, containing a production term proportional to the square root of k_SGS, its applicability to flows with both laminar and turbulent regions is not so high. This is because such a production term causes k_SGS to self-reproduce. Consequently, the laminar–turbulent transition region predicted by this model depends on the inflow or initial condition of k_SGS. To resolve these issues, in the present study, the mixed-timescale (MTS) SGS model proposed by Inagaki et al. (2005) is introduced into the stabilized mixed model as the isotropic eddy-viscosity part and the production term in the k_SGS transport equation. In the MTS model, the SGS turbulence energy, k_es, estimated by

  8. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    Science.gov (United States)

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.

  9. Smaller global and regional carbon emissions from gross land use change when considering sub-grid secondary land cohorts in a global dynamic vegetation model

    Science.gov (United States)

    Yue, Chao; Ciais, Philippe; Li, Wei

    2018-02-01

    Several modelling studies reported elevated carbon emissions from historical land use change (ELUC) by including bidirectional transitions on the sub-grid scale (termed gross land use change), dominated by shifting cultivation and other land turnover processes. However, most dynamic global vegetation models (DGVMs) that have implemented gross land use change either do not account for sub-grid secondary lands, or often have only one single secondary land tile over a model grid cell and thus cannot account for various rotation lengths in shifting cultivation and associated secondary forest age dynamics. Therefore, it remains uncertain how realistic the past ELUC estimations are and how estimated ELUC will differ between the two modelling approaches with and without multiple sub-grid secondary land cohorts - in particular secondary forest cohorts. Here we investigated historical ELUC over 1501-2005 by including sub-grid forest age dynamics in a DGVM. We run two simulations, one with no secondary forests (Sageless) and the other with sub-grid secondary forests of six age classes whose demography is driven by historical land use change (Sage). Estimated global ELUC for 1501-2005 is 176 Pg C in Sage compared to 197 Pg C in Sageless. The lower ELUC values in Sage arise mainly from shifting cultivation in the tropics under an assumed constant rotation length of 15 years, being 27 Pg C in Sage in contrast to 46 Pg C in Sageless. Estimated cumulative ELUC values from wood harvest in the Sage simulation (31 Pg C) are however slightly higher than Sageless (27 Pg C) when the model is forced by reconstructed harvested areas because secondary forests targeted in Sage for harvest priority are insufficient to meet the prescribed harvest area, leading to wood harvest being dominated by old primary forests. An alternative approach to quantify wood harvest ELUC, i.e. always harvesting the close-to-mature forests in both Sageless and Sage, yields similar values of 33 Pg C by both

  10. Sub-grid scale combustion models for large eddy simulation of unsteady premixed flame propagation around obstacles.

    Science.gov (United States)

    Di Sarli, Valeria; Di Benedetto, Almerinda; Russo, Gennaro

    2010-08-15

    In this work, an assessment of different sub-grid scale (sgs) combustion models proposed for large eddy simulation (LES) of steady turbulent premixed combustion (Colin et al., Phys. Fluids 12 (2000) 1843-1863; Flohr and Pitsch, Proc. CTR Summer Program, 2000, pp. 61-82; Kim and Menon, Combust. Sci. Technol. 160 (2000) 119-150; Charlette et al., Combust. Flame 131 (2002) 159-180; Pitsch and Duchamp de Lageneste, Proc. Combust. Inst. 29 (2002) 2001-2008) was performed to identify the model that best predicts unsteady flame propagation in gas explosions. Numerical results were compared to the experimental data by Patel et al. (Proc. Combust. Inst. 29 (2002) 1849-1854) for a premixed deflagrating flame in a vented chamber in the presence of three sequential obstacles. It is found that all sgs combustion models are able to reproduce the experiment qualitatively in terms of the steps of flame acceleration and deceleration around each obstacle and the shape of the propagating flame. Without adjusting any constants or parameters, the sgs model by Charlette et al. also provides satisfactory quantitative predictions for flame speed and pressure peak. Conversely, the sgs combustion models other than Charlette et al. give correct predictions only after an ad hoc tuning of constants and parameters. Copyright 2010 Elsevier B.V. All rights reserved.

  11. Final Report. Evaluating the Climate Sensitivity of Dissipative Subgrid-Scale Mixing Processes and Variable Resolution in NCAR's Community Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-12-14

    The goals of this project were to (1) assess and quantify the sensitivity and scale-dependency of unresolved subgrid-scale mixing processes in NCAR’s Community Earth System Model (CESM), and (2) to improve the accuracy and skill of forthcoming CESM configurations on modern cubed-sphere and variable-resolution computational grids. The research thereby contributed to the description and quantification of uncertainties in CESM’s dynamical cores and their physics-dynamics interactions.

  12. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5⟨u'_i^2⟩, where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes
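
    A minimal sketch of the MKE diagnostic defined above is shown below: the mesoscale perturbations are the deviations of the mesoscale fields from the grid-cell average, and E-tilde is half the mean of their squared magnitudes. The input fields are synthetic placeholders for mesoscale-model output within one large-scale grid cell.

```python
import numpy as np

def mesoscale_kinetic_energy(u, v, w):
    """Mean mesoscale kinetic energy per unit mass over a large-scale grid cell:
    E = 0.5 * <u_i' u_i'>, where u_i' are deviations of the mesoscale fields
    from the grid-cell average."""
    up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()
    return 0.5 * np.mean(up ** 2 + vp ** 2 + wp ** 2)

# Hypothetical mesoscale fields (m/s) resolved inside one GCM grid cell
rng = np.random.default_rng(1)
u = 5.0 + 2.0 * rng.standard_normal((50, 50))
v = 1.0 + 2.0 * rng.standard_normal((50, 50))
w = 0.1 * rng.standard_normal((50, 50))
print(mesoscale_kinetic_energy(u, v, w))  # m^2/s^2
```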

  13. Computation of transitional flow past a circular cylinder using multiblock lattice Boltzmann method with a dynamic subgrid scale model

    International Nuclear Information System (INIS)

    Premnath, Kannan N; Pattison, Martin J; Banerjee, Sanjoy

    2013-01-01

    The lattice Boltzmann method (LBM) is a kinetics-based numerical scheme for the simulation of fluid flow. While the approach has attracted considerable attention during the last two decades, there is a need for systematic investigation of its applicability to complex canonical turbulent flow problems of engineering interest, where the nature of the numerical properties of the underlying scheme plays an important role in their accurate solution. In this paper, we discuss and evaluate an LBM based on a multiblock approach for efficient large eddy simulation of three-dimensional external flow past a circular cylinder in the transitional regime characterized by the presence of multiple scales. For enhanced numerical stability at higher Reynolds numbers, a multiple relaxation time formulation is considered. The effect of subgrid scales is represented by means of a Smagorinsky eddy-viscosity model, where the model coefficient is computed locally by means of a dynamic procedure, providing a better representation of the flow physics with reduced empiricism. Simulations are performed for a Reynolds number of 3900 based on the free stream velocity and cylinder diameter, for which prior data are available for comparison. The presence of a laminar boundary layer that separates into a pair of shear layers which evolve into turbulent wakes imposes a particular challenge for numerical methods under this condition. The relatively low numerical dissipation introduced by the inherently parallel and second-order accurate LBM is an important computational asset in this regard. Computations using five different grid levels, where the various blocks are suitably aligned to resolve multiscale flow features, show that the structure of the recirculation region is well reproduced and that the statistics of the mean flow and turbulent fluctuations are in satisfactory agreement with prior data. (paper)
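
    In LBM-based LES of this kind, the eddy viscosity from the Smagorinsky closure typically enters through the local relaxation time, since the lattice viscosity and the BGK relaxation time are related by nu = (tau - 1/2)/3 in lattice units. The sketch below shows that bookkeeping with an assumed constant coefficient; in the dynamic procedure used in the paper the coefficient would instead be computed locally.

```python
def smagorinsky_nu_t(strain_mag, delta, c_s=0.1):
    """Eddy viscosity from the local resolved strain-rate magnitude (lattice units).
    In a dynamic procedure c_s**2 would be computed locally rather than prescribed."""
    return (c_s * delta) ** 2 * strain_mag

def lbm_relaxation_time(nu0, nu_t):
    """Effective BGK relaxation time: nu_total = (tau - 0.5)/3 in lattice units,
    so tau = 0.5 + 3*(nu0 + nu_t); the SGS closure locally stiffens the collision."""
    return 0.5 + 3.0 * (nu0 + nu_t)

nu0 = 1.0e-4                       # molecular viscosity in lattice units
nu_t = smagorinsky_nu_t(strain_mag=0.05, delta=1.0)
print(lbm_relaxation_time(nu0, nu_t))
```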

  14. Computation of transitional flow past a circular cylinder using multiblock lattice Boltzmann method with a dynamic subgrid scale model

    Energy Technology Data Exchange (ETDEWEB)

    Premnath, Kannan N [Department of Mechanical Engineering, University of Colorado Denver, 1200 Larimer Street, Denver, CO 80217 (United States); Pattison, Martin J [HyPerComp Inc., 2629 Townsgate Road, Suite 105, Westlake Village, CA 91361 (United States); Banerjee, Sanjoy, E-mail: kannan.premnath@ucdenver.edu, E-mail: kannan.np@gmail.com [Department of Chemical Engineering, City College of New York, City University of New York, New York, NY 10031 (United States)

    2013-10-15

    The lattice Boltzmann method (LBM) is a kinetics-based numerical scheme for the simulation of fluid flow. While the approach has attracted considerable attention during the last two decades, there is a need for systematic investigation of its applicability to complex canonical turbulent flow problems of engineering interest, where the nature of the numerical properties of the underlying scheme plays an important role in their accurate solution. In this paper, we discuss and evaluate an LBM based on a multiblock approach for efficient large eddy simulation of three-dimensional external flow past a circular cylinder in the transitional regime characterized by the presence of multiple scales. For enhanced numerical stability at higher Reynolds numbers, a multiple relaxation time formulation is considered. The effect of subgrid scales is represented by means of a Smagorinsky eddy-viscosity model, where the model coefficient is computed locally by means of a dynamic procedure, providing a better representation of the flow physics with reduced empiricism. Simulations are performed for a Reynolds number of 3900 based on the free stream velocity and cylinder diameter, for which prior data are available for comparison. The presence of a laminar boundary layer that separates into a pair of shear layers which evolve into turbulent wakes imposes a particular challenge for numerical methods under this condition. The relatively low numerical dissipation introduced by the inherently parallel and second-order accurate LBM is an important computational asset in this regard. Computations using five different grid levels, where the various blocks are suitably aligned to resolve multiscale flow features, show that the structure of the recirculation region is well reproduced and that the statistics of the mean flow and turbulent fluctuations are in satisfactory agreement with prior data. (paper)

  15. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
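
    A minimal sketch of the Bayesian model averaging rule described above: each model's prediction is weighted by its posterior probability, here taken proportional to its evidence under equal prior model probabilities. The predictions and log-evidences are hypothetical.

```python
import numpy as np

def bayesian_model_average(predictions, log_evidences):
    """Weight each model's prediction by its posterior probability, proportional
    to the marginal-likelihood evidence under equal prior model probabilities."""
    log_e = np.asarray(log_evidences, dtype=float)
    weights = np.exp(log_e - log_e.max())   # stable exponentiation
    weights /= weights.sum()
    return weights, np.dot(weights, np.asarray(predictions))

# Three hypothetical models of the same quantity
predictions = [1.0, 1.4, 2.0]
log_evidences = [-10.2, -9.7, -13.5]        # accuracy and complexity both enter here
weights, averaged = bayesian_model_average(predictions, log_evidences)
print(weights, averaged)
```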

  16. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...

  17. Tracer water transport and subgrid precipitation variation within atmospheric general circulation models

    Science.gov (United States)

    Koster, Randal D.; Eagleson, Peter S.; Broecker, Wallace S.

    1988-03-01

    A capability is developed for monitoring tracer water movement in the three-dimensional Goddard Institute for Space Science Atmospheric General Circulation Model (GCM). A typical experiment with the tracer water model follows water evaporating from selected grid squares and determines where this water first returns to the Earth's surface as precipitation or condensate, thereby providing information on the lateral scales of hydrological transport in the GCM. Through a comparison of model results with observations in nature, inferences can be drawn concerning real world water transport. Tests of the tracer water model include a comparison of simulated and observed vertically-integrated vapor flux fields and simulations of atomic tritium transport from the stratosphere to the oceans. The inter-annual variability of the tracer water model results is also examined.

  18. Tracer water transport and subgrid precipitation variation within atmospheric general circulation models

    Science.gov (United States)

    Koster, Randal D.; Eagleson, Peter S.; Broecker, Wallace S.

    1988-01-01

    A capability is developed for monitoring tracer water movement in the three-dimensional Goddard Institute for Space Science Atmospheric General Circulation Model (GCM). A typical experiment with the tracer water model follows water evaporating from selected grid squares and determines where this water first returns to the Earth's surface as precipitation or condensate, thereby providing information on the lateral scales of hydrological transport in the GCM. Through a comparison of model results with observations in nature, inferences can be drawn concerning real world water transport. Tests of the tracer water model include a comparison of simulated and observed vertically-integrated vapor flux fields and simulations of atomic tritium transport from the stratosphere to the oceans. The inter-annual variability of the tracer water model results is also examined.

  19. Filtered Mass Density Function for Subgrid Scale Modeling of Turbulent Diffusion Flames

    National Research Council Canada - National Science Library

    Givi, Peyman

    2002-01-01

    .... These equations were solved with a new Lagrangian Monte Carlo scheme. The model predictions were compared with results obtained via conventional LES closures and with direct numerical simulation (DNS...

  20. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results obtained using the NS2 simulator.
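
    The sketch below illustrates one simple iterative allocation of this general kind (a weighted max-min style redistribution of leftover capacity); it is an assumption-laden stand-in for the paper's method, using hypothetical link speed, weights and average input rates.

```python
def wfq_average_bandwidth(link_rate, weights, demands, tol=1e-9):
    """Iterative estimate of the average bandwidth each flow obtains under a
    weighted fair scheduler: flows whose demand (average input rate) is below
    their weighted share keep their demand, and the leftover capacity is
    redistributed among the remaining flows in proportion to their weights."""
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    capacity = link_rate
    while active:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        bounded = {i for i in active if demands[i] <= share[i] + tol}
        if not bounded:                      # remaining flows can use their full share
            for i in active:
                alloc[i] = share[i]
            break
        for i in bounded:                    # demand-limited flows
            alloc[i] = demands[i]
            capacity -= demands[i]
        active -= bounded
    return alloc

# Three flows on a 10 Mbit/s link: weights and average input rates (Mbit/s)
print(wfq_average_bandwidth(10.0, [1, 2, 2], [1.0, 6.0, 8.0]))  # [1.0, 4.5, 4.5]
```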

  1. Renormalization-group theory for the eddy viscosity in subgrid modeling

    Science.gov (United States)

    Zhou, YE; Vahala, George; Hossain, Murshed

    1988-01-01

    Renormalization-group theory is applied to incompressible three-dimensional Navier-Stokes turbulence so as to eliminate unresolvable small scales. The renormalized Navier-Stokes equation now includes a triple nonlinearity with the eddy viscosity exhibiting a mild cusp behavior, in qualitative agreement with the test-field model results of Kraichnan. For the cusp behavior to arise, not only is the triple nonlinearity necessary but the effects of pressure must be incorporated in the triple term. The renormalized eddy viscosity will not exhibit a cusp behavior if it is assumed that a spectral gap exists between the large and small scales.

  2. Mesh-Sequenced Realizations for Evaluation of Subgrid-Scale Models for Turbulent Combustion (Short Term Innovative Research Program)

    Science.gov (United States)

    2018-02-15

    conservation equations. The closure problem hinges on the evaluation of the filtered chemical production rates. In MRA/MSR, simultaneous large-eddy... simultaneous, constrained large-eddy simulations at three different mesh levels as a means of connecting reactive scalar information at different... functions of a locally normalized subgrid Damköhler number (a measure of the distribution of inverse chemical time scales in the neighborhood of a

  3. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related with model averaging, and then evaluates two policies, i.e. West Development Drive in China and fiscal decentralization in U.S, using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  4. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated with a statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exerted by the Coulomb energy and nuclear compressibility was verified. For a good adjustment of the beta stability lines and mass excess, the surface symmetry energy was established. (M.C.K.) [pt
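
    For context, the Weizsäcker-Bethe semiempirical mass formula referred to above can be evaluated as in the sketch below; the coefficients are common textbook values (in MeV), not those fitted in the cited work.

```python
def semf_binding_energy(Z, A):
    """Weizsaecker-Bethe semi-empirical mass formula (MeV); typical textbook
    coefficients: volume, surface, Coulomb, asymmetry and pairing terms."""
    a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    pairing = 0.0
    if Z % 2 == 0 and N % 2 == 0:
        pairing = +a_p / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -a_p / A ** 0.5
    return (a_v * A - a_s * A ** (2.0 / 3.0)
            - a_c * Z * (Z - 1) / A ** (1.0 / 3.0)
            - a_a * (A - 2 * Z) ** 2 / A + pairing)

print(semf_binding_energy(26, 56) / 56)  # roughly 8.8 MeV per nucleon for Fe-56
```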

  5. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    Science.gov (United States)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layer. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of sub-grid scale flow and magnetic field using a dynamic scale-similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale-similarity model. The scale-similarity model is implemented in Calypso, which is a numerical dynamo model using a spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results from comparisons among these simulations and the role of the small-scale fields in the large-scale fields through the SGS terms in the LES.
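
    A highly simplified sketch of horizontal filtering in spherical-harmonic space is given below: the coefficients are attenuated by a Gaussian-like transfer function of the total degree. This is only a spectral caricature of the Jekeli (1981) filter used in Calypso, with an assumed filter width.

```python
import numpy as np

def gaussian_filter_sph_coeffs(coeffs, l_values, sigma):
    """Attenuate spherical-harmonic coefficients with a Gaussian-like transfer
    function of total degree l; exp(-0.5*sigma**2*l*(l+1)) is a common spectral
    form of horizontal Gaussian smoothing on the sphere (a simplification)."""
    return coeffs * np.exp(-0.5 * sigma ** 2 * l_values * (l_values + 1.0))

# Hypothetical unit coefficients indexed by degree l = 0..127
l = np.arange(128, dtype=float)
coeffs = np.ones_like(l)
filtered = gaussian_filter_sph_coeffs(coeffs, l, sigma=0.02)
print(filtered[[0, 31, 63, 127]])  # small scales (large l) are damped most
```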

  6. Birefringent dispersive FDTD subgridding scheme

    OpenAIRE

    De Deckere, B; Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2016-01-01

    A novel 2D finite difference time domain (FDTD) subgridding method is proposed, only subject to the Courant limit of the coarse grid. By making mu or epsilon inside the subgrid dispersive, unconditional stability is induced at the cost of a sparse, implicit set of update equations. By only adding dispersion along preferential directions, it is possible to dramatically reduce the rank of the matrix equation that needs to be solved.

  7. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares

  8. Subgrid-scale turbulence in shock-boundary layer flows

    Science.gov (United States)

    Jammalamadaka, Avinash; Jaberi, Farhad

    2015-04-01

    Data generated by direct numerical simulation (DNS) for a Mach 2.75 zero-pressure-gradient turbulent boundary layer interacting with shocks of different intensities are used for a priori analysis of subgrid-scale (SGS) turbulence and various terms in the compressible filtered Navier-Stokes equations. The numerical method used for DNS is based on a hybrid scheme that uses a non-dissipative central scheme in the shock-free turbulent regions and a robust monotonicity-preserving scheme in the shock regions. The behavior of SGS stresses and their components, namely the Leonard, cross and Reynolds components, is examined in various regions of the flow for different shock intensities and filter widths. The backscatter in various regions of the flow is found to be significant only instantaneously, while the ensemble-averaged statistics indicate no significant backscatter. The budgets for the SGS kinetic energy equation are examined for a better understanding of shock-turbulence interactions at the subgrid level and also with the aim of providing useful information for one-equation LES models. A term-by-term analysis of SGS terms in the filtered total energy equation indicates that while each term in this equation is significant by itself, the net contribution by all of them is relatively small. This observation is consistent with our a posteriori analysis.
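
    The decomposition examined in such a priori studies can be reproduced offline as in the sketch below: with a chosen grid filter, one SGS stress component is split exactly into Leonard, cross and Reynolds parts. The Gaussian filter width and the synthetic fields are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sgs_stress_decomposition(u, v, sigma=2.0):
    """A priori decomposition of one SGS stress component tau_uv from DNS-like
    fields into Leonard, cross and Reynolds parts using a Gaussian grid filter."""
    bar = lambda f: gaussian_filter(f, sigma=sigma, mode="wrap")
    ub, vb = bar(u), bar(v)          # resolved (filtered) fields
    up, vp = u - ub, v - vb          # subgrid residuals
    tau      = bar(u * v) - ub * vb
    leonard  = bar(ub * vb) - ub * vb
    cross    = bar(ub * vp + up * vb)
    reynolds = bar(up * vp)
    return tau, leonard, cross, reynolds

rng = np.random.default_rng(2)
u = rng.standard_normal((128, 128))
v = rng.standard_normal((128, 128))
tau, L, C, R = sgs_stress_decomposition(u, v)
print(np.max(np.abs(tau - (L + C + R))))  # decomposition is exact to round-off
```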

  9. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
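
    A minimal sketch of the variance segregation at one level of such a BMA tree is given below: the total predictive variance splits, by the law of total variance, into the weight-averaged within-model variance and the between-model variance of the model means. The numbers are hypothetical.

```python
import numpy as np

def bma_variance_decomposition(means, variances, weights):
    """Law-of-total-variance split used in (hierarchical) BMA: total predictive
    variance = weight-averaged within-model variance + between-model variance
    of the individual model means around the averaged prediction."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    mean = np.dot(w, m)
    within = np.dot(w, v)
    between = np.dot(w, (m - mean) ** 2)
    return mean, within, between, within + between

# Three candidate groundwater models: predicted head (m), predictive variance, weight
print(bma_variance_decomposition([10.2, 11.0, 9.5], [0.30, 0.20, 0.50], [0.5, 0.3, 0.2]))
```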

  10. Modeling lightning-NOx chemistry on a sub-grid scale in a global chemical transport model

    Directory of Open Access Journals (Sweden)

    A. Gressent

    2016-05-01

    Full Text Available For the first time, a plume-in-grid approach is implemented in a chemical transport model (CTM) to parameterize the effects of the nonlinear reactions occurring within highly concentrated NOx plumes from lightning NOx emissions (LNOx) in the upper troposphere. It is characterized by a set of parameters including the plume lifetime, the effective reaction rate constant related to NOx–O3 chemical interactions, and the fractions of NOx conversion into HNO3 within the plume. Parameter estimates were made using the Dynamical Simple Model of Atmospheric Chemical Complexity (DSMACC) box model, simple plume dispersion simulations, and the 3-D Meso-NH non-hydrostatic mesoscale atmospheric model. In order to assess the impact of the LNOx plume approach on the NOx and O3 distributions on a large scale, simulations for the year 2006 were performed using the GEOS-Chem global model with a horizontal resolution of 2° × 2.5°. The implementation of the LNOx parameterization implies an NOx and O3 decrease on a large scale over the regions characterized by strong lightning activity (up to 25 % and 8 %, respectively, over central Africa in July) and a relative increase downwind of LNOx emissions (up to 18 % and 2 % for NOx and O3, respectively, in July). The calculated variability in NOx and O3 mixing ratios around the mean value according to the known uncertainties in the parameter estimates is at a maximum over continental tropical regions, with ΔNOx [−33.1, +29.7] ppt and ΔO3 [−1.56, +2.16] ppb in January, and ΔNOx [−14.3, +21] ppt and ΔO3 [−1.18, +1.93] ppb in July, mainly depending on the determination of the diffusion properties of the atmosphere and the initial NO mixing ratio injected by lightning. This approach allows us (i) to reproduce a more realistic lightning NOx chemistry leading to better NOx and O3 distributions on the large scale and (ii) to focus on other improvements to reduce remaining uncertainties from processes
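
    A deliberately crude sketch of the plume-in-grid bookkeeping is shown below: lightning NOx is held in a sub-grid plume for an assumed lifetime, during which a prescribed fraction is converted to HNO3 in-plume, and only the remainder is released to the grid-scale NOx field. Parameter names and values are hypothetical and do not reproduce the parameterization's effective rate constants.

```python
def release_lightning_nox(no_injected, plume_lifetime, dt_elapsed, frac_to_hno3):
    """Toy plume-in-grid bookkeeping for lightning NOx: while the plume is younger
    than its lifetime the NOx stays sub-grid; once it dissolves, a prescribed
    fraction has been converted to HNO3 in-plume and the rest goes to the grid."""
    if dt_elapsed < plume_lifetime:
        return 0.0, 0.0                       # nothing handed to the grid yet
    released_nox = no_injected * (1.0 - frac_to_hno3)
    produced_hno3 = no_injected * frac_to_hno3
    return released_nox, produced_hno3

# Example: 1.0e26 molecules of NO injected, 3-hour plume lifetime, 20% to HNO3
print(release_lightning_nox(1.0e26, plume_lifetime=3 * 3600.0,
                            dt_elapsed=4 * 3600.0, frac_to_hno3=0.2))
```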

  11. Large Eddy Simulations of a Premixed Jet Combustor Using Flamelet-Generated Manifolds: Effects of Heat Loss and Subgrid-Scale Models

    KAUST Repository

    Hernandez Perez, Francisco E.; Lee, Bok Jik; Im, Hong G.; Fancello, Alessio; Donini, Andrea; van Oijen, Jeroen A.; de Goey, Philip H.

    2017-01-01

    Large eddy simulations of a turbulent premixed jet flame in a confined chamber were conducted using the flamelet-generated manifold technique for chemistry tabulation. The configuration is characterized by an off-center nozzle having an inner diameter of 10 mm, supplying a lean methane-air mixture with an equivalence ratio of 0.71 and a mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the manifold via burner-stabilized flamelets and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed probability density functions. Comparisons between numerical results and measured data show that a considerable improvement in the prediction of temperature is achieved when heat losses are included in the manifold, as compared to the adiabatic one. Additional improvement in the temperature predictions is obtained by incorporating radiative heat losses. Moreover, further enhancements in the LES predictions are achieved by employing SGS models based on transport equations, such as the SGS turbulence kinetic energy equation with dynamic coefficients. While the numerical results display good agreement up to a distance of 4 nozzle diameters downstream of the nozzle exit, the results become less satisfactory farther downstream, suggesting that further improvements in the modeling are required, among which a more accurate model for the SGS variance of progress variable can be relevant.

  12. Large Eddy Simulations of a Premixed Jet Combustor Using Flamelet-Generated Manifolds: Effects of Heat Loss and Subgrid-Scale Models

    KAUST Repository

    Hernandez Perez, Francisco E.

    2017-01-05

    Large eddy simulations of a turbulent premixed jet flame in a confined chamber were conducted using the flamelet-generated manifold technique for chemistry tabulation. The configuration is characterized by an off-center nozzle having an inner diameter of 10 mm, supplying a lean methane-air mixture with an equivalence ratio of 0.71 and a mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the manifold via burner-stabilized flamelets and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed probability density functions. Comparisons between numerical results and measured data show that a considerable improvement in the prediction of temperature is achieved when heat losses are included in the manifold, as compared to the adiabatic one. Additional improvement in the temperature predictions is obtained by incorporating radiative heat losses. Moreover, further enhancements in the LES predictions are achieved by employing SGS models based on transport equations, such as the SGS turbulence kinetic energy equation with dynamic coefficients. While the numerical results display good agreement up to a distance of 4 nozzle diameters downstream of the nozzle exit, the results become less satisfactory farther downstream, suggesting that further improvements in the modeling are required, among which a more accurate model for the SGS variance of progress variable can be relevant.

  13. Impact of Subgrid Scale Models and Heat Loss on Large Eddy Simulations of a Premixed Jet Burner Using Flamelet-Generated Manifolds

    Science.gov (United States)

    Hernandez Perez, Francisco E.; Im, Hong G.; Lee, Bok Jik; Fancello, Alessio; Donini, Andrea; van Oijen, Jeroen A.; de Goey, L. Philip H.

    2017-11-01

    Large eddy simulations (LES) of a turbulent premixed jet flame in a confined chamber are performed employing the flamelet-generated manifold (FGM) method for tabulation of chemical kinetics and thermochemical properties, as well as the OpenFOAM framework for computational fluid dynamics. The burner has been experimentally studied by Lammel et al. (2011) and features an off-center nozzle, feeding a preheated lean methane-air mixture with an equivalence ratio of 0.71 and mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the FGM tabulation via burner-stabilized flamelets and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed filtered density functions. The impact of heat loss inclusion as well as SGS modeling for both the SGS stresses and SGS variance of progress variable on the numerical results is investigated. Comparisons of the LES results against measurements show a significant improvement in the prediction of temperature when heat losses are incorporated into FGM. While further enhancements in the LES results are accomplished by using SGS models based on transported quantities and/or dynamically computed coefficients as compared to the Smagorinsky model, heat loss inclusion is more relevant. This research was sponsored by King Abdullah University of Science and Technology (KAUST) and made use of computational resources at KAUST Supercomputing Laboratory.

  14. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  15. Comparison of GCM subgrid fluxes calculated using BATS and SiB schemes with a coupled land-atmosphere high-resolution model

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Jinmei; Arritt, R.W. [Iowa State Univ., Ames, IA (United States)

    1996-12-31

    The importance of land-atmosphere interactions and the biosphere in climate change studies has long been recognized, and several land-atmosphere interaction schemes have been developed. Among these, the Simple Biosphere scheme (SiB) of Sellers et al. and the Biosphere Atmosphere Transfer Scheme (BATS) of Dickinson et al. are two of the most widely known. The effects of GCM subgrid-scale inhomogeneities of surface properties in general circulation models have also received increasing attention in recent years. However, due to the complexity of land surface processes and the difficulty of prescribing the large number of parameters that determine atmospheric and soil interactions with vegetation, many previous studies have produced seemingly contradictory results. A GCM grid element typically represents an area of 10⁴-10⁶ km². Within such an area, there exist variations of soil type, soil wetness, vegetation type, vegetation density and topography, as well as urban areas and water bodies. In this paper, we incorporate both the BATS and SiB2 land surface process schemes into a nonhydrostatic, compressible version of the AMBLE model (Atmospheric Model -- Boundary-Layer Emphasis), and compare the surface heat fluxes and mesoscale circulations calculated using the two schemes. 8 refs., 5 figs.

  16. Mass-flux subgrid-scale parameterization in analogy with multi-component flows: a formulation towards scale independence

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2012-11-01

    Full Text Available A generalized mass-flux formulation is presented, which no longer takes a limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which the scale separation is still satisfied, but fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is presented by generalizing the mass-flux formulation under the segmentally-constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from increasing resolutions of operational forecast models without invoking a more extensive overhaul of parameterizations.

    The present formulation leads to an analogy of the large-scale atmospheric flow with multi-component flows. This analogy allows a generality of including any subgrid-scale variability into the mass-flux parameterization under SCA. Those include stratiform clouds as well as cold pools in the boundary layer.

    An important finding under the present formulation is that the subgrid-scale quantities are advected by the large-scale velocities characteristic of given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flow as simply defined by the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. As a result, this formulation ensures the lateral interaction of subgrid-scale variability across grid boxes, which is missing in current parameterizations based on vertical one-dimensional models, and it leads to a reduction of the grid-size dependence of their performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation, as a result, furthermore includes a self-contained description of subgrid-scale momentum transport.

    The main purpose of the present paper

  17. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Science.gov (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  18. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

    Full Text Available With increasing computing power, the horizontal resolution of numerical weather prediction (NWP models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation formation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II and fair weather cumulus (RICO and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
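
    The diagnostic idea described in this record can be illustrated with a short sketch: the rain fraction is computed as the probability that subgrid cloud water exceeds a collection threshold under a prescribed PDF of cloud-water variability. The lognormal shape, the threshold and the variability below are assumptions for illustration; the scheme in the paper is calibrated against the DYCOMS-II and RICO observations, which are not reproduced here.

```python
# Sketch: rain fraction as P(q_c > q_threshold) under a prescribed subgrid PDF.
# The lognormal shape, threshold and variability are illustrative assumptions.
import numpy as np
from scipy.stats import lognorm

def rain_fraction(qc_mean, rel_std, qc_threshold):
    """Fraction of the grid box where cloud water exceeds the collection threshold."""
    # Lognormal parameterized so its mean is qc_mean and its std is rel_std*qc_mean.
    sigma2 = np.log(1.0 + rel_std**2)
    mu = np.log(qc_mean) - 0.5 * sigma2
    dist = lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))
    return dist.sf(qc_threshold)   # survival function = 1 - CDF

print(rain_fraction(qc_mean=0.3e-3, rel_std=0.6, qc_threshold=0.5e-3))
```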

  19. Large-eddy simulation with accurate implicit subgrid-scale diffusion

    NARCIS (Netherlands)

    B. Koren (Barry); C. Beets

    1996-01-01

    textabstractA method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is

  20. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handle the heterogeneity of the time of failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  1. High-Resolution Global Modeling of the Effects of Subgrid-Scale Clouds and Turbulence on Precipitating Cloud Systems

    Energy Technology Data Exchange (ETDEWEB)

    Bogenschutz, Peter [National Center for Atmospheric Research, Boulder, CO (United States); Moeng, Chin-Hoh [National Center for Atmospheric Research, Boulder, CO (United States)

    2015-10-13

    The PIs at the National Center for Atmospheric Research (NCAR), Chin-Hoh Moeng and Peter Bogenschutz, have primarily focused on the implementation of the Simplified Higher-Order Turbulence Closure (SHOC; Bogenschutz and Krueger 2013) into the Multi-scale Modeling Framework (MMF) global model and on testing SHOC for deep convective cloud regimes.

  2. Hybrid Large Eddy Simulation / Reynolds Averaged Navier-Stokes Modeling in Directed Energy Applications

    Science.gov (United States)

    Zilberter, Ilya Alexandrovich

    In this work, a hybrid Large Eddy Simulation / Reynolds-Averaged Navier-Stokes (LES/RANS) turbulence model is applied to simulate two flows relevant to directed energy applications. The flow solver blends the Menter Baseline turbulence closure near solid boundaries with a Lenormand-type subgrid model in the free stream, using a blending function that employs the ratio of estimated inner and outer turbulent length scales. A Mach 2.2 mixing nozzle/diffuser system representative of a gas laser is simulated under a range of exit pressures to assess the ability of the model to predict the dynamics of the shock train. The simulation captures the location of the shock train responsible for pressure recovery but under-predicts the rate of pressure increase. Predicted turbulence production at the wall is found to be highly sensitive to the behavior of the RANS turbulence model. A Mach 2.3, high-Reynolds-number, three-dimensional cavity flow is also simulated in order to compute the wavefront aberrations of an optical beam passing through the cavity. The cavity geometry is modeled using an immersed boundary method, and an auxiliary flat-plate simulation is performed to replicate the effects of the wind-tunnel boundary layer on the computed optical path difference. Pressure spectra extracted on the cavity walls agree with empirical predictions based on Rossiter's formula. Proper orthogonal modes of the wavefront aberrations in a beam originating from the cavity center agree well with experimental data despite uncertainty about inflow turbulence levels and boundary-layer thickness over the wind-tunnel window. Dynamic mode decomposition of a planar wavefront spanning the cavity reveals that wavefront distortions are driven by shear-layer oscillations at the Rossiter frequencies; these disturbances create eddy shocklets that propagate into the free stream, creating additional optical wavefront distortion.

  3. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    Science.gov (United States)

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, not enough research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in the three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at different levels: daily and hourly; meanwhile, the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
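
    The crash-frequency models in this record are Bayesian Poisson-lognormal regressions; a rough non-Bayesian analogue can be sketched with an ordinary Poisson GLM, as below. The synthetic data and the plain Poisson family are simplifying assumptions, not the authors' specification.

```python
# Sketch: crash frequency vs. traffic covariates with a Poisson GLM.
# Synthetic data and the non-Bayesian Poisson family are simplifying assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
log_volume = rng.normal(8.0, 0.5, n)       # log of traffic volume
speed_std = rng.gamma(2.0, 2.0, n)         # standard deviation of speed
log_length = rng.normal(0.0, 0.3, n)       # log of segment length

# Synthetic "observed" crash counts generated from the same covariates.
eta = -6.0 + 0.6 * log_volume + 0.05 * speed_std + 0.8 * log_length
crashes = rng.poisson(np.exp(eta))

X = sm.add_constant(np.column_stack([log_volume, speed_std, log_length]))
model = sm.GLM(crashes, X, family=sm.families.Poisson()).fit()
print(model.summary())
```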

  4. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations carried out to study the aerodynamic characteristics of the low-Reynolds-number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60...... the Reynolds number, and the effect is visible even at a relatively low chord-Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit...

  5. Localization of the dynamic two-parameter subgrid-scale model and application to near-wall turbulent flows

    International Nuclear Information System (INIS)

    Wang, B.; Bergstrom, D.J.

    2002-01-01

    The dynamic two-parameter mixed model (DTPMM) has recently been introduced in large eddy simulation (LES). However, current approaches in the literature are mathematically inconsistent. In this paper, the DTPMM has been optimized using the functional variational method. The mathematical inconsistency has been removed, and a governing system of two integral equations for the model coefficients of the DTPMM, together with some significant features, has been obtained. Coherent structures relating to the vortex motion of large vortices have been investigated using the vortex λ₂-definition of Jeong and Hussain (1995). The numerical results agree with the classical wall law of von Karman (1939) and the experimental correlation of Aydin and Leutheusser (1991). (author)

  6. Spectral non-equilibrium property in homogeneous isotropic turbulence and its implication in subgrid-scale modeling

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Le [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Zhu, Ying [Laboratory of Mathematics and Physics, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Liu, Yangwei, E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Lu, Lipeng [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China)

    2015-10-09

    The non-equilibrium property in turbulence is a non-negligible problem in large-eddy simulation but has not yet been systematically considered. The generalization from equilibrium turbulence to non-equilibrium turbulence requires a clear recognition of the non-equilibrium property. As a preliminary step towards this recognition, the present letter defines a typical non-equilibrium process, namely the spectral non-equilibrium process, in homogeneous isotropic turbulence. It is then theoretically investigated by employing the skewness of the grid-scale velocity gradient, which permits the decomposition of the resolved velocity field into an equilibrium one and a time-reversed one. Based on this decomposition, an improved Smagorinsky model is proposed to correct the non-equilibrium behavior of the traditional Smagorinsky model. The present study is expected to shed light on future studies of more general non-equilibrium turbulent flows. - Highlights: • A spectral non-equilibrium process in isotropic turbulence is defined theoretically. • A decomposition method is proposed to divide a non-equilibrium turbulence field. • An improved Smagorinsky model is proposed to correct the non-equilibrium behavior.
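
    For reference, the baseline Smagorinsky closure that the letter sets out to correct computes an eddy viscosity from the resolved strain rate, ν_t = (C_s Δ)² |S|. The sketch below evaluates it for a single resolved velocity-gradient tensor; the tensor values and constants are arbitrary illustrations, and the proposed skewness-based correction is not reproduced.

```python
# Sketch of the baseline Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S|.
# The velocity-gradient values, Cs and Delta are illustrative assumptions.
import numpy as np

def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    """Eddy viscosity from a 3x3 resolved velocity-gradient tensor."""
    s = 0.5 * (grad_u + grad_u.T)                  # resolved strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s))           # |S| = sqrt(2 S_ij S_ij)
    return (cs * delta) ** 2 * s_mag

grad_u = np.array([[0.0, 10.0, 0.0],
                   [0.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])              # simple shear, du/dy = 10 1/s
print(smagorinsky_nu_t(grad_u, delta=1e-3))        # eddy viscosity in m^2/s
```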

  7. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  8. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...... basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...

  9. Statistical dynamical subgrid-scale parameterizations for geophysical flows

    International Nuclear Information System (INIS)

    O'Kane, T J; Frederiksen, J S

    2008-01-01

    Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.

  10. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  11. Yearly, seasonal and monthly daily average diffuse sky radiation models

    International Nuclear Information System (INIS)

    Kassem, A.S.; Mujahid, A.M.; Turner, D.W.

    1993-01-01

    A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9°N, Long. = 89.9°W), U.S.A. The model has a determination coefficient of 0.91 and 0.092 standard error of estimate. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed which has 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
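
    Regression models of this type typically relate the daily diffuse fraction to the daily clearness index; the sketch below fits such a relation by least squares on synthetic data. The quadratic form and the synthetic values are assumptions for illustration only, not the coefficients reported by the authors.

```python
# Sketch: fit a daily diffuse-fraction vs. clearness-index regression by least squares.
# The synthetic data and the quadratic form are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
kt = rng.uniform(0.2, 0.8, 365)                       # daily clearness index
kd_true = 0.95 - 0.4 * kt - 0.5 * kt**2               # hypothetical relation
kd = kd_true + rng.normal(0.0, 0.03, kt.size)         # "measured" diffuse fraction

coeffs = np.polyfit(kt, kd, deg=2)                    # quadratic fit
kd_hat = np.polyval(coeffs, kt)
r2 = 1.0 - np.sum((kd - kd_hat) ** 2) / np.sum((kd - kd.mean()) ** 2)
print("coefficients:", coeffs, "R^2:", round(r2, 3))
```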

  12. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
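
    A Box-Jenkins analysis of the kind described in this record can be sketched with standard time-series tooling: pick a candidate order, estimate it, and inspect residual diagnostics. The synthetic GPA-like series and the (1, 0, 1) order below are illustrative assumptions.

```python
# Sketch of a Box-Jenkins style fit: estimate an ARIMA model and check residuals.
# The synthetic series and the chosen (1, 0, 1) order are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(2)
n = 200
e = rng.normal(0.0, 0.1, n)
y = np.zeros(n)
for t in range(1, n):                     # ARMA(1,1)-like synthetic sequence
    y[t] = 3.0 + 0.6 * (y[t - 1] - 3.0) + e[t] + 0.3 * e[t - 1]

fit = ARIMA(y, order=(1, 0, 1)).fit()     # estimation step
print(fit.params)
print(acorr_ljungbox(fit.resid, lags=[10]))   # diagnosis step: residual whiteness
```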

  13. A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence

    International Nuclear Information System (INIS)

    Chumakov, Sergei

    2008-01-01

    We perform a direct numerical simulation (DNS) of forced homogeneous isotropic turbulence with a passive scalar that is forced by mean gradient. The DNS data are used to study the properties of subgrid-scale flux of a passive scalar in the framework of large eddy simulation (LES), such as alignment trends between the flux, resolved, and subgrid-scale flow structures. It is shown that the direction of the flux is strongly coupled with the subgrid-scale stress axes rather than the resolved flow quantities such as strain, vorticity, or scalar gradient. We derive an approximate transport equation for the subgrid-scale flux of a scalar and look at the relative importance of the terms in the transport equation. A particular form of LES tensor-viscosity model for the scalar flux is investigated, which includes the subgrid-scale stress. Effect of different models for the subgrid-scale stress on the model for the subgrid-scale flux is studied.

  14. A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence.

    Science.gov (United States)

    Chumakov, Sergei G

    2008-09-01

    We perform a direct numerical simulation (DNS) of forced homogeneous isotropic turbulence with a passive scalar that is forced by mean gradient. The DNS data are used to study the properties of subgrid-scale flux of a passive scalar in the framework of large eddy simulation (LES), such as alignment trends between the flux, resolved, and subgrid-scale flow structures. It is shown that the direction of the flux is strongly coupled with the subgrid-scale stress axes rather than the resolved flow quantities such as strain, vorticity, or scalar gradient. We derive an approximate transport equation for the subgrid-scale flux of a scalar and look at the relative importance of the terms in the transport equation. A particular form of LES tensor-viscosity model for the scalar flux is investigated, which includes the subgrid-scale stress. Effect of different models for the subgrid-scale stress on the model for the subgrid-scale flux is studied.

  15. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    Hunter, N.F. Jr.

    1990-01-01

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and model testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs

  16. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of an autoregressive (AR) model to estimating noise measurements has achieved many successes in reactor noise analysis over the last ten years. The physical processes that take place in a nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently, more accurate results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of generalised Yule-Walker equations is derived from the equation of an ARMA model, and then a method for its solution is given. Numerical results illustrate the application of the proposed method. (author)
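
    For the pure AR case, the Yule-Walker system relates sample autocovariances to the AR coefficients and can be solved directly, as in the sketch below. The generalized system for the full ARMA model derived in the paper is not reproduced here, and the synthetic noise record is an assumption.

```python
# Sketch: solve the (ordinary) Yule-Walker equations for an AR(p) model.
# The generalized ARMA version from the paper is not reproduced here.
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients from sample autocovariances."""
    x = np.asarray(x) - np.mean(x)
    n = x.size
    gamma = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:])

rng = np.random.default_rng(3)
x = np.zeros(2000)
for t in range(2, x.size):                       # synthetic AR(2) noise record
    x[t] = 0.7 * x[t - 1] - 0.2 * x[t - 2] + rng.normal()

print(yule_walker(x, p=2))    # should be close to [0.7, -0.2]
```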

  17. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords: Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  18. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distribution is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.

  19. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
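
    The core operation of the ERF approach, averaging the driving fields of several GCMs into a single set of boundary conditions, reduces to an ensemble mean over the model dimension, as sketched below. The array shapes and variable names are illustrative assumptions rather than the RegCM interface.

```python
# Sketch: build Ensemble-averaged Reconstructed Forcings by averaging GCM fields.
# Array shapes and variable names are illustrative; not the RegCM/CORDEX interface.
import numpy as np

n_models, n_time, n_lev, n_lat, n_lon = 6, 8, 20, 40, 60
rng = np.random.default_rng(4)

# Hypothetical stack of boundary-condition fields from six GCMs (e.g. temperature).
gcm_fields = 280.0 + rng.normal(0.0, 2.0, (n_models, n_time, n_lev, n_lat, n_lon))

erf_field = gcm_fields.mean(axis=0)   # single ensemble-averaged forcing field

# The RCM is then driven once with erf_field instead of n_models separate times.
print(gcm_fields.shape, "->", erf_field.shape)
```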

  20. A collisional-radiative average atom model for hot plasmas

    International Nuclear Information System (INIS)

    Rozsnyai, B.F.

    1996-01-01

    A collisional-radiative 'average atom' (AA) model is presented for the calculation of opacities of hot plasmas not in the condition of local thermodynamic equilibrium (LTE). The electron impact and radiative rate constants are calculated using the dipole oscillator strengths of the average atom. A key element of the model is the photon escape probability, which at present is calculated for a semi-infinite slab. The Fermi statistics render the rate equation for the AA level occupancies nonlinear, which requires iterations until the steady-state AA level occupancies are found. Detailed electronic configurations are built into the model after the self-consistent non-LTE AA state is found. The model shows a continuous transition from the non-LTE to the LTE state depending on the optical thickness of the plasma. 22 refs., 13 figs., 1 tab

  1. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  2. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  3. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented towards predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, the survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of the predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for a full and controlled overview of all econometric models which can be obtained from a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.

  4. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might nevertheless be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data

  5. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to apply a mean-reverting Ornstein-Uhlenbeck process to describe the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city, and the slow convergence of the HDD call price is observed when taking 100,000 simulations. The methods of the research will provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
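
    The pricing recipe in this record can be sketched as follows: simulate the daily average temperature as a mean-reverting Ornstein-Uhlenbeck process around a seasonal mean, accumulate heating degree days over the contract period, and average the discounted call payoff over many paths. All parameter values below are illustrative assumptions, not the Zhengzhou estimates.

```python
# Sketch: OU temperature simulation and Monte Carlo pricing of an HDD call.
# All parameter values are illustrative assumptions, not fitted to Zhengzhou data.
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_days = 100_000, 90                 # contract length in days
kappa, sigma = 0.25, 3.0                      # mean-reversion speed, volatility
strike, tick, r = 900.0, 1.0, 0.03            # HDD strike, payoff per HDD, rate

days = np.arange(n_days)
seasonal_mean = 5.0 + 10.0 * np.sin(2.0 * np.pi * days / 365.0)   # hypothetical

T = np.full(n_paths, seasonal_mean[0])
hdd = np.zeros(n_paths)
for d in range(1, n_days):
    # Euler step of dT = kappa*(theta(t) - T) dt + sigma dW with dt = 1 day
    T += kappa * (seasonal_mean[d] - T) + sigma * rng.normal(size=n_paths)
    hdd += np.maximum(18.0 - T, 0.0)          # heating degree days vs 18 C

payoff = tick * np.maximum(hdd - strike, 0.0)
price = np.exp(-r * n_days / 365.0) * payoff.mean()
print("HDD call price estimate:", round(price, 2))
```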

  6. Zonally averaged chemical-dynamical model of the lower thermosphere

    International Nuclear Information System (INIS)

    Kasting, J.F.; Roble, R.G.

    1981-01-01

    A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition (including N2, O2 and O), temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than in the summer hemisphere at 160 km. The O2 and N2 variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of OI (5577 Å) green-line emission intensity are calculated by using both the Chapman and Barth mechanisms. The composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model

  7. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
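
    A minimal version of the EM training step compared in this record can be sketched for Gaussian member PDFs with a common variance: the E-step computes membership probabilities of each observation under each ensemble member, and the M-step updates the BMA weights and variance. The synthetic forecasts and the simple Gaussian kernels are assumptions for illustration; the DREAM MCMC alternative is not reproduced.

```python
# Sketch: EM estimation of BMA weights and a common variance for Gaussian kernels.
# Synthetic forecasts/observations are illustrative; DREAM MCMC is not reproduced.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n_obs, n_members = 300, 3
truth = rng.normal(15.0, 4.0, n_obs)
# Hypothetical ensemble forecasts: member 0 is best, member 2 is biased and noisy.
forecasts = np.column_stack([truth + rng.normal(0.0, 1.0, n_obs),
                             truth + rng.normal(0.5, 2.0, n_obs),
                             truth + rng.normal(2.0, 3.0, n_obs)])

w = np.full(n_members, 1.0 / n_members)
var = 1.0
for _ in range(200):                                     # EM iterations
    dens = w * norm.pdf(truth[:, None], loc=forecasts, scale=np.sqrt(var))
    z = dens / dens.sum(axis=1, keepdims=True)           # E-step: responsibilities
    w = z.mean(axis=0)                                    # M-step: weights
    var = np.sum(z * (truth[:, None] - forecasts) ** 2) / n_obs   # common variance

print("BMA weights:", np.round(w, 3), "sigma:", round(np.sqrt(var), 3))
```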

  8. The dynamics of multimodal integration: The averaging diffusion model.

    Science.gov (United States)

    Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; L McClelland, James

    2017-12-01

    We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a base for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
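
    The core distinction drawn in this record, a decision variable that tracks the running mean of evidence samples instead of their running sum, can be sketched with a simple simulation; the drift and noise level below are illustrative assumptions rather than fitted parameters.

```python
# Sketch: decision variable as a running mean of evidence samples (averaging model)
# versus a running sum (classic accumulator). Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_steps, drift, noise = 200, 0.05, 1.0

samples = drift + noise * rng.normal(size=n_steps)   # noisy evidence samples
running_sum = np.cumsum(samples)
running_mean = running_sum / np.arange(1, n_steps + 1)

# In the summing model the variable keeps growing; in the averaging model it
# settles near the drift, so the effective decision bound must shrink with time.
print("final sum:", round(running_sum[-1], 2),
      "final mean:", round(running_mean[-1], 3))
```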

  9. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential which determines the scale of spontaneous symmetry breaking. (orig.)

  10. Online Prediction under Model Uncertainty Via Dynamic Model Averaging: Application to a Cold Rolling Mill

    National Research Council Canada - National Science Library

    Raftery, Adrian E; Karny, Miroslav; Andrysek, Josef; Ettler, Pavel

    2007-01-01

    ... is. We develop a method called Dynamic Model Averaging (DMA) in which a state space model for the parameters of each model is combined with a Markov chain model for the correct model. This allows the (correct...

  11. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.

    2015-08-01

    © 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.
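
    The branching rule described in this record can be sketched as a simple time-stepping loop: tip and mesenchyme populations evolve exponentially, the tip count doubles whenever the epithelial cells per tip reach a threshold, and branching stops once the mesenchyme falls below a critical size. All rates and thresholds in the sketch are hypothetical placeholders, not the fitted C57Bl6 values.

```python
# Sketch of the spatially-averaged branching rule: grow populations, double the
# number of tips at a cells-per-tip threshold, stop when mesenchyme is depleted.
# All rate constants and thresholds are hypothetical placeholders.
dt, t_end = 0.01, 10.0
r_tip, r_mes = 0.8, -0.6          # growth rate of tip cells, decay of mesenchyme
cells_per_tip_threshold = 200.0
mesenchyme_critical = 1.0e3

tip_cells, mesenchyme, n_tips = 100.0, 1.0e5, 1
branching_active = True

for step in range(int(t_end / dt)):
    tip_cells += dt * r_tip * tip_cells
    mesenchyme += dt * r_mes * mesenchyme
    if branching_active and tip_cells / n_tips >= cells_per_tip_threshold:
        n_tips *= 2                              # symmetric branching event
    if mesenchyme < mesenchyme_critical:
        branching_active = False                 # cessation of branching

print("final number of branches (tips):", n_tips)
```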

  12. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.; Combes, A.N.; Short, K.M.; Lefevre, J.; Hamilton, N.A.; Smyth, I.M.; Little, M.H.; Byrne, H.M.

    2015-01-01

    © 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.

  13. Demonstration of two-phase Direct Numerical Simulation (DNS) methods potentiality to give information to averaged models: application to bubbles column

    International Nuclear Information System (INIS)

    Magdeleine, S.

    2009-11-01

    This work is part of a long-term project that aims at using two-phase Direct Numerical Simulation (DNS) to give information to averaged models. For now, it is limited to isothermal bubbly flows with no phase change. It can be subdivided into two parts. Firstly, theoretical developments are made in order to build an equivalent of Large Eddy Simulation (LES) for two-phase flows, called Interfaces and Sub-grid Scales (ISS). After the implementation of the ISS model in our code called Trio U, a set of various cases is used to validate this model. Then, specific tests are made in order to optimize the model for our particular bubbly flows. We thus showed the capacity of the ISS model to produce a pertinent solution at low cost. Secondly, we use the ISS model to perform simulations of bubbly flows in a column. Results of these simulations are averaged to obtain quantities that appear in the mass, momentum and interfacial area density balances. We thus carried out an a priori test of a complete one-dimensional averaged model. We showed that this model predicts the simplest flows (laminar and monodisperse) well. Moreover, the one-pressure hypothesis, which is often made in averaged models like CATHARE, NEPTUNE and RELAP5, is satisfied in such flows. In contrast, without a polydisperse model, the drag is over-predicted and the uncorrelated A_i flux needs a closure law. Finally, we showed that in turbulent flows, fluctuations of velocity and pressure in the liquid phase are not represented by the tested averaged model. (author)

  14. An averaged polarizable potential for multiscale modeling in phospholipid membranes

    DEFF Research Database (Denmark)

    Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...

  15. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially...

  16. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    OpenAIRE

    Samir Khaled Safi

    2014-01-01

    The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. Firstly: when the disturbance terms follow the general covariance matrix structure Cov(w_i, w_j) = Σ with σ_{i,j} ≠ 0 for all i ≠ j. Secondly: when the diagonal elements of Σ are not all identical but σ_{i,j} = 0 for all i ≠ j, i.e. Σ = diag(σ_11, σ_22, …, σ_tt).

  17. Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms

    Directory of Open Access Journals (Sweden)

    Samir Khaled Safi

    2014-02-01

    Full Text Available The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. Firstly: when the disturbance terms follow the general covariance matrix structure Cov(w_i, w_j) = Σ with σ_{i,j} ≠ 0 for all i ≠ j. Secondly: when the diagonal elements of Σ are not all identical but σ_{i,j} = 0 for all i ≠ j, i.e. Σ = diag(σ_11, σ_22, …, σ_tt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.
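
    For the diagonal-covariance case discussed above, the lag-k autocovariance of an MA(q) process X_t = Σ_j θ_j w_{t-j} (with θ_0 = 1 and independent innovations) reduces to a finite sum of products of MA coefficients weighted by the time-varying innovation variances; the sketch below evaluates it and checks it against a Monte Carlo simulation. The coefficient and variance values are illustrative assumptions.

```python
# Sketch: lag-k autocovariance of an MA(q) process with independent but
# heteroskedastic innovations (diagonal covariance). Values are illustrative.
import numpy as np

theta = np.array([1.0, 0.6, -0.3])        # theta_0, theta_1, theta_2  (q = 2)
q = theta.size - 1

def gamma(t, k, sigma2):
    """Cov(X_t, X_{t-k}) = sum_j theta_{j+k} * theta_j * Var(w_{t-k-j})."""
    return sum(theta[j + k] * theta[j] * sigma2[t - k - j] for j in range(q - k + 1))

rng = np.random.default_rng(8)
n = 50
sigma2 = 0.5 + rng.uniform(0.0, 2.0, n)   # time-varying innovation variances

# Monte Carlo check over many realizations of the same heteroskedastic series.
reps = 50_000
w = rng.normal(size=(reps, n)) * np.sqrt(sigma2)
x = sum(theta[i] * np.roll(w, i, axis=1) for i in range(q + 1))
t, k = 30, 1
print("analytic :", gamma(t, k, sigma2))
print("simulated:", np.mean(x[:, t] * x[:, t - k]))
```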

  18. A new subgrid characteristic length for turbulence simulations on anisotropic grids

    Science.gov (United States)

    Trias, F. X.; Gorobets, A.; Silvis, M. H.; Verstappen, R. W. C. P.; Oliva, A.

    2017-11-01

    Direct numerical simulations of the incompressible Navier-Stokes equations are not feasible yet for most practical turbulent flows. Therefore, dynamically less complex mathematical formulations are necessary for coarse-grained simulations. In this regard, eddy-viscosity models for Large-Eddy Simulation (LES) are probably the most popular example thereof. This type of models requires the calculation of a subgrid characteristic length which is usually associated with the local grid size. For isotropic grids, this is equal to the mesh step. However, for anisotropic or unstructured grids, such as the pancake-like meshes that are often used to resolve near-wall turbulence or shear layers, a consensus on defining the subgrid characteristic length has not been reached yet despite the fact that it can strongly affect the performance of LES models. In this context, a new definition of the subgrid characteristic length is presented in this work. This flow-dependent length scale is based on the turbulent, or subgrid stress, tensor and its representations on different grids. The simplicity and mathematical properties suggest that it can be a robust definition that minimizes the effects of mesh anisotropies on simulation results. The performance of the proposed subgrid characteristic length is successfully tested for decaying isotropic turbulence and a turbulent channel flow using artificially refined grids. Finally, a simple extension of the method for unstructured meshes is proposed and tested for a turbulent flow around a square cylinder. Comparisons with existing subgrid characteristic length scales show that the proposed definition is much more robust with respect to mesh anisotropies and has a great potential to be used in complex geometries where highly skewed (unstructured) meshes are present.
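
    The sensitivity that motivates this record can be seen by comparing common subgrid length definitions on a strongly anisotropic (pancake-like) cell, as in the sketch below. The cell dimensions are arbitrary, and the flow-dependent, stress-tensor-based definition proposed in the paper is not reproduced here.

```python
# Sketch: common subgrid characteristic lengths on an anisotropic "pancake" cell.
# Cell dimensions are arbitrary; the paper's stress-tensor-based length is not
# reproduced here.
import numpy as np

dx, dy, dz = 1.0e-1, 1.0e-1, 1.0e-3     # strongly anisotropic near-wall cell

delta_vol = (dx * dy * dz) ** (1.0 / 3.0)          # cube root of the cell volume
delta_max = max(dx, dy, dz)                        # maximum edge length
delta_rms = np.sqrt((dx**2 + dy**2 + dz**2) / 3.0) # root mean square of the edges

print(f"cube-root volume : {delta_vol:.4e}")
print(f"maximum edge     : {delta_max:.4e}")
print(f"rms of edges     : {delta_rms:.4e}")
# The spread across these definitions directly rescales the eddy viscosity
# (Cs*Delta)^2, which is why the choice matters on anisotropic grids.
```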

  19. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859

  20. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
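
    The recursive model-probability update that underlies DMA can be sketched in a few lines: a forgetting factor flattens the previous posterior model probabilities, each candidate model's predictive density for the new observation reweights them, and (in the dynamic Occam's window variant) only models above a probability cutoff are carried forward. The linear-Gaussian candidate models and all numerical values below are illustrative assumptions.

```python
# Sketch: one DMA-style update of model probabilities with a forgetting factor,
# plus an Occam's-window style cutoff. Candidate models and values are illustrative.
import numpy as np
from scipy.stats import norm

alpha = 0.95                      # forgetting factor
cutoff = 0.01                     # Occam's window: drop models below this weight

prior = np.array([0.50, 0.30, 0.15, 0.05])       # previous posterior model probs
predictions = np.array([1.9, 2.4, 1.0, 3.5])     # each model's forecast of y_t
sigmas = np.array([0.5, 0.5, 0.8, 1.0])          # predictive standard deviations
y_t = 2.1                                        # newly observed value

pred_weights = prior ** alpha
pred_weights /= pred_weights.sum()               # "forgetting" prediction step

likelihood = norm.pdf(y_t, loc=predictions, scale=sigmas)
posterior = pred_weights * likelihood
posterior /= posterior.sum()                     # updated model probabilities

keep = posterior >= cutoff                       # dynamic Occam's window
posterior = np.where(keep, posterior, 0.0)
posterior /= posterior.sum()
print(np.round(posterior, 3))
```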

  1. Optimal 25-Point Finite-Difference Subgridding Techniques for the 2D Helmholtz Equation

    Directory of Open Access Journals (Sweden)

    Tingting Wu

    2016-01-01

    Full Text Available We present an optimal 25-point finite-difference subgridding scheme for solving the 2D Helmholtz equation with perfectly matched layer (PML). This scheme is second order in accuracy and pointwise consistent with the equation. Subgrids are used to discretize the computational domain, including the interior domain and the PML. For the transitional node in the interior domain, the finite difference equation is formulated with ghost nodes, and its weight parameters are chosen by a refined choice strategy based on minimizing the numerical dispersion. Numerical experiments are given to illustrate that the newly proposed schemes can produce highly accurate seismic modeling results with enhanced efficiency.

  2. Corporate Average Fuel Economy Compliance and Effects Modeling System Documentation

    Science.gov (United States)

    2009-04-01

    The Volpe National Transportation Systems Center (Volpe Center) of the United States Department of Transportation's Research and Innovative Technology Administration has developed a modeling system to assist the National Highway Traffic Safety Admini...

  3. Bayesian Model Averaging in the Presence of Structural Breaks

    NARCIS (Netherlands)

    F. Ravazzolo (Francesco); D.J.C. van Dijk (Dick); R. Paap (Richard); Ph.H.B.F. Franses (Philip Hans)

    2006-01-01

    This paper develops a return forecasting methodology that allows for instability in the relationship between stock returns and predictor variables, for model uncertainty, and for parameter estimation uncertainty. The predictive regression specification that is put forward allows for

  4. Bayesian averaging over Decision Tree models for trauma severity scoring.

    Science.gov (United States)

    Schetinin, V; Jakaite, L; Krzanowski, W

    2018-01-01

    Health care practitioners analyse possible risks of misleading decisions and need to estimate and quantify uncertainty in predictions. We have examined the "gold" standard of screening a patient's conditions for predicting survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology is based on theoretical assumptions about data and uncertainties. Models induced within such an approach have exposed a number of problems, providing unexplained fluctuation of predicted survival and low accuracy of estimating uncertainty intervals within which predictions are made. Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, has been adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). An MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within an MMF model. This involved interfacing CLUBB’s clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide an MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  6. Evidence on Features of a DSGE Business Cycle Model from Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2012-01-01

    The empirical support for features of a Dynamic Stochastic General Equilibrium model with two technology shocks is evaluated using Bayesian model averaging over vector autoregressions. The model features include equilibria, restrictions on long-run responses, a structural break of unknown

  7. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    Science.gov (United States)

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).

  8. Averaging principle for second-order approximation of heterogeneous models with homogeneous models

    Science.gov (United States)

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ɛ²) equivalent to the outcome of the corresponding homogeneous model, where ɛ is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing). PMID:23150569

  9. Collaborative Project: High-resolution Global Modeling of the Effects of Subgrid-Scale Clouds and Turbulence on Precipitating Cloud Systems

    Energy Technology Data Exchange (ETDEWEB)

    Randall, David A. [Colorado State Univ., Fort Collins, CO (United States). Dept. of Atmospheric Science

    2015-11-01

    We proposed to implement, test, and evaluate recently developed turbulence parameterizations, using a wide variety of methods and modeling frameworks together with observations including ARM data. We have successfully tested three different turbulence parameterizations in versions of the Community Atmosphere Model: CLUBB, SHOC, and IPHOC. All three produce significant improvements in the simulated climate. CLUBB will be used in CAM6, and also in ACME. SHOC is being tested in the NCEP forecast model. In addition, we have achieved a better understanding of the strengths and limitations of the PDF-based parameterizations of turbulence and convection.

  10. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
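
    The BIC-based weighting described in this record follows the standard Bayesian model averaging approximation in which each model's weight is proportional to exp(-ΔBIC/2). A minimal sketch under that assumption is shown below; the BIC values, estimates, and function names are hypothetical and only illustrate the weighting step.

        import numpy as np

        def bma_weights_from_bic(bic):
            # Approximate posterior model probabilities from BIC values,
            # w_k ∝ exp(-0.5 * (BIC_k - BIC_min)), assuming equal prior model probabilities.
            bic = np.asarray(bic, dtype=float)
            w = np.exp(-0.5 * (bic - bic.min()))
            return w / w.sum()

        def bma_estimate(predictions, weights):
            # Weighted average of the individual model estimates (e.g. hydraulic conductivity).
            return np.dot(weights, predictions)

        bic = [102.3, 104.1, 110.7]          # hypothetical scores for three AI models
        k_hat = [1.2e-4, 2.0e-4, 1.6e-4]     # hypothetical conductivity estimates in m/s
        w = bma_weights_from_bic(bic)
        print(w, bma_estimate(k_hat, w))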

  11. Bayesian model averaging using particle filtering and Gaussian mixture modeling : Theory, concepts, and simulation experiments

    NARCIS (Netherlands)

    Rings, J.; Vrugt, J.A.; Schoups, G.; Huisman, J.A.; Vereecken, H.

    2012-01-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive

  12. A reduced-order modeling approach to represent subgrid-scale hydrological dynamics for land-surface simulations: application in a polygonal tundra landscape

    Science.gov (United States)

    Pau, G. S. H.; Bisht, G.; Riley, W. J.

    2014-09-01

    Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface-subsurface isothermal simulations were performed for summer months (June-September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998-2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10^3) with very small relative approximation error for the simulations used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with

  13. An Evaluation of Marine Boundary Layer Cloud Property Simulations in the Community Atmosphere Model Using Satellite Observations: Conventional Subgrid Parameterization versus CLUBB

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hua [Joint Center for Earth Systems Technology, University of Maryland, Baltimore County, Baltimore, Maryland; Zhang, Zhibo [Joint Center for Earth Systems Technology, and Physics Department, University of Maryland, Baltimore County, Baltimore, Maryland; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Wang, Minghuai [Institute for Climate and Global Change Research, and School of Atmospheric Sciences, Nanjing University, Nanjing, China

    2018-03-01

    This paper presents a two-step evaluation of the marine boundary layer (MBL) cloud properties from two Community Atmospheric Model (version 5.3, CAM5) simulations, one based on the CAM5 standard parameterization schemes (CAM5-Base), and the other on the Cloud Layers Unified By Binormals (CLUBB) scheme (CAM5-CLUBB). In the first step, we compare the cloud properties directly from model outputs between the two simulations. We find that the CAM5-CLUBB run produces more MBL clouds in the tropical and subtropical large-scale descending regions. Moreover, the stratocumulus (Sc) to cumulus (Cu) cloud regime transition is much smoother in CAM5-CLUBB than in CAM5-Base. In addition, in CAM5-Base we find some grid cells with very small low cloud fraction (<20%) to have very high in-cloud water content (mixing ratio up to 400mg/kg). We find no such grid cells in the CAM5-CLUBB run. However, we also note that both simulations, especially CAM5-CLUBB, produce a significant amount of “empty” low cloud cells with significant cloud fraction (up to 70%) and near-zero in-cloud water content. In the second step, we use satellite observations from CERES, MODIS and CloudSat to evaluate the simulated MBL cloud properties by employing the COSP satellite simulators. We note that a feature of the COSP-MODIS simulator to mimic the minimum detection threshold of MODIS cloud masking removes much more low clouds from CAM5-CLUBB than it does from CAM5-Base. This leads to a surprising result — in the large-scale descending regions CAM5-CLUBB has a smaller COSP-MODIS cloud fraction and weaker shortwave cloud radiative forcing than CAM5-Base. A sensitivity study suggests that this is because CAM5-CLUBB suffers more from the above-mentioned “empty” clouds issue than CAM5-Base. The COSP-MODIS cloud droplet effective radius in CAM5-CLUBB shows a spatial increase from coastal St toward Cu, which is in qualitative agreement with MODIS observations. In contrast, COSP-MODIS cloud droplet

  14. Monte Carlo-based subgrid parameterization of vertical velocity and stratiform cloud microphysics in ECHAM5.5-HAM2

    Directory of Open Access Journals (Sweden)

    J. Tonttila

    2013-08-01

    Full Text Available A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description for vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs) even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.

  15. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang; Wang, Suojin; Huang, Jianhua Z.

    2013-01-01

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non

  16. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through the models' marginal likelihoods and prior probabilities. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators, including the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates are efficient for facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the required model executions of BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
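
    Of the estimators compared in this record, the harmonic mean estimator is the simplest to state: the marginal likelihood is approximated by the harmonic mean of the likelihoods of posterior samples. The sketch below illustrates only that estimator on synthetic log-likelihood values; it is not the thermodynamic integration estimator that the study recommends, and all names and numbers are assumptions for illustration.

        import numpy as np

        def log_marginal_likelihood_hme(loglik_posterior_samples):
            # Harmonic mean estimator: p(y) ≈ [ (1/N) * sum_i 1 / p(y|theta_i) ]^(-1),
            # evaluated in log space for numerical stability.
            ll = np.asarray(loglik_posterior_samples, dtype=float)
            n = ll.size
            m = (-ll).max()
            log_sum = m + np.log(np.exp(-ll - m).sum())  # logsumexp of -ll
            return np.log(n) - log_sum

        rng = np.random.default_rng(0)
        fake_loglik = rng.normal(loc=-120.0, scale=2.0, size=5000)  # synthetic posterior log-likelihoods
        print(log_marginal_likelihood_hme(fake_loglik))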

  17. Application of Bayesian model averaging to measurements of the primordial power spectrum

    International Nuclear Information System (INIS)

    Parkinson, David; Liddle, Andrew R.

    2010-01-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave background data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index n_s using all of the data has a lower limit of 0.940, where n_s is specified at a pivot scale of 0.015 Mpc^-1. For the tensors, model averaging can tighten the credible upper limit, depending on prior assumptions.

  18. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  19. Comparison of power pulses from homogeneous and time-average-equivalent models

    International Nuclear Information System (INIS)

    De, T.K.; Rouben, B.

    1995-01-01

    The time-average-equivalent model is an 'instantaneous' core model designed to reproduce the same three dimensional power distribution as that generated by a time-average model. However it has been found that the time-average-equivalent model gives a full-core static void reactivity about 8% smaller than the time-average or homogeneous models. To investigate the consequences of this difference in static void reactivity in time dependent calculations, simulations of the power pulse following a hypothetical large-loss-of-coolant accident were performed with a homogeneous model and compared with the power pulse from the time-average-equivalent model. The results show that there is a much smaller difference in peak dynamic reactivity than in static void reactivity between the two models. This is attributed to the fact that voiding is not complete, but also to the retardation effect of the delayed-neutron precursors on the dynamic flux shape. The difference in peak reactivity between the models is 0.06 milli-k. The power pulses are essentially the same in the two models, because the delayed-neutron fraction in the time-average-equivalent model is lower than in the homogeneous model, which compensates for the lower void reactivity in the time-average-equivalent model. (author). 1 ref., 5 tabs., 9 figs

  20. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  1. Econometric modelling of Serbian current account determinants: Jackknife Model Averaging approach

    Directory of Open Access Journals (Sweden)

    Petrović Predrag

    2014-01-01

    Full Text Available This research aims to model Serbian current account determinants for the period Q1 2002 - Q4 2012. Taking into account the majority of relevant determinants, using the Jackknife Model Averaging approach, 48 different models have been estimated, where 1254 equations needed to be estimated and averaged for each of the models. The results of selected representative models indicate moderate persistence of the CA and positive influence of: fiscal balance, oil trade balance, terms of trade, relative income and real effective exchange rates, where we should emphasise: (i) a rather strong influence of relative income, (ii) the fact that the worsening of the oil trade balance results in worsening of other components (probably the non-oil trade balance) of the CA, and (iii) that the positive influence of terms of trade reveals functionality of the Harberger-Laursen-Metzler effect in Serbia. On the other hand, negative influence is evident in the case of: relative economic growth, gross fixed capital formation, net foreign assets and trade openness. What particularly stands out is the strong effect of relative economic growth that, most likely, reveals high citizens' future income growth expectations, which has a negative impact on the CA.

  2. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging (BMA) method is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To accredit the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the performance of the proposed BWPHMs is better compared to the Cox Proportional Hazard Model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the consideration of model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure

  3. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.

  4. Electricity demand loads modeling using AutoRegressive Moving Average (ARMA) models

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.S. [Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi, 83 200 Samos (Greece); Ekonomou, L.; Chatzarakis, G.E. [Department of Electrical Engineering Educators, ASPETE - School of Pedagogical and Technological Education, N. Heraklion, 141 21 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24100 Kalamata (Greece); Katsikas, S.K. [Department of Technology Education and Digital Systems, University of Piraeus, 150 Androutsou Srt., 18 532 Piraeus (Greece); Liatsis, P. [Division of Electrical Electronic and Information Engineering, School of Engineering and Mathematical Sciences, Information and Biomedical Engineering Centre, City University, Northampton Square, London EC1V 0HB (United Kingdom)

    2008-09-15

    This study addresses the problem of modeling the electricity demand loads in Greece. The provided actual load data is deseasonilized and an AutoRegressive Moving Average (ARMA) model is fitted on the data off-line, using the Akaike Corrected Information Criterion (AICC). The developed model fits the data in a successful manner. Difficulties occur when the provided data includes noise or errors and also when an on-line/adaptive modeling is required. In both cases and under the assumption that the provided data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models under the presence of noise are performed. The produced results indicate that the proposed method, which is based on the multi-model partitioning theory, tackles successfully the studied problem. For validation purposes the produced results are compared with three other established order selection criteria, namely AICC, Akaike's Information Criterion (AIC) and Schwarz's Bayesian Information Criterion (BIC). The developed model could be useful in the studies that concern electricity consumption and electricity prices forecasts. (author)
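
    As a hedged illustration of the kind of ARMA fit described here, a deseasonalized load series can be modeled as below. The data, the order (2, 0, 1), and the use of statsmodels are assumptions made for the sketch, not details taken from the study.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(42)

        # Synthetic stand-in for a deseasonalized daily load series (MW).
        n = 500
        load = np.zeros(n)
        for t in range(1, n):
            load[t] = 0.7 * load[t - 1] + rng.normal(scale=50.0)
        load += 5000.0

        # ARMA(p, q) is ARIMA(p, 0, q); the order here is purely illustrative.
        result = ARIMA(load, order=(2, 0, 1)).fit()
        print(result.summary())
        print(result.forecast(steps=24))  # short-horizon demand forecast (illustrative)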

  5. Discontinuous Galerkin Subgrid Finite Element Method for Heterogeneous Brinkman’s Equations

    KAUST Repository

    Iliev, Oleg P.

    2010-01-01

    We present a two-scale finite element method for solving Brinkman's equations with piece-wise constant coefficients. This system of equations models fluid flows in highly porous, heterogeneous media with complex topology of the heterogeneities. We make use of the recently proposed discontinuous Galerkin FEM for Stokes equations by Wang and Ye in [12] and the concept of subgrid approximation developed for Darcy's equations by Arbogast in [4]. In order to reduce the error along the coarse-grid interfaces we have added an alternating Schwarz iteration using patches around the coarse-grid boundaries. We have implemented the subgrid method using the Deal.II FEM library, [7], and we present the computational results for a number of model problems. © 2010 Springer-Verlag Berlin Heidelberg.

  6. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  7. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods, which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging with these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
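
    The Granger-Ramanathan averages referred to here are least-squares combinations of the member hydrographs. Below is a minimal sketch of the unconstrained idea (ordinary least squares of observed flow on simulated flows, with or without an intercept); the exact definitions of variants A, B and C in the study may differ, and all data and names are illustrative assumptions.

        import numpy as np

        def granger_ramanathan_weights(simulations, observed, intercept=False):
            # Least-squares combination weights for ensemble members.
            # simulations: shape (n_time, n_models); observed: shape (n_time,).
            X = np.asarray(simulations, dtype=float)
            if intercept:
                X = np.column_stack([np.ones(X.shape[0]), X])
            coeffs, *_ = np.linalg.lstsq(X, np.asarray(observed, dtype=float), rcond=None)
            return coeffs

        # Toy example with three member hydrographs (illustrative values only).
        rng = np.random.default_rng(1)
        true_flow = 10.0 + np.cumsum(rng.normal(size=200))
        members = np.column_stack([true_flow + rng.normal(scale=s, size=200) for s in (0.5, 1.0, 2.0)])
        w = granger_ramanathan_weights(members, true_flow)
        combined = members @ w
        print(w, np.sqrt(np.mean((combined - true_flow) ** 2)))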

  8. Sensitivity of regional meteorology and atmospheric composition during the DISCOVER-AQ period to subgrid-scale cloud-radiation interactions

    Science.gov (United States)

    Huang, X.; Allen, D. J.; Herwehe, J. A.; Alapaty, K. V.; Loughner, C.; Pickering, K. E.

    2014-12-01

    Subgrid-scale cloudiness directly influences global and regional atmospheric radiation budgets by attenuating shortwave radiation, leading to suppressed convection, decreased surface precipitation as well as other meteorological parameter changes. We use the latest version of WRF (v3.6, Apr 2014), which incorporates the Kain-Fritsch (KF) convective parameterization to provide subgrid-scale cloud fraction and condensate feedback to the rapid radiative transfer model-global (RRTMG) shortwave and longwave radiation schemes. We apply the KF scheme to simulate the DISCOVER-AQ Maryland field campaign (July 2011), and compare the sensitivity of meteorological parameters to the control run that does not include subgrid cloudiness. Furthermore, we will examine the chemical impact from subgrid cloudiness using a regional chemical transport model (CMAQ). There are several meteorological parameters influenced by subgrid cumulus clouds that are very important to air quality modeling, including changes in surface temperature that impact biogenic emission rates; changes in PBL depth that affect pollutant concentrations; and changes in surface humidity levels that impact peroxide-related reactions. Additionally, subgrid cumulus clouds directly impact air pollutant concentrations by modulating photochemistry and vertical mixing. Finally, we will compare with DISCOVER-AQ flight observation data and evaluate how well this off-line CMAQ simulation driven by WRF with the KF scheme simulates the effects of regional convection on atmospheric composition.

  9. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    Science.gov (United States)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  10. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.

  11. Estimation and Forecasting in Vector Autoregressive Moving Average Models for Rich Datasets

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Kapetanios, George

    We address the issue of modelling and forecasting macroeconomic variables using rich datasets, by adopting the class of Vector Autoregressive Moving Average (VARMA) models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares (...

  12. Free-free opacity in dense plasmas with an average atom model

    International Nuclear Information System (INIS)

    Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick; Kilcrease, David Parker; Starrett, Charles Edward

    2017-01-01

    A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model, together with the Kubo-Greenwood model for optical conductivity. This, in turn, is used to calculate the opacity with the Kramers-Kronig dispersion relations. Comparisons to other methods for dense deuterium result in excellent agreement with DFT-MD simulations, and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.

  13. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  14. Analysis of litter size and average litter weight in pigs using a recursive model

    DEFF Research Database (Denmark)

    Varona, Luis; Sorensen, Daniel; Thompson, Robin

    2007-01-01

    An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size, on average piglet weight. The equivalent standard mixed model tests whether or not the covariance matrices of the random effects have a diagonal structure. In Landrace, posterior predictive model checking supports a model without any form of recursion or, alternatively, an SMM with diagonal covariance...

  15. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  16. An averaging battery model for a lead-acid battery operating in an electric car

    Science.gov (United States)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.
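
    The time-averaging idea described in this record can be illustrated very simply: the irregular current profile of a driving schedule is replaced by its average over the schedule period before being fed to a battery capacity model. The sketch below uses a hypothetical piecewise current profile, not the report's actual SAE J227a data.

        import numpy as np

        # Hypothetical one-cycle current profile (seconds, amperes): accelerate, cruise, coast, regen.
        durations = np.array([20.0, 40.0, 10.0, 10.0])
        currents  = np.array([250.0, 120.0, 0.0, -80.0])  # negative = regeneration

        # Time-averaged current over the cycle, counting regeneration one-to-one
        # as retrievable charge (the assumption noted in the abstract).
        avg_current = np.sum(currents * durations) / np.sum(durations)
        print(f"average discharge current over the cycle: {avg_current:.1f} A")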

  17. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving

  18. Actuator disk model of wind farms based on the rotor average wind speed

    DEFF Research Database (Denmark)

    Han, Xing Xing; Xu, Chang; Liu, De You

    2016-01-01

    Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition of ...

  19. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Our objective was to propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.

  20. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    OpenAIRE

    Kirti AREKAR; Rinku JAIN

    2017-01-01

    Stock market volatility depends on three major features (complete volatility, volatility fluctuations, and volatility attention), which are calculated by statistical techniques. This paper presents a comparative analysis of market volatility for two major indices, the banking and IT sectors on the Bombay Stock Exchange (BSE), using the average decline model. The average degeneration process in volatility has been used after very high and low stock returns. The results of this study explain a significant decline in...

  1. Extension of the time-average model to Candu refueling schemes involving reshuffling

    International Nuclear Information System (INIS)

    Rouben, Benjamin; Nichita, Eleodor

    2008-01-01

    Candu reactors consist of a horizontal non-pressurized heavy-water-filled vessel penetrated axially by fuel channels, each containing twelve 50-cm-long fuel bundles cooled by pressurized heavy water. Candu reactors are refueled on-line and, as a consequence, the core flux and power distributions change continuously. For design purposes, a 'time-average' model was developed in the 1970's to calculate the average over time of the flux and power distribution and to study the effects of different refueling schemes. The original time-average model only allows treatment of simple push-through refueling schemes whereby fresh fuel is inserted at one end of the channel and irradiated fuel is removed from the other end. With the advent of advanced fuel cycles and new Candu designs, novel refueling schemes may be considered, such as reshuffling discharged fuel from some channels into other channels, to achieve better overall discharge burnup. Such reshuffling schemes cannot be handled by the original time-average model. This paper presents an extension of the time-average model to allow for the treatment of refueling schemes with reshuffling. Equations for the extended model are presented, together with sample results for a simple demonstration case. (authors)

  2. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    Science.gov (United States)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit

  3. A novel Generalized State-Space Averaging (GSSA) model for advanced aircraft electric power systems

    International Nuclear Information System (INIS)

    Ebrahimi, Hadi; El-Kishky, Hassan

    2015-01-01

    Highlights: • A study model is developed for aircraft electric power systems. • A novel GSSA model is developed for the interconnected power grid. • The system’s dynamics are characterized under various conditions. • The averaged results are compared and verified with the actual model. • The obtained measured values are validated with available aircraft standards. - Abstract: The growing complexity of Advanced Aircraft Electric Power Systems (AAEPS) has made conventional state-space averaging models inadequate for systems analysis and characterization. This paper presents a novel Generalized State-Space Averaging (GSSA) model for the system analysis, control and characterization of AAEPS. The primary objective of this paper is to introduce a mathematically elegant and computationally simple model to copy the AAEPS behavior at the critical nodes of the electric grid. Also, to reduce some or all of the drawbacks (complexity, cost, simulation time…, etc) associated with sensor-based monitoring and computer aided design software simulations popularly used for AAEPS characterization. It is shown in this paper that the GSSA approach overcomes the limitations of the conventional state-space averaging method, which fails to predict the behavior of AC signals in a circuit analysis. Unlike conventional averaging method, the GSSA model presented in this paper includes both DC and AC components. This would capture the key dynamic and steady-state characteristics of the aircraft electric systems. The developed model is then examined for the aircraft system’s visualization and accuracy of computation under different loading scenarios. Through several case studies, the applicability and effectiveness of the GSSA method is verified by comparing to the actual real-time simulation model obtained from Powersim 9 (PSIM9) software environment. The simulations results represent voltage, current and load power at the major nodes of the AAEPS. It has been demonstrated that

  4. The Value of Multivariate Model Sophistication: An Application to pricing Dow Jones Industrial Average options

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 248 multivariate models that differ ... innovation for a Laplace innovation assumption improves the pricing in a smaller way. Apart from investigating directly the value of model sophistication in terms of dollar losses, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.

  5. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    Science.gov (United States)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such a spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, that fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information of sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF

  6. Evaluation of subject contrast and normalized average glandular dose by semi-analytical models

    International Nuclear Information System (INIS)

    Tomal, A.; Poletti, M.E.; Caldas, L.V.E.

    2010-01-01

    In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.

  7. Serpent-COREDAX analysis of CANDU-6 time-average model

    Energy Technology Data Exchange (ETDEWEB)

    Motalab, M.A.; Cho, B.; Kim, W.; Cho, N.Z.; Kim, Y., E-mail: yongheekim@kaist.ac.kr [Korea Advanced Inst. of Science and Technology (KAIST), Dept. of Nuclear and Quantum Engineering Daejeon (Korea, Republic of)

    2015-07-01

COREDAX-2 is a nuclear core analysis nodal code that has adopted the Analytic Function Expansion Nodal (AFEN) methodology developed in Korea. The AFEN method outperforms other conventional nodal methods in terms of accuracy. To evaluate the possibility of CANDU-type core analysis using COREDAX-2, a time-average analysis code system was developed. The two-group homogenized cross-sections were calculated using the Monte Carlo code Serpent2. A stand-alone time-average module was developed to determine the time-average burnup distribution in the core for a given fuel management strategy. The coupled Serpent-COREDAX-2 calculation converges to an equilibrium time-average model for the CANDU-6 core. (author)

  8. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    Science.gov (United States)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  9. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

The semi-analytical method is utilized for solving a relativistic average atom model for high-temperature plasmas. The semi-analytical wave function and the corresponding energy eigenvalue, containing only a numerical factor, are obtained by fitting the potential function in the average atom to a hydrogen-like one. The full equations for the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the temperature of the plasma is comparatively high, the semi-analytical results agree quite well with those obtained using a full numerical method for the same model and with those calculated from slightly different physical models, and the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  10. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, which is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model has been chosen by observing the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard error of residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and Q-Q plot. Finally, monthly maximum and minimum temperature patterns of India for the next 3 years have been forecast with the help of the selected model.
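
As a concrete illustration of fitting the SARIMA(1, 0, 0) × (0, 1, 1)12 form named in the abstract, a sketch using statsmodels is shown below. The synthetic monthly series, the log transform applied to it and the forecast horizon are assumptions for illustration only, not the authors' data or exact workflow.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic stand-in for a monthly maximum-temperature series (illustrative only)
rng = np.random.default_rng(0)
months = pd.date_range("1981-01", periods=420, freq="MS")
seasonal = 10.0 * np.sin(2 * np.pi * (months.month - 4) / 12.0)
temps = pd.Series(32.0 + seasonal + rng.normal(0, 1.0, len(months)), index=months)

# SARIMA(1,0,0)x(0,1,1)_12 fitted by maximum likelihood, as described in the abstract
model = SARIMAX(np.log(temps), order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
result = model.fit(disp=False)
print(result.params)

# Forecast the next 36 months (3 years) and transform back from the log scale
forecast = np.exp(result.get_forecast(steps=36).predicted_mean)
```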

  11. A new nonlinear turbulence model based on Partially-Averaged Navier-Stokes Equations

    International Nuclear Information System (INIS)

    Liu, J T; Wu, Y L; Cai, C; Liu, S H; Wang, L Q

    2013-01-01

The Partially-averaged Navier-Stokes (PANS) model is recognized as a Reynolds-averaged Navier-Stokes (RANS) to direct numerical simulation (DNS) bridging method, intended for any filter width, from RANS to DNS. The PANS method also shares some similarities with the currently popular URANS (unsteady RANS) method. In this paper, a new PANS model was proposed, based on the RNG k-ε turbulence model. The standard and RNG k-ε turbulence models are both isotropic models, as are the corresponding PANS models. The shear stress in those PANS models was obtained from a linear relation. The linear hypothesis is not accurate in the simulation of complex flows, such as the stall phenomenon. The shear stress here was instead computed with the nonlinear method proposed by Ehrhard, and the nonlinear PANS model was thus set up. The pressure coefficient on the suction side of the NACA0015 hydrofoil was predicted. The predicted pressure coefficient agrees well with the experimental result, which shows that the nonlinear PANS model can capture high-pressure-gradient flow. A low-specific-speed centrifugal pump was used to verify the capability of the nonlinear PANS model. The comparison between the simulation results for the centrifugal pump and Particle Image Velocimetry (PIV) results shows that the nonlinear PANS model can be used in the prediction of complex flow fields

  12. Average intensity and spreading of partially coherent model beams propagating in a turbulent biological tissue

    International Nuclear Information System (INIS)

    Wu, Yuqian; Zhang, Yixin; Wang, Qiu; Hu, Zhengda

    2016-01-01

For Gaussian beams with three different partially coherent models, including Gaussian-Schell model (GSM), Laguerre-Gaussian Schell-model (LGSM) and Bessel-Gaussian Schell-model (BGSM) beams propagating through a turbulent biological tissue, the expression for the spatial coherence radius of a spherical wave propagating in a turbulent biological tissue, and the average intensity and beam spreading for GSM, LGSM and BGSM beams, are derived based on the fractal model of the power spectrum of refractive-index variations in biological tissue. Effects of the partially coherent model and the parameters of biological turbulence on such beams are studied in numerical simulations. Our results reveal that the spreading of GSM beams is smaller than that of LGSM and BGSM beams under the same conditions, and the beam with larger source coherence width has smaller beam spreading than that with smaller coherence width. The results are useful for any applications involving light beam propagation through tissues, especially cases where the average intensity and spreading properties of the light should be taken into account to evaluate the system performance, and for investigations of the structures of biological tissue. - Highlights: • The spatial coherence radius of a spherical wave propagating in a turbulent biological tissue is developed. • Expressions for the average intensity and beam spreading of GSM, LGSM and BGSM beams in a turbulent biological tissue are derived. • The contrast for the three partially coherent model beams is shown in numerical simulations. • The results are useful for any applications involving light beam propagation through tissues.

  13. Model averaging in the analysis of leukemia mortality among Japanese A-bomb survivors

    International Nuclear Information System (INIS)

    Richardson, David B.; Cole, Stephen R.

    2012-01-01

    Epidemiological studies often include numerous covariates, with a variety of possible approaches to control for confounding of the association of primary interest, as well as a variety of possible models for the exposure-response association of interest. Walsh and Kaiser (Radiat Environ Biophys 50:21-35, 2011) advocate a weighted averaging of the models, where the weights are a function of overall model goodness of fit and degrees of freedom. They apply this method to analyses of radiation-leukemia mortality associations among Japanese A-bomb survivors. We caution against such an approach, noting that the proposed model averaging approach prioritizes the inclusion of covariates that are strong predictors of the outcome, but which may be irrelevant as confounders of the association of interest, and penalizes adjustment for covariates that are confounders of the association of interest, but may contribute little to overall model goodness of fit. We offer a simple illustration of how this approach can lead to biased results. The proposed model averaging approach may also be suboptimal as way to handle competing model forms for an exposure-response association of interest, given adjustment for the same set of confounders; alternative approaches, such as hierarchical regression, may provide a more useful way to stabilize risk estimates in this setting. (orig.)
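
The weighting scheme being critiqued assigns each candidate model a weight based on its overall goodness of fit and degrees of freedom. Walsh and Kaiser's exact scheme is not reproduced here; the snippet below only illustrates the generic mechanics of such fit-based weights (Akaike-type weights) so the commentary's concern is concrete. The AIC values and risk estimates are invented for illustration.

```python
import numpy as np

def fit_based_weights(aic_values):
    """Akaike-type weights: models with better overall fit receive larger weight.

    This is the generic kind of goodness-of-fit weighting the commentary warns
    about: it rewards covariates that are strong outcome predictors even if they
    are irrelevant as confounders of the exposure-response association of interest.
    """
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Invented AIC values for three candidate exposure-response models
aic = [1012.4, 1010.1, 1015.8]
weights = fit_based_weights(aic)

# Invented excess-risk estimates from the same three models
betas = np.array([0.42, 0.35, 0.55])
averaged_beta = float(np.dot(weights, betas))   # model-averaged estimate
print(weights, averaged_beta)
```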

  14. Distribution of average, marginal, and participation tax rates among Czech taxpayers: results from a TAXBEN model

    Czech Academy of Sciences Publication Activity Database

    Dušek, Libor; Kalíšková, Klára; Münich, Daniel

    2013-01-01

    Roč. 63, č. 6 (2013), s. 474-504 ISSN 0015-1920 R&D Projects: GA TA ČR(CZ) TD010033 Institutional support: RVO:67985998 Keywords : TAXBEN models * average tax rates * marginal tax rates Subject RIV: AH - Economics Impact factor: 0.358, year: 2013 http://journal.fsv.cuni.cz/storage/1287_dusek.pdf

  15. Averaging of the Equations of the Standard Cosmological Model over Rapid Oscillations

    Science.gov (United States)

    Ignat'ev, Yu. G.; Samigullina, A. R.

    2017-11-01

    An averaging of the equations of the standard cosmological model (SCM) is carried out. It is shown that the main contribution to the macroscopic energy density of the scalar field comes from its microscopic oscillations with the Compton period. The effective macroscopic equation of state of the oscillations of the scalar field corresponds to the nonrelativistic limit.

  16. Properties of bright solitons in averaged and unaveraged models for SDG fibres

    Science.gov (United States)

    Kumar, Ajit; Kumar, Atul

    1996-04-01

    Using the slowly varying envelope approximation and averaging over the fibre cross-section the evolution equation for optical pulses in semiconductor-doped glass (SDG) fibres is derived from the nonlinear wave equation. Bright soliton solutions of this equation are obtained numerically and their properties are studied and compared with those of the bright solitons in the unaveraged model.

  17. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics..

  18. Air quality impact of two power plants using a sub-grid

    International Nuclear Information System (INIS)

    Drevet, Jerome; Musson-Genon, Luc

    2012-01-01

Modeling point source emissions of air pollutants with regional Eulerian models is likely to lead to errors because a 3D Eulerian model is not able to correctly reproduce the evolution of a plume near its source. To overcome these difficulties, we applied a Gaussian puff model - embedded within a 3D Eulerian model - for an impact assessment of the EDF fossil fuel-fired power plants of Porcheville and Vitry, Ile-de-France. We simulated an entire year of atmospheric processes for an area covering the Paris region with the Polyphemus platform, with which we conducted various scenarios with or without the Gaussian puff model, referred to as Plume-in-grid, to independently handle the major point source emissions in Ile-de-France. Our study focuses on four chemical compounds (NO, NO2, SO2 and O3). The use of a Gaussian model is important, particularly for primary compounds with low reactivity such as SO2, especially as industrial stacks are the major source of its emissions. SO2 concentrations simulated using Plume-in-grid are closer to the concentrations measured by the stations of the air quality agencies (Associations Agreees de Surveillance de la Qualite de l'Air, AASQA), although they remain largely overestimated. The use of a Gaussian model increases the concentrations near the source and lowers background levels of various chemical species (except O3). Depending on whether we use the Gaussian model, the simulated concentrations may vary by over 30 % for primary compounds such as SO2 and NO, and by around 2 % for secondary compounds such as NO2 and O3. Regarding the impact of fossil fuel-fired power plants, simulated SO2 annual average concentrations are increased by about 1 μg/m3 close to the Porcheville stack and are lowered by about 0.5 μg/m3 far from the sources, highlighting the less diffusive character of the Gaussian model by comparison with the Eulerian model. The integration of a sub-grid Gaussian model offers the advantage of

  19. Effects of stratospheric aerosol surface processes on the LLNL two-dimensional zonally averaged model

    International Nuclear Information System (INIS)

    Connell, P.S.; Kinnison, D.E.; Wuebbles, D.J.; Burley, J.D.; Johnston, H.S.

    1992-01-01

We have investigated the effects of incorporating representations of heterogeneous chemical processes associated with stratospheric sulfuric acid aerosol into the LLNL two-dimensional, zonally averaged, model of the troposphere and stratosphere. Using distributions of aerosol surface area and volume density derived from SAGE II satellite observations, we were primarily interested in changes in partitioning within the Cl- and N- families in the lower stratosphere, compared to a model including only gas phase photochemical reactions

  20. Robust Determinants of Growth in Asian Developing Economies: A Bayesian Panel Data Model Averaging Approach

    OpenAIRE

    LEON-GONZALEZ, Roberto; VINAYAGATHASAN, Thanabalasingam

    2013-01-01

    This paper investigates the determinants of growth in the Asian developing economies. We use Bayesian model averaging (BMA) in the context of a dynamic panel data growth regression to overcome the uncertainty over the choice of control variables. In addition, we use a Bayesian algorithm to analyze a large number of competing models. Among the explanatory variables, we include a non-linear function of inflation that allows for threshold effects. We use an unbalanced panel data set of 27 Asian ...

  1. Comparative Analysis of Market Volatility in Indian Banking and IT Sectors by using Average Decline Model

    Directory of Open Access Journals (Sweden)

    Kirti AREKAR

    2017-12-01

Full Text Available Stock market volatility depends on three major features: complete volatility, volatility fluctuations, and volatility attention, which are calculated by statistical techniques. A comparative analysis of market volatility is carried out for two major indices, i.e. the banking and IT sectors on the Bombay Stock Exchange (BSE), by using the average decline model. The average decline process in volatility is applied after very high and low stock returns. The results of this study show a significant decline in volatility fluctuations, attention, and level between the periods before and after particularly high stock returns.

  2. Evidence on a Real Business Cycle Model with Neutral and Investment-Specific Technology Shocks using Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2010-01-01

    textabstractThe empirical support for a real business cycle model with two technology shocks is evaluated using a Bayesian model averaging procedure. This procedure makes use of a finite mixture of many models within the class of vector autoregressive (VAR) processes. The linear VAR model is

  3. Reynolds-Averaged Navier-Stokes Analysis of Zero Efflux Flow Control over a Hump Model

    Science.gov (United States)

    Rumsey, Christopher L.

    2006-01-01

    The unsteady flow over a hump model with zero efflux oscillatory flow control is modeled computationally using the unsteady Reynolds-averaged Navier-Stokes equations. Three different turbulence models produce similar results, and do a reasonably good job predicting the general character of the unsteady surface pressure coefficients during the forced cycle. However, the turbulent shear stresses are underpredicted in magnitude inside the separation bubble, and the computed results predict too large a (mean) separation bubble compared with experiment. These missed predictions are consistent with earlier steady-state results using no-flow-control and steady suction, from a 2004 CFD validation workshop for synthetic jets.

  4. A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages

    Science.gov (United States)

    Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Global maps of rainfall are of great importance in connection with modeling of the earth s climate. Comparison between the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels , usually several kilometers in size. They measure area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from ship borne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on the average only a certain fraction of the observed pixels contain rain. The fraction of area covered by

  5. Validation of numerical model for cook stove using Reynolds averaged Navier-Stokes based solver

    Science.gov (United States)

    Islam, Md. Moinul; Hasan, Md. Abdullah Al; Rahman, Md. Mominur; Rahaman, Md. Mashiur

    2017-12-01

Biomass-fired cook stoves have, for many years, been the main cooking appliance for the rural people of developing countries. Several studies have been carried out to find efficient stoves. In the present study, a numerical model of an improved household cook stove is developed to analyze the heat transfer and flow behavior of gas during operation. The numerical model is validated against experimental results. The computation is executed using the non-premixed combustion model. The Reynolds-averaged Navier-Stokes (RANS) equations, along with the κ - ɛ model, govern the turbulent flow within the computational domain. The computational results are in good agreement with the experiment. The developed numerical model can be used to predict the effect of different biomasses on the efficiency of the cook stove.

  6. Distribution of average, marginal, and participation tax rates among Czech taxpayers: results from a TAXBEN model

    Czech Academy of Sciences Publication Activity Database

    Dušek, Libor; Kalíšková, Klára; Münich, Daniel

    2013-01-01

    Roč. 63, č. 6 (2013), s. 474-504 ISSN 0015-1920 R&D Projects: GA MŠk(CZ) SVV 267801/2013 Institutional support: PRVOUK-P23 Keywords : TAXBEN models * average tax rates * marginal tax rates Subject RIV: AH - Economics Impact factor: 0.358, year: 2013 http://journal.fsv.cuni.cz/storage/1287_dusek.pdf

  7. A Tidally Averaged Sediment-Transport Model for San Francisco Bay, California

    Science.gov (United States)

    Lionberger, Megan A.; Schoellhamer, David H.

    2009-01-01

    A tidally averaged sediment-transport model of San Francisco Bay was incorporated into a tidally averaged salinity box model previously developed and calibrated using salinity, a conservative tracer (Uncles and Peterson, 1995; Knowles, 1996). The Bay is represented in the model by 50 segments composed of two layers: one representing the channel (>5-meter depth) and the other the shallows (0- to 5-meter depth). Calculations are made using a daily time step and simulations can be made on the decadal time scale. The sediment-transport model includes an erosion-deposition algorithm, a bed-sediment algorithm, and sediment boundary conditions. Erosion and deposition of bed sediments are calculated explicitly, and suspended sediment is transported by implicitly solving the advection-dispersion equation. The bed-sediment model simulates the increase in bed strength with depth, owing to consolidation of fine sediments that make up San Francisco Bay mud. The model is calibrated to either net sedimentation calculated from bathymetric-change data or measured suspended-sediment concentration. Specified boundary conditions are the tributary fluxes of suspended sediment and suspended-sediment concentration in the Pacific Ocean. Results of model calibration and validation show that the model simulates the trends in suspended-sediment concentration associated with tidal fluctuations, residual velocity, and wind stress well, although the spring neap tidal suspended-sediment concentration variability was consistently underestimated. Model validation also showed poor simulation of seasonal sediment pulses from the Sacramento-San Joaquin River Delta at Point San Pablo because the pulses enter the Bay over only a few days and the fate of the pulses is determined by intra-tidal deposition and resuspension that are not included in this tidally averaged model. The model was calibrated to net-basin sedimentation to calculate budgets of sediment and sediment-associated contaminants. While
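
The erosion-deposition algorithm is described only qualitatively in the abstract. To make the idea concrete, a common closure for fine sediment (Krone-type deposition and Partheniades-type erosion) is sketched below; the parameter values, and the assumption that the USGS model uses exactly these closures, are illustrative and not taken from the report.

```python
import numpy as np

def erosion_deposition_step(conc, tau_b, dt, depth,
                            w_s=1.0e-4, tau_crit=0.1, m_eros=5.0e-5):
    """Advance suspended-sediment concentration one step (illustrative closure).

    conc     : suspended-sediment concentration (kg/m^3)
    tau_b    : bed shear stress (Pa)
    dt       : time step (s); a tidally averaged model would use ~1 day
    depth    : water depth (m)
    w_s      : settling velocity (m/s)               -- assumed value
    tau_crit : critical shear stress for erosion (Pa) -- assumed value
    m_eros   : erosion-rate constant (kg/m^2/s)       -- assumed value
    """
    deposition = w_s * conc                                      # Krone-type flux
    erosion = m_eros * np.maximum(tau_b / tau_crit - 1.0, 0.0)   # Partheniades-type flux
    return conc + dt * (erosion - deposition) / depth

# One day of tidally averaged exchange in a 5 m deep channel segment (invented values)
c_new = erosion_deposition_step(conc=0.05, tau_b=0.15, dt=86400.0, depth=5.0)
print(c_new)
```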

  8. Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems

    Science.gov (United States)

    Shahab, Azin

    In a Doubly-fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound rotor induction generator is connected to the grid via a partial scale ac/ac power electronic converter which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with Sinusoidal Pulse-Width Modulation (SPWM) scheme and Optimal Pulse-Width Modulation (OPWM) scheme for the power electronic converter are developed in detail in PSCAD/EMTDC. As the computer simulation using the detailed models tends to be computationally extensive, time consuming and even sometimes not practical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model simulation.

  9. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP and, eventually simplification component exploits Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Comparing to the efficiency results of stand-alone GP, MGGP, and conventional multi linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution, which has a noteworthy importance of being applied in practice. In addition, the approach allows the user to enter human insight into the problem to examine evolved models and pick the best performing programs out for further analysis.

  10. Time series forecasting using ERNN and QR based on Bayesian model averaging

    Science.gov (United States)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. The technique was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique. The amalgamation produced a hybrid technique known as the hybrid ERNN-QR technique. The potentials of forecasting with the hybrid technique are compared with the forecasting capabilities of individual techniques of ERNN and QR. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.

  11. Comparative study of dense plasma state equations obtained from different models of average-atom

    International Nuclear Information System (INIS)

    Fromy, Patrice

    1991-01-01

    This research thesis addresses the influence of temperature and density effects on magnitudes such as pressure, energy, ionisation, and on energy levels of a body described according to the approximation of an electrically neutral isolated atomic sphere. Starting from the general formalism of the functional density, with some approximations, the author deduces the Thomas-Fermi, Thomas-Fermi-Dirac, and Thomas-Fermi-Dirac-Weizsaecker models, and an average-atom approximated quantum model. For each of these models, the author presents an explicit method of resolution, as well as the determination of different magnitudes taken into account in this study. For the different studied magnitudes, the author highlights effects due to the influence of temperature and of density, as well as variations due to the different models [fr

  12. Average spectral power changes at the hippocampal electroencephalogram in schizophrenia model induced by ketamine.

    Science.gov (United States)

    Sampaio, Luis Rafael L; Borges, Lucas T N; Silva, Joyse M F; de Andrade, Francisca Roselin O; Barbosa, Talita M; Oliveira, Tatiana Q; Macedo, Danielle; Lima, Ricardo F; Dantas, Leonardo P; Patrocinio, Manoel Cláudio A; do Vale, Otoni C; Vasconcelos, Silvânia M M

    2018-02-01

The use of ketamine (Ket) as a pharmacological model of schizophrenia is an important tool for understanding the main mechanisms of glutamatergically regulated neural oscillations. Thus, the aim of the current study was to evaluate Ket-induced changes in average spectral power using hippocampal quantitative electroencephalography (QEEG). To this end, male Wistar rats underwent stereotactic surgery for the implantation of an electrode in the right hippocampus. After three days, the animals were divided into four groups that were treated for 10 consecutive days with Ket (10, 50, or 100 mg/kg). Brainwaves were recorded on the 1st or 10th day, corresponding to acute or repeated treatment, respectively. The administration of Ket (10, 50, or 100 mg/kg), compared with controls, induced changes in the hippocampal average spectral power of delta, theta, alpha, and low or high gamma waves, after acute or repeated treatments. Therefore, based on the alterations in the average spectral power of hippocampal waves induced by Ket, our findings might provide a basis for the use of hippocampal QEEG in animal models of schizophrenia. © 2017 Société Française de Pharmacologie et de Thérapeutique.
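
Average spectral power in the classical EEG bands is typically obtained by integrating a power spectral density estimate over each band. The sketch below shows one standard way to do this with Welch's method; the synthetic signal, sampling rate and band edges are assumptions for illustration and are not taken from the study.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                    # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
t = np.arange(0, 60, 1 / fs)                   # one minute of synthetic "hippocampal" signal
eeg = np.sin(2 * np.pi * 7 * t) + 0.5 * rng.standard_normal(t.size)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "low gamma": (30, 60), "high gamma": (60, 100)}

freqs, psd = welch(eeg, fs=fs, nperseg=2048)
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = np.trapz(psd[mask], freqs[mask])   # average spectral power in the band
    print(f"{name}: {band_power:.4f}")
```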

  13. Estimating the average treatment effect on survival based on observational data and using partly conditional modeling.

    Science.gov (United States)

    Gong, Qi; Schaubel, Douglas E

    2017-03-01

    Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio. Most existing methods which quantify the treatment effect through the survival function are applicable to treatments assigned at time 0. In the data structure of our interest, subjects typically begin follow-up untreated; time-until-treatment, and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment, the average effect of treatment among the treated, under current treatment assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pre-treatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out, then averaged in a manner which accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.

  14. Sensitivity test of parameterizations of subgrid-scale orographic form drag in the NCAR CESM1

    Science.gov (United States)

    Liang, Yishuang; Wang, Lanning; Zhang, Guang Jun; Wu, Qizhong

    2017-05-01

    Turbulent drag caused by subgrid orographic form drag has significant effects on the atmosphere. It is represented through parameterization in large-scale numerical prediction models. An indirect parameterization scheme, the Turbulent Mountain Stress scheme (TMS), is currently used in the National Center for Atmospheric Research Community Earth System Model v1.0.4. In this study we test a direct scheme referred to as BBW04 (Beljaars et al. in Q J R Meteorol Soc 130:1327-1347, 10.1256/qj.03.73), which has been used in several short-term weather forecast models and earth system models. Results indicate that both the indirect and direct schemes increase surface wind stress and improve the model's performance in simulating low-level wind speed over complex orography compared to the simulation without subgrid orographic effect. It is shown that the TMS scheme produces a more intense wind speed adjustment, leading to lower wind speed near the surface. The low-level wind speed by the BBW04 scheme agrees better with the ERA-Interim reanalysis and is more sensitive to complex orography as a direct method. Further, the TMS scheme increases the 2-m temperature and planetary boundary layer height over large areas of tropical and subtropical Northern Hemisphere land.

  15. A Two-Factor Autoregressive Moving Average Model Based on Fuzzy Fluctuation Logical Relationships

    Directory of Open Access Journals (Sweden)

    Shuang Guan

    2017-10-01

Full Text Available Many of the existing autoregressive moving average (ARMA) forecast models are based on one main factor. In this paper, we propose a new two-factor first-order ARMA forecast model based on fuzzy fluctuation logical relationships of both a main factor and a secondary factor of a historical training time series. Firstly, we generate a fluctuation time series (FTS) for the two factors by calculating the difference between each data point and that of the previous day, then finding the absolute means of the two FTSs. We then construct a fuzzy fluctuation time series (FFTS) according to the defined linguistic sets. The next step is establishing fuzzy fluctuation logical relation groups (FFLRGs) for a two-factor first-order autoregressive (AR(1)) model and forecasting the training data with the AR(1) model. Then we build FFLRGs for a two-factor first-order autoregressive moving average (ARMA(1,m)) model. Lastly, we forecast test data with the ARMA(1,m) model. To illustrate the performance of our model, we used the real Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and Dow Jones datasets, with the latter as a secondary factor, to forecast TAIEX. The experimental results indicate that the proposed two-factor fluctuation ARMA method outperformed the one-factor method on real historical data. The secondary factor may have some effect on the main factor and thereby impact the forecasting results. Using fuzzified fluctuations rather than fuzzified real data avoids the influence of extreme values in the historical data, which affect forecasting negatively. To verify the accuracy and effectiveness of the model, we also employed our method to forecast the Shanghai Stock Exchange Composite Index (SHSECI) from 2001 to 2015 and the international gold price from 2000 to 2010.

  16. Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting

    Science.gov (United States)

    Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.

    2018-04-01

Unpredictable rainfall changes can affect human activities, such as agriculture, aviation, and shipping, which depend on weather forecasts. Therefore, we need forecasting tools with high accuracy in predicting future rainfall. This research focuses on local forecasting of rainfall at Jember from 2005 until 2016, from 77 rainfall stations. The rainfall at a station is related not only to its own previous occurrences, but also to those at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are correlations of the spatial effect between stations. The GSTAR model is an extension of the space-time model that combines the time-related effects, the time series effects of the locations (stations), and the locations themselves. The GSTAR model is also compared to the ARIMA model, which completely ignores the independent variables. The forecast values of the ARIMA and GSTAR models are then combined using ensemble forecasting techniques. The averaging and stacking methods of ensemble forecasting provide the best model, with higher accuracy and a smaller RMSE (Root Mean Square Error) value. Finally, with the best model we can offer better local rainfall forecasting in Jember for the future.
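
The abstract combines ARIMA and GSTAR forecasts by averaging and by stacking. A generic sketch of both combination rules is given below; it assumes two already-computed forecast vectors on a hold-out set and uses ordinary least squares for the stacking weights, which may differ from the authors' implementation. All numbers are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented hold-out observations and forecasts from the two base models
y_true = np.array([120.0, 95.0, 60.0, 30.0, 15.0, 40.0])
f_arima = np.array([110.0, 100.0, 55.0, 35.0, 20.0, 38.0])
f_gstar = np.array([128.0, 90.0, 70.0, 25.0, 10.0, 45.0])
X = np.column_stack([f_arima, f_gstar])

# Ensemble averaging: equal-weight mean of the member forecasts
f_average = X.mean(axis=1)

# Stacking: learn combination weights from the hold-out errors
stacker = LinearRegression().fit(X, y_true)
f_stacked = stacker.predict(X)

rmse = lambda f: np.sqrt(np.mean((y_true - f) ** 2))
print(rmse(f_arima), rmse(f_gstar), rmse(f_average), rmse(f_stacked))
```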

  17. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    Energy Technology Data Exchange (ETDEWEB)

    Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-04-01

The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. On the converse, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
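
One way to estimate a lower threshold for daily minimum response times is to fit a generalized extreme value distribution to the block minima; fitting a GEV to the negated minima is a common trick, since scipy's genextreme is parameterized for maxima. The report describes a non-stationary GEV, which is not reproduced here; the stationary sketch below, with invented timing data, only illustrates the mechanics.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
# Invented daily minimum transactional response times in milliseconds
daily_minima = 40.0 + rng.gamma(shape=2.0, scale=3.0, size=365)

# scipy's genextreme models block maxima, so fit to the negated minima
shape, loc, scale = genextreme.fit(-daily_minima)

# A 1st-percentile lower threshold for the daily minimum response time
lower_threshold = -genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"lower threshold ~ {lower_threshold:.1f} ms")
```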

  18. Forecasting Rice Productivity and Production of Odisha, India, Using Autoregressive Integrated Moving Average Models

    Directory of Open Access Journals (Sweden)

    Rahul Tripathi

    2014-01-01

Full Text Available Forecasting of rice area, production, and productivity of Odisha was made from the historical data of 1950-51 to 2008-09 by using univariate autoregressive integrated moving average (ARIMA) models and was compared with the forecasted all Indian data. The autoregressive (p) and moving average (q) parameters were identified based on the significant spikes in the plots of the partial autocorrelation function (PACF) and autocorrelation function (ACF) of the different time series. The ARIMA (2, 1, 0) model was found suitable for all Indian rice productivity and production, whereas ARIMA (1, 1, 1) was best fitted for forecasting of rice productivity and production in Odisha. Prediction was made for the immediate next three years, that is, 2007-08, 2008-09, and 2009-10, using the best fitted ARIMA models based on the minimum value of the selection criteria, that is, the Akaike information criterion (AIC) and Schwarz-Bayesian information criterion (SBC). The performance of the models was validated by comparing the percentage deviation from the actual values and the mean absolute percent error (MAPE), which was found to be 0.61 and 2.99% for the area under rice in Odisha and India, respectively. Similarly, for prediction of rice production and productivity in Odisha and India, the MAPE was found to be less than 6%.

  19. ANALYSIS OF RAINFALL AND DISCHARGE OF THE SWAT MODEL USING THE MOVING AVERAGE METHOD IN THE CILIWUNG HULU WATERSHED

    Directory of Open Access Journals (Sweden)

    Defri Satiya Zuma

    2017-09-01

Full Text Available A watershed can be regarded as a hydrological system that transforms rainwater as an input into outputs such as flow and sediment. The transformation of inputs into outputs has specific forms and properties and involves many processes, including processes occurring on the land surface, in river channels, and in the soil and aquifer. This study aimed to apply the SWAT model in the Ciliwung Hulu Watershed and to assess the effect of the 3-day, 5-day, 7-day and 10-day average rainfall on the hydrological characteristics of the watershed. The correlation coefficient (r) between rainfall and discharge was positive, indicating a unidirectional relationship between rainfall and discharge in the upstream, midstream and downstream parts of the watershed. The upper limit ratio of discharge had a downward trend from upstream to downstream, while the lower limit ratio of discharge had an upward trend. This showed that the discharge peak in the Ciliwung Hulu Watershed decreased from upstream to downstream while the baseflow increased. It also showed that the upstream part of the Ciliwung Hulu Watershed had the highest ratio of discharge peak to baseflow, so it needs soil and water conservation and civil engineering measures. The discussion concluded that the SWAT model could be applied well in the Ciliwung Hulu Watershed, and that the average rainfall most affecting the hydrological characteristics was the 10-day average rainfall. For the 10-day average rainfall, all components contributed maximally to river discharge.
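
The n-day average rainfall inputs compared in the study (3-, 5-, 7- and 10-day) can be produced with a simple moving-average filter. A pandas sketch with invented daily rainfall and discharge series is shown below; the data, window handling and correlation check are illustrative assumptions, not the authors' dataset.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
days = pd.date_range("2005-01-01", "2016-12-31", freq="D")
rainfall = pd.Series(rng.gamma(shape=0.8, scale=12.0, size=len(days)), index=days)

# 3-, 5-, 7- and 10-day moving averages of daily rainfall
averages = {n: rainfall.rolling(window=n, min_periods=n).mean()
            for n in (3, 5, 7, 10)}

# Correlation of each averaged series with an invented discharge series
discharge = 0.6 * averages[10].shift(1) + rng.normal(0, 2.0, len(days))
for n, series in averages.items():
    print(n, series.corr(discharge))
```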

  20. Radiative forcing and climate metrics for ozone precursor emissions: the impact of multi-model averaging

    Directory of Open Access Journals (Sweden)

    C. R. MacIntosh

    2015-04-01

Full Text Available Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP), there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values for these quantities are the residual of the sum of terms of opposing signs. For example, the standard deviation for the 20 year GWP is 2–3

  1. Valuing structure, model uncertainty and model averaging in vector autoregressive processes

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2004-01-01

    textabstractEconomic policy decisions are often informed by empirical analysis based on accurate econometric modeling. However, a decision-maker is usually only interested in good estimates of outcomes, while an analyst must also be interested in estimating the model. Accurate inference on

  2. Autonomous Operation of Hybrid Microgrid With AC and DC Subgrids

    DEFF Research Database (Denmark)

    Chiang Loh, Poh; Li, Ding; Kang Chai, Yi

    2013-01-01

This paper investigates power-sharing issues of an autonomous hybrid microgrid. Unlike existing microgrids, which are purely ac, the hybrid microgrid studied here comprises dc and ac subgrids interconnected by power electronic interfaces. The main challenge here is to manage power flows among all sources distributed throughout the two types of subgrids, which is certainly tougher than previous efforts developed for only ac or dc microgrids. This wider scope of control has not yet been investigated, and would certainly rely on the coordinated operation of dc sources, ac sources, and interlinking converters. Suitable control and normalization schemes are now developed for controlling them, with the overall hybrid microgrid performance already verified in simulation and experiment.

  3. Adaptive and self-averaging Thouless-Anderson-Palmer mean-field theory for probabilistic modeling

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2001-01-01

We develop a generalization of the Thouless-Anderson-Palmer (TAP) mean-field approach of disorder physics, which makes the method applicable to the computation of approximate averages in probabilistic models for real data. In contrast to the conventional TAP approach, where knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete set of couplings. We show the significance of the approach in two ways: our approach reproduces replica symmetric results for a wide class of toy models (assuming a nonglassy phase) with given disorder distributions in the thermodynamic limit. On the other hand, simulations on a real data model demonstrate that the method achieves more accurate predictions as compared to conventional TAP approaches.

  4. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    International Nuclear Information System (INIS)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-01-01

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
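
The key architectural idea, a multiplicative layer that forms the anisotropy tensor as an invariant-weighted sum of a tensor basis, can be written in a few lines. The sketch below uses only the first few basis tensors of the Pope expansion and a placeholder coefficient function in place of a trained deep network, so it illustrates the embedded-invariance construction rather than the paper's learned model; the tensors and coefficients used here are invented.

```python
import numpy as np

def anisotropy_from_basis(S, R, coeff_fn):
    """b = sum_n g_n(invariants) * T_n, Galilean-invariant by construction.

    S, R     : normalized mean strain-rate and rotation-rate tensors (3x3)
    coeff_fn : maps the scalar invariants to basis coefficients g_n
               (in the paper this mapping is the output of the deep network;
                here it is a user-supplied placeholder)
    """
    # First four basis tensors of the Pope (1975) expansion
    T = [S,
         S @ R - R @ S,
         S @ S - np.trace(S @ S) / 3.0 * np.eye(3),
         R @ R - np.trace(R @ R) / 3.0 * np.eye(3)]
    invariants = np.array([np.trace(S @ S), np.trace(R @ R)])
    g = coeff_fn(invariants)
    return sum(g_n * T_n for g_n, T_n in zip(g, T))

# Placeholder "network": fixed linear map from invariants to four coefficients
dummy_net = lambda inv: np.array([-0.1, 0.02, 0.01, -0.01]) * (1.0 + inv.sum())

S = np.array([[0.0, 0.5, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
R = np.array([[0.0, 0.5, 0.0], [-0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
b = anisotropy_from_basis(S, R, dummy_net)
print(b)
```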

  5. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic...... part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction...... error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  6. Real-time traffic signal optimization model based on average delay time per person

    Directory of Open Access Journals (Sweden)

    Pengpeng Jiao

    2015-10-01

    Full Text Available Real-time traffic signal control is very important for relieving urban traffic congestion. Many existing traffic control models were formulated using optimization approach, with the objective functions of minimizing vehicle delay time. To improve people’s trip efficiency, this article aims to minimize delay time per person. Based on the time-varying traffic flow data at intersections, the article first fits curves of accumulative arrival and departure vehicles, as well as the corresponding functions. Moreover, this article transfers vehicle delay time to personal delay time using average passenger load of cars and buses, employs such time as the objective function, and proposes a signal timing optimization model for intersections to achieve real-time signal parameters, including cycle length and green time. This research further implements a case study based on practical data collected at an intersection in Beijing, China. The average delay time per person and queue length are employed as evaluation indices to show the performances of the model. The results show that the proposed methodology is capable of improving traffic efficiency and is very effective for real-world applications.
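
The conversion from vehicle delay to person delay in the objective function amounts to weighting each approach's vehicle delay by its average occupancy. A minimal sketch of that weighting is given below, using invented occupancies and only the uniform-delay term of the classical Webster formula as a stand-in for the article's delay expression; it is not the article's full optimization model.

```python
import numpy as np

def person_delay(cycle, greens, flows_vph, occupancy, sat_flow=1800.0):
    """Average delay per person over one signal cycle (illustrative sketch).

    cycle     : cycle length (s)
    greens    : effective green time per approach (s)
    flows_vph : arrival flow per approach (veh/h)
    occupancy : average persons per vehicle per approach (assumed values)
    """
    greens = np.asarray(greens, float)
    flows = np.asarray(flows_vph, float)
    occ = np.asarray(occupancy, float)
    lam = greens / cycle                               # green ratio per approach
    x = np.minimum(flows / (sat_flow * lam), 0.95)     # degree of saturation, capped
    d_veh = 0.5 * cycle * (1 - lam) ** 2 / (1 - lam * x)   # uniform vehicle delay (s)
    persons = flows / 3600.0 * cycle * occ             # persons arriving per cycle
    return np.sum(d_veh * persons) / np.sum(persons)   # average delay per person (s)

# Two car approaches plus one bus-heavy approach with 20 persons per vehicle (invented)
d = person_delay(cycle=90.0, greens=[40.0, 25.0, 15.0],
                 flows_vph=[720.0, 430.0, 180.0], occupancy=[1.3, 1.3, 20.0])
print(f"average delay per person: {d:.1f} s")
```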

  7. Application of the Periodic Average System Model in Dam Deformation Analysis

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2015-01-01

Full Text Available Dams are among the most important hydraulic engineering facilities used for water supply, flood control, and hydroelectric power. Monitoring of dams is crucial since deformation might occur. How to obtain deformation information and then judge the safety condition is the key and difficult problem in the field of dam deformation monitoring. This paper proposes the periodic average system model and introduces the concept of “settlement activity” for the dam deformation problem. Long-term deformation monitoring is carried out at a pumped-storage power station; this model, combined with settlement activity, is used for single-point deformation analysis, and then the whole settlement activity profile is drawn by clustering analysis. Considering the cumulative settlement value of every point, the dam deformation trend is analyzed in an intuitive way. An analysis mode combining single points with multiple points is thus realized. The results show that the key deformation information of the dam can be easily grasped by applying the periodic average system model together with the distribution diagram of settlement activity. Above all, the ideas of this research provide an effective method for dam deformation analysis.

  8. Average and dispersion of the luminosity-redshift relation in the concordance model

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Dayan, I. [DESY Hamburg (Germany). Theory Group; Gasperini, M. [Bari Univ. (Italy). Dipt. di Fisica; Istituto Nazionale di Fisica Nucleare, Bari (Italy); Marozzi, G. [College de France, 75 - Paris (France); Geneve Univ. (Switzerland). Dept. de Physique Theorique and CAP; Nugier, F. [Ecole Normale Superieure CNRS, Paris (France). Laboratoire de Physique Theorique; Veneziano, G. [College de France, 75 - Paris (France); CERN, Geneva (Switzerland). Physics Dept.; New York Univ., NY (United States). Dept. of Physics

    2013-03-15

    Starting from the luminosity-redshift relation recently given up to second order in the Poisson gauge, we calculate the effects of the realistic stochastic background of perturbations of the so-called concordance model on the combined light-cone and ensemble average of various functions of the luminosity distance, and on their variance, as functions of redshift. We apply a gauge-invariant light-cone averaging prescription which is free from infrared and ultraviolet divergences, making our results robust with respect to changes of the corresponding cutoffs. Our main conclusions, in part already anticipated in a recent letter for the case of a perturbation spectrum computed in the linear regime, are that such inhomogeneities not only cannot avoid the need for dark energy, but also cannot prevent, in principle, the determination of its parameters down to an accuracy of order 10{sup -3} - 10{sup -5}, depending on the averaged observable and on the regime considered for the power spectrum. However, taking into account the appropriate corrections arising in the non-linear regime, we predict an irreducible scatter of the data approaching the 10% level which, for limited statistics, will necessarily limit the attainable precision. The predicted dispersion appears to be in good agreement with current observational estimates of the distance-modulus variance due to Doppler and lensing effects (at low and high redshifts, respectively), and represents a challenge for future precision measurements.

  9. Reynolds-Averaged Navier-Stokes Modeling of Turbulent Free Shear Layers

    Science.gov (United States)

    Schilling, Oleg

    2017-11-01

    Turbulent mixing of gases in free shear layers is simulated using a weighted essentially nonoscillatory implementation of ɛ- and L-based Reynolds-averaged Navier-Stokes models. Specifically, the air/air shear layer with velocity ratio 0.6 studied experimentally by Bell and Mehta (1990) is modeled. The detailed predictions of turbulent kinetic energy dissipation rate and lengthscale models are compared to one another, and to the experimental data. The role of analytical, self-similar solutions for model calibration and physical insights is also discussed. It is shown that turbulent lengthscale-based models are unable to predict both the growth parameter (spreading rate) and turbulent kinetic energy normalized by the square of the velocity difference of the streams. The terms in the K, ɛ, and L equation budgets are compared between the models, and it is shown that the production and destruction mechanisms are substantially different in the ɛ and L equations. Application of the turbulence models to the Brown and Roshko (1974) experiments with streams having various velocity and density ratios is also briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. Determining Time-Varying Drivers of Spot Oil Price in a Dynamic Model Averaging Framework

    Directory of Open Access Journals (Sweden)

    Krzysztof Drachal

    2018-05-01

    Full Text Available This article presents results from modelling spot oil prices by Dynamic Model Averaging (DMA). First, based on a literature review and data availability, the following oil price drivers were selected: stock price indices, a stock price volatility index, exchange rates, global economic activity, interest rates, supply and demand indicators, and inventory levels. These were then included as explanatory variables in various DMA models with different initial parameters. Monthly data between January 1986 and December 2015 were analyzed. Several variants of the DMA model were constructed, because DMA requires certain parameters to be set initially; interestingly, DMA turned out to be robust to different choices of these parameters. The quality of prediction was highest for the model whose drivers were connected solely with stock market behavior, while drivers connected with macroeconomic fundamentals were found to be less important. This observation can serve as an argument favoring the hypothesis of increasing financialization of the oil market, at least in the short term. Predictions from other, slightly different modelling variants based on the DMA methodology were generally consistent with each other, and many of the constructed models outperformed alternative forecasting methods. It was also found that normalization of the input data, although not theoretically necessary for DMA, significantly improves the quality of prediction.
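
    The following is a minimal numpy sketch of the forgetting-factor recursion that underlies Dynamic Model Averaging; the data are synthetic, each candidate model uses a single driver, and all parameter values (forgetting factors, observation variance) are illustrative assumptions rather than the settings used in the article.

```python
# Minimal sketch of Dynamic Model Averaging (DMA) with synthetic data and one lagged
# driver per candidate model; all names and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, n_drivers = 240, 4
X = rng.normal(size=(T, n_drivers))              # candidate oil-price drivers
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=T)

alpha, lam, s2 = 0.99, 0.99, 0.5 ** 2            # model / coefficient forgetting, obs. variance
K = n_drivers
pi = np.full(K, 1.0 / K)                          # model probabilities
theta = np.zeros(K)                               # one coefficient per single-driver model
P = np.ones(K)                                    # coefficient variances
dma_forecast = np.zeros(T)

for t in range(T):
    # prediction step: flatten model probabilities and inflate coefficient variance
    pi = pi ** alpha
    pi /= pi.sum()
    P = P / lam
    yhat_k = theta * X[t]                         # per-model one-step forecasts
    dma_forecast[t] = np.dot(pi, yhat_k)          # DMA forecast = probability-weighted mean
    # update step: Kalman update of each coefficient, Bayesian update of probabilities
    S = X[t] ** 2 * P + s2                        # predictive variances
    like = np.exp(-0.5 * (y[t] - yhat_k) ** 2 / S) / np.sqrt(2 * np.pi * S)
    pi = pi * like
    pi /= pi.sum()
    gain = P * X[t] / S
    theta = theta + gain * (y[t] - yhat_k)
    P = (1.0 - gain * X[t]) * P

print("final model probabilities:", np.round(pi, 3))
```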

  11. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated through the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR approach. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using the simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  12. Rf system modeling for the high average power FEL at CEBAF

    International Nuclear Information System (INIS)

    Merminga, L.; Fugitt, J.; Neil, G.; Simrock, S.

    1995-01-01

    High beam loading and energy recovery compounded by use of superconducting cavities, which requires tight control of microphonic noise, place stringent constraints on the linac rf system design of the proposed high average power FEL at CEBAF. Longitudinal dynamics imposes off-crest operation, which in turn implies a large tuning angle to minimize power requirements. Amplitude and phase stability requirements are consistent with demonstrated performance at CEBAF. A numerical model of the CEBAF rf control system is presented and the response of the system is examined under large parameter variations, microphonic noise, and beam current fluctuations. Studies of the transient behavior lead to a plausible startup and recovery scenario

  13. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    Full Text Available The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants and to construct early warning models to anticipate their occurrence. It is in this vein that our study takes its inspiration. In particular, we have developed a banking crisis warning model based on a Bayesian model averaging approach. The results of this approach allow us to identify the role of declining bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration, and higher real interest rates in triggering banking crises.

  14. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both ... for hierarchical data structures, reflecting increasingly common types of assay data. We illustrate the usefulness of the methodology by means of a cytotoxicology example where the sensitivity of two types of assays is evaluated and compared. By means of a simulation study, we show that the proposed framework ...

  15. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
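
    As a hedged illustration of the model-averaging step described above, the short sketch below combines per-site clustering profiles from several candidate models using Akaike weights; the AIC values and profiles are invented placeholders, and the actual method also supports AICc and BIC weighting.

```python
# Hedged sketch of model averaging with Akaike weights ("weighted model likelihoods").
# The AIC values and per-site clustering profiles below are made-up placeholders.
import numpy as np

aic = np.array([1012.4, 1008.1, 1009.7])             # AIC of three candidate cluster models (assumed)
profiles = np.array([                                 # per-site clustering level under each model
    [0.1, 0.1, 0.8, 0.9, 0.2],
    [0.0, 0.2, 0.7, 0.8, 0.1],
    [0.2, 0.1, 0.9, 0.9, 0.3],
])

delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                              # Akaike weights

averaged_profile = weights @ profiles                 # model-averaged per-site profile
print(np.round(weights, 3), np.round(averaged_profile, 3))
```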

  16. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    Science.gov (United States)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element exceeds the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. Volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
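
    The sketch below illustrates, in one dimension and with made-up numbers, the mass-conserving slope-limiting idea described above (material is moved downslope wherever the bed slope exceeds the internal friction angle); it is not the Telemac-Mascaret implementation, which operates on 2-D elements rotated about an axis.

```python
# Minimal 1-D sketch of a mass-conserving bank-failure relaxation: where the local
# bed slope exceeds the internal friction angle, material is moved downslope until
# the slope equals the angle of repose. Grid spacing, angle and profile are assumed.
import numpy as np

dx = 0.5                                   # cell size (m), assumed
phi = np.deg2rad(32.0)                     # internal friction angle, assumed
max_drop = dx * np.tan(phi)                # maximum stable elevation difference

z = np.array([2.0, 1.9, 1.8, 0.6, 0.5])    # bed elevations across the bank (m)

for _ in range(100):                       # iterate until the whole profile is stable
    drop = z[:-1] - z[1:]
    i = np.argmax(drop)
    if drop[i] <= max_drop + 1e-12:
        break
    excess = 0.5 * (drop[i] - max_drop)    # split the excess between the two cells
    z[i] -= excess                         # mass is conserved: what leaves one cell
    z[i + 1] += excess                     # is added to its downslope neighbour

print(np.round(z, 3), "total volume unchanged:", np.isclose(z.sum(), 6.8))
```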

  17. Assimilation of time-averaged observations in a quasi-geostrophic atmospheric jet model

    Energy Technology Data Exchange (ETDEWEB)

    Huntley, Helga S. [University of Washington, Department of Applied Mathematics, Seattle, WA (United States); University of Delaware, School of Marine Science and Policy, Newark, DE (United States); Hakim, Gregory J. [University of Washington, Department of Atmospheric Sciences, Seattle, WA (United States)

    2010-11-15

    The problem of reconstructing past climates from a sparse network of noisy time-averaged observations is considered with a novel ensemble Kalman filter approach. Results for a sparse network of 100 idealized observations for a quasi-geostrophic model of a jet interacting with a mountain reveal that, for a wide range of observation averaging times, analysis errors are reduced by about 50% relative to the control case without assimilation. Results are robust to changes to observational error, the number of observations, and an imperfect model. Specifically, analysis errors are reduced relative to the control case for observations having errors up to three times the climatological variance for a fixed 100-station network, and for networks consisting of ten or more stations when observational errors are fixed at one-third the climatological variance. In the limit of small numbers of observations, station location becomes critically important, motivating an optimally determined network. A network of fifteen optimally determined observations reduces analysis errors by 30% relative to the control, as compared to 50% for a randomly chosen network of 100 observations. (orig.)

  18. Statistical aspects of autoregressive-moving average models in the assessment of radon mitigation

    International Nuclear Information System (INIS)

    Dunn, J.E.; Henschel, D.B.

    1989-01-01

    Radon values, as reflected by hourly scintillation counts, seem dominated by major, pseudo-periodic, random fluctuations. This methodological paper reports a moderate degree of success in modeling these data using relatively simple autoregressive-moving average models to assess the effectiveness of radon mitigation techniques in existing housing. While accounting for the natural correlation of successive observations, familiar summary statistics such as steady state estimates, standard errors, confidence limits, and tests of hypothesis are produced. The Box-Jenkins approach is used throughout. In particular, intervention analysis provides an objective means of assessing the effectiveness of an active mitigation measure, such as a fan off/on cycle. Occasionally, failure to declare a significant intervention has suggested a means of remedial action in the data collection procedure
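
    A hedged sketch of the kind of Box-Jenkins intervention analysis described above is given below, using statsmodels; the radon series is simulated, and the ARMA orders, intervention time and effect size are illustrative assumptions, not values from the paper.

```python
# Sketch of an intervention analysis: an ARMA(1,1) model of hourly radon counts with a
# step dummy marking when a mitigation fan is switched on. The series is simulated.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n = 500
fan_on = np.zeros(n)
fan_on[250:] = 1.0                                    # mitigation switched on at hour 250

# simulate AR(1) noise around a mean level that drops by 30 counts when the fan is on
e = rng.normal(scale=5.0, size=n)
y = np.empty(n)
y[0] = 100.0
for t in range(1, n):
    y[t] = 100.0 - 30.0 * fan_on[t] + 0.7 * (y[t - 1] - 100.0 + 30.0 * fan_on[t - 1]) + e[t]

res = ARIMA(y, exog=fan_on, order=(1, 0, 1)).fit()
print(res.summary().tables[1])                        # exog coefficient = estimated intervention effect
```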

  19. Forecasting Construction Tender Price Index in Ghana using Autoregressive Integrated Moving Average with Exogenous Variables Model

    Directory of Open Access Journals (Sweden)

    Ernest Kissi

    2018-03-01

    Full Text Available Prices of construction resources keep fluctuating due to the unstable economic situations experienced over the years. Clients' knowledge of their financial commitment toward their intended project remains the basis for their final decision. The use of a construction tender price index provides a realistic estimate at the early stage of the project. The tender price index (TPI) is influenced by various economic factors; hence, several statistical techniques have been employed in forecasting it, including regression, time series and vector error correction models, among others. However, in recent times the integrated modelling approach has been gaining popularity due to its strong predictive accuracy. Thus, in line with this assumption, the aim of this study is to apply the autoregressive integrated moving average with exogenous variables (ARIMAX) approach in modelling TPI. The results showed that the ARIMAX model has better predictive ability than a single-method approach. The study further confirms previous research on the need to use integrated modelling techniques in forecasting TPI. This model will assist practitioners to forecast future values of the tender price index. Although the study focuses on the Ghanaian economy, the findings can be broadly applicable to other developing countries which share similar economic characteristics.

  20. Forecast of sea surface temperature off the Peruvian coast using an autoregressive integrated moving average model

    Directory of Open Access Journals (Sweden)

    Carlos Quispe

    2013-04-01

    Full Text Available El Niño connects climate, ecosystems, and socio-economic activities globally. Attempts to predict this event have been made since 1980, but statistical and dynamical models remain insufficient. The objective of the present work was therefore to explore, using an autoregressive moving average model, the effect of El Niño on the sea surface temperature (SST) off the Peruvian coast. The work involved five stages: identification, estimation, diagnostic checking, forecasting and validation. Simple and partial autocorrelation functions (ACF and PACF) were used to identify and reformulate the orders of the model parameters, and the Akaike information criterion (AIC) and Schwarz criterion (SC) were used to select the best models during diagnostic checking. Among the main results, ARIMA(12,0,11) models were proposed, which simulated monthly conditions in agreement with those observed off the Peruvian coast: cold conditions at the end of 2004 and neutral conditions at the beginning of 2005.

  1. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • A finite Fourier-series model is introduced for evaluating the monthly movement of annual average solar insolation. • A forecast method is presented for predicting this movement based on the Fourier-series model extended in the least-squares sense. • The movement is shown to be well described by a low number of harmonics, approximately a 6-term Fourier series. • The movement is predicted best with fewer than 6 Fourier terms. - Abstract: Solar insolation is one of the most important measured parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation and is extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be applied directly for prediction. Incorporated with the least-squares method, the introduced Fourier-series model is extended to predict the movement. The extended Fourier-series forecasting model obtains its optimum Fourier coefficients in the least-squares sense based on previous monthly movements. The proposed method is applied to experiments and yields satisfying results for the different cities (states). It is indicated that the monthly movement of annual average solar insolation is well described by a low number of harmonics, approximately a 6-term Fourier series, and that the extended Fourier forecasting model predicts the movement best with fewer than 6 Fourier terms.
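
    The following sketch illustrates the least-squares finite Fourier-series idea on synthetic monthly data; the 6-harmonic truncation follows the abstract's observation, while the series itself and the noise level are assumptions.

```python
# Minimal sketch: fit K harmonics of a 12-month cycle to a monthly insolation series by
# least squares and extrapolate one year ahead. Data are synthetic; K=6 follows the text.
import numpy as np

K, period = 6, 12.0
t = np.arange(120)                                     # ten years of monthly values
insolation = 5.0 + 1.5 * np.cos(2 * np.pi * t / period - 0.6) \
    + np.random.default_rng(2).normal(0, 0.1, t.size)

def design(t):
    cols = [np.ones_like(t, dtype=float)]
    for k in range(1, K + 1):
        cols += [np.cos(2 * np.pi * k * t / period), np.sin(2 * np.pi * k * t / period)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(t), insolation, rcond=None)   # optimum Fourier coefficients
t_future = np.arange(120, 132)
forecast = design(t_future) @ coef                               # next 12 months
print(np.round(forecast, 2))
```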

  2. Average stopping powers for electron and photon sources for radiobiological modeling and microdosimetric applications

    Science.gov (United States)

    Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe

    2018-03-01

    This study concerns the calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the radiation-sensitive volume (SV) within the cell nucleus, at the moment they enter the SV. For this spectrum we derive a formula that combines linearly the fluence spectrum and the source spectrum. The latter is the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or source spectra, but not both, thereby neglecting a part of the complete spectrum. Our derived formula reduces to these two prior methods in the cases of high and low energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high energy sources, such as 60Co, electrons with energies below 1 keV contribute about 30% to the dose. In this study all the spectra were calculated with the Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-average stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to use our formulas for arbitrary electron and photon sources producing electrons with initial energies up to ∼1 MeV.
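
    A generic numerical illustration of frequency- and dose-averaging a stopping power over an electron spectrum is sketched below; the toy spectrum and stopping-power function are assumptions, and the paper's actual formulas (which combine the fluence spectrum with the source spectrum) and the Geant4-DNA data are not reproduced here.

```python
# Generic sketch of frequency- and dose-averaged stopping powers over an electron
# spectrum. phi(E) and S(E) below are toy functions, not the study's Geant4-DNA data.
import numpy as np

E = np.logspace(np.log10(11e-6), 0.0, 2000)         # energy grid, 11 eV to 1 MeV (in MeV)
phi = E ** -0.8 * np.exp(-E / 0.3)                   # toy electron fluence spectrum
S = 0.02 * E ** -0.75                                # toy electronic stopping power (MeV cm^2/g)

freq_avg = np.trapz(phi * S, E) / np.trapz(phi, E)           # frequency (fluence) average
dose_avg = np.trapz(phi * S ** 2, E) / np.trapz(phi * S, E)  # dose average (weight ~ phi*S)
print(f"frequency-average: {freq_avg:.4f}, dose-average: {dose_avg:.4f}")
```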

  3. SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging

    International Nuclear Information System (INIS)

    Diamant, A; Ybarra, N; Seuntjens, J; El Naqa, I

    2016-01-01

    Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual’s biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient’s characteristics and generate a robust model capable of predicting said patient’s treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes’ rule. Results: The optimal Bayesian model generated in this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible

  4. SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Diamant, A; Ybarra, N; Seuntjens, J [McGill University, Montreal, Quebec (Canada); El Naqa, I [University of Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual’s biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient’s characteristics and generate a robust model capable of predicting said patient’s treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes’ rule. Results: The optimal Bayesian model generated in this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and the prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in the literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible

  5. Reynolds-Averaged Turbulence Model Assessment for a Highly Back-Pressured Isolator Flowfield

    Science.gov (United States)

    Baurle, Robert A.; Middleton, Troy F.; Wilson, L. G.

    2012-01-01

    The use of computational fluid dynamics in scramjet engine component development is widespread in the existing literature. Unfortunately, the quantification of model-form uncertainties is rarely addressed with anything other than sensitivity studies, requiring that the computational results be intimately tied to and calibrated against existing test data. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Due to ground test facility limitations, this expanded role is believed to be a requirement by some in the test and evaluation community if scramjet engines are to be given serious consideration as a viable propulsion device. An effort has been initiated at the NASA Langley Research Center to validate several turbulence closure models used for Reynolds-averaged simulations of scramjet isolator flows. The turbulence models considered were the Menter BSL, Menter SST, Wilcox 1998, Wilcox 2006, and the Gatski-Speziale explicit algebraic Reynolds stress models. The simulations were carried out using the VULCAN computational fluid dynamics package developed at the NASA Langley Research Center. A procedure to quantify the numerical errors was developed to account for discretization errors in the validation process. This procedure utilized the grid convergence index defined by Roache as a bounding estimate for the numerical error. The validation data were collected from a mechanically back-pressured constant-area (1 × 2 inch) isolator model with an isolator entrance Mach number of 2.5. As expected, the model-form uncertainty was substantial for the shock-dominated, massively separated flowfield within the isolator, as evidenced by a six-duct-height variation in shock train length depending on the turbulence model employed. Generally speaking, the turbulence models that did not include an explicit stress limiter more closely
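
    As a hedged illustration of the numerical-error bounding step mentioned above, the snippet below evaluates Roache's grid convergence index from three grid levels; the solution values, refinement ratio and safety factor are placeholders, not data from the isolator study.

```python
# Hedged sketch of Roache's grid convergence index (GCI) used to bound discretization
# error; the three "shock-train length" values below are made up for illustration.
import numpy as np

f1, f2, f3 = 10.2, 10.9, 12.1       # fine, medium, coarse grid solutions (arbitrary units)
r = 2.0                             # grid refinement ratio
Fs = 1.25                           # safety factor often recommended for three-grid studies

p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)           # observed order of accuracy
gci_fine = Fs * abs((f2 - f1) / f1) / (r ** p - 1.0)     # relative error band on the fine grid
print(f"observed order p = {p:.2f}, GCI(fine) = {100 * gci_fine:.1f}%")
```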

  6. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We found that the ARMA analysis is well correlated with spatial structures of the flow, and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that the Υ is highest in regions where shear layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features where the ARMA technique is efficient in discriminating different stability branches of the system

  7. Large deviations of a long-time average in the Ehrenfest urn model

    Science.gov (United States)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the time-averaged number of balls in one urn, over an observation time T, takes any specified value aN. For long observation times a Donsker–Varadhan large deviation principle holds, with a rate function that depends on a and on additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of the WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for large N. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to the probability.

  8. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    Energy Technology Data Exchange (ETDEWEB)

    Soltanzadeh, I. [Tehran Univ. (Iran, Islamic Republic of). Inst. of Geophysics; Azadi, M.; Vakili, G.A. [Atmospheric Science and Meteorological Research Center (ASMERC), Teheran (Iran, Islamic Republic of)

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) and for HRM the initial and boundary conditions come from analysis of Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using BMA technique for 120 days using a 40 days training sample of forecasts and relative verification data. The calibrated probabilistic forecasts were assessed using rank histogram and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast it was found that the deterministic-style BMA forecasts performed usually better than the best member's deterministic forecast. (orig.)

  9. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    Directory of Open Access Journals (Sweden)

    I. Soltanzadeh

    2011-07-01

    Full Text Available Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days using a 40 days training sample of forecasts and relative verification data. The calibrated probabilistic forecasts were assessed using rank histogram and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts performed usually better than the best member's deterministic forecast.
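
    A minimal sketch of how a BMA-calibrated forecast is formed from an ensemble is shown below; the member forecasts, weights, bias-correction coefficients and spread are assumed values, whereas in the study they are estimated from the 40-day training window.

```python
# Minimal sketch of a BMA-calibrated temperature forecast: the predictive density is a
# weighted mixture of Gaussians centred on bias-corrected ensemble members. Weights,
# bias coefficients and sigma are assumptions; in practice they come from EM training.
import numpy as np
from scipy.stats import norm

members = np.array([14.2, 15.1, 13.8, 14.9, 15.4, 14.0, 14.6])   # raw 2-m T forecasts (degC)
a, b = 0.3, 0.98                                                  # assumed bias correction T' = a + b*T
w = np.array([0.22, 0.18, 0.10, 0.15, 0.12, 0.08, 0.15])          # assumed BMA weights (sum to 1)
sigma = 1.6                                                       # assumed member spread (degC)

corrected = a + b * members
mean_forecast = np.dot(w, corrected)                              # deterministic-style BMA forecast

def bma_cdf(t):
    """Probability that the verifying temperature falls below t under the BMA mixture."""
    return float(np.dot(w, norm.cdf(t, loc=corrected, scale=sigma)))

print(f"BMA mean: {mean_forecast:.2f} degC, P(T < 13 degC) = {bma_cdf(13.0):.3f}")
```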

  10. Time series forecasting using Radial Basis Function (RBF) and Autoregressive Integrated Moving Average (ARIMA) models

    Directory of Open Access Journals (Sweden)

    DT Wiyanti

    2013-07-01

    Full Text Available One of the most actively developed forecasting methods today is time series forecasting, a quantitative approach in which past data serve as the reference for predicting the future. Various studies have proposed methods for solving time series problems, including statistics, neural networks, wavelets, and fuzzy systems. These methods have different strengths and weaknesses. However, real-world problems are complex, and a single method may not be able to handle them well. This article discusses the combination of two methods, the Autoregressive Integrated Moving Average (ARIMA) and the Radial Basis Function (RBF). The rationale for combining the two methods is the assumption that a single method cannot completely identify all characteristics of a time series. Forecasts are made for the Indonesian Wholesale Price Index (IHPB) and Indonesian commodity inflation data; both datasets span 2006 to the first months of 2012 and each contains six variables. The forecasts of the ARIMA-RBF method are compared with those of the ARIMA and RBF methods individually. The analysis shows that the combined ARIMA-RBF model yields more accurate results than either method alone, as seen in the visual plots, MAPE, and RMSE of all variables in the two test datasets.
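
    A hedged sketch of the ARIMA-RBF hybrid idea follows: an ARIMA model captures the linear structure and a small radial-basis-function regression is fitted to its residuals using lagged residuals as inputs. The data, model orders, number of RBF centres and bandwidth are illustrative assumptions and do not reproduce the article's IHPB or inflation experiments.

```python
# Hedged sketch of an ARIMA + RBF hybrid: ARIMA models the linear part, and an RBF
# regression on lagged residuals models the remainder. All settings are illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
t = np.arange(80)
y = 100 + 0.5 * t + 5 * np.sin(t / 3.0) + rng.normal(0, 1, t.size)   # toy monthly index

arima_res = ARIMA(y, order=(1, 1, 1)).fit()
resid = arima_res.resid

# RBF regression on lagged residuals: inputs (r[t-1], r[t-2]) -> target r[t]
p = 2
X = np.column_stack([resid[p - 1:-1], resid[p - 2:-2]])
z = resid[p:]
centers = X[rng.choice(len(X), size=10, replace=False)]
width = np.median(np.abs(X)) + 1e-9

def rbf_features(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = rbf_features(X)
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ z)   # ridge fit

hybrid_next = arima_res.forecast(1)[0] + rbf_features(resid[-p:][::-1].reshape(1, -1)) @ w
print("hybrid one-step forecast:", hybrid_next.item())
```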

  11. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  12. An axially averaged-radial transport model of tokamak edge plasmas

    International Nuclear Information System (INIS)

    Prinja, A.K.; Conn, R.W.

    1984-01-01

    A two-zone axially averaged-radial transport model for edge plasmas is described that incorporates parallel electron and ion conduction, localized recycling, parallel electron pressure gradient effects and sheath losses. Results for high recycling show that the radial electron temperature profile is determined by parallel electron conduction over short radial distances (~3 cm). At larger radius, where T_e has fallen appreciably, convective transport becomes equally important. The downstream density and ion temperature profiles are very flat over the region where electron conduction dominates. This is seen to result from a sharply decaying velocity profile that follows the radial electron temperature. A one-dimensional analytical recycling model shows that at high neutral pumping rates, the plasma density at the plate, n_ia, scales linearly with the unperturbed background density, n_io. When ionization dominates, n_ia/n_io ∝ exp(n_io), while in the intermediate regime n_ia/n_io grows exponentially with a quantity proportional to n_io. Such behavior is qualitatively in accord with experimental observations. (orig.)

  13. Two-Dimensional Depth-Averaged Beach Evolution Modeling: Case Study of the Kizilirmak River Mouth, Turkey

    DEFF Research Database (Denmark)

    Baykal, Cüneyt; Ergin, Ayşen; Güler, Işikhan

    2014-01-01

    This study presents an application of a two-dimensional beach evolution model to a shoreline change problem at the Kizilirmak River mouth, which has been facing severe coastal erosion problems for more than 20 years. The shoreline changes at the Kizilirmak River mouth have thus far been investigated by satellite images, physical model tests, and one-dimensional numerical models. The current study uses a two-dimensional depth-averaged numerical beach evolution model, developed based on existing methodologies. This model is mainly composed of four submodels: a phase-averaged spectral wave transformation model, a two-dimensional depth-averaged numerical wave-induced circulation model, a sediment transport model, and a bottom evolution model. To validate and verify the numerical model, it is applied to several cases of laboratory experiments. Later, the model is applied to the shoreline change problem at the Kizilirmak River mouth.

  14. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    Science.gov (United States)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is condition of air in a certain region at a relatively short period of time, measured with various parameters such as; temperature, air preasure, wind velocity, humidity and another phenomenons in the atmosphere. In fact, extreme weather due to global warming would lead to drought, flood, hurricane and other forms of weather occasion, which directly affects social andeconomic activities. Hence, a forecasting technique is to predict weather with distinctive output, particullary mapping process based on GIS with information about current weather status in certain cordinates of each region with capability to forecast for seven days afterward. Data used in this research are retrieved in real time from the server openweathermap and BMKG. In order to obtain a low error rate and high accuracy of forecasting, the authors use Bayesian Model Averaging (BMA) method. The result shows that the BMA method has good accuracy. Forecasting error value is calculated by mean square error shows (MSE). The error value emerges at minumum temperature rated at 0.28 and maximum temperature rated at 0.15. Meanwhile, the error value of minimum humidity rates at 0.38 and the error value of maximum humidity rates at 0.04. Afterall, the forecasting error rate of wind speed is at 0.076. The lower the forecasting error rate, the more optimized the accuracy is.

  15. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    Science.gov (United States)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provide little additional benefit.

  16. Extending a Consensus-based Fuzzy Ordered Weighting Average (FOWA) Model in New Water Quality Indices

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Baghapour

    2017-07-01

    Full Text Available In developing a specific WQI (Water Quality Index), many water quality parameters are involved, with different levels of importance. The impact of experts’ differing opinions and viewpoints, the risks affecting those opinions, and the plurality of the parameters involved double the significance of the issue. Hence, the current study applies a consensus-based FOWA (Fuzzy Ordered Weighting Average) model, one of the most powerful and well-known Multi-Criteria Decision-Making (MCDM) techniques, to determine the importance of the parameters used in the development of such WQIs, which is illustrated with an example. This operator makes it possible to model the risks in decision-making by applying the optimism degree of stakeholders and their power, coupled with the use of fuzzy numbers. In total, 22 water quality parameters for drinking purposes are considered in this study. To determine the weight of each parameter, the viewpoints of 4 decision-making groups of experts are taken into account. After determining the final weights, to validate the use of each parameter in a potential WQI, consensus degrees of both the decision makers and the parameters are calculated. All calculations are carried out using the expert software called Group Fuzzy Decision Making (GFDM). The highest and lowest weight values, 0.999 and 0.073 respectively, are related to Hg and temperature. Given that the intended use is drinking, the parameters’ weights and ranks are consistent with their health impacts. Moreover, the decision makers’ highest and lowest consensus degrees were 0.9905 and 0.9669, respectively. Among the water quality parameters, temperature (with a consensus degree of 0.9972) and Pb (with a consensus degree of 0.9665) received the highest and lowest agreement from the decision-making group. This study indicates that the weight of parameters in determining water quality largely depends on the experts’ opinions and approaches.

  17. Extending a Consensus-based Fuzzy Ordered Weighting Average (FOWA) Model in New Water Quality Indices

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Baghapour

    2017-07-01

    Full Text Available In developing a specific WQI (Water Quality Index), many quality parameters are involved, with different levels of importance. The impact of experts’ differing opinions and viewpoints, the risks affecting those opinions, and the plurality of the parameters involved double the significance of the issue. Hence, the current study applies a consensus-based FOWA (Fuzzy Ordered Weighting Average) model, one of the most powerful and well-known Multi-Criteria Decision-Making (MCDM) techniques, to determine the importance of the parameters used in the development of such WQIs, which is illustrated with an example. This operator makes it possible to model the risks in decision-making by applying the optimism degree of stakeholders and their power, coupled with the use of fuzzy numbers. In total, 22 water quality parameters for drinking purposes were considered in this study. To determine the weight of each parameter, the viewpoints of 4 decision-making groups of experts were taken into account. After determining the final weights, to validate the use of each parameter in a potential WQI, consensus degrees of both the decision makers and the parameters were calculated. The highest and lowest weight values, 0.999 and 0.073 respectively, were related to Hg and temperature. Given that the intended use was drinking, the parameters’ weights and ranks were consistent with their health impacts. Moreover, the decision makers’ highest and lowest consensus degrees were 0.9905 and 0.9669, respectively. Among the water quality parameters, temperature (with a consensus degree of 0.9972) and Pb (with a consensus degree of 0.9665) received the highest and lowest agreement from the decision-making group. This study indicated that the weight of parameters in determining water quality largely depends on the experts’ opinions and approaches. Moreover, using the FOWA model provides accurate and closer-to-reality results on the significance of the parameters.
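
    As a simplified, hedged illustration of ordered weighted averaging, the snippet below aggregates crisp expert scores with weights generated from a RIM quantifier so that the optimism (orness) of the aggregation can be tuned; the full FOWA model additionally works with fuzzy numbers and expert power, which are omitted here, and all numbers are invented.

```python
# Minimal OWA aggregation of expert importance scores for one water-quality parameter.
# Weights come from a RIM quantifier Q(r) = r**a; scores and the exponent are assumed.
import numpy as np

scores = np.array([0.9, 0.7, 0.8, 0.6])         # importance given by 4 expert groups
a = 0.5                                          # a < 1 -> optimistic, a > 1 -> pessimistic

n = len(scores)
r = np.arange(n + 1) / n
w = r[1:] ** a - r[:-1] ** a                     # OWA weights from the quantifier
owa = np.dot(w, np.sort(scores)[::-1])           # weights applied to the *ordered* scores

orness = np.dot((n - np.arange(1, n + 1)) / (n - 1), w)
print(f"OWA weights: {np.round(w, 3)}, aggregated weight: {owa:.3f}, orness: {orness:.3f}")
```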

  18. Benefits of dominance over additive models for the estimation of average effects in the presence of dominance

    NARCIS (Netherlands)

    Duenk, Pascal; Calus, Mario P.L.; Wientjes, Yvonne C.J.; Bijma, Piter

    2017-01-01

    In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and identically distributed.

  19. Combining multi-objective optimization and bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Wohling, Thomas [NON LANL

    2008-01-01

    Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM, and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increase with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.

  20. Performance of Reynolds Averaged Navier-Stokes Models in Predicting Separated Flows: Study of the Hump Flow Model Problem

    Science.gov (United States)

    Cappelli, Daniele; Mansour, Nagi N.

    2012-01-01

    Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.

  1. Implementation, Comparison and Application of an Average Simulation Model of a Wind Turbine Driven Doubly Fed Induction Generator

    Directory of Open Access Journals (Sweden)

    Lidula N. Widanagama Arachchige

    2017-10-01

    Full Text Available Wind turbine driven doubly-fed induction generators (DFIGs) are widely used in the wind power industry. With the increasing penetration of wind farms, analysis of their effect on power systems has become a critical requirement. This paper presents the modeling of wind turbine driven DFIGs using conventional vector controls in a detailed model of a DFIG that represents the power electronic (PE) converters with device-level models, and proposes an average model that eliminates the PE converters. The PSCAD/EMTDC™ (4.6) electromagnetic transient simulation software is used to develop the detailed and the proposed average models of a DFIG. The comparison of the two models reveals that the designed average DFIG model is adequate for simulating and analyzing most transient conditions.

  2. Model of averaged turbulent flow around cylindrical column for simulation of the saltation

    Czech Academy of Sciences Publication Activity Database

    Kharlamova, Irina; Kharlamov, Alexander; Vlasák, Pavel

    2014-01-01

    Roč. 21, č. 2 (2014), s. 103-110 ISSN 1802-1484 R&D Projects: GA ČR GA103/09/1718 Institutional research plan: CEZ:AV0Z20600510 Institutional support: RVO:67985874 Keywords : sediment transport * flow around cylinder * logarithmic profile * dipole line * averaged turbulent flow Subject RIV: BK - Fluid Dynamics

  3. Forecast Accuracy and Economic Gains from Bayesian Model Averaging using Time Varying Weights

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); R.H. Kleijn (Richard); H.K. van Dijk (Herman); M.J.C.M. Verbeek (Marno)

    2009-01-01

    Several Bayesian model combination schemes, including some novel approaches that simultaneously allow for parameter uncertainty, model uncertainty and robust time varying model weights, are compared in terms of forecast accuracy and economic gains using financial and macroeconomic time

  4. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.

    Science.gov (United States)

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or they assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte-Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundreds. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models.

  5. Landscape-scale soil moisture heterogeneity and its influence on surface fluxes at the Jornada LTER site: Evaluating a new model parameterization for subgrid-scale soil moisture variability

    Science.gov (United States)

    Baker, I. T.; Prihodko, L.; Vivoni, E. R.; Denning, A. S.

    2017-12-01

    Arid and semiarid regions represent a large fraction of global land, with attendant importance of surface energy and trace gas flux to global totals. These regions are characterized by strong seasonality, especially in precipitation, that defines the level of ecosystem stress. Individual plants have been observed to respond non-linearly to increasing soil moisture stress, where plant function is generally maintained as soils dry down to a threshold at which rapid closure of stomates occurs. Incorporating this nonlinear mechanism into landscape-scale models can result in unrealistic binary "on-off" behavior that is especially problematic in arid landscapes. Subsequently, models have `relaxed' their simulation of soil moisture stress on evapotranspiration (ET). Unfortunately, these relaxations are not physically based, but are imposed upon model physics as a means to force a more realistic response. Previously, we have introduced a new method to represent soil moisture regulation of ET, whereby the landscape is partitioned into `BINS' of soil moisture wetness, each associated with a fractional area of the landscape or grid cell. A physically- and observationally-based nonlinear soil moisture stress function is applied, but when convolved with the relative area distribution represented by wetness BINS the system has the emergent property of `smoothing' the landscape-scale response without the need for non-physical impositions on model physics. In this research we confront BINS simulations of Bowen ratio, soil moisture variability and trace gas flux with soil moisture and eddy covariance observations taken at the Jornada LTER dryland site in southern New Mexico. We calculate the mean annual wetting cycle and associated variability about the mean state and evaluate model performance against this variability and time series of land surface fluxes from the highly instrumented Tromble Weir watershed. The BINS simulations capture the relatively rapid reaction to wetting
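
    The sketch below illustrates the emergent smoothing described above: a nearly on-off, plant-level soil-moisture stress function is evaluated per wetness bin and convolved with the areal fraction of each bin. The stress function, bin means and area distribution are assumptions for illustration, not the parameterization used in the study.

```python
# Illustrative sketch of the BINS idea: a sharp plant-level stress function applied per
# wetness bin and weighted by areal fraction yields a smooth grid-scale response.
import numpy as np

def plant_stress(theta, theta_crit=0.15, steepness=60.0):
    """Nearly on/off stomatal response: ~1 when wet, dropping sharply below theta_crit."""
    return 1.0 / (1.0 + np.exp(-steepness * (theta - theta_crit)))

bin_theta = np.linspace(0.05, 0.35, 10)             # mean soil moisture of each wetness bin
area_frac = np.exp(-0.5 * ((bin_theta - 0.18) / 0.06) ** 2)
area_frac /= area_frac.sum()                         # fraction of the grid cell in each bin

grid_scale_beta = np.dot(area_frac, plant_stress(bin_theta))   # emergent, smooth stress factor
point_beta = plant_stress(bin_theta.mean())                    # what a single-point model would give
print(f"grid-scale beta = {grid_scale_beta:.3f} vs single-point beta = {point_beta:.3f}")
```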

  6. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution and the spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    Science.gov (United States)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is “volume-averaging”, which refers to the fact that a lidar does not sample a single, distinct point but averages along its entire beam length. Especially in regions with large velocity gradients, like the rotor wake, this effect can be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Even with very few points discretising the lidar beam, volume-averaging is captured accurately. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
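    A minimal sketch of lidar volume-averaging, assuming a Gaussian range-weighting function and a synthetic wake-like velocity profile (real pulsed and continuous-wave lidars have different, instrument-specific weighting functions; widths and values below are illustrative):

```python
import numpy as np

def lidar_sample(u_of_x, x0, half_width=30.0, n_pts=15):
    """Volume-averaged velocity at nominal range x0 (metres along the beam)."""
    # Line-of-sight positions around the nominal range gate and their weights.
    s = np.linspace(-2 * half_width, 2 * half_width, n_pts)
    w = np.exp(-0.5 * (s / half_width) ** 2)
    w /= w.sum()
    return np.sum(w * u_of_x(x0 + s))

# Synthetic wake-like velocity profile with a sharp gradient at the wake edge.
u = lambda x: 8.0 - 3.0 * np.exp(-((x - 100.0) / 20.0) ** 2)

print("point value    :", u(120.0))
print("volume-averaged:", lidar_sample(u, 120.0))
```

    The volume-averaged value differs most from the point value exactly where the gradient is steep, which is why the effect is largest at the wake edges.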

  8. Bayesian Averaging over Many Dynamic Model Structures with Evidence on the Great Ratios and Liquidity Trap Risk

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2008-01-01

    A Bayesian model averaging procedure is presented that makes use of a finite mixture of many model structures within the class of vector autoregressive (VAR) processes. It is applied to two empirical issues. First, stability of the Great Ratios in U.S. macro-economic time series is

  9. Wind-Farm Forecasting Using the HARMONIE Weather Forecast Model and Bayes Model Averaging for Bias Removal.

    Science.gov (United States)

    O'Brien, Enda; McKinstry, Alastair; Ralph, Adam

    2015-04-01

    Building on previous work presented at EGU 2013 (http://www.sciencedirect.com/science/article/pii/S1876610213016068 ), more results are available now from a different wind-farm in complex terrain in southwest Ireland. The basic approach is to interpolate wind-speed forecasts from an operational weather forecast model (i.e., HARMONIE in the case of Ireland) to the precise location of each wind-turbine, and then use Bayes Model Averaging (BMA; with statistical information collected from a prior training-period of e.g., 25 days) to remove systematic biases. Bias-corrected wind-speed forecasts (and associated power-generation forecasts) are then provided twice daily (at 5am and 5pm) out to 30 hours, with each forecast validation fed back to BMA for future learning. 30-hr forecasts from the operational Met Éireann HARMONIE model at 2.5km resolution have been validated against turbine SCADA observations since Jan. 2014. An extra high-resolution (0.5km grid-spacing) HARMONIE configuration has been run since Nov. 2014 as an extra member of the forecast "ensemble". A new version of HARMONIE with extra filters designed to stabilize high-resolution configurations has been run since Jan. 2015. Measures of forecast skill and forecast errors will be provided, and the contributions made by the various physical and computational enhancements to HARMONIE will be quantified.

  10. Benefits of Dominance over Additive Models for the Estimation of Average Effects in the Presence of Dominance

    Directory of Open Access Journals (Sweden)

    Pascal Duenk

    2017-10-01

    Full Text Available In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and identically distributed. Our objective was to investigate the accuracy of an estimated average effect (α^) in the presence of dominance, using either a single locus A-model or AD-model. Estimation was based on a finite sample from a large population in Hardy-Weinberg equilibrium (HWE), and the root mean squared error of α^ was calculated for several broad-sense heritabilities, sample sizes, and sizes of the dominance effect. Results show that with the A-model, both sampling deviations of genotype frequencies from HWE frequencies and sampling deviations of allele frequencies contributed to the error. With the AD-model, only sampling deviations of allele frequencies contributed to the error, provided that all three genotype classes were sampled. In the presence of dominance, the root mean squared error of α^ with the AD-model was always smaller than with the A-model, even when the heritability was less than one. Remarkably, in the absence of dominance, there was no disadvantage of fitting dominance. In conclusion, the AD-model yields more accurate estimates of average effects from a finite sample, because it is more robust against sampling deviations from HWE frequencies than the A-model. Genetic models that include dominance, therefore, yield higher accuracies of estimated average effects than purely additive models when dominance is present.
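    A minimal sketch of the A-model versus AD-model comparison, assuming the standard quantitative-genetics coding (genotypic values −a, d, +a) and the textbook relation α = a + d(q − p) for the average effect; all numerical values are illustrative, not those of the paper:

```python
import numpy as np

# Synthetic single-locus sample.
rng = np.random.default_rng(1)
p, a, d, n = 0.3, 1.0, 0.6, 200           # allele frequency, additive and dominance effects
geno = rng.binomial(2, p, n)               # genotype = allele count (0, 1, 2)
g_val = a * (geno - 1) + d * (geno == 1)   # genotypic values with dominance
y = g_val + rng.normal(0, 1.0, n)          # phenotypes

# A-model: regress phenotype on allele count only; the slope is the
# estimated average effect.
X_a = np.column_stack([np.ones(n), geno])
alpha_a = np.linalg.lstsq(X_a, y, rcond=None)[0][1]

# AD-model: add a heterozygote (dominance) indicator, then combine the
# estimated a and d with the sample allele frequency: alpha = a + d(q - p).
X_ad = np.column_stack([np.ones(n), geno, (geno == 1).astype(float)])
coefs = np.linalg.lstsq(X_ad, y, rcond=None)[0]
a_hat, d_hat = coefs[1], coefs[2]
p_hat = geno.mean() / 2.0
alpha_ad = a_hat + d_hat * (1.0 - 2.0 * p_hat)

# The true average effect here is a + d(q - p) = 1.24.
print("estimated average effect, A-model :", alpha_a)
print("estimated average effect, AD-model:", alpha_ad)
```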

  11. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    Science.gov (United States)

    Huang, Lei

    2015-01-01

    To solve the problem in which the conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Time-varying estimators are used to obtain the mean and variance of the unknown observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
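    A minimal sketch of estimating ARMA parameters with a Kalman filter whose state vector holds the model coefficients, using a pseudo-linear observation equation. This is a plain filter with fixed, assumed noise statistics; the paper's contribution is the robust filter that additionally estimates the unknown, time-varying observation-noise mean and variance. All parameters here are illustrative.

```python
import numpy as np

# Simulate an ARMA(1,1) "gyro noise" series.
rng = np.random.default_rng(2)
a_true, c_true, n = 0.8, 0.3, 2000
e = rng.normal(0, 0.1, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true * y[t - 1] + e[t] + c_true * e[t - 1]

# Kalman filter with the ARMA parameters as a random-walk state and the
# pseudo-linear observation equation y_t = [y_{t-1}, e_{t-1}] @ theta + e_t.
theta = np.zeros(2)            # [a1, c1]
P = np.eye(2)                  # state covariance
Q = 1e-6 * np.eye(2)           # parameter random-walk covariance
R = 0.01                       # assumed observation-noise variance
e_hat = 0.0                    # previous innovation (proxy for e_{t-1})

for t in range(1, n):
    H = np.array([y[t - 1], e_hat])
    P = P + Q
    S = H @ P @ H + R
    K = P @ H / S
    innov = y[t] - H @ theta
    theta = theta + K * innov
    P = P - np.outer(K, H) @ P
    e_hat = innov

print("estimated [a1, c1]:", theta)   # should approach roughly [0.8, 0.3]
```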

  12. Introduction to turbulence modelling: Applications of Reynolds Averaged Navier Stokes Equations (RANSE) to engineering problems

    International Nuclear Information System (INIS)

    Laurence, D.

    1997-01-01

    The k-ε model and Reynolds stress transport model are set out in a few words. Limitations of models are shown, particularly for turbulence generation in the turbulent viscosity context, and, more generally, the uncertainties and miscellaneous changes made to the dissipation equation. The performances of models are then compared, using results of the three latest ERCOFTAC/IAHR workshops. It is shown that algebraic constraints which can be derived exactly by assuming asymptotic limits (rapid distortion, homogeneous shear at infinite time, 2D turbulence) have inhibited a better tuning of the models for real-life flows where these limits are not encountered. A more pragmatic approach could be taken by allowing the constants to be functions of invariant parameters. But these functions, making the models non-linear, can lead to bifurcations or instability. One essential parameter is the distance to the wall, which recent models have tried to eliminate, although this parameter appears indirectly through the Poisson equation for the fluctuating pressure. A possible indirect model is the elliptic relaxation. Progress was recently achieved in near-wall low Re modelling, but these advances do not always result in benefits to industry since only the 'wall function' approaches can be used in the high Re, 3D flows that we need to study. With the knowledge gained from near-wall modelling, it might be profitable to revisit the 'wall functions' devised 20 years ago. (author)

  13. Consistency, Verification, and Validation of Turbulence Models for Reynolds-Averaged Navier-Stokes Applications

    Science.gov (United States)

    Rumsey, Christopher L.

    2009-01-01

    In current practice, it is often difficult to draw firm conclusions about turbulence model accuracy when performing multi-code CFD studies ostensibly using the same model because of inconsistencies in model formulation or implementation in different codes. This paper describes an effort to improve the consistency, verification, and validation of turbulence models within the aerospace community through a website database of verification and validation cases. Some of the variants of two widely-used turbulence models are described, and two independent computer codes (one structured and one unstructured) are used in conjunction with two specific versions of these models to demonstrate consistency with grid refinement for several representative problems. Naming conventions, implementation consistency, and thorough grid resolution studies are key factors necessary for success.

  14. Comparison of depth-averaged concentration and bed load flux sediment transport models of dam-break flow

    Directory of Open Access Journals (Sweden)

    Jia-heng Zhao

    2017-10-01

    Full Text Available This paper presents numerical simulations of dam-break flow over a movable bed. Two different mathematical models were compared: a fully coupled formulation of shallow water equations with erosion and deposition terms (a depth-averaged concentration flux model), and shallow water equations with a fully coupled Exner equation (a bed load flux model). Both models were discretized using the cell-centered finite volume method, and a second-order Godunov-type scheme was used to solve the equations. The numerical flux was calculated using a Harten, Lax, and van Leer approximate Riemann solver with the contact wave restored (HLLC). A novel slope source term treatment that considers the density change was introduced to the depth-averaged concentration flux model to obtain higher-order accuracy. A source term that accounts for the sediment flux was added to the bed load flux model to reflect the influence of sediment movement on the momentum of the water. In a one-dimensional test case, a sensitivity study on different model parameters was carried out. For the depth-averaged concentration flux model, Manning's coefficient and sediment porosity values showed an almost linear relationship with the bottom change, and for the bed load flux model, the sediment porosity was identified as the most sensitive parameter. The capabilities and limitations of both model concepts are demonstrated in a benchmark experimental test case dealing with dam-break flow over variable bed topography.
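    A minimal sketch of the Exner bed update at the heart of a bed load flux model, assuming a frozen velocity field and a Grass-type bed-load formula q_s = A_g u³ (the paper instead couples the Exner equation fully to the shallow water equations and solves them with the Godunov/HLLC scheme described above; the coefficients below are illustrative):

```python
import numpy as np

nx, dx, dt = 200, 1.0, 0.05
porosity, A_g = 0.4, 0.001
x = np.arange(nx) * dx
z_b = np.full(nx, 0.5)                        # initial flat bed
u = 2.0 + 0.5 * np.tanh((x - 100.0) / 10.0)   # frozen velocity field, for illustration only

for _ in range(200):
    q_s = A_g * u**3                          # Grass-type bed-load flux
    dqdx = np.gradient(q_s, dx)
    z_b -= dt / (1.0 - porosity) * dqdx       # Exner: (1 - p) dz_b/dt + dq_s/dx = 0

print("bed change at mid-domain:", z_b[100] - 0.5)
```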

  15. An Extended Eddy-Diffusivity Mass-Flux Scheme for Unified Representation of Subgrid-Scale Turbulence and Convection

    Science.gov (United States)

    Tan, Zhihong; Kaul, Colleen M.; Pressel, Kyle G.; Cohen, Yair; Schneider, Tapio; Teixeira, João.

    2018-03-01

    Large-scale weather forecasting and climate models are beginning to reach horizontal resolutions of kilometers, at which common assumptions made in existing parameterization schemes of subgrid-scale turbulence and convection—such as that they adjust instantaneously to changes in resolved-scale dynamics—cease to be justifiable. Additionally, the common practice of representing boundary-layer turbulence, shallow convection, and deep convection by discontinuously different parameterizations schemes, each with its own set of parameters, has contributed to the proliferation of adjustable parameters in large-scale models. Here we lay the theoretical foundations for an extended eddy-diffusivity mass-flux (EDMF) scheme that has explicit time-dependence and memory of subgrid-scale variables and is designed to represent all subgrid-scale turbulence and convection, from boundary layer dynamics to deep convection, in a unified manner. Coherent up and downdrafts in the scheme are represented as prognostic plumes that interact with their environment and potentially with each other through entrainment and detrainment. The more isotropic turbulence in their environment is represented through diffusive fluxes, with diffusivities obtained from a turbulence kinetic energy budget that consistently partitions turbulence kinetic energy between plumes and environment. The cross-sectional area of up and downdrafts satisfies a prognostic continuity equation, which allows the plumes to cover variable and arbitrarily large fractions of a large-scale grid box and to have life cycles governed by their own internal dynamics. Relatively simple preliminary proposals for closure parameters are presented and are shown to lead to a successful simulation of shallow convection, including a time-dependent life cycle.
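    The core idea shared by EDMF schemes is the decomposition of the subgrid vertical flux of a variable φ into a diffusive contribution from the quasi-isotropic environment and mass-flux contributions from the coherent plumes. A standard textbook form of this decomposition (the extended scheme adds prognostic plume area fractions, entrainment/detrainment, and a TKE-based diffusivity on top of it) is:

```latex
\overline{w'\phi'} \;\approx\;
  -\,a_0\,K_\phi\,\frac{\partial \bar{\phi}}{\partial z}
  \;+\; \sum_{i \ge 1} a_i\,\bigl(w_i - \bar{w}\bigr)\bigl(\phi_i - \bar{\phi}\bigr),
```

    where a_0 is the environmental area fraction, K_φ the eddy diffusivity, and a_i, w_i, φ_i the area fraction, vertical velocity, and mean value of φ in the i-th plume.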

  16. Dynamic average modeling of a bidirectional solid state transformer for feasibility studies and real-time implementation

    OpenAIRE

    Martínez Velasco, Juan Antonio; Alepuz Menéndez, Salvador; Gonzalez Molina, Francisco; Martín Arnedo, Jacinto

    2014-01-01

    Detailed switching models of power electronics devices often lead to long computing times, limiting the size of the system to be simulated. This drawback is especially important when the goal is to implement the model in a real-time simulation platform. An alternative is to use dynamic average models (DAM) for analyzing the dynamic behavior of power electronic devices. This paper presents the development of a DAM for a bidirectional solid-state transformer and its implementation in a real-tim...

  17. Control of Stochastic Master Equation Models of Genetic Regulatory Networks by Approximating Their Average Behavior

    Science.gov (United States)

    Umut Caglar, Mehmet; Pal, Ranadip

    2010-10-01

    The central dogma of molecular biology states that ``information cannot be transferred back from protein to either protein or nucleic acid.'' However, this assumption is not exactly correct in most of the cases. There are many feedback loops and interactions between different levels of the system. These types of interactions are hard to analyze due to the lack of data at the cellular level and the probabilistic nature of the interactions. Probabilistic models like Stochastic Master Equation (SME) or deterministic models like differential equations (DE) can be used to analyze these types of interactions. SME models based on the chemical master equation (CME) can provide a detailed representation of the genetic regulatory system, but their use is restricted by the large data requirements and computational costs of calculations. The differential equation models, on the other hand, have low calculation costs and are much better suited to generating control procedures for the system, but they are not adequate for investigating the probabilistic nature of the interactions. In this work the success of the mapping between SME and DE is analyzed, and the success of a control policy generated by the DE model with respect to the SME model is examined. Index Terms--- Stochastic Master Equation models, Differential Equation Models, Control Policy Design, Systems biology

  18. Zonally averaged model of dynamics, chemistry and radiation for the atmosphere

    Science.gov (United States)

    Tung, K. K.

    1985-01-01

    A nongeostrophic theory of zonally averaged circulation is formulated using the nonlinear primitive equations on a sphere, taking advantage of the more direct relationship between the mean meridional circulation and diabatic heating rate which is available in isentropic coordinates. Possible differences between results of nongeostrophic theory and the commonly used geostrophic formulation are discussed concerning: (1) the role of eddy forcing of the diabatic circulation, and (2) the nonlinear nearly inviscid limit vs the geostrophic limit. Problems associated with the traditional Rossby number scaling in quasi-geostrophic formulations are pointed out and an alternate, more general scaling based on the smallness of mean meridional to zonal velocities for a rotating planet is suggested. Such a scaling recovers the geostrophic balanced wind relationship for the mean zonal flow but reveals that the mean meridional velocity is in general ageostrophic.

  19. Spatial models for probabilistic prediction of wind power with application to annual-average and high temporal resolution data

    DEFF Research Database (Denmark)

    Lenzi, Amanda; Pinson, Pierre; Clemmensen, Line Katrine Harder

    2017-01-01

    average wind power generation, and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled...... with stochastic partial differential approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations. We demonstrate the proposed methods on wind farm data from Western Denmark, and compare the results to those obtained with standard geostatistical methods. The results show...

  20. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Full Text Available Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. The model’s low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized. The original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency and save 20%–40% computation time compared to serial computing. The parallel acceleration rate and acceleration efficiency are approximately 1.45 and 72%, respectively. The parallel computing method makes a contribution to the popularization of non-hydrostatic models.
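    A minimal illustration of the block-decomposition idea, assuming a toy 1-D explicit diffusion update rather than the non-hydrostatic model itself: two threads each advance one subdomain and are synchronised at every step before and after writing across the virtual boundary. The paper uses a producer-consumer pattern with thread locks; a threading.Barrier is used here for brevity.

```python
import threading
import numpy as np

N, steps, alpha = 100, 200, 0.4
u = np.zeros(N + 2)            # interior cells 1..N plus two outer ghost cells
u[N // 2] = 1.0                # initial spike
barrier = threading.Barrier(2)

def worker(lo, hi):
    """Advance interior indices lo..hi-1 of the shared field u."""
    for _ in range(steps):
        barrier.wait()          # both threads read the same old field
        new = u[lo:hi] + alpha * (u[lo - 1:hi - 1] - 2 * u[lo:hi] + u[lo + 1:hi + 1])
        barrier.wait()          # both updates computed before either writes
        u[lo:hi] = new

t1 = threading.Thread(target=worker, args=(1, N // 2 + 1))
t2 = threading.Thread(target=worker, args=(N // 2 + 1, N + 1))
t1.start(); t2.start(); t1.join(); t2.join()
print("total field sum after", steps, "steps:", round(u.sum(), 4))
```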

  1. Diffuse radiation models and monthly-average, daily, diffuse data for a wide latitude range

    International Nuclear Information System (INIS)

    Gopinathan, K.K.; Soler, A.

    1995-01-01

    Several years of measured data on global and diffuse radiation and sunshine duration for 40 widely spread locations in the latitude range 36° S to 60° N are used to develop and test models for estimating monthly-mean, daily, diffuse radiation on horizontal surfaces. Applicability of the clearness-index (K) and sunshine fraction (S/S0) models for diffuse estimation and the effect of combining several variables into a single multilinear equation are tested. Correlations connecting the diffuse to global fraction (Hd/H) with K and S/S0 predict Hd values more accurately than their separate use. Among clearness-index and sunshine-fraction models, S/S0 models are found to have better accuracy if correlations are developed for wide latitude ranges. By including a term for declination in the correlation, the accuracy of the estimated data can be marginally improved. The addition of latitude to the equation does not help to improve the accuracy further. (author)
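    A minimal sketch of fitting such a multilinear correlation, Hd/H = a + b·K + c·(S/S0), by least squares; the data and the resulting coefficients below are synthetic and purely illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
K = rng.uniform(0.3, 0.7, 120)                       # clearness index
SS0 = np.clip(K + rng.normal(0, 0.05, 120), 0, 1)    # sunshine fraction
Hd_H = 1.0 - 1.1 * K - 0.1 * SS0 + rng.normal(0, 0.02, 120)

# Least-squares fit of Hd/H = a + b*K + c*(S/S0).
A = np.column_stack([np.ones_like(K), K, SS0])
coef, *_ = np.linalg.lstsq(A, Hd_H, rcond=None)
print("a, b, c =", coef)
```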

  2. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Science.gov (United States)

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  3. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Directory of Open Access Journals (Sweden)

    Takashi Shinzato

    Full Text Available In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  4. Analysis of subchannel effects and their treatment in average channel PWR core models

    International Nuclear Information System (INIS)

    Cuervo, D.; Ahnert, C.; Aragones, J.M.

    2004-01-01

    Neutronic thermal-hydraulic coupling is currently performed mainly with whole-plant thermal-hydraulic codes using one channel per assembly, or per quarter of assembly in more detailed cases. To extract safety-limit variables, a new calculation has to be performed with thermal-hydraulic subchannel codes in an embedded or off-line manner, which implies an increase in calculation time. Another problem of this separate analysis of the whole core and the hot channel is that the whole-core calculation does not resolve the real problem, because the variable values are modified by the homogenization process carried out to perform the whole-core analysis. This process causes some magnitudes to be over- or under-predicted, so that the problem being solved is not the original one. The purpose of the work being developed is to investigate the effects of the averaging process on the results obtained by the whole-core analysis and to develop corrections that may be included in this analysis to obtain results closer to those of a detailed subchannel analysis. This paper shows the results obtained for a sample case and the conclusions for future work. (author)

  5. Reynolds averaged modelling of low momentum propane jet diffusion flames in cross flow

    Energy Technology Data Exchange (ETDEWEB)

    Majeski, A.J.; Chui, E.H. [Natural Resources Canada, Ottawa, ON (Canada). CANMET Energy Technology Centre; Kostiuk, L.W. [Alberta Univ., Edmonton, AB (Canada). Dept. of Mechanical Engineering

    2003-07-01

    It is common practice to use continuous low flow rate flares to dispose of unwanted or by-product combustible gases resulting from the manufacturing process or oil recovery operations. This study evaluates the usefulness of computational fluid dynamics (CFD) modelling in the context of low momentum flux reacting jets. The experimental data were gathered at the University of Alberta's Combustion Wind Tunnel and used for comparison with the CFD simulations. Only a small subset of the experimental conditions was used for the computational model. No attempt was made to fine tune any of the individual models. They were all part of the commercial CFD software package CFX-TASCflow by ANSYS Inc. Flame length and angle results compared favourably with experiments. The shape of the plume changed significantly in the far field. This could be explained by distortion caused by the turbulence model used. A flame front model was incorporated in an effort to estimate combustion efficiency. The results obtained were not conclusive. 20 refs., 4 figs.

  6. What Type of Finance Matters for Growth? Bayesian Model Averaging Evidence

    Czech Academy of Sciences Publication Activity Database

    Iftekhar, H.; Horváth, Roman; Mareš, J.

    -, - (2018) ISSN 0258-6770 R&D Projects: GA ČR GA16-09190S Institutional support: RVO:67985556 Keywords : long-term economic growth * Bayesian model * uncertainty Subject RIV: AH - Economics Impact factor: 1.431, year: 2016 http://library.utia.cas.cz/separaty/2017/E/horvath-0466516.pdf

  7. A time-averaged regional model of the Hermean magnetic field

    Science.gov (United States)

    Thébault, E.; Langlais, B.; Oliveira, J. S.; Amit, H.; Leclercq, L.

    2018-03-01

    This paper presents the first regional model of the magnetic field of Mercury developed with mathematical continuous functions. The model has a horizontal spatial resolution of about 830 km at the surface of the planet, and it is derived without any a priori information about the geometry of the internal and external fields or regularization. It relies on an extensive dataset of the MESSENGER's measurements selected over its entire orbital lifetime between 2011 and 2015. A first order separation between the internal and the external fields over the Northern hemisphere is achieved under the assumption that the magnetic field measurements are acquired in a source free region within the magnetospheric cavity. When downward continued to the core-mantle boundary, the model confirms some of the general structures observed in previous studies such as the dominance of zonal field, the location of the North magnetic pole, and the global absence of significant small scale structures. The transformation of the regional model into a global spherical harmonic one provides an estimate for the axial quadrupole to axial dipole ratio of about g20/g10 = 0.27 . This is much lower than previous estimates of about 0.40. We note that it is possible to obtain a similar ratio provided that more weight is put on the location of the magnetic equator and less elsewhere.

  8. Maximum likelihood Bayesian averaging of airflow models in unsaturated fractured tuff using Occam and variance windows

    NARCIS (Netherlands)

    Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.

    2010-01-01

    We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,

  9. Tailored vs Black-Box Models for Forecasting Hourly Average Solar Irradiance

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Paulescu, M.; Badescu, V.

    2015-01-01

    Roč. 111, January (2015), s. 320-331 ISSN 0038-092X R&D Projects: GA MŠk LD12009 Grant - others:European Cooperation in Science and Technology(XE) COST ES1002 Institutional support: RVO:67985807 Keywords : solar irradiance * forecasting * tailored statistical models Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.685, year: 2015

  10. Analytic Models of Domain-Averaged Fermi Holes: A New Tool for the

    Czech Academy of Sciences Publication Activity Database

    Ponec, Robert; Cooper, D.L.; Savin, A.

    2008-01-01

    Roč. 14, č. 11 (2008), s. 3338-3345 ISSN 0947-6539 R&D Projects: GA AV ČR(CZ) IAA4072403 Institutional research plan: CEZ:AV0Z40720504 Keywords : bond theory * fermi hole analysis * lewis model Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.454, year: 2008

  11. Metallurgical source-contribution analysis of PM10 annual average concentration: A dispersion modeling approach in moravian-silesian region

    Directory of Open Access Journals (Sweden)

    P. Jančík

    2013-10-01

    Full Text Available The goal of the article is to present an analysis of the metallurgical industry's contribution to annual average PM10 concentrations in the Moravian-Silesian Region by means of air pollution modelling in accordance with the Czech reference methodology SYMOS'97.

  12. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...

  13. Modular model for Mercury's magnetospheric magnetic field confined within the average observed magnetopause.

    Science.gov (United States)

    Korth, Haje; Tsyganenko, Nikolai A; Johnson, Catherine L; Philpott, Lydia C; Anderson, Brian J; Al Asad, Manar M; Solomon, Sean C; McNutt, Ralph L

    2015-06-01

    Accurate knowledge of Mercury's magnetospheric magnetic field is required to understand the sources of the planet's internal field. We present the first model of Mercury's magnetospheric magnetic field confined within a magnetopause shape derived from Magnetometer observations by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging spacecraft. The field of internal origin is approximated by a dipole of magnitude 190 nT R_M^3, where R_M is Mercury's radius, offset northward by 479 km along the spin axis. External field sources include currents flowing on the magnetopause boundary and in the cross-tail current sheet. The cross-tail current is described by a disk-shaped current near the planet and a sheet current at larger (≳5 R_M) antisunward distances. The tail currents are constrained by minimizing the root-mean-square (RMS) residual between the model and the magnetic field observed within the magnetosphere. The magnetopause current contributions are derived by shielding the field of each module external to the magnetopause by minimizing the RMS normal component of the magnetic field at the magnetopause. The new model yields improvements over the previously developed paraboloid model in regions that are close to the magnetopause and the nightside magnetic equatorial plane. Magnetic field residuals remain that are distributed systematically over large areas and vary monotonically with magnetic activity. Further advances in empirical descriptions of Mercury's magnetospheric external field will need to account for the dependence of the tail and magnetopause currents on magnetic activity and additional sources within the magnetosphere associated with Birkeland currents and plasma distributions near the dayside magnetopause.

  14. Inconsistency in the average hydraulic models used in nuclear reactor design and safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jee Won; Roh, Gyu Hong; Choi, Hang Bok [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    One of the important inconsistencies in the six-equation model predictions has been found to be the force experienced by a single bubble placed in a convergent stream of liquid. Various sets of governing equations yield different amounts of force to hold the bubble stationary in a convergent nozzle. By using the first order potential flow theory, it is found that the six-equation model cannot be used to estimate the force experienced by a deformed bubble. The theoretical value of the particle stress of a bubble in a convergent nozzle flow has been found to be a function of the Weber number when bubble distortion is allowed. This force has been calculated by using different sets of governing equations and compared with the theoretical value. It is suggested in this study that the bubble size distribution function can be used to remove the presented inconsistency by relating the interfacial variables with different moments of the bubble size distribution function. This study also shows that the inconsistencies in the thermal-hydraulic governing equation can be removed by mechanistic modeling of the phasic interface. 11 refs., 3 figs. (Author)

  15. Inconsistency in the average hydraulic models used in nuclear reactor design and safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jee Won; Roh, Gyu Hong; Choi, Hang Bok [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    One of the important inconsistencies in the six-equation model predictions has been found to be the force experienced by a single bubble placed in a convergent stream of liquid. Various sets of governing equations yield different amounts of force to hold the bubble stationary in a convergent nozzle. By using the first order potential flow theory, it is found that the six-equation model cannot be used to estimate the force experienced by a deformed bubble. The theoretical value of the particle stress of a bubble in a convergent nozzle flow has been found to be a function of the Weber number when bubble distortion is allowed. This force has been calculated by using different sets of governing equations and compared with the theoretical value. It is suggested in this study that the bubble size distribution function can be used to remove the presented inconsistency by relating the interfacial variables with different moments of the bubble size distribution function. This study also shows that the inconsistencies in the thermal-hydraulic governing equation can be removed by mechanistic modeling of the phasic interface. 11 refs., 3 figs. (Author)

  16. Influence of Boussinesq coefficient on depth-averaged modelling of rapid flows

    Science.gov (United States)

    Yang, Fan; Liang, Dongfang; Xiao, Yang

    2018-04-01

    The traditional Alternating Direction Implicit (ADI) scheme has been proven to be incapable of modelling trans-critical flows. Its inherent lack of shock-capturing capability often results in spurious oscillations and computational instabilities. However, the ADI scheme is still widely adopted in flood modelling software, and various special treatments have been designed to stabilise the computation. Modification of the Boussinesq coefficient to adjust the amount of fluid inertia is a numerical treatment that allows the ADI scheme to be applicable to rapid flows. This study comprehensively examines the impact of this numerical treatment over a range of flow conditions. A shock-capturing TVD-MacCormack model is used to provide reference results. For unsteady flows over a frictionless bed, such as idealised dam-break floods, the results suggest that an increase in the value of the Boussinesq coefficient reduces the amplitude of the spurious oscillations. The opposite is observed for steady rapid flows over a frictional bed. Finally, a two-dimensional urban flooding phenomenon is presented, involving unsteady flow over a frictional bed. The results show that increasing the value of the Boussinesq coefficient can significantly reduce the numerical oscillations and reduce the predicted area of inundation. In order to stabilise the ADI computations, the Boussinesq coefficient could be judiciously raised or lowered depending on whether the rapid flow is steady or unsteady and whether the bed is frictional or frictionless. An increase in the Boussinesq coefficient generally leads to overprediction of the propagating speed of the flood wave over a frictionless bed, but the opposite is true when bed friction is significant.

  17. Variations in environmental tritium doses due to meteorological data averaging and uncertainties in pathway model parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kock, A.

    1996-05-01

    The objectives of this research are: (1) to calculate and compare off site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5 year meteorological data and annual source terms, including additional seasonal and site specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions in model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters given in former studies.

  18. Variations in environmental tritium doses due to meteorological data averaging and uncertainties in pathway model parameters

    International Nuclear Information System (INIS)

    Kock, A.

    1996-05-01

    The objectives of this research are: (1) to calculate and compare off site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5 year meteorological data and annual source terms, including additional seasonal and site specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions in model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters given in former studies

  19. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  20. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs, the most frequently used sensors in construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  1. Modelling the average velocity of propagation of the flame front in a gasoline engine with hydrogen additives

    Science.gov (United States)

    Smolenskaya, N. M.; Smolenskii, V. V.

    2018-01-01

    The paper presents models for calculating the average velocity of propagation of the flame front, obtained from the results of experimental studies. Experimental studies were carried out on a single-cylinder gasoline engine UIT-85 with hydrogen additives up to 6% of the mass of fuel. The article shows the influence of hydrogen addition on the average propagation velocity of the flame front in the main combustion phase. The dependences of the turbulent propagation velocity of the flame front in the second combustion phase on the mixture composition and operating modes are also presented. The article shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.

  2. Permafrost sub-grid heterogeneity of soil properties key for 3-D soil processes and future climate projections

    Directory of Open Access Journals (Sweden)

    Christian Beer

    2016-08-01

    Full Text Available There are massive carbon stocks stored in permafrost-affected soils due to the 3-D soil movement process called cryoturbation. For a reliable projection of the past, recent and future Arctic carbon balance, and hence climate, a reliable concept for representing cryoturbation in a land surface model (LSM) is required. The basis of the underlying transport processes is pedon-scale heterogeneity of soil hydrological and thermal properties as well as insulating layers, such as snow and vegetation. Today we still lack a concept of how to reliably represent pedon-scale properties and processes in an LSM. One possibility could be a statistical approach. This perspective paper demonstrates the importance of sub-grid heterogeneity in permafrost soils as a prerequisite for implementing any lateral transport parametrization. Representing such heterogeneity at the sub-pixel size of an LSM is the next logical step of model advancements. As a result of a theoretical experiment, heterogeneity of thermal and hydrological soil properties alone leads to a remarkable initial sub-grid range of 2 °C in subsoil temperature and 150 cm in active-layer thickness in East Siberia. These results show the way forward in representing combined lateral and vertical transport of water and soil in LSMs.

  3. Retrospective cost adaptive Reynolds-averaged Navier-Stokes k-ω model for data-driven unsteady turbulent simulations

    Science.gov (United States)

    Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.

    2018-03-01

    This paper presents a data-driven computational model for simulating unsteady turbulent flows, where sparse measurement data is available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from 2 test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.
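    As a rough illustration of the general idea of adapting a closure coefficient to sparse measurements, the sketch below nudges a single coefficient of a toy mean-velocity profile toward three "measurements" by plain gradient descent on the misfit. This is emphatically not the retrospective cost adaptation algorithm itself, which adapts the k-ω closure coefficients inside an unsteady RANS solver; the profile, coefficient, and learning rate are all hypothetical.

```python
import numpy as np

def model_profile(y, C):
    # Hypothetical one-coefficient closure: u = (1/C) * ln(y / y0).
    return (1.0 / C) * np.log(y / 1e-3)

y_meas = np.array([0.01, 0.05, 0.2])       # sparse measurement locations
u_meas = model_profile(y_meas, 0.41)        # "measurements" generated with C = 0.41

C, lr = 0.55, 5e-4                          # initial (standard) coefficient, step size
for _ in range(2000):
    resid = model_profile(y_meas, C) - u_meas
    grad = np.sum(resid * (-1.0 / C**2) * np.log(y_meas / 1e-3))  # d(misfit)/dC
    C -= lr * grad

print("adapted coefficient:", C)            # approaches the value 0.41 used for the data
```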

  4. Atomic process calculations in hot dense plasmas using average atom models

    International Nuclear Information System (INIS)

    Velarde, G.; Aragones, J.M.; Gamez, L.; Honrubia, J.J.; Martinez-Val, J.M.; Minguez, E.; Ocana, J.L.; Perlado, J.M.; Serrano, J.F.

    1987-01-01

    During the past years, an important effort has been devoted at the authors' Institute to developing the NORCLA code, which in the first version was characterized by the following features: one-dimensional Lagrangian mesh; equilibrium between radiation, ion and electron species; local alpha energy deposition; neutron transport by the discrete ordinates method and analytical equation of state, opacities and conductivities. In the successive versions of NORCLA, EOS and electron conductivities were modified by the pressure ionization and degeneracy corrections; a module was also developed for computing the energy deposition of the incident ion beams coupled to the energy equation, and a code to calculate the alpha particle transport and energy deposition. Recently, a 3T version of the NORCLA code, with tabular EOS, opacities and conductivities, laser ray tracing and suprathermal electron transport has been produced. In this article, the atomic physics models developed to determine the atomic data, such as EOS and opacities, more accurately are explained, giving a brief description and a comparison of them. As a result of this development, a DENIM Atomic Data Library is being generated, taking some data and procedures from the SESAME Library. This library is presented, including a comparison of the opacity data for aluminium and iron at different densities and temperatures. Conclusions about this work are presented, and the ongoing developments summarized

  5. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    Science.gov (United States)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originated from the imposed mean gradient is captured. The sensitivity of the synthetic fields on the input spectra is assessed by
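    The filtering operation underlying the SGS quantities discussed above can be written down compactly. The sketch below applies a periodic box filter to a synthetic random field and forms the SGS stress component τ_uv = filtered(uv) − filtered(u)·filtered(v); in practice the same operation would be applied to MTLM or DNS velocity and scalar fields rather than to random noise, and the grid size and filter width here are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(4)
n, w = 64, 8                                  # grid size and filter width (cells)
u = rng.normal(size=(n, n, n))
v = rng.normal(size=(n, n, n))

def box(f):
    """Periodic box filter of width w."""
    return uniform_filter(f, size=w, mode="wrap")

# SGS stress component: tau_uv = filtered(u*v) - filtered(u)*filtered(v)
tau_uv = box(u * v) - box(u) * box(v)
print("mean SGS stress component tau_uv:", tau_uv.mean())
```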

  6. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

    2009-08-15

    The heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system modeling, for estimating the average air temperature in multi-zone space heating systems is developed. This modeling technique has the advantage of expert knowledge of fuzzy inference systems (FISs) and learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-square method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive network based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems, in terms of energy efficiency and thermal comfort. The average air temperature results estimated by using the developed model are strongly in agreement with the experimental results. (author)

  7. Flow and transport simulation of Madeira River using three depth-averaged two-equation turbulence closure models

    Directory of Open Access Journals (Sweden)

    Li-ren Yu

    2012-03-01

    Full Text Available This paper describes a numerical simulation in the Amazon water system, aiming to develop a quasi-three-dimensional numerical tool for refined modeling of turbulent flow and passive transport of mass in natural waters. Three depth-averaged two-equation turbulence closure models, k˜−ε˜,k˜−w˜, and k˜−ω˜ , were used to close the non-simplified quasi-three dimensional hydrodynamic fundamental governing equations. The discretized equations were solved with the advanced multi-grid iterative method using non-orthogonal body-fitted coarse and fine grids with collocated variable arrangement. Except for steady flow computation, the processes of contaminant inpouring and plume development at the beginning of discharge, caused by a side-discharge of a tributary, have also been numerically investigated. The three depth-averaged two-equation closure models are all suitable for modeling strong mixing turbulence. The newly established turbulence models such as the k˜−ω˜ model, with a higher order of magnitude of the turbulence parameter, provide a possibility for improving computational precision.

  8. Numerical calculation of the dispersion of heat and material in rivers by means of a depth-averaged model

    International Nuclear Information System (INIS)

    Pavlovic, R.N.

    1981-01-01

    Nowadays, our rivers are polluted to an ever increasing degree by industrial and domestic discharges of waste heat and sewage. An important task of environmental protection is to predict the consequences of such pollutions in order to be able to plan and perform protective measures. For the solution of this problem a reliable mathematical model is very helpful. In the present paper a depth-averaged model is developed consisting of a two-dimensional elliptical model component for the direct near-field of a discharge and a two-dimensional parabolic separate model for the calculation of longer river distances further downstream. This model is exhaustively tested by application to a number of laboratory flows and real discharges to rivers. (orig./RW) [de

  9. Short-term electricity prices forecasting based on support vector regression and Auto-regressive integrated moving average modeling

    International Nuclear Information System (INIS)

    Che Jinxing; Wang Jianzhou

    2010-01-01

    In this paper, we present the use of different mathematical models to forecast electricity prices in deregulated power markets. A successful electricity price prediction tool can help both power producers and consumers plan their bidding strategies. Inspired by the fact that the support vector regression (SVR) model, with the ε-insensitive loss function, tolerates residuals within the bounds of the ε-tube, we propose a hybrid model, called SVRARIMA, that combines SVR and auto-regressive integrated moving average (ARIMA) models to take advantage of their respective strengths in nonlinear and linear modeling. A nonlinear analysis of the time series indicates that nonlinear modeling is appropriate, and the SVR is applied to capture the nonlinear patterns. ARIMA models have been successfully applied to the residual regression estimation problem. The experimental results demonstrate that the proposed model outperforms existing neural-network approaches, traditional ARIMA models and other hybrid models in terms of root mean square error and mean absolute percentage error.
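    A minimal sketch of a hybrid in the spirit of SVRARIMA, assuming synthetic hourly prices, an SVR on lagged prices for the nonlinear part, and an ARIMA(1,0,1) fitted to the SVR residuals for the linear part; the orders, kernel, and lag choices are illustrative and not those of the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

# Synthetic hourly price series with a daily cycle, a trend, and noise.
rng = np.random.default_rng(5)
t = np.arange(400)
price = 50 + 10 * np.sin(2 * np.pi * t / 24) + 0.05 * t + rng.normal(0, 1, 400)

# Lagged-price feature matrix: row j holds the 24 prices preceding target y[j].
lags = 24
X = np.column_stack([price[i:len(price) - lags + i] for i in range(lags)])
y = price[lags:]

# Nonlinear part: SVR on scaled lag features (last 24 hours held out).
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(X[:-24], y[:-24])
resid = y[:-24] - svr.predict(X[:-24])

# Linear part: ARIMA on the SVR residuals.
arima = ARIMA(resid, order=(1, 0, 1)).fit()

# Combined forecast over the held-out day: nonlinear part + residual forecast.
forecast = svr.predict(X[-24:]) + arima.forecast(steps=24)
print("MAPE (%):", 100 * np.mean(np.abs((y[-24:] - forecast) / y[-24:])))
```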

  10. Adaptive radiotherapy with an average anatomy model: Evaluation and quantification of residual deformations in head and neck cancer patients

    International Nuclear Information System (INIS)

    Kranen, Simon van; Mencarelli, Angelo; Beek, Suzanne van; Rasch, Coen; Herk, Marcel van; Sonke, Jan-Jakob

    2013-01-01

    Background and purpose: To develop and validate an adaptive intervention strategy for radiotherapy of head-and-neck cancer that accounts for systematic deformations by modifying the planning-CT (pCT) to the average misalignments in daily cone beam CT (CBCT) measured with deformable registration (DR). Methods and materials: Daily CBCT scans (808 scans) for 25 patients were retrospectively registered to the pCT with B-spline DR. The average deformation vector field was used to deform the pCT for adaptive intervention. Two strategies were simulated: single intervention after 10 fractions and weekly intervention with the average deformation vector field from the previous week. The model was geometrically validated with the residual misalignment of anatomical landmarks both on bony-anatomy (BA; automatically generated) and soft-tissue (ST; manually identified). Results: Systematic deformations were 2.5/3.4 mm vector length (BA/ST). Single intervention reduced deformations to 1.5/2.7 mm (BA/ST). Weekly intervention resulted in 1.0/2.2 mm (BA/ST) and accounted better for progressive changes. 15 patients had average systematic deformations >2 mm (BA): reductions were 1.1/1.9 mm (single/weekly BA). ST improvements were underestimated due to observer and registration variability. Conclusions: Adaptive intervention with a pCT modified to the average anatomy during treatment successfully reduces systematic deformations. The improved accuracy could possibly be exploited in margin reduction and/or dose escalation

  11. Applications of Analytical Self-Similar Solutions of Reynolds-Averaged Models for Instability-Induced Turbulent Mixing

    Science.gov (United States)

    Hartland, Tucker; Schilling, Oleg

    2017-11-01

    Analytical self-similar solutions to several families of single- and two-scale, eddy viscosity and Reynolds stress turbulence models are presented for Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz instability-induced turbulent mixing. The use of algebraic relationships between model coefficients and physical observables (e.g., experimental growth rates) following from the self-similar solutions to calibrate a member of a given family of turbulence models is shown. It is demonstrated numerically that the algebraic relations accurately predict the value and variation of physical outputs of a Reynolds-averaged simulation in flow regimes that are consistent with the simplifying assumptions used to derive the solutions. The use of experimental and numerical simulation data on Reynolds stress anisotropy ratios to calibrate a Reynolds stress model is briefly illustrated. The implications of the analytical solutions for future Reynolds-averaged modeling of hydrodynamic instability-induced mixing are briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  12. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study.

    Science.gov (United States)

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang; Cao, Yang

    2016-08-16

    To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. A time-series study using a regional death registry between 2009 and 2010. 8 districts in a large metropolitan area in Northern China. 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (-1.09 to 4.28 vs -1.08 to 3.93) and the PCs-based model (-2.23 to 4.07 vs -2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, -1.12 to 4.85 versus -1.11 to 4.83. The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes.
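
    A minimal sketch of the model-averaging step, assuming a BIC-based approximation to the posterior model weights and synthetic Poisson mortality data (the candidate models, variables and priors here are illustrative, not the paper's GAMMs):

```python
# Hedged sketch: approximate BMA weights w_k ∝ exp(-BIC_k / 2) over candidate
# Poisson regressions, then a weight-averaged PM10 coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 365
pm10 = rng.gamma(4, 20, n)
nox = 0.5 * pm10 + rng.normal(0, 10, n)
deaths = rng.poisson(np.exp(1.5 + 0.002 * pm10 + 0.001 * nox))

candidates = {
    "PM10 only": sm.add_constant(np.column_stack([pm10])),
    "PM10 + NOx": sm.add_constant(np.column_stack([pm10, nox])),
}
fits = {name: sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
        for name, X in candidates.items()}

bics = np.array([f.bic for f in fits.values()])
w = np.exp(-(bics - bics.min()) / 2)
w /= w.sum()                                    # approximate posterior model weights

# BMA estimate of the PM10 coefficient (column 1 in both designs here).
beta_bma = sum(wk * f.params[1] for wk, f in zip(w, fits.values()))
print(dict(zip(fits, np.round(w, 3))), round(float(beta_bma), 5))
```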

  13. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Yuxing [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Fan, Jiwen [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xiao, Heng [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego CA USA; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton VA USA; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Gustafson, William I. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-11-01

    Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic, since it requires the cumulus scheme to adapt to a higher resolution than that for which it was originally designed (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and the time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
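
    A minimal sketch of the temporal-averaging idea, assuming a simple moving window over the instantaneous CAPE tendency (the window length, time step and values are illustrative; the actual closure operates inside WRF):

```python
# Hedged sketch: a running (temporal) average of the large-scale CAPE tendency,
# the quantity the modified closure would use in place of the instantaneous value.
import numpy as np

dt = 300.0                    # model time step in seconds (assumed)
window = 3600.0               # averaging window in seconds (assumed)
n_keep = int(window / dt)

history = []
def averaged_cape_tendency(instantaneous):
    """Return the CAPE tendency averaged over the last `window` seconds."""
    history.append(instantaneous)
    if len(history) > n_keep:
        history.pop(0)
    return sum(history) / len(history)

rng = np.random.default_rng(7)
for _ in range(5):
    # Noisy instantaneous tendencies are smoothed before entering the closure.
    print(round(averaged_cape_tendency(1.0 + rng.normal(0, 0.5)), 3))
```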

  14. Structure and modeling of turbulence

    International Nuclear Information System (INIS)

    Novikov, E.A.

    1995-01-01

    The "vortex strings" scale l_s ∼ L Re^(-3/10) (L - external scale, Re - Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES)

  15. Single nuclear transfer strengths and sum rules in the interacting boson-fermion model and in the spectral averaging theory

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1991-01-01

    In the interacting boson-fermion model of collective nuclei, within the symmetry limits of the model appropriate for vibrational, rotational and γ-unstable nuclei, the selection rules for one-particle transfer, the model predictions for the allowed strengths and the comparison of theory with experiment are briefly reviewed. In the spectral-averaging theory, with the specific example of orbit occupancies, the smoothed forms (linear or, better, ratio of Gaussians) as determined by central limit theorems, how they provide a good criterion for selecting effective interactions, and the convolution structure of occupancy densities in huge spaces are described. The complementary information provided by nuclear models and statistical laws is brought out. (author). 63 refs., 5 figs

  16. A depth-averaged debris-flow model that includes the effects of evolving dilatancy. I. physical basis

    Science.gov (United States)

    Iverson, Richard M.; George, David L.

    2014-01-01

    To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, v_s. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇·v_s. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m − m_eq, where m_eq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m − m_eq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.

  17. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    Full Text Available We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we equivalently convert it into a one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can obtain the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
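
    A minimal sketch of sample average approximation on a toy single-variable problem (the newsvendor-style cost, scenario distribution and unit costs are assumptions, not the paper's supply-chain model or its SSD constraint):

```python
# Hedged sketch of SAA: the expectation in a stochastic program is replaced by
# an average over sampled scenarios, and the deterministic surrogate is optimized.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
demand_samples = rng.lognormal(mean=3.0, sigma=0.4, size=5000)   # scenarios
c_order, c_short, c_waste = 1.0, 4.0, 0.5                        # unit costs (assumed)

def saa_cost(q):
    # Sample average of the second-stage cost over all demand scenarios.
    shortage = np.maximum(demand_samples - q, 0.0)
    waste = np.maximum(q - demand_samples, 0.0)
    return c_order * q + np.mean(c_short * shortage + c_waste * waste)

res = minimize_scalar(saa_cost, bounds=(0, 100), method="bounded")
print(f"SAA order quantity: {res.x:.2f}, approx. expected cost: {res.fun:.2f}")
```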

  18. Dual-component model of respiratory motion based on the periodic autoregressive moving average (periodic ARMA) method

    International Nuclear Information System (INIS)

    McCall, K C; Jeraj, R

    2007-01-01

    A new approach to the problem of modelling and predicting respiration motion has been implemented. This is a dual-component model, which describes the respiration motion as a non-periodic time series superimposed onto a periodic waveform. A periodic autoregressive moving average algorithm has been used to define a mathematical model of the periodic and non-periodic components of the respiration motion. The periodic components of the motion were found by projecting multiple inhale-exhale cycles onto a common subspace. The component of the respiration signal that is left after removing this periodicity is a partially autocorrelated time series and was modelled as an autoregressive moving average (ARMA) process. The accuracy of the periodic ARMA model with respect to fluctuation in amplitude and variation in length of cycles has been assessed. A respiration phantom was developed to simulate the inter-cycle variations seen in free-breathing and coached respiration patterns. At ±14% variability in cycle length and maximum amplitude of motion, the prediction errors were 4.8% of the total motion extent for a 0.5 s ahead prediction, and 9.4% at 1.0 s lag. The prediction errors increased to 11.6% at 0.5 s and 21.6% at 1.0 s when the respiration pattern had ±34% variations in both these parameters. Our results have shown that the accuracy of the periodic ARMA model is more strongly dependent on the variations in cycle length than the amplitude of the respiration cycles
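
    A minimal sketch of the dual-component decomposition, assuming a synthetic respiration-like signal with a known cycle length (the cycle length, ARMA order and noise level are illustrative):

```python
# Hedged sketch: subtract an average periodic waveform, then fit an ARMA model
# to the remaining non-periodic series and form a one-step-ahead prediction.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
period, n_cycles = 40, 50                      # samples per cycle (assumed)
t = np.arange(period * n_cycles)
signal = np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(t.size)

# Periodic component: average all cycles sample-by-sample.
cycles = signal.reshape(n_cycles, period)
periodic = np.tile(cycles.mean(axis=0), n_cycles)

# Non-periodic residual modelled as ARMA(2, 1).
residual = signal - periodic
arma = ARIMA(residual, order=(2, 0, 1)).fit()

# Next sample = ARMA forecast + periodic value at the next phase (phase 0 here).
one_step = arma.forecast(steps=1) + periodic[0]
print("next-sample prediction:", round(float(one_step[0]), 3))
```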

  19. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    Science.gov (United States)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.

  20. Analysis of experimental data: The average shape of extreme wave forces on monopile foundations and the NewForce model

    DEFF Research Database (Denmark)

    Schløer, Signe; Bredmose, Henrik; Ghadirian, Amin

    2017-01-01

    Experiments with a stiff pile subjected to extreme wave forces typical of offshore wind farm storm conditions are considered. The exceedance probability curves of the nondimensional force peaks and crest heights are analysed. The average force time histories, normalised with their peak values, are compared across the sea states. It is found that the force shapes show a clear similarity when grouped after the values of the normalised peak force, F/(ρghR²), and normalised depth, h/(gT_p²), and presented in a normalised time scale t/T_a. For the largest force events, slamming can be seen as a distinct ‘hat’ … The NewForce model is compared to the average shapes. For more nonlinear wave shapes, higher-order terms have to be considered in order for the NewForce model to be able to predict the expected shapes.

  1. Predicting dissolution patterns in variable aperture fractures: 1. Development and evaluation of an enhanced depth-averaged computational model

    Energy Technology Data Exchange (ETDEWEB)

    Detwiler, R L; Rajaram, H

    2006-04-21

    Water-rock interactions within variable-aperture fractures can lead to dissolution of fracture surfaces and local alteration of fracture apertures, potentially transforming the transport properties of the fracture over time. Because fractures often provide dominant pathways for subsurface flow and transport, developing models that effectively quantify the role of dissolution on changing transport properties over a range of scales is critical to understanding potential impacts of natural and anthropogenic processes. Dissolution of fracture surfaces is controlled by surface-reaction kinetics and transport of reactants and products to and from the fracture surfaces. We present development and evaluation of a depth-averaged model of fracture flow and reactive transport that explicitly calculates local dissolution-induced alterations in fracture apertures. The model incorporates an effective mass transfer relationship that implicitly represents the transition from reaction-limited dissolution to transport-limited dissolution. We evaluate the model through direct comparison to previously reported physical experiments in transparent analog fractures fabricated by mating an inert, transparent rough surface with a smooth single crystal of potassium dihydrogen phosphate (KDP), which allowed direct measurement of fracture aperture during dissolution experiments using well-established light transmission techniques [Detwiler, et al., 2003]. Comparison of experiments and simulations at different flow rates demonstrate the relative impact of the dimensionless Peclet and Damkohler numbers on fracture dissolution and the ability of the computational model to simulate dissolution. Despite some discrepancies in the small-scale details of dissolution patterns, the simulations predict the evolution of large-scale features quite well for the different experimental conditions. This suggests that our depth-averaged approach to simulating fracture dissolution provides a useful approach for

  2. Towards Improving the Efficiency of Bayesian Model Averaging Analysis for Flow in Porous Media via the Probabilistic Collocation Method

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2018-04-01

    Full Text Available The characterization of flow in subsurface porous media is associated with high uncertainty. To better quantify the uncertainty of groundwater systems, it is necessary to consider the model uncertainty. Multi-model uncertainty analysis can be performed in the Bayesian model averaging (BMA) framework. However, BMA analysis via the Monte Carlo method is time consuming because it requires many forward model evaluations. A computationally efficient BMA analysis framework is proposed by using the probabilistic collocation method to construct a response surface model, where the log hydraulic conductivity field and hydraulic head are expanded into polynomials through Karhunen–Loève and polynomial chaos methods. A synthetic test is designed to validate the proposed response surface analysis method. The results show that the posterior model weights and the key statistics in the BMA framework can be accurately estimated. The relative errors of the mean and total variance in the BMA analysis results are just approximately 0.013% and 1.18%, but the proposed method can be 16 times more computationally efficient than the traditional BMA method.

  3. Neural networks prediction and fault diagnosis applied to stationary and non stationary ARMA (Autoregressive moving average) modeled time series

    International Nuclear Information System (INIS)

    Marseguerra, M.; Minoggio, S.; Rossi, A.; Zio, E.

    1992-01-01

    The correlated noise affecting many industrial plants under stationary or cyclo-stationary conditions - nuclear reactors included - has been successfully modeled by autoregressive moving average (ARMA) techniques owing to their versatility. The relatively recent neural network methods have similar features, and much effort is being devoted to exploring their usefulness in forecasting and control. Identifying a signal by means of an ARMA model gives rise to the problem of selecting its correct order. Similar difficulties must be faced when applying neural network methods and, specifically, particular care must be given to the setting up of the appropriate network topology, the data normalization procedure and the learning code. In the present paper the capability of some neural networks to learn ARMA and seasonal ARMA processes is investigated. The results of the tested cases look promising since they indicate that the neural networks learn the underlying process with relative ease, so that their forecasting capability may represent a convenient fault diagnosis tool. (Author)
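
    A minimal sketch of the idea of letting a neural network learn an ARMA process, assuming a synthetic ARMA(2,1) series and an arbitrary lag window and network size:

```python
# Hedged sketch: generate an ARMA(2,1) series and train a small neural network
# to forecast the next value from lagged values.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from sklearn.neural_network import MLPRegressor

np.random.seed(0)
arma = ArmaProcess(ar=[1, -0.6, 0.2], ma=[1, 0.4])   # signs follow the statsmodels convention
series = arma.generate_sample(nsample=2000)

lags = 5
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]
split = 1500
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:split], y[:split])
print("held-out R^2:", round(net.score(X[split:], y[split:]), 3))
```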

  4. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.

  5. Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2009-02-01

    Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p

  6. A priori study of subgrid-scale features in turbulent Rayleigh-Bénard convection

    Science.gov (United States)

    Dabbagh, F.; Trias, F. X.; Gorobets, A.; Oliva, A.

    2017-10-01

    At the crossroad between flow topology analysis and turbulence modeling, a priori studies are a reliable tool to understand the underlying physics of the subgrid-scale (SGS) motions in turbulent flows. In this paper, properties of the SGS features in the framework of a large-eddy simulation are studied for a turbulent Rayleigh-Bénard convection (RBC). To do so, data from direct numerical simulation (DNS) of a turbulent air-filled RBC in a rectangular cavity of aspect ratio unity and π spanwise open-ended distance are used at two Rayleigh numbers Ra ∈ {10^8, 10^10} [Dabbagh et al., "On the evolution of flow topology in turbulent Rayleigh-Bénard convection," Phys. Fluids 28, 115105 (2016)]. First, DNS at Ra = 10^8 is used to assess the performance of eddy-viscosity models such as QR, Wall-Adapting Local Eddy-viscosity (WALE), and the recent S3PQR-models proposed by Trias et al. ["Building proper invariants for eddy-viscosity subgrid-scale models," Phys. Fluids 27, 065103 (2015)]. The outcomes imply that the eddy-viscosity modeling smoothes the coarse-grained viscous straining and retrieves fairly well the effect of the kinetic unfiltered scales in order to reproduce the coherent large scales. However, these models fail to approach the exact evolution of the SGS heat flux and are incapable to reproduce well the further dominant rotational enstrophy pertaining to the buoyant production. Afterwards, the key ingredients of eddy-viscosity, ν_t, and eddy-diffusivity, κ_t, are calculated a priori and revealed positive prevalent values to maintain a turbulent wind essentially driven by the mean buoyant force at the sidewalls. The topological analysis suggests that the effective turbulent diffusion paradigm and the hypothesis of a constant turbulent Prandtl number are only applicable in the large-scale strain-dominated areas in the bulk. It is shown that the bulk-dominated rotational structures of vortex-stretching (and its synchronous viscous dissipative structures) hold

  7. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    Full Text Available This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with the mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.

  8. Parametric Study of Flow Control Over a Hump Model Using an Unsteady Reynolds- Averaged Navier-Stokes Code

    Science.gov (United States)

    Rumsey, Christopher L.; Greenblatt, David

    2007-01-01

    This is an expanded version of a limited-length paper that appeared at the 5th International Symposium on Turbulence and Shear Flow Phenomena by the same authors. A computational study was performed for steady and oscillatory flow control over a hump model with flow separation to assess how well the steady and unsteady Reynolds-averaged Navier-Stokes equations predict trends due to Reynolds number, control magnitude, and control frequency. As demonstrated in earlier studies, the hump model case is useful because it clearly demonstrates a failing in all known turbulence models: they under-predict the turbulent shear stress in the separated region and consequently reattachment occurs too far downstream. In spite of this known failing, three different turbulence models were employed to determine if trends can be captured even though absolute levels are not. Overall the three turbulence models showed very similar trends as experiment for steady suction, but only agreed qualitatively with some of the trends for oscillatory control.

  9. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    Science.gov (United States)

    Löwe, H.; Helbig, N.

    2012-10-01

    We provide a new quasi-analytical method to compute the subgrid topographic influences on the shortwave radiation fluxes and the effective albedo in complex terrain as required for large-scale meteorological, land surface, or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain-averaged fluxes of direct, diffuse, and terrain radiation and the sky view factor. Domain-averaged quantities can be related to a type of level-crossing probability of the random field, which is approximated by long-standing results developed for acoustic scattering at ocean boundaries. This allows us to express all nonlocal horizon effects in terms of a local terrain parameter, namely, the mean-square slope. Emerging integrals are computed numerically, and fit formulas are given for practical purposes. As an implication of our approach, we provide an expression for the effective albedo of complex terrain in terms of the Sun elevation angle, mean-square slope, the area-averaged surface albedo, and the ratio of atmospheric direct beam to diffuse radiation. For demonstration we compute the decrease of the effective albedo relative to the area-averaged albedo in Switzerland for idealized snow-covered and clear-sky conditions at noon in winter. We find an average decrease of 5.8% and spatial patterns which originate from characteristics of the underlying relief. Limitations and possible generalizations of the method are discussed.

  10. Development of a stacked ensemble model for forecasting and analyzing daily average PM2.5 concentrations in Beijing, China.

    Science.gov (United States)

    Zhai, Binxu; Chen, Jianguo

    2018-04-18

    A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including those of simplification, polynomial, transformation and combination, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and evaluate the degrees of feature importance. Single models including LASSO, Adaboost, XGBoost and a multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level-0 space and are then integrated by support vector regression (SVR) in the level-1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are the most important pollution factors for forecasting PM2.5 concentrations. Local extreme wind speeds and maximal wind speeds are found to contribute the most, among the meteorological factors, to the cross-regional transport of contaminants. Pollutants found in the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data, with a coefficient of determination (R²) of 0.90 and a root mean squared error (RMSE) of 23.69 μg/m³. For single pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than when applied to days registering high levels of pollution. The overall classification accuracy level is 73.93%, with most misclassifications made among adjacent categories. The results demonstrate the interpretability and generalizability of
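
    A minimal sketch of the stacked-generalization architecture, assuming scikit-learn stand-ins (gradient boosting replaces XGBoost and the GA-tuned MLP, and the data are synthetic):

```python
# Hedged sketch: level-0 learners are combined by a level-1 SVR via stacking.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.ensemble import AdaBoostRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=20, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("lasso", Lasso(alpha=0.1)),
                ("ada", AdaBoostRegressor(random_state=0)),
                ("gbr", GradientBoostingRegressor(random_state=0))],
    final_estimator=SVR(kernel="rbf", C=10.0),   # level-1 meta-learner
    cv=5,
)
stack.fit(X_tr, y_tr)
print("held-out R^2:", round(stack.score(X_te, y_te), 3))
```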

  11. An upscaled two-equation model of transport in porous media through unsteady-state closure of volume averaged formulations

    Science.gov (United States)

    Chaynikov, S.; Porta, G.; Riva, M.; Guadagnini, A.

    2012-04-01

    We focus on a theoretical analysis of nonreactive solute transport in porous media through the volume averaging technique. Darcy-scale transport models based on continuum formulations typically include large scale dispersive processes which are embedded in a pore-scale advection diffusion equation through a Fickian analogy. This formulation has been extensively questioned in the literature due to its inability to depict observed solute breakthrough curves in diverse settings, ranging from the laboratory to the field scales. The heterogeneity of the pore-scale velocity field is one of the key sources of uncertainties giving rise to anomalous (non-Fickian) dispersion in macro-scale porous systems. Some of the models which are employed to interpret observed non-Fickian solute behavior make use of a continuum formulation of the porous system which assumes a two-region description and includes a bimodal velocity distribution. A first class of these models comprises the so-called ''mobile-immobile'' conceptualization, where convective and dispersive transport mechanisms are considered to dominate within a high velocity region (mobile zone), while convective effects are neglected in a low velocity region (immobile zone). The mass exchange between these two regions is assumed to be controlled by a diffusive process and is macroscopically described by a first-order kinetic. An extension of these ideas is the two equation ''mobile-mobile'' model, where both transport mechanisms are taken into account in each region and a first-order mass exchange between regions is employed. Here, we provide an analytical derivation of two region "mobile-mobile" meso-scale models through a rigorous upscaling of the pore-scale advection diffusion equation. Among the available upscaling methodologies, we employ the Volume Averaging technique. In this approach, the heterogeneous porous medium is supposed to be pseudo-periodic, and can be represented through a (spatially) periodic unit cell

  12. Autoregressive-moving-average hidden Markov model for vision-based fall prediction-An application for walker robot.

    Science.gov (United States)

    Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro

    2017-01-01

    Population aging of societies requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user with a walking assistive system acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted on the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, while using a walker robot in five different falling scenarios; namely, fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts the fall with a rate of 84.72%.

  13. MOTION ARTIFACT REDUCTION IN FUNCTIONAL NEAR INFRARED SPECTROSCOPY SIGNALS BY AUTOREGRESSIVE MOVING AVERAGE MODELING BASED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    MEHDI AMIAN

    2013-10-01

    Full Text Available Functional near infrared spectroscopy (fNIRS) is a technique that is used for noninvasive measurement of the oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) concentrations in the brain tissue. Since the ratio of the concentrations of these two agents is correlated with neuronal activity, fNIRS can be used for monitoring and quantifying cortical activity. The portability of fNIRS makes it a good candidate for studies involving subject movement. The fNIRS measurements, however, are sensitive to artifacts generated by the subject's head motion. This makes fNIRS signals less effective in such applications. In this paper, autoregressive moving average (ARMA) modeling of the fNIRS signal is proposed for a state-space representation of the signal, which is then fed to a Kalman filter for estimating the motionless signal from the motion-corrupted signal. Results are compared to the previously reported autoregressive (AR) model based approach and show that the ARMA models outperform the AR models. We attribute this to the richer structure, containing more terms, of ARMA compared to AR. We show that the signal-to-noise ratio (SNR) is about 2 dB higher for the ARMA-based method.
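
    A minimal sketch of Kalman filtering with an autoregressive state model standing in for the full ARMA state-space form (the AR(1) coefficient, noise variances and synthetic signal are assumptions, not fitted fNIRS parameters):

```python
# Hedged sketch: a scalar Kalman filter estimating an underlying AR(1) signal
# from a noise-corrupted observation, as a simplified stand-in for the
# ARMA-state-space + Kalman scheme described above.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
phi, q, r = 0.95, 0.05, 1.0          # AR coefficient, process and measurement noise variances (assumed)

true = np.zeros(n)
for k in range(1, n):
    true[k] = phi * true[k - 1] + rng.normal(0, np.sqrt(q))
measured = true + rng.normal(0, np.sqrt(r), n)   # motion-corrupted observation

est, P = np.zeros(n), 1.0
for k in range(1, n):
    x_pred = phi * est[k - 1]        # predict
    P_pred = phi**2 * P + q
    K = P_pred / (P_pred + r)        # update with the Kalman gain
    est[k] = x_pred + K * (measured[k] - x_pred)
    P = (1 - K) * P_pred

print("RMSE raw vs filtered:",
      round(float(np.sqrt(np.mean((measured - true) ** 2))), 3),
      round(float(np.sqrt(np.mean((est - true) ** 2))), 3))
```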

  14. A statistical study of gyro-averaging effects in a reduced model of drift-wave transport

    Science.gov (United States)

    da Fonseca, J. D.; del-Castillo-Negrete, D.; Sokolov, I. M.; Caldas, I. L.

    2016-08-01

    A statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift-waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K_0, becomes K_0 J_0(ρ̂), where J_0 is the zeroth-order Bessel function and ρ̂ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for ρ̂, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K_0 J_0(ρ̂). Using these results, we compute the probability of loss of confinement (i.e., global chaos), P_c, and the probability of trapping in the main drift-wave resonance, P_t. It is shown that P_c provides an upper bound for the escape rate, and that P_t provides a good estimate of the particle trapping rate. The analytical results are compared with direct numerical Monte-Carlo simulations of particle transport.
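
    A minimal sketch of the gyro-averaged standard map, with an illustrative value of K_0 and a Rayleigh-distributed gyroradius standing in for the Maxwellian assumption; the chaos-threshold proxy for P_c at the end is a rough heuristic, not the paper's calculation:

```python
# Hedged sketch: the standard-map kick K0 is replaced by K0 * J0(rho_hat) per particle.
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(6)
K0, n_particles, n_iter = 2.0, 500, 1000
rho = rng.rayleigh(scale=1.0, size=n_particles)   # gyroradii (Rayleigh ~ 2D Maxwellian speeds)
K_eff = K0 * j0(rho)                              # effective perturbation amplitude per particle

x = rng.uniform(0, 2 * np.pi, n_particles)
p = rng.uniform(0, 2 * np.pi, n_particles)
for _ in range(n_iter):
    p = (p + K_eff * np.sin(x)) % (2 * np.pi)     # gyro-averaged standard map iteration
    x = (x + p) % (2 * np.pi)

# Fraction of particles whose effective kick exceeds the standard-map
# global-chaos threshold (~0.9716): a crude proxy for P_c.
print("fraction above chaos threshold:", float(np.mean(np.abs(K_eff) > 0.9716)))
```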

  15. A new LP formulation of the admission control problem modelled as an MDP under average reward criterion

    Science.gov (United States)

    Pietrabissa, Antonio

    2011-12-01

    The admission control problem can be modelled as a Markov decision process (MDP) under the average cost criterion and formulated as a linear programming (LP) problem. The LP formulation is attractive in present and future communication networks, which support an increasing number of classes of service, since it can be used to explicitly control class-level requirements, such as class blocking probabilities. On the other hand, the LP formulation suffers from scalability problems as the number C of classes increases. This article proposes a new LP formulation which, even though it does not introduce any approximation, is much more scalable: the problem size reduction with respect to the standard LP formulation is O((C + 1)^2/2^C). Theoretical and numerical simulation results prove the effectiveness of the proposed approach.
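
    A minimal sketch of the occupation-measure LP for an average-cost MDP on a toy two-state, two-action problem (the transition probabilities and costs are invented; this is the textbook LP, not the paper's reduced formulation):

```python
# Hedged sketch: minimize sum_{s,a} c(s,a) x(s,a) over the stationary
# occupation measure x, subject to flow balance and normalization.
import numpy as np
from scipy.optimize import linprog

S, A = 2, 2
# P[s, a, s'] transition probabilities; c[s, a] one-step costs (assumed).
P = np.array([[[0.9, 0.1], [0.4, 0.6]],
              [[0.2, 0.8], [0.5, 0.5]]])
c = np.array([[1.0, 0.3],
              [2.0, 1.5]])

# Balance: sum_a x(s',a) - sum_{s,a} P(s'|s,a) x(s,a) = 0 for each s'.
A_eq = np.zeros((S + 1, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] -= P[s, a, sp]
    for a in range(A):
        A_eq[sp, sp * A + a] += 1.0
A_eq[S, :] = 1.0                      # normalization: sum x = 1
b_eq = np.append(np.zeros(S), 1.0)

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
x = res.x.reshape(S, A)
policy = x.argmax(axis=1)             # action with positive occupation mass per state
print("average cost:", round(res.fun, 4), "policy:", policy)
```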

  16. Performance of Optimally Merged Multisatellite Precipitation Products Using the Dynamic Bayesian Model Averaging Scheme Over the Tibetan Plateau

    Science.gov (United States)

    Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua

    2018-01-01

    Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we proposed a general framework for blending multiple satellite precipitation data using the dynamic Bayesian model averaging (BMA) algorithm. The blended experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized using the expectation-maximization (EM) method for each member on each day at 200 calibrated sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. Thus, the merging data were produced by weighted sums of the individuals over the plateau. The dynamic BMA approach showed better performance with a smaller root-mean-square error (RMSE) of 6.77 mm/day, higher correlation coefficient of 0.592, and closer Euclid value of 0.833, compared to the individuals at 15 validated sites. Moreover, BMA has proven to be more robust in terms of seasonality, topography, and other parameters than traditional ensemble methods including simple model averaging (SMA) and one-outlier removed (OOR). Error analysis between BMA and the state-of-the-art IMERG in the summer of 2014 further proved that the performance of BMA was superior with respect to multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite data in regions with limited gauges.

  17. Quadratic inner element subgrid scale discretisation of the Boltzmann transport equation

    International Nuclear Information System (INIS)

    Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.; Eaton, M.D.; Warner, P.

    2012-01-01

    This paper explores the application of the inner element subgrid scale method to the Boltzmann transport equation using quadratic basis functions. Previously, only linear basis functions for both the coarse scale and the fine scale were considered. This paper, therefore, analyses the advantages of using different coarse and subgrid basis functions for increasing the accuracy of the subgrid scale method. The transport of neutral particle radiation may be described by the Boltzmann transport equation (BTE) which, due to its 7 dimensional phase space, is computationally expensive to resolve. Multi-scale methods offer an approach to efficiently resolve the spatial dimensions of the BTE by separating the solution into its coarse and fine scales and formulating a solution whereby only the computationally efficient coarse scales need to be solved. In previous work an inner element subgrid scale method was developed that applied a linear continuous and discontinuous finite element method to represent the solution’s coarse and fine scale components. This approach was shown to generate efficient and stable solutions, and so this article continues its development by formulating higher order quadratic finite element expansions over the continuous and discontinuous scales. Here it is shown that a solution’s convergence can be improved significantly using higher order basis functions. Furthermore, by using linear finite elements to represent coarse scales in combination with quadratic fine scales, convergence can also be improved with only a modest increase in computational expense.

  18. Study of the binary mixtures of {monoglyme + (hexane, cyclohexane, octane, dodecane)} by ECM-average and PFP models

    International Nuclear Information System (INIS)

    Rivas, M.A.; Buep, A.H.; Iglesias, T.P.

    2015-01-01

    Highlights: • Polarization of the real mixture is less than that of the ideal mixture. • Molar excess volume does not exert the dominant effect on the polarization of the mixture. • Similar influence of molecular interactions on the behaviour of excess permittivity. • Excess molar volume is more influenced by the interactions than excess permittivity. - Abstract: Excess molar volumes and excess permittivity of binary mixtures involving monoglyme and alkanes, such as n-hexane, cyclohexane, n-octane and n-dodecane, were calculated from density and relative permittivity measurements for the entire composition range at several temperatures (288.15, 298.15 and 308.15) K and atmospheric pressure. The excess permittivity was calculated on the basis of a recent definition considering the ideal volume fraction. Empirical equations for describing the experimental data in terms of temperature and concentration are given. The experimental values of permittivity have been compared with those estimated by well-known models from literature. The results have indicated that better predictions are obtained when the volume change on mixing is incorporated in these calculations. The contribution of interactions to the excess permittivity was analysed by means of the ECM-average model. The Prigogine–Flory–Patterson (PFP) theory of the thermodynamics of solutions was used to shed light on the contribution of interactions to the excess molar volume. The work concludes with an interpretation of the information given by the theoretical models and the behaviour of both excess magnitudes

  19. Reduced fractal model for quantitative analysis of averaged micromotions in mesoscale: Characterization of blow-like signals

    International Nuclear Information System (INIS)

    Nigmatullin, Raoul R.; Toboev, Vyacheslav A.; Lino, Paolo; Maione, Guido

    2015-01-01

    Highlights: •A new approach describes fractal-branched systems with long-range fluctuations. •A reduced fractal model is proposed. •The approach is used to characterize blow-like signals. •The approach is tested on data from different fields. -- Abstract: It has been shown that many micromotions in the mesoscale region are averaged in accordance with their self-similar (geometrical/dynamical) structure. This distinctive feature helps to reduce a wide set of different micromotions describing relaxation/exchange processes to an averaged collective motion, expressed mathematically in a rather general form. This reduction opens new perspectives in description of different blow-like signals (BLS) in many complex systems. The main characteristic of these signals is a finite duration also when the generalized reduced function is used for their quantitative fitting. As an example, we describe quantitatively available signals that are generated by bronchial asthmatic people, songs by queen bees, and car engine valves operating in the idling regime. We develop a special treatment procedure based on the eigen-coordinates (ECs) method that allows to justify the generalized reduced fractal model (RFM) for description of BLS that can propagate in different complex systems. The obtained describing function is based on the self-similar properties of the different considered micromotions. This kind of cooperative model is proposed here for the first time. In spite of the fact that the nature of the dynamic processes that take place in fractal structure on a mesoscale level is not well understood, the parameters of the RFM fitting function can be used for construction of calibration curves, affected by various external/random factors. Then, the calculated set of the fitting parameters of these calibration curves can characterize BLS of different complex systems affected by those factors. Though the method to construct and analyze the calibration curves goes beyond the scope

  20. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. (Tanvi Jain)
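
    A minimal sketch of one standard averaging operation on positive definite matrices, the geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, computed for two illustrative 2 × 2 matrices:

```python
# Hedged sketch: matrix geometric mean of two symmetric positive-definite matrices.
import numpy as np
from scipy.linalg import sqrtm, inv

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])

A_half = sqrtm(A)
A_half_inv = inv(A_half)
G = A_half @ sqrtm(A_half_inv @ B @ A_half_inv) @ A_half   # A # B
print(np.real_if_close(G))
```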

  1. Statistical comparison of models for estimating the monthly average daily diffuse radiation at a subtropical African site

    International Nuclear Information System (INIS)

    Bashahu, M.

    2003-01-01

    Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. A 16-year period of data on the global (H) and diffuse (H_d) radiation, together with data on the bright sunshine hours (N), the fraction of the sky covered by clouds (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), have been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts; thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between K_d and N/N_d, Ne/8 or K_t. Even though they present adequate performance, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)

  2. Statistical comparison of models for estimating the monthly average daily diffuse radiation at a subtropical African site

    Energy Technology Data Exchange (ETDEWEB)

    Bashahu, M. [University of Burundi, Bujumbura (Burundi). Institute of Applied Pedagogy, Department of Physics and Technology

    2003-07-01

    Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. A 16-year period of data on the global (H) and diffuse (H{sub d}) radiation, together with data on the bright sunshine hours (N), the fraction of the sky covered by clouds (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), have been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts; thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between K{sub d} and N/N{sub d}, Ne/8 or K{sub t}. Even though they present adequate performance, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)

  3. Analysis for average heat transfer empirical correlation of natural convection on the concentric vertical cylinder modelling of APWR

    International Nuclear Information System (INIS)

    Daddy Setyawan

    2011-01-01

    There are several passive safety systems in the APWR reactor design. One of the passive safety systems is the cooling system with natural air circulation on the surface of the concentric vertical cylinder containment wall. Since the performance of natural air circulation in the Passive Containment Cooling System (PCCS) application is related to safety, the cooling characteristics of natural air circulation on a concentric vertical cylinder containment wall should be studied experimentally. This paper focuses on the experimental study of the heat transfer coefficient of natural air circulation, with the heat flux level varied, on the characteristics of the APWR concentric vertical cylinder containment wall. The procedure of this experimental study is composed of 4 stages as follows: the design of the APWR containment at 1:40 scale, the assembly of the APWR containment with its instrumentation, calibration, and experimentation. The experiments were conducted in transient and steady-state conditions with the heat flux varied from 119 W/m² to 575 W/m². From the experimental results, the average heat transfer empirical correlation of natural convection Nu_L = 0.008 (Ra*_L)^0.68 was obtained for the concentric vertical cylinder geometry modelling of the APWR. (author)
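
    A minimal sketch that evaluates the reported correlation and converts it to an average heat-transfer coefficient h = Nu_L · k / L (the thermal conductivity, characteristic length and Ra*_L values are illustrative assumptions, not the experimental conditions):

```python
# Hedged sketch: evaluate Nu_L = 0.008 (Ra*_L)^0.68 over a range of modified
# Rayleigh numbers and convert to an average heat-transfer coefficient.
k_air = 0.027   # W/(m K), thermal conductivity of air (approximate)
L = 0.5         # m, characteristic wall height of the scale model (assumed)

for Ra_star in (1e7, 1e8, 1e9):
    Nu_L = 0.008 * Ra_star**0.68
    h = Nu_L * k_air / L
    print(f"Ra*_L = {Ra_star:.0e}: Nu_L = {Nu_L:.0f}, h = {h:.0f} W/(m^2 K)")
```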

  4. Large Eddy Simulation and Reynolds-Averaged Navier-Stokes modeling of flow in a realistic pharyngeal airway model: an investigation of obstructive sleep apnea.

    Science.gov (United States)

    Mihaescu, Mihai; Murugappan, Shanmugam; Kalra, Maninder; Khosla, Sid; Gutmark, Ephraim

    2008-07-19

    Computational fluid dynamics techniques employing primarily steady Reynolds-Averaged Navier-Stokes (RANS) methodology have been recently used to characterize the transitional/turbulent flow field in human airways. The use of RANS implies that flow phenomena are averaged over time, the flow dynamics not being captured. Further, RANS uses two-equation turbulence models that are not adequate for predicting anisotropic flows, flows with high streamline curvature, or flows where separation occurs. A more accurate approach for such flow situations that occur in the human airway is Large Eddy Simulation (LES). The paper considers flow modeling in a pharyngeal airway model reconstructed from cross-sectional magnetic resonance scans of a patient with obstructive sleep apnea. The airway model is characterized by a maximum narrowing at the site of retropalatal pharynx. Two flow-modeling strategies are employed: steady RANS and the LES approach. In the RANS modeling framework both k-epsilon and k-omega turbulence models are used. The paper discusses the differences between the airflow characteristics obtained from the RANS and LES calculations. The largest discrepancies were found in the axial velocity distributions downstream of the minimum cross-sectional area. This region is characterized by flow separation and large radial velocity gradients across the developed shear layers. The largest difference in static pressure distributions on the airway walls was found between the LES and the k-epsilon data at the site of maximum narrowing in the retropalatal pharynx.

  5. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    Science.gov (United States)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESM) average over considerable sub-grid heterogeneity in land surface properties, and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimations in ET imply greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties, and also neglect lateral transfer of water across the heterogeneous landscapes at global scale. We use a Budyko framework to express ET as a function of P and PET to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer could affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at global scale shows that the effects of sub-grid heterogeneity will be most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use the Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate it into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for existing free drainage lower boundary condition in GLEAM and quantify whether, and how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at global scale.
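
    A minimal sketch of the concavity argument behind the sub-grid bias, assuming a Turc-Pike form of the Budyko curve and two invented sub-grid cells with inversely correlated P and PET:

```python
# Hedged sketch: ET from the Budyko curve evaluated on resolved sub-grid cells
# versus on the coarse-cell average forcing; the coarse estimate comes out higher.
import numpy as np

def budyko_et(P, PET):
    # Turc-Pike form of the Budyko curve (one common choice, assumed here).
    return P / np.sqrt(1.0 + (P / PET) ** 2)

# Two sub-grid cells where P and PET are inversely correlated (e.g. mountains).
P = np.array([2000.0, 400.0])     # mm/yr
PET = np.array([600.0, 1400.0])   # mm/yr

et_subgrid_mean = budyko_et(P, PET).mean()     # resolve heterogeneity, then average
et_coarse = budyko_et(P.mean(), PET.mean())    # average the forcing first (ESM practice)
print(f"sub-grid ET: {et_subgrid_mean:.0f} mm/yr, coarse-cell ET: {et_coarse:.0f} mm/yr")
```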

  6. Longitudinal analysis of direct and indirect effects on average daily gain in rabbits using a structured antedependence model.

    Science.gov (United States)

    David, Ingrid; Sánchez, Juan-Pablo; Piles, Miriam

    2018-05-10

    Indirect genetic effects (IGE) are important components of various traits in several species. Although the intensity of social interactions between partners likely varies over time, very few genetic studies have investigated how IGE vary over time for traits under selection in livestock species. To address this issue, our aims were: (1) to analyze longitudinal records of average daily gain (ADG) in rabbits subjected to a 5-week period of feed restriction using a structured antedependence (SAD) model that includes IGE, and (2) to evaluate, by simulation, the response to selection when IGE are present and genetic evaluation is based on a SAD model that includes IGE or not. The direct genetic variance for ADG (g/d) increased from week 1 to 3 [from 8.03 to 13.47 (g/d)^2] and then decreased [6.20 (g/d)^2 at week 5], while the indirect genetic variance decreased from week 1 to 4 [from 0.43 to 0.22 (g/d)^2]. The correlation between the direct genetic effects of different weeks was moderate to high (ranging from 0.46 to 0.86) and tended to decrease with the time interval between measurements. The same trend was observed for IGE for weeks 2 to 5 (correlations ranging from 0.62 to 0.91). Estimates of the correlation between IGE of week 1 and IGE of the other weeks did not follow the same pattern, and these correlations were lower. Estimates of correlations between direct and indirect effects were negative at all times. After seven generations of simulated selection, the increase in ADG from selection on EBV from a SAD model that included IGE was higher (~30%) than when those effects were omitted. Indirect genetic effects are larger just after mixing animals at weaning than later in the fattening period, probably because of the establishment of the social hierarchy that is generally observed at that time. Accounting for IGE in the selection criterion maximizes genetic progress.

  7. Modelling of an intermediate pressure microwave oxygen discharge reactor: from stationary two-dimensional to time-dependent global (volume-averaged) plasma models

    International Nuclear Information System (INIS)

    Kemaneci, Efe; Graef, Wouter; Rahimi, Sara; Van Dijk, Jan; Kroesen, Gerrit; Carbone, Emile; Jimenez-Diaz, Manuel

    2015-01-01

    A microwave-induced oxygen plasma is simulated using both stationary and time-resolved modelling strategies. The stationary model is spatially resolved and is self-consistently coupled to the microwaves (Jimenez-Diaz et al 2012 J. Phys. D: Appl. Phys. 45 335204), whereas the time-resolved description is based on a global (volume-averaged) model (Kemaneci et al 2014 Plasma Sources Sci. Technol. 23 045002). The global model data agree with several published measurements of microwave-induced oxygen plasmas for both continuous and modulated power inputs. Properties of the microwave plasma reactor are investigated, and the corresponding simulation data from the two distinct models agree on the common parameters. The role of the square-wave modulated power input is also investigated within the time-resolved description. (paper)

  8. Estimation of Annual Average Soil Loss, Based on Rusle Model in Kallar Watershed, Bhavani Basin, Tamil Nadu, India

    Science.gov (United States)

    Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul

    2015-10-01

    Soil erosion is a widespread environmental challenge in the Kallar watershed. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is removed by the processes of sheet and rill erosion. Land degradation and the subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub-watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates remote sensing and Geographic Information Systems (GIS), coupled with an empirical model (the Revised Universal Soil Loss Equation, RUSLE), can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers for the rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha^-1 y^-1. Based on the results, soil erosion was classified into a severity map with five classes: very low, low, moderate, high and critical. Furthermore, the RUSLE factors were grouped into two categories, and soil erosion susceptibility (A = RKLS) and soil erosion hazard (A = RKLSCP) were computed. C and P are factors that can be controlled and can thus greatly reduce soil loss through management and conservation measures.
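
    As a rough illustration of the raster workflow described above, the sketch below multiplies hypothetical per-cell factor grids according to the RUSLE relation A = R*K*LS*C*P; all factor values are invented rather than taken from the study, and the units follow common RUSLE conventions.

      import numpy as np

      # Hypothetical 2x2 factor grids (in practice derived from rainfall, soil,
      # DEM and land-cover layers in a GIS).
      R  = np.array([[520.0, 610.0], [480.0, 555.0]])   # rainfall erosivity
      K  = np.array([[0.28,  0.31 ], [0.25,  0.30 ]])   # soil erodibility
      LS = np.array([[1.2,   3.4  ], [0.8,   5.1  ]])   # slope length and steepness
      C  = np.array([[0.05,  0.18 ], [0.02,  0.25 ]])   # cover management
      P  = np.array([[1.0,   0.8  ], [1.0,   0.6  ]])   # conservation practice

      susceptibility = R * K * LS          # A = RKLS
      hazard = susceptibility * C * P      # A = RKLSCP, annual soil loss per cell
      print(np.round(susceptibility, 1))   # t ha^-1 y^-1
      print(np.round(hazard, 2))           # t ha^-1 y^-1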

  9. Autonomous Operation of Hybrid Microgrid with AC and DC Sub-Grids

    DEFF Research Database (Denmark)

    Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper investigates the active and reactive power sharing of an autonomous hybrid microgrid. Unlike existing microgrids, which are purely ac, the hybrid microgrid studied here comprises dc and ac sub-grids interconnected by power electronic interfaces. The main challenge is to manage the power flow among all the sources distributed throughout the two types of sub-grids, which is certainly tougher than previous efforts developed for either an ac or a dc microgrid alone. This wider scope of control has not yet been investigated, and it relies on the coordinated operation of dc sources, ac sources and interlinking converters. Suitable control and normalization schemes are therefore developed for controlling them, with results presented to show the overall performance of the hybrid microgrid.

  10. On the influence of cloud fraction diurnal cycle and sub-grid cloud optical thickness variability on all-sky direct aerosol radiative forcing

    International Nuclear Information System (INIS)

    Min, Min; Zhang, Zhibo

    2014-01-01

    The objective of this study is to understand how the cloud fraction diurnal cycle and sub-grid cloud optical thickness variability influence the all-sky direct aerosol radiative forcing (DARF). We focus on the southeast Atlantic region, where transported smoke is often observed above low-level water clouds during burning seasons. We use CALIOP observations to derive the optical properties of aerosols. We developed two diurnal cloud fraction variation models. One is based on sinusoidal fitting of MODIS observations from the Terra and Aqua satellites. The other is based on high-temporal-frequency diurnal cloud fraction observations from SEVIRI on board a geostationary satellite. Both models indicate a strong cloud fraction diurnal cycle over the southeast Atlantic region. Sensitivity studies indicate that using a constant cloud fraction corresponding to the Aqua local equatorial crossing time (1:30 PM) generally leads to an underestimated (less positive) diurnal mean DARF even if solar diurnal variation is considered, whereas using the cloud fraction corresponding to the Terra local equatorial crossing time (10:30 AM) generally leads to overestimation. The biases are typically around 10-20%, but can exceed 50%. The influence of sub-grid cloud optical thickness variability on DARF is studied using the cloud optical thickness histogram available in the MODIS Level-3 daily data. Similar to previous studies, we found that the above-cloud smoke in the southeast Atlantic region has a strong warming effect at the top of the atmosphere. However, because of the plane-parallel albedo bias, the warming effect of above-cloud smoke can be significantly overestimated if the grid mean, instead of the full histogram, of cloud optical thickness is used in the computation. This bias generally increases with increasing above-cloud aerosol optical thickness and sub-grid cloud optical thickness inhomogeneity. Our results suggest that the cloud diurnal cycle and sub-grid cloud variability are important factors to account for in estimating the all-sky DARF.

  11. Impact of Sub-grid Soil Textural Properties on Simulations of Hydrological Fluxes at the Continental Scale Mississippi River Basin

    Science.gov (United States)

    Kumar, R.; Samaniego, L. E.; Livneh, B.

    2013-12-01

    Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and to provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically, large-scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km2) at multiple spatio-temporal resolutions. A set of numerical experiments was conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States, STATSGO2 (1:250 000), and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the period 1980-2012. Preliminary results indicate that the model captured observed streamflow behavior reasonably well with both soil datasets in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from the two simulations showed marked differences.

  12. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences

    Directory of Open Access Journals (Sweden)

    Ji-Yong An

    2016-05-01

    Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly important, which has prompted the development of technologies capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, they have unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements come from representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous works. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets (C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli) for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed
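
    The feature-construction step can be pictured with the following sketch. It assumes 20 row blocks per PSSM and uses scikit-learn's PCA for the noise-reduction step; the block count, array shapes and random inputs are illustrative assumptions, and the RVM classifier itself (not part of scikit-learn) is omitted.

      import numpy as np
      from sklearn.decomposition import PCA

      def average_blocks(pssm, n_blocks=20):
          # Fixed-length feature from an L x 20 PSSM: split the rows into
          # n_blocks contiguous blocks and average each block's rows.
          blocks = np.array_split(pssm, n_blocks, axis=0)
          return np.concatenate([b.mean(axis=0) for b in blocks])  # 20 * n_blocks values

      rng = np.random.default_rng(1)
      pssms = [rng.normal(size=(length, 20)) for length in (113, 257, 342)]  # fake PSSMs
      X = np.vstack([average_blocks(p) for p in pssms])

      X_reduced = PCA(n_components=2).fit_transform(X)   # noise/dimension reduction
      print(X.shape, "->", X_reduced.shape)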

  13. Fleet average NOx emission performance of 2004 model year light-duty vehicles, light-duty trucks and medium-duty passenger vehicles

    International Nuclear Information System (INIS)

    2006-05-01

    The On-Road Vehicle and Engine Emission Regulations came into effect on January 1, 2004. The regulations introduced more stringent national emission standards for on-road vehicles and engines, and also required that companies submit reports containing information concerning their fleets. This report presents a summary of the regulatory requirements relating to the oxides of nitrogen (NOx) fleet average emissions for light-duty vehicles, light-duty trucks, and medium-duty passenger vehicles under the new regulations. The effectiveness of the Canadian fleet average NOx emission program at achieving its environmental performance objectives is also evaluated. A summary of the fleet average NOx emission performance of individual companies is presented, as well as the overall Canadian fleet average for the 2004 model year based on data submitted by companies in their end-of-model-year reports. A total of 21 companies submitted reports covering 2004 model year vehicles in 10 test groups, comprising 1,350,719 vehicles of the 2004 model year manufactured or imported for the purpose of sale in Canada. The average NOx value for the entire Canadian LDV/LDT fleet was 0.2016463 grams per mile. The average NOx value for the entire Canadian HLDT/MDPV fleet was 0.321976 grams per mile. It is concluded that the NOx values for both fleets were consistent with the environmental performance objectives of the regulations for the 2004 model year. 9 tabs
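
    A fleet average of this kind is a sales-weighted mean of the per-test-group NOx values. The sketch below uses invented test-group figures rather than the reported Canadian data.

      # Hypothetical (NOx value in g/mi, units sold) pairs for three test groups.
      test_groups = [(0.07, 420_000), (0.20, 610_000), (0.30, 320_719)]

      total_units = sum(units for _, units in test_groups)
      fleet_avg = sum(nox * units for nox, units in test_groups) / total_units
      print(f"sales-weighted fleet average NOx: {fleet_avg:.4f} g/mi")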

  14. Alpha-modeling strategy for LES of turbulent mixing

    NARCIS (Netherlands)

    Geurts, Bernard J.; Holm, Darryl D.; Drikakis, D.; Geurts, B.J.

    2002-01-01

    The α-modeling strategy is followed to derive a new subgrid parameterization of the turbulent stress tensor in large-eddy simulation (LES). The LES-α modeling yields an explicitly filtered subgrid parameterization which contains the filtered nonlinear gradient model as well as a model which

  15. Cloud-In-Cell modeling of shocked particle-laden flows at a ``SPARSE'' cost

    Science.gov (United States)

    Taverniers, Soren; Jacobs, Gustaaf; Sen, Oishik; Udaykumar, H. S.

    2017-11-01

    A common tool for enabling process-scale simulations of shocked particle-laden flows is Eulerian-Lagrangian Particle-Source-In-Cell (PSIC) modeling where each particle is traced in its Lagrangian frame and treated as a mathematical point. Its dynamics are governed by Stokes drag corrected for high Reynolds and Mach numbers. The computational burden is often reduced further through a ``Cloud-In-Cell'' (CIC) approach which amalgamates groups of physical particles into computational ``macro-particles''. CIC does not account for subgrid particle fluctuations, leading to erroneous predictions of cloud dynamics. A Subgrid Particle-Averaged Reynolds-Stress Equivalent (SPARSE) model is proposed that incorporates subgrid interphase velocity and temperature perturbations. A bivariate Gaussian source distribution, whose covariance captures the cloud's deformation to first order, accounts for the particles' momentum and energy influence on the carrier gas. SPARSE is validated by conducting tests on the interaction of a particle cloud with the accelerated flow behind a shock. The cloud's average dynamics and its deformation over time predicted with SPARSE converge to their counterparts computed with reference PSIC models as the number of Gaussians is increased from 1 to 16. This work was supported by AFOSR Grant No. FA9550-16-1-0008.

  16. Global (volume-averaged) model of inductively coupled chlorine plasma : influence of Cl wall recombination and external heating on continuous and pulse-modulated plasmas

    NARCIS (Netherlands)

    Kemaneci, E.H.; Carbone, E.A.D.; Booth, J.P.; Graef, W.A.A.D.; Dijk, van J.; Kroesen, G.M.W.

    An inductively coupled radio-frequency plasma in chlorine is investigated via a global (volume-averaged) model, both in continuous and square wave modulated power input modes. After the power is switched off (in a pulsed mode) an ion–ion plasma appears. In order to model this phenomenon, a novel

  17. FDTD calculation of whole-body average SAR in adult and child models for frequencies from 30 MHz to 3 GHz

    International Nuclear Information System (INIS)

    Wang Jianqing; Fujiwara, Osamu; Kodera, Sachiko; Watanabe, Soichi

    2006-01-01

    Due to the difficulty of the specific absorption rate (SAR) measurement in an actual human body for electromagnetic radio-frequency (RF) exposure, various compliance assessment procedures use the incident electric field or power density as a reference level, which should never yield a larger whole-body average SAR than the basic safety limit. The relationship between the reference level and the whole-body average SAR, however, was established mainly from numerical calculations for highly simplified human models dozens of years ago. Its validity is being questioned by the latest calculation results. In verifying the validity of the reference level with respect to the basic SAR limit for RF exposure, it is essential to have high accuracy in the human modelling and the numerical code. In this study, we made a detailed error analysis of the whole-body average SAR calculation for the finite-difference time-domain (FDTD) method in conjunction with perfectly matched layer (PML) absorbing boundaries. We derived a basic rule for the PML employment based on a dielectric sphere and the Mie theory solution. We then attempted to clarify how large the whole-body average SAR may become, using an anatomically based Japanese adult model and a scaled child model. The results show that the whole-body average SAR under the ICNIRP reference level exceeds the basic safety limit by nearly 30% for the child model in both the resonance frequency and 2 GHz bands.
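
    For orientation, the whole-body average SAR is the total absorbed power divided by the total body mass, accumulated voxel by voxel from the FDTD field solution. The sketch below assumes uniform cubic voxels and invented material arrays; it is not the authors' code and all parameter values are placeholders.

      import numpy as np

      def whole_body_average_sar(e_peak, sigma, rho, cell_volume):
          # e_peak: peak E-field magnitude per voxel (V/m); sigma: conductivity (S/m);
          # rho: mass density (kg/m^3), zero outside the body; cell_volume in m^3.
          power = 0.5 * sigma * e_peak**2 * cell_volume   # absorbed power per voxel (W)
          mass = rho * cell_volume                        # voxel mass (kg)
          body = rho > 0
          return power[body].sum() / mass[body].sum()     # W/kg

      rng = np.random.default_rng(2)
      shape = (60, 30, 30)                                # toy voxel grid, 2 mm cells
      rho = np.where(rng.random(shape) > 0.3, 1000.0, 0.0)
      sigma = np.where(rho > 0, 0.9, 0.0)
      e_peak = rng.uniform(0.0, 2.0, shape)
      print(whole_body_average_sar(e_peak, sigma, rho, (2e-3) ** 3), "W/kg")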

  18. FDTD calculation of whole-body average SAR in adult and child models for frequencies from 30 MHz to 3 GHz

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jianqing [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan); Fujiwara, Osamu [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan); Kodera, Sachiko [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan); Watanabe, Soichi [National Institute of Information and Communications Technology, Nukui-kitamachi, Koganei, Tokyo 184-8795 (Japan)

    2006-09-07

    Due to the difficulty of the specific absorption rate (SAR) measurement in an actual human body for electromagnetic radio-frequency (RF) exposure, various compliance assessment procedures use the incident electric field or power density as a reference level, which should never yield a larger whole-body average SAR than the basic safety limit. The relationship between the reference level and the whole-body average SAR, however, was established mainly from numerical calculations for highly simplified human models dozens of years ago. Its validity is being questioned by the latest calculation results. In verifying the validity of the reference level with respect to the basic SAR limit for RF exposure, it is essential to have high accuracy in the human modelling and the numerical code. In this study, we made a detailed error analysis of the whole-body average SAR calculation for the finite-difference time-domain (FDTD) method in conjunction with perfectly matched layer (PML) absorbing boundaries. We derived a basic rule for the PML employment based on a dielectric sphere and the Mie theory solution. We then attempted to clarify how large the whole-body average SAR may become, using an anatomically based Japanese adult model and a scaled child model. The results show that the whole-body average SAR under the ICNIRP reference level exceeds the basic safety limit by nearly 30% for the child model in both the resonance frequency and 2 GHz bands.

  19. Reconstruction of head-to-knee voxel model for Syrian adult male of average height and weight

    Directory of Open Access Journals (Sweden)

    Bashira Taleb

    2015-06-01

    Conclusion: Comparisons with the SAF data of the Zubal model underline the fact that organ masses and the specific anatomy have a significant effect on SAFs. The SyrMan model can be considered the first model built in the Middle East region, and it is an important step toward the Syrian Reference Man.

  20. Correction of Excessive Precipitation over Steep and High Mountains in a GCM: A Simple Method of Parameterizing the Thermal Effects of Subgrid Topographic Variation

    Science.gov (United States)

    Chao, Winston C.

    2015-01-01

    The excessive precipitation over steep and high mountains (EPSM) in GCMs and meso-scale models is due to a lack of parameterization of the thermal effects of the subgrid-scale topographic variation. These thermal effects drive subgrid-scale heated slope induced vertical circulations (SHVC). SHVC provide a ventilation effect of removing heat from the boundary layer of resolvable-scale mountain slopes and depositing it higher up. The lack of SHVC parameterization is the cause of EPSM. The author has previously proposed a method of parameterizing SHVC, here termed SHVC.1. Although this has been successful in avoiding EPSM, the drawback of SHVC.1 is that it suppresses convective type precipitation in the regions where it is applied. In this article we propose a new method of parameterizing SHVC, here termed SHVC.2. In SHVC.2 the potential temperature and mixing ratio of the boundary layer are changed when used as input to the cumulus parameterization scheme over mountainous regions. This allows the cumulus parameterization to assume the additional function of SHVC parameterization. SHVC.2 has been tested in NASA Goddard's GEOS-5 GCM. It achieves the primary goal of avoiding EPSM while also avoiding the suppression of convective-type precipitation in regions where it is applied.

  1. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    Science.gov (United States)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties at any fixed LES resolution.

  2. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  3. Development of a new dynamic turbulent model, applications to two-dimensional and plane parallel flows

    International Nuclear Information System (INIS)

    Laval, Jean Philippe

    1999-01-01

    We developed a turbulence model based on an asymptotic expansion of the Navier-Stokes equations under the hypothesis of non-local interactions at small scales. This model provides expressions for the turbulent sub-grid Reynolds stresses via estimates of the sub-grid velocities, rather than velocity correlations as is usually done. The model involves the coupling of two dynamical equations: one for the resolved scales of motion, which depends upon the Reynolds stresses generated by the sub-grid motions, and one for the sub-grid scales of motion, which can be used to compute the sub-grid Reynolds stresses. The non-locality of the interactions at sub-grid scales allows their evolution to be modelled with a linear inhomogeneous equation, where the forcing occurs via the energy cascade from resolved to sub-grid scales. The model was solved using a decomposition of the sub-grid scales on Gabor modes and implemented numerically in 2D with periodic boundary conditions. A particle-in-cell (PIC) method was used to compute the sub-grid scales. The results were compared with direct simulations for several typical flows. The model was also applied to plane parallel flows. An analytical study of the equations yields mean velocity profiles in agreement with experimental results and with theoretical results based on the symmetries of the Navier-Stokes equations. Possible applications and improvements of the model are discussed in the conclusion. (author) [fr

  4. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is also extended to the case in which there is a neutron gas instead of a vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, up to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  5. Analytical explicit formulas of average run length for long memory process with ARFIMA model on CUSUM control chart

    Directory of Open Access Journals (Sweden)

    Wilasinee Peerajit

    2017-12-01

    This paper proposes explicit formulas for the exact Average Run Length (ARL), derived from an integral equation, for a CUSUM control chart when the observations are long-memory processes with exponential white noise. To verify the accuracy of the ARLs, the authors compared, in terms of the percentage of absolute difference, the values obtained from the explicit formulas with those from the numerical integral equation (NIE) method. The explicit formulas are based on the Banach fixed-point theorem, which guarantees the existence and uniqueness of the solution for the ARFIMA(p,d,q) process. Results showed that the two methods are in good agreement, with a percentage of absolute difference of less than 0.23%. The explicit formulas are therefore an efficient alternative for real applications, because the CPU time needed to compute the ARLs from the explicit formulas is about one second, which makes them preferable to the NIE method.

  6. Disease Mapping and Regression with Count Data in the Presence of Overdispersion and Spatial Autocorrelation: A Bayesian Model Averaging Approach

    Science.gov (United States)

    Mohebbi, Mohammadreza; Wolfe, Rory; Forbes, Andrew

    2014-01-01

    This paper applies generalised linear models for geographical variation to esophageal cancer incidence data in the Caspian region of Iran. The data have a complex and hierarchical structure that makes them suitable for hierarchical analysis using Bayesian techniques, but care is required to deal with problems arising from counts of events observed in small geographical areas when overdispersion and residual spatial autocorrelation are present. These considerations lead to nine regression models derived from three probability distributions for count data (Poisson, generalised Poisson and negative binomial) and three different autocorrelation structures. We employ the framework of Bayesian variable selection and a Gibbs-sampling-based technique to identify significant cancer risk factors. The framework deals with situations where the number of possible models, based on different combinations of candidate explanatory variables, is so large that calculating posterior probabilities for all models is difficult or infeasible. The evidence from applying the modelling methodology suggests that modelling strategies based on the generalised Poisson and negative binomial distributions with spatial autocorrelation work well and provide a robust basis for inference. PMID:24413702

  7. Comparison of arma (autoregressive moving average) with mtm (markov transition matrix): modeling simulation and prediction of wind data

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2009-01-01

    A generalized theory of ARMA modeling, covering a wide range of research on model identification, order determination, estimation and diagnostic checking, is presented. The wind data were standardized to overcome non-stationarity. Using our technique for generating synthetic wind series with the MTM, we modeled and simulated the autocorrelation function (ACF). The MTM was found to be a relatively better simulator than ARMA. We used twenty years of wind data. The MTM required fast computation and a suitable algorithm for backward calculations to yield the ACF values. We found the ARMA(p, q) model suitable for both the longer range (1-6 hours) and the short range (1-2 hours). This indicates that the forecast values can be used for an appropriate wind energy conversion system. (author)

  8. The Prediction of Exchange Rates with the Use of Auto-Regressive Integrated Moving-Average Models

    Directory of Open Access Journals (Sweden)

    Daniela Spiesová

    2014-10-01

    The currency market is currently the largest market in the world, and over its existence many theories have been proposed for predicting the development of exchange rates based on macroeconomic, microeconomic, statistical and other models. The aim of this paper is to identify an adequate model for the prediction of non-stationary time series of exchange rates and then to use this model to predict the trend of European currencies against the Euro. The distinctive feature of this paper is that, while there are many expert studies dealing with the prediction of exchange rates between the American dollar and other currencies, only a limited number of scientific studies are concerned with the long-term prediction of European currencies using integrated ARMA models, even though the development of exchange rates has a crucial impact on all levels of the economy, and its prediction is an important indicator for individual countries, banks, companies and businessmen as well as for investors. The results of this study confirm that, to predict the conditional variance and then to estimate the future values of exchange rates, it is adequate to use the ARIMA(1,1,1) model without a constant, or the ARIMA[(1,7),1,(1,7)] model, where in the long term the square root of the conditional variance tends towards a stable value.
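
    A minimal sketch of fitting the suggested ARIMA(1,1,1) specification without a constant, assuming the statsmodels library and a synthetic monthly exchange-rate series in place of the study's data:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(3)
      rate = pd.Series(25.0 + np.cumsum(rng.normal(0.0, 0.15, 240)),
                       index=pd.date_range("2000-01-01", periods=240, freq="MS"))

      fit = ARIMA(rate, order=(1, 1, 1), trend="n").fit()   # ARIMA(1,1,1), no constant
      print(fit.params)
      print(fit.forecast(steps=12))                         # one-year-ahead forecast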

  9. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    Science.gov (United States)

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853

  10. 77 FR 62623 - 2017 and Later Model Year Light-Duty Vehicle Greenhouse Gas Emissions and Corporate Average Fuel...

    Science.gov (United States)

    2012-10-15

    ... changes to the regulations applicable to model years 2012-2016, with respect to air conditioner... standards for emissions of pollutants from new motor vehicles which emissions cause or contribute to air... same improvements in air conditioner efficiency. \\5\\ This is further broken down by 5.0 and 7.2 g/mi...

  11. 77 FR 2028 - 2017 and Later Model Year Light-Duty Vehicle Greenhouse Gas Emissions and Corporate Average Fuel...

    Science.gov (United States)

    2012-01-13

    .../otaq/climate/regulations.htm or by searching the public dockets (NHTSA-2010-0131 (for the proposed rule... EPA's Web site at http://www.epa.gov/otaq/climate/regulations.htm . NHTSA and EPA will consider all... vehicles for model years 2017-2025. On May 21, 2010, President Obama issued a Presidential Memorandum...

  12. Diagnosing the average spatio-temporal impact of convective systems – Part 1: A methodology for evaluating climate models

    Directory of Open Access Journals (Sweden)

    M. S. Johnston

    2013-12-01

    An earlier method to determine the mean response of upper-tropospheric water to localised deep convective systems (DC systems) is improved and applied to the EC-Earth climate model. Following Zelinka and Hartmann (2009), several fields related to moist processes and radiation from various satellites are composited with respect to the local maxima in rain rate to determine their spatio-temporal evolution with deep convection in the central Pacific Ocean. Major improvements over the earlier study are the isolation of DC systems in time, so as to prevent multiple sampling of the same event, and a revised definition of the mean background state that allows for better characterisation of the DC-system-induced anomalies. The observed DC systems in this study propagate westward at ~4 m s−1. Both the upper-tropospheric relative humidity and the outgoing longwave radiation are substantially perturbed over a broad horizontal extent and for periods >30 h. The cloud fraction anomaly is fairly constant with height, but a small maximum can be seen around 200 hPa. The cloud ice water content anomaly is mostly confined to pressures greater than 150 hPa and reaches its maximum around 450 hPa, a few hours after the peak convection. Consistent with the large increase in upper-tropospheric cloud ice water content, the albedo increases dramatically and the increase persists for about 30 h after peak convection. Applying the compositing technique to EC-Earth allows an assessment of the model representation of DC systems. The model captures the large-scale responses, most notably for outgoing longwave radiation, but there are a number of important differences. DC systems appear to propagate eastward in the model, suggesting a strong link to Kelvin waves instead of equatorial Rossby waves. The diurnal cycle in the model is more pronounced and appears to trigger new convection further to the west each time. Finally, the modelled ice water content anomaly peaks at pressures greater than 500 hPa.

  13. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Science.gov (United States)

    Erbe, Malena; Gredler, Birgit; Seefried, Franz Reinhold; Bapst, Beat; Simianer, Henner

    2013-01-01

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.

  14. A function accounting for training set size and marker density to model the average accuracy of genomic prediction.

    Directory of Open Access Journals (Sweden)

    Malena Erbe

    Prediction of genomic breeding values is of major practical relevance in dairy cattle breeding. Deterministic equations have been suggested to predict the accuracy of genomic breeding values in a given design which are based on training set size, reliability of phenotypes, and the number of independent chromosome segments ([Formula: see text]). The aim of our study was to find a general deterministic equation for the average accuracy of genomic breeding values that also accounts for marker density and can be fitted empirically. Two data sets of 5'698 Holstein Friesian bulls genotyped with 50 K SNPs and 1'332 Brown Swiss bulls genotyped with 50 K SNPs and imputed to ∼600 K SNPs were available. Different k-fold (k = 2-10, 15, 20) cross-validation scenarios (50 replicates, random assignment) were performed using a genomic BLUP approach. A maximum likelihood approach was used to estimate the parameters of different prediction equations. The highest likelihood was obtained when using a modified form of the deterministic equation of Daetwyler et al. (2010), augmented by a weighting factor (w) based on the assumption that the maximum achievable accuracy is [Formula: see text]. The proportion of genetic variance captured by the complete SNP sets ([Formula: see text]) was 0.76 to 0.82 for Holstein Friesian and 0.72 to 0.75 for Brown Swiss. When modifying the number of SNPs, w was found to be proportional to the log of the marker density up to a limit which is population and trait specific and was found to be reached with ∼20'000 SNPs in the Brown Swiss population studied.

  15. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    Science.gov (United States)

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  16. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    Science.gov (United States)

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
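
    The model-averaging step itself can be sketched as a posterior-probability-weighted combination of per-model risk estimates, with a total variance that adds between-model spread to within-model uncertainty; all numbers below are invented for illustration and are not the study's estimates.

      import numpy as np

      p_model = np.array([0.45, 0.30, 0.15, 0.10])   # posterior model probabilities
      rr_mean = np.array([1.08, 1.05, 1.12, 1.01])   # relative risk estimate per model
      rr_sd   = np.array([0.02, 0.03, 0.05, 0.04])   # posterior SD per model

      bma_mean = np.sum(p_model * rr_mean)
      bma_var = np.sum(p_model * rr_sd**2) + np.sum(p_model * (rr_mean - bma_mean)**2)
      print(f"BMA relative risk: {bma_mean:.3f} +/- {np.sqrt(bma_var):.3f}")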

  17. QSPR models based on molecular mechanics and quantum chemical calculations. 1. Construction of Boltzmann averaged descriptors for alkanes, alcohols, diols, ethers and cyclic compounds

    DEFF Research Database (Denmark)

    Dyekjær, Jane Dannow; Rasmussen, Kjeld; Jonsdottir, Svava Osk

    2002-01-01

    Values for nine descriptors for QSPR (quantitative structure-property relationships) modeling of physical properties of 96 alkanes, alcohols, ethers, diols, triols and cyclic alkanes and alcohols in conjunction with the program Codessa are presented. The descriptors are Boltzmann-averaged by sele...
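
    Boltzmann averaging weights each conformer's descriptor value by its Boltzmann factor exp(-E_i/RT). The sketch below uses invented conformer energies and descriptor values; only the weighting scheme is meant to be illustrative.

      import numpy as np

      def boltzmann_average(values, energies_kj_mol, temperature=298.15):
          # Weight each conformer's descriptor value by its Boltzmann factor.
          rt = 8.314462618e-3 * temperature                       # kJ/mol
          e = np.asarray(energies_kj_mol, dtype=float)
          w = np.exp(-(e - e.min()) / rt)
          return float(np.sum(w * np.asarray(values, dtype=float)) / np.sum(w))

      # Hypothetical conformers of one molecule: relative energies and one descriptor.
      print(boltzmann_average(values=[3.2, 2.7, 2.9], energies_kj_mol=[0.0, 1.5, 4.0]))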

  18. Performance of a Bounce-Averaged Global Model of Super-Thermal Electron Transport in the Earth's Magnetic Field

    Science.gov (United States)

    McGuire, Tim

    1998-01-01

    In this paper, we report the results of our recent research on the application of a multiprocessor Cray T916 supercomputer to modeling super-thermal electron transport in the earth's magnetic field. In general, this mathematical model requires the numerical solution of a system of partial differential equations. The code we use for this model is moderately vectorized. By using Amdahl's Law for vector processors, it can be verified that the code is about 60% vectorized on a Cray computer. Speedup factors on the order of 2.5 were obtained compared to the unvectorized code. In the following sections, we discuss the methodology of improving the code. In addition to our goal of optimizing the code for solution on the Cray computer, we had the goal of scalability in mind. Scalability combines the concept of portability with near-linear speedup. Specifically, a scalable program is one whose performance is portable across many different architectures with differing numbers of processors for many different problem sizes. Though we have access to a Cray at this time, the goal was also to have code which would run well on a variety of architectures.
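
    The reported factor of about 2.5 is consistent with Amdahl's law for a roughly 60% vectorized code: with a fraction f of the work running on the vector unit at speedup s, the overall speedup is 1/((1-f)+f/s), which approaches 1/0.4 = 2.5 as s grows. A quick check (illustrative values only):

      def vector_speedup(f_vec, s_vec):
          # Amdahl's law: f_vec = vectorized fraction, s_vec = vector-unit speedup.
          return 1.0 / ((1.0 - f_vec) + f_vec / s_vec)

      print(vector_speedup(0.6, 1e9))   # asymptotic limit, ~2.5
      print(vector_speedup(0.6, 10.0))  # ~2.17 for a finite vector speedup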

  19. A novel variational Bayes multiple locus Z-statistic for genome-wide association studies with Bayesian model averaging

    Science.gov (United States)

    Logsdon, Benjamin A.; Carty, Cara L.; Reiner, Alexander P.; Dai, James Y.; Kooperberg, Charles

    2012-01-01

    Motivation: For many complex traits, including height, the majority of variants identified by genome-wide association studies (GWAS) have small effects, leaving a significant proportion of the heritable variation unexplained. Although many penalized multiple regression methodologies have been proposed to increase the power to detect associations for complex genetic architectures, they generally lack mechanisms for false-positive control and diagnostics for model over-fitting. Our methodology is the first penalized multiple regression approach that explicitly controls Type I error rates and provides model over-fitting diagnostics through a novel normally distributed statistic defined for every marker within the GWAS, based on results from a variational Bayes spike regression algorithm. Results: We compare the performance of our method to the lasso and single marker analysis on simulated data and demonstrate that our approach has superior performance in terms of power and Type I error control. In addition, using the Women's Health Initiative (WHI) SNP Health Association Resource (SHARe) GWAS of African-Americans, we show that our method has power to detect additional novel associations with body height. These findings replicate by reaching a stringent cutoff of marginal association in a larger cohort. Availability: An R-package, including an implementation of our variational Bayes spike regression (vBsr) algorithm, is available at http://kooperberg.fhcrc.org/soft.html. Contact: blogsdon@fhcrc.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22563072

  20. A systematic comparison of two-equation Reynolds-averaged Navier-Stokes turbulence models applied to shock-cloud interactions

    Science.gov (United States)

    Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.

    2017-07-01

    Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.

  1. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    Science.gov (United States)

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m(-2) day(-1), and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
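
    The overestimation by the simple average daily model follows from the concave response of photosynthesis to its drivers. The sketch below contrasts integrating an illustrative rectangular-hyperbola light response over a diurnal PAR course (IDM-like) with evaluating it once at the daily-mean PAR (SADM-like); the response function and all numbers are assumptions, not the coupled photosynthesis-stomatal conductance model used in the study.

      import numpy as np

      def gpp_inst(par, alpha=0.05, pmax=30.0):
          # Illustrative rectangular-hyperbola light response (umol CO2 m-2 s-1).
          return pmax * alpha * par / (pmax + alpha * par)

      dt_hours = 0.25
      t = np.arange(0.0, 24.0, dt_hours)                                    # hours
      par = np.clip(1800.0 * np.sin(np.pi * (t - 6.0) / 12.0), 0.0, None)   # diurnal PAR

      to_g_c = 1e-6 * 12.011                                         # umol CO2 -> g C
      gpp_idm = (gpp_inst(par) * dt_hours * 3600.0).sum() * to_g_c   # integrate diurnal course
      gpp_sadm = gpp_inst(par.mean()) * 86400.0 * to_g_c             # daily-mean forcing
      print(f"IDM-like : {gpp_idm:5.1f} g C m-2 d-1")
      print(f"SADM-like: {gpp_sadm:5.1f} g C m-2 d-1 (biased high)")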

  2. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  3. P-wave velocity changes in freezing hard low-porosity rocks: a laboratory-based time-average model

    Directory of Open Access Journals (Sweden)

    D. Draebing

    2012-10-01

    P-wave refraction seismics is a key method in permafrost research, but its applicability to low-porosity rocks, which constitute alpine rock walls, has been denied in prior studies. These studies explain p-wave velocity changes in freezing rocks exclusively by the changing velocities of the pore infill, i.e. water, air and ice. In existing models, no significant velocity increase is expected for low-porosity bedrock. We postulate that mixing laws apply for high-porosity rocks, but that freezing in the confined space of low-porosity bedrock also alters the physical properties of the rock matrix. In the laboratory, we measured p-wave velocities of 22 decimetre-scale low-porosity (< 10%) metamorphic, magmatic and sedimentary rock samples from permafrost sites with a natural texture (> 100 micro-fissures), from 25 °C to −15 °C in 0.3 °C increments close to the freezing point. When freezing, the p-wave velocity increases by 11–166% perpendicular to cleavage/bedding, equivalent to a matrix velocity increase of 11–200%, coincident with an anisotropy decrease in most samples. The expansion of rigid bedrock upon freezing is restricted, and ice pressure will increase the matrix velocity and decrease the anisotropy, while the changing velocities of the pore infill are insignificant. Here, we present a modified Timur two-phase equation implementing changes in matrix velocity dependent on lithology, and we demonstrate the general applicability of refraction seismics to differentiate frozen and unfrozen low-porosity bedrock.
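
    For orientation, a two-phase time-average relation of the Timur/Wyllie type reads 1/V = (1 - phi)/V_matrix + phi/V_pore; the modification proposed in the paper additionally lets V_matrix increase on freezing in a lithology-dependent way. The sketch below uses the basic relation with invented velocities, not the paper's fitted parameters.

      def p_wave_velocity(phi, v_matrix, v_pore):
          # Two-phase time-average relation: 1/V = (1 - phi)/v_matrix + phi/v_pore.
          return 1.0 / ((1.0 - phi) / v_matrix + phi / v_pore)

      phi = 0.05                                   # low-porosity bedrock
      print(p_wave_velocity(phi, 5500.0, 1500.0))  # unfrozen: water-filled pores (m/s)
      print(p_wave_velocity(phi, 6200.0, 3500.0))  # frozen: ice in pores, stiffer matrix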

  4. Site-specific dissociation dynamics of H2/D2 on Ag(111) and Co(0001) and the validity of the site-averaging model

    International Nuclear Information System (INIS)

    Hu, Xixi; Jiang, Bin; Xie, Daiqian; Guo, Hua

    2015-01-01

    Dissociative chemisorption of polyatomic molecules on metal surfaces involves high-dimensional dynamics, of which quantum mechanical treatments are computationally challenging. A promising reduced-dimensional approach approximates the full-dimensional dynamics by a weighted average of fixed-site results. To examine the performance of this site-averaging model, we investigate two distinct reactions, namely, hydrogen dissociation on Co(0001) and Ag(111), using accurate first principles potential energy surfaces (PESs). The former has a very low barrier of ∼0.05 eV while the latter is highly activated with a barrier of ∼1.15 eV. These two systems allow the investigation of not only site-specific dynamical behaviors but also the validity of the site-averaging model. It is found that the reactivity is not only controlled by the barrier height but also by the topography of the PES. Moreover, the agreement between the site-averaged and full-dimensional results is much better on Ag(111), though quantitative in neither system. Further quasi-classical trajectory calculations showed that the deviations can be attributed to dynamical steering effects, which are present in both reactions at all energies
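
    The site-averaging approximation itself is a weighted mean of fixed-site dissociation probabilities over the surface unit cell; the sketch below uses invented site probabilities and weights purely for illustration.

      import numpy as np

      def site_averaged_probability(site_probs, site_weights):
          # Weighted average of fixed-site dissociation probabilities.
          w = np.asarray(site_weights, dtype=float)
          return float(np.dot(np.asarray(site_probs, dtype=float), w / w.sum()))

      # Hypothetical fixed-site sticking probabilities (top, bridge, fcc, hcp sites).
      print(site_averaged_probability([0.02, 0.11, 0.27, 0.24], [1.0, 3.0, 2.0, 2.0]))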

  5. Determination of the Orientation and Dynamics of Ergosterol in Model Membranes Using Uniform 13C Labeling and Dynamically Averaged 13C Chemical Shift Anisotropies as Experimental Restraints

    Science.gov (United States)

    Soubias, O.; Jolibois, F.; Massou, S.; Milon, A.; Réat, V.

    2005-01-01

    A new strategy was established to determine the average orientation and dynamics of ergosterol in dimyristoylphosphatidylcholine model membranes. It is based on the analysis of chemical shift anisotropies (CSAs) averaged by the molecular dynamics. Static 13C CSA tensors were computed by quantum chemistry, using the gauge-including atomic-orbital approach within Hartree-Fock theory. Uniformly 13C-labeled ergosterol was purified from Pichia pastoris cells grown on labeled methanol. After reconstitution into dimyristoylphosphatidylcholine lipids, the complete 1H and 13C assignment of ergosterol's resonances was performed using a combination of magic-angle spinning two-dimensional experiments. Dynamically averaged CSAs were determined by standard side-band intensity analysis for isolated 13C resonances (C3 and ethylenic carbons) and by off-magic-angle spinning experiments for other carbons. A set of 18 constraints was thus obtained, from which the sterol's molecular order parameter and average orientation could be precisely defined. The validity of using computed CSAs in this strategy was verified on cholesterol model systems. This new method allowed us to quantify ergosterol's dynamics at three molar ratios: 16 mol % (Ld phase), 30 mol % (Lo phase), and 23 mol % (mixed phases). Contrary to cholesterol, ergosterol's molecular diffusion axis makes an important angle (14°) with the inertial axis of the rigid four-ring system. PMID:15923221

  6. A highly detailed FEM volume conductor model based on the ICBM152 average head template for EEG source imaging and TCS targeting.

    Science.gov (United States)

    Haufe, Stefan; Huang, Yu; Parra, Lucas C

    2015-08-01

    In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6 tissue type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is similarly accurate as individual BEMs. Moreover, through using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.

  7. Average current is better than peak current as therapeutic dosage for biphasic waveforms in a ventricular fibrillation pig model of cardiac arrest.

    Science.gov (United States)

    Chen, Bihua; Yu, Tao; Ristagno, Giuseppe; Quan, Weilun; Li, Yongqin

    2014-10-01

    Defibrillation current has been shown to be a clinically more relevant dosing unit than energy. However, the effects of average and peak current in determining shock outcome are still undetermined. The aim of this study was to investigate the relationship between average current, peak current and defibrillation success when different biphasic waveforms were employed. Ventricular fibrillation (VF) was electrically induced in 22 domestic male pigs. Animals were then randomized to receive defibrillation using one of two different biphasic waveforms. A grouped up-and-down defibrillation threshold-testing protocol was used to maintain the success rate in the neighborhood of 50%. In 14 animals (Study A), defibrillations were accomplished with either biphasic truncated exponential (BTE) or rectilinear biphasic waveforms. In eight animals (Study B), shocks were delivered using two BTE waveforms that had identical peak current but different waveform durations. Both average and peak currents were associated with defibrillation success when BTE and rectilinear waveforms were investigated. However, when pathway impedance was less than 90 Ω for the BTE waveform, the bivariate correlation coefficient was 0.36 (p=0.001) for the average current, but only 0.21 (p=0.06) for the peak current in Study A. In Study B, a higher defibrillation success rate (67.9% vs. 38.8%) was achieved with the higher average current (14.9±2.1 A vs. 13.5±1.7 A) while peak current was unchanged. In this porcine model of VF, average current was a more adequate parameter than peak current for describing the therapeutic dosage when biphasic defibrillation waveforms were used. The institutional protocol number: P0805. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Elucidating fluctuating diffusivity in center-of-mass motion of polymer models with time-averaged mean-square-displacement tensor

    Science.gov (United States)

    Miyaguchi, Tomoshige

    2017-10-01

    There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here a method is developed to elucidate this fluctuating diffusivity from trajectory data. Time-averaged mean-square displacement (MSD), a common tool in single-particle-tracking (SPT) experiments, is generalized to a second-order tensor with which both magnitude and orientation fluctuations of the diffusivity can be clearly detected. This method is used to analyze the center-of-mass motion of four fundamental polymer models: the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer. It is found that these models exhibit distinctly different types of magnitude and orientation fluctuations of diffusivity. This is an advantage of the present method over previous ones, such as the ergodicity-breaking parameter and a non-Gaussian parameter, because with either of these parameters it is difficult to distinguish the dynamics of the four polymer models. Also, the present method of a time-averaged MSD tensor could be used to analyze trajectory data obtained in SPT experiments.
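
    The tensor described above is straightforward to compute from trajectory data. The following minimal sketch (our illustration, not the author's code; function and variable names are ours) builds the time-averaged MSD tensor at a single lag for one trajectory and diagonalizes it, so that the eigenvalues reflect the magnitude and the eigenvectors the orientation of the diffusivity; in the paper the tensor is additionally evaluated over finite measurement windows to resolve its fluctuations in time.

    ```python
    import numpy as np

    def tamsd_tensor(traj, lag):
        """Time-averaged MSD tensor of a single trajectory.

        traj : (T, d) array of positions sampled at equal time intervals
        lag  : lag time in number of steps
        Returns the (d, d) tensor of time-averaged displacement products.
        """
        disp = traj[lag:] - traj[:-lag]            # displacements over the lag
        return disp.T @ disp / disp.shape[0]       # outer-product average over time

    # toy example: 2-D Brownian trajectory with anisotropic diffusivity
    rng = np.random.default_rng(0)
    steps = rng.normal(scale=[1.0, 0.3], size=(10_000, 2))
    traj = np.cumsum(steps, axis=0)

    T = tamsd_tensor(traj, lag=10)
    eigval, eigvec = np.linalg.eigh(T)             # magnitude and orientation of the diffusivity
    print("eigenvalues:", eigval)
    print("principal axis:", eigvec[:, -1])
    ```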

  9. Exploring Modeling Options and Conversion of Average Response to Appropriate Vibration Envelopes for a Typical Cylindrical Vehicle Panel with Rib-stiffened Design

    Science.gov (United States)

    Harrison, Phil; LaVerde, Bruce; Teague, David

    2009-01-01

    Although applications for Statistical Energy Analysis (SEA) techniques are more widely used in the aerospace industry today, opportunities to anchor the response predictions using measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measured response across the panel. Finally, two approaches are evaluated and compared for converting the statistical average response output from an SEA analysis into a more useful envelope of response spectra, appropriate for specifying design and test vibration levels for a new vehicle.

  10. Calibrating the simple biosphere model for Amazonian tropical forest using field and remote sensing data. I - Average calibration with field data

    Science.gov (United States)

    Sellers, Piers J.; Shuttleworth, W. James; Dorman, Jeff L.; Dalcher, Amnon; Roberts, John M.

    1989-01-01

    Using meteorological and hydrological measurements taken in and above the central-Amazon-basin tropical forest, the calibration of the Sellers et al. (1986) simple biosphere (SiB) model is described. The SiB model is a one-dimensional soil-vegetation-atmosphere model designed for use within general circulation models (GCMs), representing the vegetation cover by analogy with processes operating within a single representative plant. The experimental systems and the procedures used to obtain field data are described, together with the specification of the physiological parameterization required to provide an average description of the data. It was found that some of the existing literature on stomatal behavior for tropical species is inconsistent with the observed behavior of the complete canopy in Amazonia, and that the rainfall interception store of the canopy is considerably smaller than originally specified in the SiB model.

  11. Comparison of a vertically-averaged and a vertically-resolved model for hyporheic flow beneath a pool-riffle bedform

    Science.gov (United States)

    Ibrahim, Ahmad; Steffler, Peter; She, Yuntong

    2018-02-01

    The interaction between surface water and groundwater through the hyporheic zone is recognized to be important as it impacts the water quantity and quality in both flow systems. Three-dimensional (3D) modeling is the most complete representation of a real-world hyporheic zone. However, 3D modeling requires substantial computational power and effort, and its sophistication is often significantly compromised by the difficulty of obtaining the required input data accurately. Simplifications are therefore often needed. The objective of this study was to assess the accuracy of the vertically-averaged approximation compared to a more complete vertically-resolved model of the hyporheic zone. The groundwater flow was modeled by either a simple one-dimensional (1D) Dupuit approach or a two-dimensional (2D) horizontal/vertical model in boundary fitted coordinates, with the latter considered as a reference model. Both groundwater models were coupled with a 1D surface water model via the surface water depth. Applying the two models to an idealized pool-riffle sequence showed that the 1D Dupuit approximation gave results comparable to the reference model in determining the characteristics of the hyporheic zone when the stratum thickness is not very large compared to the surface water depth. Conditions were also determined under which the 1D model can provide reliable estimates of the seepage discharge, upwelling/downwelling discharges and locations, the hyporheic flow, and the residence time.

  12. Point-by-point model description of average prompt neutron data as a function of total kinetic energy of fission fragments

    International Nuclear Information System (INIS)

    Tudora, A.

    2013-01-01

    The experimental data of average prompt neutron multiplicity as a function of total kinetic energy of fragments <ν>(TKE) exhibit, especially in the case of 252Cf(SF), different slopes dTKE/dν and different behaviours at low TKE values. The Point-by-Point (PbP) model can describe these different behaviours. The higher slope dTKE/dν and the flattening of <ν> at low TKE exhibited by a part of the experimental data sets are very well reproduced when the PbP multi-parametric matrix ν(A,TKE) is averaged over a double distribution Y(A,TKE). The lower slope and the almost linear behaviour over the entire TKE range exhibited by other data sets are well described when the same matrix ν(A,TKE) is averaged over a single distribution Y(A). In the case of the average prompt neutron energy in SCM as a function of TKE, different dTKE/dε slopes are also obtained by averaging the same PbP matrix ε(A,TKE) over Y(A,TKE) and over Y(A). The results are exemplified for 3 fissioning systems benefiting from experimental data as a function of TKE: 252Cf(SF), 235U(nth,f) and 239Pu(nth,f). In the case of 234U(n,f), for the first time it was possible to calculate <ν>(TKE) and <ε>(TKE) at many incident energies by averaging the PbP multi-parametric matrices over the experimental Y(A,TKE) distributions recently measured at IRMM for 14 incident energies in the range 0.3–5 MeV. The results revealed that the slope dTKE/dν does not vary with the incident energy and the flattening of <ν> at low TKE values is more pronounced at low incident energies. The dependences of the average model parameters on TKE resulting from the PbP treatment allow the use of the most probable fragmentation approach, which has the great advantage of providing results at many TKE values in a very short computing time compared to PbP and Monte Carlo treatments. (author)
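
    The two averaging operations described above amount to weighting the PbP matrix ν(A,TKE) either with a two-dimensional fragment yield Y(A,TKE) or with a one-dimensional mass yield Y(A). A minimal sketch with synthetic placeholder arrays (the shapes and values below are illustrative assumptions, not evaluated nuclear data) might look as follows:

    ```python
    import numpy as np

    def nu_bar_2d(nu, Y2d):
        """<nu>(TKE) from weighting nu(A, TKE) with a double distribution Y(A, TKE)."""
        return (nu * Y2d).sum(axis=0) / Y2d.sum(axis=0)

    def nu_bar_1d(nu, Y1d):
        """<nu>(TKE) from weighting nu(A, TKE) with a single mass distribution Y(A)."""
        return (nu * Y1d[:, None]).sum(axis=0) / Y1d.sum()

    # synthetic placeholders for the PbP matrix and the fragment yields
    A = np.arange(80, 171)                  # fragment mass numbers
    TKE = np.linspace(140.0, 220.0, 81)     # total kinetic energy grid (MeV)
    nu = np.maximum(0.0, 4.0 - 0.04 * (TKE[None, :] - 160.0) + 0.01 * (A[:, None] - 125))
    Y1d = np.exp(-0.5 * ((A - 140) / 8.0) ** 2) + np.exp(-0.5 * ((A - 110) / 8.0) ** 2)
    Y2d = Y1d[:, None] * np.exp(-0.5 * ((TKE[None, :] - 180.0) / 10.0) ** 2)

    print(np.round(nu_bar_2d(nu, Y2d)[:5], 3))   # double-distribution average
    print(np.round(nu_bar_1d(nu, Y1d)[:5], 3))   # single-distribution average
    ```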

  13. Application of a Combined Model with Autoregressive Integrated Moving Average (ARIMA and Generalized Regression Neural Network (GRNN in Forecasting Hepatitis Incidence in Heng County, China.

    Directory of Open Access Journals (Sweden)

    Wudi Wei

    Full Text Available Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. The morbidity of hepatitis from Jan 2005 to Dec 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factor of the basic GRNN model and the combined model was 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameter values of the GRNN model were the lowest in the fitting of the three models. The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County.
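
    A hedged sketch of such a hybrid can be assembled from standard tools: a seasonal ARIMA(0,1,2)(1,1,1)12 fit followed by a GRNN, here approximated by Nadaraya-Watson kernel regression on the ARIMA fitted values. The monthly series below is synthetic and the helper names are ours; only the model orders and the smoothing factor 0.07 are taken from the abstract.

    ```python
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    def grnn_predict(x_train, y_train, x_new, sigma):
        """GRNN approximated as Nadaraya-Watson kernel regression with bandwidth sigma."""
        x_train = np.atleast_2d(x_train).reshape(len(y_train), -1)
        x_new = np.atleast_2d(x_new).reshape(-1, x_train.shape[1])
        d2 = ((x_new[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / (w.sum(axis=1) + 1e-12)

    # hypothetical monthly incidence series (stand-in for the Heng County counts)
    rng = np.random.default_rng(1)
    t = np.arange(96)
    y = 5 + 0.02 * t + 1.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)

    # step 1: seasonal ARIMA(0,1,2)(1,1,1)_12 captures trend and seasonality
    arima = SARIMAX(y, order=(0, 1, 2), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
    fitted = arima.fittedvalues

    # step 2: the GRNN learns the mapping from ARIMA fitted values to observations,
    # correcting the nonlinear component the linear model misses
    sigma = 0.07                      # smoothing factor quoted for the combined model
    hybrid_fit = grnn_predict(fitted, y, fitted, sigma)

    # forecasting: push ARIMA forecasts through the trained GRNN
    hybrid_fc = grnn_predict(fitted, y, arima.forecast(steps=12), sigma)
    print(np.round(hybrid_fc, 2))
    ```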

  14. Accounting for disagreements on average cone loss rates in retinitis pigmentosa with a new kinetic model: Its relevance for clinical trials.

    Science.gov (United States)

    Baumgartner, W A; Baumgartner, A M

    2016-04-01

    Since 1985, at least nine studies of the average rate of cone loss in retinitis pigmentosa (RP) populations have yielded conflicting average rate constant values (-k), differing by 90-160%. This is surprising, since, except for the first two investigations, the Harvard or Johns Hopkins' protocols used in these studies were identical with respect to: use of the same exponential decline model, calculation of average -k from individual patient k values, monitoring patients over similarly large time frames, and excluding data exhibiting floor and ceiling effects. A detailed analysis of Harvard's and Hopkins' protocols and data revealed two subtle differences: (i) Hopkins' use of half-life t0.5 (or t(1/e)) for expressing patient cone-loss rates rather than k as used by Harvard; (ii) Harvard obtaining substantially more +k from improving fields due to dormant-cone recovery effects and "small -k" values than Hopkins ("small -k" is defined as less than -0.040 year(-1)), e.g., 16% +k, 31% small -k, vs. Hopkins' 3% and 6% respectively. Since t0.5=0.693/k, it follows that when k=0, or is very small, t0.5 (or t(1/e)) is respectively infinity or a very large number. This unfortunate mathematical property (which also prevents t0.5 (t(1/e)) histogram construction corresponding to -k to +k) caused Hopkins to delete all "small -k" and all +k due to "strong leverage". Naturally this contributed to Hopkins' larger average -k. Difference (ii) led us to re-evaluate the Harvard/Hopkins' exponential unchanging -k model. In its place we propose a model of increasing biochemical stresses from dying rods on cones during RP progression: increasing oxidative stresses and trophic factor deficiencies (e.g., RdCVF), and RPE malfunction. Our kinetic analysis showed rod loss to follow exponential kinetics with unchanging -k due to constant genetic stresses, thereby providing a theoretical basis for Clarke et al.'s empirical observation of such kinetics with eleven animal models of RP. In

  15. Hybrid support vector regression and autoregressive integrated moving average models improved by particle swarm optimization for property crime rates forecasting with economic indicators.

    Science.gov (United States)

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Sallehuddin, Roselina

    2013-01-01

    Crimes forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crimes data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.

  16. Hybrid Support Vector Regression and Autoregressive Integrated Moving Average Models Improved by Particle Swarm Optimization for Property Crime Rates Forecasting with Economic Indicators

    Directory of Open Access Journals (Sweden)

    Razana Alwee

    2013-01-01

    Full Text Available Crimes forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crimes data, it is common that the data consists of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models.

  17. Evaluation of WRF Simulations With Different Selections of Subgrid Orographic Drag Over the Tibetan Plateau

    Science.gov (United States)

    Zhou, X.; Beljaars, A.; Wang, Y.; Huang, B.; Lin, C.; Chen, Y.; Wu, H.

    2017-09-01

    Weather Research and Forecasting (WRF) simulations with different selections of subgrid orographic drag over the Tibetan Plateau have been evaluated against observations and ERA-Interim reanalysis. Results show that the subgrid orographic drag schemes, especially the turbulent orographic form drag (TOFD) scheme, efficiently reduce the 10 m wind speed bias and RMS error with respect to station measurements. With the combination of gravity wave, flow blocking and TOFD schemes, wind speed is simulated more realistically than with the individual schemes only. Improvements are also seen in the 2 m air temperature and surface pressure. The gravity wave drag, flow blocking drag, and TOFD schemes combined have the smallest station mean bias (-2.05°C in 2 m air temperature and 1.27 hPa in surface pressure) and RMS error (3.59°C in 2 m air temperature and 2.37 hPa in surface pressure). Meanwhile, the TOFD scheme contributes more to the improvements than the gravity wave drag and flow blocking schemes. The improvements are more pronounced at low levels of the atmosphere than at high levels due to the stronger drag enhancement on the low-level flow. The reduced near-surface cold bias and high-pressure bias over the Tibetan Plateau are the result of changes in the low-level wind components associated with the geostrophic balance. The enhanced drag directly leads to weakened westerlies but also enhances the ageostrophic flow, in this case reducing the northerlies (enhancing the southerlies), which brings more warm air from South Asia across the Himalayan ranges (and less cold air from the north) into the interior Tibetan Plateau.

  18. Effects of Resolution on the Simulation of Boundary-layer Clouds and the Partition of Kinetic Energy to Subgrid Scales

    Directory of Open Access Journals (Sweden)

    Anning Cheng

    2010-02-01

    Full Text Available Seven boundary-layer cloud cases are simulated with UCLA-LES (The University of California, Los Angeles – large eddy simulation) model with different horizontal and vertical gridspacing to investigate how the results depend on gridspacing. Some variables are more sensitive to horizontal gridspacing, while others are more sensitive to vertical gridspacing, and still others are sensitive to both horizontal and vertical gridspacings with similar or opposite trends. For cloud-related variables having the opposite dependence on horizontal and vertical gridspacings, changing the gridspacing proportionally in both directions gives the appearance of convergence. In this study, we mainly discuss the impact of subgrid-scale (SGS) kinetic energy (KE) on the simulations with coarsening of horizontal and vertical gridspacings. A running-mean operator is used to separate the KE of the high-resolution benchmark simulations into that of resolved scales of coarse-resolution simulations and that of SGSs. The diagnosed SGS KE is compared with that parameterized by the Smagorinsky-Lilly SGS scheme at various gridspacings. It is found that the parameterized SGS KE for the coarse-resolution simulations is usually underestimated but the resolved KE is unrealistically large, compared to benchmark simulations. However, the sum of resolved and SGS KEs is about the same for simulations with various gridspacings. The partitioning of SGS and resolved heat and moisture transports is consistent with that of SGS and resolved KE, which means that the parameterized transports are underestimated but resolved-scale transports are overestimated. On the whole, energy shifts to large-scales as the horizontal gridspacing becomes coarse, hence the size of clouds and the resolved circulation increase, the clouds become more stratiform-like with an increase in cloud fraction, cloud liquid-water path and surface precipitation; when coarse vertical gridspacing is used, cloud sizes do not
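
    The running-mean diagnostic described above can be sketched on a synthetic two-dimensional velocity field: a box filter of increasing width plays the role of coarser and coarser grids, and the subgrid-scale KE is diagnosed as the filtered total KE minus the KE of the filtered field. This is an illustration of the decomposition only (filter widths, field and names are assumptions), not the UCLA-LES diagnostic itself.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ke_partition(u, v, width):
        """Split kinetic energy into resolved and subgrid parts with a running mean.

        u, v  : 2-D velocity components of a high-resolution (benchmark) field
        width : filter width in grid points, i.e. the coarse gridspacing
        """
        filt = lambda f: uniform_filter(f, size=width, mode="wrap")
        ub, vb = filt(u), filt(v)                        # resolved-scale velocities
        ke_res = 0.5 * (ub ** 2 + vb ** 2)               # KE of the resolved motion
        ke_tot = filt(0.5 * (u ** 2 + v ** 2))           # filtered total KE
        ke_sgs = ke_tot - ke_res                         # diagnosed subgrid-scale KE
        return ke_res.mean(), ke_sgs.mean()              # their sum stays ~constant

    # synthetic test field: large-scale wave plus small-scale noise
    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    X, Y = np.meshgrid(x, x)
    u = np.sin(X) + 0.2 * rng.normal(size=X.shape)
    v = np.cos(Y) + 0.2 * rng.normal(size=X.shape)

    for width in (4, 16, 64):                            # coarser and coarser "grids"
        print(width, ke_partition(u, v, width))
    ```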

  19. FOREGROUND MODEL AND ANTENNA CALIBRATION ERRORS IN THE MEASUREMENT OF THE SKY-AVERAGED λ21 cm SIGNAL AT z∼ 20

    Energy Technology Data Exchange (ETDEWEB)

    Bernardi, G. [SKA SA, 3rd Floor, The Park, Park Road, Pinelands, 7405 (South Africa); McQuinn, M. [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Greenhill, L. J., E-mail: gbernardi@ska.ac.za [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2015-01-20

    The most promising near-term observable of the cosmic dark age prior to widespread reionization (z ∼ 15-200) is the sky-averaged λ21 cm background arising from hydrogen in the intergalactic medium. Though an individual antenna could in principle detect the line signature, data analysis must separate foregrounds that are orders of magnitude brighter than the λ21 cm background (but that are anticipated to vary monotonically and gradually with frequency, e.g., they are considered "spectrally smooth"). Using more physically motivated models for foregrounds than in previous studies, we show that the intrinsic spectral smoothness of the foregrounds is likely not a concern, and that data analysis for an ideal antenna should be able to detect the λ21 cm signal after subtracting a ∼fifth-order polynomial in log ν. However, we find that the foreground signal is corrupted by the angular and frequency-dependent response of a real antenna. The frequency dependence complicates modeling of foregrounds commonly based on the assumption of spectral smoothness. Our calculations focus on the Large-aperture Experiment to detect the Dark Age, which combines both radiometric and interferometric measurements. We show that statistical uncertainty remaining after fitting antenna gain patterns to interferometric measurements is not anticipated to compromise extraction of the λ21 cm signal for a range of cosmological models after fitting a seventh-order polynomial to radiometric data. Our results generalize to most efforts to measure the sky-averaged spectrum.
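
    The foreground-subtraction step described above (fitting a low-order polynomial in log ν to a spectrally smooth foreground and looking for the residual absorption feature) can be sketched on synthetic data; the power-law foreground, trough parameters and frequency range below are illustrative assumptions, not the paper's models.

    ```python
    import numpy as np

    # hypothetical sky spectrum: power-law foreground plus a faint 21 cm absorption trough
    nu = np.linspace(40.0, 90.0, 200)                       # MHz
    T_fg = 3000.0 * (nu / 60.0) ** -2.5                     # smooth foreground (K)
    T_21 = -0.1 * np.exp(-0.5 * ((nu - 67.0) / 4.0) ** 2)   # ~100 mK trough (K)
    T_sky = T_fg + T_21

    # fit and subtract a fifth-order polynomial in log(nu) -- log(T) space
    lognu = np.log(nu)
    coeff = np.polyfit(lognu, np.log(T_sky), deg=5)
    residual = T_sky - np.exp(np.polyval(coeff, lognu))

    # much of the injected trough should survive the smooth-polynomial subtraction
    print("peak residual (K):", residual.min())
    ```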

  20. Automatic individual arterial input functions calculated from PCA outperform manual and population-averaged approaches for the pharmacokinetic modeling of DCE-MR images.

    Science.gov (United States)

    Sanz-Requena, Roberto; Prats-Montalbán, José Manuel; Martí-Bonmatí, Luis; Alberich-Bayarri, Ángel; García-Martí, Gracián; Pérez, Rosario; Ferrer, Alberto

    2015-08-01

    To introduce a segmentation method to calculate an automatic arterial input function (AIF) based on principal component analysis (PCA) of dynamic contrast enhanced MR (DCE-MR) imaging and compare it with individual manually selected and population-averaged AIFs using calculated pharmacokinetic parameters. The study included 65 individuals with prostate examinations (27 tumors and 38 controls). Manual AIFs were individually extracted and also averaged to obtain a population AIF. Automatic AIFs were individually obtained by applying PCA to volumetric DCE-MR imaging data and finding the highest correlation of the PCs with a reference AIF. Variability was assessed using coefficients of variation and repeated measures tests. The different AIFs were used as inputs to the pharmacokinetic model and correlation coefficients, Bland-Altman plots and analysis of variance tests were obtained to compare the results. Automatic PCA-based AIFs were successfully extracted in all cases. The manual and PCA-based AIFs showed good correlation (r between pharmacokinetic parameters ranging from 0.74 to 0.95), with differences below the manual individual variability (RMSCV up to 27.3%). The population-averaged AIF showed larger differences (r from 0.30 to 0.61). The automatic PCA-based approach minimizes the variability associated to obtaining individual volume-based AIFs in DCE-MR studies of the prostate. © 2014 Wiley Periodicals, Inc.
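
    A hedged sketch of the PCA-based selection step might look as follows: PCA is applied to the voxel time curves and the component with the highest correlation to a reference AIF shape is retained as the automatic AIF. The synthetic curves and the gamma-variate-like reference below are illustrative assumptions; the published pipeline involves additional segmentation and scaling steps.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def automatic_aif(curves, reference_aif, n_components=8):
        """Pick the principal component of DCE time curves that best matches a reference AIF.

        curves        : (n_voxels, n_timepoints) enhancement curves
        reference_aif : (n_timepoints,) population or literature AIF shape
        """
        pca = PCA(n_components=n_components).fit(curves)
        comps = pca.components_                               # (n_components, n_timepoints)
        corr = [abs(np.corrcoef(c, reference_aif)[0, 1]) for c in comps]
        best = comps[int(np.argmax(corr))]
        return best * np.sign(np.dot(best, reference_aif))    # fix the PCA sign ambiguity

    # synthetic example: gamma-variate-like arterial curves plus slower tissue curves
    t = np.linspace(0, 5, 60)
    art = (t ** 2) * np.exp(-2.0 * t)
    tis = 1 - np.exp(-0.5 * t)
    rng = np.random.default_rng(0)
    curves = np.vstack([art * a + rng.normal(0, 0.01, t.size) for a in rng.uniform(0.8, 1.2, 200)]
                       + [tis * a + rng.normal(0, 0.01, t.size) for a in rng.uniform(0.2, 0.6, 800)])

    aif = automatic_aif(curves, reference_aif=art)
    print(np.round(aif[:5], 4))
    ```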

  1. Using autoregressive integrated moving average (ARIMA) models to predict and monitor the number of beds occupied during a SARS outbreak in a tertiary hospital in Singapore.

    Science.gov (United States)

    Earnest, Arul; Chen, Mark I; Ng, Donald; Sin, Leo Yee

    2005-05-11

    The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions on the number of beds occupied in Tan Tock Seng Hospital, during the recent SARS outbreak. This is a retrospective study design. Hospital admission and occupancy data for isolation beds was collected from Tan Tock Seng hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was daily number of isolation beds occupied by SARS patients. Among the covariates considered were daily number of people screened, daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data into two. Data from 14th March to 21st April 2003 was used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest order model. We found that the ARIMA (1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found was reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. Total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed-capacity during outbreaks of other infectious diseases as well.

  2. Autoregressive moving average (ARMA) model applied to quantification of cerebral blood flow using dynamic susceptibility contrast-enhanced magnetic resonance imaging

    International Nuclear Information System (INIS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki

    2003-01-01

    The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)

  3. Model selection and averaging in the assessment of the drivers of household food waste to reduce the probability of false positives

    Science.gov (United States)

    Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce

    2018-01-01

    Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms (random forests and boruta) for variable selection integrated with linear modelling, model selection and averaging are implemented. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. Results, regardless of which variable set the models are run on, point toward large households as being a key target element for food waste reduction interventions. PMID:29389949

  4. Model selection and averaging in the assessment of the drivers of household food waste to reduce the probability of false positives.

    Science.gov (United States)

    Grainger, Matthew James; Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce

    2018-01-01

    Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms (random forests and boruta) for variable selection integrated with linear modelling, model selection and averaging are implemented. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. Results, regardless of which variable set the models are run on, point toward large households as being a key target element for food waste reduction interventions.
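
    The workflow described above (machine-learning variable screening followed by linear model selection and averaging) can be sketched with standard libraries; the variable names and the synthetic data below are hypothetical stand-ins for the survey data, and boruta is replaced here by plain random-forest importances.

    ```python
    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.ensemble import RandomForestRegressor

    # hypothetical household data (the real study uses measured avoidable food waste)
    rng = np.random.default_rng(0)
    n = 500
    X = pd.DataFrame({
        "household_size": rng.integers(1, 7, n),
        "fussy_eaters": rng.integers(0, 2, n),
        "employed": rng.integers(0, 2, n),
        "home_owner": rng.integers(0, 2, n),
        "income_band": rng.integers(1, 6, n),
    })
    y = 0.8 * X["household_size"] + 0.5 * X["fussy_eaters"] + rng.normal(0, 1, n)

    # step 1: random-forest importance as a coarse variable screen
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    keep = X.columns[np.argsort(rf.feature_importances_)[::-1][:3]]

    # step 2: fit all subsets of the screened variables and weight them by AIC (model averaging)
    fits = []
    for k in range(1, len(keep) + 1):
        for subset in itertools.combinations(keep, k):
            res = sm.OLS(y, sm.add_constant(X[list(subset)])).fit()
            fits.append((subset, res.aic))
    aics = np.array([a for _, a in fits])
    weights = np.exp(-0.5 * (aics - aics.min()))
    weights /= weights.sum()
    for (subset, _), w in sorted(zip(fits, weights), key=lambda p: -p[1]):
        print(f"{w:.3f}  {subset}")
    ```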

  5. Numerical Study of Natural Gas/Diesel Reactivity Controlled Compression Ignition Combustion with Large Eddy Simulation and Reynolds-Averaged Navier–Stokes Model

    Directory of Open Access Journals (Sweden)

    Amir-Hasan Kakaee

    2018-03-01

    Full Text Available In the current study, a comparative study is performed using Large Eddy Simulation (LES) and Reynolds-averaged Navier–Stokes (RANS) turbulence models on a natural gas/diesel Reactivity Controlled Compression Ignition (RCCI) engine. The numerical results are validated against the available research work in the literature. The RNG (Re-Normalization Group) k − ε and dynamic structure models are employed to model turbulent flow for the RANS and LES simulations, respectively. Parameters like the premixed natural gas mass fraction, the second start of injection timing (SOI2) of diesel and the engine speed are studied to compare the performance of the RANS and LES models on combustion and pollutant emissions prediction. The results obtained showed that the LES and RANS models give similar predictions of cylinder pressure and heat release rate at lower natural gas mass fractions and late SOI2 timings. However, the LES showed improved capability to predict natural gas auto-ignition and pollutant emissions compared to the RANS model, especially at higher natural gas mass fractions.

  6. Middle and long-term prediction of UT1-UTC based on combination of Gray Model and Autoregressive Integrated Moving Average

    Science.gov (United States)

    Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing

    2017-02-01

    UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications of deep space exploration, spacecraft tracking and satellite navigation and positioning. In this paper, a new prediction method combining the Gray Model (GM(1,1)) and the Autoregressive Integrated Moving Average (ARIMA) model is developed. The main idea is as follows. Firstly, the UT1-UTC data are preprocessed by removing leap seconds and the Earth's zonal harmonic tidal terms to obtain UT1R-TAI data. Periodic terms are estimated and removed by least squares to get UT2R-TAI. Then the linear terms of the UT2R-TAI data are modeled by GM(1,1), and the residual terms are modeled by ARIMA. Finally, the UT2R-TAI prediction can be performed based on the combined model of GM(1,1) and ARIMA, and the UT1-UTC predictions are obtained by adding the corresponding periodic terms, the leap second correction and the Earth's zonal harmonic tidal correction. The results show that the proposed model can be used to predict UT1-UTC effectively, with higher middle and long-term (from 32 to 360 days) accuracy than those of LS + AR, LS + MAR and WLS + MAR.
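
    The GM(1,1) component of the combination can be sketched compactly; in the combined scheme it models the smooth (linear) part of UT2R-TAI and ARIMA is then fitted to the residuals. The series below is a synthetic placeholder and the implementation follows the textbook GM(1,1) recipe, not necessarily the authors' exact variant.

    ```python
    import numpy as np

    def gm11_forecast(x0, horizon):
        """Grey model GM(1,1): fit the series x0 and extrapolate `horizon` steps ahead."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                                   # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # develop coefficient a, grey input b
        k = np.arange(1, len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # response of the whitened equation
        x0_hat = np.diff(np.concatenate(([x0[0]], x1_hat)))  # back to the original series
        return x0_hat[-horizon:]

    # hypothetical smooth trend series (stand-in for UT2R-TAI, arbitrary units)
    series = 100.0 - 0.8 * np.arange(40) + 0.002 * np.arange(40) ** 2
    print(np.round(gm11_forecast(series, horizon=5), 3))

    # in the combined scheme, ARIMA would then be fitted to the residuals of the GM(1,1) fit
    ```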

  7. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong [...] approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
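
    Two of the averaging strategies discussed here can be sketched for unit quaternions: the plain renormalized barycenter, and a mean taken as the dominant eigenvector of the accumulated outer-product matrix, which handles the q ~ -q sign ambiguity and is commonly used as a close approximation to the Riemannian (geodesic) mean. The example data are synthetic and this is not the author's implementation.

    ```python
    import numpy as np

    def barycenter_mean(quats):
        """Naive average: renormalized barycenter of unit quaternions (sign-aligned first)."""
        q = np.array([np.sign(np.dot(p, quats[0])) * p for p in quats])
        m = q.mean(axis=0)
        return m / np.linalg.norm(m)

    def eigen_mean(quats):
        """Mean as the dominant eigenvector of the summed quaternion outer products.

        Insensitive to the q ~ -q ambiguity; a good approximation to the geodesic mean.
        """
        M = sum(np.outer(p, p) for p in quats)
        _, vecs = np.linalg.eigh(M)
        return vecs[:, -1]

    # small cloud of rotations about the z axis: q = [cos(a/2), 0, 0, sin(a/2)]
    rng = np.random.default_rng(0)
    angles = rng.normal(loc=0.4, scale=0.2, size=50)
    quats = np.array([[np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)] for a in angles])

    print("barycenter :", np.round(barycenter_mean(quats), 5))
    print("eigenvector:", np.round(eigen_mean(quats), 5))
    ```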

  8. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    International Nuclear Information System (INIS)

    Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C.J.

    2016-01-01

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has
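
    The assimilation step at the heart of the framework, an ensemble Kalman update of uncertain quantities toward sparse observations, can be illustrated on a toy linear problem. The sketch below is a generic perturbed-observation ensemble Kalman iteration with hypothetical names; the paper's actual method operates on parameterized Reynolds-stress discrepancies with a CFD solver as the forward model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def enkf_update(params, obs, forward, obs_err):
        """One ensemble Kalman analysis step (perturbed-observation variant).

        params  : (n_ens, n_par) ensemble of uncertain quantities
        obs     : (n_obs,) sparse observations (e.g. measured velocities)
        forward : maps one parameter vector to predicted observations
        obs_err : observation error standard deviation
        """
        preds = np.array([forward(p) for p in params])          # (n_ens, n_obs)
        dp = params - params.mean(axis=0)
        dy = preds - preds.mean(axis=0)
        C_py = dp.T @ dy / (len(params) - 1)                    # parameter-obs covariance
        C_yy = dy.T @ dy / (len(params) - 1)                    # obs-obs covariance
        R = obs_err ** 2 * np.eye(len(obs))
        K = C_py @ np.linalg.inv(C_yy + R)                      # Kalman gain
        perturbed = obs + obs_err * rng.normal(size=preds.shape)
        return params + (perturbed - preds) @ K.T

    # toy problem: recover two parameters of a linear forward model from data
    true = np.array([1.5, -0.7])
    H = rng.normal(size=(6, 2))
    obs = H @ true
    ens = rng.normal(size=(50, 2))                              # prior ensemble
    for _ in range(5):                                          # iterate the assimilation
        ens = enkf_update(ens, obs, lambda p: H @ p, obs_err=0.05)
    print("posterior mean:", ens.mean(axis=0), " truth:", true)
    ```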

  9. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, H., E-mail: hengxiao@vt.edu; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C.J.

    2016-11-01

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach

  10. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    -basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.

  11. Sub-grid scale representation of vegetation in global land surface schemes: implications for estimation of the terrestrial carbon sink

    Directory of Open Access Journals (Sweden)

    J. R. Melton

    2014-02-01

    Full Text Available Terrestrial ecosystem models commonly represent vegetation in terms of plant functional types (PFTs) and use their vegetation attributes in calculations of the energy and water balance as well as to investigate the terrestrial carbon cycle. Sub-grid scale variability of PFTs in these models is represented using different approaches with the "composite" and "mosaic" approaches being the two end-members. The impact of these two approaches on the global carbon balance has been investigated with the Canadian Terrestrial Ecosystem Model (CTEM v 1.2) coupled to the Canadian Land Surface Scheme (CLASS v 3.6). In the composite (single-tile) approach, the vegetation attributes of different PFTs present in a grid cell are aggregated and used in calculations to determine the resulting physical environmental conditions (soil moisture, soil temperature, etc.) that are common to all PFTs. In the mosaic (multi-tile) approach, energy and water balance calculations are performed separately for each PFT tile and each tile's physical land surface environmental conditions evolve independently. Pre-industrial equilibrium CLASS-CTEM simulations yield global totals of vegetation biomass, net primary productivity, and soil carbon that compare reasonably well with observation-based estimates and differ by less than 5% between the mosaic and composite configurations. However, on a regional scale the two approaches can differ by > 30%, especially in areas with high heterogeneity in land cover. Simulations over the historical period (1959–2005) show different responses to evolving climate and carbon dioxide concentrations from the two approaches. The cumulative global terrestrial carbon sink estimated over the 1959–2005 period (excluding land use change (LUC) effects) differs by around 5% between the two approaches (96.3 and 101.3 Pg, for the mosaic and composite approaches, respectively) and compares well with the observation-based estimate of 82.2 ± 35 Pg C over the same

  12. Adaptation of model proteins from cold to hot environments involves continuous and small adjustments of average parameters related to amino acid composition.

    Science.gov (United States)

    De Vendittis, Emmanuele; Castellano, Immacolata; Cotugno, Roberta; Ruocco, Maria Rosaria; Raimo, Gennaro; Masullo, Mariorosario

    2008-01-07

    The growth temperature adaptation of six model proteins has been studied in 42 microorganisms belonging to the eubacterial and archaeal kingdoms, covering optimum growth temperatures from 7 to 103 degrees C. The selected proteins include three elongation factors involved in translation, the enzymes glyceraldehyde-3-phosphate dehydrogenase and superoxide dismutase, and the cell division protein FtsZ. The common strategy of protein adaptation from cold to hot environments implies the occurrence of small changes in the amino acid composition, without altering the overall structure of the macromolecule. These continuous adjustments were investigated through parameters related to the amino acid composition of each protein. The average value per residue of mass, volume and accessible surface area allowed an evaluation of the usage of bulky residues, whereas the average hydrophobicity reflected that of hydrophobic residues. The specific proportion of bulky and hydrophobic residues in each protein increased almost linearly with the temperature of the host microorganism. This finding agrees with the structural and functional properties exhibited by proteins in differently adapted sources, thus explaining the great compactness or the high flexibility exhibited by (hyper)thermophilic or psychrophilic proteins, respectively. Indeed, heat-adapted proteins incline toward the usage of heavier and more hydrophobic residues with respect to mesophiles, whereas the cold-adapted macromolecules show the opposite behavior, with a certain preference for smaller and less hydrophobic residues. An investigation on the different increase of bulky residues along with the growth temperature observed in the six model proteins suggests the relevance of the possible different role and/or structure organization played by protein domains. The significance of the linear correlations between growth temperature and parameters related to the amino acid composition improved when the analysis was
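
    One of the composition-averaged parameters mentioned above, the average hydrophobicity per residue, can be sketched directly from a protein sequence. The Kyte-Doolittle hydropathy scale is used here as one common choice and the example sequences are invented; the paper's own scale and its mass, volume and accessible-surface-area tables may differ.

    ```python
    # Approximate Kyte-Doolittle hydropathy values per residue (one common scale)
    KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
          "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
          "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

    def average_hydrophobicity(sequence):
        """Mean hydropathy per residue for a one-letter-code protein sequence."""
        values = [KD[aa] for aa in sequence.upper() if aa in KD]
        return sum(values) / len(values)

    # toy comparison of two invented sequences (not from the studied organisms)
    print(average_hydrophobicity("MKVLLAIIGVFAL"))   # relatively hydrophobic
    print(average_hydrophobicity("MKDEQNRSTSDEK"))   # relatively hydrophilic
    ```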

  13. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    Science.gov (United States)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    Efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discretization schemes and on appropriate solution methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of the preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for reasonable control of the directional sampling intervals in the discretization.
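
    A much simplified sketch of a preconditioned BiCGSTAB solve is given below: a small positive-shifted Helmholtz-type operator on a regular grid stands in for the average-derivative optimal scheme, and an incomplete-LU factorization stands in for the multigrid preconditioner; neither substitution reproduces the paper's scheme, they only illustrate the solver setup.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # small 2-D Helmholtz-type operator (-Laplacian - k^2) on an n x n grid, 5-point stencil
    n, k = 60, 0.05
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    A = (sp.kron(I, T) + sp.kron(T, I) - k**2 * sp.identity(n * n)).tocsc()
    b = np.zeros(n * n)
    b[n * n // 2 + n // 2] = 1.0                         # point source

    # incomplete-LU factorization as a stand-in preconditioner
    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=20)
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)

    x, info = spla.bicgstab(A, b, M=M, maxiter=500)
    print("info:", info, " residual norm:", np.linalg.norm(A @ x - b))
    ```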

  14. A time dependent zonally averaged energy balance model to be incorporated into IMAGE (Integrated Model to Assess the Greenhouse Effect). Collaborative Paper

    International Nuclear Information System (INIS)

    Jonas, M.; Olendrzynski, K.; Elzen, M. den

    1991-10-01

    The Intergovernmental Panel on Climate Change (IPCC) is placing increasing emphasis on the use of time-dependent impact models that are linked with energy-emission accounting frameworks and models that predict in a time-dependent fashion important variables such as atmospheric concentrations of greenhouse gases, surface temperature and precipitation. Integrating these tools (greenhouse gas emission strategies, atmospheric processes, ecological impacts) into what is called an integrated assessment model will assist policymakers in the IPCC and elsewhere to assess the impacts of a wide variety of emission strategies. The Integrated Model to Assess the Greenhouse Effect (IMAGE; developed at RIVM) represents such an integrated assessment model which already calculates historical and future effects of greenhouse gas emissions on global surface temperature, sea level rise and other ecological and socioeconomic impacts. However, to be linked to environmental impact models such as the Global Vegetation Model and the Timber Assessment Model, both of which are under development at RIVM and IIASA, IMAGE needs to be regionalized in terms of temperature and precipitation output. These key parameters will then enable the above environmental impact models to be run in a time-dependent mode. In this paper we lay the scientific and numerical basis for a two-dimensional Energy Balance Model (EBM) to be integrated into the climate module of IMAGE which will ultimately provide scenarios of surface temperature and precipitation, resolved with respect to latitude and height. This paper will deal specifically with temperature; following papers will deal with precipitation. So far, the relatively simple EBM set up in this paper resolves mean annual surface temperatures on a regional scale defined by 10 deg latitude bands. In addition, we can concentrate on the implementation of the EBM into IMAGE, i.e., on the steering mechanism itself. Both reasons justify the time and effort put into
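
    As an illustration of the kind of model meant here, a one-dimensional (latitude-only) toy EBM with 18 bands of 10°, diffusive meridional heat transport and linearized outgoing longwave radiation can be integrated to equilibrium in a few lines. All coefficient values below are standard textbook-style choices for illustration, not the parameters of the IMAGE climate module.

    ```python
    import numpy as np

    # 18 latitude bands of 10 degrees; x = sine of band-centre latitude
    lat = np.deg2rad(np.arange(-85.0, 86.0, 10.0))
    x = np.sin(lat)

    # illustrative coefficients (W m-2, W m-2 K-1, W m-2 K-1, J m-2 K-1), not the IMAGE values
    S0, A, B, D, C = 1361.0, 203.3, 2.09, 0.55, 4.0e8
    S = 0.25 * S0 * (1.0 - 0.482 * 0.5 * (3.0 * x**2 - 1.0))   # annual-mean insolation
    albedo = np.where(np.abs(np.degrees(lat)) > 65.0, 0.62, 0.30)

    T = np.full(lat.size, 10.0)          # surface temperature per band (deg C)
    dt = 30.0 * 86400.0                  # one-month explicit time step
    for _ in range(12 * 200):            # integrate ~200 years to equilibrium
        xm = 0.5 * (x[1:] + x[:-1])
        flux = (1.0 - xm**2) * D * np.diff(T) / np.diff(x)       # diffusive meridional transport
        transport = np.diff(np.concatenate(([0.0], flux, [0.0]))) / np.gradient(x)
        T += dt / C * (S * (1.0 - albedo) - (A + B * T) + transport)

    print(np.round(T, 1))                # warm equatorial bands, cold polar bands
    ```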

  15. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  16. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  17. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  18. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  19. The Efficiency of OLS Estimators of Structural Parameters in a Simple Linear Regression Model in the Calibration of the Averages Scheme

    Directory of Open Access Journals (Sweden)

    Kowal Robert

    2016-12-01

    Full Text Available The simple linear regression model is one of the pillars of classical econometrics, and multiple areas of research function within its scope. One of the many fundamental questions in the model concerns proving the efficiency of the most commonly used OLS estimators and examining their properties. In the literature on the subject one can find references to this question and certain solutions in that regard. Methodologically, these are borrowed from the multiple regression model or from a boundary partial model. Not everything here, however, is complete and consistent. In this paper a completely new scheme is proposed, based on applying the Cauchy-Schwarz inequality to a constraint aggregated from appropriately calibrated secondary unbiasedness constraints, which, through the choice of an appropriate calibrator for each variable, leads directly to a demonstration of this property. The choice of such a calibrator is a separate issue. Owing to their volume and the kinds of calibration involved, these deliberations were divided into several parts. In this one, the efficiency of the OLS estimators is proven in a mixed scheme of calibration by averages, that is, in a preliminary form and within the most basic framework of the proposed methodology. Within this framework the outlines and general premises constituting the basis of further generalizations are developed.

  20. A Study of Single- and Double-Averaged Second-Order Models to Evaluate Third-Body Perturbation Considering Elliptic Orbits for the Perturbing Body

    Directory of Open Access Journals (Sweden)

    R. C. Domingos

    2013-01-01

    Full Text Available The equations for the variations of the Keplerian elements of the orbit of a spacecraft perturbed by a third body are developed using a single average over the motion of the spacecraft, considering an elliptic orbit for the disturbing body. A comparison is made between this approach and the more commonly used double-averaged technique, as well as with the full elliptic restricted three-body problem. The disturbing function is expanded in Legendre polynomials up to the second order in both cases. The equations of motion are obtained from the planetary equations, and several numerical simulations are made to show the evolution of the orbit of the spacecraft. Some characteristics known from the circular perturbing-body case are studied: circular, elliptic equatorial, and frozen orbits. Different initial eccentricities for the perturbed body are considered, since studying the effect of this variable is one of the goals of the present study. The results show the impact of this parameter as well as the differences between both models compared to the full elliptic restricted three-body problem. Regions below, near, and above the critical angle of the third-body perturbation are considered, as well as different altitudes for the orbit of the spacecraft.

  1. Development of realistic high-resolution whole-body voxel models of Japanese adult males and females of average height and weight, and application of models to radio-frequency electromagnetic-field dosimetry

    International Nuclear Information System (INIS)

    Nagaoka, Tomoaki; Watanabe, Soichi; Sakurai, Kiyoko; Kunieda, Etsuo; Watanabe, Satoshi; Taki, Masao; Yamanaka, Yukio

    2004-01-01

    With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetries of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models for Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on each side; the models are segmented into 51 anatomic regions. The adult female model is the first of its kind in the world and both are the first Asian voxel models (representing average Japanese) that enable numerical evaluation of electromagnetic dosimetry at high frequencies of up to 3 GHz. In this paper, we will also describe the basic SAR characteristics of the developed models for the VHF/UHF bands, calculated using the finite-difference time-domain method

  2. Average-atom model for two-temperature states and ionic transport properties of aluminum in the warm dense matter regime

    Science.gov (United States)

    Hou, Yong; Fu, Yongsheng; Bredow, Richard; Kang, Dongdong; Redmer, Ronald; Yuan, Jianmin

    2017-03-01

    The average-atom model combined with the hypernetted-chain approximation is an efficient tool for electronic and ionic structure calculations for warm dense matter. Here we generalize this method in order to describe non-equilibrium states with different electron and ion temperatures as produced in laser-matter interactions on ultra-short time scales. In particular, the electron-ion and ion-ion correlation effects are considered when calculating the electronic structure. We derive an effective ion-ion pair potential using the electron densities in the framework of temperature-dependent density functional theory. Using this ion-ion potential we perform molecular dynamics simulations in order to determine the ionic transport properties such as the ionic diffusion coefficient and the shear viscosity through the ionic velocity autocorrelation functions.
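    The last step described above, extracting transport coefficients from velocity autocorrelation functions, can be sketched with a generic Green-Kubo estimate. The snippet below is illustrative only: the trajectory is synthetic, and in practice the velocities would come from the molecular dynamics run driven by the effective ion-ion potential.

```python
# Sketch: self-diffusion coefficient from the velocity autocorrelation function
# via the Green-Kubo relation D = (1/3) * integral of <v(0).v(t)> dt.
import numpy as np

def vacf(v):
    """Velocity autocorrelation <v(0).v(t)> averaged over particles and time origins.
    v has shape (n_steps, n_particles, 3)."""
    n_steps = v.shape[0]
    c = np.zeros(n_steps)
    for lag in range(n_steps):
        prod = np.einsum('tpi,tpi->', v[:n_steps - lag], v[lag:])
        c[lag] = prod / ((n_steps - lag) * v.shape[1])
    return c

rng = np.random.default_rng(1)
dt = 1.0e-3                                    # MD time step (arbitrary units)
v = rng.normal(size=(2000, 64, 3))             # placeholder velocities, not real MD output
c = vacf(v)
D = np.trapz(c, dx=dt) / 3.0                   # Green-Kubo estimate
print(f"D ≈ {D:.4e} (arbitrary units)")
```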

  3. Using autoregressive integrated moving average (ARIMA) models to predict and monitor the number of beds occupied during a SARS outbreak in a tertiary hospital in Singapore

    Directory of Open Access Journals (Sweden)

    Earnest Arul

    2005-05-01

    Full Text Available Abstract Background The main objective of this study is to apply autoregressive integrated moving average (ARIMA) models to make real-time predictions on the number of beds occupied in Tan Tock Seng Hospital, during the recent SARS outbreak. Methods This is a retrospective study design. Hospital admission and occupancy data for isolation beds was collected from Tan Tock Seng hospital for the period 14th March 2003 to 31st May 2003. The main outcome measure was daily number of isolation beds occupied by SARS patients. Among the covariates considered were daily number of people screened, daily number of people admitted (including observation, suspect and probable cases) and days from the most recent significant event discovery. We utilized the following strategy for the analysis. Firstly, we split the outbreak data into two. Data from 14th March to 21st April 2003 was used for model development. We used structural ARIMA models in an attempt to model the number of beds occupied. Estimation is via the maximum likelihood method using the Kalman filter. For the ARIMA model parameters, we considered the simplest parsimonious lowest order model. Results We found that the ARIMA(1,0,3) model was able to describe and predict the number of beds occupied during the SARS outbreak well. The mean absolute percentage error (MAPE) for the training set and validation set were 5.7% and 8.6% respectively, which we found was reasonable for use in the hospital setting. Furthermore, the model also provided three-day forecasts of the number of beds required. Total number of admissions and probable cases admitted on the previous day were also found to be independent prognostic factors of bed occupancy. Conclusion ARIMA models provide useful tools for administrators and clinicians in planning for real-time bed capacity during an outbreak of an infectious disease such as SARS. The model could well be used in planning for bed-capacity during outbreaks of other infectious
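    A minimal sketch of the fit-and-forecast workflow described above, assuming the statsmodels ARIMA implementation and a simulated bed-occupancy series in place of the Tan Tock Seng data; the covariates and Kalman-filter details of the study are not reproduced.

```python
# Sketch of an ARIMA(1,0,3) fit, out-of-sample validation and short-term
# forecast, using a simulated daily bed-occupancy series for illustration.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
days = pd.date_range("2003-03-14", periods=79, freq="D")
beds = pd.Series(100 + np.cumsum(rng.normal(0, 2, len(days))), index=days).round()

train, test = beds[:39], beds[39:]              # development vs. validation split
fit_res = ARIMA(train, order=(1, 0, 3)).fit()   # maximum-likelihood estimation

pred = fit_res.forecast(steps=len(test))        # out-of-sample forecast
mape = np.mean(np.abs((test.values - pred.values) / test.values)) * 100
print(f"validation MAPE = {mape:.1f}%")
print(fit_res.forecast(steps=3))                # e.g. a three-day-ahead bed forecast
```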

  4. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  5. Analysis of subgrid scale mixing using a hybrid LES-Monte-Carlo PDF method

    International Nuclear Information System (INIS)

    Olbricht, C.; Hahn, F.; Sadiki, A.; Janicka, J.

    2007-01-01

    This contribution introduces a hybrid LES-Monte-Carlo method for a coupled solution of the flow and the multi-dimensional scalar joint pdf in two complex mixing devices. For this purpose an Eulerian Monte-Carlo method is used. First, a complex mixing device (jet-in-crossflow, JIC) is presented in which the stochastic convergence and the coherency between the scalar field solution obtained via finite-volume methods and that from the stochastic solution of the pdf for the hybrid method are evaluated. Results are compared to experimental data. Secondly, an extensive investigation of the micromixing on the basis of assumed shape and transported SGS-pdfs in a configuration with practical relevance is carried out. This consists of a mixing chamber with two opposite rows of jets penetrating a crossflow (multi-jet-in-crossflow, MJIC). Some numerical results are compared to available experimental data and to RANS based results. It turns out that the hybrid LES-Monte-Carlo method could achieve a detailed analysis of the mixing at the subgrid level

  6. Time line cell tracking for the approximation of lagrangian coherent structures with subgrid accuracy

    KAUST Repository

    Kuhn, Alexander

    2013-12-05

    Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCS is to compute height ridges in the finite-time Lyapunov exponent field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCS on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamic examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations.
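    For reference, the baseline the authors compare against, computing the finite-time Lyapunov exponent (FTLE) field from a gridded flow map and looking for its ridges, can be sketched as follows. The double-gyre flow map and the explicit Euler integrator are assumptions made for the example; the paper's subgrid geometric reconstruction itself is not reproduced here.

```python
# Standard FTLE computation: finite-difference the flow map on a grid, form the
# Cauchy-Green tensor C = F^T F, take its largest eigenvalue, and set
# FTLE = ln(sqrt(lambda_max)) / |T|.  The double-gyre field is a common test case.
import numpy as np

def flow_map(x0, y0, T=10.0, dt=0.05, A=0.1, eps=0.25, om=2 * np.pi / 10):
    """Advect a grid of initial positions through the double-gyre velocity field."""
    x, y = x0.copy(), y0.copy()
    for k in range(int(T / dt)):
        t = k * dt
        a, b = eps * np.sin(om * t), 1 - 2 * eps * np.sin(om * t)
        f = a * x**2 + b * x
        dfdx = 2 * a * x + b
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        x, y = x + dt * u, y + dt * v            # explicit Euler step
    return x, y

nx, ny, T = 201, 101, 10.0
X0, Y0 = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
XT, YT = flow_map(X0, Y0, T=T)

# Gradients of the flow map with respect to the initial positions
dXdx = np.gradient(XT, X0[0, :], axis=1)
dXdy = np.gradient(XT, Y0[:, 0], axis=0)
dYdx = np.gradient(YT, X0[0, :], axis=1)
dYdy = np.gradient(YT, Y0[:, 0], axis=0)

# Largest eigenvalue of C = F^T F via trace and determinant of the 2x2 tensor
tr = dXdx**2 + dYdx**2 + dXdy**2 + dYdy**2
det = (dXdx * dYdy - dXdy * dYdx) ** 2
lam_max = 0.5 * (tr + np.sqrt(np.maximum(tr**2 - 4 * det, 0.0)))
ftle = np.log(np.sqrt(np.maximum(lam_max, 1e-12))) / abs(T)
print("max FTLE on grid:", float(ftle.max()))
```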

  7. A nonlinear structural subgrid-scale closure for compressible MHD. I. Derivation and energy dissipation properties

    Energy Technology Data Exchange (ETDEWEB)

    Vlaykov, Dimitar G., E-mail: Dimitar.Vlaykov@ds.mpg.de [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Dynamik und Selbstorganisation, Am Faßberg 17, D-37077 Göttingen (Germany); Grete, Philipp [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Schmidt, Wolfram [Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, D-21029 Hamburg (Germany); Schleicher, Dominik R. G. [Departamento de Astronomía, Facultad Ciencias Físicas y Matemáticas, Universidad de Concepción, Av. Esteban Iturra s/n Barrio Universitario, Casilla 160-C (Chile)

    2016-06-15

    Compressible magnetohydrodynamic (MHD) turbulence is ubiquitous in astrophysical phenomena ranging from the intergalactic to the stellar scales. In studying them, numerical simulations are nearly inescapable, due to the large degree of nonlinearity involved. However, the dynamical ranges of these phenomena are much larger than what is computationally accessible. In large eddy simulations (LESs), the resulting limited resolution effects are addressed explicitly by introducing to the equations of motion additional terms associated with the unresolved, subgrid-scale dynamics. This renders the system unclosed. We derive a set of nonlinear structural closures for the ideal MHD LES equations with particular emphasis on the effects of compressibility. The closures are based on a gradient expansion of the finite-resolution operator [W. K. Yeo (CUP, 1993)] and require no assumptions about the nature of the flow or magnetic field. Thus, the scope of their applicability ranges from the sub- to the hyper-sonic and -Alfvénic regimes. The closures support spectral energy cascades both up and down-scale, as well as direct transfer between kinetic and magnetic resolved and unresolved energy budgets. They implicitly take into account the local geometry, and in particular, the anisotropy of the flow. Their properties are a priori validated in Paper II [P. Grete et al., Phys. Plasmas 23, 062317 (2016)] against alternative closures available in the literature with respect to a wide range of simulation data of homogeneous and isotropic turbulence.

  8. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
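    A small sketch of the two estimators being contrasted, assuming synthetic rotations and SciPy's Rotation class: the re-normalized quaternion barycenter and an iterative intrinsic (Karcher) mean. For small dispersion the two agree closely, which is the first-order relationship the paper analyses.

```python
# Quaternion barycenter (re-normalized arithmetic mean) vs. an iterative
# Karcher mean on SO(3), on synthetic rotations scattered around a reference.
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(3)
ref = R.from_rotvec([0.3, -0.2, 0.5])
samples = [R.from_rotvec(rng.normal(0, 0.1, 3)) * ref for _ in range(200)]

# (1) Barycenter of quaternions, sign-aligned, then re-normalized
q = np.array([s.as_quat() for s in samples])
q[np.sum(q * q[0], axis=1) < 0] *= -1          # resolve the q / -q ambiguity
q_mean = q.mean(axis=0)
bary = R.from_quat(q_mean / np.linalg.norm(q_mean))

# (2) Karcher mean: iterate in the tangent space via log/exp maps
est = samples[0]
for _ in range(20):
    tangent = np.mean([(s * est.inv()).as_rotvec() for s in samples], axis=0)
    est = R.from_rotvec(tangent) * est
    if np.linalg.norm(tangent) < 1e-10:
        break

print("barycenter vs Karcher (deg):", np.degrees((bary * est.inv()).magnitude()))
print("Karcher vs reference (deg): ", np.degrees((est * ref.inv()).magnitude()))
```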

  9. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    Science.gov (United States)

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method based on the auto-regressive integrated moving-average (ARIMA) model, applied to business data obtained at the Radiology Department. The model was built from the number of radiological examinations over the past 9 years, and we predicted the number of examinations for the most recent year; the actual values were then compared with the forecast values. We established that the prediction method was simple and cost-effective, using free software. In addition, we were able to build a simple model by pre-processing the data to remove trend components. The difference between predicted and actual values was 10%; however, it was more important to understand the chronological change than the individual time-series values. Furthermore, the method is highly versatile and adaptable to general time-series data. Therefore, different healthcare organizations can use it for the analysis and forecasting of their business data.

  10. Atmospheric Boundary Layer Modeling for Combined Meteorology and Air Quality Systems

    Science.gov (United States)

    Atmospheric Eulerian grid models for mesoscale and larger applications require sub-grid models for turbulent vertical exchange processes, particularly within the Planetary Boundary Layer (PBL). In combined meteorology and air quality modeling systems consistent PBL modeling of wi...

  11. Long-term climate: interpretation of the Vostok Antarctica drilling data, development of a zonally averaged dynamic model of the atmosphere

    International Nuclear Information System (INIS)

    Genthon, Christophe

    1988-01-01

    In the first part of the dissertation, data related to the evolution of the Earth's climate and environment are analysed in order to elucidate some possible mechanisms of climatic transition on the time scale of the recent Pleistocene glacial to interglacial cycles. The Vostok (East Antarctica) temperature profile and the (global) atmospheric carbon dioxide concentration profile, obtained over a time range of ∼160000 years from the analysis of a deep ice core drilled at Vostok, are the basis of this study. A simple model for statistically reconstructing the temperature profile is implemented, the input functions of which are selected from hypotheses on the forcing of the climate at Vostok. These hypotheses are supported by the presence in the two signals of the spectral bands characteristic of the astronomical theory of paleo-climate, by the high correlation between the variations of temperature and those of CO2, and by the fair coherence of these records with other paleo-climatic series from ocean sediment cores. An important contribution of the thermal greenhouse effect of CO2 is suggested through this approach, in reasonable agreement with results obtained by other researchers using general circulation models. We also describe a dynamical model of the atmosphere which uses the zonally averaged primitive equations. This model is being developed as part of a project on modelling the globally coupled climatic system (atmosphere, ocean, cryosphere). For both the dynamical and diabatic physical parts, it is a reduction to two dimensions (latitude and altitude) of an atmospheric general circulation model. The large-scale turbulence of the middle and high latitudes cannot be resolved with such a configuration: we describe parametrizations of the transports of sensible and latent heat by the baroclinic eddies, and a preliminary study of the transport of momentum. These parametrizations make use of the mixing length theory, the mixing coefficients being determined from the local

  12. Image-processing of time-averaged interface distributions representing CCFL characteristics in a large scale model of a PWR hot-leg pipe geometry

    International Nuclear Information System (INIS)

    Al Issa, Suleiman; Macián-Juan, Rafael

    2017-01-01

    Highlights: • CCFL characteristics are investigated in PWR large-scale hot-leg pipe geometry. • Image processing of the air-water interface produced time-averaged interface distributions. • Time-averages provide a comparative method of CCFL characteristics among different studies. • CCFL correlations depend upon the range of investigated water delivery for Dh ≫ 50 mm. • 1D codes are incapable of investigating CCFL because of lack of interface distribution. - Abstract: Countercurrent Flow Limitation (CCFL) was experimentally investigated in a 1/3.9 downscaled COLLIDER facility with a 190 mm pipe diameter using air/water at atmospheric pressure. Previous investigations provided knowledge of the onset of CCFL mechanisms. In the current article, CCFL characteristics at the COLLIDER facility are measured and discussed along with time-averaged distributions of the air/water interface for a selected matrix of liquid/gas velocities. The article demonstrates that the time-averaged interface is a useful means of identifying CCFL characteristics at quasi-stationary flow conditions, eliminating the variations that appear in single images and showing essential comparative flow features such as the degree of restriction at the bend, the extension and intensity of the two-phase mixing zones, and the average water level within the horizontal part and the steam generator. Consequently, it becomes possible to compare interface distributions obtained in different investigations. The distributions are also beneficial for CFD validations of CCFL, as the instantaneous chaotic gas/liquid interface is impossible to reproduce in CFD simulations. The current study shows that the final CCFL characteristics curve (and the corresponding CCFL correlation) depends upon the covered measuring range of water delivery. It also shows that the hydraulic diameter should be sufficiently larger than 50 mm in order to obtain CCFL characteristics comparable to the 1:1 scale data (namely the UPTF data). Finally
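    The time-averaging idea itself can be illustrated with a few lines of Python: a stack of instantaneous binary phase images is averaged frame by frame to give the probability that each pixel is wetted, from which a mean interface position follows. The synthetic frames below are placeholders for the COLLIDER camera images.

```python
# Time-averaged phase distribution from a stack of binary phase images
# (1 = water, 0 = air); the fluctuating synthetic frames stand in for real data.
import numpy as np

rng = np.random.default_rng(4)
n_frames, ny, nx = 500, 120, 400
frames = np.zeros((n_frames, ny, nx), dtype=np.uint8)

for k in range(n_frames):
    # fluctuating water level plus random interfacial waves (placeholder data)
    level = 60 + 10 * np.sin(0.05 * k) + rng.normal(0, 3, nx)
    rows = np.arange(ny)[:, None]
    frames[k] = (rows > level[None, :]).astype(np.uint8)

water_fraction = frames.mean(axis=0)              # time-averaged phase distribution
mean_interface = (water_fraction > 0.5).argmax(axis=0)   # 50% wetted contour row
print("mean interface row per column (first 10):", mean_interface[:10])
```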

  13. Comparison of four large-eddy simulation research codes and effects of model coefficient and inflow turbulence in actuator-line-based wind turbine modeling

    DEFF Research Database (Denmark)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Yilmaz, Ali Emre

    2018-01-01

    to match closely for all codes. The value of the Smagorinsky coefficient in the subgrid-scale turbulence model is shown to have a negligible effect on the time-averaged loads along the blades. Conversely, the breakdown location of the wake is strongly dependent on the Smagorinsky coefficient in uniform...... coefficient has a negligible effect on the wake profiles. It is concluded that for LES of wind turbines and wind farms using ALM, careful implementation and extensive cross-verification among codes can result in highly reproducible predictions. Moreover, the characteristics of the inflow turbulence appear...

  14. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff^0 ≈ 4 × 10^-6, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10^-8 and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < -1/3 can be found for strongly phantom models

  15. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing in levels of other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and of level positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
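    A toy illustration of one ingredient of such evaluations, correcting the observed level count for weak levels missed below a detection threshold under a Porter-Thomas (chi-squared, one degree of freedom) width distribution, is sketched below; the level sequence, threshold and width scale are invented for the example, and the full fitting procedures of the review are not reproduced.

```python
# Toy sketch: average level spacing with a Porter-Thomas missing-level correction.
import numpy as np
from scipy.special import erfc
from scipy.stats import chi2

rng = np.random.default_rng(5)
D_true, n_levels, gamma_avg, threshold = 10.0, 400, 1.0, 0.15   # arbitrary units

energies = np.cumsum(rng.exponential(D_true, n_levels))         # toy level sequence
widths = gamma_avg * chi2.rvs(df=1, size=n_levels, random_state=6)  # Porter-Thomas widths

observed = widths > threshold                                   # detection cut
E_obs = energies[observed]
D_obs = (E_obs[-1] - E_obs[0]) / (len(E_obs) - 1)

# Fraction of levels expected above the threshold under Porter-Thomas
frac = erfc(np.sqrt(threshold / (2.0 * gamma_avg)))
D_corrected = D_obs * frac

print(f"observed spacing   D_obs  = {D_obs:6.2f}")
print(f"corrected spacing  D_corr = {D_corrected:6.2f}  (true value {D_true})")
```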

  16. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  17. LOW-MASS GALAXY FORMATION IN COSMOLOGICAL ADAPTIVE MESH REFINEMENT SIMULATIONS: THE EFFECTS OF VARYING THE SUB-GRID PHYSICS PARAMETERS

    International Nuclear Information System (INIS)

    ColIn, Pedro; Vazquez-Semadeni, Enrique; Avila-Reese, Vladimir; Valenzuela, Octavio; Ceverino, Daniel

    2010-01-01

    We present numerical simulations aimed at exploring the effects of varying the sub-grid physics parameters on the evolution and the properties of the galaxy formed in a low-mass dark matter halo (∼7 × 10^10 h^-1 M_sun at redshift z = 0). The simulations are run within a cosmological setting with a nominal resolution of 218 pc comoving and are stopped at z = 0.43. For simulations that cannot resolve individual molecular clouds, we propose the criterion that the threshold density for star formation, n_SF, should be chosen such that the column density of the star-forming cells equals the threshold value for molecule formation, N ∼ 10^21 cm^-2, or ∼8 M_sun pc^-2. In all of our simulations, an extended old/intermediate-age stellar halo and a more compact younger stellar disk are formed, and in most cases, the halo's specific angular momentum is slightly larger than that of the galaxy, and sensitive to the SF/feedback parameters. We found that a non-negligible fraction of the halo stars are formed in situ in a spheroidal distribution. Changes in the sub-grid physics parameters affect significantly and in a complex way the evolution and properties of the galaxy: (1) lower threshold densities n_SF produce larger stellar effective radii R_e, less peaked circular velocity curves V_c(R), and greater amounts of low-density and hot gas in the disk mid-plane; (2) when stellar feedback is modeled by temporarily switching off radiative cooling in the star-forming regions, R_e increases (by a factor of ∼2 in our particular model), the circular velocity curve becomes flatter, and a complex multi-phase gaseous disk structure develops; (3) a more efficient local conversion of gas mass to stars, measured by a stellar particle mass distribution biased toward larger values, increases the strength of the feedback energy injection, driving outflows and inducing burstier SF histories; (4) if feedback is too strong, gas loss by galactic outflows, which are easier to produce in low

  18. Predicting debris-flow initiation and run-out with a depth-averaged two-phase model and adaptive numerical methods

    Science.gov (United States)

    George, D. L.; Iverson, R. M.

    2012-12-01

    Numerically simulating debris-flow motion presents many challenges due to the complicated physics of flowing granular-fluid mixtures, the diversity of spatial scales (ranging from a characteristic particle size to the extent of the debris flow deposit), and the unpredictability of the flow domain prior to a simulation. Accurately predicting debris-flows requires models that are complex enough to represent the dominant effects of granular-fluid interaction, while remaining mathematically and computationally tractable. We have developed a two-phase depth-averaged mathematical model for debris-flow initiation and subsequent motion. Additionally, we have developed software that numerically solves the model equations efficiently on large domains. A unique feature of the mathematical model is that it includes the feedback between pore-fluid pressure and the evolution of the solid grain volume fraction, a process that regulates flow resistance. This feature endows the model with the ability to represent the transition from a stationary mass to a dynamic flow. With traditional approaches, slope stability analysis and flow simulation are treated separately, and the latter models are often initialized with force balances that are unrealistically far from equilibrium. Additionally, our new model relies on relatively few dimensionless parameters that are functions of well-known material properties constrained by physical data (eg. hydraulic permeability, pore-fluid viscosity, debris compressibility, Coulomb friction coefficient, etc.). We have developed numerical methods and software for accurately solving the model equations. By employing adaptive mesh refinement (AMR), the software can efficiently resolve an evolving debris flow as it advances through irregular topography, without needing terrain-fit computational meshes. The AMR algorithms utilize multiple levels of grid resolutions, so that computationally inexpensive coarse grids can be used where the flow is absent, and

  19. Large Eddy/Reynolds-Averaged Navier-Stokes Simulations of CUBRC Base Heating Experiments

    Science.gov (United States)

    Salazar, Giovanni; Edwards, Jack R.; Amar, Adam J.

    2012-01-01

    Even with great advances in computational techniques and computing power during recent decades, the modeling of unsteady separated flows, such as those encountered in the wake of a re-entry vehicle, continues to be one of the most challenging problems in CFD. Of most interest to the aerothermodynamics community is accurately predicting transient heating loads on the base of a blunt body, which would result in reduced uncertainties and safety margins when designing a re-entry vehicle. However, the prediction of heat transfer can vary widely depending on the turbulence model employed. Therefore, selecting a turbulence model which realistically captures as much of the flow physics as possible will result in improved results. Reynolds Averaged Navier Stokes (RANS) models have become increasingly popular due to their good performance with attached flows, and the relatively quick turnaround time to obtain results. However, RANS methods cannot accurately simulate unsteady separated wake flows, and running direct numerical simulation (DNS) on such complex flows is currently too computationally expensive. Large Eddy Simulation (LES) techniques allow for the computation of the large eddies, which contain most of the Reynolds stress, while modeling the smaller (subgrid) eddies. This results in models which are more computationally expensive than RANS methods, but not as prohibitive as DNS. By complementing an LES approach with a RANS model, a hybrid LES/RANS method resolves the larger turbulent scales away from surfaces with LES, and switches to a RANS model inside boundary layers. As pointed out by Bertin et al., this type of hybrid approach has shown a lot of promise for predicting turbulent flows, but work is needed to verify that these models work well in hypersonic flows. The very limited amounts of flight and experimental data available presents an additional challenge for researchers. Recently, a joint study by NASA and CUBRC has focused on collecting heat transfer data

  20. Folding model analyses of 12C-12C and 16O-16O elastic scattering using the density-dependent LOCV-averaged effective interaction

    Science.gov (United States)

    Rahmat, M.; Modarres, M.

    2018-03-01

    The averaged effective two-body interaction (AEI), which can be generated through the lowest order constrained variational (LOCV) method for symmetric nuclear matter (SNM) with the input [Reid68, Ann. Phys. 50, 411 (1968), 10.1016/0003-4916(68)90126-7] nucleon-nucleon potential, is used as the effective nucleon-nucleon potential in the folding model to describe the heavy-ion (HI) elastic scattering cross sections. The elastic scattering cross sections of 12C-12C and 16O-16O systems are calculated in the above framework. The results are compared with the corresponding calculations coming from the fitting procedures with the input finite-range DDM3Y1-Reid potential and the available experimental data at different incident energies. It is shown that a reasonable description of the elastic 12C-12C and 16O-16O scattering data at low and medium energies can be obtained by using the above LOCV AEI, without any need to define a parametrized density-dependent function in the effective nucleon-nucleon potential, which is formally considered in the typical DDM3Y1-Reid interactions.

  1. Optimal Skin-to-Stone Distance Is a Positive Predictor for Successful Outcomes in Upper Ureter Calculi following Extracorporeal Shock Wave Lithotripsy: A Bayesian Model Averaging Approach.

    Directory of Open Access Journals (Sweden)

    Kang Su Cho

    Full Text Available To investigate whether skin-to-stone distance (SSD), which remains controversial in patients with ureter stones, can be a predicting factor for one-session success following extracorporeal shock wave lithotripsy (ESWL) in patients with upper ureter stones. We retrospectively reviewed the medical records of 1,519 patients who underwent their first ESWL between January 2005 and December 2013. Among these patients, 492 had upper ureter stones that measured 4-20 mm and were eligible for our analyses. Maximal stone length, mean stone density (HU), and SSD were determined on pretreatment non-contrast computed tomography (NCCT). For subgroup analyses, patients were divided into four groups. Group 1 consisted of patients with SSD < 25th percentile, group 2 consisted of patients with SSD in the 25th to 50th percentile, group 3 patients had SSD in the 50th to 75th percentile, and group 4 patients had SSD ≥ 75th percentile. In analyses of group 2 patients versus others, there were no statistical differences in mean age, stone length and density. However, the one-session success rate in group 2 was higher than in the other groups (77.9% vs. 67.0%; P = 0.032). The multivariate logistic regression model revealed that shorter stone length, lower stone density, and the group 2 SSD were positive predictors for successful outcomes in ESWL. Using the Bayesian model-averaging approach, longer stone length, lower stone density, and group 2 SSD can also be positive predictors for successful outcomes following ESWL. Our data indicate that a group 2 SSD of approximately 10 cm is a positive predictor for success following ESWL.

  2. Subgrid scale modeling in large-eddy simulation of turbulent combustion using premixed flamelet chemistry

    NARCIS (Netherlands)

    Vreman, A.W.; Oijen, van J.A.; Goey, de L.P.H.; Bastiaans, R.J.M.

    2009-01-01

    Large-eddy simulation (LES) of turbulent combustion with premixed flamelets is investigated in this paper. The approach solves the filtered Navier-Stokes equations supplemented with two transport equations, one for the mixture fraction and another for a progress variable. The LES premixed flamelet

  3. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    Science.gov (United States)

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
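    The two upscaling ingredients described above can be sketched as follows: cell-block averaging of the high-resolution elevations for the coarse-cell interiors, and a cell-face statistic (here the minimum along the shared edge) so that a narrow channel still connects adjacent coarse cells. The synthetic DEM, coarsening factor and choice of the minimum as the face statistic are assumptions made for the example, not the paper's exact scheme.

```python
# Cell-block averaging plus a cell-face statistic on a synthetic high-resolution DEM.
import numpy as np

rng = np.random.default_rng(7)
factor = 10                                     # coarsening factor
fine = rng.normal(2.0, 0.3, (200, 200))         # high-resolution elevations (m)
fine[95:105, :] -= 1.5                          # carve a narrow east-west channel

ny, nx = fine.shape
nyc, nxc = ny // factor, nx // factor

# Cell-block average elevation for each coarse cell
blocks = fine.reshape(nyc, factor, nxc, factor)
cell_block_elev = blocks.mean(axis=(1, 3))

# Cell-face elevation: minimum along each shared vertical face between
# horizontally adjacent coarse cells (controls east-west conveyance)
face_cols = np.arange(factor, nx, factor)       # fine columns lying on coarse faces
face_elev_x = fine[:, face_cols].reshape(nyc, factor, -1).min(axis=1)

print("block-averaged elevations in the channel row:", cell_block_elev[9].round(2))
print("face minimum elevations in the same row:     ", face_elev_x[9].round(2))
# The face minima keep the channel connected even though block averages smear it out.
```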

  4. Comparison of population-averaged and cluster-specific models for the analysis of cluster randomized trials with missing binary outcomes: a simulation study

    Directory of Open Access Journals (Sweden)

    Ma Jinhui

    2013-01-01

    Full Text Available Abstract Background The objective of this simulation study is to compare the accuracy and efficiency of population-averaged (i.e., generalized estimating equations, GEE) and cluster-specific (i.e., random-effects logistic regression, RELR) models for analyzing data from cluster randomized trials (CRTs) with missing binary responses. Methods In this simulation study, clustered responses were generated from a beta-binomial distribution. The number of clusters per trial arm, the number of subjects per cluster, intra-cluster correlation coefficient, and the percentage of missing data were allowed to vary. Under the assumption of covariate dependent missingness, missing outcomes were handled by complete case analysis, standard multiple imputation (MI) and within-cluster MI strategies. Data were analyzed using GEE and RELR. Performance of the methods was assessed using standardized bias, empirical standard error, root mean squared error (RMSE), and coverage probability. Results GEE performs well on all four measures, provided the downward bias of the standard error (when the number of clusters per arm is small) is adjusted appropriately, under the following scenarios: complete case analysis for CRTs with a small amount of missing data; standard MI for CRTs with variance inflation factor (VIF 50. RELR performs well only when a small amount of data was missing, and complete case analysis was applied. Conclusion GEE performs well as long as appropriate missing data strategies are adopted based on the design of CRTs and the percentage of missing data. In contrast, RELR does not perform well when either standard or within-cluster MI strategy is applied prior to the analysis.
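    A minimal sketch of the population-averaged (GEE) analysis referred to above, assuming statsmodels and simulated clustered binary outcomes with an exchangeable working correlation; the missing-data and multiple-imputation steps of the study are not reproduced.

```python
# GEE fit for a clustered binary outcome with an exchangeable working correlation.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
clusters, size = 20, 30
rows = []
for c in range(clusters):
    arm = c % 2                                   # cluster-level treatment arm
    u = rng.normal(0, 0.5)                        # cluster random effect
    for _ in range(size):
        p = 1 / (1 + np.exp(-(-0.5 + 0.8 * arm + u)))
        rows.append({"cluster": c, "arm": arm, "y": rng.binomial(1, p)})
df = pd.DataFrame(rows)

X = sm.add_constant(df[["arm"]])
model = sm.GEE(df["y"], X, groups=df["cluster"],
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
print("estimated exchangeable correlation:", model.cov_struct.dep_params)
```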

  5. THOR: A New Higher-Order Closure Assumed PDF Subgrid-Scale Parameterization; Evaluation and Application to Low Cloud Feedbacks

    Science.gov (United States)

    Firl, G. J.; Randall, D. A.

    2013-12-01

    The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been

  6. Collaborative Research: Lagrangian Modeling of Dispersion in the Planetary Boundary Layer

    National Research Council Canada - National Science Library

    Weil, Jeffrey

    2003-01-01

    ...), using Lagrangian "particle" models coupled with large-eddy simulation (LES) fields. A one-particle model for the mean concentration field was enhanced by a theoretically improved treatment of the LES subgrid-scale (SGS) velocities...

  7. Geotail observations of plasma sheet ion composition over 16 years: On variations of average plasma ion mass and O+ triggering substorm model

    Science.gov (United States)

    Nosé, M.; Ieda, A.; Christon, S. P.

    2009-07-01

    We examined long-term variations of ion composition in the plasma sheet, using energetic (9.4-212.1 keV/e) ion flux data obtained by the suprathermal ion composition spectrometer (STICS) sensor of the energetic particle and ion composition (EPIC) instrument on board the Geotail spacecraft. EPIC/STICS observations are available from 17 October 1992 for more than 16 years, covering the declining phase of solar cycle 22, all of solar cycle 23, and the early phase of solar cycle 24. This unprecedented long-term data set revealed that (1) the He+/H+ and O+/H+ flux ratios in the plasma sheet were dependent on the F10.7 index; (2) the F10.7 index dependence is stronger for O+/H+ than He+/H+; (3) the O+/H+ flux ratio is also weakly correlated with the ΣKp index; and (4) the He2+/H+ flux ratio in the plasma sheet appeared to show no long-term trend. From these results, we derived empirical equations related to plasma sheet ion composition and the F10.7 index and estimated that the average plasma ion mass changes from ˜1.1 amu during solar minimum to ˜2.8 amu during solar maximum. In such a case, the Alfvén velocity during solar maximum decreases to ˜60% of the solar minimum value. Thus, physical processes in the plasma sheet are considered to be much different between solar minimum and solar maximum. We also compared long-term variation of the plasma sheet ion composition with that of the substorm occurrence rate, which is evaluated by the number of Pi2 pulsations. No correlation or negative correlation was found between them. This result contradicts the O+ triggering substorm model, in which heavy ions in the plasma sheet increase the growth rate of the linear ion tearing mode and play an important role in localization and initiation of substorms. In contrast, O+ ions in the plasma sheet may prevent occurrence of substorms.
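    The arithmetic behind the quoted average plasma ion mass and Alfvén-speed change can be sketched as follows; the abundance ratios used are illustrative placeholders, not the paper's F10.7-dependent fits.

```python
# Number-density-weighted mean ion mass from He+/H+ and O+/H+ ratios, and the
# implied change in Alfven speed (v_A scales as 1/sqrt(mass density)).
import numpy as np

def mean_ion_mass(r_he, r_o):
    """Average ion mass in amu for given He+/H+ and O+/H+ density ratios."""
    return (1.0 + 4.0 * r_he + 16.0 * r_o) / (1.0 + r_he + r_o)

solar_min = mean_ion_mass(r_he=0.02, r_o=0.005)   # illustrative quiet-Sun ratios
solar_max = mean_ion_mass(r_he=0.05, r_o=0.12)    # illustrative active-Sun ratios

print(f"mean ion mass, solar minimum ≈ {solar_min:.2f} amu")
print(f"mean ion mass, solar maximum ≈ {solar_max:.2f} amu")
# For fixed number density, v_A(max)/v_A(min) = sqrt(m_min / m_max):
print(f"Alfven speed ratio ≈ {np.sqrt(solar_min / solar_max):.2f}")
```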

  8. Spatial averaging of fields from half-wave dipole antennas and corresponding SAR calculations in the NORMAN human voxel model between 65 MHz and 2 GHz.

    Science.gov (United States)

    Findlay, R P; Dimbylow, P J

    2009-04-21

    If an antenna is located close to a person, the electric and magnetic fields produced by the antenna will vary in the region occupied by the human body. To obtain a mean value of the field for comparison with reference levels, the Institute of Electrical and Electronics Engineers (IEEE) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommend spatially averaging the squares of the field strength over the height of the body. This study attempts to assess the validity and accuracy of spatial averaging when used for half-wave dipoles at frequencies between 65 MHz and 2 GHz and distances of λ/2, λ/4 and λ/8 from the body. The differences between mean electric field values calculated using ten field measurements and the true averaged value were approximately 15% in the 600 MHz to 2 GHz range. The results presented suggest that the use of modern survey equipment, which takes hundreds rather than tens of measurements, is advisable to arrive at a sufficiently accurate mean field value. Whole-body averaged and peak localized SAR values, normalized to calculated spatially averaged fields, were calculated for the NORMAN voxel phantom. It was found that the reference levels were conservative for all whole-body SAR values, but not for localized SAR, particularly in the 1-2 GHz region when the dipole was positioned very close to the body. However, if the maximum field is used for normalization of calculated SAR as opposed to the lower spatially averaged value, the reference levels provide a conservative estimate of the localized SAR basic restriction for all frequencies studied.
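    The discrete spatial-averaging step can be sketched as follows: the squares of the field strength are averaged over sample points spanning the body height, and the root of that mean is compared for ten versus several hundred points. The field profile along the body is a synthetic stand-in for a real dipole near-field, and the body height is an assumed value.

```python
# Spatial averaging of the squared field strength over the body height,
# evaluated with different numbers of sample points.
import numpy as np

height = 1.76                                   # assumed body height (m)
z_fine = np.linspace(0.0, height, 2000)
# synthetic near-field profile with a peak near the dipole feed height
E = 10.0 + 40.0 * np.exp(-((z_fine - 1.1) / 0.15) ** 2)   # V/m

def spatially_averaged_field(z_samples):
    E_s = np.interp(z_samples, z_fine, E)
    return np.sqrt(np.mean(E_s ** 2))           # root of the mean squared field

for n in (10, 100, 1000):
    z = np.linspace(0.0, height, n)
    print(f"{n:4d} sample points: mean field = {spatially_averaged_field(z):.2f} V/m")
# Coarse sampling of a strongly non-uniform profile can differ appreciably from
# the densely sampled average, which is why many-point survey equipment helps.
```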

  9. Development of a Regional Lidar-Derived Above-Ground Biomass Model with Bayesian Model Averaging for Use in Ponderosa Pine and Mixed Conifer Forests in Arizona and New Mexico, USA

    Directory of Open Access Journals (Sweden)

    Karis Tenneson

    2018-03-01

    Full Text Available Historical forest management practices in the southwestern US have left forests prone to high-severity, stand-replacement fires. Reducing the cost of forest-fire management and reintroducing fire to the landscape without negative impact depends on detailed knowledge of stand composition, in particular, above-ground biomass (AGB). Lidar-based modeling techniques provide opportunities to increase the ability of managers to monitor AGB and other forest metrics at reduced cost. We developed a regional lidar-based statistical model to estimate AGB for Ponderosa pine and mixed conifer forest systems of the southwestern USA, using previously collected field data. Model selection was performed using Bayesian model averaging (BMA) to reduce researcher bias, fully explore the model space, and avoid overfitting. The selected model includes measures of canopy height, canopy density, and height distribution. The model selected with BMA explains 71% of the variability in field estimates of AGB, and the RMSEs of the two independent validation data sets are 23.25 and 32.82 Mg/ha. The regional model is structured in accordance with previously described local models, and performs equivalently to these smaller-scale models. We have demonstrated the effectiveness of lidar for developing cost-effective, robust regional AGB models for monitoring and planning adaptively at the landscape scale.
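    A hedged sketch of Bayesian model averaging over candidate linear AGB models, using BIC-based approximate posterior weights over all predictor subsets; the lidar metric names and the synthetic data are placeholders, and the paper's own BMA implementation and variable set are not reproduced.

```python
# Approximate BMA: enumerate predictor subsets, fit least squares, weight each
# model by exp(-0.5 * delta BIC), and report the highest-weight models.
import itertools
import numpy as np

rng = np.random.default_rng(9)
n = 150
metrics = {                                    # placeholder lidar canopy metrics
    "p95_height": rng.uniform(5, 35, n),
    "canopy_cover": rng.uniform(0.1, 0.9, n),
    "height_cv": rng.uniform(0.1, 0.6, n),
}
agb = (4.0 * metrics["p95_height"] + 60.0 * metrics["canopy_cover"]
       + rng.normal(0, 15, n))                 # synthetic field AGB (Mg/ha)

names = list(metrics)
results = []
for k in range(1, len(names) + 1):
    for subset in itertools.combinations(names, k):
        A = np.column_stack([np.ones(n)] + [metrics[v] for v in subset])
        beta, *_ = np.linalg.lstsq(A, agb, rcond=None)
        rss = np.sum((agb - A @ beta) ** 2)
        bic = n * np.log(rss / n) + (len(subset) + 1) * np.log(n)
        results.append((subset, bic, beta))

bics = np.array([r[1] for r in results])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()                       # approximate posterior model probabilities

for (subset, _, _), w in sorted(zip(results, weights), key=lambda t: -t[1])[:3]:
    print(f"P(model) ≈ {w:.2f}  predictors: {subset}")
```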

  10. Physical modelling of interactions between interfaces and turbulence; Modelisation physique des interactions entre interfaces et turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Toutant, A

    2006-12-15

    The complex interactions between interfaces and turbulence strongly impact the flow properties. Unfortunately, Direct Numerical Simulations (DNS) have to entail a number of degrees of freedom proportional to the third power of the Reynolds number to correctly describe the flow behaviour. This extremely hard constraint makes it impossible to use DNS for industrial applications. Our strategy consists in using and improving DNS method in order to develop the Interfaces and Sub-grid Scales concept. ISS is a two-phase equivalent to the single-phase Large Eddy Simulation (LES) concept. The challenge of ISS is to integrate the two-way coupling phenomenon into sub-grid models. Applying a space filter, we have exhibited correlations or sub-grid terms that require closures. We have shown that, in two-phase flows, the presence of a discontinuity leads to specific sub-grid terms. Comparing the maximum of the norm of the sub-grid terms with the maximum of the norm of the advection tensor, we have found that sub-grid terms related to interfacial forces and viscous effect are negligible. Consequently, in the momentum balance, only the sub-grid terms related to inertia have to be closed. Thanks to a priori tests performed on several DNS data, we demonstrate that the scale similarity hypothesis, reinterpreted near discontinuity, provides sub-grid models that take into account the two-way coupling phenomenon. These models correspond to the first step of our work. Indeed, in this step, interfaces are smooth and, interactions between interfaces and turbulence occur in a transition zone where each physical variable varies sharply but continuously. The next challenge has been to determine the jump conditions across the sharp equivalent interface corresponding to the sub-grid models of the transition zone. We have used the matched asymptotic expansion method to obtain the jump conditions. The first tests on the velocity of the sharp equivalent interface are very promising (author)

  11. Modelling transport and deposition of caesium and iodine from the Chernobyl accident using the DREAM model

    Directory of Open Access Journals (Sweden)

    J. Brandt

    2002-01-01

    Full Text Available A tracer model, DREAM (the Danish Rimpuff and Eulerian Accidental release Model), has been developed for modelling transport, dispersion and deposition (wet and dry) of radioactive material from accidental releases, such as the Chernobyl accident. The model is a combination of a Lagrangian model, which includes the near-source dispersion, and an Eulerian model describing the long-range transport. The performance of the transport model has previously been tested within the European Tracer Experiment, ETEX, which included transport and dispersion of an inert, non-depositing tracer from a controlled release. The focus of this paper is the model performance with respect to the total deposition of 137Cs, 134Cs and 131I from the Chernobyl accident, using different relatively simple and comprehensive parameterizations for dry and wet deposition. The performance, compared to measurements, of different combinations of two wet deposition parameterizations and three dry deposition parameterizations has been evaluated, using different statistical tests. The best model performance, compared to measurements, is obtained when the total deposition is parameterized by combining a simple method for dry deposition with a subgrid-scale averaging scheme for wet deposition based on relative humidities. The same major conclusion is obtained for all three radioactive isotopes and with two different deposition measurement databases. Large differences are seen in the results obtained using the two wet deposition parameterizations based on precipitation rates and relative humidities, respectively. The parameterization based on subgrid-scale averaging performs better in all cases than the parameterization based on precipitation rates. This indicates that the in-cloud scavenging process is more important than the below-cloud scavenging process for the submicron particles and that the precipitation rates are

  12. Introducing Subgrid-scale Cloud Feedbacks to Radiation for Regional Meteorological and Climate Modeling

    Science.gov (United States)

    Convection systems and associated cloudiness directly influence regional and local radiation budgets, and dynamics and thermodynamics through feedbacks. However, most subgrid-scale convective parameterizations in regional weather and climate models do not consider cumulus cloud ...

  13. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
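    The identity stated in the results can be checked numerically in a few lines; the variable, weights and sample size below are arbitrary choices for the demonstration.

```python
# Numerical check: the difference between two weighted averages of a variable
# equals the covariance between the variable and the ratio of the weighting
# functions, divided by the average of that ratio (covariance and average
# both taken under the second weighting).
import numpy as np

rng = np.random.default_rng(10)
v = rng.normal(50, 10, 1000)          # the variable (e.g., age-specific rates)
f = rng.uniform(0.5, 2.0, 1000)       # first weighting function
g = rng.uniform(0.5, 2.0, 1000)       # second weighting function

avg_f = np.average(v, weights=f)
avg_g = np.average(v, weights=g)

r = f / g
mean_r = np.average(r, weights=g)
cov_vr = np.average(v * r, weights=g) - np.average(v, weights=g) * mean_r

print(avg_f - avg_g, cov_vr / mean_r)   # the two numbers agree
```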

  14. The model evaluation of subsonic aircraft effect on the ozone and radiative forcing

    Energy Technology Data Exchange (ETDEWEB)

    Rozanov, E; Zubov, V; Egorova, T; Ozolin, Y [Main Geophysical Observatory, St.Petersburg (Russian Federation)

    1998-12-31

    A two-dimensional transient zonally averaged model was used to evaluate the effect of subsonic aircraft exhausts upon ozone, trace gases and radiation in the troposphere and lower stratosphere. The mesoscale transformation of gas composition was included on the basis of box model simulations. It has been found that the sub-grid-scale transformation of the exhausted gases is able to influence the results of the modelling. The radiative forcing caused by changes in gases, sulfate aerosol, soot and contrails was estimated to be as large as 0.12-0.15 W/m^2 (0.08 W/m^2 globally and annually averaged). (author) 10 refs.

  16. The Effectiveness of a Cohort Model as a Predictor of Grade Point Average and Graduation Status of Pre-Health Sciences Students in a Public Community College

    Science.gov (United States)

    Brandon, Elvis Nash

    2017-01-01

    There is a college completion crisis in the United States. In today's competitive job market, health sciences students cannot afford to fail in their educational attainment. The purpose of this study was to determine if participation in the cohort model is a predictor of the success of public community college pre-health sciences students.…

  17. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which

  18. Bayesian model averaging in vector autoregressive processes with an investigation of stability of the US great ratios and risk of a liquidity trap in the USA, UK and Japan

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2007-01-01

    A Bayesian model averaging procedure is presented within the class of vector autoregressive (VAR) processes and applied to two empirical issues. First, stability of the "Great Ratios" in U.S. macro-economic time series is investigated, together with the presence and effects of permanent

  19. Ion induced scintillation in organic solids: development of an average track model,degradation of the scintillation intensity and dosimetric applications

    International Nuclear Information System (INIS)

    Broggio, D.

    2004-12-01

    This work deals with a specific aspect of the ion-matter interaction: the scintillation induced by ions in organic materials. In the first chapter we approach the issue theoretically by proposing a method to compute radial doses within the framework of the mean track model. We have developed a model based on the Lewis transport equation and on the Spencer distribution of the energy loss in order to account for the transport of secondary electrons in a more realistic way. In the second chapter we study the physical mechanisms that trigger ion-induced scintillation. Ion-induced scintillation is characterized by the dependence on charge number of the scintillation intensity for ions with the same energy loss, and by the saturation of the scintillation efficiency for ions with high stopping power. We have applied our radial dose model to ion-induced scintillation. In the third chapter we study the gradual degradation of the scintillation intensity and ion-induced chemical damage. In the last chapter we propose a prototype dosimeter based on the combination of scintillators and optical fibers that allows real-time measurement of the dose delivered by a carbon ion beam under therapeutic conditions. This dosimeter gives the relationship between the dose and the scintillation intensity, but its accuracy is not yet sufficient for use in radiotherapy. (A.C.)

  20. Light-duty vehicle fuel economy improvements, 1979--1998: A consumer purchase model of corporate average fuel economy, fuel price, and income effects

    Science.gov (United States)

    Chien, David Michael

    2000-10-01

    The Energy Policy and Conservation Act of 1975, which created fuel economy standards for automobiles and light trucks, was passed by Congress in response to the rapid rise in world oil prices that resulted from the 1973 oil crisis. The standards were first implemented in 1978 for automobiles and 1979 for light trucks, beginning with initial standards of 18 MPG for automobiles and 17.2 MPG for light trucks. The current fuel economy standards for 1998 have been held constant at 27.5 MPG for automobiles and 20.5 MPG for light trucks since 1990-1991. While actual new automobile fuel economy has almost doubled from 14 MPG in 1974 to 27.2 MPG in 1994, it is reasonable to ask whether the CAFE standards are still needed. Each year Congress attempts to pass another increase in the Corporate Average Fuel Economy (CAFE) standard and fails. Many have called for the abolition of CAFE standards, citing the ineffectiveness of the standards in the past. In order to determine whether CAFE standards should be increased, held constant, or repealed, an evaluation of the effectiveness of the CAFE standards to date must be established. Because fuel prices were rising concurrently with the CAFE standards, many authors have attributed the rapid rise in new car fuel economy solely to fuel prices. The purpose of this dissertation is to re-examine the determinants of new car fuel economy via three effects: CAFE regulations, fuel price, and income effects. By measuring the marginal effects of the three fuel economy determinants upon consumers' and manufacturers' choices of fuel economy, an estimate was made of the influence of each upon new-car fuel economy. The conclusions of this dissertation present some clear signals to policymakers: CAFE standards have been very effective in increasing fuel economy from 1979 to 1998, and they have been the main cause of fuel economy improvement, with income being a much smaller component. Furthermore, this dissertation has suggested that fuel prices have

  1. Ensemble averaged multi-phase Eulerian model for columnar/equiaxed solidification of a binary alloy: II. Simulation of the columnar-to-equiaxed transition (CET)

    International Nuclear Information System (INIS)

    Ciobanas, A I; Fautrelle, Y

    2007-01-01

    A new multi-phase Eulerian model for columnar and equiaxed dendritic solidification has been developed. In this paper we first focus on the numerical simulation of quasi-steady solidification experiments in order to obtain corresponding CET maps. We have identified three main zones on the CET map: the pure columnar zone, the pure equiaxed zone and the mixed columnar+equiaxed zone. The mixed c/e zone was further quantified by means of a columnar fraction ε_c which quantifies the two coexisting structures in a rigorous way. Since it intrinsically includes both the solutal and the mechanical blocking effects, the new ensemble model unifies the semi-empirical approach of Hunt (pure mechanical blocking mechanism) and the approach of Martorano et al (pure solutal blocking mechanism). Secondly, the present model was used to simulate unidirectional solidification experiments. It was found that the columnar front evolves in a quasi-steady state until a time very close to the critical CET moment. It is also found that the equiaxed nucleation undercooling is close to the maximum columnar dendrite tip undercooling and that the CET is virtually independent of the equiaxed zone ahead of the columnar front. If the equiaxed zone is not taken into account, the columnar front velocity exhibits a sudden increase at the beginning of the solidification, followed by a quasi-plateau corresponding to a quasi-steady state at the columnar tips and finally, above a critical time, an oscillatory evolution. The onset of the oscillatory evolution of the columnar front correlates well with the CET position measured in the experiments. We also find that this oscillatory evolution of the columnar front is very favourable for the fragmentation of the columnar dendrites and thus for the CET. In this respect, it seems that the unsteady regime of the columnar front with respect to the local cooling conditions represents the main cause of the CET phenomena, at least for the non

  2. Synergies Between Grace and Regional Atmospheric Modeling Efforts

    Science.gov (United States)

    Kusche, J.; Springer, A.; Ohlwein, C.; Hartung, K.; Longuevergne, L.; Kollet, S. J.; Keune, J.; Dobslaw, H.; Forootan, E.; Eicker, A.

    2014-12-01

    In the meteorological community, efforts converge towards the implementation of high-resolution … precipitation, evapotranspiration and runoff data, confirming that the model performs favorably in representing observations. We show that after GRACE-derived bias correction, basin-average hydrological conditions prior to 2002 can be reconstructed better than before. Next, comparing GRACE with CLM forced by EURO-CORDEX simulations allows identifying processes needing improvement in the model. Finally, we compare COSMO-EU atmospheric pressure, a proxy for mass corrections in satellite gravimetry, with ERA-Interim over Europe at timescales shorter/longer than 1 month, and spatial scales below/above the ERA resolution. We find differences between the regional and global model more pronounced at high frequencies, with magnitudes at sub-grid scale and larger scales corresponding to 1-3 hPa (1-3 cm EWH); relevant for the assessment of post-GRACE concepts.

  3. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because, in logarithmic retrievals, the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
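
    The basic bias discussed above can be reproduced with a few lines of Python; the log-normal variability and its width below are illustrative assumptions, not the simulator settings of the study.

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical "true" mixing ratios with large relative (log-normal) variability.
        vmr = rng.lognormal(mean=np.log(5e-9), sigma=0.8, size=1000)

        linear_mean = vmr.mean()                  # average of abundance retrievals
        log_mean = np.exp(np.log(vmr).mean())     # average of log retrievals, back-transformed

        # The back-transformed log (geometric) mean is systematically lower than the
        # arithmetic mean, one of the biases that grows with the local variability.
        print(linear_mean, log_mean, (log_mean - linear_mean) / linear_mean)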

  4. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    textabstractWhile the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  5. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  6. Average male and female virtual dummy model (BioRID and EvaRID) simulations with two seat concepts in the Euro NCAP low severity rear impact test configuration.

    Science.gov (United States)

    Linder, Astrid; Holmqvist, Kristian; Svensson, Mats Y

    2018-05-01

    Soft tissue neck injuries, also referred to as whiplash injuries, which can lead to long-term suffering, account for more than 60% of the cost of all injuries leading to permanent medical impairment for the insurance companies, with respect to injuries sustained in vehicle crashes. These injuries are sustained in all impact directions; however, they are most common in rear impacts. Injury statistics have, since the mid-1960s, consistently shown that females are subject to a higher risk of sustaining this type of injury than males, on average twice the risk. Furthermore, some recently developed anti-whiplash systems have been shown to provide less protection for females than males. The protection of both males and females should be addressed equally when designing and evaluating vehicle safety systems to ensure maximum safety for everyone. This is currently not the case. The norm for crash test dummies representing humans in crash test laboratories is an average male. The female part of the population is not represented in tests performed by consumer information organisations such as NCAP or in regulatory tests, due to the absence of a physical dummy representing an average female. Recently, the world's first virtual model of an average female crash test dummy was developed. In this study, simulations were run with both this model and an average male dummy model, seated in a simplified model of a vehicle seat. The results of the simulations were compared to earlier published results from simulations run in the same test set-up with a vehicle concept seat. The three crash pulse severities of the Euro NCAP low severity rear impact test were applied. The motion of the neck, head and upper torso was analysed in addition to the accelerations and the Neck Injury Criterion (NIC). Furthermore, the response of the virtual models was compared to the response of volunteers as well as, for the average male model, to the response of a physical dummy. Simulations

  7. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
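
    For context, the brute-force alternative that such analytic approximations are typically benchmarked against is the plain Monte-Carlo (bootstrap) resampling average sketched below; the data, the statistic (the median) and the number of resamples are illustrative only.

        import numpy as np

        rng = np.random.default_rng(2)
        data = rng.normal(loc=1.0, scale=2.0, size=200)

        # Brute-force resampling average: re-evaluate the statistic on many
        # bootstrap samples and average, which is what the analytic approach avoids.
        n_boot = 5000
        boot_stats = [np.median(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)]
        print(np.mean(boot_stats), np.std(boot_stats))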

  8. On Parametric Sensitivity of Reynolds-Averaged Navier-Stokes SST Turbulence Model: 2D Hypersonic Shock-Wave Boundary Layer Interactions

    Science.gov (United States)

    Brown, James L.

    2014-01-01

    Examined is sensitivity of separation extent, wall pressure and heating to variation of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and Axisymmetric Hypersonic Shock Wave Turbulent Boundary Layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments extensively used in a prior related uncertainty analysis provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations as to future turbulence approach.

  9. Site-specific dissociation dynamics of H{sub 2}/D{sub 2} on Ag(111) and Co(0001) and the validity of the site-averaging model

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Xixi [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Institute of Theoretical and Computational Chemistry, Key Laboratory of Mesoscopic Chemistry, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing 210093 (China); Jiang, Bin [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China); Xie, Daiqian, E-mail: dqxie@nju.edu.cn, E-mail: hguo@unm.edu [Institute of Theoretical and Computational Chemistry, Key Laboratory of Mesoscopic Chemistry, School of Chemistry and Chemical Engineering, Nanjing University, Nanjing 210093 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Guo, Hua, E-mail: dqxie@nju.edu.cn, E-mail: hguo@unm.edu [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States)

    2015-09-21

    Dissociative chemisorption of polyatomic molecules on metal surfaces involves high-dimensional dynamics, of which quantum mechanical treatments are computationally challenging. A promising reduced-dimensional approach approximates the full-dimensional dynamics by a weighted average of fixed-site results. To examine the performance of this site-averaging model, we investigate two distinct reactions, namely, hydrogen dissociation on Co(0001) and Ag(111), using accurate first principles potential energy surfaces (PESs). The former has a very low barrier of ∼0.05 eV while the latter is highly activated with a barrier of ∼1.15 eV. These two systems allow the investigation of not only site-specific dynamical behaviors but also the validity of the site-averaging model. It is found that the reactivity is not only controlled by the barrier height but also by the topography of the PES. Moreover, the agreement between the site-averaged and full-dimensional results is much better on Ag(111), though quantitative in neither system. Further quasi-classical trajectory calculations showed that the deviations can be attributed to dynamical steering effects, which are present in both reactions at all energies.
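
    The reduced-dimensional idea itself is just a weighted sum over fixed impact sites; the sketch below assumes hypothetical site probabilities and weights (e.g. site multiplicities), purely for illustration.

        import numpy as np

        def site_averaged_probability(p_site, weights):
            """Site-averaging model: weighted average of fixed-site dissociation
            probabilities as a reduced-dimensional estimate of the full result."""
            w = np.asarray(weights, dtype=float)
            return np.asarray(p_site, dtype=float) @ (w / w.sum())

        p_sites = [0.12, 0.05, 0.30, 0.27]   # illustrative fixed-site probabilities (top, bridge, fcc, hcp)
        w_sites = [1.0, 2.0, 1.0, 1.0]       # illustrative statistical weights of the sites
        print(site_averaged_probability(p_sites, w_sites))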

  10. Urban runoff (URO) process for MODFLOW 2005: simulation of sub-grid scale urban hydrologic processes in Broward County, FL

    Science.gov (United States)

    Decker, Jeremy D.; Hughes, J.D.

    2013-01-01

    Climate change and sea-level rise could cause substantial changes in urban runoff and flooding in low-lying coastal landscapes. A major challenge for local government officials and decision makers is to translate the potential global effects of climate change into actionable and cost-effective adaptation and mitigation strategies at county and municipal scales. A MODFLOW process is used to represent sub-grid scale hydrology in urban settings to help address these issues. Coupled interception, surface-water, depression, and unsaturated-zone storage are represented. A two-dimensional diffusive wave approximation is used to represent overland flow. Three different options for representing infiltration and recharge are presented. Additional features include structure, barrier, and culvert flow between adjacent cells, specified-stage boundaries, critical-flow boundaries, source/sink surface-water terms, and bi-directional runoff to the MODFLOW Surface-Water Routing process. Some capabilities of the Urban RunOff (URO) process are demonstrated with a synthetic problem using four land uses and varying cell coverages. Precipitation from a hypothetical storm was applied, and cell-by-cell surface-water depth, groundwater level, infiltration rate, and groundwater recharge rate are shown. Results indicate the URO process is able to produce time-varying, water-content-dependent infiltration and leakage, and successfully interacts with MODFLOW.

  11. Sensitivities of simulated satellite views of clouds to subgrid-scale overlap and condensate heterogeneity

    Energy Technology Data Exchange (ETDEWEB)

    Hillman, Benjamin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marchand, Roger T. [Univ. of Washington, Seattle, WA (United States); Ackerman, Thomas P. [Univ. of Washington, Seattle, WA (United States)

    2017-08-01

    Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR- and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.

  12. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  13. Final Report: Systematic Development of a Subgrid Scaling Framework to Improve Land Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dickinson, Robert Earl [Univ. of Texas, Austin, TX (United States)

    2016-07-11

    We carried out research to develop improvements to the land component of climate models and to understand the role of land in climate variability and change. A highlight was the development of a 3D canopy radiation model. More than a dozen publications resulted.

  14. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  15. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  16. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  17. On the performance of Autoregressive Moving Average Polynomial

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ... Global Journal of Mathematics and Statistics, Vol. 1. ... Business and Economic Research Center.

  18. Discontinuous Galerkin Subgrid Finite Element Method for Heterogeneous Brinkman’s Equations

    KAUST Repository

    Iliev, Oleg P.; Lazarov, Raytcho D.; Willems, Joerg

    2010-01-01

    We present a two-scale finite element method for solving Brinkman's equations with piece-wise constant coefficients. This system of equations models fluid flows in highly porous, heterogeneous media with complex topology of the heterogeneities. We

  19. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  20. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, July 2007, pp. 31–47. A singularity theorem based on spatial averages. ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and ...

  1. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  2. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  3. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is the graph filter, a direct analogue of classical filters but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
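
    A minimal sketch of a first-order ARMA graph-filter recursion is given below; the path graph, the Laplacian-based shift operator and the coefficients are illustrative assumptions, not the designs of the paper.

        import numpy as np

        def arma1_graph_filter(S, x, psi, phi, n_iter=200):
            """Iterate y <- psi * S @ y + phi * x; for |psi| * ||S|| < 1 this converges
            to y = phi * (I - psi*S)^-1 @ x, i.e. frequency response phi / (1 - psi*lam)."""
            y = np.zeros_like(x, dtype=float)
            for _ in range(n_iter):
                y = psi * (S @ y) + phi * x
            return y

        # Illustrative 4-node path graph with a normalized Laplacian as the shift.
        A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
        L = np.diag(A.sum(1)) - A
        S = L / np.linalg.eigvalsh(L).max()      # scale so the recursion converges
        x = np.array([1.0, 0.0, 0.0, 0.0])       # graph signal to be filtered
        print(arma1_graph_filter(S, x, psi=0.5, phi=1.0))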

  4. Large-eddy simulation of ethanol spray combustion using a finite-rate combustion model

    Energy Technology Data Exchange (ETDEWEB)

    Li, K.; Zhou, L.X. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics; Chan, C.K. [Hong Kong Polytechnic Univ. (China). Dept. of Applied Mathematics

    2013-07-01

    Large-eddy simulation of spray combustion is developing rapidly, but the combustion models are seldom validated against detailed experimental data. In this paper, large-eddy simulation of ethanol-air spray combustion was performed using an Eulerian-Lagrangian approach, a subgrid-scale kinetic energy stress model, and a finite-rate combustion model. The simulation results are validated in detail against experiments. The statistically averaged temperature obtained by LES is in agreement with the experimental results in most regions. The instantaneous LES results show the coherent structures of the shear region near the high-temperature flame zone and the fuel vapor concentration map, indicating that the droplets are concentrated in this shear region. The droplet sizes are found to be in the range of 20-100 μm. The instantaneous temperature map shows the close interaction between the coherent structures and the combustion reaction.

  5. Isolating Numerical Error Effects in LES Using DNS-Derived Sub-Grid Closures

    Science.gov (United States)

    Edoh, Ayaboe; Karagozian, Ann

    2017-11-01

    The prospect of employing an explicitly-defined filter in Large-Eddy Simulations (LES) provides the opportunity to reduce the interaction of numerical/modeling errors and offers the chance to carry out grid-converged assessments, important for model development. By utilizing a quasi a priori evaluation method - wherein the LES is assisted by closures derived from a fully-resolved computation - it then becomes possible to understand the combined impacts of filter construction (e.g., filter width, spectral sharpness) and discretization choice on the solution accuracy. The present work looks at calculations of the compressible LES Navier-Stokes system and considers discrete filtering formulations in conjunction with high-order finite differencing schemes. Accuracy of the overall method construction is compared to a consistently-filtered exact solution, and lessons are extended to a posteriori (i.e., non-assisted) evaluations. Supported by ERC, Inc. (PS150006) and AFOSR (Dr. Chiping Li).

  6. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  7. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way of approaching the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain $\bar{B}$ obey $\bar{\Omega}^{\bar{B}}_{m} + \bar{\Omega}^{\bar{B}}_{R} + \bar{\Omega}^{\bar{B}}_{\Lambda} + \bar{\Omega}^{\bar{B}}_{Q} = 1$, where $\bar{\Omega}^{\bar{B}}_{m}$, $\bar{\Omega}^{\bar{B}}_{R}$ and $\bar{\Omega}^{\bar{B}}_{\Lambda}$ correspond to the standard Friedmannian parameters, while $\bar{\Omega}^{\bar{B}}_{Q}$ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  8. High resolution modelling of extreme precipitation events in urban areas

    Science.gov (United States)

    Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave

    2015-04-01

    The present day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding and, in turn, high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high resolution information of the sewer system and of the terrain into account [1, 2]. The combination of the high resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extremely high resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with

  9. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  10. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a varying extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it improves the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
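
    For reference, classical synchronous (time domain) averaging, which FTDA generalizes, can be sketched in a few lines of Python; the period here is an exact integer number of samples, which is precisely the situation in which the period cutting error does not appear.

        import numpy as np

        def time_domain_average(signal, period_samples):
            """Classical TDA: cut the signal into whole periods and average them,
            which suppresses components not synchronous with the chosen period."""
            n_periods = len(signal) // period_samples
            segments = signal[:n_periods * period_samples].reshape(n_periods, period_samples)
            return segments.mean(axis=0)

        period, n_periods = 100, 50                                  # illustrative values
        clean = np.sin(2 * np.pi * np.arange(period * n_periods) / period)
        noisy = clean + 0.8 * np.random.default_rng(3).normal(size=clean.size)
        averaged = time_domain_average(noisy, period)
        # Noise is reduced roughly by sqrt(n_periods) relative to a single raw period.
        print(np.std(noisy[:period] - clean[:period]), np.std(averaged - clean[:period]))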

  11. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  12. A practical approach to compute short-wave irradiance interacting with subgrid-scale buildings

    Energy Technology Data Exchange (ETDEWEB)

    Sievers, Uwe; Frueh, Barbara [Deutscher Wetterdienst, Offenbach am Main (Germany)

    2012-08-15

    A numerical approach for the calculation of short-wave irradiances at the ground as well as at the walls and roofs of buildings in an environment with unresolved built-up areas is presented. In this radiative parameterization scheme the properties of the unresolved buildings are assigned to settlement types which are characterized by mean values of the volume density of the buildings and their wall area density; it is therefore named the wall area approach. In the vertical direction the range of building heights may be subdivided into several layers. In the case of non-uniform building heights the shadowing of the lower roofs by the taller buildings is taken into account. The method includes the approximate calculation of sky view and sun view factors. For an idealized building arrangement it is shown that the obtained approximate factors are in good agreement with exact calculations, as is the comparison of calculated and measured effective albedo values. For arrangements with isolated single buildings the presented wall area approach yields better agreement with the observations than similar methods where the unresolved built-up area is characterized by the aspect ratio of a representative street canyon (aspect ratio approach). In the limiting case where the built-up area is well represented by an ensemble of idealized street canyons, both approaches become equivalent. The presented short-wave radiation scheme is part of the microscale atmospheric model MUKLIMO 3, where it contributes to the calculation of surface temperatures on the basis of energy-flux equilibrium conditions. (orig.)

  13. A dynamic globalization model for large eddy simulation of complex turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)

    2005-07-01

    A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model by Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow, but the optimal model coefficient is far from universal. Dynamic procedures for determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and the viscous dissipation. An important feature of the proposed procedures is that the model coefficient determined is globally constant in space but varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreement with previous results.
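
    The key ingredient, a coefficient that is constant in space but varies in time, can be sketched generically as a volume-averaged dynamic coefficient; the Germano-identity notation below is a simplified stand-in and does not reproduce the paper's Vreman-based closure.

        import numpy as np

        def global_dynamic_coefficient(L_ij, M_ij, dV=None):
            """Spatially global dynamic coefficient, C(t) = <L_ij M_ij>_V / <M_ij M_ij>_V.
            L_ij, M_ij: tensors of shape (..., 3, 3) at every grid point; dV: cell volumes."""
            if dV is None:
                dV = np.ones(L_ij.shape[:-2])
            w = np.asarray(dV)[..., None, None]          # cell-volume weights
            return np.sum(w * L_ij * M_ij) / np.sum(w * M_ij * M_ij)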

  14. A NEW COMBINED LOCAL AND NON-LOCAL PBL MODEL FOR METEOROLOGY AND AIR QUALITY MODELING

    Science.gov (United States)

    A new version of the Asymmetric Convective Model (ACM) has been developed to describe sub-grid vertical turbulent transport in both meteorology models and air quality models. The new version (ACM2) combines the non-local convective mixing of the original ACM with local eddy diff...

  15. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  16. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  17. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  18. Effect of LES models on the entrainment of a passive scalar in a turbulent planar jet

    Science.gov (United States)

    Chambel Lopes, Diogo; da Silva, Carlos; Reis, Ricardo; Raman, Venkat

    2011-11-01

    Direct and large-eddy simulations (DNS/LES) of turbulent planar jets are used to study the role of subgrid-scale models in the integral characteristics of passive scalar mixing in a jet. Specifically, the effect of subgrid-scale models on the jet spreading rate and the centreline passive scalar decay rate is assessed and compared. The modelling of the subgrid-scale fluxes is particularly challenging in the turbulent/nonturbulent (T/NT) interface region that divides the jet flow into two regions: the outer region where the flow is irrotational and the inner region where the flow is turbulent. It has been shown that important Reynolds stresses exist near the T/NT interface and that these stresses determine in part the mixing and combustion rates in jets. The subgrid scales of motion near the T/NT interface are far from equilibrium and contain an important fraction of the total kinetic energy. Model constants used in several subgrid-scale models, such as the Smagorinsky and the gradient models, need to be corrected near the jet edge. The procedure used to obtain the dynamic Smagorinsky constant is not able to cope with the intermittent nature of this region.

  19. SU-F-T-202: An Evaluation Method of Lifetime Attributable Risk for Comparing Between Proton Beam Therapy and Intensity Modulated X-Ray Therapy for Pediatric Cancer Patients by Averaging Four Dose-Response Models for Carcinoma Induction

    Energy Technology Data Exchange (ETDEWEB)

    Tamura, M; Shirato, H [Department of Radiation Oncology, Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido (Japan); Ito, Y [Department of Biostatistics, Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido (Japan); Sakurai, H; Mizumoto, M; Kamizawa, S [Proton Medical Research Center, University of Tsukuba, Tsukuba, Ibaraki (Japan); Murayama, S; Yamashita, H [Proton Therapy Division, Shizuoka Cancer Center Hospital, Nagaizumi, Shizuoka (Japan); Takao, S; Suzuki, R [Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido (Japan)

    2016-06-15

    Purpose: To examine how much lifetime attributable risk (LAR), as an in silico surrogate marker of radiation-induced secondary cancer, would be lowered by using proton beam therapy (PBT) in place of intensity modulated x-ray therapy (IMXT) in pediatric patients. Methods: From 242 pediatric patients with cancers who were treated with PBT, 26 patients were selected by random sampling after stratification into four categories: a) brain, head, and neck, b) thoracic, c) abdominal, and d) whole craniospinal (WCNS) irradiation. IMXT was re-planned using the same computed tomography and regions of interest. Using the dose volume histograms (DVH) of PBT and IMXT, the LAR of Schneider et al. was calculated for the same patient. The four published dose-response models for carcinoma induction: i) full model, ii) bell-shaped model, iii) plateau model, and iv) linear model were tested for organs at risk. In the case that more than one dose-response model was available, the LAR for this patient was calculated by averaging the LAR for each dose-response model. Results: Calculation of the LARs of PBT and IMXT based on DVH was feasible for all patients. The mean±standard deviation of the cumulative LAR difference between PBT and IMXT for the four categories was a) 0.77±0.44% (n=7, p=0.0037), b) 23.1±17.2% (n=8, p=0.0067), c) 16.4±19.8% (n=8, p=0.0525), and d) 49.9±21.2% (n=3, p=0.0275, one-tailed t-test), respectively. The LAR was significantly lower for PBT than IMXT for the brain, head, and neck region, the thoracic region, and whole craniospinal irradiation. Conclusion: In pediatric patients who had undergone PBT, the LAR of PBT was significantly lower than the LAR of IMXT estimated by in silico modeling. This method was suggested to be useful as an in silico surrogate marker of secondary cancer induced by different radiotherapy techniques. This research was supported by the Translational Research Network Program, JSPS KAKENHI Grant No. 15H04768 and the Global Institution for
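
    To make the averaging step concrete, the snippet below averages per-model LAR values for a single organ at risk; the numbers are purely illustrative placeholders, not values from the study.

        # Hypothetical per-model LAR values (%) for one organ at risk of one patient.
        lar_by_model = {"full": 1.2, "bell-shaped": 0.8, "plateau": 1.0, "linear": 1.4}

        # When more than one dose-response model applies, the patient's LAR for that
        # organ is taken as the plain average over the available models.
        lar_patient = sum(lar_by_model.values()) / len(lar_by_model)
        print(lar_patient)   # 1.1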

  20. SU-F-T-202: An Evaluation Method of Lifetime Attributable Risk for Comparing Between Proton Beam Therapy and Intensity Modulated X-Ray Therapy for Pediatric Cancer Patients by Averaging Four Dose-Response Models for Carcinoma Induction

    International Nuclear Information System (INIS)

    Tamura, M; Shirato, H; Ito, Y; Sakurai, H; Mizumoto, M; Kamizawa, S; Murayama, S; Yamashita, H; Takao, S; Suzuki, R

    2016-01-01

    Purpose: To examine how much lifetime attributable risk (LAR), as an in silico surrogate marker of radiation-induced secondary cancer, would be lowered by using proton beam therapy (PBT) in place of intensity modulated x-ray therapy (IMXT) in pediatric patients. Methods: From 242 pediatric patients with cancers who were treated with PBT, 26 patients were selected by random sampling after stratification into four categories: a) brain, head, and neck, b) thoracic, c) abdominal, and d) whole craniospinal (WCNS) irradiation. IMXT was re-planned using the same computed tomography and regions of interest. Using the dose volume histograms (DVH) of PBT and IMXT, the LAR of Schneider et al. was calculated for the same patient. The four published dose-response models for carcinoma induction: i) full model, ii) bell-shaped model, iii) plateau model, and iv) linear model were tested for organs at risk. In the case that more than one dose-response model was available, the LAR for this patient was calculated by averaging the LAR for each dose-response model. Results: Calculation of the LARs of PBT and IMXT based on DVH was feasible for all patients. The mean±standard deviation of the cumulative LAR difference between PBT and IMXT for the four categories was a) 0.77±0.44% (n=7, p=0.0037), b) 23.1±17.2% (n=8, p=0.0067), c) 16.4±19.8% (n=8, p=0.0525), and d) 49.9±21.2% (n=3, p=0.0275, one-tailed t-test), respectively. The LAR was significantly lower for PBT than IMXT for the brain, head, and neck region, the thoracic region, and whole craniospinal irradiation. Conclusion: In pediatric patients who had undergone PBT, the LAR of PBT was significantly lower than the LAR of IMXT estimated by in silico modeling. This method was suggested to be useful as an in silico surrogate marker of secondary cancer induced by different radiotherapy techniques. This research was supported by the Translational Research Network Program, JSPS KAKENHI Grant No. 15H04768 and the Global Institution for

  1. MODELLING OF TURBULENT WAKE FOR TWO WIND TURBINES

    Directory of Open Access Journals (Sweden)

    Arina S. Kryuchkova

    2018-01-01

    Full Text Available The construction of several large wind farms (the Ulyanovsk region, the Republic of Adygea, the Kaliningrad region, the north of the Russian Federation) is planned on the territory of the Russian Federation in 2018–2020. The tasks connected with the design of new wind farms are therefore currently important. One of the possible directions in the design is connected with