WorldWideScience

Sample records for subgrid averaged models

  1. Recursive renormalization group theory based subgrid modeling

    Science.gov (United States)

    Zhou, Ye

    1991-01-01

    Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed include studies of subgrid models to understand the effects of unresolved small-scale dynamics on the large-scale motion; if successful, such models might substantially reduce the number of degrees of freedom that must be computed in turbulence simulations.

  2. Sub-Grid Scale Plume Modeling

    Directory of Open Access Journals (Sweden)

    Greg Yarwood

    2011-08-01

    Full Text Available Multi-pollutant chemical transport models (CTMs) are being routinely used to predict the impacts of emission controls on the concentrations and deposition of primary and secondary pollutants. While these models have a fairly comprehensive treatment of the governing atmospheric processes, they are unable to correctly represent processes that occur at very fine scales, such as the near-source transport and chemistry of emissions from elevated point sources, because of their relatively coarse horizontal resolution. Several different approaches have been used to address this limitation, such as using fine grids, adaptive grids, hybrid modeling, or an embedded sub-grid scale plume model, i.e., plume-in-grid (PinG) modeling. In this paper, we first discuss the relative merits of these various approaches used to resolve sub-grid scale effects in grid models, and then focus on PinG modeling, which has been very effective in addressing the problems listed above. We start with a history and review of PinG modeling from its initial applications for ozone modeling in the Urban Airshed Model (UAM) in the early 1980s using a relatively simple plume model, to more sophisticated and state-of-the-science plume models that include a full treatment of gas-phase, aerosol, and cloud chemistry, embedded in contemporary models such as CMAQ, CAMx, and WRF-Chem. We present examples of some typical results from PinG modeling for a variety of applications, discuss the implications of PinG on model predictions of source attribution, and discuss possible future developments and applications for PinG modeling.

  3. Subgrid Modeling Geomorphological and Ecological Processes in Salt Marsh Evolution

    Science.gov (United States)

    Shi, F.; Kirby, J. T., Jr.; Wu, G.; Abdolali, A.; Deb, M.

    2016-12-01

    Numerically modeling the long-term evolution of salt marshes is challenging because it requires extensive computational resources. Due to the presence of narrow tidal creeks, variations of salt marsh topography can be significant over spatial length scales on the order of a meter. With the growing availability of high-resolution bathymetry measurements, such as LiDAR-derived DEM data, it is increasingly desirable to run a high-resolution model in a large domain for a long period of time to obtain trends of sedimentation patterns, morphological change, and marsh evolution. However, high spatial resolution poses a major challenge in both computational time and memory storage when simulating a salt marsh with dimensions of up to O(100 km^2) with a small time step. In this study, we have developed a so-called Pre-storage, Sub-grid Model (PSM, Wu et al., 2015) for simulating flooding and draining processes in salt marshes. The simulation of Brokenbridge salt marsh, Delaware, shows that, with the combination of the sub-grid model and the pre-storage method, a computational speed-up of over two orders of magnitude can be achieved with minimal loss of model accuracy. We recently extended PSM to include a sediment transport component and models for biomass growth and sedimentation in the sub-grid model framework. The sediment transport model is formulated based on a newly derived sub-grid sediment concentration equation following Defina's (2000) area-averaging procedure. Suspended sediment transport is modeled by the advection-diffusion equation at the coarse grid level, but the local erosion and sedimentation rates are integrated at the sub-grid level. The morphological model is based on the existing morphological model in NearCoM (Shi et al., 2013), extended to include organic production from the biomass model. The vegetation biomass is predicted by a simple logistic equation model proposed by Marani et al. (2010). The biomass component is loosely coupled with hydrodynamic and
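
    For orientation, the logistic biomass law cited above (Marani et al., 2010) and its coupling to organic accretion can be sketched in a few lines; the rate constants below (r, b_max, k_org) and the form of the coupling are illustrative placeholders, not values or equations from the study.

        def step_marsh_cell(biomass, bed_level, dt, r=1.0, b_max=2.0, k_org=0.005):
            """Advance one sub-grid marsh cell by dt (illustrative units).

            Logistic biomass growth in the spirit of Marani et al. (2010),
            with organic accretion proportional to standing biomass; all
            constants are hypothetical placeholders.
            """
            dbdt = r * biomass * (1.0 - biomass / b_max)   # logistic growth
            biomass = biomass + dt * dbdt
            bed_level = bed_level + dt * k_org * biomass   # organic sedimentation
            return biomass, bed_level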

  4. A new downscaling method for sub-grid turbulence modeling

    Directory of Open Access Journals (Sweden)

    L. Rottner

    2017-06-01

    Full Text Available In this study we explore a new way to model sub-grid turbulence using particle systems. The ability of particle systems to model small-scale turbulence is evaluated using high-resolution numerical simulations. These high-resolution data are averaged to produce a coarse-grid velocity field, which is then used to drive a complete particle-system-based downscaling. Wind fluctuations and turbulent kinetic energy are compared between the particle simulations and the high-resolution simulation. Despite the simplicity of the physical model used to drive the particles, the results show that the particle system is able to represent the average field. It is shown that this system is able to reproduce much finer turbulent structures than the numerical high-resolution simulations. In addition, this study provides an estimate of the effective spatial and temporal resolution of the numerical models. This highlights the need for higher-resolution simulations in order to evaluate the very fine turbulent structures predicted by the particle systems. Finally, a study of the influence of the forcing scale on the particle system is presented.
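
    The paper's particle model is not reproduced here, but a generic Langevin-type update of the kind often used to drive such downscaling particle systems from a coarse-grid velocity field can be sketched as follows; the relaxation time tau, the noise amplitude sigma, and the callable u_coarse are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(0)

        def advance_particles(x, u_p, u_coarse, dt, tau=5.0, sigma=0.3):
            """One downscaling step: particle velocities relax toward the
            coarse-grid velocity interpolated at each particle position
            (u_coarse is a callable) and receive a stochastic kick that
            stands in for unresolved sub-grid fluctuations."""
            drift = -(u_p - u_coarse(x)) * dt / tau
            kick = sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal(u_p.shape)
            u_p = u_p + drift + kick
            x = x + u_p * dt
            return x, u_p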

  5. Simple subgrid scale stresses models for homogeneous isotropic turbulence

    Science.gov (United States)

    Aupoix, B.; Cousteix, J.

    Large eddy simulations employing the filtering of the Navier-Stokes equations highlight stresses, related to the interaction between large scales below the cut and small scales above it, which have been designated 'subgrid scale stresses'. Their effects include both the energy flux through the cut and a component of viscous diffusion. The eddy viscosity introduced in subgrid scale models that give the correct energy flux through the cut (by comparison with spectral closures) is shown to depend only on the small scales. The Smagorinsky (1963) model can only be obtained if the cut lies in the middle of the inertial range. A novel model which takes the small scales into account statistically, and includes the effects of viscosity, is proposed and compared with classical models for the Comte-Bellot and Corrsin (1971) experiment.
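
    For reference, the Smagorinsky closure discussed here writes the deviatoric subgrid stress with an eddy viscosity built from the filter width and the resolved strain rate,

        \tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij} = -2\,\nu_t\,\bar{S}_{ij},
        \qquad \nu_t = (C_s \Delta)^2 \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},

    with C_s ≈ 0.17 from classical inertial-range (Lilly-type) arguments; the abstract's point is that such a closure is only recovered when the cut lies well inside the inertial range.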

  6. Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD

    Science.gov (United States)

    Agostinelli, Giulia; Baglietto, Emilio

    2017-11-01

    The development of first-principle-based constitutive equations for the Eulerian-Eulerian CFD modeling of annular flow is a major priority to extend the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film, and their deposition. Here we focus first on deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. While many integral correlations for deposition have been proposed for lumped-parameter applications, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, thereby negating the scope of CFD application. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.

  7. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    CERN Document Server

    Silvis, Maurits H.; Verstappen, Roel

    2016-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that it is desirable for subgrid-scale models to be consistent with the properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is p...

  8. Subgrid Parameterization of the Soil Moisture Storage Capacity for a Distributed Rainfall-Runoff Model

    Directory of Open Access Journals (Sweden)

    Weijian Guo

    2015-05-01

    Full Text Available Spatial variability plays an important role in nonlinear hydrologic processes. Due to limitations of computational efficiency and data resolution, subgrid variability is usually assumed to be uniform for most grid-based rainfall-runoff models, which leads to scale-dependence of model performance. In this paper, the scale effect on the Grid-Xinanjiang model was examined. The bias in the estimation of precipitation, runoff, evapotranspiration and soil moisture at different grid scales, along with the scale-dependence of the effective parameters, highlights the importance of properly representing the subgrid variability. This paper presents a subgrid parameterization method to incorporate the subgrid variability of the soil storage capacity, which is a key variable that controls runoff generation and partitioning in the Grid-Xinanjiang model. In light of their similar spatial pattern and physical basis, the soil storage capacity is correlated with the topographic index, whose spatial distribution can more readily be measured. A beta distribution is introduced to represent the spatial distribution of the soil storage capacity within the grid. The results derived from the Yanduhe Basin show that the proposed subgrid parameterization method can effectively correct the watershed soil storage capacity curve. Compared to the original Grid-Xinanjiang model, the model performances are quite consistent at the different grid scales when the subgrid variability is incorporated. This subgrid parameterization method reduces the need for recalibration when the Digital Elevation Model (DEM) resolution is changed. Moreover, it improves the potential for applying the distributed model in ungauged basins.
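
    The core of the parameterization can be illustrated in a few lines: integrating a beta-distributed sub-grid storage capacity yields the saturated (runoff-producing) fraction of a grid cell. The shape parameters a and b below are hypothetical, not calibrated values from the Yanduhe Basin study.

        from scipy.stats import beta

        def saturated_fraction(fill_level, wm_max, a=2.0, b=3.0):
            """Fraction of the grid cell producing saturation excess,
            assuming the sub-grid soil storage capacity is Beta(a, b)-
            distributed on [0, wm_max]; points whose capacity lies below
            the current filling level are saturated."""
            return beta.cdf(fill_level / wm_max, a, b)

        # e.g., a cell filled to half its maximum capacity:
        # saturated_fraction(150.0, 300.0)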

  9. Sub-Grid Modeling of Electrokinetic Effects in Micro Flows

    Science.gov (United States)

    Chen, C. P.

    2005-01-01

    Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (Lab-on-Chip devices). To successfully design and operate a microfluidic system, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nano dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomalies (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows poses a significant numerical challenge because the double layer (10 nm up to microns) is very thin compared to the channel width (which can be up to hundreds of microns). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-d geometries. Existing approaches using "slip" wall velocity and an augmented double layer are difficult to use when the flow geometry is complicated, e.g., flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can be accounted for at the same time. With this

  10. Combination of Lidar Elevations, Bathymetric Data, and Urban Infrastructure in a Sub-Grid Model for Predicting Inundation in New York City during Hurricane Sandy

    CERN Document Server

    Loftis, Jon Derek; Hamilton, Stuart E.; Forrest, David R.

    2014-01-01

    We present the geospatial methods in conjunction with results of a newly developed storm surge and sub-grid inundation model which was applied in New York City during Hurricane Sandy in 2012. Sub-grid modeling takes a novel approach for partial wetting and drying within grid cells, eschewing the conventional hydrodynamic modeling method by nesting a sub-grid containing high-resolution lidar topography and fine-scale bathymetry within each computational grid cell. In doing so, the sub-grid modeling method is heavily dependent on the building and street configuration provided by the DEM. The results of spatial comparisons between the sub-grid model and FEMA's maximum inundation extents in New York City yielded an unparalleled absolute mean distance difference of 38 m and an average of 75% areal spatial match. An in-depth error analysis reveals that the modeled extent contour is well correlated with the FEMA extent contour in most areas, except in several distinct areas where differences in special features cause sig...

  11. Exploring nonlinear subgrid-scale models and new characteristic length scales for large-eddy simulation

    NARCIS (Netherlands)

    Silvis, Maurits H.; Trias, F. Xavier; Abkar, M.; Bae, H.J.; Lozano-Duran, A.; Verstappen, R.W.C.P.; Moin, Parviz; Urzay, Javier

    2016-01-01

    We study subgrid-scale modeling for large-eddy simulation of anisotropic turbulent flows on anisotropic grids. In particular, we show how the addition of a velocity-gradient-based nonlinear model term to an eddy viscosity model provides a better representation of energy transfer. This is shown to

  12. Enhancing Representation of Subgrid Land Surface Characteristics in the Community Land Model

    Science.gov (United States)

    Ke, Y.; Coleman, A.; Leung, L.; Huang, M.; Li, H.; Wigmosta, M. S.

    2011-12-01

    The Community Land Model (CLM) is the land surface model used in the Community Earth System Model (CESM). In CLM, each grid cell is composed of subgrid land units, snow/soil columns, and plant functional types (PFTs). In the current version of CLM (CLM4.0), land surface parameters such as vegetated/non-vegetated land cover and surface characteristics, including fractional glacier, lake, wetland, and urban area, and PFTs with their associated leaf area index (LAI), stem area index (SAI), and canopy top and bottom heights, are provided at 0.5° or coarser resolution. This study aims to enhance the representation of the land surface data by (1) creating higher-resolution (0.05° or finer) global land surface parameters, and (2) developing an effective and accurate subgrid classification scheme for elevation and PFTs so that variations of land surface processes due to the subgrid distribution of PFTs and elevation can be represented in CLM. To achieve higher-resolution global land surface parameters, the MODIS 500 m land cover product (MCD12Q1) collected in 2005 was used to generate percentages of glacier, lake, wetland, and urban area and fractional PFTs at 0.05° resolution. Spatially and temporally continuous and consistent global LAI data, re-processed and improved from MOD15A2 (http://globalchange.bnu.edu.cn/research/lai), combined with the PFT data, were used to create the LAI, SAI, and canopy top and bottom height data. 30-second soil texture data were obtained from a hybrid of the 30-second State Soil Geographic Database (STATSGO) and the 5-minute Food and Agriculture Organization two-layer 16-category soil texture dataset. The relationship between the global distribution of PFTs and 1-km resolution elevation data is being analyzed to develop a subgrid classification of PFT and elevation. Statistical analysis is being conducted to compare different subgrid classification methods to select a method that explains the highest percentage of subgrid variance in both PFT and elevation distribution

  13. Quantification of marine aerosol subgrid variability and its correlation with clouds based on high-resolution regional modeling: Quantifying Aerosol Subgrid Variability

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guangxing; Qian, Yun; Yan, Huiping; Zhao, Chun; Ghan, Steven J.; Easter, Richard C.; Zhang, Kai

    2017-06-16

    One limitation of most global climate models (GCMs) is that with the horizontal resolutions they typically employ, they cannot resolve the subgrid variability (SGV) of clouds and aerosols, adding extra uncertainties to the aerosol radiative forcing estimation. To inform the development of an aerosol subgrid variability parameterization, here we analyze the aerosol SGV over the southern Pacific Ocean simulated by the high-resolution Weather Research and Forecasting model coupled to Chemistry. We find that within a typical GCM grid, the aerosol mass subgrid standard deviation is 15% of the grid-box mean mass near the surface on a 1 month mean basis. The fraction can increase to 50% in the free troposphere. The relationships between the sea-salt mass concentration, meteorological variables, and sea-salt emission rate are investigated in both the clear and cloudy portion. Under clear-sky conditions, marine aerosol subgrid standard deviation is highly correlated with the standard deviations of vertical velocity, cloud water mixing ratio, and sea-salt emission rates near the surface. It is also strongly connected to the grid box mean aerosol in the free troposphere (between 2 km and 4 km). In the cloudy area, interstitial sea-salt aerosol mass concentrations are smaller, but higher correlation is found between the subgrid standard deviations of aerosol mass and vertical velocity. Additionally, we find that decreasing the model grid resolution can reduce the marine aerosol SGV but strengthen the correlations between the aerosol SGV and the total water mixing ratio (sum of water vapor, cloud liquid, and cloud ice mixing ratios).
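
    The sub-grid statistics quoted above can be diagnosed from high-resolution output by aggregating fine-grid cells into GCM-sized boxes; a minimal sketch follows (the 25:1 coarsening ratio, mimicking ~3 km cells inside a ~75 km grid box, is an illustrative assumption).

        import numpy as np

        def subgrid_variability(field, block=25):
            """Mean and standard deviation of a 2-D fine-grid field inside
            each coarse block of block x block fine cells."""
            ny, nx = field.shape
            f = field[: ny - ny % block, : nx - nx % block]
            f = f.reshape(ny // block, block, nx // block, block)
            return f.mean(axis=(1, 3)), f.std(axis=(1, 3))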

  14. Resolving terrestrial ecosystem processes along a subgrid topographic gradient for an earth-system model

    Science.gov (United States)

    Subin, Z. M.; Milly, Paul C. D.; Sulman, B. N.; Malyshev, Sergey; Shevliakova, E.

    2014-01-01

    Soil moisture is a crucial control on surface water and energy fluxes, vegetation, and soil carbon cycling. Earth-system models (ESMs) generally represent an areal-average soil-moisture state in gridcells at scales of 50–200 km and as a result are not able to capture the nonlinear effects of topographically-controlled subgrid heterogeneity in soil moisture, in particular where wetlands are present. We addressed this deficiency by building a subgrid representation of hillslope-scale topographic gradients, TiHy (Tiled-hillslope Hydrology), into the Geophysical Fluid Dynamics Laboratory (GFDL) land model (LM3). LM3-TiHy models one or more representative hillslope geometries for each gridcell by discretizing them into land model tiles hydrologically coupled along an upland-to-lowland gradient. Each tile has its own surface fluxes, vegetation, and vertically-resolved state variables for soil physics and biogeochemistry. LM3-TiHy simulates a gradient in soil moisture and water-table depth between uplands and lowlands in each gridcell. Three hillslope hydrological regimes appear in non-permafrost regions in the model: wet and poorly-drained, wet and well-drained, and dry; with large, small, and zero wetland area predicted, respectively. Compared to the untiled LM3 in stand-alone experiments, LM3-TiHy simulates similar surface energy and water fluxes in the gridcell-mean. However, in marginally wet regions around the globe, LM3-TiHy simulates shallow groundwater in lowlands, leading to higher evapotranspiration, lower surface temperature, and higher leaf area compared to uplands in the same gridcells. Moreover, more than four-fold larger soil carbon concentrations are simulated globally in lowlands as compared with uplands. We compared water-table depths to those simulated by a recent global model-observational synthesis, and we compared wetland and inundated areas diagnosed from the model to observational datasets. The comparisons demonstrate that LM3-TiHy has the

  15. On the development of a subgrid CFD model for fire extinguishment

    Energy Technology Data Exchange (ETDEWEB)

    Tieszen, Sheldon R.; Lopez, Amalia R.

    2000-02-02

    A subgrid model is presented for use in CFD fire simulations to account for thermal suppressants and strain. The extinguishment criterion is based on the ratio of a local fluid-mechanics time scale to a local chemical time scale, compared to an empirically determined critical Damköhler number. Local extinction occurs in a cell when its local Damköhler number falls below this critical value; global fire extinguishment occurs when local extinction has occurred in all combusting cells. The fluid-mechanics time scale is based on the Kolmogorov time scale and the chemical time scale is based on blowout of a perfectly stirred reactor. The input to the reactor is based on cell-averaged temperatures, an assumed stoichiometric fuel/air composition, and cell-averaged suppressant concentrations including combustion products. A detailed chemical mechanism is employed. The chemical time scale is precalculated and mixing rules are used to reduce the composition space that must be parameterized. Comparisons with experimental data for fire extinguishment in a flame-stabilizing, backward-facing step geometry indicate that the model is conservative for this condition.
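
    Schematically, the criterion compares a local Damköhler number with its critical value (the notation here is a generic rendering, not the report's exact formulation):

        Da = \frac{\tau_{\mathrm{fm}}}{\tau_{\mathrm{chem}}},
        \qquad \tau_{\mathrm{fm}} \sim \tau_\eta = (\nu/\varepsilon)^{1/2},

    with local extinction declared in a cell when Da falls below the empirically determined critical value, and global extinguishment when every combusting cell is extinct.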

  16. Multifractal subgrid-scale modeling within a variational multiscale method for large-eddy simulation of turbulent flow

    Science.gov (United States)

    Rasthofer, U.; Gravemeier, V.

    2013-02-01

    Multifractal subgrid-scale modeling within a variational multiscale method is proposed for large-eddy simulation of turbulent flow. In the multifractal subgrid-scale modeling approach, the subgrid-scale velocity is evaluated from a multifractal description of the subgrid-scale vorticity, which is based on the multifractal scale similarity of gradient fields in turbulent flow. The multifractal subgrid-scale modeling approach is integrated into a variational multiscale formulation, which constitutes a new application of the variational multiscale concept. A focus of this study is on the application of the multifractal subgrid-scale modeling approach to wall-bounded turbulent flow. Therefore, a near-wall limit of the multifractal subgrid-scale modeling approach is derived in this work. The novel computational approach of multifractal subgrid-scale modeling within a variational multiscale formulation is applied to turbulent channel flow at various Reynolds numbers, turbulent flow over a backward-facing step and turbulent flow past a square-section cylinder, which are three of the most important and widely-used benchmark examples for wall-bounded turbulent flow. All results presented in this study confirm a very good performance of the proposed method. Compared to a dynamic Smagorinsky model and a residual-based variational multiscale method, improved results are obtained. Moreover, it is demonstrated that the subgrid-scale energy transfer incorporated by the proposed method very well approximates the expected energy transfer as obtained from appropriately filtered direct numerical simulation data. The computational cost is notably reduced compared to a dynamic Smagorinsky model and only marginally increased compared to a residual-based variational multiscale method.

  17. Stochastic fields method for sub-grid scale emission heterogeneity in mesoscale atmospheric dispersion models

    OpenAIRE

    Cassiani, M.; Vinuesa, J.F.; Galmarini, S.; Denby, B.

    2010-01-01

    The stochastic fields method for turbulent reacting flows has been applied to the issue of sub-grid scale emission heterogeneity in a mesoscale model. This method is a solution technique for the probability density function (PDF) transport equation and can be seen as a straightforward extension of currently used mesoscale dispersion models. It has been implemented in an existing mesoscale model and the results are compared with Large-Eddy Simulation (LES) data devised to test specifically the...

  18. Analysis of subgrid models of heat convection by symmetry group theory

    Science.gov (United States)

    Razafindralandy, Dina; Hamdouni, Aziz

    2007-04-01

    Symmetries, i.e., transformations which leave the set of solutions of the Navier-Stokes equations unchanged, play an important role in turbulence (conservation laws, wall laws, …). They should not be destroyed by turbulence models. The symmetries of the heat convection equations for a non-isothermal fluid are then presented. Next, common subgrid stress tensor and flux models are analyzed using the symmetry approach. To cite this article: D. Razafindralandy, A. Hamdouni, C. R. Mecanique 335 (2007).

  19. Unsteady Flame Embedding (UFE) Subgrid Model for Turbulent Premixed Combustion Simulations

    KAUST Repository

    El-Asrag, Hossam

    2010-01-04

    We present a formulation for an unsteady subgrid model for premixed combustion in the flamelet regime. Since chemistry occurs at the unresolvable scales, it is necessary to introduce a subgrid model that accounts for the multi-scale nature of the problem using the information available on the resolved scales. Most current models are based on the laminar flamelet concept, and often neglect unsteady effects. The proposed model's primary objective is to encompass many of the unsteady features and history effects of flame/turbulence interactions. In addition, it provides a dynamic and accurate approach for computing the subgrid flame propagation velocity. The unsteady flame embedding (UFE) approach treats the flame as an ensemble of locally one-dimensional flames. A set of elemental one-dimensional flames is used to describe the turbulent flame structure at the subgrid level. The stretched flame calculations are performed on the stagnation line of a strained flame using the unsteady filtered strain rate computed from the resolved grid. The flame iso-surface is tracked using an accurate high-order level set formulation to propagate the flame interface at the coarse resolution with minimum numerical diffusion. In this paper the solver and the model components are introduced and used to investigate two unsteady flames with different Lewis numbers in the thin reaction zone regime. The results show that the UFE model captures the unsteady flame-turbulence interactions and the flame propagation speed reasonably well. A higher propagation speed is observed for the flame with Lewis number below unity because of the impact of differential diffusion.
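
    The level-set tracking mentioned above is conventionally expressed as a G-equation, in which the flame front is an iso-surface of a scalar G propagated at the modeled flame speed; a schematic form (not necessarily the authors' exact notation) is

        \frac{\partial G}{\partial t} + \tilde{u}\cdot\nabla G = s_f\,\lvert\nabla G\rvert,

    where the resolved velocity advects G and s_f is the propagation speed, supplied here by the ensemble of unsteady elemental flames.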

  20. Stochastic fields method for sub-grid scale emission heterogeneity in mesoscale atmospheric dispersion models

    Directory of Open Access Journals (Sweden)

    M. Cassiani

    2010-01-01

    Full Text Available The stochastic fields method for turbulent reacting flows has been applied to the issue of sub-grid scale emission heterogeneity in a mesoscale model. This method is a solution technique for the probability density function (PDF) transport equation and can be seen as a straightforward extension of currently used mesoscale dispersion models. It has been implemented in an existing mesoscale model and the results are compared with Large-Eddy Simulation (LES) data devised to test specifically the effect of sub-grid scale emission heterogeneity on boundary layer concentration fluctuations. The sub-grid scale emission variability is assimilated in the model as a PDF of the emissions. The stochastic fields method shows excellent agreement with the LES data without adjustment of the constants used in the mesoscale model. The stochastic fields method is a stochastic solution of the transport equations for the concentration PDF of dispersing scalars, and therefore it possesses the ability to handle chemistry of any complexity without the need to introduce additional closures for the high-order statistics of chemical species. This study shows for the first time the feasibility of applying this method to mesoscale chemical transport models.
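
    For orientation, each of the N stochastic fields in such a method evolves by a stochastic partial differential equation; a representative Valiño-type form with IEM-style mixing (schematic, not necessarily the exact equation used in the paper) is

        d\xi_n = -u\cdot\nabla\xi_n\,dt + \nabla\cdot(\Gamma\nabla\xi_n)\,dt
                 + \sqrt{2\Gamma}\,\nabla\xi_n\cdot dW_n
                 - \frac{\xi_n-\langle\phi\rangle}{2\tau_m}\,dt + S(\xi_n)\,dt,

    so that the ensemble of fields samples the concentration PDF while the chemical source term S enters in closed form.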

  1. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
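
    Schematically, the proposed closure augments the usual eddy-viscosity term with a nondissipative term built from the resolved rate-of-strain and rate-of-rotation tensors; a generic form of such a model (the coefficient c is left unspecified here) is

        \tau_{ij}^{\mathrm{mod}} = -2\,\nu_e\,\bar{S}_{ij}
            + c\,\Delta^2\bigl(\bar{S}_{ik}\bar{\Omega}_{kj} - \bar{\Omega}_{ik}\bar{S}_{kj}\bigr),

    where the second term contributes no net dissipation (its contraction with the strain rate vanishes) and is therefore suited to representing transport processes such as those induced by rotation.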

  2. Efficient non-hydrostatic modelling of 3D wave-induced currents using a subgrid approach

    Science.gov (United States)

    Rijnsdorp, Dirk P.; Smit, Pieter B.; Zijlema, Marcel; Reniers, Ad J. H. M.

    2017-08-01

    Wave-induced currents are a ubiquitous feature in coastal waters that can spread material over the surf zone and the inner shelf. These currents are typically under-resolved in non-hydrostatic wave-flow models due to computational constraints. Specifically, the low vertical resolutions adequate to describe the wave dynamics - and required to feasibly compute at the scales of a field site - are too coarse to account for the relevant details of the three-dimensional (3D) flow field. To describe the relevant dynamics of both waves and currents, while retaining a model framework that can be applied at field scales, we propose a two-grid approach to solve the governing equations. With this approach, the vertical accelerations and non-hydrostatic pressures are resolved on a relatively coarse vertical grid (which is sufficient to accurately resolve the wave dynamics), whereas the horizontal velocities and turbulent stresses are resolved on a much finer subgrid (whose resolution is dictated by the vertical scale of the mean flows). This approach ensures that the discrete pressure Poisson equation - the solution of which dominates the computational effort - is evaluated on the coarse grid scale, thereby greatly improving efficiency, while providing a fine vertical resolution to resolve the vertical variation of the mean flow. This work presents the general methodology, and discusses the numerical implementation in the SWASH wave-flow model. Model predictions are compared with observations of three flume experiments to demonstrate that the subgrid approach captures both the nearshore evolution of the waves, and the wave-induced flows like the undertow profile and longshore current. The accuracy of the subgrid predictions is comparable to fully resolved 3D simulations - but at much reduced computational costs. The findings of this work thereby demonstrate that the subgrid approach has the potential to make 3D non-hydrostatic simulations feasible at the scale of a

  3. Large eddy simulation of flow over a wall-mounted cube: Comparison of different semi dynamic subgrid scale models

    Directory of Open Access Journals (Sweden)

    M. Nooroullahi

    2016-09-01

    Full Text Available In this paper, the ability of different semi-dynamic subgrid-scale models for large-eddy simulation was studied in a challenging test case. The semi-dynamic subgrid-scale models examined in this investigation are the selective structure model, the coherent structure model, and the wall-adaptive large-eddy model. The test case is a simulation of flow over a wall-mounted cube in a channel. The results of these models were compared to the structure function model, dynamic models, and experimental data at a Reynolds number of 40,000. Results show that these semi-dynamic models can improve the ability of numerical simulation in comparison with other models which use a constant coefficient for the subgrid-scale viscosity. In addition, these models do not have the instability problems of dynamic models.

  4. A distributed Grid-Xinanjiang model with integration of subgrid variability of soil storage capacity

    Directory of Open Access Journals (Sweden)

    Wei-jian Guo

    2016-04-01

    Full Text Available Realistic hydrological response is sensitive to the spatial variability of landscape properties. For a grid-based distributed rainfall-runoff model with a hypothesis of a uniform grid, the high-frequency information within a grid cell will be gradually lost as the resolution of the digital elevation model (DEM) grows coarser. Therefore, the performance of a hydrological model is usually scale-dependent. This study used the Grid-Xinanjiang (GXAJ) model as an example to investigate the effects of subgrid variability on hydrological response at different scales. With the aim of producing a more reasonable hydrological response and spatial description of the landscape properties, a new distributed rainfall-runoff model integrating the subgrid variability (the GXAJSV model) was developed. In this model, the topographic index is used as an auxiliary variable correlated with the soil storage capacity. The incomplete beta distribution is suggested for simulating the probability distribution of the soil storage capacity within the raster grid. The Yaogu Basin in China was selected for model calibration and validation at different spatial scales. Results demonstrated that the proposed model can effectively eliminate the scale dependence of the GXAJ model and produce a more reasonable hydrological response.

  5. Lagrangian scheme to model subgrid-scale mixing and spreading in heterogeneous porous media

    Science.gov (United States)

    Herrera, P. A.; Cortínez, J. M.; Valocchi, A. J.

    2017-04-01

    Small-scale heterogeneity of permeability controls spreading, dilution, and mixing of solute plumes at large scale. However, conventional numerical simulations of solute transport are unable to resolve scales of heterogeneity below the grid scale. We propose a Lagrangian numerical approach to implement closure models to account for subgrid-scale spreading and mixing in Darcy-scale numerical simulations of solute transport in mildly heterogeneous porous media. The novelty of the proposed approach is that it considers two different dispersion coefficients to account for advective spreading mechanisms and local-scale dispersion. Using results of benchmark numerical simulations, we demonstrate that the proposed approach is able to model subgrid-scale spreading and mixing provided there is a correct choice of block-scale dispersion coefficient. We also demonstrate that for short travel times it is only possible to account for spreading or mixing using a single block-scale dispersion coefficient. Moreover, we show that it is necessary to use time-dependent dispersion coefficients to obtain correct mixing rates. In contrast, for travel times that are large in comparison to the typical dispersive time scale, it is possible to use a single expression to compute the block-dispersion coefficient, which is equal to the asymptotic limit of the block-scale macrodispersion coefficient proposed by Rubin et al. (1999). Our approach provides a flexible and efficient way to model subgrid-scale mixing in numerical models of large-scale solute transport in heterogeneous aquifers. We expect that these findings will help to better understand the applicability of the advection-dispersion equation (ADE) for simulating solute transport at the Darcy scale in heterogeneous porous media.
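
    A generic random-walk sketch of the two-coefficient idea: particles are displaced with a block-scale spreading coefficient, while local-scale mixing is represented separately (here as a crude pairwise mass exchange). This illustrates the concept only, not the authors' scheme, and all coefficient values are placeholders.

        import numpy as np

        rng = np.random.default_rng(1)

        def displace(x, v, dt, d_block=1e-2):
            """Advect with the Darcy-scale velocity and add a random
            displacement controlled by the block-scale (spreading)
            dispersion coefficient d_block."""
            return x + v * dt + np.sqrt(2.0 * d_block * dt) * rng.standard_normal(x.shape)

        def mix(mass, pairs, dt, d_mix=1e-4, ell2=1.0):
            """Local-scale mixing as mass exchange between nearby particle
            pairs (integer array of shape (n, 2)); the rate scales with the
            possibly time-dependent mixing coefficient d_mix."""
            rate = min(0.5, d_mix * dt / ell2)
            for i, j in pairs:
                dm = rate * (mass[i] - mass[j])
                mass[i] -= dm
                mass[j] += dm
            return mass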

  6. A Fast and Accurate Scheme for Sea Ice Dynamics with a Stochastic Subgrid Model

    Science.gov (United States)

    Seinen, C.; Khouider, B.

    2016-12-01

    Sea ice physics is a very complex process occurring over a wide range of scales, from local melting to large-scale drift. At the current grid resolution of Global Climate Models (GCMs), we are able to resolve large-scale sea ice dynamics, but uncertainty remains due to subgrid physics and potential dynamic feedback, especially due to the formation of melt ponds. Recent work in atmospheric science has shown the success of Markov jump stochastic subgrid models in the representation of clouds and convection and their feedback onto the large scales. There has been a push to implement these methods in other parts of the Earth system, and for the cryosphere in particular, but in order to test these methods, efficient and accurate solvers are required for the resolved large-scale sea ice dynamics. We present a second-order accurate scheme, in both time and space, for the sea ice momentum equation (SIME) with a Jacobian-free Newton-Krylov (JFNK) solver. SIME is a highly nonlinear equation due to sea ice rheology terms appearing in the stress tensor. The most commonly accepted formulation, introduced by Hibler, allows sea ice to resist significant stresses in compression but significantly less in tension. The relationship also leads to large changes in internal stresses from small changes in velocity fields. These nonlinearities have resulted in the use of implicit methods for SIME, and a JFNK solver was recently introduced and used to gain efficiency. However, the method used so far is only first-order accurate in time. Here we expand the JFNK approach to a Crank-Nicolson discretization of SIME. This fully second-order scheme is achieved with no increase in computational cost and will allow efficient testing and development of subgrid stochastic models of sea ice in the near future.
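
    To illustrate the solver strategy only (the right-hand side f below is a hypothetical stand-in for the actual stress-divergence and forcing terms of the sea ice momentum equation), a Crank-Nicolson step whose nonlinear residual is solved matrix-free with SciPy's Newton-Krylov method looks like this:

        import numpy as np
        from scipy.optimize import newton_krylov

        def f(u):
            """Placeholder stiff nonlinear right-hand side."""
            return -u**3 + np.roll(u, 1) - 2.0 * u + np.roll(u, -1)

        def crank_nicolson_step(u_old, dt):
            """Solve R(u) = u - u_old - dt/2 * (f(u) + f(u_old)) = 0 with a
            Jacobian-free Newton-Krylov iteration (second order in time)."""
            residual = lambda u: u - u_old - 0.5 * dt * (f(u) + f(u_old))
            return newton_krylov(residual, u_old, f_tol=1e-8)

        u = crank_nicolson_step(np.linspace(0.0, 1.0, 64), dt=0.1)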

  7. Effects of Implementing Subgrid-Scale Cloud-Radiation Interactions in a Regional Climate Model

    Science.gov (United States)

    Herwehe, J. A.; Alapaty, K.; Otte, T.; Nolte, C. G.

    2012-12-01

    Interactions between atmospheric radiation, clouds, and aerosols are the most important processes that determine the climate and its variability. In regional scale models, when used at relatively coarse spatial resolutions (e.g., larger than 1 km), convective cumulus clouds need to be parameterized as subgrid-scale clouds. Like many groups, our regional climate modeling group at the EPA uses the Weather Research & Forecasting model (WRF) as a regional climate model (RCM). One of the findings from our RCM studies is that the summertime convective systems simulated by the WRF model are highly energetic, leading to excessive surface precipitation. We also found that the WRF model does not consider the interactions between convective clouds and radiation, thereby omitting an important process that drives the climate. Thus, the subgrid-scale cloudiness associated with convective clouds (from shallow cumuli to thunderstorms) does not exist and radiation passes through the atmosphere nearly unimpeded, potentially leading to overly energetic convection. This also has implications for air quality modeling systems that are dependent upon cloud properties from the WRF model, as the failure to account for subgrid-scale cloudiness can lead to problems such as the underrepresentation of aqueous chemistry processes within clouds and the overprediction of ozone from overactive photolysis. In an effort to advance the climate science of the cloud-aerosol-radiation (CAR) interactions in RCM systems, as a first step we have focused on linking the cumulus clouds with the radiation processes. To this end, our research group has implemented into WRF's Kain-Fritsch (KF) cumulus parameterization a cloudiness formulation that is widely used in global earth system models (e.g., CESM/CAM5). Estimated grid-scale cloudiness and associated condensate are adjusted to account for the subgrid clouds and then passed to WRF's Rapid Radiative Transfer Model - Global (RRTMG) radiation schemes to affect

  8. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    Science.gov (United States)

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton-Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof-of-concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  9. Mixture Model Averaging for Clustering

    OpenAIRE

    Wei, Yuhong; McNicholas, Paul D.

    2012-01-01

    In mixture model-based clustering applications, it is common to fit several models from a family and report clustering results from only the 'best' one. In such circumstances, selection of this best model is achieved using a model selection criterion, most often the Bayesian information criterion. Rather than throw away all but the best model, we average multiple models that are in some sense close to the best one, thereby producing a weighted average of clustering results. Two (weighted) ave...
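
    One common way to realize such averaging is to weight each candidate model by its BIC; a minimal sketch, using the smaller-is-better BIC convention (flip the sign if BIC is defined so that larger is better, as in much of the model-based clustering literature):

        import numpy as np

        def bic_weights(bics):
            """Weights w_k proportional to exp(-(BIC_k - BIC_min) / 2)."""
            b = np.asarray(bics, dtype=float)
            w = np.exp(-0.5 * (b - b.min()))
            return w / w.sum()

        # Average soft cluster-membership matrices Z_k from several models:
        # Z_avg = sum(w * Z for w, Z in zip(bic_weights(bics), Zs))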

  10. An investigation of the sub-grid variability of trace gases and aerosols for global climate modeling

    Directory of Open Access Journals (Sweden)

    Y. Qian

    2010-07-01

    Full Text Available One fundamental property and limitation of grid-based models is their inability to identify spatial details smaller than the grid cell size. While decades of work have gone into developing sub-grid treatments for clouds and land surface processes in climate models, the quantitative understanding of sub-grid processes and variability for aerosols and their precursors is much poorer. In this study, WRF-Chem is used to simulate the trace gases and aerosols over central Mexico during the 2006 MILAGRO field campaign, with multiple spatial resolutions and emission/terrain scenarios. Our analysis focuses on quantifying the sub-grid variability (SGV) of trace gases and aerosols within a typical global climate model grid cell, i.e., 75×75 km^2.

    Our results suggest that a simulation with 3-km horizontal grid spacing adequately reproduces the overall transport and mixing of trace gases and aerosols downwind of Mexico City, while 75-km horizontal grid spacing is insufficient to represent local emission and terrain-induced flows along the mountain ridge, subsequently affecting the transport and mixing of plumes from nearby sources. Therefore, the coarse model grid cell average may not correctly represent aerosol properties measured over polluted areas. Probability density functions (PDFs) for trace gases and aerosols show that secondary trace gases and aerosols, such as O3, sulfate, ammonium, and nitrate, are more likely to have a relatively uniform probability distribution (i.e., smaller SGV) over a narrow range of concentration values. Mostly inert and long-lived trace gases and aerosols, such as CO and BC, are more likely to have broad and skewed distributions (i.e., larger SGV) over polluted regions. Over remote areas, all trace gases and aerosols are more uniformly distributed compared to polluted areas. Both CO and O3 SGV vertical profiles are nearly constant within the PBL during daytime, indicating that trace gases

  11. A dynamic subgrid scale model for Large Eddy Simulations based on the Mori-Zwanzig formalism

    Science.gov (United States)

    Parish, Eric J.; Duraisamy, Karthik

    2017-11-01

    The development of reduced models for complex multiscale problems remains one of the principal challenges in computational physics. The optimal prediction framework of Chorin et al. [1], which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically-derived reduced models of dynamical systems. Several promising models have emerged from the optimal prediction community and have found application in molecular dynamics and turbulent flows. In this work, a new M-Z-based closure model that addresses some of the deficiencies of existing methods is developed. The model is constructed by exploiting similarities between two levels of coarse-graining via the Germano identity of fluid mechanics and by assuming that memory effects have a finite temporal support. The appeal of the proposed model, which will be referred to as the 'dynamic-MZ-τ' model, is that it is parameter-free and has a structural form imposed by the mathematics of the coarse-graining process (rather than the phenomenological assumptions made by the modeler, such as in classical subgrid scale models). To promote the applicability of M-Z models in general, two procedures are presented to compute the resulting model form, helping to bypass the tedious error-prone algebra that has proven to be a hindrance to the construction of M-Z-based models for complex dynamical systems. While the new formulation is applicable to the solution of general partial differential equations, demonstrations are presented in the context of Large Eddy Simulation closures for the Burgers equation, decaying homogeneous turbulence, and turbulent channel flow. The performance of the model and validity of the underlying assumptions are investigated in detail.
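
    The Germano identity invoked above relates the subgrid stresses at the grid-filter (bar) and test-filter (hat) levels through a tensor computable from the resolved field,

        \mathcal{L}_{ij} = \widehat{\bar{u}_i\bar{u}_j} - \hat{\bar{u}}_i\hat{\bar{u}}_j
                         = T_{ij} - \hat{\tau}_{ij},

    which is what allows the memory length of the dynamic-MZ-τ model to be determined during the simulation rather than prescribed by the modeler.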

  12. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    Science.gov (United States)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in its standard as well as its dynamic version. The main strength of being able to correctly calibrate numerical dissipation is the possibility of regularising the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  13. The Storm Surge and Sub-Grid Inundation Modeling in New York City during Hurricane Sandy

    Directory of Open Access Journals (Sweden)

    Harry V. Wang

    2014-03-01

    Full Text Available Hurricane Sandy inflicted heavy damage in New York City and on the New Jersey coast as the second costliest storm in history. A large-scale, unstructured-grid storm tide model, the Semi-implicit Eulerian Lagrangian Finite Element (SELFE) model, was used to hindcast water level variation during Hurricane Sandy in the mid-Atlantic portion of the U.S. East Coast. The model was forced by eight tidal constituents at the model's open boundary, 1500 km away from the coast, and by the wind and pressure fields from the atmospheric model Regional Atmospheric Modeling System (RAMS) provided by Weatherflow Inc. The comparisons of the modeled storm tide with NOAA gauge stations from Montauk, NY, Long Island Sound, encompassing New York Harbor, Atlantic City, NJ, to Duck, NC, were in good agreement, with an overall root mean square error and relative error on the order of 15-20 cm and 5%-7%, respectively. Furthermore, using the large-scale model outputs as boundary conditions, a separate sub-grid model that incorporates LIDAR data for the major portion of New York City was also set up to investigate the detailed inundation process. The model results compared favorably with USGS's Hurricane Sandy Mapper database in terms of timing, local inundation area, and depth of the flooding water. The street-level inundation with water bypassing the city buildings was created and the maximum extent of horizontal inundation was calculated, which was within 30 m of the data-derived estimate by USGS.

  14. Acceleration of inertial particles in wall bounded flows: DNS and LES with stochastic modelling of the subgrid acceleration

    Energy Technology Data Exchange (ETDEWEB)

    Zamansky, Remi; Vinkovic, Ivana; Gorokhovski, Mikhael, E-mail: ivana.vinkovic@univ-lyon1.fr [Laboratoire de Mecanique des Fluides et d'Acoustique, CNRS UMR 5509, Ecole Centrale de Lyon, 36, av. Guy de Collongue, 69134 Ecully Cedex (France)]

    2011-12-22

    Inertial particle acceleration statistics are analyzed using DNS for turbulent channel flow. Along with effects recognized in homogeneous isotropic turbulence, an additional effect is observed due to high- and low-speed vortical structures aligned with the channel wall. In response to those structures, particles with moderate inertia experience strong longitudinal acceleration variations. DNS is also used to assess LES-SSAM (Subgrid Stochastic Acceleration Model), in which an approximation to the instantaneous non-filtered velocity field is given by simulation of both the filtered and the residual accelerations. This approach gives access to the intermittency of the flow at the subgrid scale. Advantages of LES-SSAM in predicting particle dynamics in channel flow at a high Reynolds number are shown.

  15. A scale-aware subgrid model for quasi-geostrophic turbulence

    Science.gov (United States)

    Bachman, Scott D.; Fox-Kemper, Baylor; Pearson, Brodie

    2017-02-01

    This paper introduces two methods for dynamically prescribing eddy-induced diffusivity, advection, and viscosity appropriate for primitive equation models with resolutions permitting the forward potential enstrophy cascade of quasi-geostrophic dynamics, such as operational ocean models and high-resolution climate models with O(25) km horizontal resolution and finer. Where quasi-geostrophic dynamics fail (e.g., the equator, boundary layers, and deep convection), the method reverts to scalings based on a matched two-dimensional enstrophy cascade. A principal advantage is that these subgrid models are scale-aware, meaning that the model is suitable over a range of grid resolutions: from mesoscale grids that just permit baroclinic instabilities to grids below the submesoscale where ageostrophic effects dominate. Two approaches are presented here using Large Eddy Simulation (LES) techniques adapted for three-dimensional rotating, stratified turbulence. The simpler approach has one nondimensional parameter, Λ, which has an optimal value near 1. The second approach dynamically optimizes Λ during simulation using a test filter. The new methods are tested in an idealized scenario by varying the grid resolution, and their use improves the spectra of potential enstrophy and energy in comparison to extant schemes. The new methods keep the gridscale Reynolds and Péclet numbers near 1 throughout the domain, which confers robust numerical stability and minimal spurious diapycnal mixing. Although there are no explicit parameters in the dynamic approach, there is strong sensitivity to the choice of test filter. Designing test filters for heterogeneous ocean turbulence adds cost and uncertainty, and we find the dynamic method does not noticeably improve over setting Λ = 1.

  16. Sub-grid combustion modeling for compressible two-phase reacting flows

    Science.gov (United States)

    Sankaran, Vaidyanathan

    2003-06-01

    A generic formulation for modeling turbulent combustion in compressible, high-Reynolds-number, two-phase reacting flows has been developed and validated. A sub-grid mixing/combustion model called the Linear Eddy Mixing (LEM) model has been extended to compressible flows and used inside the framework of Large Eddy Simulation (LES) in this LES-LEM approach. The LES-LEM approach is based on the proposition that the basic mechanistic distinction between the convective and the molecular effects should be preserved for accurate prediction of complex flow-fields such as those encountered in many combustion systems. Liquid droplets (represented by computational parcels) are tracked using the Lagrangian approach, wherein Newton's equations of motion for the discrete particles are integrated explicitly in the Eulerian gas field. The gas-phase LES velocity fields are used to estimate the instantaneous gas velocity at the droplet location. Drag effects due to the droplets on the gas phase and the heat transfer between the gas and the liquid phase are explicitly included. Thus, full coupling is achieved between the two phases in the simulation. Validation of the compressible LES-LEM approach is conducted by simulating the flow-field in an operational General Electric Aircraft Engines combustor (LM6000). The results predicted using the proposed approach compare well with the experiments and a conventional (G-equation) thin-flame model. Particle tracking algorithms used in the present study are validated by simulating droplet-laden temporal mixing layers. Quantitative and qualitative comparison with the results of spectral DNS exhibits good agreement. Simulations using the current LES-LEM for a freely propagating partially premixed flame in a droplet-laden isotropic turbulent field correctly capture the flame structure in partially premixed flames. Due to the strong spatial variation of equivalence ratio, a broad flame similar to a premixed flame is realized. The current

  17. Use of fundamental condensation heat transfer experiments for the development of a sub-grid liquid jet condensation model

    Energy Technology Data Exchange (ETDEWEB)

    Buschman, Francis X., E-mail: Francis.Buschman@unnpp.gov; Aumiller, David L.

    2017-02-15

    Highlights:
    • Direct contact condensation data on liquid jets up to 1.7 MPa in pure steam and in the presence of noncondensable gas.
    • Identified a pressure effect on the impact of noncondensables to suppress condensation heat transfer not captured in existing data or correlations.
    • Pure steam data is used to develop a new correlation for condensation heat transfer on subcooled liquid jets.
    • Noncondensable data used to develop a modification to the renewal time estimate used in the Young and Bajorek correlation for condensation suppression in the presence of noncondensables.
    • A jet injection boundary condition, using a sub-grid jet condensation model, is developed for COBRA-IE which provides a more detailed estimate of the condensation rate on the liquid jet and allows the use of jet specific closure relationships.
    Abstract: Condensation on liquid jets is an important phenomenon for many different facets of nuclear power plant transients and analyses such as containment spray cooling. An experimental facility constructed at the Pennsylvania State University, the High Pressure Liquid Jet Condensation Heat Transfer facility (HPLJCHT), has been used to perform steady-state condensation heat transfer experiments in which the temperature of the liquid jet is measured at different axial locations, allowing the condensation rate to be determined over the jet length. Test data have been obtained in a pure steam environment and with varying concentrations of noncondensable gas. These data extend the available jet condensation data from near atmospheric pressure up to a pressure of 1.7 MPa. An empirical correlation for the liquid side condensation heat transfer coefficient has been developed based on the data obtained in pure steam. The data obtained with noncondensable gas were used to develop a correlation for the renewal time as used in the condensation suppression model developed by Young and Bajorek. This paper describes a new sub-grid liquid jet

  18. From Detailed Description of Chemical Reacting Carbon Particles to Subgrid Models for CFD

    Directory of Open Access Journals (Sweden)

    Schulze S.

    2013-04-01

    This work is devoted to the development and validation of a sub-model for the partial oxidation of a spherical char particle moving in an air/steam atmosphere. The particle diameter is 2 mm. The coal particle is represented by moisture- and ash-free nonporous carbon, while the coal rank is implemented using semi-global reaction rate expressions taken from the literature. The sub-model includes six gaseous chemical species (O2, CO2, CO, H2O, H2, N2). Three heterogeneous reactions are employed, along with two homogeneous semi-global reactions, namely carbon monoxide oxidation and the water-gas-shift reaction. The distinguishing feature of the subgrid model is that it takes into account the influence of homogeneous reactions on integral characteristics such as carbon combustion rates and particle temperature. The sub-model was validated by comparing its results with a comprehensive CFD-based model resolving the issues of bulk flow and boundary layer around the particle. In this model, the Navier-Stokes equations coupled with the energy and species conservation equations were used to solve the problem by means of the pseudo-steady state approach. At the surface of the particle, the balance of mass, energy and species concentration was applied, including the effect of the Stefan flow and heat loss due to radiation at the surface of the particle. Good agreement was achieved between the sub-model and the CFD-based model. Additionally, the CFD-based model was verified against experimental data published in the literature (Makino et al. (2003) Combust. Flame 132, 743-753). Good agreement was achieved between numerically predicted and experimentally obtained data for input conditions corresponding to the kinetically controlled regime. The maximal discrepancy (10%) between the experiments and the numerical results was observed in the diffusion-controlled regime. Finally, we discuss the influence of the Reynolds number, the ambient O2 mass fraction and the ambient
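
    The following sketch shows the shape of the semi-global surface kinetics such a sub-model evaluates. The Arrhenius constants are invented placeholders (the paper takes its rate expressions from the literature), and film diffusion, Stefan flow, and radiative corrections are deliberately left out:

    ```python
    import numpy as np

    R_GAS = 8.314  # J/(mol K)

    # Illustrative Arrhenius parameters, NOT the paper's fitted values:
    # rate = A * exp(-E / (R*T)) * p_oxidizer  (first order in the oxidizer)
    REACTIONS = {
        "C + O2 -> CO/CO2":   ("O2",  3.0e5, 1.49e5),
        "C + CO2 -> 2CO":     ("CO2", 4.0e8, 2.51e5),
        "C + H2O -> CO + H2": ("H2O", 1.0e8, 2.30e5),
    }

    def carbon_consumption_rates(t_particle, partial_pressures):
        """Kinetically controlled carbon consumption rates (arbitrary units).

        t_particle: particle temperature [K]; partial_pressures: dict of
        oxidizer partial pressures [Pa]. A sketch of semi-global kinetics
        only; a real sub-model adds boundary-layer diffusion and the
        homogeneous CO oxidation / water-gas-shift coupling stressed above.
        """
        rates = {}
        for name, (oxidizer, a_pre, e_act) in REACTIONS.items():
            k = a_pre * np.exp(-e_act / (R_GAS * t_particle))
            rates[name] = k * partial_pressures[oxidizer]
        return rates

    print(carbon_consumption_rates(1600.0, {"O2": 1e4, "CO2": 2e4, "H2O": 1e4}))
    ```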

  19. Assessment of subgrid-scale models with a large-eddy simulation-dedicated experimental database: The pulsatile impinging jet in turbulent cross-flow

    Science.gov (United States)

    Baya Toda, Hubert; Cabrit, Olivier; Truffin, Karine; Bruneaux, Gilles; Nicoud, Franck

    2014-07-01

    Large-Eddy Simulation (LES) in complex geometries and industrial applications like piston engines, gas turbines, or aircraft engines requires the use of advanced subgrid-scale (SGS) models able to take into account the main flow features and the turbulence anisotropy. Keeping this goal in mind, this paper reports an LES-dedicated experiment of a pulsatile hot jet impinging on a flat plate in the presence of a cold turbulent cross-flow. Unlike commonly used academic test cases, this configuration involves different flow features encountered in complex configurations: shear/rotating regions, a stagnation point, wall-turbulence, and the propagation of a vortex ring along the wall. This experiment was also designed with the aim of using quantitative and nonintrusive optical diagnostics such as Particle Image Velocimetry, and of easily performing an LES involving a relatively simple geometry and well-controlled boundary conditions. Hence, two eddy-viscosity-based SGS models are investigated: the dynamic Smagorinsky model [M. Germano, U. Piomelli, P. Moin, and W. Cabot, "A dynamic subgrid-scale eddy viscosity model," Phys. Fluids A 3(7), 1760-1765 (1991)] and the σ-model [F. Nicoud, H. B. Toda, O. Cabrit, S. Bose, and J. Lee, "Using singular values to build a subgrid-scale model for large eddy simulations," Phys. Fluids 23(8), 085106 (2011)]. Both models give similar results during the first phase of the experiment. However, it was found that the dynamic Smagorinsky model could not accurately predict the vortex-ring propagation, while the σ-model provides a better agreement with the experimental measurements. Setting aside the implementation of the dynamic procedure (implemented here in its simplest form, i.e., without averaging over homogeneous directions and with clipping of negative values to ensure numerical stability), it is suggested that the mitigated predictions of the dynamic Smagorinsky model are due to the dynamic constant, which strongly depends on the mesh resolution.
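
    The σ-model singled out above has a compact closed form: with σ1 >= σ2 >= σ3 the singular values of the resolved velocity-gradient tensor, nu_t = (C_σ Δ)^2 σ3(σ1 - σ2)(σ2 - σ3)/σ1^2. A minimal per-cell sketch follows; C_σ near 1.35 is the value commonly quoted from Nicoud et al., and grid handling is left out:

    ```python
    import numpy as np

    def sigma_model_viscosity(grad_u, delta, c_sigma=1.35):
        """SGS eddy viscosity of the sigma-model (after Nicoud et al., 2011).

        grad_u: 3x3 resolved velocity-gradient tensor g_ij = du_i/dx_j.
        The differential operator is built from the singular values
        sigma1 >= sigma2 >= sigma3 >= 0 of grad_u; it vanishes for 2-D and
        pure-shear flows, the model's main advantage over Smagorinsky.
        """
        s1, s2, s3 = np.linalg.svd(grad_u, compute_uv=False)  # descending
        d_sigma = s3 * (s1 - s2) * (s2 - s3) / max(s1**2, 1e-30)
        return (c_sigma * delta) ** 2 * d_sigma

    # Pure shear (du/dy only) yields zero eddy viscosity, as designed
    g = np.zeros((3, 3))
    g[0, 1] = 1.0
    print(sigma_model_viscosity(g, delta=0.01))  # 0.0
    ```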

  20. Aerosol indirect effects in the ECHAM5-HAM2 climate model with subgrid cloud microphysics in a stochastic framework

    Science.gov (United States)

    Tonttila, Juha; Räisänen, Petri; Järvinen, Heikki

    2015-04-01

    Representing cloud properties in global climate models remains a challenging topic, which to a large extent is due to cloud processes acting on spatial scales much smaller than the typical model grid resolution. Several attempts have been made to alleviate this problem. One such method was introduced in the ECHAM5-HAM2 climate model by Tonttila et al. (2013), where cloud microphysical properties, along with the processes of cloud droplet activation and autoconversion, were computed using an ensemble of stochastic subcolumns within the climate model grid columns. Moreover, the subcolumns were sampled for radiative transfer using the Monte Carlo Independent Column Approximation approach. The same model version is used in this work (Tonttila et al. 2014), where 5-year nudged integrations are performed with a series of different model configurations. Each run is performed twice, once with pre-industrial (PI, year 1750) aerosol emission conditions and once with present-day (PD, year 2000) conditions, based on the AEROCOM emission inventories. The differences between PI and PD simulations are used to estimate the impact of anthropogenic aerosols on clouds and the aerosol indirect effect (AIE). One of the key results is that when both cloud activation and autoconversion are computed in the subcolumn space, the aerosol-induced PI-to-PD change in the global-mean liquid water path is up to 19 % smaller than in the reference with grid-scale computations. Together with similar changes in the cloud droplet number concentration, this influences the cloud radiative effects and thus the AIE, which is estimated as the difference in the net cloud radiative effect between PI and PD conditions. Accordingly, the AIE is reduced by 14 %, from 1.59 W m-2 in the reference model version to 1.37 W m-2 in the experimental model configuration. The results of this work explicitly show that careful consideration of the subgrid variability in cloud microphysical properties and consistent

  1. Model Validation for Propulsion - On the TFNS and LES Subgrid Models for a Bluff Body Stabilized Flame

    Science.gov (United States)

    Wey, Thomas

    2017-01-01

    With advances in computational power and the availability of distributed computers, the use of even the most complex turbulent chemical interaction models in combustors, and coupled analysis of combustors and turbines, is now possible and increasingly affordable for realistic geometries. More stringent emission standards have spurred the development of more fuel-efficient, low-emission combustion systems for aircraft gas turbine applications, and NOx emissions are known to increase dramatically with flame temperature. The major difficulty in modeling the turbulence-chemistry interaction lies in the high non-linearity of the reaction rate expressed in terms of the temperature and species mass fractions. The transport filtered density function (FDF) model and the linear eddy model (LEM), which both use local instantaneous values of the temperature and mass fractions, have been shown to often provide more accurate predictions of turbulent combustion. In the present work, the time-filtered Navier-Stokes (TFNS) approach, capable of capturing unsteady flow structures important for turbulent mixing in the combustion chamber, and two different subgrid models, LEM-like and EUPDF-like, capable of emulating the major processes occurring in the turbulence-chemistry interaction, are used to perform reacting flow simulations of a selected test case. The selected test case from the Volvo Validation Rig was documented by Sjunnesson.

  2. One-equation sub-grid scale (SGS) modelling for Euler-Euler large eddy simulation (EELES) of dispersed bubbly flow

    NARCIS (Netherlands)

    Niceno, B.; Dhotre, M.T.; Deen, N.G.

    2008-01-01

    In this work, we have presented a one-equation model for sub-grid scale (SGS) kinetic energy and applied it for an Euler-Euler large eddy simulation (EELES) of a bubble column reactor. The one-equation model for SGS kinetic energy shows improved predictions over the state-of-the-art dynamic
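
    A minimal sketch of the one-equation idea follows: the SGS viscosity is built from a transported SGS kinetic energy, nu_sgs = C_k * Delta * sqrt(k_sgs). The constants and the dropped transport and bubble-induced source terms are generic literature choices, not necessarily those of this EELES study:

    ```python
    import numpy as np

    def sgs_viscosity(k_sgs, delta, c_k=0.07):
        """One-equation SGS closure: nu_sgs = C_k * Delta * sqrt(k_sgs)."""
        return c_k * delta * np.sqrt(np.maximum(k_sgs, 0.0))

    def advance_k_sgs(k_sgs, production, delta, dt, c_eps=0.93):
        """Explicit update of SGS kinetic energy with production/dissipation.

        Advection and diffusion are omitted for brevity; a full EELES model
        also adds bubble-induced turbulence source terms.
        """
        eps = c_eps * np.maximum(k_sgs, 0.0) ** 1.5 / delta
        return k_sgs + dt * (production - eps)

    # Relax toward the equilibrium where production balances dissipation
    k = 1e-3
    for _ in range(5000):
        k = advance_k_sgs(k, production=1e-2, delta=0.01, dt=1e-4)
    print(k, sgs_viscosity(k, 0.01))
    ```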

  3. A Subgrid Parameterization for Wind Turbines in Weather Prediction Models with an Application to Wind Resource Limits

    Directory of Open Access Journals (Sweden)

    B. H. Fiedler

    2014-01-01

    A subgrid parameterization is offered for representing wind turbines in weather prediction models. The parameterization models the drag and mixing that the turbines cause in the atmosphere, as well as the electrical power that the wind produces in the turbines. The documentation of the parameterization is complete; it does not require knowledge of proprietary wind turbine characteristics. The parameterization is applied to a study of wind resource limits in a hypothetical giant wind farm. The simulated production density was found not to exceed 1 W m−2, peaking at a deployed capacity density of 5 W m−2 and decreasing slightly as capacity density increased to 20 W m−2.
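
    A generic actuator-disk version of such a parameterization can be sketched as below; the thrust and power coefficients, rotor radius, and rated power are invented placeholders rather than the paper's (deliberately non-proprietary) formulation:

    ```python
    import numpy as np

    def turbine_tendency(u, rho, n_turbines, cell_area, dz,
                         r=40.0, c_t=0.8, c_p=0.4, rated_w=2.0e6):
        """Drag tendency and electrical power for turbines in one grid cell.

        Thrust F = 0.5*rho*C_T*A*U**2 decelerates the cell-mean flow, and
        power P = min(0.5*rho*C_P*A*U**3, rated) is extracted per turbine.
        All parameter values are illustrative assumptions.
        """
        a_disk = np.pi * r**2
        thrust = 0.5 * rho * c_t * a_disk * u**2 * n_turbines          # N
        power = min(0.5 * rho * c_p * a_disk * u**3, rated_w) * n_turbines
        air_mass = rho * cell_area * dz                                # kg
        du_dt = -thrust / air_mass                                     # m/s^2
        production_density = power / cell_area                         # W/m^2
        return du_dt, production_density

    du_dt, pd = turbine_tendency(u=8.0, rho=1.2, n_turbines=10,
                                 cell_area=9e6, dz=120.0)
    print(du_dt, pd)   # production density ~0.7 W/m^2, cf. the ~1 W/m^2 limit
    ```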

  4. A new mixed subgrid-scale model for large eddy simulation of turbulent drag-reducing flows of viscoelastic fluids

    Science.gov (United States)

    Li, Feng-Chen; Wang, Lu; Cai, Wei-Hua

    2015-07-01

    A mixed subgrid-scale (SGS) model based on coherent structures and temporal approximate deconvolution (MCT) is proposed for turbulent drag-reducing flows of viscoelastic fluids. The main idea of the MCT SGS model is to perform spatial filtering for the momentum equation and temporal filtering for the conformation tensor transport equation of the turbulent flow of a viscoelastic fluid, respectively. The MCT model is suitable for large eddy simulation (LES) of turbulent drag-reducing flows of viscoelastic fluids in engineering applications since the model parameters can be easily obtained. LES of forced homogeneous isotropic turbulence (FHIT) with polymer additives and of turbulent channel flow with surfactant additives based on the MCT SGS model shows excellent agreement with direct numerical simulation (DNS) results. Compared with LES results using the temporal approximate deconvolution model (TADM) for FHIT with polymer additives, the mixed MCT SGS model behaves better, particularly in its ability to reach higher values of simulation parameters such as the Reynolds number. Since turbulent flows at high Reynolds numbers are of interest in scientific and engineering research, the MCT model can be a more suitable model for LES of turbulent drag-reducing flows of viscoelastic fluids with polymer or surfactant additives. Project supported by the China Postdoctoral Science Foundation (Grant No. 2011M500652), the National Natural Science Foundation of China (Grant Nos. 51276046 and 51206033), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20112302110020).

  5. Exploring the Limits of the Dynamic Procedure for Modeling Subgrid-Scale Stresses in LES of Inhomogeneous Flows.

    Science.gov (United States)

    Le, A.-T.; Kim, J.; Coleman, G.

    1996-11-01

    One of the primary reasons dynamic subgrid-scale (SGS) models are more successful than those that are `hand-tuned' is thought to be their insensitivity to numerical and modeling parameters. Jiménez has recently demonstrated that large-eddy simulations (LES) of decaying isotropic turbulence using a dynamic Smagorinsky model yield correct decay rates -- even when the model is subjected to a range of artificial perturbations. The objective of the present study is to determine to what extent this `self-adjusting' feature of dynamic SGS models is found in LES of inhomogeneous flows. The effects of numerical and modeling parameters on the accuracy of LES solutions of fully developed and developing turbulent channel flow are studied, using a spectral code and various dynamic models (including those of Lilly et al. and Meneveau et al.); other modeling parameters tested include the filter-width ratio and the effective magnitude of the Smagorinsky coefficient. Numerical parameters include the form of the convective term and the type of test filter (sharp-cutoff versus tophat). The resulting LES statistics are found to be surprisingly sensitive to the various parameter choices, which implies that more care than is needed for homogeneous-flow simulations must be exercised when performing LES of inhomogeneous flows.
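
    For reference, the dynamic procedure under scrutiny here determines its coefficient from the Germano identity by least squares, C = <L_ij M_ij> / <M_ij M_ij>. The sketch below shows only that averaging/clipping step together with a top-hat test filter; building L_ij and M_ij from the resolved velocity is omitted, and the array shapes are assumptions:

    ```python
    import numpy as np

    def test_filter(f):
        """Top-hat test filter: 3-point average along each axis (periodic)."""
        for ax in range(f.ndim):
            f = (np.roll(f, -1, ax) + f + np.roll(f, 1, ax)) / 3.0
        return f

    def dynamic_coefficient(l_ij, m_ij, clip=True, average=True):
        """Least-squares dynamic coefficient C = <L_ij M_ij> / <M_ij M_ij>.

        l_ij, m_ij: arrays of shape (3, 3, nx, ny, nz) holding the Germano
        identity tensor L_ij and the model tensor M_ij. `average` mimics
        averaging over homogeneous directions; `clip` zeroes negative
        values, the 'simplest form' of the procedure discussed above.
        """
        num = np.einsum('ij...,ij...->...', l_ij, m_ij)
        den = np.einsum('ij...,ij...->...', m_ij, m_ij)
        if average:
            c = np.full_like(num, num.mean() / max(den.mean(), 1e-30))
        else:
            c = num / np.maximum(den, 1e-30)
        return np.maximum(c, 0.0) if clip else c

    # Example: filter a random scalar field as a stand-in for the test filter
    print(test_filter(np.random.rand(8, 8, 8)).shape)
    ```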

  6. Modelling sub-grid wetland in the ORCHIDEE global land surface model: evaluation against river discharges and remotely sensed data

    Directory of Open Access Journals (Sweden)

    B. Ringeval

    2012-07-01

    The quality of the global hydrological simulations performed by land surface models (LSMs) strongly depends on processes that occur at unresolved spatial scales. Approaches such as TOPMODEL have been developed, which allow soil moisture redistribution within each grid-cell, based upon sub-grid scale topography. Moreover, the coupling between TOPMODEL and an LSM appears as a potential way to simulate wetland extent dynamics and their sensitivity to climate, a recently identified research problem for biogeochemical modelling, including methane emissions. Global evaluation of the coupling between TOPMODEL and an LSM is difficult, and prior attempts have been indirect, based on the evaluation of the simulated river flow. This study presents a new way to evaluate this coupling, within the ORCHIDEE LSM, using remote sensing data of inundated areas. Because of differences in nature between the satellite-derived information (inundation extent) and the variable diagnosed by TOPMODEL/ORCHIDEE (area at maximum soil water content), the evaluation focuses on the spatial distribution of these two quantities as well as on their temporal variation. Despite some difficulties in exactly matching observed localized inundated events, we obtain a rather good agreement in the distribution of these two quantities at a global scale. Floodplains are not accounted for in the model, and this is a major limitation. The difficulty of reproducing the year-to-year variability of the observed inundated area (for instance, the decreasing trend at the end of the 1990s) is also underlined. Classical indirect evaluation based on comparison between simulated and observed river flow is also performed and underlines difficulties in simulating river flow after coupling with TOPMODEL. The relationship between inundation and river flow at the basin scale in the model is analyzed using both methods (evaluation against remote sensing data and river flow). Finally, we discuss the potential of

  7. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    Science.gov (United States)

    Sarlak, Hamid

    2017-05-01

    This paper presents results of a series of numerical simulations studying the aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations, at chord-based Reynolds numbers of 10,000, 24,000 and 60,000, primarily to investigate the role of sub-grid scale (SGS) modeling in the dynamics of the flow generated over the airfoil, which has not been examined in great detail in the past. Simulations are increasingly influenced by SGS modeling as the Reynolds number increases, and the effect is visible even at the relatively low chord Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky model gives the poorest predictions of the flow, overpredicting lift and producing a larger separation on the airfoil's suction side, while implicit LES offers the closest pressure-distribution predictions relative to the literature.

  8. Simple lattice Boltzmann subgrid-scale model for convectional flows with high Rayleigh numbers within an enclosed circular annular cavity

    Science.gov (United States)

    Chen, Sheng; Tölke, Jonas; Krafczyk, Manfred

    2009-08-01

    Natural convection within an enclosed circular annular cavity formed by two concentric vertical cylinders is of fundamental interest and practical importance. Generally, the assumption of axisymmetric thermal flow is adopted for simulating such natural convection, and this assumption is held to remain valid even for some turbulent convection. The Rayleigh numbers (Ra) of realistic flows are usually very high, yet work on designing suitable and efficient lattice Boltzmann (LB) models for such flows is quite rare. To bridge the gap, in this paper a simple LB subgrid-scale (SGS) model, which is based on our recent work [S. Chen, J. Tölke, and M. Krafczyk, Phys. Rev. E 79, 016704 (2009); S. Chen, J. Tölke, S. Geller, and M. Krafczyk, Phys. Rev. E 78, 046703 (2008)], is proposed for simulating convective flow with high Ra within an enclosed circular annular cavity. The key parameter for the SGS model can be evaluated quite easily and efficiently by the present model. The numerical experiments demonstrate that the present model works well over a large range of Ra and Prandtl numbers (Pr). Though in the present study a widely used static Smagorinsky turbulence model is adopted to demonstrate how to develop an LB SGS model for simulating axisymmetric thermal flows with high Ra, other state-of-the-art turbulence models can be incorporated into the present model in the same way. In addition, the present model can be extended straightforwardly to simulate other axisymmetric convective flows with high Ra, for example, turbulent convection with internal volumetric heat generation in a vertical cylinder, which is an important simplified representation of a nuclear reactor.
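
    In lattice units (dx = dt = 1, c_s^2 = 1/3), grafting a Smagorinsky SGS term onto a BGK lattice Boltzmann scheme amounts to a local modification of the relaxation time, which is one reason the combination is attractive for high-Ra flows. A sketch under those assumptions (in practice the strain-rate magnitude |S| is extracted locally from the non-equilibrium momentum flux, a step omitted here):

    ```python
    def smagorinsky_relaxation_time(nu0, strain_mag, c_s=0.16, delta=1.0):
        """Effective BGK relaxation time for a lattice Boltzmann SGS model.

        In lattice units, nu = (tau - 0.5) / 3, so with the Smagorinsky
        eddy viscosity nu_t = (c_s * delta)**2 * |S| the total relaxation
        time is tau = 3 * (nu0 + nu_t) + 0.5. strain_mag would normally be
        evaluated from local non-equilibrium moments; here it is passed in.
        """
        nu_t = (c_s * delta) ** 2 * strain_mag
        return 3.0 * (nu0 + nu_t) + 0.5

    # A high-Ra cell with strong local shear gets a larger relaxation time:
    print(smagorinsky_relaxation_time(nu0=1e-4, strain_mag=0.05))
    ```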

  9. A Physically Based Horizontal Subgrid-scale Turbulent Mixing Parameterization for the Convective Boundary Layer in Mesoscale Models

    Science.gov (United States)

    Zhou, Bowen; Xue, Ming; Zhu, Kefeng

    2017-04-01

    Compared to the representation of vertical turbulent mixing through various PBL schemes, the treatment of horizontal turbulence mixing in the boundary layer within mesoscale models, with O(10) km horizontal grid spacing, has received much less attention. In mesoscale models, subgrid-scale horizontal fluxes most often adopt the gradient-diffusion assumption. The horizontal mixing coefficients are usually set to a constant, or through the 2D Smagorinsky formulation, or in some cases based on the 1.5-order turbulence kinetic energy (TKE) closure. In this work, horizontal turbulent mixing parameterizations using physically based characteristic velocity and length scales are proposed for the convective boundary layer based on analysis of a well-resolved, wide-domain large-eddy simulation (LES). The proposed schemes involve different levels of sophistication. The first two schemes can be used together with first-order PBL schemes, while the third uses TKE to define its characteristic velocity scale and can be used together with TKE-based higher-order PBL schemes. The current horizontal mixing formulations are also assessed a priori through the filtered LES results to illustrate their limitations. The proposed parameterizations are tested a posteriori in idealized simulations of turbulent dispersion of a passive scalar. Comparisons show improved horizontal dispersion by the proposed schemes, and further demonstrate the weakness of the current schemes.

  10. Intercomparison of different subgrid-scale models for the Large Eddy Simulation of the diurnal evolution of the atmospheric boundary layer during the Wangara experiment

    Science.gov (United States)

    Dall'Ozzo, C.; Carissimo, B.; Musson-Genon, L.; Dupont, E.; Milliez, M.

    2012-04-01

    The study of a whole diurnal cycle of the atmospheric boundary layer evolving through unstable, neutral and stable states is essential to test a model applicable to the dispersion of pollutants. Consequently, an LES of a diurnal cycle is performed and compared to observations from the Wangara experiment (Days 33-34). All simulations are done with Code_Saturne [1], an open-source CFD code. The synthetic eddy method (SEM) [2] is implemented to initialize turbulence at the beginning of the simulation. Two different subgrid-scale (SGS) models are tested: the Smagorinsky model [3],[4] and the dynamic Wong and Lilly model [5]. The first, the most classical, uses a Smagorinsky constant Cs to parameterize the dynamical turbulent viscosity, while the second relies on a variable C. Cs remains insensitive to the atmospheric stability level, in contrast to the parameter C determined by the Wong and Lilly model, which minimizes the error between the resolved turbulent stress tensor (Lij) and the difference of the SGS stress tensors at two different filter scales (Mij). Furthermore, unlike in the Smagorinsky model, the thermal eddy diffusivity is calculated with a dynamically determined Prandtl number. The results are compared with previous simulations from Basu et al. (2008) [6], using a locally averaged scale-dependent dynamic (LASDD) SGS model, and with previous RANS simulations. The accuracy in reproducing the experimental atmospheric conditions is discussed, especially regarding the nighttime low-level jet formation. In addition, the benefit of using a coupled radiative model is discussed.

  11. Final Report. Evaluating the Climate Sensitivity of Dissipative Subgrid-Scale Mixing Processes and Variable Resolution in NCAR's Community Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-12-14

    The goals of this project were to (1) assess and quantify the sensitivity and scale-dependency of unresolved subgrid-scale mixing processes in NCAR’s Community Earth System Model (CESM), and (2) to improve the accuracy and skill of forthcoming CESM configurations on modern cubed-sphere and variable-resolution computational grids. The research thereby contributed to the description and quantification of uncertainties in CESM’s dynamical cores and their physics-dynamics interactions.

  12. Model averaging and muddled multimodel inferences.

    Science.gov (United States)

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t

  14. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
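
    The iterative idea can be sketched as follows: flows whose weighted share of the link exceeds their offered input rate are capped at that rate, and the freed capacity is re-divided among the remaining flows by weight. This is a reconstruction from the abstract, not the authors' exact algorithm:

    ```python
    def wfq_average_bandwidth(link_rate, weights, input_rates):
        """Iterative average-bandwidth assignment for a WFQ scheduler (sketch)."""
        n = len(weights)
        alloc = [None] * n
        active = set(range(n))
        capacity = float(link_rate)
        while active:
            w_sum = sum(weights[i] for i in active)
            # flows that cannot use their weighted fair share are capped
            capped = [i for i in active
                      if input_rates[i] <= capacity * weights[i] / w_sum]
            if not capped:
                for i in active:
                    alloc[i] = capacity * weights[i] / w_sum
                break
            for i in capped:
                alloc[i] = input_rates[i]
                capacity -= input_rates[i]
                active.remove(i)
        return alloc

    # Three flows on a 100 Mbit/s link; flow 0 only offers 10 Mbit/s
    print(wfq_average_bandwidth(100.0, [1, 2, 2], [10.0, 80.0, 80.0]))
    # -> [10.0, 45.0, 45.0]
    ```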

  15. Accounting for subgrid scale topographic variations in flood propagation modeling using MODFLOW

    DEFF Research Database (Denmark)

    Milzow, Christian; Kinzelbach, W.

    2010-01-01

    To be computationally viable, grid-based spatially distributed hydrological models of large wetlands or floodplains must be set up using relatively large cells (order of hundreds of meters to kilometers). Computational costs are especially high when considering the numerous model runs or model time...

  16. A Dynamic Subgrid Scale Model for Large Eddy Simulations Based on the Mori-Zwanzig Formalism

    CERN Document Server

    Parish, Eric J

    2016-01-01

    The development of reduced models for complex systems that lack scale separation remains one of the principal challenges in computational physics. The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a methodology for the development of mathematically-derived reduced models of dynamical systems. Several promising models have emerged from the optimal prediction community and have found application in molecular dynamics and turbulent flows. In this work, a novel M-Z-based closure model that addresses some of the deficiencies of existing methods is developed. The model is constructed by exploiting similarities between two levels of coarse-graining via the Germano identity of fluid mechanics and by assuming that memory effects have a finite temporal support. The appeal of the proposed model, which will be referred to as the `dynamic-$\\tau$' model, is that it is parameter-free and has a structural form imp...

  17. Sub-grid scale models for discontinuous Galerkin methods based on the Mori-Zwanzig formalism

    Science.gov (United States)

    Parish, Eric; Duraisamy, Karthk

    2017-11-01

    The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically-derived closure models. The M-Z formalism provides a methodology to reformulate a high-dimensional Markovian dynamical system as a lower-dimensional, non-Markovian (non-local) system. In this lower-dimensional system, the effects of the unresolved scales on the resolved scales are non-local and appear as a convolution integral. The non-Markovian system is an exact statement of the original dynamics and is used as a starting point for model development. In this work, we investigate the development of M-Z-based closure models within the context of the Variational Multiscale Method (VMS). The method relies on a decomposition of the solution space into two orthogonal subspaces. The impact of the unresolved subspace on the resolved subspace is shown to be non-local in time and is modeled through the M-Z formalism. The models are applied to hierarchical discontinuous Galerkin discretizations. Commonalities between the M-Z closures and conventional flux schemes are explored. This work was supported in part by AFOSR under the project ''LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.

  18. Simulation of subgrid orographic precipitation with an embedded 2-D cloud-resolving model

    Science.gov (United States)

    Jung, Joon-Hee; Arakawa, Akio

    2016-03-01

    By explicitly resolving cloud-scale processes with embedded two-dimensional (2-D) cloud-resolving models (CRMs), superparameterized global atmospheric models have successfully simulated various atmospheric events over a wide range of time scales. Up to now, however, such models have not included the effects of topography on the CRM grid scale. We have used both 3-D and 2-D CRMs to simulate the effects of topography with prescribed "large-scale" winds. The 3-D CRM is used as a benchmark. The results show that the mean precipitation can be simulated reasonably well by using a 2-D representation of topography as long as the statistics of the topography such as the mean and standard deviation are closely represented. It is also shown that the use of a set of two perpendicular 2-D grids can significantly reduce the error due to a 2-D representation of topography.

  19. USING CMAQ FOR EXPOSURE MODELING AND CHARACTERIZING THE SUB-GRID VARIABILITY FOR EXPOSURE ESTIMATES

    Science.gov (United States)

    Atmospheric processes and the associated transport and dispersion of atmospheric pollutants are known to be highly variable in time and space. Current air quality models that characterize atmospheric chemistry effects, e.g. the Community Multi-scale Air Quality (CMAQ), provide vo...

  20. A New Approach to Validate Subgrid Models in Complex High Reynolds Number Flows

    Science.gov (United States)

    1994-05-01

    data are also shown. These figures show the characteristic decrease in correlation when the grid is coarsened, with the scale similarity model showing...

  1. On the Effect of an Anisotropy-Resolving Subgrid-Scale Model on Turbulent Vortex Motions

    Science.gov (United States)

    2014-09-19

    expression coincides with the modified Leonard stress proposed by Germano et al. (1991). In this model, the SGS turbulence energy kSGS may be evaluated as...

  2. Numerical Dissipation and Subgrid Scale Modeling for Separated Flows at Moderate Reynolds Numbers

    Science.gov (United States)

    Cadieux, Francois; Domaradzki, Julian Andrzej

    2014-11-01

    Flows in rotating machinery, for unmanned and micro aerial vehicles, wind turbines, and propellers consist of different flow regimes. First, a laminar boundary layer is followed by a laminar separation bubble with a shear layer on top of it that experiences transition to turbulence. The separated turbulent flow then reattaches and evolves downstream from a nonequilibrium turbulent boundary layer to an equilibrium one. In previous work, the capability of LES to reduce the resolution requirements down to 1% of DNS resolution for such flows was demonstrated (Cadieux et al., JFE 136-6). However, under-resolved DNS agreed better with the benchmark DNS than simulations with explicit SGS modeling because numerical dissipation and filtering alone acted as a surrogate SGS dissipation. In the present work, numerical viscosity is quantified using a new method proposed recently by Schranner et al., and its effects are analyzed and compared to the turbulent eddy viscosities of explicit SGS models. The effect of different SGS models on a simulation of the same flow using a non-dissipative code is also explored. Supported by NSF.

  3. Renormalization-group theory for the eddy viscosity in subgrid modeling

    Science.gov (United States)

    Zhou, YE; Vahala, George; Hossain, Murshed

    1988-01-01

    Renormalization-group theory is applied to incompressible three-dimensional Navier-Stokes turbulence so as to eliminate unresolvable small scales. The renormalized Navier-Stokes equation now includes a triple nonlinearity with the eddy viscosity exhibiting a mild cusp behavior, in qualitative agreement with the test-field model results of Kraichnan. For the cusp behavior to arise, not only is the triple nonlinearity necessary but the effects of pressure must be incorporated in the triple term. The renormalized eddy viscosity will not exhibit a cusp behavior if it is assumed that a spectral gap exists between the large and small scales.

  4. An explicit relaxation filtering framework based upon Perona-Malik anisotropic diffusion for shock capturing and subgrid scale modeling of Burgers turbulence

    CERN Document Server

    Maulik, Romit

    2016-01-01

    In this paper, we introduce a relaxation filtering closure approach to account for subgrid scale effects in explicitly filtered large eddy simulations using the concept of anisotropic diffusion. We utilize the Perona-Malik diffusion model and demonstrate its shock-capturing ability and spectral performance for solving the Burgers turbulence problem, which is a simplified prototype for more realistic turbulent flows showing the same quadratic nonlinearity. Our numerical assessments present the behavior of various diffusivity functions in conjunction with a detailed sensitivity analysis with respect to the free modeling parameters. In comparison to direct numerical simulation (DNS) and under-resolved DNS results, we find that the proposed closure model is efficient in preventing energy accumulation at the grid cut-off and is also adept at preventing any spurious numerical oscillations due to shock formation under the optimal parameter choices. In contrast to other relaxation filtering approaches, it...
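
    A one-dimensional sketch of the Perona-Malik relaxation filter is given below, using the classic edge-stopping function g(s) = 1/(1 + (s/kappa)^2); the diffusivity choice, step size, and application frequency stand in for the free parameters whose sensitivity the paper studies:

    ```python
    import numpy as np

    def perona_malik_filter(u, dx, kappa, n_steps=1, dt_factor=0.2):
        """Relax a periodic 1-D field with Perona-Malik anisotropic diffusion.

        g(s) = 1 / (1 + (s/kappa)**2) diffuses smooth regions while leaving
        sharp gradients (shocks) nearly untouched. Applied every few steps
        of an under-resolved Burgers solver, it drains energy piling up at
        the grid cut-off; kappa is the free edge-stopping parameter.
        """
        dt = dt_factor * dx**2                    # stable explicit step
        for _ in range(n_steps):
            ux = (np.roll(u, -1) - u) / dx        # forward differences
            flux = ux / (1.0 + (ux / kappa) ** 2) # g(|u_x|) * u_x
            u = u + dt * (flux - np.roll(flux, 1)) / dx
        return u

    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    u = np.sin(x) + 0.05 * np.random.randn(x.size)   # noisy, shock-prone field
    u_filtered = perona_malik_filter(u, x[1] - x[0], kappa=1.0, n_steps=5)
    ```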

  5. Average Annual Precipitation (PRISM model) 1961 - 1990

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...

  6. Birefringent dispersive FDTD subgridding scheme

    OpenAIRE

    De Deckere, B; Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2016-01-01

    A novel 2D finite difference time domain (FDTD) subgridding method is proposed, only subject to the Courant limit of the coarse grid. By making mu or epsilon inside the subgrid dispersive, unconditional stability is induced at the cost of a sparse, implicit set of update equations. By only adding dispersion along preferential directions, it is possible to dramatically reduce the rank of the matrix equation that needs to be solved.

  7. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  8. Subgrid-scale turbulence in shock-boundary layer flows

    Science.gov (United States)

    Jammalamadaka, Avinash; Jaberi, Farhad

    2015-04-01

    Data generated by direct numerical simulation (DNS) for a Mach 2.75 zero-pressure gradient turbulent boundary layer interacting with shocks of different intensities are used for a priori analysis of subgrid-scale (SGS) turbulence and various terms in the compressible filtered Navier-Stokes equations. The numerical method used for DNS is based on a hybrid scheme that uses a non-dissipative central scheme in the shock-free turbulent regions and a robust monotonicity-preserving scheme in the shock regions. The behavior of SGS stresses and their components, namely the Leonard, Cross and Reynolds components, is examined in various regions of the flow for different shock intensities and filter widths. The backscatter in various regions of the flow is found to be significant only instantaneously, while the ensemble-averaged statistics indicate no significant backscatter. The budgets for the SGS kinetic energy equation are examined for a better understanding of shock-turbulence interactions at the subgrid level and also with the aim of providing useful information for one-equation LES models. A term-by-term analysis of SGS terms in the filtered total energy equation indicates that while each term in this equation is significant by itself, the net contribution by all of them is relatively small. This observation is consistent with our a posteriori analysis.

  9. Modeling lightning-NOx chemistry on a sub-grid scale in a global chemical transport model

    Directory of Open Access Journals (Sweden)

    A. Gressent

    2016-05-01

    For the first time, a plume-in-grid approach is implemented in a chemical transport model (CTM) to parameterize the effects of the nonlinear reactions occurring within highly concentrated NOx plumes from lightning NOx emissions (LNOx) in the upper troposphere. It is characterized by a set of parameters including the plume lifetime, the effective reaction rate constant related to NOx–O3 chemical interactions, and the fractions of NOx conversion into HNO3 within the plume. Parameter estimates were made using the Dynamical Simple Model of Atmospheric Chemical Complexity (DSMACC) box model, simple plume dispersion simulations, and the 3-D Meso-NH non-hydrostatic mesoscale atmospheric model. In order to assess the impact of the LNOx plume approach on the NOx and O3 distributions on a large scale, simulations for the year 2006 were performed using the GEOS-Chem global model with a horizontal resolution of 2° × 2.5°. The implementation of the LNOx parameterization implies an NOx and O3 decrease on a large scale over the regions characterized by strong lightning activity (up to 25 and 8 %, respectively, over central Africa in July) and a relative increase downwind of LNOx emissions (up to 18 and 2 % for NOx and O3, respectively, in July). The calculated variability in NOx and O3 mixing ratios around the mean value according to the known uncertainties in the parameter estimates is at a maximum over continental tropical regions, with ΔNOx [−33.1, +29.7] ppt and ΔO3 [−1.56, +2.16] ppb in January, and ΔNOx [−14.3, +21] ppt and ΔO3 [−1.18, +1.93] ppb in July, mainly depending on the determination of the diffusion properties of the atmosphere and the initial NO mixing ratio injected by lightning. This approach allows us (i) to reproduce a more realistic lightning NOx chemistry leading to better NOx and O3 distributions on the large scale and (ii) to focus on other improvements to reduce remaining uncertainties from processes

  10. Large Eddy Simulations of a Premixed Jet Combustor Using Flamelet-Generated Manifolds: Effects of Heat Loss and Subgrid-Scale Models

    KAUST Repository

    Hernandez Perez, Francisco E.

    2017-01-05

    Large eddy simulations of a turbulent premixed jet flame in a confined chamber were conducted using the flamelet-generated manifold technique for chemistry tabulation. The configuration is characterized by an off-center nozzle having an inner diameter of 10 mm, supplying a lean methane-air mixture with an equivalence ratio of 0.71 and a mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the manifold via burner-stabilized flamelets, and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed probability density functions. Comparisons between numerical results and measured data show that a considerable improvement in the prediction of temperature is achieved when heat losses are included in the manifold, as compared to the adiabatic one. Additional improvement in the temperature predictions is obtained by incorporating radiative heat losses. Moreover, further enhancements in the LES predictions are achieved by employing SGS models based on transport equations, such as the SGS turbulence kinetic energy equation with dynamic coefficients. While the numerical results display good agreement up to a distance of 4 nozzle diameters downstream of the nozzle exit, the results become less satisfactory farther downstream, suggesting that further improvements in the modeling are required, among which a more accurate model for the SGS variance of the progress variable can be relevant.

  11. Advanced subgrid-scale modeling for convection-dominated species transport at fluid interfaces with application to mass transfer from rising bubbles

    Science.gov (United States)

    Weiner, Andre; Bothe, Dieter

    2017-10-01

    This paper presents a novel subgrid scale (SGS) model for simulating convection-dominated species transport at deformable fluid interfaces. One possible application is the Direct Numerical Simulation (DNS) of mass transfer from rising bubbles. The transport of a dissolving gas along the bubble-liquid interface is determined by two transport phenomena: convection in the streamwise direction and diffusion in the interface-normal direction. The convective transport for technical bubble sizes is several orders of magnitude higher, leading to a thin concentration boundary layer around the bubble. A true DNS, fully resolving hydrodynamic and mass transfer length scales, results in infeasible computational costs. Our approach is therefore a DNS of the flow field combined with an SGS model to compute the mass transfer between bubble and liquid. An appropriate model function is used to compute the numerical fluxes on all cell faces of an interface cell. This allows the mass transfer to be predicted correctly even if the concentration boundary layer is fully contained in a single cell layer around the interface. We show that the SGS model reduces the resolution requirements at the interface by a factor of ten or more. The integral flux correction is also applicable to other thin boundary layer problems. Two flow regimes are investigated to validate the model. A semi-analytical solution for creeping flow is used to assess local and global mass transfer quantities. For higher Reynolds numbers ranging from Re = 100 to Re = 460 and Péclet numbers between Pe = 10^4 and Pe = 4×10^6, we compare the global Sherwood number against correlations from the literature. In terms of accuracy, the predicted mass transfer never deviates more than 4% from the reference values.

  12. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models that are also used in the US Environmental Protection Agency benchmark dose software suite, and produces a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are generated. The software fulfills a need for risk assessors, allowing them to go beyond a single model in risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
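
    The averaging step itself reduces to Akaike weights, w_k proportional to exp(-dAIC_k/2). The sketch below averages per-model BMDs directly for brevity, whereas MADr-BMD averages the fitted dose-response curves and re-derives the BMD and its lower bound (e.g., by bootstrap); all numbers are made up:

    ```python
    import numpy as np

    def akaike_weights(aic):
        """Akaike weights w_k = exp(-0.5*dAIC_k) / sum_j exp(-0.5*dAIC_j)."""
        d = np.asarray(aic, dtype=float) - np.min(aic)
        w = np.exp(-0.5 * d)
        return w / w.sum()

    def model_averaged_bmd(aic, bmd):
        """AIC-weighted combination of per-model benchmark doses (a sketch)."""
        return float(np.dot(akaike_weights(aic), bmd))

    # Hypothetical fits of three quantal models to the same dataset
    aic = [102.3, 101.1, 104.8]   # logistic, probit, quantal-linear (made up)
    bmd = [0.41, 0.38, 0.55]      # per-model benchmark doses (made up)
    print(akaike_weights(aic), model_averaged_bmd(aic, bmd))
    ```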

  13. The Optimal Selection for Restricted Linear Models with Average Estimator

    Directory of Open Access Journals (Sweden)

    Qichang Xie

    2014-01-01

    The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent-error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model-average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.

  14. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Science.gov (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

    The chemical processes of gas phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  15. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

    With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, cloud and precipitation formation are still subgrid-scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid-scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair-weather cumulus (RICO), and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
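
    As an illustration of the diagnostic step, assume a Gaussian subgrid PDF of liquid water content (the paper fits PDFs to the DYCOMS-II and RICO data; the Gaussian here is only a stand-in). The rain fraction is then the probability that the local cloud water exceeds the collection threshold:

    ```python
    import math

    def rain_fraction(qc_mean, qc_std, qc_crit):
        """Rain fraction from a prescribed subgrid PDF of cloud water.

        Assumes a Gaussian PDF of liquid water content within the grid box
        (an illustrative choice, not the paper's fitted PDF):
            f_rain = P(q_c > q_crit)
                   = 0.5 * erfc((q_crit - qc_mean) / (sqrt(2) * qc_std))
        """
        z = (qc_crit - qc_mean) / (math.sqrt(2.0) * qc_std)
        return 0.5 * math.erfc(z)

    # Grid box: mean LWC 0.3 g/kg, spatial std 0.15 g/kg, threshold 0.5 g/kg
    print(rain_fraction(0.3, 0.15, 0.5))   # ~0.09 -> 9% of the box rains
    ```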

  16. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...

  17. High-Resolution Global Modeling of the Effects of Subgrid-Scale Clouds and Turbulence on Precipitating Cloud Systems

    Energy Technology Data Exchange (ETDEWEB)

    Bogenschutz, Peter [National Center for Atmospheric Research, Boulder, CO (United States); Moeng, Chin-Hoh [National Center for Atmospheric Research, Boulder, CO (United States)

    2015-10-13

    The PI’s at the National Center for Atmospheric Research (NCAR), Chin-Hoh Moeng and Peter Bogenschutz, have primarily focused their time on the implementation of the Simplified-Higher Order Turbulence Closure (SHOC; Bogenschutz and Krueger 2013) to the Multi-scale Modeling Framework (MMF) global model and testing of SHOC on deep convective cloud regimes.

  18. Effect of reactions in small eddies on biomass gasification with eddy dissipation concept - Sub-grid scale reaction model.

    Science.gov (United States)

    Chen, Juhui; Yin, Weijie; Wang, Shuai; Meng, Cheng; Li, Jiuru; Qin, Bai; Yu, Guangbin

    2016-07-01

    The large-eddy simulation (LES) approach is used for gas turbulence, and the eddy dissipation concept (EDC) sub-grid scale (SGS) reaction model is employed for reactions in small eddies. The simulated gas molar fractions are in better agreement with experimental data when the EDC-SGS reaction model is used. The effect of reactions in small eddies on biomass gasification is analyzed in detail with the EDC-SGS reaction model. The distributions of the SGS reaction rates, which represent the reactions in small eddies, are analyzed together with the particle concentration and temperature. The distributions of the SGS reaction rates show a trend similar to those of the total reaction rates, and their values account for about 15% of the total reaction rates. The heterogeneous reaction rates predicted with the EDC-SGS reaction model are also improved during the biomass gasification process in the bubbling fluidized bed.

  19. Assessment of a flame surface density-based subgrid turbulent combustion model for nonpremixed flames of wood pyrolysis gas

    Science.gov (United States)

    Zhou, Xiangyang; Pakdee, Watit; Mahalingam, Shankar

    2004-10-01

    A flame surface density (FSD) model for closing the unresolved reaction source terms is developed and implemented in a large eddy simulation (LES) of a turbulent nonpremixed flame of wood pyrolysis gas and air. In this model, the filtered reaction rate ω¯α of species α is estimated as the product of the consumption rate per unit surface area mα and the filtered FSD Σ¯. This approach is attractive since it decouples the complex chemical problem (mα) from the description of the turbulence-combustion interaction (Σ¯). A simplified computational methodology is derived for the filtered FSD Σ¯, which is approximated as the product of the conditional filtered gradient of the mixture fraction and the filtered probability density function. Two models for the flamelet consumption rate mα are proposed to account for the effect of the filtered scalar dissipation rate. The performance of these models is assessed against a direct numerical simulation (DNS) database in which a laminar diffusion flame interacts with a decaying homogeneous and isotropic turbulent flow field. The chemistry is modeled by a four-step reduced mechanism that describes the oxidation of gaseous fuel released from the high-temperature pyrolysis of wood occurring in a wildland fire. Two-dimensional (2D) and 3D LES computations based on the FSD models are conducted for the same conditions as the DNS. The comparative assessments confirm the applicability of the proposed FSD model for describing the filtered reaction rate and the time evolution of temperature and species concentrations in the turbulent nonpremixed flame.

  20. Predicting the impacts of fishing canals on Floodplain Dynamics in Northern Cameroon using a small-scale sub-grid hydraulic model

    Science.gov (United States)

    Shastry, A. R.; Durand, M. T.; Fernandez, A.; Hamilton, I.; Kari, S.; Labara, B.; Laborde, S.; Mark, B. G.; Moritz, M.; Neal, J. C.; Phang, S. C.

    2015-12-01

    Modeling Regime Shifts in the Logone floodplain (MORSL) is an ongoing interdisciplinary project at The Ohio State University studying the ecological, social and hydrological system of the region. This floodplain, located in Northern Cameroon, is part of the Lake Chad basin. Between September and October the floodplain is inundated by overbank flow from the Logone River, which is important for agriculture and fishing. Fishermen build canals to catch fish during the flood's recession to the river by installing fishnets at the intersection of the canals and the river. Fishing canals thus connect the river to natural depressions of the terrain, which act as seasonal ponds during this part of the year. The annual increase in the number of canals affects hydraulics and hence fishing in the region. In this study, the Bara region (1 km2) of the Logone floodplain, through which the Lorome Mazra flows, is modeled using LISFLOOD-FP, a raster-based model with sub-grid parameterizations of canals. The aim of the study is to determine how small-scale, local features like canals and fishnets govern the flow, so that they can be incorporated in a large-scale model of the floodplain at a coarser spatial resolution. We will also study the effect of an increasing number of canals on the flooding pattern. We use a simplified version of the hydraulic system at a grid-cell size of 30 m, with synthetic topography, parameterized fishing canals, and fishnets represented as trash screens. The inflow at Bara is obtained from a separate, lower-resolution (1-km grid-cell) model run, which is forced by daily discharge records from Katoa, located about 25 km south of Bara. The model appropriately captures the rise and recession of the annual flood, supporting use of the LISFLOOD-FP approach. Predicted water levels at specific points in the river, the canals, the depression and the floodplain will be compared to field-measured heights of the flood recession in Bara from November 2014.
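
    Sub-grid channel schemes of this kind carry the canal flow in a channel narrower than the host grid cell. As an illustration only (the actual LISFLOOD-FP sub-grid solver of Neal et al., 2012 uses an inertial approximation of the shallow-water momentum equation), here is a Manning's-equation conveyance for a rectangular canal; the dimensions and roughness are invented:

    ```python
    import numpy as np

    def subgrid_channel_flow(depth, width, slope, n_manning=0.035):
        """Steady flow in a rectangular sub-grid channel via Manning's equation.

        Illustrative only: the geometric idea of a channel narrower than
        the grid cell is the point, not the exact momentum formulation.
        """
        area = depth * width                    # flow cross-section [m^2]
        wetted_perimeter = width + 2.0 * depth  # rectangular section
        hydraulic_radius = area / wetted_perimeter
        # Manning: Q = (A / n) * R^(2/3) * sqrt(S)
        return (area / n_manning) * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)

    # A hypothetical 2 m wide fishing canal, 0.5 m deep, on a 1e-4 slope:
    print(subgrid_channel_flow(depth=0.5, width=2.0, slope=1e-4))
    ```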

  1. Effect of aerosol subgrid variability on aerosol optical depth and cloud condensation nuclei: Implications for global aerosol modelling

    NARCIS (Netherlands)

    Weigum, Natalie; Schutgens, Nick; Stier, Philip

    2016-01-01

    A fundamental limitation of grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid boxes, which can lead to discrepancies in simulated aerosol climate effects.

  2. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of the phenotype clusters identified using each individual method.

  3. Statistical dynamical subgrid-scale parameterizations for geophysical flows

    Energy Technology Data Exchange (ETDEWEB)

    O'Kane, T J; Frederiksen, J S [Centre for Australian Weather and Climate Research, Bureau of Meteorology, 700 Collins St, Docklands, Melbourne, VIC (Australia) and CSIRO Marine and Atmospheric Research, Aspendale, VIC (Australia)], E-mail: t.okane@bom.gov.au

    2008-12-15

    Simulations of both atmospheric and oceanic circulations at given finite resolutions are strongly dependent on the form and strengths of the dynamical subgrid-scale parameterizations (SSPs) and in particular are sensitive to subgrid-scale transient eddies interacting with the retained scale topography and the mean flow. In this paper, we present numerical results for SSPs of the eddy-topographic force, stochastic backscatter, eddy viscosity and eddy-mean field interaction using an inhomogeneous statistical turbulence model based on a quasi-diagonal direct interaction approximation (QDIA). Although the theoretical description on which our model is based is for general barotropic flows, we specifically focus on global atmospheric flows where large-scale Rossby waves are present. We compare and contrast the closure-based results with an important earlier heuristic SSP of the eddy-topographic force, based on maximum entropy or statistical canonical equilibrium arguments, developed specifically for general ocean circulation models (Holloway 1992 J. Phys. Oceanogr. 22 1033-46). Our results demonstrate that where strong zonal flows and Rossby waves are present, such as in the atmosphere, maximum entropy arguments are insufficient to accurately parameterize the subgrid contributions due to eddy-eddy, eddy-topographic and eddy-mean field interactions. We contrast our atmospheric results with findings for the oceans. Our study identifies subgrid-scale interactions that are currently not parameterized in numerical atmospheric climate models, which may lead to systematic defects in the simulated circulations.

  4. An improved switching converter model using discrete and average techniques

    Science.gov (United States)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
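
    To make the averaging side of the comparison concrete, the sketch below applies classical state-space averaging to an ideal buck converter. This illustrates the standard averaging technique the abstract starts from, not the paper's combined discrete-average model; the component values are hypothetical:

    ```python
    import numpy as np

    # State-space averaging for an ideal buck converter, x = [iL, vC].
    # The averaged model is dx/dt = (d*A1 + (1-d)*A2) x + (d*B1 + (1-d)*B2) u;
    # for the ideal buck, A1 = A2, so only the input matrix is switched.
    L, C, R = 100e-6, 470e-6, 10.0   # hypothetical component values
    Vin, d = 12.0, 0.5               # input voltage and duty cycle

    A = np.array([[0.0, -1.0 / L],
                  [1.0 / C, -1.0 / (R * C)]])
    B_on = np.array([1.0 / L, 0.0])   # switch on: input connected
    B_off = np.array([0.0, 0.0])      # switch off: freewheeling interval

    B_avg = d * B_on + (1.0 - d) * B_off

    # DC operating point of the averaged model: 0 = A x + B_avg * Vin
    x_dc = np.linalg.solve(A, -B_avg * Vin)
    print(f"iL = {x_dc[0]:.3f} A, vC = {x_dc[1]:.3f} V")  # vC -> d*Vin = 6 V
    ```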

  5. On the Choice of Prior in Bayesian Model Averaging

    NARCIS (Netherlands)

    Einmahl, J.H.J.; Magnus, J.R.; Kumar, K.

    2011-01-01

    Bayesian model averaging attempts to combine parameter estimation and model uncertainty in one coherent framework. The choice of prior is then critical. Within an explicit framework of ignorance we define a ‘suitable’ prior as one which leads to a continuous and suitable analog to the pretest

  6. Bayesian model averaging of naive Bayes for clustering.

    Science.gov (United States)

    Santafé, Guzmán; Lozano, Jose A; Larrañaga, Pedro

    2006-10-01

    This paper considers a Bayesian model-averaging (MA) approach to learn an unsupervised naive Bayes classification model. By using the expectation model-averaging (EMA) algorithm, which is proposed in this paper, a unique naive Bayes model that approximates an MA over selective naive Bayes structures is obtained. This algorithm makes it possible to obtain the parameters for the approximate MA clustering model in the same time complexity needed to learn the maximum-likelihood model with the expectation-maximization algorithm. The proposed method can also be regarded as an approach to unsupervised feature subset selection, because the model obtained by the EMA algorithm incorporates information on how dependent every predictive variable is on the cluster variable.

  7. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the doubly bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.
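
    To illustrate the dynamic structure, the sketch below simulates a KARMA(1,1)-style recursion with a logit link on (0,1): the linear predictor combines an autoregressive term with a moving-average error term, and each observation is drawn from a Kumaraswamy distribution whose second shape parameter is chosen so that the conditional median matches the linear predictor. The exact definition of the MA error term varies in the literature and is an assumption here; all parameter values are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_kumaraswamy_by_median(median, a=2.0):
        """Draw from Kumaraswamy(a, b), with b set so the median equals `median`."""
        b = np.log(0.5) / np.log1p(-median ** a)   # (1 - m^a)^b = 1/2
        u = rng.uniform()
        return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)  # inverse CDF

    def simulate_karma(T=200, alpha=0.2, phi=0.5, theta=0.3, a=2.0):
        """Minimal KARMA(1,1)-style recursion with a logit link (illustrative)."""
        logit = lambda x: np.log(x / (1.0 - x))
        inv_logit = lambda e: 1.0 / (1.0 + np.exp(-e))
        y = np.empty(T)
        err_prev, y_prev = 0.0, 0.5
        for t in range(T):
            eta = alpha + phi * logit(y_prev) + theta * err_prev
            mu = inv_logit(eta)                     # conditional median in (0, 1)
            y[t] = np.clip(sample_kumaraswamy_by_median(mu, a), 1e-9, 1 - 1e-9)
            err_prev = logit(y[t]) - eta            # moving-average error term
            y_prev = y[t]
        return y

    print(simulate_karma()[:5])
    ```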

  8. A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Chumakov, Sergei [Los Alamos National Laboratory

    2008-01-01

    We perform a direct numerical simulation (DNS) of forced homogeneous isotropic turbulence with a passive scalar that is forced by a mean gradient. The DNS data are used to study the properties of the subgrid-scale flux of a passive scalar in the framework of large eddy simulation (LES), such as alignment trends between the flux and the resolved and subgrid-scale flow structures. It is shown that the direction of the flux is strongly coupled with the subgrid-scale stress axes rather than with resolved flow quantities such as strain, vorticity, or scalar gradient. We derive an approximate transport equation for the subgrid-scale flux of a scalar and examine the relative importance of the terms in the transport equation. A particular form of LES tensor-viscosity model for the scalar flux, which includes the subgrid-scale stress, is investigated. The effect of different models for the subgrid-scale stress on the resulting model for the subgrid-scale flux is studied.

  9. Modeling of Sokoto Daily Average Temperature: A Fractional ...

    African Journals Online (AJOL)

    Modeling of Sokoto Daily Average Temperature: A Fractional Integration Approach. *1L.K. Ibrahim, 2B.K. Asare, 2M. Yakubu and 1U. Dauda. 1Department of Mathematics and Computer Science, Umaru Musa Yar'adua University, Katsina ...

  10. Dynamic logistic regression and dynamic model averaging for binary classification.

    Science.gov (United States)

    McCormick, Tyler H; Raftery, Adrian E; Madigan, David; Burd, Randall S

    2012-03-01

    We propose an online binary classification procedure for cases when there is uncertainty about the model to use and parameters within a model change over time. We account for model uncertainty through dynamic model averaging, a dynamic extension of Bayesian model averaging in which posterior model probabilities may also change with time. We apply a state-space model to the parameters of each model and we allow the data-generating model to change over time according to a Markov chain. Calibrating a "forgetting" factor accommodates different levels of change in the data-generating mechanism. We propose an algorithm that adjusts the level of forgetting in an online fashion using the posterior predictive distribution, and so accommodates various levels of change at different times. We apply our method to data from children with appendicitis who receive either a traditional (open) appendectomy or a laparoscopic procedure. Factors associated with which children receive a particular type of procedure changed substantially over the 7 years of data collection, a feature that is not captured using standard regression modeling. Because our procedure can be implemented completely online, future data collection for similar studies would require storing sensitive patient information only temporarily, reducing the risk of a breach of confidentiality.
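
    The core of the procedure is the weight update: the previous posterior model probabilities are flattened by a forgetting factor, which stands in for an explicit Markov transition between models, and then updated with each model's predictive density for the new observation. A minimal sketch with invented numbers:

    ```python
    import numpy as np

    def dma_weight_update(prior_probs, likelihoods, forgetting=0.95):
        """One step of dynamic model averaging (illustrative sketch).

        Raising the previous posterior to the power `forgetting` and
        renormalizing flattens it toward uniformity, letting recently
        well-performing models regain weight quickly; `likelihoods` are
        the models' one-step predictive densities for the new observation.
        """
        predicted = prior_probs ** forgetting
        predicted /= predicted.sum()          # forgetting-flattened prior
        posterior = predicted * likelihoods   # Bayes update with the new data point
        return posterior / posterior.sum()

    w = np.array([0.7, 0.2, 0.1])             # yesterday's model probabilities
    lik = np.array([0.02, 0.09, 0.01])        # predictive densities for today's y
    print(dma_weight_update(w, lik))          # the second model gains weight
    ```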

  11. Fully variational average atom model with ion-ion correlations.

    Science.gov (United States)

    Starrett, C E; Saumon, D

    2012-02-01

    An average atom model for dense ionized fluids that includes ion correlations is presented. The model assumes spherical symmetry and is based on density functional theory, the integral equations for uniform fluids, and a variational principle applied to the grand potential. Starting from density functional theory for a mixture of classical ions and quantum mechanical electrons, an approximate grand potential is developed, with an external field being created by a central nucleus fixed at the origin. Minimization of this grand potential with respect to electron and ion densities is carried out, resulting in equations for effective interaction potentials. A third condition resulting from minimizing the grand potential with respect to the average ion charge determines the noninteracting electron chemical potential. This system is coupled to a system of point ions and electrons with an ion fixed at the origin, and a closed set of equations is obtained. Solution of these equations results in a self-consistent electronic and ionic structure for the plasma as well as the average ionization, which is continuous as a function of temperature and density. Other average atom models are recovered by application of simplifying assumptions.

  12. Forecasting natural gas consumption in China by Bayesian Model Averaging

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2015-11-01

    With the rapid growth of natural gas consumption in China, more accurate and reliable forecasting models are urgently needed. Considering the limitations of single models and the presence of model uncertainty, this paper presents a combinative method to forecast natural gas consumption by Bayesian Model Averaging (BMA). It can effectively handle the uncertainty associated with model structure and parameters, and thus improves forecasting accuracy. This paper chooses six variables for forecasting natural gas consumption: GDP, urban population, energy consumption structure, industrial structure, energy efficiency, and exports of goods and services. The results show that, compared to a grey prediction model, a linear regression model and artificial neural networks, the BMA method provides a flexible tool for forecasting natural gas consumption, which will grow rapidly in the future. This study can provide insightful information on future natural gas consumption.

  13. The stability of a zonally averaged thermohaline circulation model

    CERN Document Server

    Schmidt, G A

    1995-01-01

    A combination of analytical and numerical techniques is used to efficiently determine the qualitative and quantitative behaviour of a one-basin zonally averaged thermohaline circulation ocean model. In contrast to earlier studies, which use time stepping to find the steady solutions, the steady-state equations are first solved directly to obtain the multiple equilibria under identical mixed boundary conditions. This approach is based on the differentiability of the governing equations and especially of the convection scheme. A linear stability analysis is then performed, in which the normal modes and corresponding eigenvalues are found for the various equilibrium states. Resonant periodic solutions superimposed on these states are predicted for various types of forcing. The results are used to gain insight into the solutions obtained by Mysak, Stocker and Huang in a previous numerical study in which the eddy diffusivities were varied in a randomly forced one-basin zonally averaged model. Resonant stable oscillat...
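
    The solve-directly-then-linearize workflow described here is easy to prototype. The sketch below uses a Stommel-type two-box system as a hypothetical stand-in for the zonally averaged model (the box equations and parameter values are invented, chosen only to show the method): find equilibria with a root solver rather than time stepping, then classify each by the eigenvalues of a finite-difference Jacobian.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    # A Stommel-type two-box thermohaline system, state x = [T, S].
    delta, R = 1.0 / 6.0, 2.0       # hypothetical relaxation/density parameters

    def rhs(x):
        T, S = x
        psi = abs(T - R * S)        # overturning strength
        return np.array([1.0 - T - psi * T,
                         delta * (1.0 - S) - psi * S])

    def jacobian(x, h=1e-7):
        """Finite-difference Jacobian; |.| makes symbolic work awkward."""
        J = np.zeros((2, 2))
        for j in range(2):
            e = np.zeros(2); e[j] = h
            J[:, j] = (rhs(x + e) - rhs(x - e)) / (2.0 * h)
        return J

    # Solve the steady-state equations directly (no time stepping), then
    # classify each equilibrium by the eigenvalues of its Jacobian.
    for guess in ([0.5, 0.5], [1.0, 0.2], [0.2, 1.0]):
        x_star = fsolve(rhs, guess)
        eigvals = np.linalg.eigvals(jacobian(x_star))
        print(x_star, "stable" if np.all(eigvals.real < 0) else "unstable")
    ```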

  14. Improved Bayesian multimodeling: Integration of copulas and Bayesian model averaging

    Science.gov (United States)

    Madadgar, Shahrbanou; Moradkhani, Hamid

    2014-12-01

    Bayesian model averaging (BMA) is a popular approach to combining hydrologic forecasts from individual models and characterizing the uncertainty induced by model structure. In the original form of BMA, the conditional probability density function (PDF) of each model is assumed to follow a particular probability distribution (e.g., Gaussian, gamma, etc.). If the predictions of a hydrologic model do not follow a certain distribution, a data transformation procedure is required prior to model averaging. Moreover, it is strongly recommended to apply BMA to unbiased forecasts, whereas it is sometimes difficult to effectively remove bias from the predictions of complex hydrologic models. To overcome these limitations, we develop an approach that integrates a group of multivariate functions, the so-called copula functions, into BMA. Here we introduce a copula-embedded BMA (Cop-BMA) method that relaxes any assumption on the shape of the conditional PDFs. Copula functions have a flexible structure and do not restrict the shape of posterior distributions. Furthermore, copulas are effective tools for removing bias from hydrologic forecasts. To compare the performance of BMA with Cop-BMA, both are applied to hydrologic forecasts from different rainfall-runoff and land-surface models. We consider the streamflow observations and simulations for 10 river basins provided by the Model Parameter Estimation Experiment (MOPEX) project. Results demonstrate that the predictive distributions after Cop-BMA application are more accurate and reliable, less biased, and more confident, with smaller uncertainty. It is also shown that the postprocessed forecasts correlate better with observations after Cop-BMA application.

  15. Evaluation of a Sub-Grid Topographic Drag Parameterizations for Modeling Surface Wind Speed During Storms Over Complex Terrain in the Northeast U.S.

    Science.gov (United States)

    Frediani, M. E.; Hacker, J.; Anagnostou, E. N.; Hopson, T. M.

    2015-12-01

    This study aims at improving regional simulation of 10-meter wind speed by verifying PBL schemes for storms at different scales, including convective storms, blizzards, tropical storms and nor'easters over complex terrain in the northeast U.S. We verify a recently proposed sub-grid topographic drag scheme in stormy conditions and compare it with two PBL schemes (Mellor-Yamada and Yonsei University) from WRF-ARW over a region in the Northeast U.S. The scheme was designed to adjust the surface drag over regions with high subgrid-scale topographic variability. The schemes are compared using spatial, temporal, and pattern criteria against surface observations. The spatial and temporal criteria are defined by season, diurnal cycle, and topography; the pattern criterion is based on clusters derived using cluster analysis. Results show that the drag scheme reduces the positive bias of low wind speeds, but over-corrects the high wind speeds, producing a negative bias that grows in magnitude with increasing speed. Both other schemes underestimate the most frequent low-speed mode and overestimate the high speeds. The error characteristics of all schemes respond to seasonal and diurnal-cycle changes. The Topo-wind experiment shows the best agreement with the observation quantiles in summer and fall, the best representation of the diurnal cycle in these seasons, and a reduced bias at all surface stations near the coast. In more stable conditions the Topo-wind scheme shows a larger negative bias. The cluster analysis reveals a correlation between bias and mean speed in the Mellor-Yamada and Yonsei University schemes that is not present when the drag scheme is used. With the drag scheme the bias instead correlates with wind direction; the bias increases when the meridional wind component is negative. This pattern corresponds to trajectories with more land interaction, with the highest biases found in northwest-circulation clusters.

  16. Application Bayesian Model Averaging method for ensemble system for Poland

    Science.gov (United States)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

    The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model configurations, each with 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The key point is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach: the BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test we chose a case with a heat wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated below and above 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding and damage to transport infrastructure, destroyed buildings and trees, and led to injuries and direct threats to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations in Poland. We prepare a set of model-observation pairs; the data from single ensemble members and the median from the WRF BMA system are then evaluated using the deterministic error statistics root mean square error (RMSE) and mean absolute error (MAE).
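
    The two deterministic scores used for the evaluation are one-liners; a minimal sketch with invented station values (the numbers are illustrative, not from the study):

    ```python
    import numpy as np

    def rmse(obs, pred):
        """Root mean square error between observations and predictions."""
        return np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))

    def mae(obs, pred):
        """Mean absolute error between observations and predictions."""
        return np.mean(np.abs(np.asarray(pred) - np.asarray(obs)))

    obs = np.array([28.1, 30.4, 31.2, 29.8])         # station temperatures [deg C]
    bma_median = np.array([27.5, 30.9, 30.6, 30.2])  # calibrated ensemble median
    print(rmse(obs, bma_median), mae(obs, bma_median))
    ```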

  17. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, predictions of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might nevertheless be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions, and it will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data

  18. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model streamflow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
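
    For the Gaussian case, the EM training loop referred to above alternates between computing member responsibilities and re-estimating the weights and a common variance. A compact sketch on synthetic data (the data, and the choice of a single shared variance, are simplifying assumptions; the full method also includes member-specific bias correction):

    ```python
    import numpy as np
    from scipy.stats import norm

    def bma_em(y, F, iters=200):
        """EM for Gaussian BMA weights and variance (illustrative sketch).

        y : (n,) observations; F : (n, K) bias-corrected member forecasts.
        Returns model weights w (K,) and a common predictive variance s2.
        """
        n, K = F.shape
        w = np.full(K, 1.0 / K)
        s2 = np.var(y - F.mean(axis=1))
        for _ in range(iters):
            # E-step: responsibility of member k for observation i
            dens = norm.pdf(y[:, None], loc=F, scale=np.sqrt(s2))
            z = w * dens
            z /= z.sum(axis=1, keepdims=True)
            # M-step: update weights and the shared variance
            w = z.mean(axis=0)
            s2 = np.sum(z * (y[:, None] - F) ** 2) / n
        return w, s2

    rng = np.random.default_rng(1)
    truth = rng.normal(20.0, 3.0, size=500)
    F = np.column_stack([truth + rng.normal(0, 1.0, 500),   # skillful member
                         truth + rng.normal(0, 2.5, 500),   # noisier member
                         rng.normal(20.0, 3.0, 500)])       # uninformative member
    w, s2 = bma_em(truth, F)
    print(w, s2)   # most weight should land on the first member
    ```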

  19. Operational forecasting with the subgrid technique on the Elbe Estuary

    Science.gov (United States)

    Sehili, Aissa

    2017-04-01

    Modern remote sensing technologies can deliver very detailed land surface height data that should be considered for more accurate simulations. In that case, and even if some compromise is made with regard to the grid resolution of an unstructured grid, simulations will still require large grids, which can be computationally very demanding. The subgrid technique, first published by Casulli (2009), is based on the idea of making use of the available detailed subgrid bathymetric information while performing computations on relatively coarse grids permitting large time steps. Consequently, accuracy and efficiency are drastically enhanced compared to the classical linear method, where the underlying bathymetry is solely discretized by the computational grid. The algorithm guarantees rigorous mass conservation and nonnegative water depths for any time step size. Computational grid cells are permitted to be wet, partially wet or dry, and no drying threshold is needed. The subgrid technique is used in an operational forecast model for water level, current velocity, salinity and temperature of the Elbe estuary in Germany. Comparison is performed with the comparatively highly resolved classical unstructured grid model UnTRIM. The daily meteorological forcing data are delivered by the German Weather Service (DWD) using the ICON-EU model. Open boundary data are delivered by the coastal model BSHcmod of the German Federal Maritime and Hydrographic Agency (BSH). Comparison of predicted water levels between the classical and subgrid models shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out within less than 10 minutes on standard PC-like hardware. The model is capable of permanently delivering highly resolved temporal and spatial information on water level, current velocity, salinity and temperature for the whole estuary. The model also offers the possibility to
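
    The heart of the subgrid technique is the nonlinear volume-level relation evaluated on the fine bathymetry while momentum is solved on the coarse grid. A minimal sketch of that bookkeeping for a single coarse cell, with synthetic bed elevations (the real scheme also builds face conveyances from the subgrid data in a similar way):

    ```python
    import numpy as np

    def cell_volume_and_wet_area(eta, subgrid_z, pixel_area):
        """Volume and wet area of one coarse cell from subgrid bathymetry.

        Depths are nonnegative by construction, so cells may be wet,
        partially wet, or dry with no drying threshold, and summing exact
        pixel volumes keeps the scheme rigorously mass conservative.
        """
        depth = np.maximum(0.0, eta - subgrid_z)     # per-pixel water depth
        volume = depth.sum() * pixel_area
        wet_area = np.count_nonzero(depth) * pixel_area
        return volume, wet_area

    # A coarse cell covering 10 x 10 subgrid pixels of 2 m x 2 m:
    rng = np.random.default_rng(2)
    z_sub = rng.uniform(-2.0, 1.0, size=(10, 10))    # detailed bed elevations [m]
    for eta in (-1.0, 0.0, 0.5):                     # three water levels
        print(eta, cell_volume_and_wet_area(eta, z_sub, pixel_area=4.0))
    ```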

  20. Average waiting time profiles of uniform DQDB model

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V. [Oak Ridge National Lab., TN (United States); Maly, K.; Olariu, S.; Dharanikota, S.; Zhang, L.; Game, D. [Old Dominion Univ., Norfolk, VA (United States). Dept. of Computer Science

    1993-09-07

    The Distributed Queue Dual Bus (DQDB) system consists of a linear arrangement of N nodes that communicate with each other using two contra-flowing buses; the nodes use an extremely simple protocol to send messages on these buses. This simple, but elegant, system has been found to be very challenging to analyze. We consider a simple and uniform abstraction of this model to highlight the fairness issues in terms of average waiting time. We introduce a new approximation method to analyze the performance of the DQDB system in terms of the average waiting time of a node expressed as a function of its position. Our approach abstracts the intimate relationship between the load of the system and its fairness characteristics, and explains all basic behavior profiles of DQDB observed in previous simulations. For the uniform DQDB with equal distance between adjacent nodes, we show that the system operates under three basic behavior profiles and a finite number of their combinations that depend on the load of the network. Consequently, the system is not fair at any load in terms of the average waiting times. In the vicinity of a critical load of 1 − 4/N, the uniform network runs into a state akin to chaos, where its behavior fluctuates from one extreme to the other with a load variation of 2/N. Our analysis is supported by simulation results. We also show that the main theme of the analysis carries over to the general (non-uniform) DQDB; by suitably choosing the inter-node distances, the DQDB can be made fair around some loads, but such a system will become unfair as the load changes.

  1. The dynamics of multimodal integration: The averaging diffusion model.

    Science.gov (United States)

    Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; McClelland, James L

    2017-12-01

    We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a base for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
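
    The model's defining change is in the decision variable: a running mean of the evidence samples instead of their running sum. A small sketch making that contrast explicit on a single simulated evidence stream (drift, noise level, and step size are arbitrary illustration values):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def accumulate(drift, T=500, dt=0.01, noise=1.0):
        """Contrast a summing accumulator (classic diffusion) with the
        Averaging Diffusion Model's running-mean decision variable on the
        same noisy evidence stream (illustrative sketch)."""
        samples = drift * dt + noise * np.sqrt(dt) * rng.normal(size=T)
        summed = np.cumsum(samples)                # sum of evidence so far
        averaged = summed / np.arange(1, T + 1)    # mean of evidence so far
        return summed, averaged

    summed, averaged = accumulate(drift=0.8)
    # The sum keeps drifting with time, while the average settles toward
    # the mean evidence per sample, which changes how a fixed decision
    # bound behaves as time passes.
    print(summed[-1], averaged[-1])
    ```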

  2. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.

    2015-08-01

    Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree, with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, the growth rate of mesenchyme cells, or the cell population exit rates). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.
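
    The branching rule is simple enough to sketch directly: exponential growth of the two populations, a doubling of tip number whenever cells-per-tip crosses a threshold, and termination when the mesenchyme falls below a critical size. All rates and thresholds below are hypothetical, not the paper's fitted values:

    ```python
    def simulate_branching(r_tip=0.8, r_mes=-0.35, n_thresh=120.0,
                           m_crit=200.0, dt=0.01, t_max=30.0):
        """Spatially averaged branching sketch (illustrative parameters).

        Tip cells grow exponentially, branching is symmetric and fires when
        the number of cells per tip reaches a threshold, and branching
        ceases once the mesenchymal population drops below a critical value.
        """
        tips, tip_cells, mes_cells = 1, 100.0, 5000.0
        t = 0.0
        while t < t_max and mes_cells > m_crit:
            tip_cells += dt * r_tip * tip_cells   # epithelial tip growth
            mes_cells += dt * r_mes * mes_cells   # mesenchyme is depleted
            if tip_cells / tips >= n_thresh:
                tips *= 2                         # symmetric branching event
            t += dt
        return tips, mes_cells

    print(simulate_branching())   # final branch count and remaining mesenchyme
    ```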

  3. Incorporation of fragmentation into a volume average solidification model

    Science.gov (United States)

    Zheng, Y.; Wu, M.; Kharicha, A.; Ludwig, A.

    2018-01-01

    In this study, a volume-average solidification model was extended to consider fragmentation as a source of equiaxed crystals during mixed columnar-equiaxed solidification. The formulation suggested for fragmentation rests on two hypotheses: solute-driven remelting is the dominant mechanism, and the transport of solute-enriched melt by interdendritic flow in the columnar growth direction favors solute-driven remelting and is the necessary condition for fragment transport. A test case with Sn-10 wt%Pb melt solidifying vertically downward in a 2D domain (50 × 60 mm²) was calculated to demonstrate the model's features. Solidification started from the top boundary, and a columnar structure developed initially with its tip growing downward. Thermo-solutal convection then led to fragmentation in the mushy zone near the columnar tip front. The fragments transported out of the columnar region continued to grow and sink, and finally settled and piled up at the bottom of the domain. The columnar structure growing from the top and the pile-up of equiaxed crystals from the bottom finally produced a mixed columnar-equiaxed structure and, in turn, a columnar-to-equiaxed transition (CET). A distinctive macrosegregation pattern was also predicted, in which negative segregation occurred in both the columnar and equiaxed regions and relatively strong positive segregation occurred in the middle of the domain near the CET line. A parameter study was performed to verify the model capability, and the uncertainty of the model assumptions and parameters was discussed.

  4. Model characteristics of average skill boxers’ competition functioning

    Directory of Open Access Journals (Sweden)

    Martsiv V.P.

    2015-08-01

    Purpose: analysis of the competition functioning of average-skill boxers. Material: 28 fights of student boxers were analyzed. The following coefficients were determined: effectiveness of punches and reliability of defense. The fights were conducted in the format of 3 rounds of 3 minutes each. Results: model characteristics of boxers at the stage of specialized basic training were worked out, and correlations between indicators of specialized and general exercises were determined. It was established that boxers' growing skill manifests as an increase in punch density in a fight, and that an increase in the coefficient of punch effectiveness leads to an expanded arsenal of technical-tactical actions. The importance of accounting for standard specialized loads was confirmed. Conclusions: we recommend training means to be applied at this stage of preparation. On the basis of our previous research we make recommendations on the complex assessment of student sportsmen's skill and show approaches to improving different aspects of sportsmen's fitness.

  5. An averaged polarizable potential for multiscale modeling in phospholipid membranes

    DEFF Research Database (Denmark)

    Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard

    2017-01-01

    A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...

  6. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.

  8. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves...

  9. Corporate Average Fuel Economy Compliance and Effects Modeling System Documentation

    Science.gov (United States)

    2009-04-01

    The Volpe National Transportation Systems Center (Volpe Center) of the United States Department of Transportation's Research and Innovative Technology Administration has developed a modeling system to assist the National Highway Traffic Safety Admini...

  10. Model averaging and dimension selection for the singular value decomposition

    OpenAIRE

    Hoff, Peter D.

    2006-01-01

    Many multivariate data analysis techniques for an m × n matrix Y are related to the model Y = M + E, where Y is an m × n matrix of full rank and M is an unobserved mean matrix of rank K < min(m, n). Typically the rank of M is estimated in a heuristic way and then the least-squares estimate of M is obtained via the singular value decomposition of Y, yielding an estimate that can have a very high variance. In this paper we suggest a model-b...
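
    The heuristic baseline criticized here is easy to state in code: fix a rank K, then take the rank-K truncated SVD of Y as the least-squares estimate of M. The sketch below shows only that baseline on synthetic data; the paper's contribution, model-based averaging over ranks, is not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Model Y = M + E with a rank-K mean matrix M plus Gaussian noise.
    m, n, K = 50, 20, 3
    M = rng.normal(size=(m, K)) @ rng.normal(size=(K, n))
    Y = M + 0.5 * rng.normal(size=(m, n))

    # Heuristic estimate: truncated SVD of Y at the chosen rank K.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    M_hat = (U[:, :K] * s[:K]) @ Vt[:K, :]

    # Relative reconstruction error of the rank-K estimate:
    print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))
    ```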

  11. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    Science.gov (United States)

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).
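
    A quick numeric check of the claim, using the mean M/M/1 waiting time as a smooth outcome and a symmetric ±ε perturbation of the service rate as the heterogeneity (the queueing example is our illustration, not taken from the paper): halving ε should quarter the gap to the homogeneous model.

    ```python
    import numpy as np

    def mm1_wait(lam, mu):
        """Mean waiting time in queue for an M/M/1 system (requires lam < mu)."""
        return lam / (mu * (mu - lam))

    lam, mu = 0.8, 1.0
    for eps in (0.1, 0.05, 0.025):
        # Symmetric heterogeneity: half the servers slightly fast, half slow.
        hetero = 0.5 * (mm1_wait(lam, mu * (1 + eps)) +
                        mm1_wait(lam, mu * (1 - eps)))
        homo = mm1_wait(lam, mu)
        print(eps, hetero - homo)   # gap shrinks ~4x per halving: O(eps^2)
    ```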

  13. AC Small Signal Modeling of PWM Y-Source Converter by Circuit Averaging and Averaged Switch Modeling Technique

    DEFF Research Database (Denmark)

    Forouzesh, Mojtaba; Siwakoti, Yam Prasad; Blaabjerg, Frede

    2016-01-01

    The magnetically coupled Y-source impedance network is a newly proposed structure with versatile features intended for various power converter applications, e.g. in renewable energy technologies. The voltage gain of the Y-source impedance network rises exponentially as a function of the turns ratio, which is inherited from a special coupled inductor with three windings. Due to the importance of modeling in the converter design procedure, this paper is dedicated to dc and ac small-signal modeling of the PWM Y-source converter. The derived transfer functions are presented in detail and have been...

  14. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). A MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within a MMF model. This involved interfacing CLUBB’s clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide a MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  15. The Sensitivity of Simulated Competition Between Different Plant Functional Types to Subgrid Scale Representation of Vegetation in a Land Surface Model

    Science.gov (United States)

    Shrestha, R. K.; Arora, V.; Melton, J. R.

    2014-12-01

    Vegetation is a dynamic component of the earth system that affects weather and climate on hourly to centennial time scales. However, most current dynamic vegetation models do not explicitly simulate competition among Plant Functional Types (PFTs). Here we use the coupled CLASS-CTEM model (Canadian Land Surface Scheme coupled to the Canadian Terrestrial Ecosystem Model) to explicitly simulate competition among nine PFTs for available space using a modified version of the Lotka-Volterra (LV) predator-prey equations. The nine PFTs comprise evergreen and deciduous needleleaf trees; evergreen, cold deciduous and drought deciduous broadleaf trees; and C3 and C4 crops and grasses. The CLASS-CTEM model can be configured either in composite (single-tile) or mosaic (multiple-tile) mode. Our results show that the model is sensitive to the chosen mode: the simulated fractional coverages of PFTs are similar between the two approaches at some locations, whereas at other locations the two approaches yield different results. The simulated fractional coverages of PFTs are also compared with available observation-based estimates. Simulated results at selected locations across the globe show that the model realistically simulates the fractional coverage of tree and grass PFTs and the bare fraction, as well as the fractional coverages of individual tree and grass PFTs. Along with the observed patterns of vegetation distribution, the CLASS-CTEM modelling framework is also able to simulate realistic succession patterns. Some differences remain, and these are attributed to the coarse spatial resolution of the model (~3.75°) and the limited number of PFTs represented.
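
    A Lotka-Volterra-style competition for fractional coverage can be sketched in a few lines: each PFT colonizes bare ground plus the space held by less dominant PFTs, and loses area to mortality. The dominance ordering, rates and time step below are invented for illustration and are not CLASS-CTEM's actual formulation:

    ```python
    import numpy as np

    def step_coverage(f, colonization, mortality, dt=0.01):
        """One explicit step of LV-style competition for fractional coverage.

        f : (n,) fractional coverage of each PFT, ordered by dominance
        (index 0 is most dominant and can invade all weaker PFTs' space).
        """
        bare = 1.0 - f.sum()                       # unoccupied space
        df = np.empty_like(f)
        for i in range(len(f)):
            invadable = bare + f[i + 1:].sum()     # bare ground + weaker PFTs
            df[i] = colonization[i] * f[i] * invadable - mortality[i] * f[i]
        return np.clip(f + dt * df, 0.0, 1.0)

    f = np.array([0.05, 0.05, 0.05])               # e.g. tree, grass, crop
    col = np.array([0.4, 0.6, 0.3])                # colonization rates [1/yr]
    mor = np.array([0.05, 0.15, 0.10])             # mortality rates [1/yr]
    for _ in range(5000):
        f = step_coverage(f, col, mor)
    print(f, 1.0 - f.sum())                        # coverages and bare fraction
    ```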

  16. Collaborative Project: High-resolution Global Modeling of the Effects of Subgrid-Scale Clouds and Turbulence on Precipitating Cloud Systems

    Energy Technology Data Exchange (ETDEWEB)

    Randall, David A. [Colorado State Univ., Fort Collins, CO (United States). Dept. of Atmospheric Science

    2015-11-01

    We proposed to implement, test, and evaluate recently developed turbulence parameterizations, using a wide variety of methods and modeling frameworks together with observations including ARM data. We have successfully tested three different turbulence parameterizations in versions of the Community Atmosphere Model: CLUBB, SHOC, and IPHOC. All three produce significant improvements in the simulated climate. CLUBB will be used in CAM6, and also in ACME. SHOC is being tested in the NCEP forecast model. In addition, we have achieved a better understanding of the strengths and limitations of the PDF-based parameterizations of turbulence and convection.

  17. Evaluation of Subgrid-Scale Transport of Hydrometeors in a PDF-based Scheme using High-Resolution CRM Simulations

    Science.gov (United States)

    Wong, M.; Ovchinnikov, M.; Wang, M.; Larson, V. E.

    2014-12-01

    In current climate models, the model resolution is too coarse to explicitly resolve deep convective systems. Parameterization schemes are therefore needed to represent the physical processes at the sub-grid scale. Recently, an approach based on assumed probability density functions (PDFs) has been developed to help unify the various parameterization schemes used in current global models. In particular, a unified parameterization scheme called the Cloud Layers Unified By Binormals (CLUBB) scheme has been developed and tested successfully for shallow boundary-layer clouds. CLUBB's implementation in the Community Atmosphere Model, version 5 (CAM5) is also being extended to treat deep convection cases, but parameterizing the subgrid-scale vertical transport of hydrometeors remains a challenge. To investigate the roots of the problem and possible solutions, we generate a high-resolution benchmark simulation of a deep convection case using a cloud-resolving model (CRM) called the System for Atmospheric Modeling (SAM). We use the high-resolution 3D CRM results to assess the prognostic and diagnostic higher-order moments in CLUBB that relate to the subgrid-scale transport of hydrometeors. We also analyze the heat and moisture budgets in terms of CLUBB variables from the SAM benchmark simulation. The results from this study will be used to devise a better representation of the vertical subgrid-scale transport of hydrometeors that utilizes the subgrid variability information from CLUBB.

  18. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    Science.gov (United States)

    The tool provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are derived.

  19. Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation

    Science.gov (United States)

    Tsai, Frank T.-C.; Elshall, Ahmed S.

    2013-09-01

    Analysts are often faced with competing propositions for each uncertain model component. How can we judge that we have selected the correct proposition(s) for an uncertain model component out of numerous possibilities? We introduce the hierarchical Bayesian model averaging (HBMA) method as a multimodel framework for uncertainty analysis. The HBMA allows for segregating, prioritizing, and evaluating different sources of uncertainty and their corresponding competing propositions through a hierarchy of BMA models that forms a BMA tree. We apply the HBMA to conduct uncertainty analysis on the reconstructed hydrostratigraphic architectures of the Baton Rouge aquifer-fault system, Louisiana. Due to uncertainty in model data, structure, and parameters, multiple possible hydrostratigraphic models are produced and calibrated as base models. The study considers four sources of uncertainty. With respect to data uncertainty, the study considers two calibration data sets. With respect to model structure, the study considers three different variogram models, two geological stationarity assumptions and two fault conceptualizations. The base models are produced following a combinatorial design to allow for uncertainty segregation. These four uncertain model components with their corresponding competing model propositions thus result in 24 base models. The results show that the systematic dissection of the uncertain model components, along with their corresponding competing propositions, allows for detecting the robust model propositions and the major sources of uncertainty.
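
    The combinatorial design and the first level of the BMA tree can be sketched compactly: enumerate all combinations of propositions, assign each base model a posterior probability, then score each proposition by summing the posteriors of the base models that use it. The BIC-based posterior weights below are a common choice but an assumption here, and the scores are random placeholders:

    ```python
    import itertools
    import numpy as np

    # Combinatorial design of base models, mirroring the abstract's counts:
    # 2 data sets x 3 variograms x 2 stationarity assumptions x 2 faults = 24.
    components = {"data": ["D1", "D2"],
                  "variogram": ["V1", "V2", "V3"],
                  "stationarity": ["S1", "S2"],
                  "fault": ["F1", "F2"]}
    base_models = list(itertools.product(*components.values()))

    rng = np.random.default_rng(5)
    bic = rng.normal(100.0, 4.0, size=len(base_models))  # stand-in BIC scores
    p = np.exp(-0.5 * (bic - bic.min()))
    p /= p.sum()                                         # base-model posteriors

    # One level up the BMA tree: a proposition's probability is the summed
    # posterior of all base models built with that proposition.
    for k, comp in enumerate(components):
        probs = {prop: p[[i for i, m in enumerate(base_models)
                          if m[k] == prop]].sum()
                 for prop in components[comp]}
        print(comp, probs)
    ```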

  20. Average elasticity in the framework of the fixed effects logit model

    OpenAIRE

    Yoshitsugu Kitazawa

    2011-01-01

    This note proposes the average elasticity of the logit probabilities with respect to the exponential functions of the explanatory variables in the framework of the fixed effects logit model. The average elasticity can be calculated using the consistent estimators of the parameters of interest and the average of the binary dependent variables, regardless of the fixed effects.

  1. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  2. Monte Carlo-based subgrid parameterization of vertical velocity and stratiform cloud microphysics in ECHAM5.5-HAM2

    Directory of Open Access Journals (Sweden)

    J. Tonttila

    2013-08-01

    A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description of vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces stronger autoconversion of cloud water to rain. The strong reduction in CDNC and cloud water content over continental areas leads to weaker shortwave cloud radiative effects (SW CREs) even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen, e.g., over mid-latitude oceans, where the CDNC remains similar to the reference simulation and the in-cloud liquid water content is slightly increased after retuning.

  3. Online Prediction under Model Uncertainty Via Dynamic Model Averaging: Application to a Cold Rolling Mill

    Science.gov (United States)

    2007-12-14


  4. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  5. Multi-objective calibration of forecast ensembles using Bayesian model averaging

    NARCIS (Netherlands)

    Vrugt, J.A.; Clark, M.P.; Diks, C.G.H.; Duan, Q.; Robinson, B.A.

    2006-01-01

    Bayesian Model Averaging (BMA) has recently been proposed as a method for statistical postprocessing of forecast ensembles from numerical weather prediction models. The BMA predictive probability density function (PDF) of any weather quantity of interest is a weighted average of PDFs centered on the
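    As a minimal illustration of the BMA construction described above (the predictive PDF as a weighted average of member PDFs), the following Python sketch evaluates a Gaussian-kernel BMA mixture. The weights, member forecasts, and spread parameter are hypothetical; in practice they would be estimated, e.g. by maximum likelihood.

      from scipy.stats import norm

      def bma_pdf(y, member_forecasts, weights, sigma):
          # Weighted mixture of Gaussian kernels centred on the
          # (bias-corrected) ensemble member forecasts.
          return sum(w * norm.pdf(y, loc=f, scale=sigma)
                     for w, f in zip(weights, member_forecasts))

      forecasts = [21.3, 22.1, 20.7]   # hypothetical member forecasts
      weights = [0.5, 0.3, 0.2]        # BMA weights, must sum to 1
      print(bma_pdf(21.0, forecasts, weights, sigma=1.2))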

  6. An Approach to Average Modeling and Simulation of Switch-Mode Systems

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of average modeling of PWM switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The paper discusses the derivation of PSPICE/ORCAD-compatible average models of the switch-mode power stages, their software implementation, and…

  7. Impacts of model averaging techniques on CMIP5 precipitation evaluation and projection

    Science.gov (United States)

    Xu, Yue-Ping; Zhu, Qian; Hsu, Kuo-lin

    2017-04-01

    Reliable precipitation projections are essential for informing policy decisions on climate change adaptation. Due to large uncertainty in GCM structure or initial conditions, multi-model ensembles are gaining popularity for investigating the impacts of climate change. However, how many models should be used to generate the ensemble, and the uncertainty arising from different model averaging techniques, have seldom been investigated. The first aim of this study is to assess the performance of 22 CMIP5 models in terms of three statistical indices, i.e. the root-mean-square error, correlation coefficient and relative bias. Second, the number of models needed for the ensemble is investigated using three different model averaging techniques, namely Bates-Granger averaging (BGA), Bayesian Model Averaging (BMA) and equal weight averaging (EWA). Thirdly, future annual and seasonal precipitation projections from the multi-model ensembles with different model averaging techniques and from individual CMIP5 models are compared. The Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR) is used as the reference dataset. The final results show that the best five models are generally enough to produce the ensemble. The root-mean-square error, relative bias and correlation coefficient of the ensembles with BGA, BMA and EWA improve considerably compared with the best individual model. The ensembles can reduce uncertainty from GCMs for tendency detection and quantity projection of annual precipitation in the future under RCP2.6, RCP4.5 and RCP8.5. The seasonal projections from multi-model ensembles will generally increase from values below the 25th percentile of the projections from the 22 individual CMIP5 models to values between the mean and the 75th percentile of the individual projections. Uncertainty can arise from different model averaging techniques. Keywords: model averaging techniques; precipitation; PERSIANN-CDR; climate change; CMIP
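    For concreteness, here is a small sketch of two of the three weighting schemes named above, under their common textbook definitions: Bates-Granger weights proportional to the inverse mean-squared error of each model, and equal weights. The error matrix below is synthetic, not from the study.

      import numpy as np

      def bates_granger_weights(errors):
          # errors: (n_time, n_models) array of historical forecast errors.
          inv_mse = 1.0 / np.mean(errors ** 2, axis=0)
          return inv_mse / inv_mse.sum()

      rng = np.random.default_rng(0)
      errors = rng.normal(size=(120, 5)) * np.array([1.0, 1.2, 0.8, 1.5, 1.1])
      w_bga = bates_granger_weights(errors)   # skill-dependent weights
      w_ewa = np.full(5, 1.0 / 5.0)           # equal weight averaging
      print(w_bga.round(3), w_ewa)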

  8. Identification of periodic autoregressive moving average models and their application to the modeling of river flows

    Science.gov (United States)

    Tesfaye, Yonas Gebeyehu; Meerschaert, Mark M.; Anderson, Paul L.

    2006-01-01

    The generation of synthetic river flow samples that can reproduce the essential statistical features of historical river flows is useful for the planning, design, and operation of water resource systems. Most river flow series are periodically stationary; that is, their mean and covariance functions are periodic with respect to time. This article develops model identification and simulation techniques based on a periodic autoregressive moving average (PARMA) model to capture the seasonal variations in river flow statistics. The innovations algorithm is used to obtain parameter estimates. An application to monthly flow data for the Fraser River in British Columbia is included. A careful statistical analysis of the PARMA model residuals, including a truncated Pareto model for the extreme tails, produces a realistic simulation of these river flows.
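    The periodic stationarity described above can be made concrete with a toy periodic AR(1) simulation, in which the mean, AR coefficient, and innovation scale all vary with the month. The parameter values below are invented for illustration; the paper estimates PARMA parameters with the innovations algorithm.

      import numpy as np

      def simulate_par1(mu, phi, sigma, n_years, seed=0):
          # mu, phi, sigma: length-12 arrays of month-specific parameters.
          rng = np.random.default_rng(seed)
          x = np.empty(12 * n_years)
          x[0] = mu[0]
          for t in range(1, x.size):
              m = t % 12
              x[t] = (mu[m] + phi[m] * (x[t - 1] - mu[(t - 1) % 12])
                      + sigma[m] * rng.standard_normal())
          return x

      months = np.arange(12)
      flows = simulate_par1(mu=5 + np.sin(2 * np.pi * months / 12),
                            phi=np.full(12, 0.6),
                            sigma=np.linspace(0.5, 1.5, 12),
                            n_years=50)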

  9. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
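    A schematic of the two quantities at the heart of the iterative two-stage idea, a serially correlated total-error covariance estimated from calibration residuals and the negative log likelihood evaluated with it, is sketched below. The AR(1) correlation structure is an illustrative stand-in for the time series techniques the authors use; the calibration loop itself is omitted.

      import numpy as np

      def ar1_total_error_covariance(resid):
          # Fit an AR(1) correlation to serial residuals (illustrative
          # stand-in for the paper's time series techniques).
          r = resid - resid.mean()
          rho = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)
          lags = np.abs(np.subtract.outer(np.arange(r.size), np.arange(r.size)))
          return r.var() * rho ** lags

      def neg_log_likelihood(resid, cov):
          # Negative log likelihood entering the model selection criteria.
          _, logdet = np.linalg.slogdet(cov)
          return 0.5 * (logdet + resid @ np.linalg.solve(cov, resid)
                        + resid.size * np.log(2.0 * np.pi))

      resid = (np.sin(np.linspace(0, 6, 50))
               + np.random.default_rng(1).normal(0, 0.1, 50))
      C_ek = ar1_total_error_covariance(resid)
      print(neg_log_likelihood(resid, C_ek))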

  10. Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data.

    Science.gov (United States)

    Shao, Kan; Gift, Jeffrey S

    2014-01-01

    The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method taking model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the "hybrid" method proposed by Crump, two strategies of BMA, including both "maximum likelihood estimation based" and "Markov Chain Monte Carlo based" methods, are first applied as a demonstration to calculate model averaged BMD estimates from real continuous dose-response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates have higher reliability than the estimates from the individual models with highest posterior weight in terms of higher BMDL and smaller 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study indicate that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose-response data. © 2013 Society for Risk Analysis.

  11. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertising effectiveness

    OpenAIRE

    Diedrichs, P. C.; Lee, C.

    2010-01-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women’s body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men’s and women’s body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large model...

  12. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  13. Discontinuous Galerkin Subgrid Finite Element Method for Heterogeneous Brinkman’s Equations

    KAUST Repository

    Iliev, Oleg P.

    2010-01-01

    We present a two-scale finite element method for solving Brinkman's equations with piece-wise constant coefficients. This system of equations models fluid flows in highly porous, heterogeneous media with complex topology of the heterogeneities. We make use of the recently proposed discontinuous Galerkin FEM for Stokes equations by Wang and Ye in [12] and the concept of subgrid approximation developed for Darcy's equations by Arbogast in [4]. In order to reduce the error along the coarse-grid interfaces we have added an alternating Schwarz iteration using patches around the coarse-grid boundaries. We have implemented the subgrid method using the Deal.II FEM library, [7], and we present the computational results for a number of model problems. © 2010 Springer-Verlag Berlin Heidelberg.

  14. Estimation and Forecasting in Vector Autoregressive Moving Average Models for Rich Datasets

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Kapetanios, George

    We address the issue of modelling and forecasting macroeconomic variables using rich datasets, by adopting the class of Vector Autoregressive Moving Average (VARMA) models. We overcome the estimation issue that arises with this class of models by implementing an iterative ordinary least squares...

  15. Multi-Model Grand Ensemble Hydrologic Forecasting in the Fu River Basin Using Bayesian Model Averaging

    Directory of Open Access Journals (Sweden)

    Bo Qu

    2017-01-01

    Full Text Available Statistical post-processing for multi-model grand ensemble (GE) hydrologic predictions is necessary, in order to achieve more accurate and reliable probabilistic forecasts. This paper presents a case study which applies Bayesian model averaging (BMA) to statistically post-process raw GE runoff forecasts in the Fu River basin in China, at lead times ranging from 6 to 120 h. The raw forecasts were generated by running the Xinanjiang hydrologic model with ensemble forecasts (164 forecast members), using seven different “THORPEX Interactive Grand Global Ensemble” (TIGGE) weather centres as forcing inputs. Some measures, such as data transformation and high-dimensional optimization, were included in the experiment after considering the practical water regime and data conditions. The results indicate that the BMA post-processing method is capable of improving the performance of raw GE runoff forecasts, yielding more calibrated and sharp predictive probability density functions (PDFs), over a range of lead times from 24 to 120 h. The analysis of percentile forecasts in two different flood events illustrates the great potential and prospects of BMA GE probabilistic river discharge forecasts, for taking precautions against severe flooding events.

  16. Analysis of litter size and average litter weight in pigs using a recursive model

    DEFF Research Database (Denmark)

    Varona, Luis; Sorensen, Daniel; Thompson, Robin

    2007-01-01

    An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size, on average piglet weight. The equivalent standard mixed model tests whether or not the covariance matrices of the random effects have a diagonal structure. In Landrace, posterior predictive model checking supports a model without any form of recursion or, alternatively, a SMM with diagonal covariance

  17. An averaging battery model for a lead-acid battery operating in an electric car

    Science.gov (United States)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead-acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.

  18. Importance of subgrid-scale parameterization in numerical simulations of lake circulation

    Science.gov (United States)

    Wang, Yongqi

    Two subgrid-scale modeling techniques--Smagorinsky's postulation for the horizontal eddy viscosity and the Mellor-Yamada level-2 model for the vertical eddy viscosity--are applied as turbulence closure conditions to numerical simulations of resolved-scale baroclinic lake circulations. The use of the total variation diminishing (TVD) technique in the numerical treatment of the advection terms in the governing equations depresses numerical diffusion to an acceptably low level and makes stable numerical performances possible with small eddy viscosities resulting from the turbulence closure parameterizations. The results show that, with regard to the effect of an external wind stress, the vertical turbulent mixing is mainly restricted to the topmost epilimnion with the order of magnitude for the vertical eddy viscosity of 10^-3 m^2 s^-1, whilst the horizontal turbulent mixing may reach a somewhat deeper zone with an order of magnitude for the horizontal eddy viscosity of 0.1-1 m^2 s^-1. Their spatial and temporal variations and influences on numerical results are significant. A comparison with prescribed constant eddy viscosities clearly shows the importance of subgrid-scale closures on resolved-scale flows in the lake circulation simulation. A predetermination of the eddy viscosities is inappropriate and should be abandoned. Their values must be determined by suitable subgrid-scale closure techniques.

  19. The Effects of Use of Average Instead of Daily Weather Data in Crop Growth Simulation Models

    NARCIS (Netherlands)

    Nonhebel, Sanderine

    1994-01-01

    Development and use of crop growth simulation models has increased in the last decades. Most crop growth models require daily weather data as input values. These data are not easy to obtain and therefore in many studies daily data are generated, or average values are used as input data for these

  20. Quaternion Averaging

    Science.gov (United States)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
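    In the scalar-weighted case, the optimal average quaternion is known to be the eigenvector associated with the largest eigenvalue of the weighted sum of quaternion outer products. The sketch below implements that construction in Python; the example quaternions and weights are made up for illustration (quaternions are unit 4-vectors, and q and -q represent the same attitude).

      import numpy as np

      def average_quaternion(quats, weights):
          # quats: (n, 4) array of unit quaternions; weights: (n,) array.
          # Optimal average = dominant eigenvector of M = sum_i w_i q_i q_i^T.
          M = sum(w * np.outer(q, q) for q, w in zip(quats, weights))
          eigvals, eigvecs = np.linalg.eigh(M)   # ascending eigenvalues
          return eigvecs[:, -1]                  # largest-eigenvalue vector

      q1 = np.array([1.0, 0.0, 0.0, 0.0])
      q2 = np.array([0.9999, 0.0141, 0.0, 0.0])
      q2 /= np.linalg.norm(q2)
      print(average_quaternion(np.vstack([q1, q2]), np.array([0.6, 0.4])))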

  1. Estimating Energy Conversion Efficiency of Thermoelectric Materials: Constant Property Versus Average Property Models

    Science.gov (United States)

    Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt

    2017-01-01

    Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
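    For reference, the efficiency quoted by the constant property model is the standard textbook expression (a well-known formula, not taken from this paper's data), with hot- and cold-side temperatures T_h and T_c and the figure of merit Z evaluated at the mean temperature:

      \eta_{\max} \;=\; \frac{T_h - T_c}{T_h}\,
      \frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h},
      \qquad \bar{T} = \frac{T_h + T_c}{2}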

  2. The subgrid-scale scalar variance under supercritical pressure conditions

    Science.gov (United States)

    Masi, Enrica; Bellan, Josette

    2011-08-01

    To model the subgrid-scale (SGS) scalar variance under supercritical-pressure conditions, an equation is first derived for it. This equation is considerably more complex than its equivalent for atmospheric-pressure conditions. Using a previously created direct numerical simulation (DNS) database of transitional states obtained for binary-species systems in the context of temporal mixing layers, the activity of terms in this equation is evaluated, and it is found that some of these new terms have magnitude comparable to that of governing terms in the classical equation. Most prominent among these new terms are those expressing the variation of diffusivity with thermodynamic variables and Soret terms having dissipative effects. Since models are not available for these new terms that would enable solving the SGS scalar variance equation, the adopted strategy is to directly model the SGS scalar variance. Two models are investigated for this quantity, both developed in the context of compressible flows. The first one is based on an approximate deconvolution approach and the second one is a gradient-like model which relies on a dynamic procedure using the Leonard term expansion. Both models are successful in reproducing the SGS scalar variance extracted from the filtered DNS database, and moreover, when used in the framework of a probability density function (PDF) approach in conjunction with the β-PDF, they excellently reproduce a filtered quantity which is a function of the scalar. For the dynamic model, the proportionality coefficient spans a small range of values through the layer cross-stream coordinate, boding well for the stability of large eddy simulations using this model.

  3. Evaluation of column-averaged methane in models and TCCON with a focus on the stratosphere

    Science.gov (United States)

    Ostler, Andreas; Sussmann, Ralf; Patra, Prabir K.; Houweling, Sander; De Bruine, Marko; Stiller, Gabriele P.; Haenel, Florian J.; Plieninger, Johannes; Bousquet, Philippe; Yin, Yi; Saunois, Marielle; Walker, Kaley A.; Deutscher, Nicholas M.; Griffith, David W. T.; Blumenstock, Thomas; Hase, Frank; Warneke, Thorsten; Wang, Zhiting; Kivi, Rigel; Robinson, John

    2016-09-01

    The distribution of methane (CH4) in the stratosphere can be a major driver of spatial variability in the dry-air column-averaged CH4 mixing ratio (XCH4), which is being measured increasingly for the assessment of CH4 surface emissions. Chemistry-transport models (CTMs) therefore need to simulate the tropospheric and stratospheric fractional columns of XCH4 accurately for estimating surface emissions from XCH4. Simulations from three CTMs are tested against XCH4 observations from the Total Carbon Column Network (TCCON). We analyze how the model-TCCON agreement in XCH4 depends on the model representation of stratospheric CH4 distributions. Model equivalents of TCCON XCH4 are computed with stratospheric CH4 fields from both the model simulations and from satellite-based CH4 distributions from MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) and MIPAS CH4 fields adjusted to ACE-FTS (Atmospheric Chemistry Experiment Fourier Transform Spectrometer) observations. Using MIPAS-based stratospheric CH4 fields in place of model simulations improves the model-TCCON XCH4 agreement for all models. For the Atmospheric Chemistry Transport Model (ACTM) the average XCH4 bias is significantly reduced from 38.1 to 13.7 ppb, whereas small improvements are found for the models TM5 (Transport Model, version 5; from 8.7 to 4.3 ppb) and LMDz (Laboratoire de Météorologie Dynamique model with zooming capability; from 6.8 to 4.3 ppb). Replacing model simulations with MIPAS stratospheric CH4 fields adjusted to ACE-FTS reduces the average XCH4 bias for ACTM (3.3 ppb), but increases the average XCH4 bias for TM5 (10.8 ppb) and LMDz (20.0 ppb). These findings imply that model errors in simulating stratospheric CH4 contribute to model biases. Current satellite instruments cannot definitively measure stratospheric CH4 to sufficient accuracy to eliminate these biases. Applying transport diagnostics to the models indicates that model-to-model differences in the simulation of

  4. A refined nonlinear averaged model for constant frequency current mode controlled PWM converters

    Science.gov (United States)

    Rodriguez, Francis D.; Chen, Jesse E.

    1991-10-01

    A refined, duo-mode model for current programmed buck converters is presented. The refined model uses a form of the current mode control law which is truly invariant with respect to operating conditions. That is, it is valid for both transient and steady-state operating conditions regardless of the converter operating mode, which could be either continuous conduction mode (CCM) or discontinuous conduction mode (DCM). The large-signal transient response predicted using the refined average model is shown to be virtually indistinguishable, in an average sense, from that predicted using a pulse-by-pulse simulation. The refined model is shown to exhibit improved high-frequency accuracy in both time and frequency domains. The model has been implemented in SPICE 2G6 and runs with default analysis options.

  5. Model averaging methods to merge operational statistical and dynamic seasonal streamflow forecasts in Australia

    Science.gov (United States)

    Schepen, Andrew; Wang, Q. J.

    2015-03-01

    The Australian Bureau of Meteorology produces statistical and dynamic seasonal streamflow forecasts. The statistical and dynamic forecasts are similarly reliable in ensemble spread; however, skill varies by catchment and season. Therefore, it may be possible to optimize forecasting skill by weighting and merging statistical and dynamic forecasts. Two model averaging methods are evaluated for merging forecasts for 12 locations. The first method, Bayesian model averaging (BMA), applies averaging to forecast probability densities (and thus cumulative probabilities) for a given forecast variable value. The second method, quantile model averaging (QMA), applies averaging to forecast variable values (quantiles) for a given cumulative probability (quantile fraction). BMA and QMA are found to perform similarly in terms of overall skill scores and reliability in ensemble spread. Both methods improve forecast skill across catchments and seasons. However, when both the statistical and dynamical forecasting approaches are skillful but produce, on special occasions, very different event forecasts, the BMA merged forecasts for these events can have unusually wide and bimodal distributions. In contrast, the distributions of the QMA merged forecasts for these events are narrower, unimodal and generally more smoothly shaped, and are potentially more easily communicated to and interpreted by the forecast users. Such special occasions are found to be rare. However, every forecast counts in an operational service, and therefore the occasional contrast in merged forecasts between the two methods may be more significant than the indifference shown by the overall skill and reliability performance.
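    The difference between the two merging rules can be seen in a few lines of Python: for two well-separated Gaussian member forecasts, density averaging (BMA) yields a mixture, possibly bimodal, whose quantiles must be read off the mixture CDF, while quantile averaging (QMA) averages the member quantiles directly and stays unimodal. All numbers below are illustrative, not from the paper.

      import numpy as np
      from scipy.stats import norm

      p = np.linspace(0.01, 0.99, 99)
      stat = norm(loc=100.0, scale=10.0)   # statistical member forecast
      dyn = norm(loc=160.0, scale=12.0)    # dynamic member forecast
      w = (0.5, 0.5)                       # merging weights

      # QMA: average member quantiles at each quantile fraction.
      qma_q = w[0] * stat.ppf(p) + w[1] * dyn.ppf(p)

      # BMA: quantiles of the mixture, via numerical inversion of its CDF.
      grid = np.linspace(40.0, 220.0, 2001)
      mix_cdf = w[0] * stat.cdf(grid) + w[1] * dyn.cdf(grid)
      bma_q = np.interp(p, mix_cdf, grid)

      print(qma_q[49], bma_q[49])  # medians of the two merged forecasts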

  6. An extended car-following model accounting for the average headway effect in intelligent transportation system

    Science.gov (United States)

    Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming

    2017-04-01

    In this paper, an extended car-following model is proposed to simulate traffic flow by considering the average headway of the preceding vehicle group in an intelligent transportation systems environment. The stability condition of this model is obtained by using linear stability analysis. The phase diagram can be divided into three regions classified as the stable, the metastable and the unstable ones. The theoretical result shows that the average headway plays an important role in improving the stabilization of the traffic system. The mKdV equation near the critical point is derived to describe the evolution properties of traffic density waves by applying the reductive perturbation method. Furthermore, through the simulation of the space-time evolution of the vehicle headway, it is shown that the traffic jam can be suppressed efficiently by taking the average headway effect into account, and the analytical result is consistent with the simulation one.
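    A plausible form of such a model (an assumption consistent with the optimal-velocity family of car-following models, not taken verbatim from the paper) replaces the single headway in the optimal velocity function with the average headway of the m preceding vehicles:

      \frac{dv_n(t)}{dt} = a\left[V\!\big(\overline{\Delta x}_n(t)\big) - v_n(t)\right],
      \qquad \overline{\Delta x}_n(t) = \frac{1}{m}\sum_{j=0}^{m-1} \Delta x_{n+j}(t)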

  7. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    Directory of Open Access Journals (Sweden)

    Huashan Li

    2014-01-01

    Full Text Available Since air temperature records are readily available around the world, models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on the Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China.
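    The Hargreaves-Samani baseline that the new model builds on estimates global radiation from extraterrestrial radiation Ra and the diurnal temperature range, as in the sketch below. The coefficient k_rs is an empirical constant (roughly 0.16 inland and 0.19 for coastal sites); the input values here are illustrative.

      def hs_radiation(ra, t_max, t_min, k_rs=0.16):
          # Monthly average daily global solar radiation, same units as ra.
          return k_rs * ra * (t_max - t_min) ** 0.5

      print(hs_radiation(ra=35.0, t_max=28.0, t_min=16.0))  # MJ m^-2 day^-1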

  8. On the Gibbsian Nature of the Random Field Kac Model under Block-Averaging

    NARCIS (Netherlands)

    Külske, Christof

    2001-01-01

    We consider the Kac–Ising model in an arbitrary configuration of local magnetic fields η = (η_i)_{i ∈ Z^d}, in any dimension d, at any inverse temperature. We investigate the Gibbs properties of the ‘renormalized’ infinite volume measures obtained by block averaging any of the Gibbs measures corresponding

  9. Averaging of the Equations of the Standard Cosmological Model over Rapid Oscillations

    Science.gov (United States)

    Ignat'ev, Yu. G.; Samigullina, A. R.

    2017-11-01

    An averaging of the equations of the standard cosmological model (SCM) is carried out. It is shown that the main contribution to the macroscopic energy density of the scalar field comes from its microscopic oscillations with the Compton period. The effective macroscopic equation of state of the oscillations of the scalar field corresponds to the nonrelativistic limit.

  10. The Possibility of Cosmic Acceleration via Spatial Averaging in Lemaitre-Tolman-Bondi Models

    OpenAIRE

    Paranjape, Aseem; Singh, T. P.

    2006-01-01

    We investigate the possible occurrence of a positive cosmic acceleration in a spatially averaged, expanding, unbound Lemaitre-Tolman-Bondi cosmology. By studying an approximation in which the contribution of three-curvature dominates over the matter density, we construct numerical models which exhibit acceleration.

  11. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    Science.gov (United States)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, that fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information of sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF

  12. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    Directory of Open Access Journals (Sweden)

    C. Montzka

    2017-07-01

    Full Text Available Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller–Miller scaling in the relaxed form by Warrick, that fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information of sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem–van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based

  13. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  14. Comparison of sea ice simulations with interactive and monthly averaged forcing models

    Science.gov (United States)

    Wu, Xingren; Simmonds, Ian; Budd, W. F.

    1996-04-01

    A dynamic-thermodynamic sea ice model is developed and coupled with the "21 wave 9 level" Melbourne University general circulation model to simulate the seasonal cycle of the global sea ice distribution. We have run the coupled system and obtained a creditable seasonal simulation of global sea ice. When monthly averaged atmospheric data (taken from the mean of the coupled run) are used to force the sea ice model, the seasonal cycle of sea ice extent (to the outer ice edge) is quite similar to that simulated in the interactive run. However, the actual sea ice covered area (i.e., excluding leads) differs considerably between the two simulations. Sea ice is more compact in the monthly averaged forced run than in the interactive run throughout the year in both hemispheres. The sea ice thickness distribution also differs between the two runs. In general, the sea ice is more open and thicker in the seasonal ice zone of the two polar regions for the interactive coupled case than for the mean forcing. We have also run the model forced with daily atmospheric data and the simulated sea ice distribution differs significantly from both the interactive model and the monthly averaged forcing results. These differences highlight the dangers of undertaking studies with sea ice models forced with prescribed atmospheric conditions rather than using a fully interactive atmosphere-sea ice system.

  15. Averaged model to study long-term dynamics of a probe about Mercury

    Science.gov (United States)

    Tresaco, Eva; Carvalho, Jean Paulo S.; Prado, Antonio F. B. A.; Elipe, Antonio; de Moraes, Rodolpho Vilhena

    2018-02-01

    This paper provides a method for finding initial conditions of frozen orbits for a probe around Mercury. Frozen orbits are those whose orbital elements remain constant on average. Thus, at the same point in each orbit, the satellite always passes at the same altitude. This is very interesting for scientific missions that require close inspection of any celestial body. The orbital dynamics of an artificial satellite about Mercury is governed by the potential attraction of the main body. Besides the Keplerian attraction, we consider the inhomogeneities of the potential of the central body. We include secondary terms of Mercury's gravity field from J_2 up to J_6, and the tesseral harmonic \overline{C}_{22}, which is of the same magnitude as the zonal J_2. In the case of science missions about Mercury, it is also important to consider third-body perturbation (Sun). The circular restricted three-body problem cannot be applied to the Mercury-Sun system due to Mercury's non-negligible orbital eccentricity. Besides the harmonic coefficients of Mercury's gravitational potential and the Sun's gravitational perturbation, our averaged model also includes solar radiation pressure. This simplified model captures the majority of the dynamics of low and high orbits about Mercury. In order to capture the dominant characteristics of the dynamics, short-period terms of the system are removed applying a double-averaging technique. This algorithm is a two-fold process which firstly averages over the period of the satellite, and secondly averages with respect to the period of the third body. This simplified Hamiltonian model is introduced in the Lagrange planetary equations. Thus, frozen orbits are characterized by a surface depending on three variables: the orbital semimajor axis, eccentricity and inclination. We find frozen orbits for an average altitude of 400 and 1000 km, which are the predicted values for the BepiColombo mission. Finally, the paper delves into the orbital stability of frozen

  16. Using multi-model averaging to improve the reliability of catchment scale nitrogen predictions

    Directory of Open Access Journals (Sweden)

    J.-F. Exbrayat

    2013-01-01

    Full Text Available Hydro-biogeochemical models are used to foresee the impact of mitigation measures on water quality. Usually, scenario-based studies rely on single model applications. This is done in spite of the widely acknowledged advantage of ensemble approaches to cope with structural model uncertainty issues. As an attempt to demonstrate the reliability of such multi-model efforts in the hydro-biogeochemical context, this methodological contribution proposes an adaptation of the reliability ensemble averaging (REA) philosophy to nitrogen losses predictions. A total of 4 models are used to predict the total nitrogen (TN) losses from the well-monitored Ellen Brook catchment in Western Australia. Simulations include re-predictions of current conditions and a set of straightforward management changes targeting fertilisation scenarios. Results show that, in spite of good calibration metrics, one of the models provides a very different response to management changes. This behaviour leads the simple average of the ensemble members to also predict reductions in TN export that are not in agreement with the other models. However, considering the convergence of model predictions in the more sophisticated REA approach assigns more weight to previously less well-calibrated models that are more in agreement with each other. This method also avoids having to disqualify any of the ensemble members.
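    The REA weighting idea can be sketched as follows, in the spirit of Giorgi and Mearns (2002): each model's reliability combines a performance criterion (bias against observations) and a convergence criterion (distance from the weighted ensemble mean), iterated to a fixed point. Function names, the choice of exponents (both set to 1), and the numbers are illustrative, not the paper's exact implementation.

      import numpy as np

      def rea_weights(preds, obs, eps, n_iter=50):
          # preds: (n_models,) predictions of one quantity; obs: observation;
          # eps: natural variability used to clamp both criteria.
          bias = np.maximum(np.abs(preds - obs), eps)
          r = eps / bias                          # performance factor only
          for _ in range(n_iter):
              mean = np.sum(r * preds) / np.sum(r)
              dist = np.maximum(np.abs(preds - mean), eps)
              r = (eps / bias) * (eps / dist)     # performance x convergence
          return r / r.sum()

      # The outlier model (7.5) receives a small weight:
      print(rea_weights(np.array([4.1, 3.9, 4.3, 7.5]), obs=4.0, eps=0.3))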

  17. Third-Body Perturbation Using a Single Averaged Model: Application in Nonsingular Variables

    Directory of Open Access Journals (Sweden)

    Carlos Renato Huaura Solórzano

    2007-01-01

    Full Text Available Lagrange's planetary equations written in terms of the classical orbital elements have the disadvantage of singularities in eccentricity and inclination. These singularities are due to the mathematical model used and do not have physical reasons. In this paper, we studied the third-body perturbation using a single averaged model in nonsingular variables. The goal is to develop a semianalytical study of the perturbation caused in a spacecraft by a third body using a single averaged model to eliminate short-period terms caused by the motion of the spacecraft. This is valid if no resonance occurs with the moon or the sun. Several plots show the time histories of the Keplerian elements of equatorial and circular orbits, which are the situations with singularities. In this paper, the expansions are limited to second order in eccentricity, to second order in the ratio of the semimajor axes of the perturbing and perturbed bodies, and to fourth order in inclination.

  18. Seasonal Ensemble Forecasting: Using Bayesian Model Averaging and Constrained Fourier Smoothing

    Science.gov (United States)

    Sahu, M.; Lahari, S.; Khosa, R.

    2016-12-01

    Bayesian model averaging (BMA) is a statistical method that can be used to exploit the strengths of individual models by combining them with suitable weights assigned to each member of the ensemble, in the hope of achieving a more reliable forecast together with a quantification of uncertainty. In this paper, a methodology has been proposed for "online" real-time streamflow forecasting using Bayesian averaging and constrained Fourier smoothing, which seeks to correct for the negative coefficients that are likely to arise in the traditional Fourier-transform-based approach. It is understood that the realization of any given state of a process under observation is expected to consist of a 'non-observable' true signal which is corrupted by noise or random fluctuation that could potentially arise in nature for a diverse set of reasons, and estimation of the expected state of the process has to be managed within this 'noisy' environment. Further, in order to capture seasonal variation in streamflow, updated estimates of period-specific variances are used to determine a weight function. Various models, including multi-scale coupled wavelet-Volterra models, ARMA models and unit hydrograph forecasting methods, amongst others, are used to generate ensembles of forecasts and combined using Bayesian averaging.

  19. Actuator disk model of wind farms based on the rotor average wind speed

    DEFF Research Database (Denmark)

    Han, Xing Xing; Xu, Chang; Liu, De You

    2016-01-01

    Due to the difficulty of estimating the reference wind speed for wake modeling in wind farms, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of how the disk regions are recognized on the mesh. The coefficient C_{4ε} of the turbulent source term is also discussed and modified to improve the simulation accuracy. To validate the model, results are presented for the Nibe-B wind turbine and the Horns Rev I offshore wind farm and show good agreement with the measurements.
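    A hedged sketch of an actuator-disk momentum source driven by the rotor average wind speed follows; the free-stream equivalent speed is recovered from 1D momentum theory and the thrust is redistributed over the disk cells by volume fraction. All names and the exact formulation are illustrative assumptions, not the paper's implementation.

      import numpy as np

      def momentum_source(u_disk_avg, ct, rho, disk_area, cell_volumes):
          # Induction factor from C_T = 4a(1 - a), then U_ref = U_d / (1 - a).
          a = 0.5 * (1.0 - np.sqrt(1.0 - ct))
          u_ref = u_disk_avg / (1.0 - a)
          thrust = 0.5 * rho * disk_area * ct * u_ref ** 2    # total thrust, N
          vol_frac = cell_volumes / cell_volumes.sum()        # volume correction
          return -thrust * vol_frac / cell_volumes            # N m^-3 per cell

      cells = np.array([4.0, 5.0, 4.5, 5.5])                  # cell volumes, m^3
      print(momentum_source(7.0, ct=0.8, rho=1.225,
                            disk_area=2124.0, cell_volumes=cells))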

  20. Model averaging methods to merge statistical and dynamic seasonal streamflow forecasts in Australia

    Science.gov (United States)

    Schepen, A.; Wang, Q. J.

    2014-12-01

    The Australian Bureau of Meteorology operates a statistical seasonal streamflow forecasting service. It has also developed a dynamic seasonal streamflow forecasting approach. The two approaches produce similarly reliable forecasts in terms of ensemble spread but can differ in forecast skill depending on catchment and season. Therefore, it may be possible to augment the skill of the existing service by objectively weighting and merging the forecasts. Bayesian model averaging (BMA) is first applied to merge statistical and dynamic forecasts for 12 locations using leave-five-years-out cross-validation. It is seen that the BMA merged forecasts can sometimes be too uncertain, as shown by ensemble spreads that are unrealistically wide and even bi-modal. The BMA method applies averaging to forecast probability densities (and thus cumulative probabilities) for a given forecast variable value. An alternative approach is quantile model averaging (QMA), whereby forecast variable values (quantiles) are averaged for a given cumulative probability (quantile fraction). For the 12 locations, QMA is compared to BMA. BMA and QMA perform similarly in terms of forecast accuracy skill scores and reliability in terms of ensemble spread. Both methods improve forecast skill across catchments and seasons by combining the different strengths of the statistical and dynamic approaches. A major advantage of QMA over BMA is that it always produces reasonably well defined forecast distributions, even in the special cases where BMA does not. Optimally estimated QMA weights and BMA weights are similar; however, BMA weights are more efficiently estimated.

  1. Weighed scalar averaging in LTB dust models: part II. A formalism of exact perturbations

    Science.gov (United States)

    Sussman, Roberto A.

    2013-03-01

    We examine the exact perturbations that arise from the q-average formalism that was applied in the preceding article (part I) to Lemaître-Tolman-Bondi (LTB) models. By introducing an initial value parametrization, we show that all LTB scalars that take an FLRW ‘look-alike’ form (frequently used in the literature dealing with LTB models) follow as q-averages of covariant scalars that are common to FLRW models. These q-scalars determine for every averaging domain a unique FLRW background state through Darmois matching conditions at the domain boundary, though the definition of this background does not require an actual matching with an FLRW region (Swiss cheese-type models). Local perturbations describe the deviation from the FLRW background state through the local gradients of covariant scalars at the boundary of every comoving domain, while non-local perturbations do so in terms of the intuitive notion of a ‘contrast’ of local scalars with respect to FLRW reference values that emerge from q-averages assigned to the whole domain or the whole time slice in the asymptotic limit. We derive fluid flow evolution equations that completely determine the dynamics of the models in terms of the q-scalars and both types of perturbations. A rigorous formalism of exact spherical nonlinear perturbations is defined over the FLRW background state associated with the q-scalars, recovering the standard results of linear perturbation theory in the appropriate limit. We examine the notion of the amplitude and illustrate the differences between local and non-local perturbations by qualitative diagrams and through an example of a cosmic density void that follows from the numeric solution of the evolution equations.

  2. Extra compressibility terms for Favre-averaged two-equation models of inhomogeneous turbulent flows

    Science.gov (United States)

    Rubesin, Morris W.

    1990-01-01

    Forms of extra-compressibility terms that result from the use of Favre averaging of the turbulence transport equations for kinetic energy and dissipation are derived. These forms introduce three new modeling constants: a polytropic coefficient that defines the interrelationships of the pressure, density, and enthalpy fluctuations, and two constants in the dissipation equation that account for the non-zero pressure-dilatation and mean pressure gradients.
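    For reference, the Favre (density-weighted) average underlying the derivation is defined as follows (standard notation, with overbars denoting Reynolds averages and double primes the Favre fluctuations):

      \tilde{f} = \frac{\overline{\rho f}}{\overline{\rho}},
      \qquad f = \tilde{f} + f''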

  3. A Tidally Averaged Sediment-Transport Model for San Francisco Bay, California

    Science.gov (United States)

    Lionberger, Megan A.; Schoellhamer, David H.

    2009-01-01

    A tidally averaged sediment-transport model of San Francisco Bay was incorporated into a tidally averaged salinity box model previously developed and calibrated using salinity, a conservative tracer (Uncles and Peterson, 1995; Knowles, 1996). The Bay is represented in the model by 50 segments composed of two layers: one representing the channel (>5-meter depth) and the other the shallows (0- to 5-meter depth). Calculations are made using a daily time step and simulations can be made on the decadal time scale. The sediment-transport model includes an erosion-deposition algorithm, a bed-sediment algorithm, and sediment boundary conditions. Erosion and deposition of bed sediments are calculated explicitly, and suspended sediment is transported by implicitly solving the advection-dispersion equation. The bed-sediment model simulates the increase in bed strength with depth, owing to consolidation of fine sediments that make up San Francisco Bay mud. The model is calibrated to either net sedimentation calculated from bathymetric-change data or measured suspended-sediment concentration. Specified boundary conditions are the tributary fluxes of suspended sediment and suspended-sediment concentration in the Pacific Ocean. Results of model calibration and validation show that the model simulates the trends in suspended-sediment concentration associated with tidal fluctuations, residual velocity, and wind stress well, although the spring-neap tidal suspended-sediment concentration variability was consistently underestimated. Model validation also showed poor simulation of seasonal sediment pulses from the Sacramento-San Joaquin River Delta at Point San Pablo because the pulses enter the Bay over only a few days and the fate of the pulses is determined by intra-tidal deposition and resuspension that are not included in this tidally averaged model. The model was calibrated to net-basin sedimentation to calculate budgets of sediment and sediment-associated contaminants. While

  4. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than those of classical monolithic GP, and eventually the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model puts forward a parsimonious solution, which is of noteworthy practical value. In addition, the approach allows the user to enter human insight into the problem to examine evolved models and pick the best performing programs out for further analysis.
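    The moving-average pre-processing step can be illustrated in a few lines: the input series is smoothed before model identification to diminish the lagged-prediction effect of stand-alone data-driven models. The window length and data are illustrative.

      import numpy as np

      def moving_average(x, window=3):
          # Simple (trailing) moving average filter.
          kernel = np.ones(window) / window
          return np.convolve(x, kernel, mode="valid")

      flow = np.array([12.0, 15.0, 14.0, 30.0, 22.0, 18.0, 16.0])
      print(moving_average(flow))   # smoothed series fed to the GP model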

  5. A bounce-averaged kinetic model of the ring current ion population

    Science.gov (United States)

    Jordanova, V. K.; Kozyra, J. U.; Khazanov, G. V.; Nagy, A. F.; Rasmussen, C. E.; Fok, M.-C.

    1994-01-01

    A bounce-averaged ring current kinetic model for arbitrary pitch angle, including losses due to charge exchange and Coulomb collisions along ion drift paths, is developed and solved numerically. Results from simplified model runs, intended to illustrate the effects of adiabatic drifts and collisional losses on the proton population, are presented. The processes of: (1) particle acceleration under the conditions of time-independent magnetospheric electric fields; (2) a predominant loss of particles with small pitch angles due to charge exchange; and (3) a buildup of a low-energy population caused by the Coulomb drag energy degradation, are discussed.

  6. Time series forecasting using ERNN and QR based on Bayesian model averaging

    Science.gov (United States)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. The technique was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique. The amalgamation produced a hybrid technique known as the hybrid ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.

  7. Model selection and averaging of nonlinear mixed-effect models for robust phase III dose selection.

    Science.gov (United States)

    Aoki, Yasunori; Röshammar, Daniel; Hamrén, Bengt; Hooker, Andrew C

    2017-12-01

    Population model-based (pharmacometric) approaches are widely used for the analyses of phase IIb clinical trial data to increase the accuracy of the dose selection for phase III clinical trials. On the other hand, if the analysis is based on one selected model, model selection bias can potentially spoil the accuracy of the dose selection process. In this paper, four methods that assume a number of pre-defined model structure candidates, for example a set of dose-response shape functions, and then combine or select those candidate models are introduced. The key hypothesis is that by combining both model structure uncertainty and model parameter uncertainty using these methodologies, we can make a more robust model based dose selection decision at the end of a phase IIb clinical trial. These methods are investigated using realistic simulation studies based on the study protocol of an actual phase IIb trial for an oral asthma drug candidate (AZD1981). Based on the simulation study, it is demonstrated that a bootstrap model selection method properly avoids model selection bias and in most cases increases the accuracy of the end of phase IIb decision. Thus, we recommend using this bootstrap model selection method when conducting population model-based decision-making at the end of phase IIb clinical trials.

  8. Average spectral power changes at the hippocampal electroencephalogram in schizophrenia model induced by ketamine.

    Science.gov (United States)

    Sampaio, Luis Rafael L; Borges, Lucas T N; Silva, Joyse M F; de Andrade, Francisca Roselin O; Barbosa, Talita M; Oliveira, Tatiana Q; Macedo, Danielle; Lima, Ricardo F; Dantas, Leonardo P; Patrocinio, Manoel Cláudio A; do Vale, Otoni C; Vasconcelos, Silvânia M M

    2017-08-29

    The use of ketamine (Ket) as a pharmacological model of schizophrenia is an important tool for understanding the main mechanisms of glutamatergically regulated neural oscillations. Thus, the aim of the current study was to evaluate Ket-induced changes in the average spectral power using hippocampal quantitative electroencephalography (QEEG). To this end, male Wistar rats underwent stereotactic surgery for the implantation of an electrode in the right hippocampus. After three days, the animals were divided into four groups that were treated for 10 consecutive days with Ket (10, 50, or 100 mg/kg). Brainwaves were captured on the 1st or 10th day for the acute or repeated treatments, respectively. The administration of Ket (10, 50, or 100 mg/kg), compared with controls, induced changes in the hippocampal average spectral power of delta, theta, alpha, and low- or high-frequency gamma waves after acute or repeated treatments. Therefore, based on the alterations in the average spectral power of hippocampal waves induced by Ket, our findings might provide a basis for the use of hippocampal QEEG in animal models of schizophrenia. © 2017 Société Française de Pharmacologie et de Thérapeutique.

  9. Averaged Solvent Embedding Potential Parameters for Multiscale Modeling of Molecular Properties

    DEFF Research Database (Denmark)

    Beerepoot, Maarten; Steindal, Arnfinn Hykkerud; List, Nanna Holmgaard

    2016-01-01

    We derive and validate averaged solvent parameters for embedding potentials to be used in polarizable embedding quantum mechanics/molecular mechanics (QM/MM) molecular property calculations of solutes in organic solvents. The parameters are solvent-specific atom-centered partial charges...... by analyzing the quality of the resulting molecular electrostatic potentials with respect to full QM potentials. We show that a combination of geometry-specific parameters for solvent molecules close to the QM region and averaged parameters for solvent molecules further away allows for efficient polarizable...... embedding multiscale modeling without compromising the accuracy. The results are promising for the development of general embedding parameters for biomolecules, where the reduction in computational cost can be considerable....

  10. Modeling an Application's Theoretical Minimum and Average Transactional Response Times

    Energy Technology Data Exchange (ETDEWEB)

    Paiz, Mary Rose [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-04-01

    The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
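
    As a rough illustration of the lower-threshold idea, the sketch below fits a stationary GEV to simulated daily minima with scipy; the report uses a non-stationary GEV, so this is a simplification, and the gamma-distributed response times are invented for the example.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        # Hypothetical daily minimum response times in milliseconds
        daily_min_rt = rng.gamma(shape=9.0, scale=2.0, size=365)

        # Block minima of X are block maxima of -X, so fit the GEV to -X
        shape, loc, scale = genextreme.fit(-daily_min_rt)

        # A lower threshold: the 1% quantile of the minimum's distribution
        lower = -genextreme.ppf(0.99, shape, loc=loc, scale=scale)
        print(f"lower threshold ~ {lower:.1f} ms")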

  11. Forecasting Rice Productivity and Production of Odisha, India, Using Autoregressive Integrated Moving Average Models

    Directory of Open Access Journals (Sweden)

    Rahul Tripathi

    2014-01-01

    Forecasts of the rice area, production, and productivity of Odisha were made from historical data covering 1950-51 to 2008-09 by using univariate autoregressive integrated moving average (ARIMA) models and were compared with forecasts from the all-India data. The autoregressive (p) and moving average (q) parameters were identified based on the significant spikes in the plots of the partial autocorrelation function (PACF) and autocorrelation function (ACF) of the different time series. An ARIMA (2, 1, 0) model was found suitable for all-India rice productivity and production, whereas ARIMA (1, 1, 1) was best fitted for forecasting rice productivity and production in Odisha. Predictions were made for the immediate next three years, that is, 2007-08, 2008-09, and 2009-10, using the best-fitted ARIMA models selected on the minimum value of the selection criteria, that is, the Akaike information criterion (AIC) and Schwarz-Bayesian information criterion (SBC). The performance of the models was validated by comparing the percentage deviation from the actual values and the mean absolute percent error (MAPE), which was found to be 0.61 and 2.99% for the area under rice in Odisha and India, respectively. Similarly, for the prediction of rice production and productivity in Odisha and India, the MAPE was found to be less than 6%.
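
    The order-selection step can be sketched with statsmodels: a small grid search over (p, q) with differencing d = 1, scored by AIC. The random-walk-with-drift series below stands in for the historical records; it is not the paper's data.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        y = np.cumsum(rng.normal(0.5, 1.0, size=59))  # hypothetical yearly series

        best = None
        for p in range(3):
            for q in range(3):
                fit = ARIMA(y, order=(p, 1, q)).fit()
                if best is None or fit.aic < best[0]:
                    best = (fit.aic, (p, 1, q), fit)

        aic, order, model = best
        print(order, round(aic, 2))     # selected (p, d, q) and its AIC
        print(model.forecast(steps=3))  # forecasts for the next three years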

  12. Partial ionization in dense plasmas: comparisons among average-atom density functional models.

    Science.gov (United States)

    Murillo, Michael S; Weisheit, Jon; Hansen, Stephanie B; Dharma-wardana, M W C

    2013-06-01

    Nuclei interacting with electrons in dense plasmas acquire electronic bound states, modify continuum states, generate resonances and hopping electron states, and generate short-range ionic order. The mean ionization state (MIS), i.e., the mean charge Z of an average ion in such plasmas, is a valuable concept: Pseudopotentials, pair-distribution functions, equations of state, transport properties, energy-relaxation rates, opacity, radiative processes, etc., can all be formulated using the MIS of the plasma more concisely than with an all-electron description. However, the MIS does not have a unique definition and is used and defined differently in different statistical models of plasmas. Here, using the MIS formulations of several average-atom models based on density functional theory, we compare numerical results for Be, Al, and Cu plasmas for conditions inclusive of incomplete atomic ionization and partial electron degeneracy. By contrasting modern orbital-based models with orbital-free Thomas-Fermi models, we quantify the effects of shell structure, continuum resonances, the role of exchange and correlation, and the effects of different choices of the fundamental cell and boundary conditions. Finally, the role of the MIS in plasma applications is illustrated in the context of x-ray Thomson scattering in warm dense matter.

  13. Model averaging for robust assessment of QT prolongation by concentration-response analysis.

    Science.gov (United States)

    Dosne, A G; Bergstrand, M; Karlsson, M O; Renard, D; Heimann, G

    2017-10-30

    Assessing the QT prolongation potential of a drug is typically done based on pivotal safety studies called thorough QT studies. Model-based estimation of the drug-induced QT prolongation at the estimated mean maximum drug concentration could increase efficiency over the currently used intersection-union test. However, robustness against model misspecification needs to be guaranteed in pivotal settings. The objective of this work was to develop an efficient, fully prespecified model-based inference method for thorough QT studies, which controls the type I error and provides satisfactory test power. This is achieved by model averaging: The proposed estimator of the concentration-response relationship is a weighted average of a parametric (linear) and a nonparametric (monotonic I-splines) estimator, with weights based on mean integrated square error. The desired properties of the method were confirmed in an extensive simulation study, which demonstrated that the proposed method controlled the type I error adequately, and that its power was higher than the power of the nonparametric method alone. The method can be extended from thorough QT studies to the analysis of QT data from pooled phase I studies. Copyright © 2017 John Wiley & Sons, Ltd.
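
    A schematic version of the averaged estimator follows. The paper combines a linear fit with monotonic I-splines, weighted by mean integrated square error; this sketch substitutes isotonic regression for the spline estimator and uses a fixed illustrative weight, so it shows the structure of the average rather than the published method.

        import numpy as np
        from sklearn.isotonic import IsotonicRegression
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        conc = rng.uniform(0, 10, 80)                       # hypothetical concentrations
        dqt = 0.8 * np.sqrt(conc) + rng.normal(0, 0.5, 80)  # hypothetical QT changes (ms)

        lin = LinearRegression().fit(conc.reshape(-1, 1), dqt)
        iso = IsotonicRegression(increasing=True, out_of_bounds="clip").fit(conc, dqt)

        w_lin = 0.4  # illustrative weight; the paper derives weights from MISE
        c_max = 7.5  # hypothetical mean maximum concentration
        averaged = (w_lin * lin.predict([[c_max]])[0]
                    + (1 - w_lin) * iso.predict([c_max])[0])
        print(averaged)  # model-averaged QT prolongation estimate at c_max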

  14. Radiative forcing and climate metrics for ozone precursor emissions: the impact of multi-model averaging

    Directory of Open Access Journals (Sweden)

    C. R. MacIntosh

    2015-04-01

    Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds), and CO. When these ozone changes are used to calculate radiative forcing (RF) and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP), there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source-receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs, and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values for these quantities are the residual of the sum of terms of opposing signs. For example, the standard deviation for the 20 year GWP is 2–3

  15. Demonstration of two-phase Direct Numerical Simulation (DNS) methods potentiality to give information to averaged models: application to bubbles column; Demonstration de la potentialite des methodes de SND diphasique a renseigner les modeles moyennes: Application a la colonne a bulles

    Energy Technology Data Exchange (ETDEWEB)

    Magdeleine, S.

    2009-11-15

    This work is part of a long-term project that aims at using two-phase Direct Numerical Simulation (DNS) to give information to averaged models. For now, it is limited to isothermal bubbly flows with no phase change. It can be subdivided into two parts. Firstly, theoretical developments are made in order to build an equivalent of Large Eddy Simulation (LES) for two-phase flows, called Interfaces and Sub-grid Scales (ISS). After the implementation of the ISS model in our code, Trio{sub U}, a set of various cases is used to validate the model. Then, special tests are made in order to optimize the model for our particular bubbly flows. We thus showed the capacity of the ISS model to produce an inexpensive yet pertinent solution. Secondly, we use the ISS model to perform simulations of bubbly flows in a column. Results of these simulations are averaged to obtain quantities that appear in the mass, momentum and interfacial area density balances. We thus proceeded to an a priori test of a complete one-dimensional averaged model. We showed that this model predicts the simplest flows (laminar and monodisperse) well. Moreover, the hypothesis of a single pressure, which is often made in averaged models like CATHARE, NEPTUNE and RELAP5, is satisfied in such flows. By contrast, without a polydisperse model, the drag is over-predicted and the uncorrelated A{sub i} flux needs a closure law. Finally, we showed that in turbulent flows, fluctuations of velocity and pressure in the liquid phase are not represented by the tested averaged model. (author)

  16. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  17. Forecasting mortality of road traffic injuries in China using seasonal autoregressive integrated moving average model.

    Science.gov (United States)

    Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun

    2015-02-01

    Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze the seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. The Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. The autocorrelation function and partial autocorrelation function of the residuals and the Ljung-Box test were used to compare the goodness-of-fit between the different models. The SARIMA model was used to forecast monthly road traffic deaths in 2012. The seasonal pattern of the road traffic mortality data was statistically significant in China. The SARIMA (1, 1, 1) (0, 1, 1)12 model was the best-fitting model among the various candidates; its Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed no autocorrelation in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths using the SARIMA (1, 1, 1) (0, 1, 1)12 model for years 2000 to 2011 closely followed the observed number of road traffic deaths for the same years. The predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using the SARIMA model. The SARIMA model applied to historical road traffic death data could provide important evidence of the burden of road traffic injuries in China. Copyright © 2015 Elsevier Inc. All rights reserved.
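
    The selected model class is available in statsmodels as SARIMAX. A minimal sketch fitting the paper's (1, 1, 1)(0, 1, 1)12 specification and repeating the Ljung-Box residual check; the sinusoidal monthly counts are synthetic stand-ins for the mortality data.

        import numpy as np
        import pandas as pd
        from statsmodels.stats.diagnostic import acorr_ljungbox
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(3)
        months = pd.date_range("2000-01", periods=144, freq="MS")
        deaths = (500 + 50 * np.sin(2 * np.pi * months.month / 12)
                  + rng.normal(0, 20, 144))
        y = pd.Series(deaths, index=months)  # hypothetical monthly death counts

        fit = SARIMAX(y, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
        print(fit.aic, fit.bic)                      # information criteria
        print(acorr_ljungbox(fit.resid, lags=[12]))  # residual autocorrelation check
        print(fit.forecast(steps=12))                # forecasts for the next year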

  18. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  19. Application of the Periodic Average System Model in Dam Deformation Analysis

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2015-01-01

    Dams are among the most important hydraulic engineering facilities used for water supply, flood control, and hydroelectric power. Monitoring of dams is crucial since deformation might occur. How to obtain deformation information and then judge the safety conditions is the key and difficult problem in the dam deformation monitoring field. This paper proposes the periodic average system model and introduces the concept of “settlement activity” for the dam deformation problem. Long-term deformation monitoring was carried out at a pumped-storage power station; the model, combined with settlement activity, is used for single-point deformation analysis, and the whole settlement activity profile is then drawn by clustering analysis. Considering the cumulative settlement value of every point, the dam deformation trend is analyzed in an intuitive way, and an analysis mode combining single points with multiple points is realized. The results show that the key deformation information of the dam can be easily grasped by applying the periodic average system model together with the distribution diagram of settlement activity. Above all, the ideas of this research provide an effective method for dam deformation analysis.

  20. Prognostic factors for urachal cancer: a bayesian model-averaging approach.

    Science.gov (United States)

    Kim, In Kyong; Lee, Joo Yong; Kwon, Jong Kyou; Park, Jae Joon; Cho, Kang Su; Ham, Won Sik; Hong, Sung Joon; Yang, Seung Choul; Choi, Young Deuk

    2014-09-01

    This study was conducted to evaluate prognostic factors and cancer-specific survival (CSS) in a cohort of 41 patients with urachal carcinoma by use of a Bayesian model-averaging approach. Our cohort included 41 patients with urachal carcinoma who underwent extended partial cystectomy, total cystectomy, transurethral resection, chemotherapy, or radiotherapy at a single institute. All patients were classified by both the Sheldon and the Mayo staging systems according to histopathologic reports and preoperative radiologic findings. Kaplan-Meier survival curves and Cox proportional-hazards regression models were used to investigate prognostic factors, and a Bayesian model-averaging approach was performed to confirm the significance of each variable by using posterior probabilities. The mean age of the patients was 49.88 ± 13.80 years and the male-to-female ratio was 24:17. The median follow-up was 5.42 years (interquartile range, 2.8-8.4 years). Five- and 10-year CSS rates were 55.9% and 43.4%, respectively. Lower Sheldon (p=0.004) and Mayo stages were significantly associated with cancer-specific mortality in urachal carcinoma. The Mayo staging system might be more effective than the Sheldon staging system. In addition, the multivariate analyses suggested that tumor size may be a prognostic factor for urachal carcinoma.

  1. Average and dispersion of the luminosity-redshift relation in the concordance model

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Dayan, I. [DESY Hamburg (Germany). Theory Group; Gasperini, M. [Bari Univ. (Italy). Dipt. di Fisica; Istituto Nazionale di Fisica Nucleare, Bari (Italy); Marozzi, G. [College de France, 75 - Paris (France); Geneve Univ. (Switzerland). Dept. de Physique Theorique and CAP; Nugier, F. [Ecole Normale Superieure CNRS, Paris (France). Laboratoire de Physique Theorique; Veneziano, G. [College de France, 75 - Paris (France); CERN, Geneva (Switzerland). Physics Dept.; New York Univ., NY (United States). Dept. of Physics

    2013-03-15

    Starting from the luminosity-redshift relation recently given up to second order in the Poisson gauge, we calculate the effects of the realistic stochastic background of perturbations of the so-called concordance model on the combined light-cone and ensemble average of various functions of the luminosity distance, and on their variance, as functions of redshift. We apply a gauge-invariant light-cone averaging prescription which is free from infrared and ultraviolet divergences, making our results robust with respect to changes of the corresponding cutoffs. Our main conclusions, in part already anticipated in a recent letter for the case of a perturbation spectrum computed in the linear regime, are that such inhomogeneities not only cannot avoid the need for dark energy, but also cannot prevent, in principle, the determination of its parameters down to an accuracy of order 10{sup -3} - 10{sup -5}, depending on the averaged observable and on the regime considered for the power spectrum. However, taking into account the appropriate corrections arising in the non-linear regime, we predict an irreducible scatter of the data approaching the 10% level which, for limited statistics, will necessarily limit the attainable precision. The predicted dispersion appears to be in good agreement with current observational estimates of the distance-modulus variance due to Doppler and lensing effects (at low and high redshifts, respectively), and represents a challenge for future precision measurements.

  2. Average and dispersion of the luminosity-redshift relation in the concordance model

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Dayan, I. [Deutsches Elektronen-Synchrotron DESY, Theory Group, D-22603 Hamburg (Germany); Gasperini, M. [Dipartimento di Fisica, Università di Bari, Via G. Amendola 173, 70126 Bari (Italy); Marozzi, G.; Veneziano, G. [Collège de France, 11 Place M. Berthelot, 75005 Paris (France); Nugier, F., E-mail: ido.bendayan@desy.de, E-mail: maurizio.gasperini@ba.infn.it, E-mail: giovanni.marozzi@unige.ch, E-mail: fabien.nuglier@lpt.ens.fr, E-mail: gabriele.veneziano@cern.ch [Laboratoire de Physique Théorique de l'École Normale Supérieure, CNRS UMR 8549, 24 Rue Lhomond, 75005 Paris (France)

    2013-06-01

    Starting from the luminosity-redshift relation recently given up to second order in the Poisson gauge, we calculate the effects of the realistic stochastic background of perturbations of the so-called concordance model on the combined light-cone and ensemble average of various functions of the luminosity distance, and on their variance, as functions of redshift. We apply a gauge-invariant light-cone averaging prescription which is free from infrared and ultraviolet divergences, making our results robust with respect to changes of the corresponding cutoffs. Our main conclusions, in part already anticipated in a recent letter for the case of a perturbation spectrum computed in the linear regime, are that such inhomogeneities not only cannot avoid the need for dark energy, but also cannot prevent, in principle, the determination of its parameters down to an accuracy of order 10{sup −3}−10{sup −5}, depending on the averaged observable and on the regime considered for the power spectrum. However, taking into account the appropriate corrections arising in the non-linear regime, we predict an irreducible scatter of the data approaching the 10% level which, for limited statistics, will necessarily limit the attainable precision. The predicted dispersion appears to be in good agreement with current observational estimates of the distance-modulus variance due to Doppler and lensing effects (at low and high redshifts, respectively), and represents a challenge for future precision measurements.

  3. Reducing the Uncertainty in Atlantic Meridional Overturning Circulation Projections Using Bayesian Model Averaging

    Science.gov (United States)

    Olson, R.; An, S. I.

    2016-12-01

    Atlantic Meridional Overturning Circulation (AMOC) in the ocean might slow down in the future, which can lead to a host of climatic effects in North Atlantic and throughout the world. Despite improvements in climate models and availability of new observations, AMOC projections remain uncertain. Here we constrain CMIP5 multi-model ensemble output with observations of a recently developed AMOC index to provide improved Bayesian predictions of future AMOC. Specifically, we first calculate yearly AMOC index loosely based on Rahmstorf et al. (2015) for years 1880—2004 for both observations, and the CMIP5 models for which relevant output is available. We then assign a weight to each model based on a Bayesian Model Averaging method that accounts for differential model skill in terms of both mean state and variability. We include the temporal autocorrelation in climate model errors, and account for the uncertainty in the parameters of our statistical model. We use the weights to provide future weighted projections of AMOC, and compare them to un-weighted ones. Our projections use bootstrapping to account for uncertainty in internal AMOC variability. We also perform spectral and other statistical analyses to show that AMOC index variability, both in models and in observations, is consistent with red noise. Our results improve on and complement previous work by using a new ensemble of climate models, a different observational metric, and an improved Bayesian weighting method that accounts for differential model skill at reproducing internal variability. Reference: Rahmstorf, S., Box, J. E., Feulner, G., Mann, M. E., Robinson, A., Rutherford, S., & Schaffernicht, E. J. (2015). Exceptional twentieth-century slowdown in atlantic ocean overturning circulation. Nature Climate Change, 5(5), 475-480. doi:10.1038/nclimate2554

  4. Particular solutions of a problem resulting from Huang's model by averaging according to Fatou's scheme

    Science.gov (United States)

    Shirmin, G. I.

    1980-08-01

    In the present paper, an averaging on the basis of Fatou's (1931) scheme is obtained within the framework of a version of the doubly restricted problem of four bodies. A proof is obtained for the existence of particular solutions that are analogous to the Eulerian and Lagrangian solutions. The solutions are applied to an analysis of first-order secular disturbances in the positions of libration points, caused by the influence of a body whose attraction is neglected in the classical model of the restricted three-body problem. These disturbances are shown to lead to continuous displacements of the libration points.

  5. Statistical methodology: V. Time series analysis using autoregressive integrated moving average (ARIMA) models.

    Science.gov (United States)

    Nelson, B K

    1998-07-01

    Most methods of defining a statistical relationship between variables require that errors in prediction not be correlated. That is, knowledge of the error in one instance should not give information about the likely error in the next measurement. Real data frequently fail this requirement. If a Durbin-Watson statistic reveals that there is autocorrelation of sequential data points, analysis of variance and regression results will be invalid and possibly misleading. Such data sets may be analyzed by time series methodologies such as autoregressive integrated moving average (ARIMA) modeling. This method is demonstrated by an example from a public policy intervention.
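
    The screening step described above is easy to demonstrate. In the sketch below (all numbers invented), a regression with deliberately AR(1)-correlated errors yields a Durbin-Watson statistic well below 2, the signal that ordinary regression inference is suspect and an ARIMA-type analysis is warranted.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(4)
        x = np.arange(100, dtype=float)
        e = np.zeros(100)
        for t in range(1, 100):          # AR(1) errors violate independence
            e[t] = 0.7 * e[t - 1] + rng.normal()
        y = 2.0 + 0.3 * x + e

        res = sm.OLS(y, sm.add_constant(x)).fit()
        print(f"Durbin-Watson = {durbin_watson(res.resid):.2f}")
        # Values near 2 indicate no autocorrelation; values toward 0 indicate
        # positive autocorrelation, pointing to ARIMA-style time series modeling.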

  6. Averaged Solar Radiation Pressure Modeling for High Area-to-Mass Ratio Objects in Geostationary Space

    Science.gov (United States)

    Eapen, Roshan Thomas

    Space Situational Awareness is aimed at providing timely and accurate information about the space environment. This was originally done by maintaining a catalog of space object states (position and velocity). Traditionally, a cannonball model would be used to propagate the dynamics. This can be acceptable for an active satellite, since its attitude motion can be stabilized. However, for non-functional space debris the cannonball model falls short, because it is attitude-independent while debris is prone to tumbling. Furthermore, high area-to-mass ratio objects are sensitive to very small changes in perturbations, particularly those of the non-conservative kind. This renders the cannonball model imprecise in propagating the orbital motion of such objects. With the ever-increasing population of man-made space debris, in-orbit explosions, collisions, and potential impacts of near Earth objects, it has become imperative to move from the traditional approach to a more predictive and exact treatment. Hence, a more precise orbit propagation model needs to be developed, which warrants a better understanding of the perturbations in near Earth space. The attitude dependency of some perturbations renders the orbit-attitude motion coupled. In this work, a coupled orbit-attitude model is developed taking both conservative and non-conservative forces and torques into account. A high area-to-mass ratio multi-layer insulation object in geostationary space is simulated using the coupled dynamics model. However, the high-fidelity model developed is computationally expensive. This work therefore aims at developing a model that averages the short-term solar radiation pressure force so as to perform better computationally than the cannonball model while retaining fidelity comparable to the coupled orbit-attitude model.

  7. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.

  8. Inferring cellular regulatory networks with Bayesian model averaging for linear regression (BMALR).

    Science.gov (United States)

    Huang, Xun; Zi, Zhike

    2014-08-01

    Bayesian network and linear regression methods have been widely applied to reconstruct cellular regulatory networks. In this work, we propose a Bayesian model averaging for linear regression (BMALR) method to infer molecular interactions in biological systems. This method uses a new closed form solution to compute the posterior probabilities of the edges from regulators to the target gene within a hybrid framework of Bayesian model averaging and linear regression methods. We have assessed the performance of BMALR by benchmarking on both in silico DREAM datasets and real experimental datasets. The results show that BMALR achieves both high prediction accuracy and high computational efficiency across different benchmarks. A pre-processing of the datasets with the log transformation can further improve the performance of BMALR, leading to a new top overall performance. In addition, BMALR can achieve robust high performance in community predictions when it is combined with other competing methods. The proposed method BMALR is competitive compared to the existing network inference methods. Therefore, BMALR will be useful to infer regulatory interactions in biological networks. A free open source software tool for the BMALR algorithm is available at https://sites.google.com/site/bmalr4netinfer/.

  9. A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms

    Science.gov (United States)

    Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.

    1992-01-01

    A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.

  10. Analyzing Average and Conditional Effects with Multigroup Multilevel Structural Equation Models

    Directory of Open Access Journals (Sweden)

    Axel Mayer

    2014-04-01

    Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advances compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension.

  11. Finite Element Modelling of the Effects of Average Grain Size and Misorientation Angle on the Deformation

    Directory of Open Access Journals (Sweden)

    K Sanusi

    2016-09-01

    This paper comprises an investigation using finite element analysis to study the behaviour of nanocrystalline grain structures during Equal Channel Angular Press (ECAP) processing of metals. The effects of average grain size and misorientation angle on the deformation are examined in order to see how microstructural features might explain the observed increase in strength of nanocrystalline metals. This approach forms a convenient starting point, as it offers a simple way of including grain size effects and grain misorientation, to which additional phenomena could be added by developing the material model used to describe the anisotropy and by techniques that would automatically re-mesh the refined grain structure produced under severe plastic deformation. From this, it can be concluded that the additional techniques incorporated into the finite element model produced effects that correspond to the observed behaviour of real polycrystals.

  12. High resolution forecasting for wind energy applications using Bayesian model averaging

    Directory of Open Access Journals (Sweden)

    Jennifer F. Courtney

    2013-02-01

    Two methods of post-processing the uncalibrated wind speed forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system (EPS) are presented here. Both methods involve statistically post-processing the EPS, or a downscaled version of it, with Bayesian model averaging (BMA). The first method applies BMA directly to the EPS data. The second method involves clustering the EPS to eight representative members (RMs) and downscaling the data through two limited area models at two resolutions. Four weighted ensemble mean forecasts are produced and used as input to the BMA method. Both methods are tested against 13 meteorological stations around Ireland with 1 yr of forecast/observation data. Results show calibration and accuracy improvements using both methods, with the best results stemming from Method 2, which has comparatively low mean absolute error and continuous ranked probability scores.

  13. A cost-precision model for marine environmental monitoring, based on time-integrated averages.

    Science.gov (United States)

    Båmstedt, Ulf; Brugel, Sonia

    2017-07-01

    Ongoing marine monitoring programs are seldom designed to detect changes in the environment between different years, mainly due to the high number of samples required for a sufficient statistical precision. We here show that pooling over time (time integration) of seasonal measurements provides an efficient method of reducing variability, thereby improving the precision and power in detecting inter-annual differences. Such data from weekly environmental sensor profiles at 21 stations in the northern Bothnian Sea was used in a cost-precision spatio-temporal allocation model. Time-integrated averages for six different variables over 6 months from a rather heterogeneous area showed low variability between stations (coefficient of variation, CV, range of 0.6-12.4%) compared to variability between stations in a single day (CV range 2.4-88.6%), or variability over time for a single station (CV range 0.4-110.7%). Reduced sampling frequency from weekly to approximately monthly sampling did not change the results markedly, whereas lower frequency differed more from results with weekly sampling. With monthly sampling, high precision and power of estimates could therefore be achieved with a low number of stations. With input of cost factors like ship time, labor, and analyses, the model can predict the cost for a given required precision in the time-integrated average of each variable by optimizing sampling allocation. A following power analysis can provide information on minimum sample size to detect differences between years with a required power. Alternatively, the model can predict the precision of annual means for the included variables when the program has a pre-defined budget. Use of time-integrated results from sampling stations with different areal coverage and environmental heterogeneity can thus be an efficient strategy to detect environmental differences between single years, as well as a long-term temporal trend. Use of the presented allocation model will then
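
    The power-analysis step mentioned above can be approximated with a standard two-sample normal formula. This is a generic approximation under assumed normality, not the authors' exact allocation model, and the CV and detectable difference below are illustrative.

        import math
        from scipy.stats import norm

        def min_samples_per_year(cv, rel_diff, alpha=0.05, power=0.8):
            # Samples per year needed to detect a relative difference rel_diff
            # between two annual means, given a coefficient of variation cv
            # (two-sample normal approximation).
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return math.ceil(2 * ((z_a + z_b) * cv / rel_diff) ** 2)

        # e.g. 5% CV of the time-integrated average, detect a 10% change
        print(min_samples_per_year(cv=0.05, rel_diff=0.10))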

  14. Autonomous Operation of Hybrid Microgrid With AC and DC Subgrids

    DEFF Research Database (Denmark)

    Chiang Loh, Poh; Li, Ding; Kang Chai, Yi

    2013-01-01

    This paper investigates power-sharing issues in an autonomous hybrid microgrid. Unlike existing microgrids, which are purely ac, the hybrid microgrid studied here comprises dc and ac subgrids interconnected by power electronic interfaces. The main challenge here is to manage power flows among all... converters. Suitable control and normalization schemes are now developed for controlling them, with the overall hybrid microgrid performance already verified in simulation and experiment....

  15. Evapotranspiration and cloud variability at regional sub-grid scales

    Science.gov (United States)

    Vila-Guerau de Arellano, Jordi; Sikma, Martin; Pedruzo-Bagazgoitia, Xabier; van Heerwaarden, Chiel; Hartogensis, Oscar; Ouwersloot, Huug

    2017-04-01

    In regional and global models, uncertainties arise due to our incomplete understanding of the coupling between biochemical and physical processes. Representing their impact depends on our ability to calculate these processes using physically sound parameterizations, since they are unresolved at scales smaller than the grid size. More specifically over land, the coupling between evapotranspiration, turbulent transport of heat and moisture, and clouds lacks a combined representation that takes these sub-grid scale interactions into account. Our approach is based on understanding how radiation, surface exchange, turbulent transport and moist convection interact from the leaf scale to the cloud scale. We therefore place special emphasis on plant stomatal aperture as the main regulator of CO2 assimilation and water transpiration, a key source of moisture to the atmosphere. Plant functionality is critically modulated by interactions with atmospheric conditions occurring at very short spatiotemporal scales, such as cloud radiation perturbations or water vapour turbulent fluctuations. By explicitly resolving these processes, the LES (large-eddy simulation) technique enables us to characterize and better understand the interactions between canopies and the local atmosphere. This includes the adaptation time of vegetation to rapid changes in atmospheric conditions driven by turbulence or the presence of cumulus clouds. Our LES experiments are based on explicitly coupling the diurnal atmospheric dynamics to a plant physiology model. Our general hypothesis is that different partitioning of direct and diffuse radiation leads to different responses of the vegetation. As a result, there are changes in the water use efficiencies and shifts in the partitioning of sensible and latent heat fluxes under the presence of clouds. Our presentation is as follows. First, we discuss the ability of LES to reproduce the surface energy balance including photosynthesis and CO2 soil

  16. The application of naive Bayes model averaging to predict Alzheimer's disease from genome-wide data.

    Science.gov (United States)

    Wei, Wei; Visweswaran, Shyam; Cooper, Gregory F

    2011-01-01

    Predicting patient outcomes from genome-wide measurements holds significant promise for improving clinical care. The large number of measurements (e.g., single nucleotide polymorphisms (SNPs)), however, makes this task computationally challenging. This paper evaluates the performance of an algorithm that predicts patient outcomes from genome-wide data by efficiently model averaging over an exponential number of naive Bayes (NB) models. This model-averaged naive Bayes (MANB) method was applied to predict late onset Alzheimer's disease in 1411 individuals who each had 312,318 SNP measurements available as genome-wide predictive features. Its performance was compared to that of a naive Bayes algorithm without feature selection (NB) and with feature selection (FSNB). Performance of each algorithm was measured in terms of area under the ROC curve (AUC), calibration, and run time. The training time of MANB (16.1 s) was fast like NB (15.6 s), while FSNB (1684.2 s) was considerably slower. Each of the three algorithms required less than 0.1 s to predict the outcome of a test case. MANB had an AUC of 0.72, which is significantly better than the AUC of 0.59 by NB (p<0.00001), but not significantly different from the AUC of 0.71 by FSNB. MANB was better calibrated than NB, and FSNB was even better in calibration. A limitation was that only one dataset and two comparison algorithms were included in this study. MANB performed comparatively well in predicting a clinical outcome from a high-dimensional genome-wide dataset. These results provide support for including MANB in the methods used to predict outcomes from large, genome-wide datasets.
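
    MANB averages over an exponential number of feature subsets in closed form; that derivation does not fit a short sketch, but the flavor of model-averaged naive Bayes can be conveyed by explicitly averaging predictions over random feature subsets. This Monte Carlo stand-in is not the paper's algorithm, and the data below are synthetic.

        import numpy as np
        from sklearn.naive_bayes import BernoulliNB

        rng = np.random.default_rng(5)
        X = rng.integers(0, 2, size=(200, 50))  # synthetic binary SNP-like features
        y = (X[:, 3] + X[:, 7] + rng.integers(0, 2, 200) >= 2).astype(int)
        X_train, y_train, X_test = X[:150], y[:150], X[150:155]

        probs = []
        for _ in range(25):  # average predictions over random feature subsets
            cols = rng.choice(50, size=10, replace=False)
            model = BernoulliNB().fit(X_train[:, cols], y_train)
            probs.append(model.predict_proba(X_test[:, cols])[:, 1])
        print(np.mean(probs, axis=0))  # model-averaged outcome probabilities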

  17. SU-F-R-44: Modeling Lung SBRT Tumor Response Using Bayesian Network Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Diamant, A; Ybarra, N; Seuntjens, J [McGill University, Montreal, Quebec (Canada); El Naqa, I [University of Michigan, Ann Arbor, MI (United States)

    2016-06-15

    Purpose: The prediction of tumor control after a patient receives lung SBRT (stereotactic body radiation therapy) has proven to be challenging, due to the complex interactions between an individual’s biology and dose-volume metrics. Many of these variables have predictive power when combined, a feature that we exploit using a graph modeling approach based on Bayesian networks. This provides a probabilistic framework that allows for accurate and visually intuitive predictive modeling. The aim of this study is to uncover possible interactions between an individual patient’s characteristics and generate a robust model capable of predicting said patient’s treatment outcome. Methods: We investigated a cohort of 32 prospective patients from multiple institutions who had received curative SBRT to the lung. The number of patients exhibiting tumor failure was observed to be 7 (event rate of 22%). The serum concentration of 5 biomarkers previously associated with NSCLC (non-small cell lung cancer) was measured pre-treatment. A total of 21 variables were analyzed, including dose-volume metrics with BED (biologically effective dose) correction and clinical variables. A Markov Chain Monte Carlo technique estimated the posterior probability distribution of the potential graphical structures. The probability of tumor failure was then estimated by averaging the top 100 graphs and applying Bayes’ rule. Results: The optimal Bayesian model generated throughout this study incorporated the PTV volume, the serum concentration of the biomarker EGFR (epidermal growth factor receptor) and prescription BED. This predictive model recorded an area under the receiver operating characteristic curve of 0.94(1), providing better performance compared to competing methods in other literature. Conclusion: The use of biomarkers in conjunction with dose-volume metrics allows for the generation of a robust predictive model. The preliminary results of this report demonstrate that it is possible

  18. Reynolds-Averaged Turbulence Model Assessment for a Highly Back-Pressured Isolator Flowfield

    Science.gov (United States)

    Baurle, Robert A.; Middleton, Troy F.; Wilson, L. G.

    2012-01-01

    The use of computational fluid dynamics in scramjet engine component development is widespread in the existing literature. Unfortunately, the quantification of model-form uncertainties is rarely addressed with anything other than sensitivity studies, requiring that the computational results be intimately tied to and calibrated against existing test data. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Due to ground test facility limitations, this expanded role is believed to be a requirement by some in the test and evaluation community if scramjet engines are to be given serious consideration as a viable propulsion device. An effort has been initiated at the NASA Langley Research Center to validate several turbulence closure models used for Reynolds-averaged simulations of scramjet isolator flows. The turbulence models considered were the Menter BSL, Menter SST, Wilcox 1998, Wilcox 2006, and the Gatski-Speziale explicit algebraic Reynolds stress models. The simulations were carried out using the VULCAN computational fluid dynamics package developed at the NASA Langley Research Center. A procedure to quantify the numerical errors was developed to account for discretization errors in the validation process. This procedure utilized the grid convergence index defined by Roache as a bounding estimate for the numerical error. The validation data was collected from a mechanically back-pressured constant area (1 × 2 inch) isolator model with an isolator entrance Mach number of 2.5. As expected, the model-form uncertainty was substantial for the shock-dominated, massively separated flowfield within the isolator, as evidenced by a 6-duct-height variation in shock train length depending on the turbulence model employed. Generally speaking, the turbulence models that did not include an explicit stress limiter more closely
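
    Roache's grid convergence index used here has a standard closed form, GCI = Fs|ε|/(r^p − 1), with relative change ε between grid solutions, refinement ratio r, observed order of accuracy p, and safety factor Fs. A minimal sketch (the shock-train lengths are invented for illustration):

        def gci_fine(f_fine, f_coarse, r=2.0, p=2.0, fs=1.25):
            # Grid convergence index for the fine-grid solution:
            # fs * |relative change| / (r**p - 1)
            eps = abs((f_coarse - f_fine) / f_fine)
            return fs * eps / (r**p - 1.0)

        # Hypothetical shock-train lengths (in duct heights) on two grids
        print(f"GCI = {gci_fine(11.8, 12.6):.3%}")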

  19. Association of climate drivers with rainfall in New South Wales, Australia, using Bayesian Model Averaging

    Science.gov (United States)

    Duc, Hiep Nguyen; Rivett, Kelly; MacSween, Katrina; Le-Anh, Linh

    2017-01-01

    Rainfall in New South Wales (NSW), located in the southeast of the Australian continent, is known to be influenced by four major climate drivers: the El Niño/Southern Oscillation (ENSO), the Interdecadal Pacific Oscillation (IPO), the Southern Annular Mode (SAM) and the Indian Ocean Dipole (IOD). Many studies have shown the influences of ENSO, IPO modulation, SAM and IOD on rainfall in Australia, and on southeast Australia in particular. However, only limited work has been undertaken using a multiple regression framework to examine the extent of the combined effect of these climate drivers on rainfall. This paper analysed the role of these combined climate drivers and their interaction on rainfall in NSW using Bayesian Model Averaging (BMA), which accounts for model uncertainty by considering each of the linear models across the whole model space, equal to the set of all possible combinations of predictors, to find the model posterior probabilities and the expected predictor coefficients. Using BMA for linear regression models, we are able to corroborate and confirm the results of many previous studies. In addition, the method gives the ranking order of importance and the probability of the association of each of the climate drivers, and their interactions, with rainfall at a site. The ability to quantify the relative contribution of the climate drivers offers the key to understanding the complex interaction of drivers on rainfall, or lack of rainfall, in a region, such as during the three big droughts in southeastern Australia whose causes have recently been the subject of discussion and debate.

  20. Spatial prediction of N2O emissions in pasture: a Bayesian model averaging analysis.

    Directory of Open Access Journals (Sweden)

    Xiaodong Huang

    Nitrous oxide (N2O) is one of the greenhouse gases that can contribute to global warming. Spatial variability of N2O can lead to large uncertainties in prediction. However, previous studies have often ignored the spatial dependency when quantifying the relationships between N2O and environmental factors. Few studies have examined the impacts of various spatial correlation structures (e.g., independence, distance-based, and neighbourhood-based) on spatial prediction of N2O emissions. This study aimed to assess the impact of three spatial correlation structures on spatial predictions and to calibrate the spatial prediction using Bayesian model averaging (BMA) based on replicated, irregular point-referenced data. The data were measured in 17 chambers randomly placed across a 271 m² field between October 2007 and September 2008 in the southeast of Australia. We used a Bayesian geostatistical model and a Bayesian spatial conditional autoregressive (CAR) model to investigate and accommodate spatial dependency, and to estimate the effects of environmental variables on N2O emissions across the study site. We compared these with a Bayesian regression model with independent errors. The three approaches resulted in different derived maps of spatial prediction of N2O emissions. We found that incorporating spatial dependency in the model not only substantially improved predictions of N2O emission from soil, but also better quantified uncertainties of soil parameters in the study. The hybrid model structure obtained by BMA improved the accuracy of spatial prediction of N2O emissions across this study region.

  1. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    Directory of Open Access Journals (Sweden)

    I. Soltanzadeh

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days using a 40 days training sample of forecasts and relative verification data. The calibrated probabilistic forecasts were assessed using rank histogram and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts performed usually better than the best member's deterministic forecast.

  2. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    Energy Technology Data Exchange (ETDEWEB)

    Soltanzadeh, I. [Tehran Univ. (Iran, Islamic Republic of). Inst. of Geophysics; Azadi, M.; Vakili, G.A. [Atmospheric Science and Meteorological Research Center (ASMERC), Teheran (Iran, Islamic Republic of)

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) and for HRM the initial and boundary conditions come from analysis of Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using BMA technique for 120 days using a 40 days training sample of forecasts and relative verification data. The calibrated probabilistic forecasts were assessed using rank histogram and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast it was found that the deterministic-style BMA forecasts performed usually better than the best member's deterministic forecast. (orig.)

  4. Parameterizing the Effects of Finite Crested Wave Breaking in Wave-Averaged Models

    Science.gov (United States)

    Kumar, N.; Suanda, S. H.; Feddersen, F.

    2016-02-01

    Finite crested breaking waves generate a rotational body force that creates two-dimensional turbulent eddies with strong rotational velocities, capable of exchanging tracers (sediment, pathogens, contaminants) between the surfzone and the inner shelf. This eddy generation mechanism is strongly tied to the wave directional spread. Wave-resolving Boussinesq models like funwaveC include finite crest length breaking and accurately simulate surfzone eddy generation. However, this surfzone eddy generation mechanism is not included in existing wave-averaged models (e.g., the Coupled Ocean Atmosphere Wave Sediment Transport Modeling System, COAWST), leading to an incomplete representation of exchange between the surfzone and the inner shelf. In this study, 250 funwaveC simulations with random, directionally spread waves spanning a range of beach slopes and wave conditions are used to simulate surfzone eddies. With these simulations, the stream function associated with breaking wave eddy forcing is isolated and quantified in terms of intensity, cross- and alongshore widths, and propagation rates, and then parameterized as a function of wave parameters and beach slope. The parameterized stream function is implemented into COAWST as a stochastic surfzone eddy module, which is used to study vorticity evolution from the surfzone to the inner shelf and the interaction between a stratified water column and surfzone eddies, and which overall provides a more complete representation of surfzone-eddy-induced cross-shore exchange. Funded by the Office of Naval Research.

  5. Comparison of Two Gas Selection Methodologies: An Application of Bayesian Model Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Renholds, Andrea S.; Thompson, Sandra E.; Anderson, Kevin K.; Chilton, Lawrence K.

    2006-03-31

    One goal of hyperspectral imagery analysis is the detection and characterization of plumes. Characterization includes identifying the gases in the plumes, which is a model selection problem. Two gas selection methods compared in this report are Bayesian model averaging (BMA) and minimum Akaike information criterion (AIC) stepwise regression (SR). Simulated spectral data from a three-layer radiance transfer model were used to compare the two methods. Test gases were chosen to span the types of spectra observed, which exhibit peaks ranging from broad to sharp. The size and complexity of the search libraries were varied. Background materials were chosen to either replicate a remote area of eastern Washington or feature many common background materials. For many cases, BMA and SR performed the detection task comparably in terms of the receiver operating characteristic curves. For some gases, BMA performed better than SR when the size and complexity of the search library increased. This is encouraging because we expect improved BMA performance upon incorporation of prior information on background materials and gases.
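    One common bridge between the two selection philosophies compared above is to convert AIC scores into approximate posterior model probabilities (Akaike weights), which can then be used BMA-style. A hedged sketch with invented AIC values; this illustrates the general idea, not the report's exact method or data:

```python
# Akaike weights: approximate P(model | data) from per-model AIC scores.
import numpy as np

aic = np.array([312.4, 310.1, 315.9, 311.0])  # hypothetical AIC of candidate gas models
delta = aic - aic.min()                       # AIC differences from the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                      # normalize to probabilities
print(weights.round(3))
```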

  6. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    Energy Technology Data Exchange (ETDEWEB)

    Faranda, Davide, E-mail: davide.faranda@cea.fr; Dubrulle, Bérengère; Daviaud, François [Laboratoire SPHYNX, Service de Physique de l' Etat Condensé, DSM, CEA Saclay, CNRS URA 2464, 91191 Gif-sur-Yvette (France); Pons, Flavio Maria Emanuele [Dipartimento di Scienze Statistiche, Universitá di Bologna, Via delle Belle Arti 41, 40126 Bologna (Italy); Saint-Michel, Brice [Institut de Recherche sur les Phénomènes Hors Equilibre, Technopole de Chateau Gombert, 49 rue Frédéric Joliot Curie, B.P. 146, 13 384 Marseille (France); Herbert, Éric [Université Paris Diderot - LIED - UMR 8236, Laboratoire Interdisciplinaire des Énergies de Demain, Paris (France); Cortet, Pierre-Philippe [Laboratoire FAST, CNRS, Université Paris-Sud (France)

    2014-10-15

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both the Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We find that the ARMA analysis is well correlated with spatial structures of the flow, and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that Υ is highest in regions where shear layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features, where the ARMA technique is efficient in discriminating the different stability branches of the system.
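    The kind of ARMA(p, q) fit underlying this analysis can be sketched with statsmodels on a synthetic record; the order (2, 1) and the toy series are illustrative assumptions, and the Υ index itself is not reproduced here.

```python
# Fit an ARMA(2,1) model to a synthetic velocity record (ARMA == ARIMA with d = 0).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
u = rng.normal(size=4096)
for t in range(2, u.size):                 # toy series with short-range correlation
    u[t] += 0.6 * u[t - 1] - 0.2 * u[t - 2]

result = ARIMA(u, order=(2, 0, 1)).fit()   # estimate phi_1, phi_2, theta_1
print(result.aic, result.params.round(3))
```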

  7. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  8. Two-Dimensional Depth-Averaged Beach Evolution Modeling: Case Study of the Kizilirmak River Mouth, Turkey

    DEFF Research Database (Denmark)

    Baykal, Cüneyt; Ergin, Ayşen; Güler, Işikhan

    2014-01-01

    This study presents an application of a two-dimensional beach evolution model to a shoreline change problem at the Kizilirmak River mouth, which has been facing severe coastal erosion problems for more than 20 years. The shoreline changes at the Kizilirmak River mouth have been thus far investigated by satellite images, physical model tests, and one-dimensional numerical models. The current study uses a two-dimensional depth-averaged numerical beach evolution model, developed based on existing methodologies. This model is mainly composed of four main submodels: a phase-averaged spectral wave transformation model, a two-dimensional depth-averaged numerical wave-induced circulation model, a sediment transport model, and a bottom evolution model. To validate and verify the numerical model, it is applied to several cases of laboratory experiments. Later, the model is applied to a shoreline change problem...

  9. Parameterization of subgrid plume dilution for use in large-scale atmospheric simulations

    Directory of Open Access Journals (Sweden)

    A. D. Naiman

    2010-03-01

    Full Text Available A new model of plume dynamics has been developed for use as a subgrid model of plume dilution in a large-scale atmospheric simulation. The model uses mean wind, shear, and diffusion parameters derived from the local large-scale variables to advance the plume cross-sectional shape and area in time. Comparisons with a large eddy simulation of aircraft emission plume dynamics, with an analytical solution to the dynamics of a sheared Gaussian plume, and with measurements of aircraft exhaust plume dilution at cruise altitude show good agreement with these previous studies. We argue that the model also provides a reasonable approximation of line-shaped contrail dilution and give an example of how it can be applied in a global climate model.
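    The flavor of the dynamics such a model advances can be sketched with the standard second-moment equations for a Gaussian plume cross-section under uniform cross-wind shear and diffusion; this is not necessarily the authors' exact formulation, and all parameter values below are invented.

```python
# Second moments of a sheared, diffusing Gaussian plume cross-section (y, z):
#   d(syy)/dt = 2*s*syz + 2*Dh,  d(syz)/dt = s*szz,  d(szz)/dt = 2*Dv
import numpy as np

s, Dh, Dv = 2e-3, 20.0, 0.15       # shear (1/s) and eddy diffusivities (m^2/s), assumed
syy, syz, szz = 100.0, 0.0, 25.0   # initial second moments (m^2)

dt = 1.0
for _ in range(3600):              # one hour of plume aging, forward Euler
    syy, syz, szz = (syy + dt * (2.0 * s * syz + 2.0 * Dh),
                     syz + dt * (s * szz),
                     szz + dt * (2.0 * Dv))

area = 2.0 * np.pi * np.sqrt(syy * szz - syz**2)   # effective cross-sectional area
print(area)
```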

  10. Model Averaging for Predicting the Exposure to Aflatoxin B1 Using DNA Methylation in White Blood Cells of Infants

    Science.gov (United States)

    Rahardiantoro, S.; Sartono, B.; Kurnia, A.

    2017-03-01

    In recent years, DNA methylation has become a key means of revealing the patterns of many human diseases, and huge amounts of data are an inescapable feature of this work. Some researchers want to make predictions based on these data, especially using regression analysis, but the classical approach fails at this task. Model averaging by Ando and Li [1] is an alternative approach to this problem. This research applied model averaging to obtain the best prediction from high-dimensional data. As a practical case study, the data of Vargas et al. [3] on exposure to aflatoxin B1 (AFB1) and DNA methylation in white blood cells of infants in The Gambia were used in the implementation of model averaging. The best ensemble model was selected based on the minimum MAPE, MAE, and MSE of the predictions. The result is an ensemble model obtained by model averaging with 15 predictors in each candidate model.
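    The three selection scores named above are standard; a minimal sketch with toy numbers (not the study's data):

```python
# MAPE, MAE and MSE, the scores used to rank the candidate ensemble models.
import numpy as np

def mae(y, yhat):  return float(np.mean(np.abs(y - yhat)))
def mse(y, yhat):  return float(np.mean((y - yhat) ** 2))
def mape(y, yhat): return float(np.mean(np.abs((y - yhat) / y))) * 100.0  # y nonzero

y    = np.array([0.42, 0.55, 0.38, 0.61])   # toy observed outcomes
yhat = np.array([0.40, 0.58, 0.35, 0.60])   # toy model-averaged predictions
print(mape(y, yhat), mae(y, yhat), mse(y, yhat))
```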

  11. New metric for optimizing Continuous Loop Averaging Deconvolution (CLAD) sequences under the 1/f noise model.

    Science.gov (United States)

    Peng, Xian; Yuan, Han; Chen, Wufan; Wang, Tao; Ding, Lei

    2017-01-01

    Continuous loop averaging deconvolution (CLAD) is one of the proven methods for recovering transient auditory evoked potentials (AEPs) in rapid stimulation paradigms, and it requires an elaborate stimulus sequence design to attenuate the impact of noise in the data. The present study aimed to develop a new metric for gauging a CLAD sequence in terms of the noise gain factor (NGF), which has been proposed previously but is less effective in the presence of pink (1/f) noise. We derived the new metric by explicitly introducing the 1/f model into the proposed time-continuous sequence. We selected several representative CLAD sequences to test their noise properties on typical electroencephalogram (EEG) recordings, as well as on five real CLAD EEG recordings, to retrieve the middle latency responses. We also demonstrated the merit of the new metric in generating and quantifying optimized sequences using a classic genetic algorithm. The new metric shows evident improvements in measuring actual noise gains at different frequencies, and better performance than the original NGF in various aspects. The new metric is a generalized NGF measurement that can better quantify the performance of a CLAD sequence, and it provides a more efficient means of generating CLAD sequences via incorporation with optimization algorithms. The present study can facilitate the application of the CLAD paradigm with desired sequences in the clinic.

  12. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    Science.gov (United States)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of the different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provides little additional benefit.

  13. Benefits of dominance over additive models for the estimation of average effects in the presence of dominance

    NARCIS (Netherlands)

    Duenk, Pascal; Calus, Mario P.L.; Wientjes, Yvonne C.J.; Bijma, Piter

    2017-01-01

    In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and

  14. Application of a simple space-time averaged porous media model to flow in densely vegetated channels

    NARCIS (Netherlands)

    Hoffmann, M.R.

    2004-01-01

    Traditional flow modeling in open channels uses time-averaged turbulence models. These models are valid in clear fluid, but not if dense obstructions are present in the flow field. In this article we show that newly developed flow models can describe open channel flow as flow in a porous medium.

  15. Extending a Consensus-based Fuzzy Ordered Weighting Average (FOWA) Model in New Water Quality Indices

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Baghapour

    2017-07-01

    Full Text Available In developing a specific WQI (Water Quality Index), many quality parameters are involved, with different levels of importance. The impact of experts' different opinions and viewpoints, the risks affecting those opinions, and the plurality of the involved parameters double the significance of the issue. Hence, the current study applies a consensus-based FOWA (Fuzzy Ordered Weighting Average) model, one of the most powerful and well-known Multi-Criteria Decision-Making (MCDM) techniques, to determine the importance of the parameters used in the development of such WQIs, which is shown with an example. This operator provides the capability of modeling the risks in decision-making by applying the optimism degree of stakeholders and their power, coupled with the use of fuzzy numbers. In total, 22 water quality parameters for drinking purposes were considered in this study. To determine the weight of each parameter, the viewpoints of 4 decision-making groups of experts were taken into account. After determining the final weights, to validate the use of each parameter in a potential WQI, consensus degrees of both the decision makers and the parameters were calculated. The highest and lowest weight values, 0.999 and 0.073 respectively, were related to Hg and temperature. Given that the considered use was drinking, the parameters' weights and ranks were consistent with their health impacts. Moreover, the decision makers' highest and lowest consensus degrees were 0.9905 and 0.9669, respectively. Among the water quality parameters, temperature (with a consensus degree of 0.9972) and Pb (with a consensus degree of 0.9665) received the highest and lowest agreement with the decision-making group. This study indicated that the weight of parameters in determining water quality largely depends on the experts' opinions and approaches. Moreover, using the FOWA model provides accurate and closer-to-reality results on the significance of
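    At its core, an OWA-type operator attaches weights to rank positions rather than to particular decision makers; the fuzzy variant used in the study replaces the crisp scores below with fuzzy numbers. All scores and weights here are hypothetical:

```python
# Crisp Ordered Weighted Averaging (OWA): weights apply to sorted positions.
import numpy as np

def owa(scores, weights):
    b = np.sort(scores)[::-1]        # reorder arguments, largest first
    return float(np.dot(weights, b))

expert_scores = np.array([0.9, 0.6, 0.8, 0.7])  # hypothetical ratings of one parameter
rank_weights  = np.array([0.4, 0.3, 0.2, 0.1])  # optimism-tilted weights (sum to 1)
print(owa(expert_scores, rank_weights))         # 0.9*0.4 + 0.8*0.3 + 0.7*0.2 + 0.6*0.1
```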

  16. Parameterization for subgrid-scale motion of ice-shelf calving fronts

    Directory of Open Access Journals (Sweden)

    T. Albrecht

    2011-01-01

    Full Text Available A parameterization for the motion of ice-shelf fronts on a Cartesian grid in finite-difference land-ice models is presented. The scheme prevents artificial thinning of the ice shelf at its edge, which occurs due to the finite resolution of the model. The intuitive numerical implementation diminishes numerical dispersion at the ice front and enables the application of physical boundary conditions to improve the calculation of stress and velocity fields throughout the ice-sheet-shelf system. Numerical properties of this subgrid modification are assessed in the Potsdam Parallel Ice Sheet Model (PISM-PIK) for different geometries in one and two horizontal dimensions and are verified against an analytical solution in a flow-line setup.

  17. Combining multi-objective optimization and Bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Wohling, Thomas [NON LANL

    2008-01-01

    Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits predictive capabilities similar to those of the best-performing individual soil hydraulic model, (2) the size of the BMA uncertainty ranges increases with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful for generating forecast ensembles of soil hydraulic models.

  18. Ensemble learning and model averaging for material identification in hyperspectral imagery

    Science.gov (United States)

    Basener, William F.

    2017-05-01

    In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally by comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging. Using probabilities has the advantage that they can be summed over the spectra of any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of all probabilities for fabric spectra in the library. We can do the same to determine the probability for a specific type of fabric, or any level of specificity contained in our library. Probabilities not only tell us which material is most likely, they tell us how confident we can be in the material's presence; a probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know if the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm. In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
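    The roll-up from spectrum-level posteriors to class-level probabilities described above is a plain sum over library entries sharing a class label; a toy sketch with an invented library:

```python
# Summing per-spectrum posterior probabilities up a class hierarchy.
probs = {
    "fabric/nylon":   0.35,
    "fabric/cotton":  0.22,
    "paint/green":    0.18,
    "paint/gray":     0.15,
    "metal/aluminum": 0.10,
}

def class_probability(prefix):
    """P(class) = sum of posteriors of all library spectra in that class."""
    return sum(p for name, p in probs.items() if name.startswith(prefix))

print(class_probability("fabric"))         # 0.57: likely some fabric
print(class_probability("fabric/nylon"))   # 0.35: less certain at finer specificity
```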

  19. Performance of Reynolds Averaged Navier-Stokes Models in Predicting Separated Flows: Study of the Hump Flow Model Problem

    Science.gov (United States)

    Cappelli, Daniele; Mansour, Nagi N.

    2012-01-01

    Separation can be seen in most aerodynamic flows, but accurate prediction of separated flows is still a challenging problem for computational fluid dynamics (CFD) tools. The behavior of several Reynolds Averaged Navier-Stokes (RANS) models in predicting the separated flow over a wall-mounted hump is studied. The strengths and weaknesses of the most popular RANS models (Spalart-Allmaras, k-epsilon, k-omega, k-omega-SST) are evaluated using the open source software OpenFOAM. The hump flow modeled in this work has been documented in the 2004 CFD Validation Workshop on Synthetic Jets and Turbulent Separation Control. Only the baseline case is treated; the slot flow control cases are not considered in this paper. Particular attention is given to predicting the size of the recirculation bubble, the position of the reattachment point, and the velocity profiles downstream of the hump.

  20. Canonical event based Bayesian model averaging for post-processing of multi-model ensemble precipitation forecasts

    Science.gov (United States)

    Li, Wentao; Duan, Qingyun

    2017-04-01

    Precipitation forecasts from numerical weather models usually contain biases in terms of mean and spread, and need to be post-processed before applying them as input to hydrological models. The Bayesian Model Averaging (BMA) method is widely used for post-processing forecasts from multiple models. Traditionally, BMA is applied directly to time series of forecasts for a specific lead time. In this work, we propose to apply BMA based on "canonical events", which are precipitation events with specific lead times and durations, to fully extract information from raw forecasts. For example, canonical events can be designed as the daily precipitation for day 1 to day 5, and the aggregation or average of total precipitation from day 6 to day 10, because forecasts beyond 5 days still have some skill but are not as reliable as the first five days. Moreover, BMA parameters are traditionally calibrated using a moving window containing the forecast-observation pairs before a given forecast date, which cannot ensure similar meteorological conditions when a long training period is applied. In this work, the training dataset is chosen from the historical hindcast archive of forecast-observation pairs in a pre-specified time window surrounding a given forecast date. After all canonical events of different lead times and durations are calibrated for the BMA models, ensemble members are generated from the calibrated probability forecasts using the Schaake shuffle to preserve the temporal dependency of forecasts for different lead times. This canonical event based BMA makes fuller use of forecasts at different lead times and can generate continuous calibrated forecast time series for further application in hydrological modeling.
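    The Schaake shuffle step mentioned above can be sketched compactly: at each lead time, the calibrated ensemble values are reordered so that their ranks follow those of historical observation trajectories, which restores temporal dependence across lead times. The data here are synthetic.

```python
# Schaake shuffle: impose historical rank structure on calibrated samples.
import numpy as np

rng = np.random.default_rng(1)
n_members, n_leads = 5, 4
calibrated = rng.gamma(2.0, 2.0, size=(n_members, n_leads))  # post-BMA samples
historical = rng.gamma(2.0, 2.0, size=(n_members, n_leads))  # past obs trajectories

shuffled = np.empty_like(calibrated)
for t in range(n_leads):
    ranks = historical[:, t].argsort().argsort()       # rank of each trajectory at t
    shuffled[:, t] = np.sort(calibrated[:, t])[ranks]  # sorted values placed by rank
print(shuffled.round(2))
```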

  1. Implementation, Comparison and Application of an Average Simulation Model of a Wind Turbine Driven Doubly Fed Induction Generator

    Directory of Open Access Journals (Sweden)

    Lidula N. Widanagama Arachchige

    2017-10-01

    Full Text Available Wind turbine driven doubly-fed induction generators (DFIGs) are widely used in the wind power industry. With the increasing penetration of wind farms, analysis of their effect on power systems has become a critical requirement. This paper presents the modeling of wind turbine driven DFIGs using conventional vector controls in a detailed model of a DFIG that represents the power electronics (PE) converters with device-level models, and proposes an average model that eliminates the PE converters. The PSCAD/EMTDC™ (4.6) electromagnetic transient simulation software is used to develop the detailed model and the proposed average model of a DFIG. The comparison of the two models reveals that the designed average DFIG model is adequate for simulating and analyzing most transient conditions.

  2. Comparison of the average surviving fraction model with the integral biologically effective dose model for an optimal irradiation scheme.

    Science.gov (United States)

    Takagi, Ryo; Komiya, Yuriko; Sutherland, Kenneth L; Shirato, Hiroki; Date, Hiroyuki; Mizuta, Masahiro

    2018-01-04

    In this paper, we compare two radiation effect models: the average surviving fraction (ASF) model and the integral biologically effective dose (IBED) model for deriving the optimal irradiation scheme and show the superiority of ASF. Minimizing the effect on an organ at risk (OAR) is important in radiotherapy. The biologically effective dose (BED) model is widely used to estimate the effect on the tumor or on the OAR, for a fixed value of dose. However, this is not always appropriate because the dose is not a single value but is distributed. The IBED and ASF models are proposed under the assumption that the irradiation is distributed. Although the IBED and ASF models are essentially equivalent for deriving the optimal irradiation scheme in the case of uniform distribution, they are not equivalent in the case of non-uniform distribution. We evaluate the differences between them for two types of cancers: high α/β ratio cancer (e.g. lung) and low α/β ratio cancer (e.g. prostate), and for various distributions i.e. various dose-volume histograms. When we adopt the IBED model, the optimal number of fractions for low α/β ratio cancers is reasonable, but for high α/β ratio cancers or for some DVHs it is extremely large. However, for the ASF model, the results keep within the range used in clinical practice for both low and high α/β ratio cancers and for most DVHs. These results indicate that the ASF model is more robust for constructing the optimal irradiation regimen than the IBED model. © The Author(s) 2018. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
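    For reference, both models build on the standard linear-quadratic (LQ) quantities for n fractions of dose d, with LQ parameters α and β; the IBED and ASF models generalize these by integrating over the dose distribution (the DVH) rather than assuming a single dose value:

```latex
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
\qquad
\mathrm{SF} = \exp\!\left[-n\left(\alpha d + \beta d^{2}\right)\right].
```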

  3. Compression of head-related transfer function using autoregressive-moving-average models and Legendre polynomials

    DEFF Research Database (Denmark)

    Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob

    2013-01-01

    Head-related transfer functions (HRTFs) are modeled as autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere...

  4. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    Science.gov (United States)

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution and the spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    Science.gov (United States)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET models is far more important than model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirements following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm due to water rights, would be exceeded less frequently in the case of the REA ensemble average (45%) than in the case of the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.

  6. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    Science.gov (United States)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is "volume-averaging", which refers to lidars not sampling in a single, distinct point but along their entire beam length. Especially in regions with large velocity gradients, like the rotor wake, it can be detrimental. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Even with very few points discretising the lidar beam, volume-averaging is captured accurately. The difference between a lidar measurement and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
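    A minimal sketch of the volume-averaging idea: the lidar estimate is a weighted average of point velocities along the beam. A Lorentzian weight, typical of continuous-wave lidars, is assumed here; the wake-deficit profile and all parameter values are invented.

```python
# Lidar volume averaging: weighted average of point velocities along the beam.
import numpy as np

def cw_weight(s, zr):
    """Lorentzian along-beam weight, typical of continuous-wave lidars."""
    return (zr / np.pi) / (s**2 + zr**2)

zr = 10.0                                  # probe half-length (m), assumed
s = np.linspace(-60.0, 60.0, 241)          # along-beam position relative to focus (m)
u = 8.0 - 3.0 * np.exp(-((s + 10.0) / 15.0) ** 2)   # toy wake-deficit profile (m/s)

w = cw_weight(s, zr)
u_lidar = np.trapz(w * u, s) / np.trapz(w, s)   # what the lidar reports
u_point = u[np.argmin(np.abs(s))]               # true value at the focus
print(u_lidar, u_point)
```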

  7. The Effect of Spatial Variability of Meteorological Data on Annual Average Air Concentrations Predicted by a Wind-Rose Model

    Energy Technology Data Exchange (ETDEWEB)

    Pendergast, M.M.

    2001-03-15

    This paper discusses the effect of spatial variability of meteorological data on annual average air concentrations for distances from the source ranging from 2 to 21 km. Annual average relative air concentrations varied by about 20 percent for the stations examined. This study also showed that annual average concentrations obtained with a simple wind-rose model, although overpredicting by a factor of 3, are not particularly sensitive to the amount of meteorological data used to represent the frequency distribution of wind and stability. These results indicate that a much smaller data recovery rate than the 90 percent required by the NRC might be more appropriate.

  8. Ensemble phase averaging equations for multiphase flows in porous media, part I: the bundle-of-tubes model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Dali [Los Alamos National Laboratory; Zhang, Duan [Los Alamos National Laboratory; Currier, Robert [Los Alamos National Laboratory

    2008-01-01

    A bundle-of-tubes construct is used as a model system to study ensemble averaged equations for multiphase flow in a porous material. Momentum equations for the fluid phases obtained from the method are similar to Darcy's law, but with additional terms. We study the properties of the additional terms, and the conditions under which the averaged equations can be approximated by the diffusion model or the extended Darcy's law as often used in models for multiphase flows in porous media. Although the bundle-of-tubes model is perhaps the simplest model for a porous material, the ensemble averaged equation technique developed in this paper assumes the very same form in the more general treatments described in Part 2 of the present work (Zhang 2009). Any model equation system intended for the more general cases must be understood and tested first using simple models. The concept of ensemble phase averaging is dissected here in physical terms, without involved mathematics, through its application to the idealized bundle-of-tubes model for multiphase flow in porous media.

  9. Bayesian Averaging over Many Dynamic Model Structures with Evidence on the Great Ratios and Liquidity Trap Risk

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2008-01-01

    textabstractA Bayesian model averaging procedure is presented that makes use of a finite mixture of many model structures within the class of vector autoregressive (VAR) processes. It is applied to two empirical issues. First, stability of the Great Ratios in U.S. macro-economic time series is

  10. Resolution-dependent behavior of subgrid-scale vertical transport in the Zhang-McFarlane convection parameterization

    Science.gov (United States)

    Xiao, Heng; Gustafson, William I.; Hagos, Samson M.; Wu, Chien-Ming; Wan, Hui

    2015-06-01

    To better understand the behavior of quasi-equilibrium-based convection parameterizations at higher resolution, we use a diagnostic framework to examine the resolution-dependence of subgrid-scale vertical transport of moist static energy as parameterized by the Zhang-McFarlane convection parameterization (ZM). Grid-scale input to ZM is supplied by coarsening output from cloud-resolving model (CRM) simulations onto subdomains ranging in size from 8 × 8 to 256 × 256 km². Then the ZM-based parameterizations of vertical transport of moist static energy for scales smaller than the subdomain size ($\overline{w'h'}_{ZM}$) are compared to those directly calculated from the CRM simulations ($\overline{w'h'}_{CRM}$) for different subdomain sizes. The ensemble mean $\overline{w'h'}_{CRM}$ decreases by more than half as the subdomain size decreases from 128 to 8 km across, while $\overline{w'h'}_{ZM}$ decreases with subdomain size only for strong convection cases and increases for weaker cases. The resolution dependence of $\overline{w'h'}_{ZM}$ is determined by the positive-definite grid-scale tendency of convective available potential energy (CAPE) in the convective quasi-equilibrium (QE) closure. Further analysis shows that the actual grid-scale tendency of CAPE (before taking the positive definite value) and $\overline{w'h'}_{CRM}$ behave very similarly as the subdomain size changes because they are both tied to grid-scale advective tendencies. We can improve the resolution dependence of $\overline{w'h'}_{ZM}$ significantly by averaging the grid-scale tendency of CAPE over an appropriately large area surrounding each subdomain before taking its positive definite value. Even though the ensemble mean $\overline{w'h'}_{CRM}$ decreases with increasing resolution, its variability increases dramatically. $\overline{w'h'}_{ZM}$ cannot capture such an increase in the variability, suggesting the need for stochastic treatment of convection at relatively high spatial resolution (8 or 16 km).

  11. Benefits of Dominance over Additive Models for the Estimation of Average Effects in the Presence of Dominance

    Directory of Open Access Journals (Sweden)

    Pascal Duenk

    2017-10-01

    Full Text Available In quantitative genetics, the average effect at a single locus can be estimated by an additive (A) model, or an additive plus dominance (AD) model. In the presence of dominance, the AD-model is expected to be more accurate, because the A-model falsely assumes that residuals are independent and identically distributed. Our objective was to investigate the accuracy of an estimated average effect (α̂) in the presence of dominance, using either a single locus A-model or AD-model. Estimation was based on a finite sample from a large population in Hardy-Weinberg equilibrium (HWE), and the root mean squared error of α̂ was calculated for several broad-sense heritabilities, sample sizes, and sizes of the dominance effect. Results show that with the A-model, both sampling deviations of genotype frequencies from HWE frequencies and sampling deviations of allele frequencies contributed to the error. With the AD-model, only sampling deviations of allele frequencies contributed to the error, provided that all three genotype classes were sampled. In the presence of dominance, the root mean squared error of α̂ with the AD-model was always smaller than with the A-model, even when the heritability was less than one. Remarkably, in the absence of dominance, there was no disadvantage of fitting dominance. In conclusion, the AD-model yields more accurate estimates of average effects from a finite sample, because it is more robust against sampling deviations from HWE frequencies than the A-model. Genetic models that include dominance, therefore, yield higher accuracies of estimated average effects than purely additive models when dominance is present.

  12. Application of the Hilbert space average method on heat conduction models.

    Science.gov (United States)

    Michel, Mathias; Gemmer, Jochen; Mahler, Günter

    2006-01-01

    We analyze closed one-dimensional chains of weakly coupled many level systems, by means of the so-called Hilbert space average method (HAM). Subject to some concrete conditions on the Hamiltonian of the system, our theory predicts energy diffusion with respect to a coarse-grained description for almost all initial states. Close to the respective equilibrium, we investigate this behavior in terms of heat transport and derive the heat conduction coefficient. Thus, we are able to show that both heat (energy) diffusive behavior as well as Fourier's law follows from and is compatible with a reversible Schrödinger dynamics on the complete level of description.

  13. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter.

    Science.gov (United States)

    Huang, Lei

    2015-09-30

    To solve the problem in which conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required.

  14. The Value of Multivariate Model Sophistication: An Application to pricing Dow Jones Industrial Average options

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    in their specification of the conditional variance, conditional correlation, and innovation distribution. All models belong to the dynamic conditional correlation class, which is particularly suited because it allows us to consistently estimate the risk neutral dynamics with a manageable computational effort in relatively...... innovation for a Laplace innovation assumption improves the pricing in a smaller way. Apart from investigating directly the value of model sophistication in terms of dollar losses, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.

  15. Spatial models for probabilistic prediction of wind power with application to annual-average and high temporal resolution data

    DEFF Research Database (Denmark)

    Lenzi, Amanda; Pinson, Pierre; Clemmensen, Line Katrine Harder

    2017-01-01

    average wind power generation, and for a high temporal resolution (typically wind power averages over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled...... with stochastic partial differential equation approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations. We demonstrate the proposed methods on wind farm data from Western Denmark and compare the results to those obtained with standard geostatistical methods. The results show...

  16. Control of Stochastic Master Equation Models of Genetic Regulatory Networks by Approximating Their Average Behavior

    Science.gov (United States)

    Umut Caglar, Mehmet; Pal, Ranadip

    2010-10-01

    The central dogma of molecular biology states that "information cannot be transferred back from protein to either protein or nucleic acid." However, this assumption is not exactly correct in most cases. There are many feedback loops and interactions between different levels of systems. These types of interactions are hard to analyze due to the lack of data at the cellular level and the probabilistic nature of the interactions. Probabilistic models like the Stochastic Master Equation (SME) or deterministic models like differential equations (DE) can be used to analyze these types of interactions. SME models based on the chemical master equation (CME) can provide a detailed representation of the genetic regulatory system, but their use is restricted by the large data requirements and computational costs of the calculations. The differential equation models, on the other hand, have low calculation costs and are much better suited to generating control procedures for the system, but they are not adequate for investigating the probabilistic nature of interactions. In this work the success of the mapping between SME and DE is analyzed, and the success of a control policy generated by the DE model with respect to the SME model is examined. Index Terms--- Stochastic Master Equation models, Differential Equation Models, Control Policy Design, Systems biology

  17. A volume averaged global model for inductively coupled HBr/Ar plasma discharge

    Science.gov (United States)

    Chung, Sang-Young; Kwon, Deuk-Chul; Choi, Heechol; Song, Mi-Young

    2015-09-01

    A global model for inductively coupled HBr/Ar plasma was developed. The model was based on a self-consistent global model that had been developed by Kwon et al., and a set of chemical reactions in the HBr/Ar plasma was compiled by surveying theoretical, experimental, and evaluative studies. In this model, vibrational excitations of diatomic molecules and electronic excitations of the hydrogen atom were taken into account. Neutralization by collisions between positive and negative ions was considered using Hakman's approximate formula, obtained by fitting theoretical results. For some reactions whose parameters were not available in the literature, the reaction parameters of Cl2 and HCl were adopted for Br2 and HBr, respectively. For validation, calculation results using this model were compared with experimental results from the literature for various plasma discharge parameters, and they showed overall good agreement.

  18. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Full Text Available Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. The model's low computational efficiency, however, severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized: the original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads; a minimal sketch of this synchronization pattern is given below. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency, saving 20%–40% of the computation time compared to serial computing. The parallel speedup and parallel efficiency are approximately 1.45 and 72%, respectively. The parallel computing method thus contributes to making non-hydrostatic models more widely usable.
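    This toy Python example illustrates only the barrier-style lockstep between two subdomain threads, not the hydrodynamics or the exact producer–consumer mechanics of the paper; all names and numbers are invented.

```python
# Two sub-threads advance adjoining subdomains and synchronize each time step.
import threading
import numpy as np

h = np.linspace(1.0, 2.0, 100)      # toy field, split into two blocks at index 50
barrier = threading.Barrier(2)      # both threads must arrive before continuing

def advance(lo, hi, steps):
    for _ in range(steps):
        h[lo:hi] += 0.01 * np.gradient(h[lo:hi])  # stand-in for one solver step
        barrier.wait()              # virtual-boundary exchange happens in lockstep

t1 = threading.Thread(target=advance, args=(0, 50, 10))
t2 = threading.Thread(target=advance, args=(50, 100, 10))
t1.start(); t2.start(); t1.join(); t2.join()
print(h.sum())
```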

  19. ensembleBMA: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging

    National Research Council Canada - National Science Library

    Fraley, Chris; Raftery, Adrian E; Gneiting, Tilmann; Sloughter, J. M

    2007-01-01

    It provides functions for parameter estimation via the EM algorithm for normal mixture models (appropriate for temperature or pressure) and mixtures of gamma distributions with a point mass at 0...

  20. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    Science.gov (United States)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to
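    The central convolution step can be sketched in a few lines: the calculated profile is convolved with a detector response kernel so that calculated and measured profiles are subject to the same volume averaging. The Gaussian kernel width and the toy profile below are assumptions, not the study's data.

```python
# Convolve a calculated beam profile with a detector response function.
import numpy as np

x = np.linspace(-30.0, 30.0, 601)                        # off-axis distance (mm)
profile = 0.5 * (np.tanh((10.0 - np.abs(x)) / 1.5) + 1)  # toy profile, sharp penumbra

sigma = 2.4                                              # ~chamber radius (mm), assumed
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                                   # normalized response function

reported = np.convolve(profile, kernel, mode="same")     # what the chamber would read
# In the reoptimization loop, beam-model parameters are tuned until `reported`
# matches the measurement; the unconvolved profile then approximates reality.
print(profile[380], reported[380])                       # inside the penumbra region
```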

  1. Average circulation, seasonal cycle, and mesoscale dynamics of the Peru Current System: A modeling approach

    Science.gov (United States)

    Penven, P.; Echevin, V.; Pasapera, J.; Colas, F.; Tam, J.

    2005-10-01

    The Humboldt Current System is the most productive of the eastern boundary currents. In the northern part, the Peru Current System (PCS) is located between 5°S and 20°S. Along the Peruvian coast, an equatorward wind forces a strong coastal upwelling. A high resolution model is designed to investigate the mean circulation, the seasonal cycle, and the mesoscale dynamics for the PCS. The model is able to reproduce the equatorward Peru Coastal Current (PCC), the Peru Chile Under-Current (PCUC) which follows the shelf break towards the pole, and the Peru-Chile Counter-Current (PCCC) which flows directly towards the south and veers to the west around 15°S. While the upper part of the PCUC is close to the surface and might even outcrop as a counter current, the bottom part follows ? isolines. The PCCC appears to be directly forced by the cyclonic wind stress curl. The model is able to produce the upwelling front, the cold water tongue which extends toward the equator and the equatorial front as described in the literature. Model seasonal changes in SST and SSH are compared to measurements. For the central PCS, model EKE is 10% to 30% lower than the observations. The model eddy diameters follow a strong equatorward increase. The injection length scales, derived from the energy spectra, strongly correlate to the Rossby radius of deformation, confirming the predominant role of baroclinic instability. At 3°S, the model solution appears to switch from a turbulent oceanic regime to an equatorial regime dominated by zonal currents.

  2. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    Science.gov (United States)

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important to an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from that of the operations research approach.

  3. Application of Maximum Likelihood Bayesian Model Averaging to Groundwater Flow and Transport at the Hanford Site 300 Area

    Science.gov (United States)

    Meyer, P. D.; Ye, M.; Neuman, S. P.; Rockhold, M. L.

    2006-12-01

    Applications of groundwater flow and transport models to regulatory and design problems have illustrated the potential importance of accounting for uncertainties in model conceptualization and structure as well as model parameters. One approach to this issue is to characterize model uncertainty using a discrete set of alternatives and assess the prediction uncertainty arising from the joint impact of model and parameter uncertainty. We demonstrate the application of this approach to the modeling of groundwater flow and uranium transport at the 300 Area of the Dept. of Energy Hanford Site in Washington State using the recently developed Maximum Likelihood Bayesian Model Averaging (MLBMA) method. Model uncertainty was included using alternative representations of the hydrogeologic units at the 300 Area and alternative representations of uranium adsorption. Parameter uncertainties for each model were based on the estimated parameter covariances resulting from the joint calibration of each model alternative to observations of hydraulic head and uranium concentration. The relative plausibility of each calibrated model was expressed in terms of a posterior model probability computed on the basis of Kashyap's information criterion KIC. Results of the application show that model uncertainty may dominate parameter uncertainty for the set of alternative models considered. We discuss the sensitivity of model probabilities to differences in KIC values and examine the effect of particular calibration data on model probabilities. In addition, we discuss the advantages of KIC over other model discrimination criteria for estimating model probabilities.
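    The KIC-based posterior model probabilities at the heart of MLBMA reduce to a softmax over KIC differences; the KIC values and the uniform prior below are illustrative, not the study's results.

```python
# Posterior model probabilities from KIC values (MLBMA-style weighting).
import numpy as np

kic = np.array([1482.3, 1479.8, 1490.5])    # hypothetical KIC per calibrated model
prior = np.full(kic.size, 1.0 / kic.size)   # equal prior model probabilities

delta = kic - kic.min()                     # differences from the best (lowest) KIC
post = prior * np.exp(-0.5 * delta)
post /= post.sum()                          # P(model_k | data)
print(post.round(3))
```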

  4. Adaptive and self-averaging Thouless-Anderson-Palmer mean-field theory for probabilistic modeling

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2001-01-01

    of the distribution of couplings between the random variables is required, our method adapts to the concrete set of couplings. We show the significance of the approach in two ways: Our approach reproduces replica symmetric results for a wide class of toy models (assuming a nonglassy phase) with given disorder...

  5. What Type of Finance Matters for Growth? Bayesian Model Averaging Evidence

    Czech Academy of Sciences Publication Activity Database

    Iftekhar, H.; Horváth, Roman; Mareš, J.

    -, - (2018) ISSN 0258-6770 R&D Projects: GA ČR GA16-09190S Institutional support: RVO:67985556 Keywords: long-term economic growth * Bayesian model * uncertainty Subject RIV: AH - Economics Impact factor: 1.431, year: 2016 http://library.utia.cas.cz/separaty/2017/E/horvath-0466516.pdf

  6. Maximum likelihood Bayesian averaging of airflow models in unsaturated fractured tuff using Occam and variance windows

    NARCIS (Netherlands)

    Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.

    2010-01-01

    We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,

  7. Tidal Hydrodynamics in the Lower Columbia River Estuary through Depth Averaged Adaptive Hydraulics Modeling

    Directory of Open Access Journals (Sweden)

    Gaurav Savant

    2014-01-01

    Full Text Available The adaptive hydraulics (AdH) numerical code was applied to study tidal propagation in the Lower Columbia River (LCR) estuary. The results demonstrate the readiness of the AdH model for further study of hydrodynamics in the LCR. The AdH model accurately replicated the behavior of the tide as it propagated upstream into the LCR system. Results show that the MSf tidal component and the M4 overtidal component are generated in the middle LCR and contain a substantial amount of tidal energy. An analysis was performed to determine the causes of MSf tide amplification, and it was found that approximately 80% of the amplification occurs due to nonlinear interaction between the M2 and S2 tidal components.

  8. Inconsistency in the average hydraulic models used in nuclear reactor design and safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jee Won; Roh, Gyu Hong; Choi, Hang Bok [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    One of the important inconsistencies in six-equation model predictions has been found to be the force experienced by a single bubble placed in a convergent stream of liquid. Various sets of governing equations yield different amounts of force to hold the bubble stationary in a convergent nozzle. Using first-order potential flow theory, it is found that the six-equation model cannot be used to estimate the force experienced by a deformed bubble. The theoretical value of the particle stress of a bubble in a convergent nozzle flow has been found to be a function of the Weber number when bubble distortion is allowed. This force has been calculated using different sets of governing equations and compared with the theoretical value. It is suggested in this study that the bubble size distribution function can be used to remove the presented inconsistency by relating the interfacial variables to different moments of the bubble size distribution function. This study also shows that the inconsistencies in the thermal-hydraulic governing equations can be removed by mechanistic modeling of the phasic interface. 11 refs., 3 figs. (Author)

  9. Modeling and Simulation of Fluid Mixing Laser Experiments and Supernova

    Energy Technology Data Exchange (ETDEWEB)

    James Glimm

    2009-06-04

    The three-year plan for this project was to develop novel theories and advanced simulation methods leading to a systematic understanding of turbulent mixing. A primary focus is the comparison of simulation models (Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), full two-fluid simulations, and subgrid averaged models) to experiments. The comprehension and reduction of experimental and simulation data are central goals of this proposal. We model 2D and 3D perturbations of planar or circular interfaces. We compare these tests with models derived from averaged equations (our own and those of others). As a second focus, we develop physics-based subgrid simulation models of diffusion across an interface, with physical but no numerical mass diffusion. Multiple layers and reshock are also considered.

  10. Variations in environmental tritium doses due to meteorological data averaging and uncertainties in pathway model parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kock, A.

    1996-05-01

    The objectives of this research are: (1) to calculate and compare off-site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5-year meteorological data and annual source terms, including additional seasonal and site-specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions in model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters reported in former studies.

  11. Comparison of Average Transport and Dispersion Among a Gaussian Model, a Two-Dimensional Model and a Three-Dimensional Model

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, J A; Molenkamp, C R; Bixler, N E; Morrow, C W; Ramsdell, Jr., J V

    2004-05-10

    The Nuclear Regulatory Commission uses MACCS2 (MELCOR Accident Consequence Code System, Version 2) for regulatory purposes such as planning for emergencies and cost-benefit analyses. MACCS2 uses a straight-line Gaussian model for atmospheric transport and dispersion. This model has been criticized as overly simplistic, although only expected values of the metrics of interest are used in the regulatory arena. To test the assumption that averaging numerous weather results adequately compensates for the loss of structure in the meteorology that occurs away from the point of release, average MACCS2 results have been compared with average results from the state-of-the-art, 3-dimensional LODI (Lagrangian Operational Dispersion Integrator)/ADAPT (Atmospheric Data Assimilation and Parameterization Technique) system and from a Lagrangian-trajectory, Gaussian puff transport and dispersion model in RASCAL (Radiological Assessment System for Consequence Analysis). The weather sample included 610 weather trials representing conditions for a hypothetical release at the Central Facility of the Department of Energy's Atmospheric Radiation Measurement site. The values compared were average ground concentrations and average surface-level air concentrations at several distances out to 100 miles (160.9 km) from the assumed release site.

  12. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam, based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  13. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam, based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  14. Instability Analysis of a Model Pump-Turbine with MGV Based on Nonlinear Partially Averaged Navier-Stokes Methods

    OpenAIRE

    Jintao Liu; Yulin Wu; Leqin Wang

    2013-01-01

    Pump-turbines often run at partial-load conditions as the power grid demand changes. Flow separation and stall phenomena are pronounced in the pump-turbine. Most RANS turbulence models solve the shear stress with linear difference schemes and isotropic models, so they cannot capture all kinds of vortices in the pump-turbine well. At present, the partially-averaged Navier-Stokes (PANS) model has been found to be better than LES in simulating flow regions, especially those with less discretiz...

  15. Adaptive neuro-fuzzy based inferential sensor model for estimating the average air temperature in space heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Jassar, S.; Zhao, L. [Department of Electrical and Computer Engineering, Ryerson University, 350 Victoria Street, Toronto, ON (Canada); Liao, Z. [Department of Architectural Science, Ryerson University (Canada)

    2009-08-15

    Heating systems are conventionally controlled by open-loop control systems because of the absence of practical methods for estimating the average air temperature in the built environment. An inferential sensor model, based on adaptive neuro-fuzzy inference system modeling, is developed for estimating the average air temperature in multi-zone space heating systems. This modeling technique combines the expert knowledge of fuzzy inference systems (FISs) with the learning capability of artificial neural networks (ANNs). A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. This paper describes an adaptive network based inferential sensor that can be used to design closed-loop control for space heating systems. The research aims to improve the overall performance of heating systems in terms of energy efficiency and thermal comfort. The average air temperature estimates produced by the developed model are in strong agreement with the experimental results. (author)

  16. A new method to account for large scale mixing-processes in spatially-averaged flow models

    OpenAIRE

    Huthoff, Freek; Roos, Pieter C.; Augustijn, Dionysius C.M.; Hulscher, Suzanne J.M.H.; Boyer, D.; Alexandrova, O.

    2007-01-01

    A new method is proposed to calculate the cross-sectional flow field in compound channels for 1D flow models. The proposed method involves a new parameterization of the interface stress between adjacent compartments, typically between the main channel and floodplain of a two-stage channel. This expression, proportional to the difference in squared velocities, allows for a simple calculation of the average flow velocities in different compartments. For two-stage channels good agreement is found...

  17. Flow and transport simulation of Madeira River using three depth-averaged two-equation turbulence closure models

    Directory of Open Access Journals (Sweden)

    Li-ren YU

    2012-03-01

    Full Text Available This paper describes a numerical simulation in the Amazon water system, aiming to develop a quasi-three-dimensional numerical tool for refined modeling of turbulent flow and passive transport of mass in natural waters. Three depth-averaged two-equation turbulence closure models were used to close the non-simplified quasi-three-dimensional hydrodynamic fundamental governing equations. The discretized equations were solved with an advanced multi-grid iterative method using non-orthogonal body-fitted coarse and fine grids with a collocated variable arrangement. Beyond steady flow computation, the processes of contaminant inpouring and plume development at the beginning of discharge, caused by a side-discharge of a tributary, have also been numerically investigated. The three depth-averaged two-equation closure models are all suitable for modeling strong mixing turbulence. The newly established turbulence models, with a higher order of magnitude of the turbulence parameter, offer a possibility for improving computational precision.

  18. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    Science.gov (United States)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small-scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights into the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small-scale anisotropy in the scalar field originating from the imposed mean gradient is captured. The sensitivity of the synthetic fields to the input spectra is assessed by
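
    The filtering approach used in such a priori tests computes the SGS stress as the difference between the filtered product and the product of the filtered fields, tau_ij = bar(u_i u_j) - bar(u_i) bar(u_j). A minimal numpy/scipy sketch on a synthetic periodic field standing in for an MTLM realization; the filter width, field, and names are illustrative, not the authors' code.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sgs_stress(u, v, width=8):
        """A priori SGS stress tau_ij = bar(ui*uj) - bar(ui)*bar(uj),
        using a top-hat (box) filter of the given width in grid points."""
        bar = lambda f: uniform_filter(f, size=width, mode="wrap")
        ub, vb = bar(u), bar(v)
        tau_xx = bar(u * u) - ub * ub
        tau_xy = bar(u * v) - ub * vb
        tau_yy = bar(v * v) - vb * vb
        return tau_xx, tau_xy, tau_yy

    # Synthetic periodic 2D velocity field (illustrative stand-in).
    n = 256
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    rng = np.random.default_rng(0)
    u = np.sin(3 * X) * np.cos(5 * Y) + 0.1 * rng.standard_normal((n, n))
    v = -np.cos(4 * X) * np.sin(2 * Y) + 0.1 * rng.standard_normal((n, n))

    txx, txy, tyy = sgs_stress(u, v)
    # Half the trace of tau gives the SGS kinetic energy density.
    print("mean SGS kinetic energy:", 0.5 * float(np.mean(txx + tyy)))
    ```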

  19. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Yuxing [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing China; Fan, Jiwen [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xiao, Heng [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Zhang, Guang J. [Scripps Institution of Oceanography, University of California, San Diego CA USA; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Xu, Kuan-Man [NASA Langley Research Center, Hampton VA USA; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA; Gustafson, William I. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-11-01

    Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic since it requires the cumulus scheme to adapt to a higher resolution than it was originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
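
    Reduced to its essentials, the temporal averaging idea is a trailing running mean of the large-scale CAPE tendency over a chosen window. The sketch below assumes a simple running mean rather than the scheme's actual implementation; the window length and the synthetic series are illustrative.

    ```python
    import numpy as np

    def temporally_averaged_tendency(cape_series, dt, window):
        """Trailing running-mean of the CAPE tendency (a stand-in for the
        temporal averaging described above). cape_series holds CAPE at
        successive model steps of length dt seconds."""
        tend = np.gradient(cape_series, dt)      # instantaneous dCAPE/dt
        k = max(1, int(round(window / dt)))
        kernel = np.ones(k) / k
        # Pad the start so early steps average over what exists so far.
        padded = np.concatenate([np.full(k - 1, tend[0]), tend])
        return np.convolve(padded, kernel, mode="valid")

    # Example: noisy CAPE evolution sampled every 5 min, averaged over 1 h.
    rng = np.random.default_rng(1)
    dt = 300.0
    t = np.arange(0, 6 * 3600, dt)
    cape = 800 + 400 * np.sin(2 * np.pi * t / (6 * 3600)) + 50 * rng.standard_normal(t.size)
    smoothed = temporally_averaged_tendency(cape, dt, window=3600.0)
    print(smoothed.shape, t.shape)  # one averaged tendency per model step
    ```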

  20. Structure and modeling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Novikov, E.A. [Univ. of California, San Diego, La Jolla, CA (United States)

    1995-12-31

    The "vortex strings" scale l_s ~ L Re^(-3/10) (L: external scale, Re: Reynolds number) is suggested as a grid scale for the large-eddy simulation. Various aspects of the structure of turbulence and subgrid modeling are described in terms of conditional averaging, Markov processes with dependent increments, and infinitely divisible distributions. The major request from the energy, naval, aerospace and environmental engineering communities to the theory of turbulence is to reduce the enormous number of degrees of freedom in turbulent flows to a level manageable by computer simulations. The vast majority of these degrees of freedom is in the small-scale motion. The study of the structure of turbulence provides a basis for subgrid-scale (SGS) models, which are necessary for the large-eddy simulations (LES).

  1. A depth-averaged debris-flow model that includes the effects of evolving dilatancy. I. physical basis

    Science.gov (United States)

    Iverson, Richard M.; George, David L.

    2014-01-01

    To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, v_s. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇·v_s. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m − m_eq, where m_eq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m − m_eq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.

  2. How to improve parameter estimates in GLM-based fMRI data analysis: cross-validated Bayesian model averaging.

    Science.gov (United States)

    Soch, Joram; Meyer, Achim Pascal; Haynes, John-Dylan; Allefeld, Carsten

    2017-09-01

    In functional magnetic resonance imaging (fMRI), model quality of general linear models (GLMs) for first-level analysis is rarely assessed. In recent work (Soch et al., 2016: "How to avoid mismodelling in GLM-based fMRI data analysis: cross-validated Bayesian model selection", NeuroImage, vol. 141, pp. 469-489; http://dx.doi.org/10.1016/j.neuroimage.2016.07.047), we have introduced cross-validated Bayesian model selection (cvBMS) to infer the best model for a group of subjects and use it to guide second-level analysis. While this is the optimal approach given that the same GLM has to be used for all subjects, there is a much more efficient procedure when model selection only addresses nuisance variables and regressors of interest are included in all candidate models. In this work, we propose cross-validated Bayesian model averaging (cvBMA) to improve parameter estimates for these regressors of interest by combining information from all models using their posterior probabilities. This is particularly useful as different models can lead to different conclusions regarding experimental effects and the most complex model is not necessarily the best choice. We find that cvBMS can prevent established effects from going undetected and that cvBMA can be more sensitive to experimental effects than using even the best model in each subject or the model which is best in a group of subjects. Copyright © 2017 Elsevier Inc. All rights reserved.
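
    The core BMA combination step, weighting per-model parameter estimates by posterior model probabilities derived from (for example, cross-validated) log model evidences, can be sketched as follows. This is a generic illustration under a uniform model prior, not the cvBMA toolbox implementation; all numbers are hypothetical.

    ```python
    import numpy as np

    def bma_estimate(estimates, log_model_evidence):
        """Combine per-model parameter estimates with posterior model
        probabilities (uniform model prior assumed). estimates has shape
        (n_models, n_params); log_model_evidence has shape (n_models,)."""
        lme = np.asarray(log_model_evidence, dtype=float)
        w = np.exp(lme - lme.max())   # subtract the max for numerical stability
        w /= w.sum()                  # posterior model probabilities
        return w @ np.asarray(estimates), w

    # Three candidate models, two regressors of interest (hypothetical):
    betas = [[1.10, -0.40], [0.95, -0.35], [1.40, -0.80]]
    lme = [-1005.2, -1003.9, -1011.7]
    beta_bma, probs = bma_estimate(betas, lme)
    print("posterior probs:", np.round(probs, 3),
          "averaged betas:", np.round(beta_bma, 3))
    ```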

  3. Instability Analysis of a Model Pump-Turbine with MGV Based on Nonlinear Partially Averaged Navier-Stokes Methods

    Directory of Open Access Journals (Sweden)

    Jintao Liu

    2013-01-01

    Full Text Available Pump-turbines often run at partial-load conditions as the power grid demand changes. Flow separation and stall phenomena are pronounced in the pump-turbine. Most RANS turbulence models solve the shear stress with linear difference schemes and isotropic models, so they cannot capture all kinds of vortices in the pump-turbine well. At present, the partially-averaged Navier-Stokes (PANS) model has been found to be better than LES in simulating flow regions, especially those with a less discretized grid. In this paper, a new nonlinear PANS turbulence model is proposed, modified from the RNG k-ε turbulence model, with the shear stresses solved by Ehrhardt's nonlinear methods. The nonlinear PANS model was used to study the instability of the "S" region of a model pump-turbine with misaligned guide vanes (MGV). The opening of the preopened guide vanes had a great influence on the "S" characteristics. The optimal relative opening of the preopened guide vanes was 50% for the improvement of the "S" characteristics. Pressure fluctuations in the vaneless space were analyzed. It is found that the dominant frequency in the vaneless space was twice the blade passing frequency, while the second dominant frequency decreased as the preopening increased.

  4. Predicting dissolution patterns in variable aperture fractures: 1. Development and evaluation of an enhanced depth-averaged computational model

    Energy Technology Data Exchange (ETDEWEB)

    Detwiler, R L; Rajaram, H

    2006-04-21

    Water-rock interactions within variable-aperture fractures can lead to dissolution of fracture surfaces and local alteration of fracture apertures, potentially transforming the transport properties of the fracture over time. Because fractures often provide dominant pathways for subsurface flow and transport, developing models that effectively quantify the role of dissolution in changing transport properties over a range of scales is critical to understanding the potential impacts of natural and anthropogenic processes. Dissolution of fracture surfaces is controlled by surface-reaction kinetics and by the transport of reactants and products to and from the fracture surfaces. We present the development and evaluation of a depth-averaged model of fracture flow and reactive transport that explicitly calculates local dissolution-induced alterations in fracture apertures. The model incorporates an effective mass transfer relationship that implicitly represents the transition from reaction-limited dissolution to transport-limited dissolution. We evaluate the model through direct comparison to previously reported physical experiments in transparent analog fractures fabricated by mating an inert, transparent rough surface with a smooth single crystal of potassium dihydrogen phosphate (KDP), which allowed direct measurement of fracture aperture during dissolution experiments using well-established light transmission techniques [Detwiler et al., 2003]. Comparison of experiments and simulations at different flow rates demonstrates the relative impact of the dimensionless Peclet and Damkohler numbers on fracture dissolution and the ability of the computational model to simulate dissolution. Despite some discrepancies in the small-scale details of dissolution patterns, the simulations predict the evolution of large-scale features quite well for the different experimental conditions. This suggests that our depth-averaged approach to simulating fracture dissolution provides a useful approach for

  5. A method to characterize average cervical spine ligament response based on raw data sets for implementation into injury biomechanics models.

    Science.gov (United States)

    Mattucci, Stephen F E; Cronin, Duane S

    2015-01-01

    Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments has not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first-derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s^-1, 20 s^-1, and 150-250 s^-1, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fit. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with a toe, linear, and traumatic region, as often observed in ligaments and tendons, and could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
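
    The flavor of a piece-wise fit with first-derivative continuity can be illustrated with a two-region (toe plus linear) example; the paper's method also includes a traumatic region. This is a hedged sketch on synthetic data, not the authors' code, and every name and value is illustrative.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def toe_linear(x, a, x0):
        """Quadratic toe region f = a*x^2 for x < x0, continued by a line
        with matched value and slope (C1 continuity) for x >= x0."""
        toe = a * x**2
        lin = a * x0**2 + 2 * a * x0 * (x - x0)
        return np.where(x < x0, toe, lin)

    def fit_toe_linear(disp, force):
        """Least-squares fit of the toe stiffness a and transition point x0."""
        res = least_squares(
            lambda p: toe_linear(disp, *p) - force,
            x0=[1.0, 0.5 * disp.max()],
            bounds=([0.0, 1e-9], [np.inf, disp.max()]),
        )
        return res.x

    # Synthetic force-displacement data with toe + linear regions and scatter.
    rng = np.random.default_rng(2)
    d = np.linspace(0.0, 4.0, 80)
    f = toe_linear(d, a=12.0, x0=1.5) + 5.0 * rng.standard_normal(d.size)
    a_hat, x0_hat = fit_toe_linear(d, f)
    print(f"a = {a_hat:.2f}, transition at x0 = {x0_hat:.2f} mm")
    ```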

  6. The effects of the sub-grid variability of soil and land cover data on agricultural droughts in Germany

    Science.gov (United States)

    Kumar, Rohini; Samaniego, Luis; Zink, Matthias

    2013-04-01

    Simulated soil moisture from land surface or water balance models is increasingly used to characterize and/or monitor the development of agricultural droughts at regional and global scales (e.g., NLDAS, EDO, GLDAS). The skill of these models to accurately replicate hydrologic fluxes and state variables is strongly dependent on the quality of the meteorological forcings, the conceptualization of the dominant processes, and the parameterization scheme used to incorporate the variability of land surface properties (e.g., soil, topography, and vegetation) at coarser spatial resolutions (e.g., at least 4 km). The goal of this study is to analyze the effects of the sub-grid variability of soil texture and land cover properties on agricultural drought statistics such as duration, severity, and areal extent. For this purpose, a process-based mesoscale hydrologic model (mHM) is used to create two sets of daily soil moisture fields over Germany at a spatial resolution of (4 × 4) km^2 from 1950 to 2011. These simulations differ from each other only in the manner in which land surface properties are accounted for within the model. In the first set, soil moisture fields are obtained with the multiscale parameter regionalization (MPR) scheme (Samaniego et al., 2010; Kumar et al., 2012), which explicitly takes the sub-grid variability of soil texture and land cover properties into account. In the second set, on the contrary, a single dominant soil and land cover class is used for every grid cell at 4 km. Within each set, the propagation of parameter uncertainty into the soil moisture simulations is also evaluated using an ensemble of the 100 best global parameter sets of mHM (Samaniego et al., 2012). To ensure comparability, both sets of this ensemble simulation are forced with the same fields of meteorological variables (e.g., precipitation, temperature, and potential evapotranspiration). Results indicate that both sets of model simulations, with and without the sub-grid variability of

  7. Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2009-02-01

    Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p

  8. A nonlinear structural subgrid-scale closure for compressible MHD Part II: a priori comparison on turbulence simulation data

    CERN Document Server

    Grete, P; Schmidt, W; Schleicher, D R G

    2016-01-01

    Even though compressible plasma turbulence is encountered in many astrophysical phenomena, its effect is often not well understood. Furthermore, direct numerical simulations are typically not able to reach the extreme parameters of these processes. For this reason, large-eddy simulations (LES), which only simulate large and intermediate scales directly, are employed. The smallest, unresolved scales and the interactions between small and large scales are introduced by means of a subgrid-scale (SGS) model. We propose and verify a new set of nonlinear SGS closures for future application as an SGS model in LES of compressible magnetohydrodynamics (MHD). We use 15 simulations (without explicit SGS model) of forced, isotropic, homogeneous turbulence with varying sonic Mach number M_s = 0.2 to 20 as reference data for the most extensive a priori tests performed so far in the literature. In these tests we explicitly filter the reference data and compare the performance of the new closures against th...

  9. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    Full Text Available This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlations between each independent variable, ED revenue, and visitor volume. An autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values using the mean absolute percentage error. The sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
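
    A minimal sketch of fitting an ARIMA model with an exogenous regressor and scoring the forecasts by mean absolute percentage error, using statsmodels. The data, model order, and temperature regressor are hypothetical stand-ins for the study's monthly series, not a reproduction of it.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical monthly ED revenue driven partly by temperature.
    rng = np.random.default_rng(3)
    n = 57  # Jan 2005 - Sep 2009
    t = np.arange(n)
    temp = 25 + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, n)
    revenue = 1000 + 20 * temp + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 25, n)
    y = pd.Series(revenue, index=pd.period_range("2005-01", periods=n, freq="M"))

    # Train on the first 48 months, forecast the remaining 9.
    train, test = y[:48], y[48:]
    xtr, xte = temp[:48].reshape(-1, 1), temp[48:].reshape(-1, 1)
    res = ARIMA(train, exog=xtr, order=(1, 0, 1)).fit()
    fcst = res.forecast(steps=len(test), exog=xte)

    mape = float(np.mean(np.abs((test.values - fcst.values) / test.values))) * 100
    print(f"MAPE on the hold-out months: {mape:.1f}%")
    ```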

  10. Effect of Considering Sub-Grid Scale Uncertainties on the Forecasts of a High-Resolution Limited Area Ensemble Prediction System

    Science.gov (United States)

    Kim, SeHyun; Kim, Hyun Mee

    2017-05-01

    The ensemble prediction system (EPS) is widely used in research and at operation centers because it can represent the uncertainty of the predicted atmospheric state and provide probability information. A high-resolution (so-called "convection-permitting") limited area EPS can represent the convection and turbulence related to precipitation phenomena in more detail, but it is also very sensitive to small-scale or sub-grid scale processes. The convection and turbulence are represented using physical processes in the model, and model errors occur due to sub-grid scale processes that are not resolved. This study examined the effect of considering sub-grid scale uncertainties using the high-resolution limited area EPS of the Korea Meteorological Administration (KMA). The developed EPS has a horizontal resolution of 3 km and 12 ensemble members. The initial and boundary conditions were provided by the global model. The Random Parameters (RP) scheme was used to represent sub-grid scale uncertainties. EPSs with and without the RP scheme were developed and the results were compared. During the one-month period of July 2013, a significant difference was shown in the spread of 1.5 m temperature and in the root mean square error and spread of 10 m zonal wind due to the application of the RP scheme. For precipitation forecasts, precipitation tended to be overestimated relative to the observations when the RP scheme was applied. Moreover, the forecast became more accurate for heavy precipitation and longer forecast lead times. For two heavy rainfall cases that occurred during the research period, a higher Equitable Threat Score was observed for heavy precipitation in the system with the RP scheme than in the one without, demonstrating consistency with the statistical results for the research period. Therefore, the predictability of heavy precipitation phenomena affecting the Korean Peninsula increases if the RP scheme is used to consider sub-grid scale uncertainties.

  11. Autoregressive-moving-average hidden Markov model for vision-based fall prediction-An application for walker robot.

    Science.gov (United States)

    Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro

    2017-01-01

    The aging of societies requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user of a walking assistive system, acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted to the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, using a walker robot in five different falling scenarios; namely, fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts falls at a rate of 84.72%.

  12. MOTION ARTIFACT REDUCTION IN FUNCTIONAL NEAR INFRARED SPECTROSCOPY SIGNALS BY AUTOREGRESSIVE MOVING AVERAGE MODELING BASED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    MEHDI AMIAN

    2013-10-01

    Full Text Available Functional near infrared spectroscopy (fNIRS) is a technique used for noninvasive measurement of the oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) concentrations in brain tissue. Since the ratio of the concentrations of these two agents is correlated with neuronal activity, fNIRS can be used for monitoring and quantifying cortical activity. The portability of fNIRS makes it a good candidate for studies involving subject movement. The fNIRS measurements, however, are sensitive to artifacts generated by the subject's head motion. This makes fNIRS signals less effective in such applications. In this paper, autoregressive moving average (ARMA) modeling of the fNIRS signal is proposed for a state-space representation of the signal, which is then fed to a Kalman filter for estimating the motionless signal from the motion-corrupted signal. Results are compared to the autoregressive model (AR) based approach used previously, and show that the ARMA models outperform the AR models. We attribute this to the richer structure of ARMA models, which contain more terms than AR models. We show that the signal-to-noise ratio (SNR) is about 2 dB higher for the ARMA-based method.
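
    The essence of the approach, fitting a time-series model to obtain a state-space form and then Kalman-filtering the motion-corrupted signal, can be sketched with a scalar AR(1) state in place of a full ARMA(p,q) companion-form state vector. All parameters below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def kalman_ar1(y, phi, q, r):
        """Scalar Kalman filter assuming the clean component follows an AR(1)
        state x_t = phi*x_{t-1} + w_t (variance q), observed as
        y_t = x_t + v_t with motion-like noise of variance r."""
        x, p = y[0], 1.0
        out = np.empty_like(y)
        for i, obs in enumerate(y):
            x, p = phi * x, phi * p * phi + q        # predict
            k = p / (p + r)                          # Kalman gain
            x, p = x + k * (obs - x), (1 - k) * p    # update
            out[i] = x
        return out

    # Synthetic "clean" AR(1) signal corrupted by motion-like noise.
    rng = np.random.default_rng(4)
    n = 500
    clean = np.zeros(n)
    for i in range(1, n):
        clean[i] = 0.95 * clean[i - 1] + rng.normal(0, 0.1)
    artifact = rng.normal(0, 0.5, n)
    est = kalman_ar1(clean + artifact, phi=0.95, q=0.01, r=0.25)
    print("noise var before:", round(float(np.var(artifact)), 3),
          "residual var after:", round(float(np.var(clean - est)), 3))
    ```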

  13. Estimation of the whole-body averaged SAR of grounded human models for plane wave exposure at respective resonance frequencies.

    Science.gov (United States)

    Hirata, Akimasa; Yanase, Kazuya; Laakso, Ilkka; Chan, Kwok Hung; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi; Conil, Emmanuelle; Wiart, Joe

    2012-12-21

    According to the international guidelines, the whole-body averaged specific absorption rate (WBA-SAR) is used as a metric of basic restriction for radio-frequency whole-body exposure. It is well known that the WBA-SAR largely depends on the frequency of the incident wave for a given incident power density. The frequency at which the WBA-SAR becomes maximal is called the 'resonance frequency'. Our previous study proposed a scheme for estimating the WBA-SAR at this resonance frequency based on an analogy between the power absorption characteristic of human models in free space and that of a dipole antenna. However, a scheme for estimating the WBA-SAR in a grounded human has not been discussed sufficiently, even though the WBA-SAR in a grounded human is larger than that in an ungrounded human. In this study, with the use of the finite-difference time-domain method, the grounded condition is confirmed to be the worst-case exposure for human body models in a standing posture. Then, WBA-SARs in grounded human models are calculated at their respective resonance frequencies. A formula for estimating the WBA-SAR of a human standing on the ground is proposed based on an analogy with a quarter-wavelength monopole antenna. First, homogenized human body models are shown to provide a conservative WBA-SAR as compared with anatomically based models. Based on the formula proposed here, the WBA-SARs in grounded human models are approximately 10% larger than those in free space. The variability of the WBA-SAR was shown to be ±30% even for humans of the same age, which is caused by differences in body shape.
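
    The quarter-wavelength monopole analogy places the grounded resonance near the frequency at which body height equals a quarter wavelength, i.e. f ≈ c/(4h); for a 1.75 m adult this gives roughly 43 MHz, consistent with the well-known grounded human resonance around 40 MHz. A back-of-the-envelope check, with illustrative heights only (the paper's formula also accounts for body shape):

    ```python
    # Quarter-wavelength monopole analogy: resonance where h ~ lambda/4.
    C = 299_792_458.0  # speed of light, m/s

    for h in (1.75, 1.60, 1.10):  # adult, shorter adult, child (heights in m)
        f_res = C / (4 * h) / 1e6
        print(f"height {h:.2f} m -> grounded resonance ~ {f_res:.0f} MHz")
    ```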

  14. Stabilization for a Class of Switched Nonlinear Systems With Novel Average Dwell Time Switching by T-S Fuzzy Modeling.

    Science.gov (United States)

    Zhao, Xudong; Yin, Yunfei; Niu, Ben; Zheng, Xiaolong

    2016-08-01

    In this paper, the problem of switching stabilization for a class of switched nonlinear systems is studied by using average dwell time (ADT) switching, where the subsystems are possibly all unstable. First, a new concept of ADT is given, which differs from the traditional definition of ADT. Based on the newly proposed switching signals, a sufficient condition for the stabilization of switched nonlinear systems with unstable subsystems is derived. Then, the T-S fuzzy modeling approach is applied to represent the underlying nonlinear system to make the obtained condition easy to verify. A novel multiple quadratic Lyapunov function approach is also proposed, by which some conditions are provided in terms of a set of linear matrix inequalities to guarantee that the derived T-S fuzzy system is asymptotically stable. Finally, a numerical example is given to demonstrate the effectiveness of the developed results.

  15. A priori study of subgrid-scale features in turbulent Rayleigh-Bénard convection

    Science.gov (United States)

    Dabbagh, F.; Trias, F. X.; Gorobets, A.; Oliva, A.

    2017-10-01

    At the crossroad between flow topology analysis and turbulence modeling, a priori studies are a reliable tool to understand the underlying physics of the subgrid-scale (SGS) motions in turbulent flows. In this paper, properties of the SGS features in the framework of a large-eddy simulation are studied for a turbulent Rayleigh-Bénard convection (RBC). To do so, data from direct numerical simulation (DNS) of a turbulent air-filled RBC in a rectangular cavity of aspect ratio unity and π spanwise open-ended distance are used at two Rayleigh numbers Ra ∈ {10^8, 10^10} [Dabbagh et al., "On the evolution of flow topology in turbulent Rayleigh-Bénard convection," Phys. Fluids 28, 115105 (2016)]. First, DNS at Ra = 10^8 is used to assess the performance of eddy-viscosity models such as QR, Wall-Adapting Local Eddy-viscosity (WALE), and the recent S3PQR models proposed by Trias et al. ["Building proper invariants for eddy-viscosity subgrid-scale models," Phys. Fluids 27, 065103 (2015)]. The outcomes imply that the eddy-viscosity modeling smoothes the coarse-grained viscous straining and retrieves fairly well the effect of the kinetic unfiltered scales in order to reproduce the coherent large scales. However, these models fail to approach the exact evolution of the SGS heat flux and are incapable of reproducing well the further dominant rotational enstrophy pertaining to the buoyant production. Afterwards, the key ingredients of eddy-viscosity, ν_t, and eddy-diffusivity, κ_t, are calculated a priori and reveal prevalent positive values, maintaining a turbulent wind essentially driven by the mean buoyant force at the sidewalls. The topological analysis suggests that the effective turbulent diffusion paradigm and the hypothesis of a constant turbulent Prandtl number are only applicable in the large-scale strain-dominated areas in the bulk. It is shown that the bulk-dominated rotational structures of vortex-stretching (and its synchronous viscous dissipative structures) hold

  16. A framework for epistemic uncertainty quantification of turbulent scalar flux models for Reynolds-averaged Navier-Stokes simulations

    Science.gov (United States)

    Gorlé, C.; Iaccarino, G.

    2013-05-01

    Reynolds-averaged Navier-Stokes (RANS) simulations are a practical approach for solving complex multi-physics turbulent flows, but the underlying assumptions of the turbulence models introduce errors and uncertainties in the simulation outcome. The flow in scramjet combustors is an example of such a complex flow, and the accurate characterization of safety and operability limits of these engines using RANS simulations requires an assessment of the model uncertainty. The objective of this paper is to present a framework for the epistemic uncertainty quantification of turbulence and mixing models in RANS simulations. The capabilities of the methodology are demonstrated by performing simulations of the mixing of an underexpanded jet in a supersonic cross flow, which involves many flow features observed in scramjet engines. The fundamental sources of uncertainty in the RANS simulations are the models used for the Reynolds stresses in the momentum equations and the turbulent scalar fluxes in the scalar transport equations. The methodology consists of directly perturbing the modeled quantities in the equations, thereby establishing a method that is completely independent of the initial model form, to overcome the limitations of traditional sensitivity studies. The perturbations are defined in terms of the decomposed Reynolds stress tensor, i.e., the tensor magnitude and the eigenvalues and eigenvectors of the normalized anisotropy tensor. The turbulent scalar fluxes are perturbed by using the perturbed Reynolds stresses in a generalized gradient diffusion model formulation and by changing the model constant. The perturbations were parameterized based on a comparison between the Reynolds stresses obtained from a baseline RANS simulation and those obtained from a large-eddy simulation database. Subsequently an optimization problem was solved, varying the parameters in the perturbation functions to maximize a quantity of interest that quantifies the downstream mixing. The
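
    The eigenvalue-perturbation idea, nudging the normalized anisotropy tensor toward a limiting state while preserving the turbulent kinetic energy and the eigenvectors, can be sketched as below. This is a generic illustration of the decomposition described above, not the authors' exact parameterization; the stress tensor and perturbation magnitude are made up.

    ```python
    import numpy as np

    def perturb_reynolds_stress(R, k, delta, corner):
        """Perturb the anisotropy eigenvalues of a Reynolds stress tensor R
        toward a limiting state ('corner' eigenvalues, ascending order),
        keeping eigenvectors and turbulent kinetic energy k fixed.
        delta in [0, 1] sets the perturbation magnitude."""
        a = R / (2 * k) - np.eye(3) / 3          # normalized anisotropy tensor
        lam, vecs = np.linalg.eigh(a)            # ascending eigenvalues
        lam_pert = (1 - delta) * lam + delta * corner
        a_pert = vecs @ np.diag(lam_pert) @ vecs.T
        return 2 * k * (a_pert + np.eye(3) / 3)

    # One-component limiting state of the anisotropy eigenvalues.
    ONE_C = np.array([-1 / 3, -1 / 3, 2 / 3])

    R = np.array([[0.10, 0.02, 0.00],
                  [0.02, 0.06, 0.01],
                  [0.00, 0.01, 0.04]])
    k = 0.5 * np.trace(R)
    R_1c = perturb_reynolds_stress(R, k, delta=0.3, corner=ONE_C)
    print("trace (2k) preserved:", np.isclose(np.trace(R_1c), np.trace(R)))
    ```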

  17. A Novel Approach to Testing for Average Bioequivalence Based on Modeling the Within-Period Dependence Structure.

    Science.gov (United States)

    Chandrasekhar, Rameela; Shi, Yi; Hutson, Alan D; Wilding, Gregory E

    2015-01-01

    Bioequivalence trials are commonly conducted to assess therapeutic equivalence between generic and innovator brand formulations. In such trials, drug concentrations are obtained repeatedly over time and are summarized using a metric such as the area under the concentration vs. time curve (AUC) for each subject. The usual practice is then to conduct two one-sided tests using these areas to evaluate average bioequivalence. A major disadvantage of this approach is the loss of information incurred when ignoring the correlation structure between repeated measurements in the computation of areas. In this article, we propose a general linear model approach that incorporates the within-subject covariance structure for making inferences on mean areas. The model-based method can be seen to arise naturally from the reparameterization of the AUC as a linear combination of outcome means. We investigate and compare the inferential properties of our proposed method with the traditional two one-sided tests approach using Monte Carlo simulation studies. We also examine the properties of the method in the event of missing data. Simulations show that the proposed approach is a cost-effective, viable alternative to the traditional method with superior inferential properties. Inferential advantages are particularly apparent in the presence of missing data. To illustrate our approach, a real working example from an asthma study is utilized.
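
    The reparameterization of the AUC as a linear combination of outcome means follows directly from the trapezoidal rule: AUC = sum_i w_i * C_i, with weights w_i determined only by the sampling times. A small numerical check of that identity, with hypothetical times and concentrations:

    ```python
    import numpy as np

    def trapezoid_weights(times):
        """Weights w such that AUC = sum_i w_i * C_i under the trapezoidal
        rule, making the AUC a linear combination of the concentrations."""
        t = np.asarray(times, dtype=float)
        w = np.zeros_like(t)
        w[0] = (t[1] - t[0]) / 2
        w[-1] = (t[-1] - t[-2]) / 2
        w[1:-1] = (t[2:] - t[:-2]) / 2
        return w

    times = np.array([0, 0.5, 1, 2, 4, 8, 12])            # sampling times (h)
    conc = np.array([0.0, 1.8, 2.6, 2.1, 1.2, 0.5, 0.2])  # concentrations
    w = trapezoid_weights(times)
    print(np.dot(w, conc), np.trapz(conc, times))  # identical by construction
    ```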

  18. Fusing moving average model and stationary wavelet decomposition for automatic incident detection: case study of Tokyo Expressway

    Directory of Open Access Journals (Sweden)

    Qinghua Liu

    2014-12-01

    Full Text Available Traffic congestion is a growing problem in urban areas all over the world. The transport sector has been actively studying intelligent transportation systems for automatic detection. Automatic incident detection on expressways is a primary objective of advanced traffic management systems. In order to save lives and prevent secondary incidents, accurate and prompt incident detection is necessary. This paper presents a methodology that integrates a moving average (MA) model with stationary wavelet decomposition for automatic incident detection, in which layer coefficient parameters are extracted from the difference between the upstream and downstream occupancy. Unlike other wavelet-based methods presented before, it first smooths the raw data with the MA model. It then applies the stationary wavelet decomposition, which achieves accurate reconstruction of the signal and does not shift the transform coefficients. Thus, it can detect incidents more accurately. The threshold that triggers the incident alarm is also adjusted according to normal traffic conditions with congestion. The methodology is validated with real data from Tokyo Expressway ultrasonic sensors. Experimental results show that it is accurate and effective, and that it can differentiate traffic accidents from other conditions such as recurring traffic congestion.
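
    A sketch of the pipeline as described, moving-average smoothing of the upstream minus downstream occupancy difference, followed by a stationary (undecimated, hence shift-invariant) wavelet decomposition and thresholding of the detail coefficients, using PyWavelets. The wavelet choice, window, and threshold are illustrative assumptions, not the paper's calibration.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def detect_incident(up_occ, down_occ, ma_window=5, level=3, thresh=3.0):
        """Flag samples whose finest-scale SWT detail coefficients, computed
        from the MA-smoothed occupancy difference, are unusually large."""
        diff = np.asarray(up_occ) - np.asarray(down_occ)
        kernel = np.ones(ma_window) / ma_window
        smooth = np.convolve(diff, kernel, mode="same")   # MA smoothing
        # SWT requires the length to be a multiple of 2**level.
        n = len(smooth) - len(smooth) % 2**level
        coeffs = pywt.swt(smooth[:n], "db4", level=level)
        d1 = coeffs[-1][1]                 # finest-scale detail coefficients
        score = np.abs(d1) / (np.std(d1) + 1e-12)
        return score > thresh              # boolean incident flags

    # Synthetic occupancy traces with a downstream drop mimicking an incident.
    rng = np.random.default_rng(5)
    up = 20 + rng.normal(0, 1, 512)
    down = 20 + rng.normal(0, 1, 512)
    down[300:340] -= 8
    flags = detect_incident(up, down)
    print("flagged samples:", np.flatnonzero(flags)[:10], "...")
    ```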

  19. Statistical comparison of models for estimating the monthly average daily diffuse radiation at a subtropical African site

    Energy Technology Data Exchange (ETDEWEB)

    Bashahu, M. [University of Burundi, Bujumbura (Burundi). Institute of Applied Pedagogy, Department of Physics and Technology

    2003-07-01

    Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. Sixteen years of data on the global (H) and diffuse (H_d) radiation, together with data on the bright sunshine hours (N), the fractional cloud cover (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), have been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts; thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between K_d and N/N_d, Ne/8 or K_t. While still performing adequately, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)

  20. Convective kinetic energy equation under the mass-flux subgrid-scale parameterization

    Science.gov (United States)

    Yano, Jun-Ichi

    2015-03-01

    The present paper originally derives the convective kinetic energy equation under the mass-flux subgrid-scale parameterization in a formal manner based on the segmentally-constant approximation (SCA). Though this equation was presented long ago by Arakawa and Schubert (1974), a formal derivation is not known in the literature. The derivation of this formulation has been of increasing interest in recent years due to the fact that it can explain basic aspects of convective dynamics such as discharge-recharge and the transition from shallow to deep convection. The derivation is presented in two manners: (i) for the case that only the vertical component of the velocity is considered and (ii) for the case that both the horizontal and vertical components are considered. The equation reduces to the same form as originally presented by Arakawa and Schubert in both cases, but with the energy dissipation term defined differently. In both cases, nevertheless, the energy "dissipation" (loss) term consists of three principal contributions: (i) entrainment-detrainment, (ii) outflow from the top of convection, and (iii) pressure effects. Additionally, inflow from the bottom of convection, contributing to the growth of convection, is also formally counted as a part of the dissipation term. The eddy dissipation is also included for completeness. An order-of-magnitude analysis shows that the convective kinetic energy "dissipation" is dominated by the pressure effects, and it may be approximately described by Rayleigh damping with a constant time scale of the order of 10^2-10^3 s. The conclusion is also supported by a supplementary analysis of a cloud-resolving model (CRM) simulation. The Appendix discusses how the loss term ("dissipation") of the convective kinetic energy is qualitatively different from the conventional eddy-dissipation process found in turbulent flows.

  1. ESTIMATION OF ANNUAL AVERAGE SOIL LOSS, BASED ON RUSLE MODEL IN KALLAR WATERSHED, BHAVANI BASIN, TAMIL NADU, INDIA

    Directory of Open Access Journals (Sweden)

    S. Abdul Rahaman

    2015-10-01

    Full Text Available Soil erosion is a widespread environmental challenge faced in the Kallar watershed nowadays. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is through the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub-watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (Revised Universal Soil Loss Equation, RUSLE) to assess risk, can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha^-1 y^-1. Based on the result, soil erosion was classified into a soil erosion severity map with five classes: very low, low, moderate, high and critical, respectively. Further, the RUSLE factors were broken into two categories, and soil erosion susceptibility (A=RKLS) and soil erosion hazard (A=RKLSCP) have been computed. It is understood that C and P are factors that can be controlled and thus can greatly reduce soil loss through management and conservation measures.

  2. Estimation of Annual Average Soil Loss, Based on Rusle Model in Kallar Watershed, Bhavani Basin, Tamil Nadu, India

    Science.gov (United States)

    Rahaman, S. Abdul; Aruchamy, S.; Jegankumar, R.; Ajeez, S. Abdul

    2015-10-01

    Soil erosion is a widespread environmental challenge faced in the Kallar watershed nowadays. Erosion is defined as the movement of soil by water and wind, and it occurs in the Kallar watershed under a wide range of land uses. Erosion by water can be dramatic during storm events, resulting in wash-outs and gullies. It can also be insidious, occurring as sheet and rill erosion during heavy rains. Most of the soil lost by water erosion is through the processes of sheet and rill erosion. Land degradation and subsequent soil erosion and sedimentation play a significant role in impairing water resources within sub-watersheds, watersheds and basins. Using conventional methods to assess soil erosion risk is expensive and time consuming. A comprehensive methodology that integrates Remote sensing and Geographic Information Systems (GIS), coupled with the use of an empirical model (Revised Universal Soil Loss Equation, RUSLE) to assess risk, can identify and assess soil erosion potential and estimate the value of soil loss. GIS data layers including rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and conservation practice (P) factors were computed to determine their effects on average annual soil loss in the study area. The final map of annual soil erosion shows a maximum soil loss of 398.58 t ha^-1 y^-1. Based on the result, soil erosion was classified into a soil erosion severity map with five classes: very low, low, moderate, high and critical, respectively. Further, the RUSLE factors were broken into two categories, and soil erosion susceptibility (A=RKLS) and soil erosion hazard (A=RKLSCP) have been computed. It is understood that C and P are factors that can be controlled and thus can greatly reduce soil loss through management and conservation measures.
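
    The RUSLE computation itself is an elementwise product of the factor grids, A = R · K · LS · C · P, followed by classification into severity classes. A minimal raster sketch; the factor values and the class breaks below are illustrative, not the study's calibration.

    ```python
    import numpy as np

    def rusle_soil_loss(R, K, LS, C, P):
        """Cell-by-cell RUSLE estimate A = R*K*LS*C*P plus a simple severity
        classification (class breaks are illustrative)."""
        A = R * K * LS * C * P
        bins = [5, 10, 20, 40]  # very low / low / moderate / high / critical
        severity = np.digitize(A, bins)
        return A, severity

    # Tiny hypothetical 3x3 raster of factor grids (units as in RUSLE).
    R = np.full((3, 3), 550.0)               # rainfall erosivity
    K = np.array([[0.2, 0.3, 0.25]] * 3)     # soil erodibility
    LS = np.array([[0.5, 2.0, 8.0]] * 3)     # slope length-steepness
    C = np.full((3, 3), 0.25)                # cover management
    P = np.full((3, 3), 0.8)                 # conservation practice
    A, sev = rusle_soil_loss(R, K, LS, C, P)
    print(A.round(1), sev, sep="\n")
    ```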

  3. Draft environmental impact statement : corporate average fuel economy standards, passenger cars and light trucks, model years 2011-2015.

    Science.gov (United States)

    2008-06-01

    The National Highway Traffic Safety Administration (NHTSA) has prepared this Draft Environmental Impact Statement (DEIS) to disclose and analyze the potential environmental impacts of the proposed new Corporate Average Fuel Economy (CAFE) standards a...

  4. Explaining the seasonal cycle of the globally averaged CO2 with a carbon-cycle model

    OpenAIRE

    G. A. Alexandrov

    2014-01-01

    The seasonal changes in the globally averaged atmospheric carbon-dioxide concentrations reflect an important aspect of the global carbon cycle: the gas exchange between the atmosphere and terrestrial biosphere. The data on the globally averaged atmospheric carbon-dioxide concentrations, which are reported by Earth System Research Laboratory of the US National Oceanic & Atmospheric Administration (NOAA/ESRL), could be used to demonstrate the adequacy of the global carbon-cycl...

  5. The effect of two training models on the average changes in running speed in 2400m races

    Directory of Open Access Journals (Sweden)

    Bolas Nikolaos

    2014-01-01

    Full Text Available Running at an even pace is, in both physical and tactical aspects, an essential factor in achieving good results in middle- and long-distance races. The appropriate strategy for running a tactically effective race starts with selecting the optimal running speed. Two models of training, each lasting six weeks, were applied to a group of subjects (N=43) composed of students from the Faculty of Sport and Physical Education, University of Belgrade. The aim of the study was to determine how the applied models of training would affect the deviations of running speed from the mean values in 2400m races when running for the best result, and also how they would affect the improvement of aerobic capacity, expressed through maximal oxygen uptake. The analysis of the obtained results showed that no statistically significant differences in the average deviations of running speed from the mean values in 2400m races were recorded in either of the experimental groups, whether in the initial (G1=2.44±1.74 % and G2=1±0.75 %) or the final measurements (G1=3.72±3.69 % and G2=4.57±3.63 %). Although there were no statistically significant differences after the training stimulus in either final measurement, the subjects achieved a better result; that is, they improved the running speed in the final measurement (G1=4.12±0.48 m/s and G2=4.23±0.31 m/s) as compared with the initial measurement (G1=3.7±0.36 m/s and G2=3.84±0.38 m/s). The results of the study showed that in both groups there was a statistically significant improvement in the final measurement (G1=56.05±6.91 ml/kg/min and G2=59.55±6.95 ml/kg/min) as compared to the initial measurement (G1=53.71±7.23 ml/kg/min and G2=54.58±6.49 ml/kg/min) regarding maximal oxygen uptake, so both training models have a significant effect on this variable. The results obtained could have a significant contribution when working with students and the school population, assuming that in the lessons of theory and

  6. RVMAB: Using the Relevance Vector Machine Model Combined with Average Blocks to Predict the Interactions of Proteins from Protein Sequences

    Directory of Open Access Journals (Sweden)

    Ji-Yong An

    2016-05-01

    Full Text Available Protein-Protein Interactions (PPIs) play essential roles in most cellular processes. Knowledge of PPIs is becoming increasingly important, which has prompted the development of technologies capable of discovering large-scale PPIs. Although many high-throughput biological technologies have been proposed to detect PPIs, they have unavoidable shortcomings, including cost, time intensity, and inherently high false positive and false negative rates. For these reasons, in silico methods are attracting much attention due to their good performance in predicting PPIs. In this paper, we propose a novel computational method known as RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the AB feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM) based classifier. We performed five-fold cross-validation experiments on yeast and Helicobacter pylori datasets, and achieved very high accuracies of 92.98% and 95.58% respectively, which is significantly better than previous works. In addition, we also obtained good prediction accuracies of 88.31%, 89.46%, 91.08%, 91.55%, and 94.81% on five other independent datasets (C. elegans, M. musculus, H. sapiens, H. pylori, and E. coli) for cross-species prediction. To further evaluate the proposed method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-AB method is clearly better than the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can be an automatic decision support tool. To facilitate extensive studies for future proteomics research, we developed

  7. Comparison of several models for long term monthly average daily insolation on horizontal surfaces and the estimation of horizontal surface insolation for 16 U. S. locations

    Science.gov (United States)

    Goswami, T. K.; Klett, D. E.

    1980-11-01

    Six models for estimating monthly average daily total insolation were compared with rehabilitated measured data for seven U.S. locations. The models compared are those of Bennett, Barbaro, et al., Sabbagh, et al., Reddy, Danashyar, and Swartman and Ogunlade. As a result of the comparison, the Barbaro model was chosen to estimate the insolation for U.S. locations for which the required climatological input data exists but for which no insolation data is available. The model was modified by a constant multiplier and used to generate average daily insolation values for the 16 locations. The results are tabulated for use by solar system designers.

  8. Autonomous Operation of Hybrid Microgrid with AC and DC Sub-Grids

    DEFF Research Database (Denmark)

    Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper investigates the active and reactive power sharing of an autonomous hybrid microgrid. Unlike existing microgrids, which are purely ac, the hybrid microgrid studied here comprises dc and ac sub-grids interconnected by power electronic interfaces. The main challenge is to manage the power flow among all the sources distributed throughout the two types of sub-grids, which is certainly tougher than in previous efforts developed for either a purely ac or a purely dc microgrid. This wider scope of control has not yet been investigated, and would certainly rely on the coordinated operation of dc sources, ac sources and interlinking converters. Suitable control and normalization schemes are therefore developed for controlling them, with results presented to show the overall performance of the hybrid microgrid.

  9. Impact of Sub-grid Soil Textural Properties on Simulations of Hydrological Fluxes at the Continental Scale Mississippi River Basin

    Science.gov (United States)

    Kumar, R.; Samaniego, L. E.; Livneh, B.

    2013-12-01

    Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically large scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km2) at multiple spatio-temporal resolutions. A set of numerical experiments were conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States or STATSGO2 (1:250 000) and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, the 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the time period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets, in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from both simulations, showed marked

  10. [Model of multiple seasonal autoregressive integrated moving average model and its application in prediction of the hand-foot-mouth disease incidence in Changsha].

    Science.gov (United States)

    Tan, Ting; Chen, Lizhang; Liu, Fuqiang

    2014-11-01

    To establish a multiple seasonal autoregressive integrated moving average (ARIMA) model for the hand-foot-mouth disease incidence in Changsha, and to explore the feasibility of the multiple seasonal ARIMA in predicting the hand-foot-mouth disease incidence. EVIEWS 6.0 was used to establish the multiple seasonal ARIMA from the hand-foot-mouth disease incidence from May 2008 to August 2013 in Changsha, and the data on the hand-foot-mouth disease incidence from September 2013 to February 2014 served as the test samples for the model; the errors were then compared between the forecasted incidence and the real values. Finally, the incidence of hand-foot-mouth disease from March 2014 to August 2014 was predicted by the model. After the data sequence was made stationary and the model was identified and diagnosed, the multiple seasonal ARIMA(1,0,1)×(0,1,1)12 was established. The R2 value of the model fitting degree was 0.81, the root mean square prediction error was 8.29 and the mean absolute error was 5.83. The multiple seasonal ARIMA is a good prediction model with a good fitting degree. It can provide a reference for prevention and control work on hand-foot-mouth disease.
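
    The same model class is available in Python's statsmodels (the study used EVIEWS); a sketch with a synthetic monthly series and the order identified above, ARIMA(1,0,1)×(0,1,1)12:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly incidence as a stand-in for the May 2008 - Aug 2013 data.
rng = np.random.default_rng(1)
t = np.arange(64)
y = 20 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, t.size)
series = pd.Series(y, index=pd.date_range("2008-05-01", periods=t.size, freq="MS"))

# Multiple seasonal ARIMA(1,0,1)x(0,1,1)12, as identified in the abstract.
fit = SARIMAX(series, order=(1, 0, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(fit.forecast(steps=6))  # six months ahead, cf. the Sep 2013 - Feb 2014 validation
```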

  11. Volume-averaged SAR in adult and child head models when using mobile phones: a computational study with detailed CAD-based models of commercial mobile phones.

    Science.gov (United States)

    Keshvari, Jafar; Heikkilä, Teemu

    2011-12-01

    Previous studies comparing the SAR difference in the heads of children and adults used highly simplified generic models or half-wave dipole antennas. The objective of this study was to investigate the SAR difference in the heads of children and adults using realistic EMF sources based on CAD models of commercial mobile phones. Four MRI-based head phantoms were used in the study. CAD models of Nokia 8310 and 6630 mobile phones were used as exposure sources. Commercially available FDTD software was used for the SAR calculations. SAR values were simulated at frequencies of 900 MHz and 1747 MHz for the Nokia 8310, and 900 MHz, 1747 MHz and 1950 MHz for the Nokia 6630. The main finding of this study was that the SAR distribution/variation in the head models depends highly on the structure of the antenna and the phone model, which suggests that the type of exposure source is the main parameter to focus on in EMF exposure studies. Although the previous findings regarding the significant role of the anatomy of the head, phone position, frequency, local tissue inhomogeneity and tissue composition (specifically in the exposed area) in SAR differences were confirmed, the SAR values and SAR distributions caused by generic source models cannot be extrapolated to real device exposures. The general conclusion is that, from a volume-averaged SAR point of view, no systematic differences between child and adult heads were found.

  12. Modeling photosynthesis in sea ice-covered waters

    Science.gov (United States)

    Long, Matthew C.; Lindsay, Keith; Holland, Marika M.

    2015-09-01

    The lower trophic levels of marine ecosystems play a critical role in the Earth System mediating fluxes of carbon to the ocean interior. Many of the functional relationships describing biological rate processes, such as primary productivity, in marine ecosystem models are nonlinear functions of environmental state variables. As a result of nonlinearity, rate processes computed from mean fields at coarse resolution will differ from similar computations that incorporate small-scale heterogeneity. Here we examine how subgrid-scale variability in sea ice thickness impacts simulated net primary productivity (NPP) in a 1°×1° configuration of the Community Earth System Model (CESM). CESM simulates a subgrid-scale ice thickness distribution and computes shortwave penetration independently for each ice thickness category. However, the default model formulation uses grid-cell mean irradiance to compute NPP. We demonstrate that accounting for subgrid-scale shortwave heterogeneity by computing light limitation terms under each ice category then averaging the result is a more accurate invocation of the photosynthesis equations. Moreover, this change delays seasonal bloom onset and increases interannual variability in NPP in the sea ice zone in the model. The new treatment reduces annual production by about 32% in the Arctic and 19% in the Antarctic. Our results highlight the importance of considering heterogeneity in physical fields when integrating nonlinear biogeochemical reactions.
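
    The correction described above amounts to averaging the nonlinear light-limitation term over the ice-thickness categories instead of evaluating it at the grid-cell mean irradiance. A toy illustration of why the two differ (the saturating response and category fractions are made up, not CESM's):

```python
import numpy as np

# Hypothetical under-ice irradiance for five thickness categories (W m^-2)
# and the areal fraction of each category within one grid cell.
irradiance = np.array([2.0, 10.0, 40.0, 80.0, 150.0])
area_frac = np.array([0.10, 0.20, 0.30, 0.25, 0.15])

k = 30.0                                   # assumed half-saturation constant
f = lambda I: I / (I + k)                  # saturating light-limitation term

f_of_mean = f(np.sum(area_frac * irradiance))   # default: mean irradiance first
mean_of_f = np.sum(area_frac * f(irradiance))   # corrected: average the limits

# f is concave, so f(mean) >= mean(f): the default overestimates production.
print(f_of_mean, mean_of_f)
```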

  13. A depth-averaged 2-D shallow water model for breaking and non-breaking long waves affected by rigid vegetation

    Science.gov (United States)

    This paper presents a depth-averaged two-dimensional shallow water model for simulating long waves in vegetated water bodies under breaking and non-breaking conditions. The effects of rigid vegetation are modelled in the form of drag and inertia forces as sink terms in the momentum equations. The dr...

  14. FDTD calculation of whole-body average SAR in adult and child models for frequencies from 30 MHz to 3 GHz

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jianqing [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan); Fujiwara, Osamu [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan); Kodera, Sachiko [Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555 (Japan); Watanabe, Soichi [National Institute of Information and Communications Technology, Nukui-kitamachi, Koganei, Tokyo 184-8795 (Japan)

    2006-09-07

    Due to the difficulty of measuring the specific absorption rate (SAR) in an actual human body under electromagnetic radio-frequency (RF) exposure, various compliance assessment procedures use the incident electric field or power density as a reference level, which should never yield a larger whole-body average SAR than the basic safety limit. The relationship between the reference level and the whole-body average SAR, however, was established mainly from numerical calculations for highly simplified human models decades ago. Its validity is being questioned by the latest calculation results. In verifying the validity of the reference level with respect to the basic SAR limit for RF exposure, it is essential to have high accuracy in the human modelling and the numerical code. In this study, we made a detailed error analysis of the whole-body average SAR calculation for the finite-difference time-domain (FDTD) method in conjunction with perfectly matched layer (PML) absorbing boundaries. We derived a basic rule for the PML employment based on a dielectric sphere and the Mie theory solution. We then attempted to clarify to what extent the whole-body average SAR may reach using an anatomically based Japanese adult model and a scaled child model. The results show that the whole-body average SAR under the ICNIRP reference level exceeds the basic safety limit by nearly 30% for the child model in both the resonance frequency and the 2 GHz bands.

  15. Forecasting species distributions with geo-spatial data: R objects that predict from averages of competing statistical models or data mining methods

    Science.gov (United States)

    Salas, L. A.; Veloz, S.; Ballard, G.

    2011-12-01

    Most forecasting approaches based on statistical models and data mining methods share a set of characteristics: all are constructed from training sets and validated against test sets using methods that avoid over-fitting to the training data; standard validation methods are used (e.g., AUC values for binary response data); some form of model averaging is applied when predicting new values from a set of competing models; and measurements of prediction error and goodness-of-fit of each competing model are reported and made spatially explicit. Many packages exist in R to fit statistical models and for data mining, but few include algorithms for forecasting, and there are no model-averaging methods. However, results from these packages are commonly reported in R objects (S4 classes) that usually extend from other objects, and so they share common methods (e.g., "predict", "aic"). Here we illustrate an approach that takes advantage of the abovementioned commonalities to develop a "framework" of objects that fit competing models with algorithms for forecasting and include model-averaging methods. These objects can be easily extended to incorporate new kinds of statistical and data mining methods. We illustrate this approach with three types of objects and show how to interact with them to produce weighted averages from competing models, and some tabular and graphic outputs. These objects have been compiled into an R package ("RavianForecasting" - http://data.prbo.org/apps/ravian). We encourage others to use and contribute toward the development of these types of forecasting objects, or to develop alternatives with similar flexibility. We show how these can be easily extended to incorporate new statistical methods, new outputs, new methods to weight averages, and new methods to validate the models.
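
    The package itself is in R; purely to illustrate the model-averaging step it describes, here is a Python sketch that weights competing models' predictions by Akaike weights (the models and data are invented):

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.3, x.size)

preds, aics = [], []
for degree in (1, 2, 3):                       # three competing models
    coefs = P.polyfit(x, y, degree)
    yhat = P.polyval(x, coefs)
    rss = np.sum((y - yhat) ** 2)
    aics.append(x.size * np.log(rss / x.size) + 2 * (degree + 1))
    preds.append(yhat)

aics = np.asarray(aics)
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()                                   # Akaike weights
averaged = w @ np.vstack(preds)                # weighted-average prediction
print(np.round(w, 3))
```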

  16. Predictions of flow through an isothermal serpentine passage with linear eddy-viscosity Reynolds Averaged Navier Stokes models.

    Energy Technology Data Exchange (ETDEWEB)

    Laskowski, Gregory Michael

    2005-12-01

    Flows with strong curvature present a challenge for turbulence models, specifically eddy viscosity type models which assume isotropy and a linear and instantaneous equilibrium relation between stress and strain. Results obtained from three different codes and two different linear eddy viscosity turbulence models are compared to a DNS simulation in order to gain some perspective on the turbulence modeling capability of SIERRA/Fuego. The Fuego v2f results are superior to the more common two-layer k-e model results obtained with both a commercial and research code in terms of the concave near wall behavior predictions. However, near the convex wall, including the separated region, little improvement is gained using the v2f model and in general the turbulent kinetic energy prediction is fair at best.

  17. Accelerating Universe via Spatial Averaging

    OpenAIRE

    Nambu, Yasusada; TANIMOTO, Masayuki

    2005-01-01

    We present a model of an inhomogeneous universe that leads to accelerated expansion after taking spatial averaging. The model universe is the Tolman-Bondi solution of the Einstein equation and contains both a region with positive spatial curvature and a region with negative spatial curvature. We find that after the region with positive spatial curvature begins to re-collapse, the deceleration parameter of the spatially averaged universe becomes negative and the averaged universe starts accele...

  18. Disease mapping and regression with count data in the presence of overdispersion and spatial autocorrelation: a Bayesian model averaging approach.

    Science.gov (United States)

    Mohebbi, Mohammadreza; Wolfe, Rory; Forbes, Andrew

    2014-01-09

    This paper applies the generalised linear model for modelling geographical variation to esophageal cancer incidence data in the Caspian region of Iran. The data have a complex and hierarchical structure that makes them suitable for hierarchical analysis using Bayesian techniques, but with care required to deal with problems arising from counts of events observed in small geographical areas when overdispersion and residual spatial autocorrelation are present. These considerations lead to nine regression models derived from using three probability distributions for count data: Poisson, generalised Poisson and negative binomial, and three different autocorrelation structures. We employ the framework of Bayesian variable selection and a Gibbs sampling based technique to identify significant cancer risk factors. The framework deals with situations where the number of possible models based on different combinations of candidate explanatory variables is large enough such that calculation of posterior probabilities for all models is difficult or infeasible. The evidence from applying the modelling methodology suggests that modelling strategies based on the use of generalised Poisson and negative binomial with spatial autocorrelation work well and provide a robust basis for inference.

  19. Disease Mapping and Regression with Count Data in the Presence of Overdispersion and Spatial Autocorrelation: A Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Mohammadreza Mohebbi

    2014-01-01

    Full Text Available This paper applies the generalised linear model for modelling geographical variation to esophageal cancer incidence data in the Caspian region of Iran. The data have a complex and hierarchical structure that makes them suitable for hierarchical analysis using Bayesian techniques, but with care required to deal with problems arising from counts of events observed in small geographical areas when overdispersion and residual spatial autocorrelation are present. These considerations lead to nine regression models derived from using three probability distributions for count data: Poisson, generalised Poisson and negative binomial, and three different autocorrelation structures. We employ the framework of Bayesian variable selection and a Gibbs sampling based technique to identify significant cancer risk factors. The framework deals with situations where the number of possible models based on different combinations of candidate explanatory variables is large enough such that calculation of posterior probabilities for all models is difficult or infeasible. The evidence from applying the modelling methodology suggests that modelling strategies based on the use of generalised Poisson and negative binomial with spatial autocorrelation work well and provide a robust basis for inference.

  20. Disease Mapping and Regression with Count Data in the Presence of Overdispersion and Spatial Autocorrelation: A Bayesian Model Averaging Approach

    Science.gov (United States)

    Mohebbi, Mohammadreza; Wolfe, Rory; Forbes, Andrew

    2014-01-01

    This paper applies the generalised linear model for modelling geographical variation to esophageal cancer incidence data in the Caspian region of Iran. The data have a complex and hierarchical structure that makes them suitable for hierarchical analysis using Bayesian techniques, but with care required to deal with problems arising from counts of events observed in small geographical areas when overdispersion and residual spatial autocorrelation are present. These considerations lead to nine regression models derived from using three probability distributions for count data: Poisson, generalised Poisson and negative binomial, and three different autocorrelation structures. We employ the framework of Bayesian variable selection and a Gibbs sampling based technique to identify significant cancer risk factors. The framework deals with situations where the number of possible models based on different combinations of candidate explanatory variables is large enough such that calculation of posterior probabilities for all models is difficult or infeasible. The evidence from applying the modelling methodology suggests that modelling strategies based on the use of generalised Poisson and negative binomial with spatial autocorrelation work well and provide a robust basis for inference. PMID:24413702
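
    As a minimal, non-Bayesian illustration of the overdispersion issue the abstract raises, one can compare Poisson and negative binomial GLM fits to overdispersed counts with statsmodels (synthetic data; the paper's models additionally carry spatial autocorrelation terms):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=200)
mu = np.exp(0.5 + 0.8 * x)
# Poisson draws with gamma-distributed rates = negative binomial (overdispersed).
y = rng.poisson(rng.gamma(shape=2.0, scale=mu / 2.0))

X = sm.add_constant(x)
poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

print(poisson.pearson_chi2 / poisson.df_resid)  # >> 1 flags overdispersion
print(poisson.aic, negbin.aic)                  # NB should fit these data better
```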

  1. Reproducing the Ensemble Average Polar Solvation Energy of a Protein from a Single Structure: Gaussian-Based Smooth Dielectric Function for Macromolecular Modeling.

    Science.gov (United States)

    Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil

    2018-01-19

    Typically, the ensemble-average polar component of the solvation energy (ΔG_polar^solv) of a macromolecule is computed by using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate a conformational ensemble and then performing a single/rigid-conformation solvation energy calculation on each snapshot. The primary objective of this work is to demonstrate that a Poisson-Boltzmann (PB) based approach using a Gaussian-based smooth dielectric function for macromolecular modeling, previously developed by us (Li et al. J Chem Theory Comput 2013, 9 (4), 2126-2136), can reproduce the ensemble average ⟨ΔG_polar^solv⟩ of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces ⟨ΔG_polar^solv⟩ from an energy-minimized structure regardless of the minimization environment (structure minimized in vacuo, in implicit or explicit water, or the crystal structure). The best case, however, is when it is paired with an in vacuo minimized structure. In contrast, the traditional two-dielectric model is successful in reproducing ⟨ΔG_polar^solv⟩ only if the crystal structure or a structure minimized in solvent is used, the best being the case of the implicit-solvent minimized structure. Moreover, the traditional two-dielectric model tends to underestimate ⟨ΔG_polar^solv⟩ even when the internal dielectric constant of the macromolecule takes the lowest physically reasonable value of 1. Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially the salt-bridge residues, influences a dielectric model's ability to reproduce the ensemble-average value of the polar solvation free energy from a single structure.

  2. 2D Chaos in the Interaction of Inflation and Unemployment: Moving Averages and the Modeling of High Frequency Macrodynamics

    Directory of Open Access Journals (Sweden)

    Peter Flaschel

    2014-01-01

    Full Text Available The paper argues that applicable macro is high frequency macro and the data generating process is therefore to be modeled in continuous time. It exemplifies this with a misuse of a 2D period model of monetarist type which becomes extremely overshooting, allowing for routes to “chaos,” when iterated at low frequencies. Instead of such low frequency procedures, we augment the model by a Keynesian feedback chain (the real rate of interest channel to introduce local instability into the model. We also introduce heterogeneous opinion dynamics into it. The implied 4D dynamics are made bounded thereby, but seem to allow only complex limit cycles, with no transition towards strange attractors anymore.

  3. Bias in modeled bi-directional NH3 fluxes associated with temporal averaging of atmospheric NH3 concentrations

    Science.gov (United States)

    Direct flux measurements of NH3 are expensive, time consuming, and require detailed supporting measurements of soil, vegetation, and atmospheric chemistry for interpretation and model parameterization. It is therefore often necessary to infer fluxes by combining measurements of...

  4. The Prediction of Exchange Rates with the Use of Auto-Regressive Integrated Moving-Average Models

    Directory of Open Access Journals (Sweden)

    Daniela Spiesová

    2014-10-01

    Full Text Available The currency market is currently the largest market in the world, and over its existence many theories have been proposed for predicting the development of exchange rates based on macroeconomic, microeconomic, statistical and other models. The aim of this paper is to identify an adequate model for the prediction of non-stationary time series of exchange rates and then use this model to predict the trend of development of European currencies against the Euro. The uniqueness of this paper lies in the fact that while there are many expert studies dealing with the prediction of exchange rates of the American dollar against other currencies, only a limited number of scientific studies are concerned with the long-term prediction of European currencies with the help of integrated ARMA models, even though the development of exchange rates has a crucial impact on all levels of the economy and its prediction is an important indicator for individual countries, banks, companies and businessmen as well as for investors. The results of this study confirm that to predict the conditional variance and then to estimate the future values of exchange rates, it is adequate to use the ARIMA(1,1,1) model without a constant, or the ARIMA[(1,7),1,(1,7)] model, where in the long term the square root of the conditional variance inclines towards a stable value.

  5. Prediction of transitional boundary layers and fully turbulent free shear flows, using Reynolds averaged Navier-Stokes models

    Science.gov (United States)

    Lopez Varilla, Maurin Alberto

    One of the biggest unsolved problems of modern physics is the turbulence phenomena in fluid flow. The appearance of turbulence in a flow system is regularly determined by velocity and length scales of the system. If those scales are small the motion of the fluid is laminar, but at larger scales, disturbances appear and grow, leading the flow field to transition to a fully turbulent state. The prediction of transitional flow is critical for many complex fluid flow applications, such as aeronautical, aerospace, biomedical, automotive, chemical processing, heating and cooling systems, and meteorology. For example, in some cases the flow may remain laminar throughout a significant portion of a given domain, and fully turbulent simulations may produce results that can lead to inaccurate conclusions or inefficient design, due to an inability to resolve the details of the transition process. This work aims to develop, implement, and test a new model concept for the prediction of transitional flows using a linear eddy-viscosity RANS approach. The effects of transition are included through one additional transport equation for upsilon 2 as an alternative to the Laminar Kinetic Energy (LKE) framework. Here upsilon2 is interpreted as the energy of fully turbulent, three-dimensional velocity fluctuations. The concept is based on a description of the transition process previously discussed by Walters. This dissertation presents two new single-point, physics-based turbulence models based on the transitional methodology mentioned above. The first one uses an existing transitional model as a baseline which is modified to accurately capture the physics of fully turbulent free shear flows. The model formulation was tested over several boundary layer and free shear flow test cases. The simulations show accurate results, qualitatively equal to the baseline model on transitional boundary layer test cases, and substantially improved over the baseline model for free shear flows. The

  6. Challenges and Perspectives in Bridging In- and Outpatient Sectors: The Implementation of Two Alternative Models of Care and Their Effect on the Average Length of Stay

    Directory of Open Access Journals (Sweden)

    Alexandre Wullschleger

    2017-10-01

    Full Text Available New models of care aimed at reinforcing the outpatient sector have been introduced in Germany over the last few years. Initially, a subscription-based model (“integrated care” was introduced in 2012 in the Immanuel Klinik Rüdersdorf, wherein patients had to actively subscribe to the integrated care program. This integrated care model was replaced after 2 years by a subscription-free “model project,” in which all patients insured by the contracting insurance company took part in the program. Data showed that the introduction of the integrated care program in the inpatient setting led to an increase of the average length of stay in this group. The switch to the model project corrected this unwanted effect but failed in significantly decreasing the average length of stay when compared to standard care. However, both the integrated care program and model project succeeded in reducing the length of stay in the day care setting. When adjusting for the sex and diagnosis proportions of each year, it was shown that diagnosis strongly influenced the average length of stay in both settings, whereas sex only slightly influenced the duration of stay in the inpatient setting. Thus, in spite of strong financial and clinical incentives, the introduction of the model project couldn’t fulfill its primary purpose of shifting resources from the inpatient to the outpatient setting in the initial years. Possible explanations, including struggle against long-established traditions and reluctance to change, are discussed.

  7. Fleet average NOx emission performance of 2007 model year light-duty vehicles, light-duty trucks and medium-duty passenger vehicles

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-09-15

    This report summarized the regulatory requirements related to fleet averaging of nitrogen oxide (NOx) emissions for light-duty vehicles, light-duty trucks, and medium-duty passenger vehicles under the On-Road Vehicle and Engine Emission Regulations. The regulations introduced more stringent national emission standards for on-road vehicles and engines and include technical standards that establish maximum limits on vehicle exhaust emissions. The fleet average NOx emission performance of individual companies and of the overall Canadian fleet for 2007 was summarized, and the effectiveness of the Canadian fleet average NOx emission program was evaluated in relation to its environmental performance objectives. A total of 22 companies submitted reports for 294 test groups comprising 1,599,051 vehicles of the 2007 model year. The average NOx value for the entire LDV/LLDT fleet was 0.06897630 grams per mile. The average value for the HLDT/MDPV fleet was 0.160668 grams per mile. NOx values for both overall fleets remained better than the corresponding fleet average NOx standards, and were consistent with the environmental performance objectives of the regulations. 9 tabs., 3 figs.

  8. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    Science.gov (United States)

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectancy theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  9. 77 FR 2028 - 2017 and Later Model Year Light-Duty Vehicle Greenhouse Gas Emissions and Corporate Average Fuel...

    Science.gov (United States)

    2012-01-13

    ..., responds to the country's critical needs to address global climate change and to reduce oil consumption... regulations applicable to model years 2012-2016, with respect to air conditioner performance, regulatory...) 493-2251. Mail: EPA: Environmental Protection Agency, EPA Docket Center (EPA/ DC), Air and Radiation...

  10. Combining synchronous averaging with a Gaussian mixture model novelty detection scheme for vibration-based condition monitoring of a gearbox

    CSIR Research Space (South Africa)

    Heyns, T

    2012-10-01

    Full Text Available This paper investigates how Gaussian mixture models (GMMs) may be used to detect and trend fault induced vibration signal irregularities, such as those which might be indicative of the onset of gear damage. The negative log likelihood (NLL...
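
    A bare-bones version of the NLL novelty index alluded to above, using scikit-learn (synthetic feature vectors; in the paper the features come from synchronously averaged vibration signals):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
healthy = rng.normal(0.0, 1.0, size=(500, 2))   # baseline-condition features
faulty = rng.normal(2.5, 1.0, size=(50, 2))     # shifted, fault-like features

gmm = GaussianMixture(n_components=3, random_state=0).fit(healthy)

# Negative log-likelihood as the novelty score: high NLL = unlike the baseline.
nll_healthy = -gmm.score_samples(healthy)
nll_faulty = -gmm.score_samples(faulty)
threshold = np.percentile(nll_healthy, 99)      # assumed alarm threshold
print((nll_faulty > threshold).mean())          # fraction flagged as novel
```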

  11. A new method to account for large scale mixing-processes in spatially-averaged flow models

    NARCIS (Netherlands)

    Huthoff, Freek; Roos, Pieter C.; Augustijn, Dionysius C.M.; Hulscher, Suzanne J.M.H.; Boyer, D.; Alexandrova, O.

    2007-01-01

    A new method is proposed to calculate the cross-sectional flow field in compound channels for 1D flow models. The proposed method involves a new parameterization of the interface stress between adjacent compartments, typically between the main channel and floodplain of a two-stage channel. This

  12. On the use of local diffusion models for path ensemble averaging in potential of mean force computations.

    Science.gov (United States)

    Calderon, Christopher P

    2007-02-28

    We use a constant velocity steered molecular dynamics (SMD) simulation of the stretching of deca-alanine in vacuum to demonstrate a technique that can be used to create a surrogate processes approximation (SPA) using the time series that come out of SMD simulations. In this article, the surrogate processes are constructed by first estimating a sequence of local parametric diffusion models along a SMD trajectory and then a single global model is constructed by piecing the local models together through smoothing splines (estimation is made computationally feasible by likelihood function approximations). The SPAs are then "bootstrapped" in order to obtain a plausible range of work values associated with a particular SMD realization. This information is then used to assist in estimating a potential of mean force constructed by appealing to the Jarzynski equality. When this procedure is repeated for a small number of SMD paths, it is shown that the global models appear to come from a single family of closely related diffusion processes. Possible techniques for exploiting this observation are also briefly discussed. The findings of this paper have potential relevance to computationally expensive computer simulations and experimental works involving optical tweezers where it is difficult to collect a large number of samples, but possible to sample accurately and frequently in time.
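
    A much-simplified schematic of the surrogate-process construction: estimate local drift and diffusion coefficients in windows along a trajectory, then join them with smoothing splines (synthetic Ornstein-Uhlenbeck data; the paper estimates the local models by approximate likelihood rather than these moment formulas):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(5)
dt, n = 0.01, 20000
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):                    # OU process: dx = -x dt + 0.5 dW
    x[i + 1] = x[i] - x[i] * dt + 0.5 * np.sqrt(dt) * rng.normal()

# Local moment estimates of drift and squared diffusion per window.
w = 500
centers, drift, diff2 = [], [], []
for s in range(0, n - w, w):
    dx = np.diff(x[s:s + w])
    centers.append((s + w / 2) * dt)
    drift.append(dx.mean() / dt)
    diff2.append((dx ** 2).mean() / dt)   # quadratic-variation estimate

# Piece the local models together into a single smooth "global" model.
drift_spline = UnivariateSpline(centers, drift, s=len(centers))
diff2_spline = UnivariateSpline(centers, diff2, s=len(centers))
print(float(diff2_spline(100.0)))         # ~0.25, i.e. sigma^2 of the OU input
```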

  13. 77 FR 68070 - 2017 and Later Model Year Light-Duty Vehicle Greenhouse Gas Emissions and Corporate Average Fuel...

    Science.gov (United States)

    2012-11-15

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY 40 CFR Parts 85, 86, and 600 DEPARTMENT OF TRANSPORTATION National Highway Traffic Safety Administration 49 CFR Parts 523, 531, 533, 536, and 537 RIN 2060-AQ54; RIN 2127-AK79 2017 and Later Model Year Light-Duty Vehicle Greenhouse Gas Emissions...

  14. Diagnosing the average spatio-temporal impact of convective systems – Part 1: A methodology for evaluating climate models

    Directory of Open Access Journals (Sweden)

    M. S. Johnston

    2013-12-01

    Full Text Available An earlier method to determine the mean response of upper-tropospheric water to localised deep convective systems (DC systems) is improved and applied to the EC-Earth climate model. Following Zelinka and Hartmann (2009), several fields related to moist processes and radiation from various satellites are composited with respect to the local maxima in rain rate to determine their spatio-temporal evolution with deep convection in the central Pacific Ocean. Major improvements to the earlier study are the isolation of DC systems in time so as to prevent multiple sampling of the same event, and a revised definition of the mean background state that allows for better characterisation of the DC-system-induced anomalies. The observed DC systems in this study propagate westward at ~4 m s−1. Both the upper-tropospheric relative humidity and the outgoing longwave radiation are substantially perturbed over a broad horizontal extent and for periods >30 h. The cloud fraction anomaly is fairly constant with height, but a small maximum can be seen around 200 hPa. The cloud ice water content anomaly is mostly confined to pressures greater than 150 hPa and reaches its maximum around 450 hPa, a few hours after the peak convection. Consistent with the large increase in upper-tropospheric cloud ice water content, albedo increases dramatically and persists about 30 h after peak convection. Applying the compositing technique to EC-Earth allows an assessment of the model representation of DC systems. The model captures the large-scale responses, most notably for outgoing longwave radiation, but there are a number of important differences. DC systems appear to propagate eastward in the model, suggesting a strong link to Kelvin waves instead of equatorial Rossby waves. The diurnal cycle in the model is more pronounced and appears to trigger new convection further to the west each time. Finally, the modelled ice water content anomaly peaks at pressures greater than 500 h

  15. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    OpenAIRE

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this...

  16. Assessment of sub-grid scale dispersion closure with regularized deconvolution method in a particle-laden turbulent jet

    Science.gov (United States)

    Wang, Qing; Zhao, Xinyu; Ihme, Matthias

    2017-11-01

    Particle-laden turbulent flows are important in numerous industrial applications, such as spray combustion engines and solar energy collectors. It is of interest to study this type of flow numerically, especially using large-eddy simulation (LES). However, capturing the turbulence-particle interaction in LES remains challenging due to the insufficient representation of the effect of sub-grid scale (SGS) dispersion. In the present work, a closure technique for the SGS dispersion using a regularized deconvolution method (RDM) is assessed. RDM was proposed as the closure for the SGS dispersion in a counterflow spray that was studied numerically using a finite difference method on a structured mesh. A presumed form of the LES filter is used in those simulations. In the present study, the technique has been extended to a finite volume method with an unstructured mesh, where no presumption on the filter form is required. The method is applied to a series of particle-laden turbulent jets. Parametric analyses of the model performance are conducted for flows with different Stokes numbers and Reynolds numbers. The results from LES will be compared against experiments and direct numerical simulations (DNS).

  17. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    Science.gov (United States)

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m(-2) day(-1), and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
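
    The overestimate from daily-average inputs is another face of nonlinearity under averaging. A toy comparison of the two upscaling routes (a rectangular-hyperbola light response and an idealized diurnal irradiance; the parameters are invented, not the coupled model's):

```python
import numpy as np

hours = np.arange(24)
# Idealized diurnal irradiance: zero at night, sinusoidal during the day.
par = np.maximum(0.0, 1500.0 * np.sin(np.pi * (hours - 6) / 12))

amax, alpha = 20.0, 0.05                  # assumed light-response parameters
photo = lambda I: amax * alpha * I / (amax + alpha * I)  # rectangular hyperbola

integrated_then_averaged = photo(par).mean()   # IDM-like route
averaged_then_evaluated = photo(par.mean())    # SADM-like route (mean inputs)

# The concave response makes the daily-average-input route the larger estimate.
print(integrated_then_averaged, averaged_then_evaluated)
```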

  18. Seasonal cycles of the TBE and Lyme borreliosis vector Ixodes ricinus modelled with time-lagged and interval-averaged predictors.

    Science.gov (United States)

    Brugger, Katharina; Walter, Melanie; Chitimia-Dobler, Lidia; Dobler, Gerhard; Rubel, Franz

    2017-12-01

    Ticks of the species Ixodes ricinus (L.) are the major vectors for tick-borne diseases in Europe. The aim of this study was to quantify the influence of environmental variables on the seasonal cycle of questing I. ricinus. Therefore, an 8-year time series of nymphal I. ricinus flagged at monthly intervals in Haselmühl (Germany) was compiled. For the first time, cross correlation maps were applied to identify optimal associations between observed nymphal I. ricinus densities and time-lagged as well as temporally averaged explanatory variables. To prove the explanatory power of these associations, two Poisson regression models were generated. The first model simulates the ticks of the entire time series flagged per 100 m², the second model the mean seasonal cycle. Explanatory variables comprise the temperature of the flagging month, the relative humidity averaged over the flagging month and 1 month prior to flagging, the temperature averaged over 4-6 months prior to the flagging event and the hunting statistics of the European hare from the preceding year. The first model explains 65% of the monthly tick variance and results in a root mean square error (RMSE) of 17 ticks per 100 m². The second model explains 96% of the tick variance. Again, the accuracy is expressed by the RMSE, which is 5 ticks per 100 m². As a major result, this study demonstrates that tick densities are more highly correlated with time-lagged and temporally averaged variables than with contemporaneous explanatory variables, resulting in a better model performance.
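
    Building the time-lagged and interval-averaged predictors is mostly bookkeeping; a sketch with pandas (hypothetical monthly series; the lag and averaging windows mirror those named above):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
idx = pd.date_range("2009-01-01", periods=96, freq="MS")
df = pd.DataFrame({
    "temp": 10 + 10 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 2, 96),
    "rhum": 75 + rng.normal(0, 5, 96),
}, index=idx)

# Humidity averaged over the flagging month and the month before; temperature
# averaged over months 4-6 prior to the flagging event.
df["rhum_avg_0_1"] = df["rhum"].rolling(2).mean()
df["temp_avg_4_6"] = df["temp"].rolling(3).mean().shift(4)
print(df.dropna().head())
```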

  19. P-wave velocity changes in freezing hard low-porosity rocks: a laboratory-based time-average model

    Directory of Open Access Journals (Sweden)

    D. Draebing

    2012-10-01

    Full Text Available P-wave refraction seismics is a key method in permafrost research, but its applicability to low-porosity rocks, which constitute alpine rock walls, has been denied in prior studies. These studies explain p-wave velocity changes in freezing rocks exclusively by the changing velocities of the pore infill, i.e. water, air and ice. In existing models, no significant velocity increase is expected for low-porosity bedrock. We postulate that mixing laws apply for high-porosity rocks, but that freezing in the confined space of low-porosity bedrock also alters the physical properties of the rock matrix. In the laboratory, we measured p-wave velocities of 22 decimetre-large low-porosity (< 10%) metamorphic, magmatic and sedimentary rock samples from permafrost sites with a natural texture (> 100 micro-fissures), from 25 °C to −15 °C in 0.3 °C increments close to the freezing point. When freezing, p-wave velocity increases by 11-166% perpendicular to cleavage/bedding, equivalent to a matrix velocity increase of 11-200%, coincident with an anisotropy decrease in most samples. The expansion of rigid bedrock upon freezing is restricted, and ice pressure will increase the matrix velocity and decrease anisotropy, while the changing velocities of the pore infill are insignificant. Here, we present a modified Timur's two-phase equation implementing changes in matrix velocity dependent on lithology, and demonstrate the general applicability of refraction seismics to differentiate frozen and unfrozen low-porosity bedrock.
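
    For reference, the two-phase time-average relation being modified is commonly written as follows (Wyllie/Timur form; the modification reported above additionally makes the matrix velocity depend on lithology and freezing, which is not shown here):

```latex
\[
  \frac{1}{v_p} \;=\; \frac{1-\phi}{v_\mathrm{matrix}} + \frac{\phi}{v_\mathrm{pore}},
\]
% phi: porosity; v_pore: velocity of the pore infill (water, ice or air).
```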

  20. Evaluation of LES models for flow over bluff body from engineering ...

    Indian Academy of Sciences (India)

    Results are also discussed keeping in view the limitations of LES methodology for practical problems and current developments. It is concluded that a one-equation model for subgrid kinetic energy is the best choice. Keywords. Subgrid scale stress models; engineering flows; flow over bluff body.

  1. Particle yields, antiproton scaling and the average transverse momenta in high energy lead-lead collisions a model-based study

    CERN Document Server

    Guptaroy, P; De, B; Bhattacharya, D P

    2001-01-01

    The study aims at explaining the behaviour of some of the very important observables measured in the latest lead-lead collisions at CERN in the light of a variety of the sequential chain model. Calculated values, to our surprise, are in excellent agreement with the measurements, especially when the effect of cascading and rescattering is empirically introduced in the calculations of the average transverse momenta. Implications of the results are discussed. (17 refs).

  2. Elucidating fluctuating diffusivity in center-of-mass motion of polymer models with time-averaged mean-square-displacement tensor

    Science.gov (United States)

    Miyaguchi, Tomoshige

    2017-10-01

    There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here a method is developed to elucidate this fluctuating diffusivity from trajectory data. Time-averaged mean-square displacement (MSD), a common tool in single-particle-tracking (SPT) experiments, is generalized to a second-order tensor with which both magnitude and orientation fluctuations of the diffusivity can be clearly detected. This method is used to analyze the center-of-mass motion of four fundamental polymer models: the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer. It is found that these models exhibit distinctly different types of magnitude and orientation fluctuations of diffusivity. This is an advantage of the present method over previous ones, such as the ergodicity-breaking parameter and a non-Gaussian parameter, because with either of these parameters it is difficult to distinguish the dynamics of the four polymer models. Also, the present method of a time-averaged MSD tensor could be used to analyze trajectory data obtained in SPT experiments.
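
    The time-averaged MSD tensor is straightforward to compute from trajectory data; a sketch for a two-dimensional random walk (the definition below is the usual time average; eigenvalues then separate diffusivity magnitude from orientation):

```python
import numpy as np

rng = np.random.default_rng(7)
traj = np.cumsum(rng.normal(size=(10000, 2)), axis=0)   # 2-D random walk

def tamsd_tensor(r, lag):
    """M_ij(lag) = time average of (r_i(t+lag)-r_i(t)) * (r_j(t+lag)-r_j(t))."""
    d = r[lag:] - r[:-lag]
    return d.T @ d / len(d)

M = tamsd_tensor(traj, lag=10)
evals, evecs = np.linalg.eigh(M)
print(np.trace(M))   # scalar TAMSD: magnitude of the diffusivity
print(evals)         # unequal eigenvalues would signal orientation anisotropy
```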

  3. On the contribution of G20 and G30 in the Time-Averaged Paleomagnetic Field: First results from a new Giant Gaussian Process inverse modeling approach

    Science.gov (United States)

    Khokhlov, A.; Hulot, G.; Johnson, C. L.

    2013-12-01

    It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). However, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest that significant additional terms, in particular quadrupolar (G20) and octupolar (G30) zonal terms, likely contribute. The traditional way in which most such TAF models are recovered uses an empirical estimate for paleosecular variation (PSV) that is subject to limitations imposed by the limited age information available for such data. In this presentation, we will report on a new way to recover the TAF, using an inverse modeling approach based on the so-called Giant Gaussian Process (GGP) description of the TAF and PSV, and various statistical tools we recently made available (see Khokhlov and Hulot, Geophysical Journal International, 2013, doi: 10.1093/gji/ggs118). First results based on high quality data published from the Time-Averaged Field Investigations project (see Johnson et al., G-cubed, 2008, doi:10.1029/2007GC001696) clearly show that both the G20 and G30 terms are very well constrained, and that optimum values fully consistent with the data can be found. These promising results lay the groundwork for use of the method with more extensive data sets, to search for possible additional non-zonal departures of the TAF from the GAD.

  4. Exploring Modeling Options and Conversion of Average Response to Appropriate Vibration Envelopes for a Typical Cylindrical Vehicle Panel with Rib-stiffened Design

    Science.gov (United States)

    Harrison, Phil; LaVerde, Bruce; Teague, David

    2009-01-01

    Although applications of Statistical Energy Analysis (SEA) techniques are more widely used in the aerospace industry today, opportunities to anchor the response predictions using measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measurements across the panel from the measured data. Finally, an evaluation/comparison was performed of two approaches for converting the statistical average response results that are output from an SEA analysis into a more useful envelope of response spectra appropriate for specifying design and test vibration levels for a new vehicle.

  5. Effect of fluctuations of the linear feedback coefficient on the frequency spectrum of averaged temperature in a simple energy balance climate model

    Science.gov (United States)

    Petrov, D. A.

    2017-09-01

    Using the stochastic approach, we analyze the effect of fluctuations of the linear feedback coefficient in a simple zero-dimensional energy balance climate model on the frequency spectrum of the averaged temperature. An expression is obtained for the model spectrum in the weak-noise approximation. Its features are investigated in two cases: when the frequency spectrum of the feedback coefficient is a constant (white noise), and when the spectrum contains one resonant frequency and has a Lorentzian form. We consider whether the feedback coefficient fluctuations can be an independent mechanism for a qualitative change in the spectrum of the climate system.

  6. Application of a Combined Model with Autoregressive Integrated Moving Average (ARIMA) and Generalized Regression Neural Network (GRNN) in Forecasting Hepatitis Incidence in Heng County, China

    Science.gov (United States)

    Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui

    2016-01-01

    Background: Hepatitis is a serious public health problem in Heng County, with increasing case numbers and associated economic losses. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods: The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare performance among the three models. Results: The morbidity of hepatitis from January 2005 to December 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one, with the residual test showing a white-noise sequence. The smoothing factor of the basic GRNN model and the combined model was 1.8 and 0.07, respectively. The four error measures of the hybrid model were lower than those of the two single models in the validation, while those of the GRNN model were the lowest in the fitting of the three models. Conclusions: The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555
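
    A minimal sketch of this kind of hybrid, assuming the statsmodels ARIMA implementation and treating the GRNN as Gaussian-kernel (Nadaraya-Watson) regression. The placeholder incidence series, the [0, 1] input scaling, and the handling of the smoothing factor are illustrative assumptions, not the authors' code.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def grnn_predict(x_train, y_train, x_new, sigma):
    """GRNN = Nadaraya-Watson kernel regression with a Gaussian kernel."""
    x_new = np.atleast_1d(x_new)
    out = np.empty(len(x_new))
    for i, x in enumerate(x_new):
        w = np.exp(-((x_train - x) ** 2) / (2 * sigma ** 2))
        out[i] = np.sum(w * y_train) / np.sum(w)
    return out

# Placeholder monthly incidence series, Jan 2005 - Dec 2012 (96 points)
y = np.abs(np.random.default_rng(1).normal(8.0, 2.0, 96))

# Seasonal ARIMA(0,1,2)(1,1,1)12 as named in the abstract
res = ARIMA(y, order=(0, 1, 2), seasonal_order=(1, 1, 1, 12)).fit()
fitted = np.asarray(res.fittedvalues)
forecast = np.asarray(res.forecast(12))

# Hybrid step: the GRNN learns observed incidence as a function of the
# ARIMA output (inputs scaled to [0, 1]), then corrects the ARIMA forecast.
lo, hi = fitted.min(), fitted.max()
hybrid = grnn_predict((fitted - lo) / (hi - lo), y,
                      (forecast - lo) / (hi - lo), sigma=0.07)
```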

  7. Accounting for disagreements on average cone loss rates in retinitis pigmentosa with a new kinetic model: Its relevance for clinical trials.

    Science.gov (United States)

    Baumgartner, W A; Baumgartner, A M

    2016-04-01

    Since 1985, at least nine studies of the average rate of cone loss in retinitis pigmentosa (RP) populations have yielded conflicting average rate-constant values (-k), differing by 90-160%. This is surprising since, except for the first two investigations, the Harvard or Johns Hopkins protocols used in these studies were identical with respect to: use of the same exponential decline model, calculation of average -k from individual patient k values, monitoring patients over similarly large time frames, and excluding data exhibiting floor and ceiling effects. A detailed analysis of Harvard's and Hopkins' protocols and data revealed two subtle differences: (i) Hopkins' use of half-life t0.5 (or t(1/e)) for expressing patient cone-loss rates rather than k as used by Harvard; (ii) Harvard obtaining substantially more +k from improving fields due to dormant-cone recovery effects and "small -k" values than Hopkins ("small -k" is defined as less than -0.040 year^-1), e.g., 16% +k and 31% small -k, vs. Hopkins' 3% and 6%, respectively. Since t0.5 = 0.693/k, it follows that when k = 0, or is very small, t0.5 (or t(1/e)) is respectively infinity or a very large number. This unfortunate mathematical property (which also prevents construction of t0.5 (t(1/e)) histograms spanning -k to +k) caused Hopkins to delete all "small -k" and all +k due to "strong leverage". Naturally this contributed to Hopkins' larger average -k. Difference (ii) led us to re-evaluate the Harvard/Hopkins exponential model with unchanging -k. In its place we propose a model of increasing biochemical stresses from dying rods on cones during RP progression: increasing oxidative stresses and trophic factor deficiencies (e.g., RdCVF), and RPE malfunction. Our kinetic analysis showed rod loss to follow exponential kinetics with unchanging -k due to constant genetic stresses, thereby providing a theoretical basis for Clarke et al.'s empirical observation of such kinetics with eleven animal models of RP. In
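
    The rate-constant fit and the instability of the half-life transform are easy to illustrate. The sketch below uses made-up visual-field numbers and a plain log-linear least-squares fit; it is not the Harvard or Hopkins protocol.

```python
import numpy as np

# Hypothetical visual-field series for one patient (made-up numbers)
years = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
area = np.array([100.0, 88.0, 79.0, 69.0, 62.0])

# Slope of ln(area) vs time estimates -k in A(t) = A0 * exp(-k t)
k = -np.polyfit(years, np.log(area), 1)[0]
print(f"estimated k = {k:.3f} per year")

# The half-life transform t0.5 = ln(2)/k blows up as k -> 0, which is why
# averaging patient t0.5 values forces deletion of small or positive k:
for kk in (k, 0.040, 0.001, 0.0):
    t_half = np.log(2) / kk if kk > 0 else float("inf")
    print(f"k = {kk:.3f} /yr  ->  t0.5 = {t_half:.1f} yr")
```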

  8. Thermodynamically Constrained Averaging Theory Approach for Modeling Flow and Transport Phenomena in Porous Medium Systems: 8. Interface and Common Curve Dynamics.

    Science.gov (United States)

    Gray, William G; Miller, Cass T

    2010-12-01

    This work is the eighth in a series that develops the fundamental aspects of the thermodynamically constrained averaging theory (TCAT), which allows for a systematic increase in the scale at which multiphase transport phenomena are modeled in porous medium systems. In these systems, the explicit locations of interfaces between phases and common curves, where three or more interfaces meet, are not considered at scales above the microscale. Rather, the densities of these quantities arise as areas per volume or length per volume. Modeling the dynamics of these measures is an important challenge for robust models of flow and transport phenomena in porous medium systems, as the extent of these regions can have important implications for mass, momentum, and energy transport between and among phases, and for the formulation of a capillary pressure relation with minimal hysteresis. These densities do not exist at the microscale, where the interfaces and common curves correspond to particular locations. Therefore, it is necessary for a well-developed macroscale theory to provide evolution equations that describe the dynamics of interface and common curve densities. Here we point out the challenges and pitfalls in producing such evolution equations, develop a set of such equations based on averaging theorems, and identify the terms that require particular attention in experimental and computational efforts to parameterize the equations. We use the evolution equations developed to specify a closed two-fluid-phase flow model.

  9. Genetic Analysis of Milk Yield in First-Lactation Holstein Friesian in Ethiopia: A Lactation Average vs Random Regression Test-Day Model Analysis

    Directory of Open Access Journals (Sweden)

    S. Meseret

    2015-09-01

    The development of effective genetic evaluations and selection of sires requires accurate estimates of genetic parameters for all economically important traits in the breeding goal. The main objective of this study was to assess the relative performance of the traditional lactation average model (LAM) against the random regression test-day model (RRM) in the estimation of genetic parameters and prediction of breeding values for Holstein Friesian herds in Ethiopia. The data consisted of 6,500 test-day (TD) records from 800 first-lactation Holstein Friesian cows that calved between 1997 and 2013. Covariance components were estimated using the average information restricted maximum likelihood method under a single-trait animal model. The estimate of heritability for first-lactation milk yield was 0.30 from LAM, whilst estimates from RRM ranged from 0.17 to 0.29 for the different stages of lactation. Genetic correlations between different TDs in first-lactation Holstein Friesian ranged from 0.37 to 0.99. The observed genetic correlation was less than unity between milk yields at different TDs, which indicated that the assumption of LAM may not be optimal for accurate evaluation of the genetic merit of animals. A close look at estimated breeding values from both models showed that RRM had a higher standard deviation than LAM, indicating that the TD model makes more efficient use of TD information. Correlations of breeding values between models ranged from 0.90 to 0.96 for different groups of sires and cows, and marked re-rankings were observed among top sires and cows in moving from the traditional LAM to RRM evaluations.

  10. Genetic Analysis of Milk Yield in First-Lactation Holstein Friesian in Ethiopia: A Lactation Average vs Random Regression Test-Day Model Analysis

    Science.gov (United States)

    Meseret, S.; Tamir, B.; Gebreyohannes, G.; Lidauer, M.; Negussie, E.

    2015-01-01

    The development of effective genetic evaluations and selection of sires requires accurate estimates of genetic parameters for all economically important traits in the breeding goal. The main objective of this study was to assess the relative performance of the traditional lactation average model (LAM) against the random regression test-day model (RRM) in the estimation of genetic parameters and prediction of breeding values for Holstein Friesian herds in Ethiopia. The data consisted of 6,500 test-day (TD) records from 800 first-lactation Holstein Friesian cows that calved between 1997 and 2013. Covariance components were estimated using the average information restricted maximum likelihood method under a single-trait animal model. The estimate of heritability for first-lactation milk yield was 0.30 from LAM, whilst estimates from RRM ranged from 0.17 to 0.29 for the different stages of lactation. Genetic correlations between different TDs in first-lactation Holstein Friesian ranged from 0.37 to 0.99. The observed genetic correlation was less than unity between milk yields at different TDs, which indicated that the assumption of LAM may not be optimal for accurate evaluation of the genetic merit of animals. A close look at estimated breeding values from both models showed that RRM had a higher standard deviation than LAM, indicating that the TD model makes more efficient use of TD information. Correlations of breeding values between models ranged from 0.90 to 0.96 for different groups of sires and cows, and marked re-rankings were observed among top sires and cows in moving from the traditional LAM to RRM evaluations. PMID:26194217

  11. Comparison of magnetic field observations of an average magnetic cloud with a simple force free model: the importance of field compression and expansion

    Directory of Open Access Journals (Sweden)

    R. P. Lepping

    2007-01-01

    We investigate the ability of the cylindrically symmetric force-free magnetic cloud (MC) fitting model of Lepping et al. (1990) to faithfully reproduce actual magnetic field observations by examining two quantities: (1) a difference angle, called β, i.e., the angle between the direction of the observed magnetic field (Bobs) and the derived force-free model field (Bmod), and (2) the difference in magnitudes between the observed and modeled fields, i.e., ΔB (=|Bobs|−|Bmod|); a normalized ΔB (i.e., ΔB/<B>) is also examined, all for a judiciously chosen set of 50 WIND interplanetary MCs, based on quality considerations. These three quantities are developed as a percent of MC duration and averaged over this set of MCs to obtain average profiles. It is found that, although <ΔB> and its normalized version are significantly enhanced (from a broad central average value) early in an average MC (and to a lesser extent also late in the MC), the angle <β> is small (less than 8°) and approximately constant all throughout the MC. The field intensity enhancements are due mainly to interaction of the MC with the surrounding solar wind plasma causing field compression at front and rear. For example, for a typical MC, ΔB/<B> is: 0.21±0.27 very early in the MC, −0.11±0.10 at the center (and −0.085±0.12 averaged over the full "central region," i.e., for 30% to 80% of duration), and 0.05±0.29 very late in the MC, showing a double sign change as we travel from front to center to back in the MC. When individual MCs are examined we find that over 80% of them possess field enhancements within several to many hours of the front boundary, but only about 30% show such enhancements at their rear portions. The enhancement of the MC's front field is also due to MC expansion, but this is usually a lesser effect compared to compression. It is expected that this compression is manifested as significant distortion to the MC's cross-section from the ideal circle, first suggested by

  12. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier-Stokes simulations: A data-driven, physics-informed Bayesian approach

    Science.gov (United States)

    Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C. J.

    2016-11-01

    Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has
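
    The analysis step of the iterative ensemble Kalman method can be sketched generically. The following is a plain perturbed-observation EnKF update in NumPy, with the state vector (e.g., Reynolds-stress discrepancy parameters), observation operator, and error covariance all left abstract; it is a sketch of the general technique, not the authors' implementation.

```python
import numpy as np

def enkf_update(X, d, H, R, rng):
    """One perturbed-observation ensemble Kalman analysis step.
    X: (n_state, n_ens) ensemble; d: (n_obs,) data;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs. error."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                      # sample state covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    D = d[:, None] + rng.multivariate_normal(      # perturbed observations
        np.zeros(len(d)), R, size=n_ens).T
    return X + K @ (D - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))    # e.g., 10 discrepancy parameters, 50 members
H = np.eye(3, 10)                # observe the first 3 state components
R = 0.01 * np.eye(3)
d = np.array([1.0, -0.5, 0.2])   # sparse velocity observations (stand-ins)
X_post = enkf_update(X, d, H, R, rng)
```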

  13. Quantifying and Reducing Model-Form Uncertainties in Reynolds-Averaged Navier-Stokes Equations: An Open-Box, Physics-Based, Bayesian Approach

    CERN Document Server

    Xiao, H; Wang, J -X; Sun, R; Roy, C J

    2015-01-01

    Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering applications. For many practical flows, the turbulence models are by far the most important source of uncertainty. In this work we develop an open-box, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Si...

  14. Monte-Carlo approach to multiparticle production in a quark-parton model II Transverse momenta, energy dependence of average multiplicities, and inclusive spectra

    CERN Document Server

    Cerny, V; Pisút, J

    1978-01-01

    For pt. I see ibid., vol. 16, p. 2822 (1978). The authors extend their Monte-Carlo quark-parton model by introducing explicitly the transverse momenta of partons in the compound state formed by two colliding hadrons. The calculated energy dependence of the average multiplicities of stable hadrons produced in pp collisions and the p_T and y inclusive spectra are in good qualitative agreement with the data. Their results on resonance production at √s = 53 GeV coincide with recent CERN ISR data. (20 refs).

  15. Bounce-averaged advection and diffusion coefficients for monochromatic electromagnetic ion cyclotron wave: Comparison between test-particle and quasi-linear models

    Science.gov (United States)

    Su, Z.; Zhu, H.; Xiao, F.; Zheng, H.; Shen, C.; Wang, Y.; Wang, S.

    2012-12-01

    The electromagnetic ion cyclotron (EMIC) wave has long been suggested to be responsible for the rapid loss of radiation-belt relativistic electrons. Test-particle simulations are performed to calculate the bounce-averaged pitch-angle advection and diffusion coefficients for parallel-propagating monochromatic EMIC waves. The comparison between test-particle (TP) and quasi-linear (QL) transport coefficients is further made to quantify the influence of nonlinear processes. For typical EMIC waves, four nonlinear physical processes, i.e., the boundary reflection effect, finite perturbation effect, phase bunching, and phase trapping, are found to occur sequentially from small to large equatorial pitch angles. The pitch-angle-averaged finite perturbation effect yields slight differences between the transport coefficients of the TP and QL models. The boundary reflection effect and phase bunching produce an average reduction of >80% in the diffusion coefficients but a small change in the corresponding average advection coefficients, tending to lower the loss rate predicted by QL theory. In contrast, phase trapping causes continuous negative advection toward the loss cone and a minor change in the corresponding diffusion coefficients, tending to increase the loss rate predicted by QL theory. For small-amplitude EMIC waves, the transport coefficients grow linearly with the square of the wave amplitude. As the amplitude increases, the boundary reflection effect, phase bunching, and phase trapping start to occur. Consequently, the TP advection coefficients deviate from linear growth with the square of the wave amplitude, and the TP diffusion coefficients become saturated as the amplitude approaches 1 nT or above. The current results suggest that these nonlinear processes can cause significant deviation of transport coefficients from the prediction of QL theory, which should be taken into account in future simulations of radiation-belt dynamics driven by EMIC waves.

  16. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least-squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
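
    The contrast between the two families of methods can be sketched with SciPy rotations: a quaternion barycenter versus an iterative intrinsic (Karcher) mean under the Riemannian metric. This is a generic illustration of the two estimators, not the article's algorithm, and the random test rotations are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def karcher_mean(rots, iters=50):
    """Intrinsic (Riemannian) mean: iterate log-map / average / exp-map."""
    mean = rots[0]
    for _ in range(iters):
        vecs = np.stack([(mean.inv() * r).as_rotvec() for r in rots])
        delta = vecs.mean(axis=0)
        if np.linalg.norm(delta) < 1e-12:
            break
        mean = mean * R.from_rotvec(delta)
    return mean

rng = np.random.default_rng(2)
rots = [R.from_rotvec(0.3 * rng.standard_normal(3)) for _ in range(10)]

# Naive barycenter: average the unit quaternions and renormalize
q = np.stack([r.as_quat() for r in rots])
q *= np.sign(q @ q[0])[:, None]          # resolve the quaternion double cover
naive = R.from_quat(q.mean(axis=0))      # from_quat renormalizes

print(naive.as_rotvec(), karcher_mean(rots).as_rotvec())
```

    For tightly clustered rotations the two estimates nearly coincide, which is consistent with the barycenter being a first-order approximation to the Riemannian mean.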

  17. Averaging Schwarzschild spacetime

    Science.gov (United States)

    Tegai, S. Ph.; Drobov, I. V.

    2017-07-01

    We tried to average the Schwarzschild solution for the gravitational point source by analogy with the same problem in Newtonian gravity or electrostatics. We expected to get a similar result, consisting of two parts: the smoothed interior part being a sphere filled with some matter content and an empty exterior part described by the original solution. We considered several variants of generally covariant averaging schemes. The averaging of the connection in the spirit of Zalaletdinov's macroscopic gravity gave unsatisfactory results. With the transport operators proposed in the literature it did not give the expected Schwarzschild solution in the exterior part of the averaged spacetime. We were able to construct a transport operator that preserves the Newtonian analogy for the outward region but such an operator does not have a clear geometrical meaning. In contrast, using the curvature as the primary averaged object instead of the connection does give the desired result for the exterior part of the problem in a fine way. However for the interior part, this curvature averaging does not work because the Schwarzschild curvature components diverge as 1/r³ near the center and therefore are not integrable.

  18. Quantum and classical dynamics of water dissociation on Ni(111): A test of the site-averaging model in dissociative chemisorption of polyatomic molecules

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Bin [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Department of Chemical Physics, University of Science and Technology of China, Hefei 230026 (China); Guo, Hua, E-mail: hguo@unm.edu [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States)

    2015-10-28

    Recently, we reported the first highly accurate nine-dimensional global potential energy surface (PES) for water interacting with a rigid Ni(111) surface, built on a large number of density functional theory points [B. Jiang and H. Guo, Phys. Rev. Lett. 114, 166101 (2015)]. Here, we investigate site-specific reaction probabilities on this PES using a quasi-seven-dimensional quantum dynamical model. It is shown that the site-specific reactivity is largely controlled by the topography of the PES instead of the barrier height alone, underscoring the importance of multidimensional dynamics. In addition, the full-dimensional dissociation probability is estimated by averaging fixed-site reaction probabilities with appropriate weights. To validate this model and gain insights into the dynamics, additional quasi-classical trajectory calculations in both full and reduced dimensions have also been performed and important dynamical factors such as the steering effect are discussed.

  19. Numerical research on flow and thermal transport in cooling pool of electrical power station using three depth-averaged turbulence models

    Directory of Open Access Journals (Sweden)

    Li-ren Yu

    2009-09-01

    This paper describes a numerical simulation of thermal discharge in the cooling pool of an electrical power station, aiming to develop general-purpose computational programs for grid generation and flow/pollutant transport in the complex domains of natural and artificial waterways. Three depth-averaged two-equation closure turbulence models, κ-ε, κ-w, and κ-ω, were used to close the quasi three-dimensional hydrodynamic model. The κ-ω model was recently established by the authors and is still in the testing process. The general-purpose computational programs and turbulence models will be incorporated into software that is under development. The SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm and a multi-grid iterative method are used to solve the fundamental hydrodynamic governing equations, which are discretized on non-orthogonal boundary-fitted grids with a variable collocated arrangement. The results calculated with the three turbulence models were compared with one another. In addition to the steady flow and thermal transport simulation, the unsteady process of waste-heat inpouring and development in the cooling pool was also investigated.

  20. Numerical research on flow and thermal transport in cooling pool of electrical power station using three depth-averaged turbulence models

    Directory of Open Access Journals (Sweden)

    Li-ren YU

    2009-09-01

    This paper describes a numerical simulation of thermal discharge in the cooling pool of an electrical power station, aiming to develop general-purpose computational programs for grid generation and flow/pollutant transport in the complex domains of natural and artificial waterways. Three depth-averaged two-equation closure turbulence models, κ-ε, κ-w, and κ-ω, were used to close the quasi three-dimensional hydrodynamic model. The κ-ω model was recently established by the authors and is still in the testing process. The general-purpose computational programs and turbulence models will be incorporated into software that is under development. The SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm and a multi-grid iterative method are used to solve the fundamental hydrodynamic governing equations, which are discretized on non-orthogonal boundary-fitted grids with a variable collocated arrangement. The results calculated with the three turbulence models were compared with one another. In addition to the steady flow and thermal transport simulation, the unsteady process of waste-heat inpouring and development in the cooling pool was also investigated.

  1. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, H., E-mail: hengxiao@vt.edu; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C.J.

    2016-11-01

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach

  2. Evaluation of the inverse dispersion modelling method for estimating ammonia multi-source emissions using low-cost long time averaging sensor

    Science.gov (United States)

    Loubet, Benjamin; Carozzi, Marco

    2015-04-01

    Tropospheric ammonia (NH3) is a key player in atmospheric chemistry and its deposition is a threat to the environment (ecosystem eutrophication, soil acidification and reduction in species biodiversity). Most of the global NH3 emissions derive from agriculture, mainly from livestock manure (storage and field application) but also from nitrogen-based fertilisers. Inverse dispersion modelling has been widely used to infer emission sources from a homogeneous source of known geometry. When the emission derives from different sources inside the measured footprint, the emission should be treated as a multi-source problem. This work aims at estimating whether multi-source inverse dispersion modelling can be used to infer NH3 emissions from different agronomic treatments, composed of small fields (typically squares of 25 m side) located near to each other, using low-cost NH3 measurements (diffusion samplers). To do that, a numerical experiment was designed with a combination of 3 x 3 square field sources (625 m2), and a set of sensors placed at the centre of each field at several heights, as well as at 200 m away from the sources in each cardinal direction. The concentration at each sensor location was simulated with a forward Lagrangian stochastic (WindTrax) and a Gaussian-like (FIDES) dispersion model. The concentrations were averaged over various integration times (3 hours to 28 days) to mimic the diffusion-sampler behaviour under several sampling strategies. The sources were then inferred by inverse modelling using the averaged concentrations and the same models in backward mode. The source patterns were evaluated using a soil-vegetation-atmosphere model (SurfAtm-NH3) that incorporates the response of NH3 emissions to surface temperature. A combination of emission patterns (constant, linearly decreasing, exponentially decreasing and Gaussian) and strengths was used to evaluate the uncertainty of the inversion method. Each numerical experiment covered a period of 28
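
    Once each field's unit-strength dispersion footprint has been simulated, the multi-source inversion step reduces to a constrained least-squares problem. The sketch below uses a random stand-in dispersion matrix in place of WindTrax/FIDES output, and the sensor/source counts are assumptions; non-negative least squares keeps the inferred emissions physically meaningful.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_sensors, n_sources = 13, 9            # 3x3 fields + 4 distant masts (assumed)

# D[i, j]: modeled time-averaged concentration at sensor i per unit
# emission from field j (here random numbers standing in for model output)
D = rng.uniform(0.05, 1.0, (n_sensors, n_sources))
s_true = rng.uniform(0.0, 5.0, n_sources)             # "true" source strengths
c_obs = D @ s_true + rng.normal(0.0, 0.02, n_sensors) # noisy sampler readings

# Invert: min ||D s - c_obs|| subject to s >= 0
s_hat, residual = nnls(D, c_obs)
print(np.round(s_true, 2), np.round(s_hat, 2))
```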

  3. Modeling of frequency-domain scalar wave equation with the average-derivative optimal scheme based on a multigrid-preconditioned iterative solver

    Science.gov (United States)

    Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue

    2018-01-01

    An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discretization scheme and an appropriate solution method. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and of flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve the resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where a significant reduction of computer memory and an improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for reasonable control of the directional sampling interval in the discretization.
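
    The overall solver structure can be sketched with SciPy's sparse tools. In the sketch, an incomplete-LU factorization stands in for the paper's multigrid preconditioner (a full multigrid cycle is beyond a short example), and the plain 5-point Helmholtz stencil and all parameters are illustrative assumptions rather than the average-derivative optimal scheme itself.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

# 2D Helmholtz operator (Laplacian + k^2) on an n x n grid, 5-point stencil
n, h, k = 200, 10.0, 0.06
I = sp.identity(n)
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
A = ((sp.kron(I, T) + sp.kron(T, I)) / h**2
     + k**2 * sp.identity(n * n)).tocsc()
b = np.zeros(n * n)
b[(n * n + n) // 2] = 1.0                      # point source near the center

# ILU preconditioner as a simple stand-in for the multigrid preconditioner
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, ilu.solve)

x, info = bicgstab(A, b, M=M, maxiter=500)     # info == 0 on convergence
```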

  4. Trends in average living children at the time of terminal contraception: A time series analysis over 27 years using ARIMA (p, d, q) nonseasonal model

    Directory of Open Access Journals (Sweden)

    Sachin S Mumbare

    2014-01-01

    Background: India's National Family Welfare Programme is dominated by sterilization, particularly tubectomy. Sterilization, being a terminal method of contraception, decides the final number of children for that couple. Many studies have shown a declining trend in the average number of living children at the time of sterilization over a short period of time. So this study was planned to perform a time series analysis of the average number of children at the time of terminal contraception, to produce forecasts up to 2020, and to compare the rates of change in various subgroups of the population. Materials and Methods: Data were preprocessed in MS Access 2007 by creating and running SQL queries. After testing the stationarity of every series with the augmented Dickey-Fuller test, time series analysis and forecasting were done using the best-fit Box-Jenkins ARIMA (p, d, q) nonseasonal model. To compare the rates of change of average children at sterilization in various subgroups, analysis of covariance (ANCOVA) was applied. Results: Forecasting showed that the replacement level of 2.1 total fertility rate (TFR) will be achieved in 2018 for couples opting for sterilization. The same will be achieved in 2020, 2016, 2018, and 2019 for rural areas, urban areas, Hindu couples, and Buddhist couples, respectively. It will not be achieved by 2020 in Muslim couples. Conclusion: Every stratum of the population showed the declining trend. The decline for male children and in rural areas was significantly faster than the decline for female children and in urban areas, respectively. The decline was not significantly different among Hindu, Muslim, and Buddhist couples.

  5. Trends in Average Living Children at the Time of Terminal Contraception: A Time Series Analysis Over 27 Years Using ARIMA (p, d, q) Nonseasonal Model.

    Science.gov (United States)

    Mumbare, Sachin S; Gosavi, Shriram; Almale, Balaji; Patil, Aruna; Dhakane, Supriya; Kadu, Aniruddha

    2014-10-01

    India's National Family Welfare Programme is dominated by sterilization, particularly tubectomy. Sterilization, being a terminal method of contraception, decides the final number of children for that couple. Many studies have shown the declining trend in the average number of living children at the time of sterilization over a short period of time. So this study was planned to do time series analysis of the average children at the time of terminal contraception, to do forecasting till 2020 for the same and to compare the rates of change in various subgroups of the population. Data was preprocessed in MS Access 2007 by creating and running SQL queries. After testing stationarity of every series with augmented Dickey-Fuller test, time series analysis and forecasting was done using best-fit Box-Jenkins ARIMA (p, d, q) nonseasonal model. To compare the rates of change of average children in various subgroups, at sterilization, analysis of covariance (ANCOVA) was applied. Forecasting showed that the replacement level of 2.1 total fertility rate (TFR) will be achieved in 2018 for couples opting for sterilization. The same will be achieved in 2020, 2016, 2018, and 2019 for rural area, urban area, Hindu couples, and Buddhist couples, respectively. It will not be achieved till 2020 in Muslim couples. Every stratum of population showed the declining trend. The decline for male children and in rural area was significantly faster than the decline for female children and in urban area, respectively. The decline was not significantly different in Hindu, Muslim, and Buddhist couples.

  6. Comparison of Three Statistical Downscaling Methods and Ensemble Downscaling Method Based on Bayesian Model Averaging in Upper Hanjiang River Basin, China

    Directory of Open Access Journals (Sweden)

    Jiaming Liu

    2016-01-01

    Many downscaling techniques have been developed in the past few years for projection of station-scale hydrological variables from large-scale atmospheric variables to assess the hydrological impacts of climate change. To improve the simulation accuracy of downscaling methods, the Bayesian Model Averaging (BMA) method combined with three statistical downscaling methods, namely support vector machine (SVM), BCC/RCG-Weather Generators (BCC/RCG-WG), and the Statistical DownScaling Model (SDSM), is proposed in this study, based on the statistical relationship between the large-scale climate predictors and observed precipitation in the upper Hanjiang River Basin (HRB). The statistical analysis of three performance criteria (the Nash-Sutcliffe coefficient of efficiency, the coefficient of correlation, and the relative error) shows that the performance of the ensemble downscaling method based on BMA for rainfall is better than that of each single statistical downscaling method. Moreover, the performance for runoff modelled by the SWAT rainfall-runoff model using daily rainfall downscaled by the four methods is also compared, and the ensemble downscaling method has better simulation accuracy. The ensemble downscaling technology based on BMA can provide a scientific basis for the study of runoff response to climate change.
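
    A minimal sketch of the BMA combination step, assuming Gaussian predictive kernels and an EM estimate of the weights (in the spirit of the standard Raftery et al. formulation, with a single common spread for brevity); the three synthetic "downscaled" series merely stand in for SVM, BCC/RCG-WG, and SDSM output.

```python
import numpy as np

def bma_weights(preds, obs, iters=200):
    """EM estimate of BMA weights with a common Gaussian spread.
    preds: (K, T) candidate downscaled series; obs: (T,) observations."""
    K, T = preds.shape
    w = np.full(K, 1.0 / K)
    sig = np.std(obs - preds.mean(axis=0)) + 1e-9
    for _ in range(iters):
        dens = np.exp(-0.5 * ((obs - preds) / sig) ** 2) / sig   # (K, T)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)    # E-step: responsibilities
        w = z.mean(axis=1)                   # M-step: weights
        sig = np.sqrt(np.sum(z * (obs - preds) ** 2) / T)
    return w, sig

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 3.0, 400)                       # observed rainfall
preds = np.stack([obs + rng.normal(0.0, s, 400)      # three method stand-ins
                  for s in (1.0, 2.0, 3.0)])
w, sig = bma_weights(preds, obs)
bma_mean = w @ preds                                 # ensemble (BMA) forecast
```

    As expected, the EM weights favor the least-noisy candidate series, and the weighted combination tracks the observations more closely than any single stand-in.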

  7. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Science.gov (United States)

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; Morton, Don; Hinzman, Larry; Nijssen, Bart

    2017-09-01

    simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.

  8. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    Directory of Open Access Journals (Sweden)

    A. Endalamaw

    2017-09-01

    -basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.

  9. Comparison of magnetic field observations of an average magnetic cloud with a simple force free model: the importance of field compression and expansion

    Directory of Open Access Journals (Sweden)

    R. P. Lepping

    2008-01-01

    We investigate the ability of the cylindrically symmetric force-free magnetic cloud (MC) fitting model of Lepping et al. (1990) to faithfully reproduce actual magnetic field observations by examining two quantities: (1) a difference angle, called β, i.e., the angle between the direction of the observed magnetic field (Bobs) and the derived force-free model field (Bmod), and (2) the difference in magnitudes between the observed and modeled fields, i.e., ΔB (=|Bobs|−|Bmod|); a normalized ΔB (i.e., ΔB/<B>) is also examined, all for a judiciously chosen set of 50 WIND interplanetary MCs, based on quality considerations. These three quantities are developed as a percent of MC duration and averaged over this set of MCs to obtain average profiles. It is found that, although <ΔB> and its normalized version are significantly enhanced (from a broad central average value) early in an average MC (and to a lesser extent also late in the MC), the angle <β> is small (less than 8°) and approximately constant all throughout the MC. The field intensity enhancements are due mainly to interaction of the MC with the surrounding solar wind plasma causing field compression at front and rear. For example, for a typical MC, ΔB/<B> is: 0.21±0.27 very early in the MC, −0.11±0.10 at the center (and −0.085±0.12 averaged over the full "central region," i.e., for 30% to 80% of duration), and 0.05±0.29 very late in the MC, showing a double sign change as we travel from front to center to back in the MC. When individual MCs are examined we find that over 80% of them possess field enhancements within several to many hours of the front boundary, but only about 30% show such enhancements at their rear portions. The enhancement of the MC's front field is also due to MC expansion, but this is usually a lesser effect

  10. Averaged RMHD equations

    Energy Technology Data Exchange (ETDEWEB)

    Ichiguchi, Katsuji [National Inst. for Fusion Science, Toki, Gifu (Japan)

    1998-08-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  11. Your Average Nigga

    Science.gov (United States)

    Young, Vershawn Ashanti

    2004-01-01

    "Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…

  12. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  13. A simulation-based assessment of the bias produced when using averages from small DHS clusters as contextual variables in multilevel models

    Directory of Open Access Journals (Sweden)

    Øystein Kravdal

    2006-07-01

    There is much interest these days in the importance of community institutions and resources for individual mortality and fertility. DHS data may seem to be a valuable source for such multilevel analysis. For example, researchers may consider including in their models the average education within the sample (cluster) of approximately 25 women interviewed in each primary sampling unit (PSU). However, this is only a proxy for the theoretically more interesting average among all women in the PSU, and, in principle, the estimated effect of the sample mean may differ markedly from the effect of the latter variable. Fortunately, simulation experiments show that the bias actually is fairly small - less than 14% - when education effects on first birth timing are estimated from DHS surveys in sub-Saharan Africa. If other data are used, or if the focus is turned to other independent variables than education, the bias may, of course, be very different. In some situations, it may be even smaller; in others, it may be unacceptably large. That depends on the size of the clusters, and on how the independent variables are distributed within and across communities. Some general advice is provided.
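
    The mechanism behind the bias is classical errors-in-variables attenuation, which a few lines of simulation make visible. All distributions and effect sizes below are invented for illustration and are not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(5)
n_psu, n_samp = 2000, 25
true_mean = rng.normal(6.0, 2.0, n_psu)     # true PSU-mean education
beta = 0.10                                 # true contextual effect (assumed)

# outcome depends on the *true* community mean (one draw per PSU here)
y = beta * true_mean + rng.normal(0.0, 1.0, n_psu)

# proxy: mean of 25 women sampled within each PSU (within-PSU sd assumed 3)
samp_mean = true_mean + rng.normal(0.0, 3.0 / np.sqrt(n_samp), n_psu)

for x, label in ((true_mean, "true PSU mean"), (samp_mean, "sample mean")):
    b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    print(f"{label}: estimated contextual effect = {b:.4f}")

# The sample-mean regression is attenuated by roughly
# var(true) / (var(true) + 9/25) ~ 0.92, i.e., a bias under 10% here.
```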

  14. A Study of Single- and Double-Averaged Second-Order Models to Evaluate Third-Body Perturbation Considering Elliptic Orbits for the Perturbing Body

    Directory of Open Access Journals (Sweden)

    R. C. Domingos

    2013-01-01

    The equations for the variations of the Keplerian elements of the orbit of a spacecraft perturbed by a third body are developed using a single average over the motion of the spacecraft, considering an elliptic orbit for the disturbing body. A comparison is made between this approach and the more commonly used double-averaged technique, as well as with the full elliptic restricted three-body problem. The disturbing function is expanded in Legendre polynomials up to the second order in both cases. The equations of motion are obtained from the planetary equations, and several numerical simulations are made to show the evolution of the orbit of the spacecraft. Some characteristics known from the circular perturbing body are studied: circular, elliptic equatorial, and frozen orbits. Different initial eccentricities for the perturbed body are considered, since the effect of this variable is one of the goals of the present study. The results show the impact of this parameter as well as the differences between both models compared to the full elliptic restricted three-body problem. Regions below, near, and above the critical angle of the third-body perturbation are considered, as well as different altitudes for the orbit of the spacecraft.

  15. Development of realistic high-resolution whole-body voxel models of Japanese adult males and females of average height and weight, and application of models to radio-frequency electromagnetic-field dosimetry

    Science.gov (United States)

    Nagaoka, Tomoaki; Watanabe, Soichi; Sakurai, Kiyoko; Kunieda, Etsuo; Watanabe, Satoshi; Taki, Masao; Yamanaka, Yukio

    2004-01-01

    With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetry of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models for Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on each side; the models are segmented into 51 anatomic regions. The adult female model is the first of its kind in the world, and both are the first Asian voxel models (representing average Japanese) that enable numerical evaluation of electromagnetic dosimetry at high frequencies of up to 3 GHz. In this paper, we also describe the basic SAR characteristics of the developed models for the VHF/UHF bands, calculated using the finite-difference time-domain method.

  16. QSPR models based on molecular mechanics and quantum chemical calculations. 1. Construction of Boltzmann-averaged descriptors for alkanes, alcohols, diols, ethers and cyclic compounds.

    Science.gov (United States)

    Dyekjaer, Jane; Rasmussen, Kjeld; Jónsdóttir, Svava

    2002-09-01

    Values for nine descriptors for QSPR (quantitative structure-property relationships) modeling of physical properties of 96 alkanes, alcohols, ethers, diols, triols and cyclic alkanes and alcohols in conjunction with the program Codessa are presented. The descriptors are Boltzmann-averaged by selection of the most relevant conformers out of a set of possible molecular conformers generated by a systematic scheme presented in this paper. Six of these descriptors are calculated with molecular mechanics and three with quantum chemical methods. Especially interesting descriptors are the relative van der Waals energies and the molecular polarizabilities, which correlate very well with boiling points. Five more simple descriptors that only depend on the molecular constitutional formula are also discussed briefly.
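
    The Boltzmann averaging itself is a one-liner worth spelling out. In the sketch below the conformer energies and descriptor values are hypothetical, and the gas constant is taken in kJ/(mol K) to match energies in kJ/mol.

```python
import numpy as np

def boltzmann_average(energies_kj, values, T=298.15):
    """Boltzmann-weighted average of a conformer-dependent descriptor.
    energies_kj: relative conformer energies in kJ/mol."""
    R = 8.314462618e-3                    # gas constant, kJ/(mol K)
    e = np.asarray(energies_kj, float)
    e -= e.min()                          # relative energies for stability
    w = np.exp(-e / (R * T))
    w /= w.sum()                          # normalized Boltzmann weights
    return float(np.dot(w, values))

# e.g., three conformers of a diol with hypothetical relative energies
# and a hypothetical conformer-dependent descriptor (say, polarizability)
print(boltzmann_average([0.0, 2.1, 4.7], [12.3, 11.8, 11.1]))
```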

  17. Average-atom model for two-temperature states and ionic transport properties of aluminum in the warm dense matter regime

    Science.gov (United States)

    Hou, Yong; Fu, Yongsheng; Bredow, Richard; Kang, Dongdong; Redmer, Ronald; Yuan, Jianmin

    2017-03-01

    The average-atom model combined with the hyper-netted chain approximation is an efficient tool for electronic and ionic structure calculations for warm dense matter. Here we generalize this method in order to describe non-equilibrium states with different electron and ion temperatures, as produced in laser-matter interactions on ultra-short time scales. In particular, the electron-ion and ion-ion correlation effects are considered when calculating the electronic structure. We derive an effective ion-ion pair potential using the electron densities in the framework of temperature-dependent density functional theory. Using this ion-ion potential we perform molecular dynamics simulations in order to determine ionic transport properties such as the ionic diffusion coefficient and the shear viscosity through the ionic velocity autocorrelation functions.

  18. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
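
    The bias-free structure of the AMA estimator, <O> ≈ <O_approx> over many cheap samples plus the correction <O − O_approx> over a few exact ones, can be mimicked on synthetic numbers. The sketch below is schematic, with nothing lattice-specific in it; sample sizes and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cheap, n_exact = 512, 16
exact = rng.normal(1.0, 0.5, n_cheap)            # exact observable O
approx = exact + rng.normal(0.0, 0.05, n_cheap)  # strongly correlated approx.

# AMA: cheap average over all samples + bias correction from a few exact solves
o_ama = approx.mean() + (exact[:n_exact] - approx[:n_exact]).mean()
o_naive = exact[:n_exact].mean()                 # same exact budget, no AMA
print(f"AMA: {o_ama:.4f}   naive: {o_naive:.4f}")
```

    Because the correction term has expectation equal to the approximation bias, the combined estimator is unbiased, while its variance is dominated by the large, cheap sample rather than the few exact solves.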

  19. Covariant approximation averaging

    Science.gov (United States)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.

  20. Synchronous Boxcar Averager

    Science.gov (United States)

    Rogers, Thomas W.

    1988-01-01

    Digital electronic filtering system produces series of moving-average samples of fluctuating signal in manner resulting in removal of undesired periodic signal component of known frequency. Filter designed to pass steady or slowly varying components of fluctuating pressure, flow, pump speed, and pump torque in slurry-pumping system. Concept useful for monitoring or control in variety of applications including machinery, power supplies, and scientific instrumentation.
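
    The core trick, a moving-average window spanning exactly one period of the unwanted component so that the component and its harmonics are nulled while slow variations pass, is easy to sketch; the sample rate, frequencies, and signal below are assumptions, not the original instrument's parameters.

```python
import numpy as np

fs, f_unwanted = 1000.0, 50.0          # sample rate and known frequency, Hz
win = int(round(fs / f_unwanted))      # window = one full period (20 samples)

t = np.arange(0.0, 2.0, 1.0 / fs)
signal = 0.5 * t + np.sin(2 * np.pi * f_unwanted * t)  # trend + interference

kernel = np.ones(win) / win            # boxcar (moving-average) kernel
filtered = np.convolve(signal, kernel, mode="same")

# The 50 Hz component integrates to ~zero over each window, so `filtered`
# tracks the slow 0.5*t trend with the periodic part removed.
```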

  1. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Arithmetic mean of objects in a space need not lie in the space [Fréchet, 1948]. Example: finding the mean of right-angled triangles, S = {(x, y, z) ∈ R+³ : x² + y² = z²} = { [[z, x − iy], [x + iy, z]] : x, y, z > 0, z² = x² + y² }; the arithmetic mean of points on this surface of right triangles need not lie on S. (Tanvi Jain, "Averaging operations on matrices")

  2. Averaged number of visits.

    Science.gov (United States)

    Haydn, N; Lunedei, E; Vaienti, S

    2007-09-01

    We introduce a new indicator for dynamical systems, namely the averaged number of visits, to estimate the frequency of visits in small regions when a map is iterated up to the inverse of the measure of this region. We compute this quantity analytically and numerically for various systems and we show that it depends on the ergodic properties of the systems and on their topological properties, such as the presence of periodic points.

  3. Sub-grid scale representation of vegetation in global land surface schemes: implications for estimation of the terrestrial carbon sink

    Directory of Open Access Journals (Sweden)

    J. R. Melton

    2014-02-01

    Terrestrial ecosystem models commonly represent vegetation in terms of plant functional types (PFTs) and use their vegetation attributes in calculations of the energy and water balance as well as to investigate the terrestrial carbon cycle. Sub-grid scale variability of PFTs in these models is represented using different approaches, with the "composite" and "mosaic" approaches being the two end-members. The impact of these two approaches on the global carbon balance has been investigated with the Canadian Terrestrial Ecosystem Model (CTEM v. 1.2) coupled to the Canadian Land Surface Scheme (CLASS v. 3.6). In the composite (single-tile) approach, the vegetation attributes of different PFTs present in a grid cell are aggregated and used in calculations to determine the resulting physical environmental conditions (soil moisture, soil temperature, etc.) that are common to all PFTs. In the mosaic (multi-tile) approach, energy and water balance calculations are performed separately for each PFT tile and each tile's physical land surface environmental conditions evolve independently. Pre-industrial equilibrium CLASS-CTEM simulations yield global totals of vegetation biomass, net primary productivity, and soil carbon that compare reasonably well with observation-based estimates and differ by less than 5% between the mosaic and composite configurations. However, on a regional scale the two approaches can differ by > 30%, especially in areas with high heterogeneity in land cover. Simulations over the historical period (1959–2005) show different responses to evolving climate and carbon dioxide concentrations from the two approaches. The cumulative global terrestrial carbon sink estimated over the 1959–2005 period (excluding land use change (LUC) effects) differs by around 5% between the two approaches (96.3 and 101.3 Pg C for the mosaic and composite approaches, respectively) and compares well with the observation-based estimate of 82.2 ± 35 Pg C over the same

  4. Interpreting the variability of space-borne CO2 column-averaged volume mixing ratios over North America using a chemistry transport model

    Directory of Open Access Journals (Sweden)

    P. S. Monks

    2008-10-01

    We use the GEOS-Chem chemistry transport model to interpret the sources and sinks of CO2 that determine the variability of column-averaged volume mixing ratios (CVMRs), as observed by the SCIAMACHY satellite instrument, during the 2003 North American growing season. GEOS-Chem generally reproduces the magnitude and seasonal cycle of observed CO2 surface VMRs across North America and is quantitatively consistent with column VMRs in later years. However, it cannot reproduce the magnitude or variability of FSI-WFM-DOAS SCIAMACHY CVMRs. We use model tagged tracers to show that local fluxes largely determine CVMR variability over North America, with the largest individual CVMR contributions (1.1%) from the land biosphere. Fuel sources are relatively constant while biomass burning makes a significant contribution only during midsummer. We also show that non-local sources contribute significantly to total CVMRs over North America, with the boreal Asian land biosphere contributing close to 1% in midsummer at high latitudes. We used the monthly-mean Jacobian matrix for North America to illustrate that: (1) North American CVMRs represent a superposition of many weak flux signatures, but differences in flux distributions should permit independent flux estimation; and (2) the atmospheric e-folding lifetimes for many of these flux signatures are 3–4 months, beyond which time they are too well-mixed to interpret. These long lifetimes will improve the efficacy of observed CVMRs as surface CO2 flux constraints.

  5. A depth-averaged debris-flow model that includes the effects of evolving dilatancy: II. Numerical predictions and experimental tests.

    Science.gov (United States)

    George, David L.; Iverson, Richard M.

    2014-01-01

    We evaluate a new depth-averaged mathematical model that is designed to simulate all stages of debris-flow motion, from initiation to deposition. A companion paper shows how the model’s five governing equations describe simultaneous evolution of flow thickness, solid volume fraction, basal pore-fluid pressure, and two components of flow momentum. Each equation contains a source term that represents the influence of state-dependent granular dilatancy. Here we recapitulate the equations and analyze their eigenstructure to show that they form a hyperbolic system with desirable stability properties. To solve the equations we use a shock-capturing numerical scheme with adaptive mesh refinement, implemented in an open-source software package we call D-Claw. As tests of D-Claw, we compare model output with results from two sets of large-scale debris-flow experiments. One set focuses on flow initiation from landslides triggered by rising pore-water pressures, and the other focuses on downstream flow dynamics, runout, and deposition. D-Claw performs well in predicting evolution of flow speeds, thicknesses, and basal pore-fluid pressures measured in each type of experiment. Computational results illustrate the critical role of dilatancy in linking coevolution of the solid volume fraction and pore-fluid pressure, which mediates basal Coulomb friction and thereby regulates debris-flow dynamics.

  6. Time line cell tracking for the approximation of lagrangian coherent structures with subgrid accuracy

    KAUST Repository

    Kuhn, Alexander

    2013-12-05

    Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCSs is to compute height ridges in the finite-time Lyapunov exponent (FTLE) field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCSs on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamic examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations. © 2013 The Authors Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.
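
    As context for the standard method the authors improve upon, here is a minimal sketch of computing the FTLE field from a gridded flow map. The function name and the assumption that advected particle positions are already available on a regular seed grid are hypothetical.

    ```python
    import numpy as np

    def ftle(flow_map_x, flow_map_y, dx, dy, T):
        """FTLE from a gridded flow map: flow_map_x/y hold the final
        positions of particles seeded on a regular grid and advected
        over the interval T (assumed precomputed inputs)."""
        # Gradient of the flow map with respect to the seed coordinates.
        dxd0, dxd1 = np.gradient(flow_map_x, dx, dy)
        dyd0, dyd1 = np.gradient(flow_map_y, dx, dy)
        out = np.zeros_like(flow_map_x)
        for i in range(flow_map_x.shape[0]):
            for j in range(flow_map_x.shape[1]):
                F = np.array([[dxd0[i, j], dxd1[i, j]],
                              [dyd0[i, j], dyd1[i, j]]])
                C = F.T @ F                        # Cauchy-Green tensor
                lam_max = np.linalg.eigvalsh(C)[-1]
                out[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
        return out
    ```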

  7. A nonlinear structural subgrid-scale closure for compressible MHD. I. Derivation and energy dissipation properties

    Energy Technology Data Exchange (ETDEWEB)

    Vlaykov, Dimitar G., E-mail: Dimitar.Vlaykov@ds.mpg.de [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Dynamik und Selbstorganisation, Am Faßberg 17, D-37077 Göttingen (Germany); Grete, Philipp [Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Schmidt, Wolfram [Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, D-21029 Hamburg (Germany); Schleicher, Dominik R. G. [Departamento de Astronomía, Facultad Ciencias Físicas y Matemáticas, Universidad de Concepción, Av. Esteban Iturra s/n Barrio Universitario, Casilla 160-C (Chile)

    2016-06-15

    Compressible magnetohydrodynamic (MHD) turbulence is ubiquitous in astrophysical phenomena ranging from the intergalactic to the stellar scales. In studying them, numerical simulations are nearly inescapable, due to the large degree of nonlinearity involved. However, the dynamical ranges of these phenomena are much larger than what is computationally accessible. In large eddy simulations (LESs), the resulting limited resolution effects are addressed explicitly by introducing to the equations of motion additional terms associated with the unresolved, subgrid-scale dynamics. This renders the system unclosed. We derive a set of nonlinear structural closures for the ideal MHD LES equations with particular emphasis on the effects of compressibility. The closures are based on a gradient expansion of the finite-resolution operator [W. K. Yeo (CUP, 1993)] and require no assumptions about the nature of the flow or magnetic field. Thus, the scope of their applicability ranges from the sub- to the hyper-sonic and -Alfvénic regimes. The closures support spectral energy cascades both up and down-scale, as well as direct transfer between kinetic and magnetic resolved and unresolved energy budgets. They implicitly take into account the local geometry, and in particular, the anisotropy of the flow. Their properties are a priori validated in Paper II [P. Grete et al., Phys. Plasmas 23, 062317 (2016)] against alternative closures available in the literature with respect to a wide range of simulation data of homogeneous and isotropic turbulence.
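
    For orientation, the leading-order term of a gradient-expansion (Clark-type) closure has the familiar form below, written here for the kinetic and magnetic subgrid stresses with filter width Δ. This is an illustrative textbook form, not the paper's exact compressible expressions, which generalize this structure.

    ```latex
    \tau^{u}_{ij} \;\approx\; \frac{\Delta^{2}}{12}\,
      \partial_k \bar{u}_i \,\partial_k \bar{u}_j ,
    \qquad
    \tau^{b}_{ij} \;\approx\; \frac{\Delta^{2}}{12}\,
      \partial_k \bar{B}_i \,\partial_k \bar{B}_j .
    ```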

  8. What is the optimal level of population alcohol consumption for chronic disease prevention in England? Modelling the impact of changes in average consumption levels.

    Science.gov (United States)

    Nichols, Melanie; Scarborough, Peter; Allender, Steven; Rayner, Mike

    2012-01-01

    To estimate the impact of achieving alternative average population alcohol consumption levels on chronic disease mortality in England. A macro-simulation model was built to simultaneously estimate the number of deaths from coronary heart disease, stroke, hypertensive disease, diabetes, liver cirrhosis, epilepsy and five cancers that would be averted or delayed annually as a result of changes in alcohol consumption among English adults. Counterfactual scenarios assessed the impact on alcohol-related mortalities of changing (1) the median alcohol consumption of drinkers and (2) the percentage of non-drinkers. Risk relationships were drawn from published meta-analyses. Age- and sex-specific distributions of alcohol consumption (grams per day) for the English population in 2006 were drawn from the General Household Survey 2006, and age-, sex- and cause-specific mortality data for 2006 were provided by the Office for National Statistics. The optimum median consumption level for drinkers in the model was 5 g/day (about half a unit), which would avert or delay 4579 (2544 to 6590) deaths per year. Approximately equal numbers of deaths from cancers and liver disease would be delayed or averted (∼2800 for each), while there was a small increase in cardiovascular mortality. The model showed no benefit in terms of reduced mortality when the proportion of non-drinkers in the population was increased. Current government recommendations for alcohol consumption are well above the level likely to minimise chronic disease. Public health targets should aim for a reduction in population alcohol consumption in order to reduce chronic disease mortality.

  9. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold, and re-normalization or orthogonalization must be applied to obtain proper rotations. These latter steps have been viewed as ad hoc corrections for the errors introduced by assuming a vector space. The article shows that the two approximative methods can be derived from natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
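
    A small numerical sketch of the two estimators compared here, using unit quaternions: the naive barycenter with re-normalization versus an iterative Riemannian (Karcher) mean. All helper functions are written out, and the inputs are assumed to lie in the same quaternion hemisphere (q and -q encode the same rotation).

    ```python
    import numpy as np

    def quat_mul(a, b):
        w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def quat_conj(q):
        return q * np.array([1.0, -1.0, -1.0, -1.0])

    def quat_log(q):
        """Log map: unit quaternion -> rotation vector (axis * angle)."""
        w, v = q[0], q[1:]
        n = np.linalg.norm(v)
        return np.zeros(3) if n < 1e-12 else 2.0 * np.arctan2(n, w) * v / n

    def quat_exp(r):
        """Exp map: rotation vector -> unit quaternion."""
        a = np.linalg.norm(r)
        if a < 1e-12:
            return np.array([1.0, 0.0, 0.0, 0.0])
        return np.concatenate([[np.cos(a / 2)], np.sin(a / 2) * r / a])

    def barycenter_mean(quats):
        """Naive mean: average the 4-vectors, then re-normalize."""
        m = np.mean(quats, axis=0)
        return m / np.linalg.norm(m)

    def riemannian_mean(quats, iters=20):
        """Karcher mean: iterate in the tangent space of the estimate."""
        mu = quats[0]
        for _ in range(iters):
            t = np.mean([quat_log(quat_mul(quat_conj(mu), q)) for q in quats], axis=0)
            mu = quat_mul(mu, quat_exp(t))
        return mu

    qs = np.array([quat_exp(r) for r in ([0.1, 0, 0], [0, 0.2, 0], [0, 0, 0.15])])
    print(barycenter_mean(qs), riemannian_mean(qs))  # close for small rotations
    ```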

  10. HLY1002_Averaged

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Models project the Arctic Ocean will become undersaturated with respect to carbonate minerals in the next decade. Recent field results indicate parts may already be...

  11. HLY1001_Averaged

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Models project the Arctic Ocean will become undersaturated with respect to carbonate minerals in the next decade. Recent field results indicate parts may already be...

  12. Modelling depth-averaged flow in the fluvial-tidal zone: the spatio-temporal interaction of river discharge and tidal cycle

    Science.gov (United States)

    Sandbach, S. D.; Nicholas, A.; Ashworth, P. J.; Best, J. L.; Parsons, D. R.; Sambrook Smith, G.; Simpson, C.; Keevil, C.

    2012-12-01

    The Columbia River has a strong seasonal variation in discharge, with a discharge of around 3000 m3/s during the fall to early spring increasing to 12000 m3/s during the spring freshet. In this paper, we present flow predictions obtained using a 2D depth-averaged model (Delft3D) and flow measurements obtained using an Acoustic Doppler Current Profiler (ADCP). The 80 km-long and 3-5 km-wide modelled reach of the Columbia River Estuary (USA) extends from an upriver boundary at the Beaver Army Terminal (BAT) to the downstream water-level boundary at the Hammond Tide Gauge (HTG) in the lower part of the estuary. The simulations were conducted using a boundary-fitted mesh with 120,000 grid cells of ~40 m. Gauged discharge at BAT and water levels at HTG provide boundary conditions for the model. The bed topography was derived from a series of bathymetric (multi- and single-beam echo soundings) and LiDAR surveys conducted during 2009 and 2010 by NOAA, USACE and LCREP, and collated to generate a Digital Elevation Model (DEM). Bed shear stress was estimated using a variety of roughness parameterisation methods, including Chezy, Colebrook-White and Manning, and a uniform eddy-viscosity turbulence closure. The ADCP data were collected during low river flow in October 2011 (~4000 m3/s) and high river flow in June 2011 (~14000 m3/s) and June 2012 (~12000 m3/s). ADCP data were collected from six transects located within the fluvial-tidal zone for each data collection period. The width of these transects ranged from 0.5-2 km and the data were collected during both flood and ebb. The model was calibrated using water-level measurements within the modelled reach and the results compare well with the measured ADCP data. Both field data and model results reveal the importance of the balance between the hydrodynamic forcing of the tidal flood and river flow, combined with the resultant interaction with bar topography and bed roughness. These complex interactions result in the flow reversals
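
    The roughness options named above enter the depth-averaged bed shear stress in the commonly used forms below (C is the Chezy coefficient, n the Manning coefficient, k_s the Nikuradse roughness height, H the water depth); the specific coefficient values used in the study are not reproduced here.

    ```latex
    \tau_b = \frac{\rho\, g\, U\lvert U\rvert}{C^{2}}, \qquad
    C_{\text{Chezy}} = \text{const}, \quad
    C_{\text{Manning}} = \frac{H^{1/6}}{n}, \quad
    C_{\text{Colebrook--White}} = 18 \log_{10}\!\left(\frac{12H}{k_s}\right).
    ```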

  13. Comparison of population-averaged and cluster-specific models for the analysis of cluster randomized trials with missing binary outcomes: a simulation study

    Directory of Open Access Journals (Sweden)

    Ma Jinhui

    2013-01-01

    Full Text Available Abstract Background The objective of this simulation study is to compare the accuracy and efficiency of population-averaged (i.e. generalized estimating equations (GEE)) and cluster-specific (i.e. random-effects logistic regression (RELR)) models for analyzing data from cluster randomized trials (CRTs) with missing binary responses. Methods In this simulation study, clustered responses were generated from a beta-binomial distribution. The number of clusters per trial arm, the number of subjects per cluster, the intra-cluster correlation coefficient, and the percentage of missing data were allowed to vary. Under the assumption of covariate-dependent missingness, missing outcomes were handled by complete case analysis, standard multiple imputation (MI) and within-cluster MI strategies. Data were analyzed using GEE and RELR. Performance of the methods was assessed using standardized bias, empirical standard error, root mean squared error (RMSE), and coverage probability. Results GEE performs well on all four measures — provided the downward bias of the standard error (when the number of clusters per arm is small) is adjusted appropriately — under the following scenarios: complete case analysis for CRTs with a small amount of missing data; standard MI for CRTs with variance inflation factor (VIF) 50. RELR performs well only when a small amount of data was missing and complete case analysis was applied. Conclusion GEE performs well as long as appropriate missing data strategies are adopted based on the design of CRTs and the percentage of missing data. In contrast, RELR does not perform well when either the standard or the within-cluster MI strategy is applied prior to the analysis.
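
    A minimal sketch of the population-averaged analysis on simulated clustered binary data, assuming the statsmodels library; the data-generating values are arbitrary. The cluster-specific (RELR) counterpart would replace the GEE call with a random-intercept logistic model.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical CRT-like data: binary outcome y, trial arm, cluster id.
    rng = np.random.default_rng(0)
    n_clusters, m = 20, 30
    cluster = np.repeat(np.arange(n_clusters), m)
    arm = np.repeat(rng.integers(0, 2, n_clusters), m)     # cluster-level arm
    u = np.repeat(rng.normal(0, 0.5, n_clusters), m)       # cluster effect
    y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.4 * arm + u))))
    df = pd.DataFrame({"y": y, "arm": arm, "cluster": cluster})

    # Population-averaged model: GEE with an exchangeable working correlation.
    gee = smf.gee("y ~ arm", groups="cluster", data=df,
                  family=sm.families.Binomial(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(gee.params)
    ```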

  14. MATHEMATICAL MODELING OF FLOW PARAMETERS FOR SINGLE WIND TURBINE

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available It is known that the construction of several large wind farms is planned on the territory of the Russian Federation, so tasks connected with the design and efficiency evaluation of wind farms are in demand today. One possible direction in design is mathematical modelling. The large eddy simulation method, developed within computational fluid dynamics, makes it possible to reproduce the unsteady structure of the flow in detail and to determine various integrated values. In this work, the flow around a single wind turbine installation is calculated by means of large eddy simulation with an actuator line method along the turbine blade. The computational domain was a box discretized with an adapted unstructured grid. The mathematical model comprised the continuity and momentum equations for an incompressible fluid. Large-scale vortex structures were calculated by integrating the filtered equations, with the Smagorinsky model used to determine the subgrid-scale turbulent viscosity. The geometrical parameters of the wind turbine were taken from open sources on the Internet. All physical values were defined at the centres of the computational cells. Terms in the equations were approximated with second-order accuracy in time and space, and velocity-pressure coupling was handled with the iterative PIMPLE algorithm. Eighteen physical values had to be computed at each time step, so the resources of a high-performance cluster were required. The calculation of the wake flow for the three-bladed turbine yielded average and instantaneous values of velocity, pressure, subgrid kinetic energy and turbulent viscosity, as well as the components of the subgrid stress tensor. The results matched known experimental and numerical results, confirming the feasibility of the approach.
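
    The subgrid viscosity referred to above is the standard Smagorinsky form, reproduced here for reference (C_s is the Smagorinsky constant and Δ the filter width tied to the grid spacing):

    ```latex
    \nu_{\mathrm{sgs}} = (C_s \Delta)^{2}\,\lvert \bar{S} \rvert,
    \qquad
    \lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
    \qquad
    \bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right).
    ```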

  15. Spatial averaging of fields from half-wave dipole antennas and corresponding SAR calculations in the NORMAN human voxel model between 65 MHz and 2 GHz.

    Science.gov (United States)

    Findlay, R P; Dimbylow, P J

    2009-04-21

    If an antenna is located close to a person, the electric and magnetic fields produced by the antenna will vary in the region occupied by the human body. To obtain a mean value of the field for comparison with reference levels, the Institute of Electrical and Electronic Engineers (IEEE) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommend spatially averaging the squares of the field strength over the height of the body. This study attempts to assess the validity and accuracy of spatial averaging when used for half-wave dipoles at frequencies between 65 MHz and 2 GHz and distances of λ/2, λ/4 and λ/8 from the body. The differences between mean electric field values calculated using ten field measurements and the true averaged value were approximately 15% in the 600 MHz to 2 GHz range. The results presented suggest that the use of modern survey equipment, which takes hundreds rather than tens of measurements, is advisable to arrive at a sufficiently accurate mean field value. Whole-body averaged and peak localized SAR values, normalized to calculated spatially averaged fields, were calculated for the NORMAN voxel phantom. It was found that the reference levels were conservative for all whole-body SAR values, but not for localized SAR, particularly in the 1-2 GHz region when the dipole was positioned very close to the body. However, if the maximum field is used for normalization of calculated SAR, as opposed to the lower spatially averaged value, the reference levels provide a conservative estimate of the localized SAR basic restriction for all frequencies studied.
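
    The spatial-averaging rule being tested reduces to a root-mean-square over the sampled heights; a minimal sketch, with hypothetical probe values:

    ```python
    import numpy as np

    # Hypothetical |E| probe values (V/m) at ten heights over the body.
    e_field = np.array([18.2, 19.5, 21.0, 22.4, 23.1,
                        22.8, 21.7, 20.3, 19.1, 18.0])

    # IEEE/ICNIRP-style spatial average: average the *squares* of the
    # field strength over height, then take the square root.
    e_avg = np.sqrt(np.mean(e_field ** 2))
    ```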

  16. QSPR models based on molecular mechanics and quantum chemical calculations. 1. Construction of Boltzmann averaged descriptors for alkanes, alcohols, diols, ethers and cyclic compounds

    DEFF Research Database (Denmark)

    Dyekjær, Jane Dannow; Rasmussen, Kjeld; Jonsdottir, Svava Osk

    2002-01-01

    Descriptors are Boltzmann-averaged by selection of the most relevant conformers out of a set of possible molecular conformers generated by a systematic scheme presented in this paper. Six of these descriptors are calculated with molecular mechanics and three with quantum chemical methods. Especially interesting descriptors are the relative van der Waals...
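
    The Boltzmann averaging referred to in the title weights each conformer's descriptor value by its Boltzmann factor. A minimal sketch with hypothetical per-conformer arrays:

    ```python
    import numpy as np

    def boltzmann_average(values, energies_kjmol, t_kelvin=298.15):
        """Boltzmann-weighted average of a per-conformer descriptor.

        `values` and `energies_kjmol` are hypothetical per-conformer
        arrays; energies are relative conformer energies in kJ/mol.
        """
        r = 8.314462618e-3  # gas constant, kJ/(mol*K)
        w = np.exp(-(energies_kjmol - np.min(energies_kjmol)) / (r * t_kelvin))
        return np.sum(w * values) / np.sum(w)
    ```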

  17. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 water-level projections: average conditions in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived water levels (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm Modeling...

  18. Modelling transport and deposition of caesium and iodine from the Chernobyl accident using the DREAM model

    Directory of Open Access Journals (Sweden)

    J. Brandt

    2002-01-01

    Full Text Available A tracer model, DREAM (the Danish Rimpuff and Eulerian Accidental release Model), has been developed for modelling transport, dispersion and deposition (wet and dry) of radioactive material from accidental releases, such as the Chernobyl accident. The model is a combination of a Lagrangian model, which includes the near-source dispersion, and an Eulerian model describing the long-range transport. The performance of the transport model has previously been tested within the European Tracer Experiment, ETEX, which included transport and dispersion of an inert, non-depositing tracer from a controlled release. The focus of this paper is the model performance with respect to the total deposition of 137Cs, 134Cs and 131I from the Chernobyl accident, using different relatively simple and comprehensive parameterizations for dry and wet deposition. The performance, compared to measurements, of different combinations of two wet deposition parameterizations and three parameterizations of dry deposition has been evaluated using different statistical tests. The best model performance, compared to measurements, is obtained when the total deposition is parameterized as a combination of a simple method for dry deposition and a subgrid-scale averaging scheme for wet deposition based on relative humidities. The same major conclusion is obtained for all three radioactive isotopes and for two different deposition measurement databases. Large differences are seen in the results obtained using the two wet deposition parameterizations, based on precipitation rates and on relative humidities, respectively. The parameterization based on subgrid-scale averaging performs, in all cases, better than the parameterization based on precipitation rates. This indicates that the in-cloud scavenging process is more important than the below-cloud scavenging process for the submicron particles and that the precipitation rates are

  19. Physical modelling of interactions between interfaces and turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Toutant, A

    2006-12-15

    The complex interactions between interfaces and turbulence strongly impact the flow properties. Unfortunately, Direct Numerical Simulations (DNS) have to entail a number of degrees of freedom proportional to the third power of the Reynolds number to correctly describe the flow behaviour. This extremely hard constraint makes it impossible to use DNS for industrial applications. Our strategy consists in using and improving the DNS method in order to develop the Interfaces and Sub-grid Scales (ISS) concept. ISS is a two-phase equivalent of the single-phase Large Eddy Simulation (LES) concept. The challenge of ISS is to integrate the two-way coupling phenomenon into sub-grid models. Applying a spatial filter, we have exhibited correlations, or sub-grid terms, that require closures. We have shown that, in two-phase flows, the presence of a discontinuity leads to specific sub-grid terms. Comparing the maximum of the norm of the sub-grid terms with that of the advection tensor, we have found that the sub-grid terms related to interfacial forces and viscous effects are negligible. Consequently, in the momentum balance, only the sub-grid terms related to inertia have to be closed. Thanks to a priori tests performed on several DNS datasets, we demonstrate that the scale-similarity hypothesis, reinterpreted near the discontinuity, provides sub-grid models that take into account the two-way coupling phenomenon. These models correspond to the first step of our work: at this stage, interfaces are smooth, and interactions between interfaces and turbulence occur in a transition zone where each physical variable varies sharply but continuously. The next challenge has been to determine the jump conditions across the sharp equivalent interface corresponding to the sub-grid models of the transition zone. We have used the matched asymptotic expansion method to obtain the jump conditions. The first tests on the velocity of the sharp equivalent interface are very promising. (author)

  20. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 ocean-currents projections: average conditions in Los Angeles County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived ocean current velocities (in meters per second) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The...

  1. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: average conditions in Los Angeles County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  2. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: average conditions in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  3. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: average conditions in Orange County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. The Coastal Storm Modeling System (CoSMoS) makes...

  4. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 water-level projections: average conditions in Ventura County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Model-derived total water levels (in meters) for the given storm condition and sea-level rise (SLR) scenario. The Coastal Storm Modeling System (CoSMoS) makes...

  5. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 ocean-currents projections: average conditions in Ventura County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Model-derived ocean current velocities (in meters per second) for the given storm condition and sea-level rise (SLR) scenario. The Coastal Storm Modeling System...

  6. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 ocean-currents projections: average conditions in Orange County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Model-derived ocean current velocities (in meters per second) for the given storm condition and sea-level rise (SLR) scenario. The Coastal Storm Modeling System...

  7. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: average conditions in Ventura County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. The Coastal Storm Modeling System (CoSMoS) makes...

  8. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 wave-hazard projections: average conditions in Santa Barbara County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. The Coastal Storm Modeling System (CoSMoS) makes...

  9. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 water-level projections: average conditions in Orange County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Model-derived total water levels (in meters) for the given storm condition and sea-level rise (SLR) scenario. The Coastal Storm Modeling System (CoSMoS) makes...

  10. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 water-level projections: average conditions in Santa Barbara County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Model-derived total water levels (in meters) for the given storm condition and sea-level rise (SLR) scenario. The Coastal Storm Modeling System (CoSMoS) makes...

  11. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 ocean-currents projections: average conditions in San Diego County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived ocean current velocities (in meters per second) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The...

  12. CoSMoS (Coastal Storm Modeling System) Southern California v3.0 Phase 2 water-level projections: average conditions in Los Angeles County

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived total water levels (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...

  13. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density ...

  14. Average Convexity in Communication Situations

    NARCIS (Netherlands)

    Slikker, M.

    1998-01-01

    In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graph-restricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the

  15. Alternatives to the Moving Average

    Science.gov (United States)

    Paul C. van Deusen

    2001-01-01

    There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...

  16. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
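
    The stated identity is easy to verify numerically. In the sketch below, avg_f and avg_g are the two alternative weighted averages of the variable x, and the covariance and mean of the weight ratio are taken under the g-weighting; the random inputs are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=1000)            # variable of interest
    f = rng.uniform(0.5, 2.0, 1000)      # weighting function 1
    g = rng.uniform(0.5, 2.0, 1000)      # weighting function 2

    avg_f = np.sum(f * x) / np.sum(f)
    avg_g = np.sum(g * x) / np.sum(g)

    # Identity: avg_f - avg_g = Cov_g(x, f/g) / E_g[f/g].
    r = f / g
    w = g / np.sum(g)
    e_r = np.sum(w * r)
    cov_xr = np.sum(w * x * r) - np.sum(w * x) * e_r
    assert np.isclose(avg_f - avg_g, cov_xr / e_r)
    ```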

  17. Mapping average GPP, RE, and NEP for 2000 to 2013 using satellite data integrated into regression-tree models in the conterminous United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large...

  18. The model evaluation of subsonic aircraft effect on the ozone and radiative forcing

    Energy Technology Data Exchange (ETDEWEB)

    Rozanov, E.; Zubov, V.; Egorova, T.; Ozolin, Y. [Main Geophysical Observatory, St.Petersburg (Russian Federation)

    1997-12-31

    A two-dimensional transient zonally averaged model was used for the evaluation of the effect of subsonic aircraft exhausts upon ozone, trace gases and radiation in the troposphere and lower stratosphere. The mesoscale transformation of gas composition was included on the basis of box model simulations. It has been found that the transformation of the exhausted gases at sub-grid scale is able to influence the results of the modelling. The radiative forcing caused by changes in gases, sulfate aerosol, soot and contrails was estimated to be as large as 0.12-0.15 W/m² (0.08 W/m² globally and annually averaged). (author) 10 refs.

  19. Bayesian model averaging in vector autoregressive processes with an investigation of stability of the US great ratios and risk of a liquidity trap in the USA, UK and Japan

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2007-01-01

    A Bayesian model averaging procedure is presented within the class of vector autoregressive (VAR) processes and applied to two empirical issues. First, stability of the "Great Ratios" in U.S. macro-economic time series is investigated, together with the presence and effects of permanent

  20. A nonlinear structural subgrid-scale closure for compressible MHD Part I: derivation and energy dissipation properties

    CERN Document Server

    Vlaykov, Dimitar G; Schmidt, Wolfram; Schleicher, Dominik R G

    2016-01-01

    Compressible magnetohydrodynamic (MHD) turbulence is ubiquitous in astrophysical phenomena ranging from the intergalactic to the stellar scales. In studying them, numerical simulations are nearly inescapable, due to the large degree of nonlinearity involved. However, the dynamical ranges of these phenomena are much larger than what is computationally accessible. In large eddy simulations (LES), the resulting limited resolution effects are addressed explicitly by introducing to the equations of motion additional terms associated with the unresolved, subgrid-scale (SGS) dynamics. This renders the system unclosed. We derive a set of nonlinear structural closures for the ideal MHD LES equations with particular emphasis on the effects of compressibility. The closures are based on a gradient expansion of the finite-resolution operator (W.K. Yeo, CUP 1993, ed. Galperin & Orszag) and require no assumptions about the nature of the flow or magnetic field. Thus the scope of their applicability ranges from the sub- to the hyper-sonic and -Alfvénic regimes.

  1. Dynamic Multiscale Averaging (DMA) of Turbulent Flow

    Energy Technology Data Exchange (ETDEWEB)

    Richard W. Johnson

    2012-09-01

    A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical applications.
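
    A minimal sketch of the two averaging operations that one DMA stage chains together, with arbitrary array sizes; the correlation source terms and the coarse-mesh solve are omitted.

    ```python
    import numpy as np

    def running_time_average(avg, sample, n):
        """Update a running time average with the n-th new sample."""
        return avg + (sample - avg) / n

    def volume_average_to_coarse(fine, factor):
        """Average a 2-D fine-grid field onto a grid coarser by `factor`."""
        ny, nx = fine.shape
        return fine.reshape(ny // factor, factor,
                            nx // factor, factor).mean(axis=(1, 3))

    # Stage sketch: time-average the fine field during the short DNS ...
    rng = np.random.default_rng(2)
    avg = 0.0
    for n, sample in enumerate(rng.random(100), start=1):
        avg = running_time_average(avg, sample, n)

    # ... then volume-average a fine field onto the next coarser mesh.
    fine_u = rng.random((64, 64))
    coarse_u = volume_average_to_coarse(fine_u, 4)
    ```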

  2. Global dust simulations in the multiscale modeling framework

    Science.gov (United States)

    Hsieh, W. C.; Rosa, D.; Collins, W. D.

    2013-03-01

    This study investigates the role of subgrid vertical transport in global simulations of soil-dust aerosols. The evolution and long-range transport of aerosols are strongly affected by vertical transport. In conventional global models, convective and turbulent transport is highly parameterized. This study applies the superparameterization (SP) framework, in which a cloud-resolving model (CRM) is embedded in each grid cell of a global model to replace these parametric treatments with explicit simulation of subgrid processes at the cloud-system scale. We apply the implementation of the SP framework in the National Center for Atmospheric Research Community Atmosphere Model (CAM), denoted SPCAM, for dust simulations. We focus on the effects of subgrid transport on dust simulations; thus, the sources and sinks of dust are calculated on the large-scale CAM grid, and the vertical transport of dust is computed in the CRM. We simulate present-day distributions of soil-dust aerosols using CAM and SPCAM operated in chemical transport mode, with large-scale meteorological fields prescribed using the same meteorological reanalysis. Therefore, the differences in dust fields between the two models caused by explicit versus parameterized treatments of convective transport can be examined. Comparison of dust profiles shows that SPCAM predicts less dust in the low to mid troposphere but relatively higher concentrations in the upper troposphere. The larger dust mass in the upper troposphere in SPCAM may be related to the dust implementation approach in this study, in which the larger resolved updrafts in the CRM for deep convection transport more dust aloft that is not accounted for by the removal processes at the CRM grid scale. A slightly higher mobilization flux, of less than 5% on average, is shown in SPCAM when compared with CAM. Similar patterns of elevated dry deposition are also produced, with increases larger than 100% in some areas. For wet deposition, on average CAM is ~31% higher than SPCAM

  3. Impact of an additional radiative CO2 cooling induced by subgrid-scale gravity waves in the middle and upper atmosphere

    Science.gov (United States)

    Medvedev, A. S.; Yigit, E.; Kutepov, A.; Feofilov, A.

    2011-12-01

    Atmospheric fluctuations produced by GWs are a substantial source of momentum and energy in the thermosphere (Yigit et al., 2009). These fluctuations also affect radiative transfer and, ultimately, the radiative heating/cooling rates. Recently, Kutepov et al. (2007) developed a methodology to account for radiative effects of subgrid-scale GWs not captured by general circulation models (GCMs). It has been extended by Kutepov et al (2011) to account not only for wave-induced variations of temperature, but also of CO2 and atomic oxygen. It was shown that these GWs can cause additional cooling of up to 3 K/day around mesopause. A key parameter for calculating the additional cooling is the temperature variance associated with GWs, which is a subproduct of conventional GW schemes. In this study, the parameterization of Kutepov et al. (2011) has been implemented into a 3-D comprehensive GCM that incorporates the effects of unresolved GWs via the extended nonlinear scheme of Yigit et al. (2008). Simulated net effects of the additional radiative CO2 cooling on the temperature and wind in the mesosphere and lower thermosphere are presented and discussed for solstice conditions. 1. Kutepov, A. A, A. G. Feofilov, A. S. Medvedev, A. W. A. Pauldrach, and P. Hartogh (2007), Geophys. Res. Lett. 34, L24807, doi:10.1029/2007GL032392. 2. Kutepov, A. A., A. G. Feofilov, A. S. Medvedev, U. Berger, and M. Kaufmann (2011), submitted to Geophys. Res. Letts. 3. Yigit, E., A. D. Aylward, and A. S. Medvedev (2008), J. Geophys. Res., 113, D19106, doi:10.1029/2008JD010135. 4. Yigit, E., A. S. Medvedev, A. D. Aylward, P. Hartogh, and M. J. Harris (2009), J. Geophys. Res., 114, D07101, doi:10.1029/2008JD011132.

  4. Cosmic inhomogeneities and averaged cosmological dynamics.

    Science.gov (United States)

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  5. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    cosmological models, but using purely spatial averages. As remarked at the end of the previous subsection, all known non-singular models were 'cosmological' in the sense that they could not describe a finite star surrounded by a surface of vanishing pressure. However, it can certainly happen that (say) the energy density ...

  6. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well-known EOQ model it can be verified that (under certain conditions) the AC approach gives

  7. Average male and female virtual dummy model (BioRID and EvaRID) simulations with two seat concepts in the Euro NCAP low severity rear impact test configuration.

    Science.gov (United States)

    Linder, Astrid; Holmqvist, Kristian; Svensson, Mats Y

    2017-06-13

    Soft tissue neck injuries, also referred to as whiplash injuries, can lead to long-term suffering and account for more than 60% of the cost to insurance companies of all injuries, sustained in vehicle crashes, that lead to permanent medical impairment. These injuries are sustained in all impact directions; however, they are most common in rear impacts. Injury statistics have consistently shown since the mid-1960s that females are subject to a higher risk of sustaining this type of injury than males, on average twice the risk. Furthermore, some recently developed anti-whiplash systems have been shown to provide less protection for females than for males. The protection of males and females should be addressed equally when designing and evaluating vehicle safety systems, to ensure maximum safety for everyone; this is currently not the case. The norm for crash test dummies representing humans in crash test laboratories is an average male. The female part of the population is not represented in tests performed by consumer information organisations such as NCAP, or in regulatory tests, due to the absence of a physical dummy representing an average female. Recently, the world's first virtual model of an average female crash test dummy was developed. In this study, simulations were run with both this model and an average male dummy model, seated in a simplified model of a vehicle seat. The results of the simulations were compared to earlier published results from simulations run in the same test set-up with a vehicle concept seat. The three crash pulse severities of the Euro NCAP low severity rear impact test were applied. The motion of the neck, head and upper torso was analysed, in addition to the accelerations and the Neck Injury Criterion (NIC). Furthermore, the response of the virtual models was compared to the response of volunteers, and the response of the average male model to that of a physical dummy model. Simulations

  8. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  9. Ultra-reducing conditions in average mantle peridotites and in podiform chromitites: a thermodynamic model for moissanite (SiC) formation

    Science.gov (United States)

    Golubkova, Anastasia; Schmidt, Max W.; Connolly, James A. D.

    2016-05-01

    Natural moissanite (SiC) is reported from mantle-derived samples ranging from lithospheric mantle keel diamonds to serpentinites to podiform chromitites in ophiolites related to suprasubduction zone settings (Luobusa, Dongqiao, Semail, and Ray-Iz). To simulate ultra-reducing conditions and the formation of moissanite, we compiled thermodynamic data for alloys (Fe-Si-C and Fe-Cr), carbides (Fe3C, Fe7C3, SiC), and Fe-silicides; these data were augmented by commonly used thermodynamic data for silicates and oxides. Computed phase diagram sections then constrain the P–T–fO2 conditions of SiC stability in the upper mantle. Our results demonstrate that: moissanite only occurs at oxygen fugacities 6.5-7.5 log units below the iron-wustite buffer; moissanite and chromite cannot stably coexist; increasing pressure does not lead to the stability of this mineral pair; and silicates that coexist with moissanite have XMg > 0.99. At upper mantle conditions, chromite reduces to Fe-Cr alloy at fO2 values 3.7-5.3 log units above the moissanite-olivine-(ortho)pyroxene-carbon (graphite or diamond) buffer (MOOC). The occurrence of SiC in chromitites and the absence of domains with almost Fe-free silicates suggest that ultra-reducing conditions allowing for SiC are confined to grain-scale microenvironments. In contrast to previous ultra-high-pressure and/or temperature hypotheses for SiC origin, we postulate a low- to moderate-temperature mechanism, which operates via ultra-reducing fluids. In this model, graphite-/diamond-saturated moderately reducing fluids evolve in chemical isolation from the bulk rock to ultra-reducing methane-dominated fluids by sequestering H2O into hydrous phases (serpentine, brucite, phase A). Carbon isotope compositions of moissanite are consistent with an origin of such fluids from sediments originally rich in organic compounds. Findings of SiC within rocks mostly composed of hydrous phases (serpentine + brucite) support this model. Both the hydrous phases

  10. On Parametric Sensitivity of Reynolds-Averaged Navier-Stokes SST Turbulence Model: 2D Hypersonic Shock-Wave Boundary Layer Interactions

    Science.gov (United States)

    Brown, James L.

    2014-01-01

    Examined is sensitivity of separation extent, wall pressure and heating to variation of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and Axisymmetric Hypersonic Shock Wave Turbulent Boundary Layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments extensively used in a prior related uncertainty analysis provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations as to future turbulence approach.

  11. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  12. From averaged to simultaneous controllability

    OpenAIRE

    Lohéac, Jérôme; Zuazua, Enrique

    2016-01-01

    We consider a linear finite dimensional control system depending on unknown parameters. We aim to design controls, independent of the parameters, to control the system in some optimal sense. We discuss the notions of averaged control, according to which one aims to control only the average of the states with respect to the unknown parameters, and the notion of simultaneous control in which the goal is to control the system for all values of these parameters. We show ho...

  13. Convergence of multiple ergodic averages

    OpenAIRE

    Host, Bernard

    2006-01-01

    These notes are based on a course for a general audience given at the Centro de Modelamiento Matemático of the University of Chile, in December 2004. We study the mean convergence of multiple ergodic averages, that is, averages of a product of functions taken at different times. We also describe the relations between this area of ergodic theory and some classical and some recent results in additive number theory.
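
    The objects studied are the multiple ergodic averages; for a measure-preserving transformation T and bounded functions f_1, ..., f_k they read:

    ```latex
    \frac{1}{N} \sum_{n=1}^{N} f_1(T^{n}x)\, f_2(T^{2n}x) \cdots f_k(T^{kn}x),
    ```

    whose convergence in the mean (L^2) sense is the subject of the notes.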

  14. Average inbreeding or equilibrium inbreeding?

    OpenAIRE

    Hedrick, P. W.

    1986-01-01

    The equilibrium inbreeding is always higher than the average inbreeding. For human populations with high inbreeding levels, the inbreeding equilibrium is more than 25% higher than the average inbreeding. Assuming no initial inbreeding in the population, the equilibrium inbreeding value is closely approached in 10 generations or less. A secondary effect of this higher inbreeding level is that the equilibrium frequency of recessive detrimental alleles is somewhat lower than expected using average inbreeding.

  15. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  16. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  17. Polyhedral Painting with Group Averaging

    Science.gov (United States)

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  18. Fuzzy Weighted Average: Analytical Solution

    NARCIS (Netherlands)

    van den Broek, P.M.; Noppen, J.A.R.

    2009-01-01

    An algorithm is presented for the computation of analytical expressions for the extremal values of the α-cuts of the fuzzy weighted average, for triangular or trapezoidal weights and attributes. Also, an algorithm for the computation of the inverses of these expressions is given, providing exact
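
    The paper derives analytical expressions; as a brute-force cross-check, the extrema of the weighted average over interval-valued α-cuts can be found at interval endpoints, because the expression is monotone in each attribute and in each weight. A sketch, with hypothetical interval values:

    ```python
    import numpy as np
    from itertools import product

    def fwa_alpha_cut(attr_ivals, weight_ivals):
        """Extremal values of sum(w*x)/sum(w) for interval attributes
        and weights, by enumerating the box vertices (extrema of a
        linear-fractional, coordinate-wise monotone function)."""
        best_lo, best_hi = np.inf, -np.inf
        for xs in product(*attr_ivals):
            for ws in product(*weight_ivals):
                v = np.dot(ws, xs) / np.sum(ws)
                best_lo, best_hi = min(best_lo, v), max(best_hi, v)
        return best_lo, best_hi

    # One alpha-cut of triangular attributes/weights (illustrative values):
    print(fwa_alpha_cut([(1, 2), (3, 5)], [(0.2, 0.5), (0.5, 1.0)]))
    ```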

  19. Gaussian moving averages and semimartingales

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2008-01-01

    are constructive - meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...

  20. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  1. On flux terms in volume averaging

    NARCIS (Netherlands)

    Chu, S.G.; Prosperetti, Andrea

    2016-01-01

    This note examines the modeling of non-convective fluxes (e.g., stress, heat flux and others) as they appear in the general, unclosed form of the volume-averaged equations of multiphase flows. By appealing to the difference between slowly and rapidly varying quantities, it is shown that the natural

  2. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
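
    A minimal version of the kind of MA rule these models study: take a long position when the price is above its own moving average, stay out otherwise. The window length is arbitrary.

    ```python
    import numpy as np

    def ma_rule_positions(prices, window=50):
        """1 = long when price exceeds its moving average, else 0."""
        ma = np.convolve(prices, np.ones(window) / window, mode="valid")
        return (prices[window - 1:] > ma).astype(int)
    ```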

  3. Final Report: Systematic Development of a Subgrid Scaling Framework to Improve Land Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dickinson, Robert Earl [Univ. of Texas, Austin, TX (United States)

    2016-07-11

    We carried out research to develop improvements of the land component of climate models and to understand the role of land in climate variability and change. A highlight was the development of a 3D canopy radiation model. More than a dozen publications resulted.

  4. Stochastic Simulation of Hourly Average Wind Speed in Umudike ...

    African Journals Online (AJOL)

    Ten years of hourly average wind speed data were used to build a seasonal autoregressive integrated moving average (SARIMA) model. The model was used to simulate hourly average wind speed and recommend possible uses at Umudike, South eastern Nigeria. Results showed that the simulated wind behaviour was ...
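
    A sketch of fitting and simulating a seasonal ARIMA for hourly data with a 24-hour cycle, assuming the statsmodels library; the synthetic series and the (1,0,1)x(1,1,1,24) orders are illustrative, not those identified in the paper.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical hourly wind-speed series with a daily cycle.
    rng = np.random.default_rng(3)
    t = np.arange(24 * 60)
    y = 5 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.8, t.size)

    model = sm.tsa.SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
    res = model.fit(disp=False)
    sim = res.simulate(nsimulations=48, anchor="end")  # two simulated days
    ```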

  5. Disk-averaged synthetic spectra of Mars.

    Science.gov (United States)

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2005-08-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.

  6. Changing mortality and average cohort life expectancy

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2005-01-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate

  7. Modeling the moist-convective atmosphere with a Quasi-3-D Multiscale Modeling Framework (Q3D MMF)

    Science.gov (United States)

    Jung, Joon-Hee; Arakawa, Akio

    2014-03-01

    The Q3D MMF (Quasi-Three-Dimensional Multiscale Modeling Framework) is a new generation of MMF that replaces the conventional subgrid-scale parameterizations in general circulation models (GCMs) with explicit simulations of cloud and associated processes by cloud-resolving models (CRMs). In the Q3D MMF, 3-D CRMs are applied to the channel domains that extend over GCM grid cells. To avoid "double counting" of the large-scale effects, only the eddy effects simulated by the CRMs are implemented into the GCM as far as the transports are concerned, while the total effects are implemented for diabatic processes. The CRMs recognize the large-scale horizontal inhomogeneity through the lateral boundary conditions obtained from the GCM through interpolation. To maintain compatibility between the GCM and CRMs, the averages of CRM variables over the GCM grid spacing are relaxed to the corresponding GCM variables with the advective time scale. To evaluate the Q3D MMF, a transition from a wave to strong vortices is simulated in an idealized horizontal domain. Comparison with a fully 3-D benchmark simulation shows that the Q3D MMF successfully predicts the evolution of the vortices. It also captures important statistics such as the domain-averaged surface precipitation rate, turbulent fluxes and subgrid-scale (co)variances. From tests with 3-D and 2-D CRMs, respectively, it is concluded that the ability to recognize large-scale inhomogeneities is primarily responsible for the successful performance of the Q3D MMF. It is also demonstrated that the use of two perpendicular sets of CRMs has positive impacts on the simulation.
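
    The compatibility constraint described above (relaxing CRM grid-cell means toward GCM values over an advective time scale) is easy to sketch; the function below is a toy illustration, with an assumed form tau = dx_gcm / |u_large| for the advective time scale and invented numbers.

        import numpy as np

        def relax_crm_to_gcm(q_crm, q_gcm, dt, u_large, dx_gcm):
            """Nudge the mean of CRM values in one GCM cell toward the GCM value."""
            tau = dx_gcm / max(abs(u_large), 1e-6)  # advective time scale (assumed form)
            # Shift all CRM points so their mean decays toward q_gcm with time scale tau;
            # deviations from the CRM mean (the resolved eddies) are left untouched.
            return q_crm + (dt / tau) * (q_gcm - q_crm.mean())

        q = np.array([0.9, 1.1, 1.3, 0.7])  # CRM field within one GCM cell (toy values)
        print(relax_crm_to_gcm(q, q_gcm=1.5, dt=60.0, u_large=10.0, dx_gcm=2.0e5).mean())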

  8. Self-Averaging Expectation Propagation

    DEFF Research Database (Denmark)

    Cakmak, Burak; Opper, Manfred; Fleury, Bernard Henri

    We investigate the problem of approximate inference using Expectation Propagation (EP) for large systems under some statistical assumptions. Our approach tries to overcome the numerical bottleneck of EP caused by the inversion of large matrices. Assuming that the measurement matrices are realizations of specific types of random matrix ensembles – called invariant ensembles – the EP cavity variances have an asymptotic self-averaging property. They can be pre-computed using specific generating functions which do not require matrix inversions. We demonstrate the performance of our approach ...

  9. Application of a New Hybrid Model with Seasonal Auto-Regressive Integrated Moving Average (ARIMA) and Nonlinear Auto-Regressive Neural Network (NARNN) in Forecasting Incidence Cases of HFMD in Shenzhen, China

    Science.gov (United States)

    Tan, Li; Jiang, Hongbo; Wang, Ying; Wei, Sheng; Nie, Shaofa

    2014-01-01

    Background: Outbreaks of hand-foot-mouth disease (HFMD) have been reported many times in Asia during recent decades. This emerging disease has drawn worldwide attention and vigilance. Nowadays, the prevention and control of HFMD has become an imperative issue in China. Early detection and response before an outbreak occurs would be helpful, and modern information technology can support this during an epidemic. Method: In this paper, a hybrid model combining a seasonal auto-regressive integrated moving average (ARIMA) model and a nonlinear auto-regressive neural network (NARNN) is proposed to predict the expected incidence cases from December 2012 to May 2013, using retrospective observations obtained from the China Information System for Disease Control and Prevention from January 2008 to November 2012. Results: The best-fitted hybrid model combined a seasonal ARIMA model with a NARNN with 15 hidden units and 5 delays. The hybrid model achieves good forecasting performance, estimating the expected incidence cases from December 2012 to May 2013 as −965.03, −1879.58, 4138.26, 1858.17, 4061.86 and 6163.16, respectively, with a clearly increasing trend. Conclusion: The model proposed in this paper can predict the incidence trend of HFMD effectively, which could be helpful to policy makers. The usefulness of the expected cases of HFMD lies not only in detecting outbreaks or providing probability statements, but also in providing decision makers with a probable trend of the variability of future observations that contains both historical and recent information. PMID:24893000
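
    A rough sketch of the two-stage hybrid idea: a seasonal ARIMA captures the linear and seasonal structure, then a small autoregressive neural network is fit to its residuals. Here scikit-learn's MLPRegressor stands in for the NARNN; apart from the 15-unit/5-delay structure taken from the abstract, the series and orders are invented for illustration.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        t = np.arange(120)
        # Synthetic monthly "incidence" with seasonality and a mild nonlinearity.
        y = 50 + 20 * np.sin(2 * np.pi * t / 12) + 0.002 * t**2 + rng.normal(0, 3, t.size)

        # Stage 1: linear seasonal component.
        sarima = SARIMAX(y, order=(1, 0, 0), seasonal_order=(1, 1, 0, 12)).fit(disp=False)
        resid = y - sarima.fittedvalues

        # Stage 2: nonlinear autoregression on the residuals (15 hidden units, 5 delays).
        d = 5
        X = np.column_stack([resid[i:len(resid) - d + i] for i in range(d)])
        nn = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000,
                          random_state=0).fit(X, resid[d:])

        hybrid_fit = sarima.fittedvalues[d:] + nn.predict(X)
        print(np.mean((y[d:] - hybrid_fit) ** 2))  # in-sample mean squared error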

  10. High resolution modelling of extreme precipitation events in urban areas

    Science.gov (United States)

    Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave

    2015-04-01

    Present-day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding and high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high resolution information of the sewer system and of the terrain into account [1, 2]. The combination of the high resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extremely high resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with ...

  11. Fluctuations of wavefunctions about their classical average

    CERN Document Server

    Bénet, L; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  12. An open-source distributed mesoscale hydrologic model (mHM)

    Science.gov (United States)

    Samaniego, Luis; Kumar, Rohini; Zink, Matthias; Thober, Stephan; Mai, Juliane; Cuntz, Matthias; Schäfer, David; Schrön, Martin; Musuuza, Jude; Prykhodko, Vladyslav; Dalmasso, Giovanni; Attinger, Sabine; Spieler, Diana; Rakovec, Oldrich; Craven, John; Langenberg, Ben

    2014-05-01

    The mesoscale hydrological model (mHM) is based on numerical approximations of dominant hydrological processes that have been tested in various hydrological models, such as HBV and VIC. In general, mHM simulates the following processes: canopy interception, snow accumulation and melting, soil moisture dynamics (n horizons), infiltration and surface runoff, evapotranspiration, subsurface storage and discharge generation, deep percolation and baseflow, and discharge attenuation and flood routing. The main characteristic of mHM is its treatment of the sub-grid variability of input variables and model parameters, which clearly distinguishes this model from existing precipitation-runoff models or land surface models. It uses Multiscale Parameter Regionalization (MPR) to account for the sub-grid variability and to avoid continuous re-calibration. Effective model parameters are location and time dependent (e.g., soil porosity). They are estimated through upscaling operators that link sub-grid morphologic information (e.g., soil texture) with global transfer-function parameters, which, in turn, are found through multi-basin optimization. Global parameters estimated with the MPR technique are quasi-scale invariant and guarantee flux-matching across scales. mHM is an open-source code, written in Fortran 2003 (standard), fully modular, computationally efficient, and parallelized. It is portable to multiple platforms (Linux, OS X, Windows) and includes a number of algorithms for sensitivity analysis, analysis of parameter uncertainty (MCMC), and optimization (DDS, SA, SCE). All simulated state variables and outputs can be stored as netCDF files for further analysis and visualization. mHM has been evaluated in all major river basins in Germany and over 80 US and 250 European river basins. The model efficiency (NSE) during validation at proxy locations is on average greater than 0.6. In recent years, mHM has been used for a number of hydrologic applications such as ...

  13. Average neutron detection efficiency for DEMON detectors

    Science.gov (United States)

    Zhang, S.; Lin, W.; Rodrigues, M. R. D.; Huang, M.; Wada, R.; Liu, X.; Zhao, M.; Jin, Z.; Chen, Z.; Keutgen, T.; Kowalski, S.; Hagel, K.; Barbui, M.; Bonasera, A.; Bottosso, C.; Materna, T.; Natowitz, J. B.; Qin, L.; Sahu, P. K.; Schmidt, K. J.; Wang, J.

    2013-05-01

    The neutron detection efficiency of a DEMON detector, averaged over the whole volume, was calculated using GEANT and applied to determine neutron multiplicities in an intermediate heavy ion reaction. When a neutron source is set at a distance of about 1 m from the front surface of the detector, the average efficiency, ε_av, is found to be significantly lower (20-30%) than the efficiency measured at the center of the detector, ε_0. In the GEANT simulation the ratio R = ε_av/ε_0 was calculated as a function of neutron energy. The experimental central efficiency multiplied by R was then used to determine the average efficiency. The results were applied to a study of the 64Zn+112Sn reaction at 40 A MeV which employed 16 DEMON detectors. The neutron multiplicity was extracted using a moving-source fit. The derived multiplicities compare well with those determined using the neutron ball in the NIMROD detector array in a separate experiment. Both are in good agreement with multiplicities predicted by a transport model calculation using an antisymmetrized molecular dynamics (AMD) model code.
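
    The correction described above is simple arithmetic once the simulated ratio R(E) is in hand: multiply the measured central efficiency by R to obtain the volume-averaged efficiency, then divide observed counts by it. All numbers below are invented for illustration.

        import numpy as np

        energy_mev = np.array([2.0, 5.0, 10.0, 20.0, 40.0])    # neutron energies
        ratio_R    = np.array([0.78, 0.75, 0.73, 0.72, 0.70])  # R = eps_av/eps_0 (toy GEANT output)
        eps_center = np.array([0.42, 0.38, 0.31, 0.24, 0.18])  # measured central efficiency (toy)

        eps_average = ratio_R * eps_center           # volume-averaged efficiency
        counts      = np.array([120, 95, 60, 33, 12])
        print(np.round(counts / eps_average, 1))     # efficiency-corrected neutron yields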

  14. Thermodynamic properties of average-atom interatomic potentials for alloys

    OpenAIRE

    Nöhring, Wolfram Georg; Curtin, William

    2016-01-01

    The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom po...

  15. Towards filtered drag force model for non-cohesive and cohesive particle-gas flows

    Science.gov (United States)

    Ozel, Ali; Gu, Yile; Milioli, Christian C.; Kolehmainen, Jari; Sundaresan, Sankaran

    2017-10-01

    Euler-Lagrange simulations of gas-solid flows in unbounded domains have been performed to study sub-grid modeling of the filtered drag force for non-cohesive and cohesive particles. The filtered drag forces under various microstructures and flow conditions were analyzed in terms of two sub-grid quantities: the sub-grid drift velocity, which stems from the sub-grid correlation between the local fluid velocity and the local particle volume fraction, and the scalar variance of solid volume fraction, which is a measure of the degree of local inhomogeneity of the volume fraction within a filter volume. The results show that the drift velocity and the scalar variance exert systematic effects on the filtered drag force. Effects of particle and domain sizes, gravitational accelerations, and mass loadings on the filtered drag are also studied, and it is shown that these effects can be captured by both sub-grid quantities. Additionally, the effect of the cohesion force through the van der Waals interaction on the filtered drag force is investigated, and it is found that there is no significant difference in the dependence of the filtered drag coefficient of cohesive and non-cohesive particles on the sub-grid drift velocity or the scalar variance of solid volume fraction. The predictability of the sub-grid quantities was assessed by correlation coefficient analyses in an a priori manner, and the drift velocity was found to be superior. However, the drift velocity is not available in "coarse-grid" simulations and a specific closure is needed. A dynamic scale-similarity approach was used to model the drift velocity, but the predictability of that model is not entirely satisfactory. It is concluded that one must develop a more elaborate model for estimating the drift velocity in "coarse-grid" simulations.
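
    Under the definition commonly used in this literature, the drift velocity is the difference between the phase-weighted filtered gas velocity and the filtered gas velocity. The numpy sketch below computes it with a uniform box filter on synthetic, uncorrelated 2-D fields (so the resulting drift velocity is near zero, unlike in genuinely clustered gas-solid flows); field names and sizes are assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        rng = np.random.default_rng(2)
        alpha = np.clip(0.3 + 0.1 * rng.standard_normal((128, 128)), 0.0, 0.6)  # solid fraction
        u_gas = rng.standard_normal((128, 128))                                 # gas velocity

        w = 16                                       # filter width in fine-grid cells
        alpha_f   = uniform_filter(alpha, size=w)
        u_f       = uniform_filter(u_gas, size=w)
        alpha_u_f = uniform_filter(alpha * u_gas, size=w)

        u_drift   = alpha_u_f / alpha_f - u_f        # sub-grid drift velocity
        var_alpha = uniform_filter(alpha**2, size=w) - alpha_f**2  # scalar variance
        print(u_drift.std(), var_alpha.mean())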

  16. Sex Effect on Average Bioequivalence.

    Science.gov (United States)

    Ibarra, Manuel; Vázquez, Marta; Fagiolino, Pietro

    2017-01-01

    Generic formulations are by far the most prescribed drugs. This scenario is highly beneficial for society because medication expenses are significantly reduced after expiration of the exclusivity period conceded to the branded name drug. Correspondingly, these formulations must be adequately evaluated to avoid drug inefficacy and toxicity in the overall patient population. Bioequivalence studies are the only in vivo evaluation that a generic drug must overcome to reach the market. These clinical trials have not been exempt from underrepresentation of female subjects and a lack of sex-based analysis. Frequently, conclusions obtained in men are extrapolated to women. Furthermore, the obtained results are not analyzed to determine sex differences. The aim of this study was to discuss the effect that male and female differences in gastrointestinal physiology can have on bioequivalence conclusions and to show why a sex-based analysis must be conducted in these studies to improve the evaluation of generic drugs. This discussion was based on observed sex differences in product bioavailability discrimination (sex-by-formulation interaction) and on residual variability through an analysis of average bioequivalence data previously reported by other researchers and data collected by our center. Bioequivalence studies of oral formulations, with a 2-period, 2-sequence, 2-treatment random crossover design performed in healthy subjects with at least 6 subjects of each sex, were included. In addition, the bioequivalence conclusion that would have been reached in each study if performed with only 1 sex was estimated. The data reveal that differences in both product bioavailability discrimination and residual variability occur with a significant incidence in bioequivalence studies. In either Cmax or AUC, a significant sex-by-formulation interaction was present in 1 of 3 reviewed studies, whereas differences in residual variability between sexes were significant for >50% of studies

  17. SU-F-T-202: An Evaluation Method of Lifetime Attributable Risk for Comparing Between Proton Beam Therapy and Intensity Modulated X-Ray Therapy for Pediatric Cancer Patients by Averaging Four Dose-Response Models for Carcinoma Induction

    Energy Technology Data Exchange (ETDEWEB)

    Tamura, M; Shirato, H [Department of Radiation Oncology, Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido (Japan); Ito, Y [Department of Biostatistics, Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido (Japan); Sakurai, H; Mizumoto, M; Kamizawa, S [Proton Medical Research Center, University of Tsukuba, Tsukuba, Ibaraki (Japan); Murayama, S; Yamashita, H [Proton Therapy Division, Shizuoka Cancer Center Hospital, Nagaizumi, Shizuoka (Japan); Takao, S; Suzuki, R [Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido (Japan)

    2016-06-15

    Purpose: To examine how much the lifetime attributable risk (LAR), used as an in silico surrogate marker of radiation-induced secondary cancer, would be lowered by using proton beam therapy (PBT) in place of intensity modulated x-ray therapy (IMXT) in pediatric patients. Methods: From 242 pediatric patients with cancers who were treated with PBT, 26 patients were selected by random sampling after stratification into four categories: a) brain, head, and neck, b) thoracic, c) abdominal, and d) whole craniospinal (WCNS) irradiation. IMXT was re-planned using the same computed tomography and regions of interest. Using the dose volume histograms (DVH) of PBT and IMXT, the LAR of Schneider et al. was calculated for the same patient. The four published dose-response models for carcinoma induction: i) full model, ii) bell-shaped model, iii) plateau model, and iv) linear model were tested for organs at risk. In the case that more than one dose-response model was available, the LAR for this patient was calculated by averaging the LAR for each dose-response model. Results: Calculation of the LARs of PBT and IMXT based on DVH was feasible for all patients. The mean±standard deviation of the cumulative LAR difference between PBT and IMXT for the four categories was a) 0.77±0.44% (n=7, p=0.0037), b) 23.1±17.2% (n=8, p=0.0067), c) 16.4±19.8% (n=8, p=0.0525), and d) 49.9±21.2% (n=3, p=0.0275, one-tailed t-test), respectively. The LAR was significantly lower for PBT than for IMXT in the brain, head, and neck region, the thoracic region, and whole craniospinal irradiation. Conclusion: In pediatric patients who had undergone PBT, the LAR of PBT was significantly lower than the LAR of IMXT estimated by in silico modeling. This method was suggested to be useful as an in silico surrogate marker of secondary cancer induced by different radiotherapy techniques. This research was supported by the Translational Research Network Program, JSPS KAKENHI Grant No. 15H04768 and the Global Institution for ...

  18. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations in order to study aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers 10,000, 24,000 and 60...

  19. Site Averaged Neutron Soil Moisture: 1988 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the neutron probe soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  20. Site Averaged Gravimetric Soil Moisture: 1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  1. Site Averaged Gravimetric Soil Moisture: 1988 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  2. Site Averaged Gravimetric Soil Moisture: 1987 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  4. Mathematical model for estimating daily average air temperature in Goiás, Brazil (Modelo matemático para estimativa da temperatura média diária do ar no Estado de Goiás)

    Directory of Open Access Journals (Sweden)

    Jorge Cesar dos Anjos Antonini

    2009-04-01

    The objective of this work was to develop a mathematical model to predict the daily average air temperature in the state of Goiás, Brazil, accounting simultaneously for spatial and temporal variation. The model was developed as a linear combination of altitude, latitude, longitude, and an incomplete trigonometric Fourier series using the first three harmonic coefficients. The parameters of the model were adjusted with data from 21 weather stations, using multiple linear regression. The resulting correlation coefficient of the model was 0.91, and Willmott's index of agreement was close to 1. The model was tested with data from three additional weather stations at different altitudes: high (1,100 m), medium (554 m), and low (431 m). The performance of the model was reasonable for both the high- and low-altitude stations, and very good for the medium-altitude station.

  5. Analysis of aggregation and disaggregation effects for grid-based hydrological models and the development of improved precipitation disaggregation procedures for GCMs

    Directory of Open Access Journals (Sweden)

    H. S. Wheater

    1999-01-01

    Appropriate representation of hydrological processes within atmospheric General Circulation Models (GCMs) is important with respect to internal model dynamics (e.g. surface feedback effects on atmospheric fluxes), continental runoff production and the simulation of terrestrial impacts of climate change. However, at the scale of a GCM grid-square, several methodological problems arise. Spatial disaggregation of grid-square average climatological parameters is required in particular to produce appropriate point intensities from average precipitation. Conversely, aggregation of land surface heterogeneity is necessary for grid-scale or catchment-scale application. The performance of grid-based hydrological models is evaluated for two large (10^4 km^2) UK catchments. Simple schemes, using sub-grid averages of individual land uses at 40 km scale and with no calibration, perform well at the annual time-scale and, with the addition of a (calibrated) routing component, at the daily and monthly time-scales. Decoupling of hillslope and channel routing does not necessarily improve performance or identifiability. Scale dependence is investigated through application of distribution functions for rainfall and soil moisture at 100 km scale. The results depend on climate, but show interdependence of the representation of sub-grid rainfall and soil moisture distribution. Rainfall distribution is analysed directly using radar rainfall data from the UK and the Arkansas Red River, USA. Among other properties, the scale dependence of spatial coverage upon radar pixel resolution and GCM grid-scale, as well as the serial correlation of coverages, are investigated. This leads to a revised methodology for GCM application, as a simple extension of current procedures. A new location-based approach using an image processing technique is then presented, to allow for the preservation of the spatial memory of the process.

  6. Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling

    CERN Document Server

    Harouna, S Kadri

    2016-01-01

    We explore the potential of a formulation of the Navier-Stokes equations incorporating a random description of the small-scale velocity component. This model, established from a version of the Reynolds transport theorem adapted to a stochastic representation of the flow, gives rise to a large-scale description of the flow dynamics in which emerges an anisotropic subgrid tensor, reminiscent of the Reynolds stress tensor, together with a drift correction due to inhomogeneous turbulence. The corresponding subgrid model, which depends on the small-scale velocity variance, generalizes the Boussinesq eddy viscosity assumption. However, it is no longer obtained from an analogy with molecular dissipation but ensues rigorously from the random modeling of the flow. This principle allows us to propose several subgrid models defined directly on the resolved flow component. We assess and compare those models numerically on a standard Taylor-Green vortex flow at Reynolds number 1600. The numerical simulations, carried out w...

  7. Bounce-averaged approach to radial diffusion modeling: From a new derivation of the instantaneous rate of change of the third adiabatic invariant to the characterization of the radial diffusion process

    Science.gov (United States)

    Lejosne, SolèNe; Boscher, Daniel; Maget, Vincent; Rolland, Guy

    2012-08-01

    In this paper, a new approach for the derivation of the instantaneous rate of change of the third adiabatic invariant is introduced. It is based on the tracking of the bounce-averaged motion of guiding centers with assumptions that are only kept to the necessary conditions for definition and conservation of the first two adiabatic invariants. The derivation is first given in the case of trapped equatorial particles drifting in a time varying magnetic field in the absence of electrostatic potential. It is then extended to more general cases including time varying electric potentials and non-equatorial particles. Finally, the general formulation of the third adiabatic invariant time derivative is related to the description of the radial diffusion process occurring in the radiation belts. It highlights the links that exist between previous theoretical works with the objective of a better understanding of the radial diffusion process. A theoretical validation in the specific case of equatorial particles drifting in a magnetic field model whose disturbed part is limited to the first terms of a spherical expansion is also presented.

  8. Isotropic averaging for cell-dynamical-system simulation of spinodal ...

    Indian Academy of Sciences (India)

    Anand Kumar. Research Articles, Volume 61, Issue 1, July 2003, pp. 1-5. Averagings are employed in the cell-dynamical-system simulation of spinodal decomposition for inter-cell coupling. The averagings used in ... CSIR Centre for Mathematical Modelling and Computer Simulation, Belur Campus, Bangalore 560 037, India ...

  9. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    ... the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video...

  10. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  11. Obligatory and adaptive averaging in visual short-term memory.

    Science.gov (United States)

    Dubé, Chad; Sekuler, Robert

    2015-01-01

    Visual memory can draw upon averaged perceptual representations, a dependence that could be both adaptive and obligatory. In support of this idea, we review a wide range of evidence, including findings from our own lab. This evidence shows that time- and space-averaged memory representations influence detection and recognition responses, and do so without instruction to compute or report an average. Some of the work reviewed exploits fine-grained measures of retrieval from visual short-term memory to closely track the influence of stored averages on recall and recognition of briefly presented visual textures. Results show that reliance on perceptual averages is greatest when memory resources are taxed or when subjects are uncertain about the fidelity of their memory representation. We relate these findings to models of how summary statistics impact visual short-term memory, and discuss a neural signature for contexts in which perceptual averaging exerts maximal influence.

  12. Cloud base vertical velocity statistics: a comparison between an atmospheric mesoscale model and remote sensing observations

    Directory of Open Access Journals (Sweden)

    J. Tonttila

    2011-09-01

    The statistics of cloud base vertical velocity simulated by the non-hydrostatic mesoscale model AROME are compared with Cloudnet remote sensing observations at two locations: the ARM SGP site in central Oklahoma, and the DWD observatory at Lindenberg, Germany. The results show that AROME significantly underestimates the variability of vertical velocity at cloud base compared to observations at their nominal resolution; the standard deviation of vertical velocity in the model is typically 4–8 times smaller than observed, and even more during the winter at Lindenberg. Averaging the observations to the horizontal scale corresponding to the physical grid spacing of AROME (2.5 km) explains 70–80% of the underestimation by the model. Further averaging of the observations in the horizontal is required to match the model values for the standard deviation in vertical velocity. This indicates an effective horizontal resolution of at least 10 km for the AROME model in the presented case. Adding a TKE term to the resolved grid-point vertical velocity can compensate for the underestimation, but only for altitudes below approximately the boundary layer top height. The results illustrate the need for careful consideration of the scales the model is able to accurately resolve, as well as for special treatment of sub-grid scale variability of vertical velocities in kilometer-scale atmospheric models, if processes such as aerosol-cloud interactions are to be included in the future.

  13. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  14. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
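
    The baseline that geographic gossip improves upon is easy to demonstrate: in standard pairwise gossip, a random node repeatedly averages with a neighbour, and every value converges to the global mean. The toy below runs this on a ring; geographic gossip would instead route each exchange to a distant node, cutting the number of transmissions. All sizes and counts are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 50
        x = rng.uniform(0.0, 10.0, n)        # initial sensor readings
        target = x.mean()

        for _ in range(20000):               # randomized pairwise gossip on a ring
            i = rng.integers(n)
            j = (i + rng.choice([-1, 1])) % n
            x[i] = x[j] = 0.5 * (x[i] + x[j])

        print(abs(x - target).max())         # worst-case deviation from the true average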

  15. Noise properties of analog correlators with exponentially weighted average

    Science.gov (United States)

    Dmowski, K.; Pióro, Z.

    1987-11-01

    Detailed calculations of the root-mean-square value of the output noise of correlators with an exponentially weighted average and an arbitrarily chosen weighting function for three commonly used mathematical models of white noise are derived. A comparative analysis has been made of the noise properties of correlators with exponentially weighted average and true average, based on two figures of merit: the output signal-to-noise ratio and the signal-to-noise improvement ratio. An analysis of the noise properties of a boxcar averager for any gate width is performed. Expressions for the output signal-to-noise ratio and the signal-to-noise improvement ratio of a boxcar averager are derived.
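
    The digital analogue of the comparison is sketched below: a repetitive signal buried in white noise is recovered both by a true (equal-weight) average over sweeps and by the exponentially weighted recursion y_k = (1 - a) * y_{k-1} + a * x_k. The signal, noise level and weighting constant are invented, and the SNR estimate is a generic one, not the paper's figures of merit.

        import numpy as np

        rng = np.random.default_rng(4)
        n_sweeps, n_points = 500, 256
        signal = np.sin(2 * np.pi * np.arange(n_points) / 64)
        sweeps = signal + rng.normal(0, 2.0, (n_sweeps, n_points))

        true_avg = sweeps.mean(axis=0)            # true average: equal weights

        a, ewa = 0.02, np.zeros(n_points)         # exponentially weighted average
        for sweep in sweeps:
            ewa = (1 - a) * ewa + a * sweep

        def snr_db(estimate):                     # crude output signal-to-noise ratio
            return 10 * np.log10(signal.var() / (estimate - signal).var())

        print(f"true average: {snr_db(true_avg):.1f} dB, EWA: {snr_db(ewa):.1f} dB")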

  16. Evaluation of edge detectors using average risk

    NARCIS (Netherlands)

    Spreeuwers, Lieuwe Jan; van der Heijden, Ferdinand

    1992-01-01

    A new method for evaluation of edge detectors, based on the average risk of a decision, is discussed. The average risk is a performance measure well-known in Bayesian decision theory. Since edge detection can be regarded as a compound decision making process, the performance of an edge detector is

  17. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  18. Averaging Einstein's equations : The linearized case

    NARCIS (Netherlands)

    Stoeger, William R.; Helmi, Amina; Torres, Diego F.

    We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW

  19. Water balance model for Kings Creek

    Science.gov (United States)

    Wood, Eric F.

    1990-01-01

    Particular attention is given to the spatial variability that affects the representation of water balance at the catchment scale in the context of macroscale water-balance modeling. Remotely sensed data are employed for parameterization, and the resulting model is developed so that subgrid spatial variability is preserved and therefore influences the grid-scale fluxes of the model. The model permits the quantitative evaluation of the surface-atmospheric interactions related to the large-scale hydrologic water balance.

  20. Multipartite analysis of average-subsystem entropies

    Science.gov (United States)

    Alonso-Serrano, Ana; Visser, Matt

    2017-11-01

    So-called average subsystem entropies are defined by first taking partial traces over some pure state to define density matrices, then calculating the subsystem entropies, and finally averaging over the pure states to define the average subsystem entropies. These quantities are standard tools in quantum information theory, most typically applied in bipartite systems. We shall first present some extensions to the usual bipartite analysis (including a calculation of the average tangle and a bound on the average concurrence), follow this with some useful results for tripartite systems, and finally extend the discussion to arbitrary multipartite systems. A particularly nice feature of tripartite and multipartite analyses is that this framework allows one to introduce an "environment" to which small subsystems can couple.
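
    For the bipartite case the construction is short enough to verify numerically: draw Haar-random pure states on C^m tensor C^n, trace out the second factor, and average the von Neumann entropy of the first. The Monte Carlo estimate below can be compared with Page's approximation ln m - m/(2n) for m <= n; dimensions and sample count are arbitrary choices.

        import numpy as np

        def average_subsystem_entropy(m, n, samples=2000, seed=5):
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(samples):
                # Haar-random pure state as a normalized complex Gaussian m x n matrix.
                psi = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
                psi /= np.linalg.norm(psi)
                rho_a = psi @ psi.conj().T        # reduced density matrix of subsystem A
                p = np.linalg.eigvalsh(rho_a)
                p = p[p > 1e-12]
                total += -np.sum(p * np.log(p))   # von Neumann entropy
            return total / samples

        m, n = 4, 16
        print(average_subsystem_entropy(m, n))    # Monte Carlo average subsystem entropy
        print(np.log(m) - m / (2 * n))            # Page's approximation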

  1. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.

  2. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

    The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).

  3. A Priori Direct Numerical Simulation Modelling of the Curvature Term of the Flame Surface Density Transport Equation for Nonunity Lewis Number Flames in the Context of Large Eddy Simulations

    Directory of Open Access Journals (Sweden)

    Mohit Katragadda

    2012-01-01

    A Direct Numerical Simulation (DNS) database of freely propagating statistically planar turbulent premixed flames with Lewis numbers Le ranging from 0.34 to 1.2 has been used to analyse the statistical behaviour of the curvature term of the generalised Flame Surface Density (FSD) transport equation, in the context of Large Eddy Simulation (LES). The Lewis number is shown to have significant influence on the statistical behaviour of the resolved and sub-grid parts of the FSD curvature term. It has been found that the existing models for the sub-grid curvature term Csg do not capture the qualitative behaviour of this term extracted from the DNS database for flames with Le << 1. The existing models of Csg predict only negative values, whereas the sub-grid curvature term is shown to assume positive values within the flame brush for the Le = 0.34 and 0.6 flames. Here the sub-grid curvature terms arising from the combined reaction and normal diffusion component and the tangential diffusion component of displacement speed are individually modelled, and the new model of the sub-grid curvature term has been found to capture Csg extracted from the DNS data satisfactorily for all the Lewis number flames considered here, over a wide range of filter widths.

  4. On Average Risk-sensitive Markov Control Processes

    OpenAIRE

    Shen, Yun; Obermayer, Klaus; Stannat, Wilhelm

    2014-01-01

    We introduce the Lyapunov approach to optimal control problems of average risk-sensitive Markov control processes with general risk maps. Motivated by applications in particular to behavioral economics, we consider possibly non-convex risk maps, modeling behavior with mixed risk preference. We introduce classical objective functions to the risk-sensitive setting and we are in particular interested in optimizing the average risk in the infinite-time horizon for Markov Control Processes on gene...

  5. Averaging of Legendrian submanifolds of contact manifolds

    OpenAIRE

    Zambon, Marco

    2004-01-01

    We give a procedure to "average" canonically $C^1$-close Legendrian submanifolds of contact manifolds. As a corollary we obtain that, whenever a compact group action leaves a Legendrian submanifold almost invariant, there is an invariant Legendrian submanifold nearby.

  6. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...

  7. Quetelet, the average man and medical knowledge

    National Research Council Canada - National Science Library

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population...

  8. Site Averaged AMS Data: 1988 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the Portable Automatic Meteorological Station (AMS) data acquired during the 1987-1989 FIFE experiment. Data are in 30-minute time intervals...

  9. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  10. Average Vegetation Growth 1993 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  11. Average Vegetation Growth 2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  12. Average Vegetation Growth 1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1999 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  13. Average Vegetation Growth 2005 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2005 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  14. Average Vegetation Growth 1996 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1996 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  15. Average Vegetation Growth 1992 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  16. Average Vegetation Growth 2000 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  17. Average Vegetation Growth 2003 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2003 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  18. Average Vegetation Growth 2004 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2004 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  19. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  20. Appeals Council Requests - Average Processing Time

    Data.gov (United States)

    Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...

  1. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  2. Microprocessor-based boxcar signal averager

    Science.gov (United States)

    Bano, S. S.; Reddy, P. N.; Reddy, B. P. N.; Eswara Reddy, N. C.

    1987-10-01

    A boxcar signal averager based on the Intel 8085AH 8-bit microprocessor, developed for processing free-induction decay (FID) signals from a pulsed nuclear-magnetic-resonance (NMR) spectrometer, is described. The boxcar signal averager works either in single-point mode or in scan mode. In addition to the software developed, the constructional features, circuit details, and operation of the boxcar are discussed in detail.
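
    A digital sketch of the scan-mode operation: a gate steps across successive repetitions of a noisy FID and the gated samples are averaged over all sweeps, trading acquisition time for signal-to-noise ratio. Waveform, noise level and gate width are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(9)
        t = np.linspace(0.0, 1.0, 1000)
        fid = np.exp(-t / 0.2) * np.cos(2 * np.pi * 25 * t)   # synthetic FID waveform

        n_sweeps, gate = 200, 10          # repetitions and gate width (in samples)
        sweeps = fid + rng.normal(0.0, 0.5, (n_sweeps, t.size))

        # Scan mode: for each gate position, average the gated samples over all sweeps.
        n_gates = t.size // gate
        recovered = np.array([sweeps[:, g * gate:(g + 1) * gate].mean()
                              for g in range(n_gates)])
        print(recovered[:5])              # low-noise, gate-resolution copy of the FID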

  3. The concept of the average eye

    Directory of Open Access Journals (Sweden)

    R.D. van Gool

    2005-01-01

    For most quantitative studies one needs to calculate an average. In the case of refraction an average is readily computed as the arithmetic average of dioptric power matrices. Refraction, however, is only one aspect of the first-order optical character of an eye. The question is: how does one determine an average that represents the average optical character of a set of eyes completely to first order? The exponential-mean-log transference has been proposed recently but it is not without its difficulties. There are four matrices, naturally related to the transference and called the characteristics or characteristic matrices, whose mathematical features suggest that they may provide alternative solutions to the problem of the average eye. Accordingly the purpose of this paper is to propose averages based on these characteristics, to examine their nature and to calculate and compare them in the case of a particular sample of 30 eyes. The eyes may be stigmatic or astigmatic and component elements may be centred or decentred. None turns out to be a perfect average. One of the four averages (that based on one of the two mixed characteristics) is probably of little or no use in the context of eyes. The other three, particularly the point-characteristic average, seem to be potentially useful.
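
    The exponential-mean-log transference mentioned above can be sketched with scipy's matrix logarithm and exponential: take the matrix log of each transference, average, and exponentiate. The 2x2 paraxial matrices below are toy reduced systems (a thin lens followed by a gap), not real eyes, which require 4x4 or larger transferences; all powers and distances are invented.

        import numpy as np
        from scipy.linalg import expm, logm

        def thin_lens(F):                 # dioptric power F [D]
            return np.array([[1.0, 0.0], [-F, 1.0]])

        def gap(z):                       # reduced distance z [m]
            return np.array([[1.0, z], [0.0, 1.0]])

        eyes = [gap(0.022) @ thin_lens(60.0 + dF) for dF in (-2.0, 0.0, 1.5)]

        # Exponential-mean-log average: expm of the arithmetic mean of matrix logs.
        T_avg = expm(sum(logm(T) for T in eyes) / len(eyes))
        print(np.real_if_close(np.round(T_avg, 4)))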

  4. LTB universes as alternatives to dark energy: does positive averaged acceleration imply positive cosmic acceleration?

    OpenAIRE

    Romano, Antonio Enea

    2006-01-01

    We show that positive averaged acceleration obtained in LTB models through spatial averaging can require integration over a region beyond the event horizon of the central observer. We provide an example of a LTB model with positive averaged acceleration in which the luminosity distance does not contain information about the entire spatially averaged region, making the averaged acceleration unobservable. Since the cosmic acceleration is obtained from fitting the observed luminosity distance to...

  5. Fleet average NO{sub x} emission performance of 2004 model year light-duty vehicles, light-duty trucks and medium-duty passenger vehicles[In relation to the On-Road Vehicle and Engine Emission Regulations under the Canadian Environmental Protection Act, 1999

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-05-15

    The On-Road Vehicle and Engine Emission Regulations came into effect on January 1, 2004. The regulations introduced more stringent national emission standards for on-road vehicles and engines, and also required that companies submit reports containing information concerning their fleets. This report presents a summary of the regulatory requirements relating to fleet average NO{sub x} emissions for light-duty vehicles, light-duty trucks, and medium-duty passenger vehicles under the new regulations. The effectiveness of the Canadian fleet average NO{sub x} emission program at achieving its environmental performance objectives is also evaluated. A summary of the fleet average NO{sub x} emission performance of individual companies is presented, as well as the overall Canadian fleet average for the 2004 model year, based on data submitted by companies in their end-of-model-year reports. A total of 21 companies submitted reports covering 2004 model year vehicles in 10 test groups, comprising 1,350,719 vehicles of the 2004 model year manufactured or imported for sale in Canada. The average NO{sub x} value for the entire Canadian LDV/LDT fleet was 0.2016463 grams per mile. The average NO{sub x} value for the entire Canadian HLDT/MDPV fleet was 0.321976 grams per mile. It was concluded that the NO{sub x} values for both fleets were consistent with the environmental performance objectives of the regulations for the 2004 model year. 9 tabs.

  6. MODFLOW-2005 and MODPATH6 models used to delineate areas contributing groundwater to selected surface receiving waters for long-term average hydrologic stress conditions from 1968 to 1983, Long Island, New York

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — A regional groundwater-flow model and particle-tracking program were used to delineate areas contributing groundwater to coastal and freshwater bodies and to...

  7. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
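
    The central observable is the time averaged MSD of a single trajectory, delta^2(Delta) = <[x(t + Delta) - x(t)]^2>, with the average running along the time series. The sketch below applies it to the log price of a geometric Brownian motion, the process underlying the Black-Scholes-Merton model; drift and volatility values are invented.

        import numpy as np

        def time_averaged_msd(x, lag):
            return np.mean((x[lag:] - x[:-lag]) ** 2)

        rng = np.random.default_rng(6)
        T, dt, mu, sigma = 2500, 1.0, 5e-4, 0.01   # steps, drift, volatility (toy)
        # Log price of a geometric Brownian motion.
        log_price = np.cumsum((mu - sigma**2 / 2) * dt
                              + sigma * np.sqrt(dt) * rng.standard_normal(T))

        for lag in (1, 5, 20, 50):
            print(lag, round(time_averaged_msd(log_price, lag), 5))  # grows ~ sigma^2 * lag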

  8. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.

  9. Backus and Wyllie Averages for Seismic Attenuation

    Science.gov (United States)

    Qadrouh, Ayman N.; Carcione, José M.; Ba, Jing; Gei, Davide; Salim, Ahmed M.

    2017-09-01

    Backus and Wyllie equations are used to obtain average seismic velocities at zero and infinite frequencies, respectively. Here, these equations are generalized to obtain averages of the seismic quality factor (inversely proportional to attenuation). The results indicate that the Wyllie velocity is higher than the corresponding Backus quantity, as expected, since the ray velocity is a high-frequency limit. On the other hand, the Wyllie quality factor is higher than the Backus one, following the velocity trend, i.e., the higher the velocity (the stiffer the medium), the higher the attenuation. Since the quality factor can be related to properties such as porosity, permeability, and fluid viscosity, these averages can be useful for evaluating reservoir properties.
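
    For velocities the two classical averages are a few lines of numpy: Wyllie is the thickness-weighted harmonic mean of layer velocities, while the (acoustic, normal-incidence) Backus average takes the harmonic mean of the P-wave moduli and the arithmetic mean of density. Extending both to the quality factor, as in the paper, would average 1/Q analogously; the layer values below are invented.

        import numpy as np

        f   = np.array([0.5, 0.3, 0.2])            # layer thickness fractions
        v   = np.array([2000.0, 3500.0, 4500.0])   # layer velocities [m/s]
        rho = np.array([2100.0, 2400.0, 2600.0])   # layer densities [kg/m^3]

        v_wyllie = 1.0 / np.sum(f / v)              # ray (high-frequency) average

        M = rho * v**2                              # P-wave moduli
        v_backus = np.sqrt((1.0 / np.sum(f / M)) / np.sum(f * rho))

        print(f"Wyllie: {v_wyllie:.0f} m/s, Backus: {v_backus:.0f} m/s")  # Wyllie > Backus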

  10. Matrix averages relating to Ginibre ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Forrester, Peter J [Department of Mathematics and Statistics, University of Melbourne, Victoria 3010 (Australia); Rains, Eric M [Department of Mathematics, California Institute of Technology, Pasadena, CA 91125 (United States)], E-mail: p.forrester@ms.unimelb.edu.au

    2009-09-25

    The theory of zonal polynomials is used to compute the average of a Schur polynomial of argument AX, where A is a fixed matrix and X is from the real Ginibre ensemble. This generalizes a recent result of Sommers and Khoruzhenko (2009 J. Phys. A: Math. Theor. 42 222002), and furthermore allows analogous results to be obtained for the complex and real quaternion Ginibre ensembles. As applications, the positive integer moments of the general variance Ginibre ensembles are computed in terms of generalized hypergeometric functions; these are written in terms of averages over matrices of the same size as the moment to give duality formulas, and the averages of the power sums of the eigenvalues are expressed as finite sums of zonal polynomials.

  11. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
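
    The quantitative core of the study is a trailing moving average; the sketch below uses synthetic annual series (all names and numbers are illustrative stand-ins, not the paper's data).

    ```python
    import numpy as np

    def misery_index(inflation, unemployment):
        """Economic misery index: inflation rate plus unemployment rate."""
        return inflation + unemployment

    def trailing_mean(x, window):
        """Moving average over the previous `window` annual values."""
        return np.convolve(x, np.ones(window) / window, mode="valid")

    rng = np.random.default_rng(3)
    n_years = 80
    econ = misery_index(rng.uniform(0.0, 8.0, n_years), rng.uniform(3.0, 11.0, n_years))
    smoothed = trailing_mean(econ, window=11)  # the lag at which the fit peaks
    # A literary misery index would then be correlated against `smoothed`;
    # here a noisy toy series stands in for it.
    literary = 0.5 * smoothed + rng.normal(0.0, 0.3, smoothed.size)
    print(np.corrcoef(literary, smoothed)[0, 1])
    ```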

  12. When Is the Local Average Treatment Close to the Average? Evidence from Fertility and Labor Supply

    Science.gov (United States)

    Ebenstein, Avraham

    2009-01-01

    The local average treatment effect (LATE) may differ from the average treatment effect (ATE) when those influenced by the instrument are not representative of the overall population. Heterogeneity in treatment effects may imply that parameter estimates from 2SLS are uninformative regarding the average treatment effect, motivating a search for…

  13. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  14. An improved moving average technical trading rule

    Science.gov (United States)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) combines cross-over 'buy' signals with a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance of this modified strategy differ from those of the standard approach, with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting a smaller maximum drawdown and a shorter drawdown duration than the standard strategy.
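
    A minimal 'long only' sketch of the rule's two ingredients, cross-over entry and a trailing-stop exit, under assumed window and stop parameters (the paper's exact dynamic threshold is not reproduced here).

    ```python
    import numpy as np

    def moving_average(x, w):
        return np.convolve(x, np.ones(w) / w, mode="valid")

    def crossover_with_trailing_stop(price, fast=20, slow=50, stop_frac=0.05):
        """Enter when the fast MA crosses above the slow MA ('buy' signal);
        exit when price falls stop_frac below its running peak since entry
        (the trailing-stop element). Returns 0/1 position flags."""
        ma_f = moving_average(price, fast)[slow - fast:]  # align with slow MA
        ma_s = moving_average(price, slow)
        p = price[slow - 1:]
        position, peak = 0, 0.0
        signals = np.zeros(len(p), dtype=int)
        for t in range(1, len(p)):
            if position == 0 and ma_f[t] > ma_s[t] and ma_f[t - 1] <= ma_s[t - 1]:
                position, peak = 1, p[t]                  # cross-over entry
            elif position == 1:
                peak = max(peak, p[t])
                if p[t] < (1.0 - stop_frac) * peak:       # trailing stop hit
                    position = 0
            signals[t] = position
        return signals

    rng = np.random.default_rng(4)
    price = 100.0 * np.exp(np.cumsum(0.0005 + 0.01 * rng.standard_normal(1000)))
    positions = crossover_with_trailing_stop(price)
    ```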

  15. The modulated average structure of mullite.

    Science.gov (United States)

    Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X

    2015-06-01

    Homogeneous and inclusion-free single crystals of 2:1 mullite (Al4.8Si1.2O9.6) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al4.82(3)Si1.18(1)O9.59(5)) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these, the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real

  16. Spherical Averages on Regular and Semiregular Graphs

    OpenAIRE

    Douma, Femke

    2008-01-01

    In 1966, P. Guenther proved the following result: given a continuous function f on a compact surface M of constant curvature -1 and its periodic lift g to the universal covering, the hyperbolic plane, the averages of the lift g over increasing spheres converge to the average of the function f over the surface M. In this article, we prove similar results for functions on the vertices and edges of regular and semiregular graphs, with special emphasis on the convergence rate. We also consid...
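
    On a finite graph the sphere averages in question are easy to compute directly; a small sketch (our own helper, for illustration) that averages a vertex function over the sphere of a given radius about a root vertex via breadth-first search.

    ```python
    from collections import deque

    import numpy as np

    def sphere_average(adj, f, root, radius):
        """Average of the vertex function f over all vertices at graph
        distance exactly `radius` from `root` (the sphere of that radius)."""
        dist, queue = {root: 0}, deque([root])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        sphere = [v for v, d in dist.items() if d == radius]
        return np.mean([f[v] for v in sphere]) if sphere else float("nan")

    # 4-regular example: an n x n torus grid with an arbitrary vertex function
    n = 10
    adj = {(i, j): [((i + 1) % n, j), ((i - 1) % n, j),
                    (i, (j + 1) % n), (i, (j - 1) % n)]
           for i in range(n) for j in range(n)}
    f = {v: (3 * v[0] + 5 * v[1]) % 7 for v in adj}
    print(sphere_average(adj, f, (0, 0), radius=3))
    ```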

  17. Evaluation of an ARPS-based canopy flow modeling system for use in future operational smoke prediction efforts

    Science.gov (United States)

    M. T. Kiefer; S. Zhong; W. E. Heilman; J. J. Charney; X. Bian

    2013-01-01

    Efforts to develop a canopy flow modeling system based on the Advanced Regional Prediction System (ARPS) model are discussed. The standard version of ARPS is modified to account for the effect of drag forces on mean and turbulent flow through a vegetation canopy, via production and sink terms in the momentum and subgrid-scale turbulent kinetic energy (TKE) equations....
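
    The drag modification referred to here has a standard form in canopy-flow models: a momentum sink of -C_d a(z) |U| u_i in each component, with a(z) the leaf area density. A sketch follows, with assumed array shapes and parameter values rather than the ARPS implementation.

    ```python
    import numpy as np

    def canopy_drag_tendency(u, v, w, c_d, lad):
        """Momentum sink from canopy drag, -C_d * a(z) * |U| * u_i, applied
        per velocity component; canopy models add a matching sink term to
        the subgrid-scale TKE equation."""
        speed = np.sqrt(u**2 + v**2 + w**2)
        return (-c_d * lad * speed * u,
                -c_d * lad * speed * v,
                -c_d * lad * speed * w)

    # Toy vertical column with leaf area density peaking inside the canopy
    z = np.linspace(0.0, 20.0, 41)                                 # height, m
    lad = np.where(z < 10.0, 0.3 * np.sin(np.pi * z / 10.0), 0.0)  # m^2 m^-3
    u = np.log1p(z)                                                # toy wind, m/s
    du, dv, dw = canopy_drag_tendency(u, 0.0 * z, 0.0 * z, c_d=0.2, lad=lad)
    ```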

  18. A mathematical model for describing the retinal nerve fiber bundle trajectories in the human eye : Average course, variability, and influence of refraction, optic disc size and optic disc position

    NARCIS (Netherlands)

    Jansonius, Nomdo M.; Schiefer, Julia; Nevalainen, Jukka; Paetzold, Jens; Schiefer, Ulrich

    2012-01-01

    Previously we developed a mathematical model for describing the retinal nerve fiber bundle trajectories in the superior-temporal and inferior-temporal regions of the human retina, based on traced trajectories extracted from fundus photographs. Aims of the current study were to (i) validate the

  19. RF dosimetry: a comparison between power absorption of female and male numerical models from 0.1 to 4 GHz

    Science.gov (United States)

    Sandrini, L.; Vaccari, A.; Malacarne, C.; Cristoforetti, L.; Pontalti, R.

    2004-11-01

    Realistic numerical models of human subjects and their surrounding environment represent the basic points of radiofrequency (RF) electromagnetic dosimetry. This also involves differentiating the human models into men and women, possibly with different body shapes and postures. In this context, the aims of this paper are, firstly, to propose a female dielectric anatomical model (fDAM) and, secondly, to compare the power absorption distributions of a male and a female model from 0.1 to 4 GHz. To realize the fDAM, images were acquired with a magnetic resonance imaging tomograph, and a recent technique that avoids the discrete segmentation of body tissues into different types was used. Simulations have been performed with the FDTD method by using a novel filtering-based subgridding algorithm. The latter is applied here for the first time to dosimetry, allowing an abrupt mesh refinement by a factor of up to 7. The results show that the whole-body-averaged specific absorption rate (WBA-SAR) of the female model is higher than that of the male counterpart, mainly because of a thicker subcutaneous fat layer. In contrast, the maximum averaged SAR over 1 g (1gA-SAR) and 10 g (10gA-SAR) does not depend on gender, because it occurs in regions where no subcutaneous fat layer is present.
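
    For reference, the averaged dose quantities compared in the study are straightforward to evaluate once the FDTD fields are known; a sketch of the pointwise and whole-body-averaged SAR on a voxel grid (all array names, shapes and values are assumptions).

    ```python
    import numpy as np

    def local_sar(sigma, e_rms, rho):
        """Pointwise SAR = sigma * |E_rms|**2 / rho, in W/kg."""
        return sigma * e_rms**2 / rho

    def whole_body_averaged_sar(sigma, e_rms, rho, voxel_volume):
        """WBA-SAR: total absorbed power divided by total body mass."""
        power = np.sum(sigma * e_rms**2) * voxel_volume  # W
        mass = np.sum(rho) * voxel_volume                # kg
        return power / mass

    # Toy voxel grid standing in for a dielectric anatomical model
    rng = np.random.default_rng(5)
    shape, voxel_volume = (20, 20, 50), (2e-3) ** 3      # 2 mm voxels, m^3
    sigma = rng.uniform(0.1, 2.0, shape)                 # conductivity, S/m
    rho = rng.uniform(900.0, 1100.0, shape)              # density, kg/m^3
    e_rms = rng.uniform(0.0, 50.0, shape)                # field magnitude, V/m
    print(whole_body_averaged_sar(sigma, e_rms, rho, voxel_volume))
    ```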

  20. Optimizing CO2 observing networks in the presence of model error: results from TransCom 3

    Directory of Open Access Journals (Sweden)

    P. J. Rayner

    2004-01-01

    We use a genetic algorithm to construct optimal observing networks of atmospheric concentration for inverse determination of net sources. Optimal networks are those that produce a minimum in average posterior uncertainty plus a term representing the divergence among source estimates for different transport models. The addition of this last term modifies the choice of observing sites, leading to larger networks than would be chosen under the traditional estimated variance metric. Model-model differences behave like sub-grid heterogeneity, and optimal networks try to average over some of this. The optimization does not, however, necessarily reject sites that are apparently difficult to model. Although the results are so conditioned on the experimental set-up that the specific networks chosen are unlikely to be the best choices in the real world, the counter-intuitive behaviour of the optimization suggests the model error contribution should be taken into account when designing observing networks. Finally, we compare the flux and total uncertainty estimates from the optimal network with those from the TransCom 3 control case. The TransCom 3 control case performs well under the chosen uncertainty metric, and the flux estimates are close to those from the optimal case. Thus the TransCom 3 findings would have been similar if minimizing the total uncertainty had guided the network choice.
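
    The search machinery itself is a plain genetic algorithm over candidate site sets; below is a skeleton with a toy objective standing in for the average-posterior-uncertainty-plus-model-divergence cost (every name and number is an assumption, not the paper's set-up).

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N_CAND, N_SITES, POP, GENS = 30, 8, 40, 200

    def cost(net):
        """Toy stand-in for the real objective (average posterior uncertainty
        plus a penalty for divergence among transport-model source estimates)."""
        idx = np.flatnonzero(net)
        return float(np.sum((idx - N_CAND / 2.0) ** 2))

    def random_net():
        net = np.zeros(N_CAND, dtype=bool)
        net[rng.choice(N_CAND, N_SITES, replace=False)] = True
        return net

    def mutate(net):
        """Swap one selected site for one unselected candidate site."""
        child = net.copy()
        child[rng.choice(np.flatnonzero(child))] = False
        child[rng.choice(np.flatnonzero(~child))] = True
        return child

    population = [random_net() for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=cost)
        survivors = population[: POP // 2]                  # elitist selection
        population = survivors + [mutate(s) for s in survivors]
    best = min(population, key=cost)
    ```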