Sample records for sub-grid scale (SGS)

  1. One-equation sub-grid scale (SGS) modelling for Euler-Euler large eddy simulation (EELES) of dispersed bubbly flow

    NARCIS (Netherlands)

    Niceno, B.; Dhotre, M.T.; Deen, N.G.


    In this work, we have presented a one-equation model for the sub-grid scale (SGS) kinetic energy and applied it to an Euler-Euler large eddy simulation (EELES) of a bubble column reactor. The one-equation model for SGS kinetic energy shows improved predictions over the state-of-the-art dynamic …
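For orientation, a one-equation SGS model of this type solves a transport equation for the SGS kinetic energy and forms an eddy viscosity from it. A generic single-phase sketch (the constants C_k and C_ε and the exact production term vary between formulations and are assumptions here, not the specific closure of this paper) is:

```latex
\frac{\partial k_{\mathrm{sgs}}}{\partial t}
+ \bar{u}_j \frac{\partial k_{\mathrm{sgs}}}{\partial x_j}
= 2\,\nu_t\,\bar{S}_{ij}\bar{S}_{ij}
- C_\varepsilon \frac{k_{\mathrm{sgs}}^{3/2}}{\Delta}
+ \frac{\partial}{\partial x_j}\!\left[(\nu + \nu_t)\,
  \frac{\partial k_{\mathrm{sgs}}}{\partial x_j}\right],
\qquad
\nu_t = C_k\,\Delta\,\sqrt{k_{\mathrm{sgs}}}
```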

  2. Sub-Grid Scale Plume Modeling

    Directory of Open Access Journals (Sweden)

    Greg Yarwood


    Multi-pollutant chemical transport models (CTMs) are being routinely used to predict the impacts of emission controls on the concentrations and deposition of primary and secondary pollutants. While these models have a fairly comprehensive treatment of the governing atmospheric processes, they are unable to correctly represent processes that occur at very fine scales, such as the near-source transport and chemistry of emissions from elevated point sources, because of their relatively coarse horizontal resolution. Several different approaches have been used to address this limitation, such as using fine grids, adaptive grids, hybrid modeling, or an embedded sub-grid scale plume model, i.e., plume-in-grid (PinG) modeling. In this paper, we first discuss the relative merits of these various approaches used to resolve sub-grid scale effects in grid models, and then focus on PinG modeling, which has been very effective in addressing the problems listed above. We start with a history and review of PinG modeling from its initial applications for ozone modeling in the Urban Airshed Model (UAM) in the early 1980s using a relatively simple plume model, to more sophisticated and state-of-the-science plume models that include a full treatment of gas-phase, aerosol, and cloud chemistry, embedded in contemporary models such as CMAQ, CAMx, and WRF-Chem. We present examples of some typical results from PinG modeling for a variety of applications, discuss the implications of PinG on model predictions of source attribution, and discuss possible future developments and applications for PinG modeling.

  3. Effect of reactions in small eddies on biomass gasification with eddy dissipation concept - Sub-grid scale reaction model. (United States)

    Chen, Juhui; Yin, Weijie; Wang, Shuai; Meng, Cheng; Li, Jiuru; Qin, Bai; Yu, Guangbin


    A large-eddy simulation (LES) approach is used for gas turbulence, and an eddy dissipation concept (EDC)-sub-grid scale (SGS) reaction model is employed for reactions in small eddies. The simulated gas molar fractions are in better agreement with experimental data when the EDC-SGS reaction model is used. The effect of reactions in small eddies on biomass gasification is analyzed in detail with the EDC-SGS reaction model. The distributions of the SGS reaction rates, which represent the reactions in small eddies, are analyzed together with particle concentration and temperature. The distributions of SGS reaction rates follow a trend similar to that of the total reaction rates, and their values account for about 15% of the total reaction rates. The heterogeneous reaction rates with the EDC-SGS reaction model are also improved during the biomass gasification process in the bubbling fluidized bed.

  4. Two-fluid sub-grid-scale viscosity in nonlinear simulation of ballooning modes in a heliotron device (United States)

    Miura, H.; Hamba, F.; Ito, A.


    A large eddy simulation (LES) approach is introduced to enable the study of the nonlinear growth of ballooning modes in a heliotron-type device, by solving fully 3D two-fluid magnetohydrodynamic (MHD) equations numerically over a wide range of parameter space while keeping computational costs as low as possible. A model has been developed for LES that substitutes the influence of scales smaller than the grid size, the sub-grid scale (SGS), on the scales larger than it, the grid scale (GS). The LESs of two-fluid MHD equations with SGS models have successfully reproduced the growth of the ballooning modes in the GS and their nonlinear saturation. The numerical results show the importance of SGS effects on the GS components, i.e., the effects of turbulent fluctuations at small scales on low-wavenumber unstable modes, over the course of the nonlinear saturation process. The results also show the usefulness of the LES approach in studying instability in a heliotron device. A parameter survey over many SGS model coefficients shows that turbulent small-scale components in experiments can contribute to keeping the plasma core pressure from totally collapsing.

  5. Sub-grid-scale description of turbulent magnetic reconnection in magnetohydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Widmer, F. (Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, 37077 Göttingen, Germany; Institut für Astrophysik, Georg-August-Universität, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany); Büchner, J. (Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, 37077 Göttingen, Germany); Yokoi, N. (Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro, Tokyo 153-8505, Japan)


    Magnetic reconnection requires, at least locally, a non-ideal plasma response. In collisionless space and astrophysical plasmas, turbulence could transport energy from large to small scales where binary particle collisions are rare. We have investigated the influence of small-scale magnetohydrodynamic (MHD) turbulence on the reconnection rate in the framework of a compressible MHD approach including sub-grid-scale (SGS) turbulence. To this end, we considered Harris-type and force-free current sheets with finite guide magnetic fields directed out of the reconnection plane. The goal is to find out whether MHD turbulence that is unresolved by conventional simulations can enhance the reconnection process in high-Reynolds-number astrophysical plasmas. Together with the MHD equations, we solve evolution equations for the SGS energy and cross-helicity due to turbulence according to a Reynolds-averaged turbulence model. The SGS turbulence is self-generated and self-sustained through the inhomogeneities of the mean fields. In this way, the feedback of the unresolved turbulence into the MHD reconnection process is taken into account. It is shown that the turbulence controls the regimes of reconnection through its characteristic timescale τ_t. The dependence on resistivity was investigated for large-Reynolds-number plasmas for Harris-type as well as force-free current sheets with guide field. We found that magnetic reconnection depends on the relation between the molecular and the apparent effective turbulent resistivity. The turbulence timescale τ_t decides whether fast reconnection takes place or whether the stored energy is just diffused away into small-scale turbulence. If the amount of energy transferred from large to small scales is enhanced, fast reconnection can take place. Energy spectra allowed us to characterize the different regimes of reconnection. It was found that reconnection is even faster for larger Reynolds numbers controlled by the molecular …

  6. Evapotranspiration and cloud variability at regional sub-grid scales (United States)

    Vila-Guerau de Arellano, Jordi; Sikma, Martin; Pedruzo-Bagazgoitia, Xabier; van Heerwaarden, Chiel; Hartogensis, Oscar; Ouwersloot, Huug


    In regional and global models, uncertainties arise due to our incomplete understanding of the coupling between biochemical and physical processes. Representing their impact depends on our ability to calculate these processes using physically sound parameterizations, since they are unresolved at scales smaller than the grid size. More specifically, over land, the coupling between evapotranspiration, turbulent transport of heat and moisture, and clouds lacks a combined representation that takes these sub-grid-scale interactions into account. Our approach is based on understanding how radiation, surface exchange, turbulent transport and moist convection interact from the leaf scale to the cloud scale. We therefore place special emphasis on plant stomatal aperture as the main regulator of CO2 assimilation and water transpiration, a key source of moisture to the atmosphere. Plant functionality is critically modulated by interactions with atmospheric conditions occurring at very short spatiotemporal scales, such as cloud radiation perturbations or water vapour turbulent fluctuations. By explicitly resolving these processes, the large-eddy simulation (LES) technique enables us to characterize and better understand the interactions between canopies and the local atmosphere. This includes the adaptation time of vegetation to rapid changes in atmospheric conditions driven by turbulence or the presence of cumulus clouds. Our LES experiments are based on explicitly coupling the diurnal atmospheric dynamics to a plant physiology model. Our general hypothesis is that different partitioning of direct and diffuse radiation leads to different responses of the vegetation. As a result, there are changes in the water use efficiencies and shifts in the partitioning of sensible and latent heat fluxes in the presence of clouds. Our presentation is as follows. First, we discuss the ability of LES to reproduce the surface energy balance including photosynthesis and CO2 soil …

  7. Assessment of sub-grid scale dispersion closure with regularized deconvolution method in a particle-laden turbulent jet (United States)

    Wang, Qing; Zhao, Xinyu; Ihme, Matthias


    Particle-laden turbulent flows are important in numerous industrial applications, such as spray combustion engines and solar energy collectors. It is of interest to study this type of flow numerically, especially using large-eddy simulation (LES). However, capturing the turbulence-particle interaction in LES remains challenging due to the insufficient representation of the effect of sub-grid scale (SGS) dispersion. In the present work, a closure technique for the SGS dispersion using the regularized deconvolution method (RDM) is assessed. RDM was originally proposed as the closure for the SGS dispersion in a counterflow spray studied numerically using a finite difference method on a structured mesh, where a presumed form of the LES filter is used. In the present study, this technique has been extended to a finite volume method with an unstructured mesh, where no presumption on the filter form is required. The method is applied to a series of particle-laden turbulent jets. Parametric analyses of the model performance are conducted for flows with different Stokes numbers and Reynolds numbers. The results from LES are compared against experiments and direct numerical simulations (DNS).
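As a sketch of the deconvolution idea behind RDM (this is a generic van Cittert-type approximate inversion of a simple 1-D filter, not the specific regularized scheme of the paper; the filter, grid and iteration count are illustrative):

```python
import numpy as np

def les_filter(u):
    """Simple periodic 1-2-1 smoothing filter (positive transfer function)."""
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def deconvolve(ubar, iters=5):
    """Van Cittert iteration: u* <- u* + (ubar - G u*).
    Each pass adds back the residual, recovering amplitude that the
    filter attenuated in the resolved scales."""
    ustar = ubar.copy()
    for _ in range(iters):
        ustar = ustar + (ubar - les_filter(ustar))
    return ustar
```

In RDM-style closures the regularization amounts to limiting or damping these iterations so that poorly resolved (noisy) scales are not amplified.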

  8. Numerical aspects of drift kinetic turbulence: Ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    KAUST Repository

    Samtaney, Ravi


    We present a numerical method based on an Eulerian approach to solve the Vlasov-Poisson system for 4D drift kinetic turbulence. Our numerical approach uses a conservative formulation with high-order (fourth and higher) evaluation of the numerical fluxes coupled with a fourth-order accurate Poisson solver. The fluxes are computed using a low-dissipation high-order upwind differencing method or a tuned high-resolution finite difference method with no numerical dissipation. Numerical results are presented for the case of imposed ion temperature and density gradients. Different forms of controlled regularization to achieve a well-posed system are used to obtain convergent resolved simulations. The regularization of the equations is achieved by means of a simple collisional model, by inclusion of an ad hoc hyperviscosity or artificial viscosity term, or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism to the Vlasov equation and derive sub-grid-scale (SGS) terms analogous to the Reynolds stress terms in hydrodynamic turbulence. We present a priori quantifications of these SGS terms in resolved simulations of drift-kinetic turbulence by applying a sharp filter.
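The a priori procedure described above can be sketched in a few lines: apply a sharp spectral filter to resolved fields and measure the SGS term as the difference between the filtered product and the product of the filtered fields (the 1-D grid, fields and cut-off below are illustrative, not from the paper):

```python
import numpy as np

def sharp_filter(u, kc):
    """Sharp spectral (cut-off) filter: zero all Fourier modes with |k| > kc."""
    uhat = np.fft.fft(u)
    k = np.fft.fftfreq(len(u), d=1.0 / len(u))  # integer wavenumbers
    uhat[np.abs(k) > kc] = 0.0
    return np.real(np.fft.ifft(uhat))

def sgs_flux(u, v, kc):
    """A priori SGS term: bar(u v) - bar(u) bar(v), the analogue of the
    Reynolds-stress-like terms obtained by filtering a quadratic nonlinearity."""
    return sharp_filter(u * v, kc) - sharp_filter(u, kc) * sharp_filter(v, kc)
```

Because the filter is a sharp projection, the SGS term is exactly the part of the nonlinear product generated by the discarded modes.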

  9. Stochastic fields method for sub-grid scale emission heterogeneity in mesoscale atmospheric dispersion models


    M. Cassiani; Vinuesa, J.F.; Galmarini, S.; Denby, B.


    The stochastic fields method for turbulent reacting flows has been applied to the issue of sub-grid scale emission heterogeneity in a mesoscale model. This method is a solution technique for the probability density function (PDF) transport equation and can be seen as a straightforward extension of currently used mesoscale dispersion models. It has been implemented in an existing mesoscale model and the results are compared with Large-Eddy Simulation (LES) data devised to test specifically the...

  10. Stochastic fields method for sub-grid scale emission heterogeneity in mesoscale atmospheric dispersion models

    Directory of Open Access Journals (Sweden)

    M. Cassiani


    The stochastic fields method for turbulent reacting flows has been applied to the issue of sub-grid scale emission heterogeneity in a mesoscale model. This method is a solution technique for the probability density function (PDF) transport equation and can be seen as a straightforward extension of currently used mesoscale dispersion models. It has been implemented in an existing mesoscale model and the results are compared with Large-Eddy Simulation (LES) data devised to test specifically the effect of sub-grid scale emission heterogeneity on boundary layer concentration fluctuations. The sub-grid scale emission variability is assimilated in the model as a PDF of the emissions. The stochastic fields method shows excellent agreement with the LES data without adjustment of the constants used in the mesoscale model. The stochastic fields method is a stochastic solution of the transport equations for the concentration PDF of dispersing scalars; therefore, it possesses the ability to handle chemistry of any complexity without the need to introduce additional closures for the high-order statistics of chemical species. This study shows for the first time the feasibility of applying this method to mesoscale chemical transport models.
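To illustrate the mechanics, here is a zero-dimensional sketch of only the micromixing step of a stochastic fields solver, using the common IEM closure; the advection, diffusion, chemistry and Wiener-noise terms of the full method are omitted, and the timescale and field count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def iem_mixing_step(fields, dt, tau):
    """One IEM micromixing step for an ensemble of stochastic fields in a
    single grid cell: each field relaxes toward the ensemble mean, so the
    mean concentration is preserved while the sub-grid variance decays
    on the mixing timescale tau."""
    mean = fields.mean(axis=0)
    return fields - 0.5 * dt / tau * (fields - mean)

# Eight stochastic fields representing the sub-grid emission PDF in one cell
fields = rng.normal(loc=1.0, scale=0.3, size=8)
stepped = iem_mixing_step(fields, dt=0.1, tau=0.5)
```

Because chemistry acts on each field individually, arbitrarily nonlinear reaction terms require no extra closure, which is the property highlighted in the abstract.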

  11. Sub-Grid-Scale Description of Turbulent Magnetic Reconnection in Magnetohydrodynamics

    CERN Document Server

    Widmer, Fabien; Yokoi, Nobumitsu


    Magnetic reconnection requires, at least locally, a non-ideal plasma response. In collisionless space and astrophysical plasmas, turbulence could provide this non-ideality where binary collisions are too rare. We investigated the influence of turbulence on the reconnection rate in the framework of a single-fluid compressible MHD approach. The goal is to find out whether turbulence that is unresolved (sub-grid) in MHD simulations can enhance the reconnection process in high-Reynolds-number astrophysical plasmas. We solve, simultaneously with the grid-scale MHD equations, evolution equations for the sub-grid turbulent energy and cross helicity according to Yokoi's model (Yokoi, 2013), in which turbulence is self-generated and self-sustained through the inhomogeneities of the mean fields. Simulations of Harris and force-free sheets confirm the results of Higashimori et al. (2013), and new results are obtained about the dependence on resistivity for large Reynolds numbers as well as guide field effects. The amount of energy transferred f...

  12. Improving sub-grid scale accuracy of boundary features in regional finite-difference models (United States)

    Panday, Sorab; Langevin, Christian D.


    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton-Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand-side vector for a symmetric finite-difference Picard implementation, or on the left-hand-side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by selecting only contributing nodes that are part of the finite-difference connectivity. Proof-of-concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.
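The essence of the correction can be shown with a toy flux calculation: the head driving the boundary-feature flux is taken not at the cell node but at a ghost location interpolated toward the adjacent node (all names and numbers below are illustrative, not the paper's exact formulation):

```python
def ghost_node_flux(h_n, h_m, h_b, c_b, alpha):
    """Flux from cell n into a boundary feature (e.g. a river reach).

    h_n, h_m : heads at node n and at the adjacent node m
    h_b      : boundary (river/conduit) head
    c_b      : boundary conductance
    alpha    : ghost-node offset in [0, 1]; 0 recovers the standard
               finite-difference formulation, >0 shifts the evaluation
               point toward node m to match the feature's true location
    """
    h_ghost = h_n + alpha * (h_m - h_n)  # interpolated ghost head
    return c_b * (h_ghost - h_b)
```

With alpha = 0 the flux is the usual c_b (h_n - h_b); a feature halfway between the nodes (alpha = 0.5) instead sees the interpolated head, which is the sub-grid-scale accuracy improvement the abstract describes.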

  13. Near-wall Behavior of a Scale Self-Recognition Mixed SGS Model (United States)

    Kihara, Mizuki; Minamoto, Yuki; Naka, Yoshitsugu; Fukushima, Naoya; Shimura, Masayasu; Tanahashi, Mamoru


    A Scale Self-Recognition Mixed SGS Model was developed in terms of GS-SGS energy transfer in homogeneous isotropic turbulence by Fukushima et al. (2015). In the present research, the near-wall characteristics of the Smagorinsky coefficient CS are investigated in terms of GS-SGS energy transfer by analyzing DNS data of turbulent channel flows at Reτ = 400, 800 and 1270. CS depends on grid anisotropy, and this causes a dependence of CS on Reτ. It is revealed that CS obtained directly from the DNS data is independent of Reτ and depends only on the dimensionless wall distance y+ and the filter-width-to-Kolmogorov-scale ratio corrected by f, i.e. f·Δ/η, when the grid anisotropy is isolated from CS by using the correction function f proposed by Scotti et al. (1993). The contributions of the Leonard, cross and Reynolds terms to the total energy transfer are also independent of Reτ and depend only on y+ and f·Δ/η in the near-wall region. These results suggest that CS can be determined dynamically from f·Δ/η in wall turbulence if η is adequately predicted from grid-scale quantities.
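The anisotropy correction referred to above has a closed form. To the best of my reading of Scotti, Meneveau and Lilly (1993), for grid aspect ratios a1 = Δ1/Δ3 and a2 = Δ2/Δ3 (with Δ3 the largest spacing, so 0 < a1, a2 ≤ 1) it is:

```python
import math

def scotti_correction(a1, a2):
    """Anisotropy correction f(a1, a2) attributed to Scotti et al. (1993).

    The effective filter width is f * (dx * dy * dz)**(1/3), i.e. the
    corrected Delta that appears in the ratio f*Delta/eta above."""
    l1, l2 = math.log(a1), math.log(a2)
    return math.cosh(math.sqrt(4.0 / 27.0 * (l1 * l1 - l1 * l2 + l2 * l2)))
```

f(1, 1) = 1 on an isotropic grid and grows as the cell becomes pancake- or pencil-like, which is how the grid-anisotropy effect is isolated from CS.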

  14. Impact of Sub-grid Soil Textural Properties on Simulations of Hydrological Fluxes at the Continental Scale Mississippi River Basin (United States)

    Kumar, R.; Samaniego, L. E.; Livneh, B.


    Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically, large-scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km²) at multiple spatio-temporal resolutions. A set of numerical experiments was conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States, or STATSGO2 (1:250 000), and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the time period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from the two simulations showed marked …

  15. Sub-grid scale models for discontinuous Galerkin methods based on the Mori-Zwanzig formalism (United States)

    Parish, Eric; Duraisamy, Karthik


    The optimal prediction framework of Chorin et al., which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically derived closure models. The M-Z formalism provides a methodology to reformulate a high-dimensional Markovian dynamical system as a lower-dimensional, non-Markovian (non-local) system. In this lower-dimensional system, the effects of the unresolved scales on the resolved scales are non-local and appear as a convolution integral. The non-Markovian system is an exact statement of the original dynamics and is used as a starting point for model development. In this work, we investigate the development of M-Z-based closure models within the context of the Variational Multiscale Method (VMS). The method relies on a decomposition of the solution space into two orthogonal subspaces. The impact of the unresolved subspace on the resolved subspace is shown to be non-local in time and is modeled through the M-Z formalism. The models are applied to hierarchical discontinuous Galerkin discretizations. Commonalities between the M-Z closures and conventional flux schemes are explored. This work was supported in part by AFOSR under the project "LES Modeling of Non-local effects using Statistical Coarse-graining" with Dr. Jean-Luc Cambier as the technical monitor.
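Schematically, the M-Z identity referred to above splits the evolution of the resolved variables into Markovian, memory and noise contributions; with L the Liouville operator and P, Q = I − P the projections onto the resolved and unresolved subspaces (generic notation from common presentations of the formalism, not specific to the VMS setting of this work):

```latex
\frac{\partial}{\partial t}\, e^{tL}\hat{\phi}_0
= \underbrace{e^{tL} P L \hat{\phi}_0}_{\text{Markovian}}
+ \underbrace{\int_0^t e^{(t-s)L}\, P L\, e^{sQL} Q L \hat{\phi}_0 \,\mathrm{d}s}_{\text{memory (convolution)}}
+ \underbrace{e^{tQL} Q L \hat{\phi}_0}_{\text{noise}}
```

The memory integral is the convolution mentioned in the abstract; closure modeling amounts to approximating its kernel.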

  16. Effect of Considering Sub-Grid Scale Uncertainties on the Forecasts of a High-Resolution Limited Area Ensemble Prediction System (United States)

    Kim, SeHyun; Kim, Hyun Mee


    The ensemble prediction system (EPS) is widely used in research and at operational centers because it can represent the uncertainty of the predicted atmospheric state and provide probability information. A high-resolution (so-called "convection-permitting") limited area EPS can represent the convection and turbulence related to precipitation phenomena in more detail, but it is also highly sensitive to small-scale or sub-grid scale processes. Convection and turbulence are represented using physical parameterizations in the model, and model errors occur due to sub-grid scale processes that are not resolved. This study examined the effect of considering sub-grid scale uncertainties using the high-resolution limited area EPS of the Korea Meteorological Administration (KMA). The developed EPS has a horizontal resolution of 3 km and 12 ensemble members. The initial and boundary conditions were provided by the global model. The Random Parameters (RP) scheme was used to represent sub-grid scale uncertainties. EPSs with and without the RP scheme were developed and the results were compared. During the one-month period of July 2013, applying the RP scheme produced a significant difference in the spread of 1.5 m temperature and in the root mean square error and spread of 10 m zonal wind. For precipitation forecasts, precipitation tended to be overestimated relative to the observations when the RP scheme was applied. Moreover, the forecasts became more accurate for heavy precipitation and at longer forecast lead times. For two heavy rainfall cases that occurred during the research period, a higher Equitable Threat Score was observed for heavy precipitation in the system with the RP scheme compared to the one without, consistent with the statistical results for the research period. Therefore, the predictability of heavy precipitation phenomena that affect the Korean Peninsula increases if the RP scheme is used to consider sub-grid scale uncertainties.

  17. Sub-grid scale representation of vegetation in global land surface schemes: implications for estimation of the terrestrial carbon sink

    Directory of Open Access Journals (Sweden)

    J. R. Melton


    Terrestrial ecosystem models commonly represent vegetation in terms of plant functional types (PFTs) and use their vegetation attributes in calculations of the energy and water balance as well as to investigate the terrestrial carbon cycle. Sub-grid scale variability of PFTs in these models is represented using different approaches, with the "composite" and "mosaic" approaches being the two end-members. The impact of these two approaches on the global carbon balance has been investigated with the Canadian Terrestrial Ecosystem Model (CTEM v. 1.2) coupled to the Canadian Land Surface Scheme (CLASS v. 3.6). In the composite (single-tile) approach, the vegetation attributes of different PFTs present in a grid cell are aggregated and used in calculations to determine the resulting physical environmental conditions (soil moisture, soil temperature, etc.) that are common to all PFTs. In the mosaic (multi-tile) approach, energy and water balance calculations are performed separately for each PFT tile and each tile's physical land surface environmental conditions evolve independently. Pre-industrial equilibrium CLASS-CTEM simulations yield global totals of vegetation biomass, net primary productivity, and soil carbon that compare reasonably well with observation-based estimates and differ by less than 5% between the mosaic and composite configurations. However, on a regional scale the two approaches can differ by > 30%, especially in areas with high heterogeneity in land cover. Simulations over the historical period (1959–2005) show different responses to evolving climate and carbon dioxide concentrations from the two approaches. The cumulative global terrestrial carbon sink estimated over the 1959–2005 period (excluding land use change (LUC) effects) differs by around 5% between the two approaches (96.3 and 101.3 Pg C for the mosaic and composite approaches, respectively) and compares well with the observation-based estimate of 82.2 ± 35 Pg C over the same …

  18. Development of Near-Wall SGS Model Based on a Localized Low-Dimensional Approach (United States)

    Juttijudata, Vejapong; Rempfer, Dietmar; Lumley, John


    An alternative way to model sub-grid scale stresses (SGS) in the near-wall region is proposed. The key concept of this approach is that near-wall SGS are computed directly by filtering the instantaneous velocity estimated from the velocity reconstruction of a localized low-dimensional model. A blending function is introduced to blend the near-wall SGS to the core-region SGS calculated from a mixed SGS model. As a preliminary study, a globalized low-dimensional model in a wide channel is considered. The model is constructed by projection of the Navier-Stokes equations onto a three-dimensional vector Proper Orthogonal Decomposition (POD). The results show considerable promise. Further study of a localized low-dimensional model is being conducted. The issues of filtering procedure and velocity/pressure boundary conditions at the interface (y^+=85) of the localized low-dimensional model are seriously considered. Some results from the localized low-dimensional model are discussed.

  19. Sensitivity of boreal forest regional water flux and net primary production simulations to sub-grid-scale land cover complexity (United States)

    Kimball, J. S.; Running, S. W.; Saatchi, S. S.


    We use a general ecosystem process model (BIOME-BGC) coupled with remote sensing information to evaluate the sensitivity of boreal forest regional evapotranspiration (ET) and net primary production (NPP) to land cover spatial scale. Simulations were conducted over a 3 year period (1994-1996) at spatial scales ranging from 30 to 50 km within the BOREAS southern modeling subarea. Simulated fluxes were spatially complex, ranging from 0.1 to 3.9 Mg C ha⁻¹ yr⁻¹ and from 18 to 29 cm yr⁻¹. Biomass and leaf area index heterogeneity predominantly controlled this complexity, while biophysical differences between deciduous and coniferous vegetation were of secondary importance. Spatial aggregation of land cover characteristics resulted in mean monthly NPP estimation bias from 25 to 48% (0.11-0.20 g C m⁻² d⁻¹) and annual estimation errors from 2 to 14% (0.04-0.31 Mg C ha⁻¹ yr⁻¹). Error was reduced at longer time intervals because coarse scale overestimation errors during spring were partially offset by underestimation of fine scale results during summer and winter. ET was relatively insensitive to land cover spatial scale with an average bias of less than 5% (0.04 kg m⁻² d⁻¹). Factors responsible for differences in scaling behavior between ET and NPP included compensating errors for ET calculations and boreal forest spatial and temporal NPP complexity. Careful consideration of landscape spatial and temporal heterogeneity is necessary to identify and mitigate potential error sources when using plot scale information to understand regional scale patterns. Remote sensing data integrated within an ecological process model framework provides an efficient mechanism to evaluate scaling behavior, interpret patterns in coarse resolution data, and identify appropriate scales of operation for various processes.
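The aggregation bias described here is essentially Jensen's inequality: when a flux responds nonlinearly to a land-surface property, the flux computed from the mean of the property differs from the mean of the per-pixel fluxes. A hypothetical saturating response makes this concrete (the curve and LAI values below are invented for illustration, not BIOME-BGC quantities):

```python
import numpy as np

def npp(lai):
    """Hypothetical saturating (concave) NPP response to leaf area index."""
    return 1.0 - np.exp(-0.5 * lai)

lai_fine = np.array([0.5, 1.0, 4.0, 6.0])  # heterogeneous sub-grid pixels

fine_mean = npp(lai_fine).mean()   # flux computed per pixel, then averaged
coarse = npp(lai_fine.mean())      # flux computed from aggregated land cover
bias = coarse - fine_mean          # aggregation bias of the coarse estimate
```

For a concave response the aggregated estimate systematically exceeds the fine-scale mean, which is the kind of scale-dependent NPP bias the study quantifies.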

  20. Modeling lightning-NOx chemistry on a sub-grid scale in a global chemical transport model

    Directory of Open Access Journals (Sweden)

    A. Gressent


    For the first time, a plume-in-grid approach is implemented in a chemical transport model (CTM) to parameterize the effects of the nonlinear reactions occurring within highly concentrated NOx plumes from lightning NOx emissions (LNOx) in the upper troposphere. It is characterized by a set of parameters including the plume lifetime, the effective reaction rate constant related to NOx–O3 chemical interactions, and the fractions of NOx conversion into HNO3 within the plume. Parameter estimates were made using the Dynamical Simple Model of Atmospheric Chemical Complexity (DSMACC) box model, simple plume dispersion simulations, and the 3-D Meso-NH non-hydrostatic mesoscale atmospheric model. In order to assess the impact of the LNOx plume approach on the NOx and O3 distributions on a large scale, simulations for the year 2006 were performed using the GEOS-Chem global model with a horizontal resolution of 2° × 2.5°. The implementation of the LNOx parameterization implies an NOx and O3 decrease on a large scale over regions characterized by strong lightning activity (up to 25 and 8%, respectively, over central Africa in July) and a relative increase downwind of LNOx emissions (up to 18 and 2% for NOx and O3, respectively, in July). The calculated variability in NOx and O3 mixing ratios around the mean value, according to the known uncertainties in the parameter estimates, is at a maximum over continental tropical regions, with ΔNOx [−33.1, +29.7] ppt and ΔO3 [−1.56, +2.16] ppb in January, and ΔNOx [−14.3, +21] ppt and ΔO3 [−1.18, +1.93] ppb in July, mainly depending on the determination of the diffusion properties of the atmosphere and the initial NO mixing ratio injected by lightning. This approach allows us (i) to reproduce a more realistic lightning NOx chemistry leading to better NOx and O3 distributions on the large scale and (ii) to focus on other improvements to reduce remaining uncertainties from processes …

  21. Influence of Sub-grid-Scale Isentropic Transports on McRAS Evaluations using ARM-CART SCM Datasets (United States)

    Sud, Y. C.; Walker, G. K.; Tao, W. K.


    In GCM-physics evaluations with the currently available ARM-CART SCM datasets, McRAS produced a very similar character of near-surface errors in simulated temperature and humidity, typically containing warm and moist biases near the surface and cold and dry biases aloft. We argued that this must have a common cause, presumably rooted in the model physics. Lack of vertical adjustment of horizontal transport was thought to be a plausible source. Clearly, debarring such a freedom would force the incoming air to diffuse into the grid cell, which would naturally bias the surface air to become warm and moist while the upper air becomes cold and dry, a characteristic feature of McRAS biases. Since the errors were significantly larger in the two winter cases, which contain potentially more intense episodes of cold and warm advective transports, this further reaffirmed our argument and provided additional motivation to introduce the corrections. When the horizontal advective transports were suitably modified to allow rising and/or sinking following isentropic pathways of sub-grid scale motions, the outcome was to cool and dry (or warm and moisten) the lower (or upper) levels. Even crude approximations invoking such a correction reduced the temperature and humidity biases considerably. The tests were performed on all the available ARM-CART SCM cases with consistent outcomes. With the isentropic corrections implemented through two different numerical approximations, virtually similar benefits were derived, further confirming the robustness of our inferences. These results suggest the need for an isentropic advective transport adjustment in a GCM due to sub-grid scale motions.

  2. Integrating land management into Earth system models: the importance of land use transitions at sub-grid-scale (United States)

    Pongratz, Julia; Wilkenskjeld, Stiig; Kloster, Silvia; Reick, Christian


    Recent studies indicate that changes in surface climate and carbon fluxes caused by land management (i.e., modifications of vegetation structure without changing the type of land cover) can be as large as those caused by land cover change. Further, such effects may occur on substantial areas: while about one quarter of the land surface has undergone land cover change, another fifty percent are managed. This calls for integration of management processes in Earth system models (ESMs). This integration increases the importance of awareness and agreement on how to diagnose effects of land use in ESMs to avoid additional model spread and thus unnecessary uncertainties in carbon budget estimates. Process understanding of management effects, their model implementation, as well as data availability on management type and extent pose challenges. In this respect, a significant step forward has been made in the framework of the current IPCC's CMIP5 simulations (Coupled Model Intercomparison Project Phase 5): the climate simulations were driven with the same harmonized land use dataset that, unlike most datasets commonly used before, included information on two important types of management: wood harvest and shifting cultivation. However, these new aspects were employed by only part of the CMIP5 models, while most models continued to use the associated land cover maps. Here, we explore the consequences for the carbon cycle of including subgrid-scale land transformations ("gross transitions"), such as shifting cultivation, as an example of the current state of implementation of land management in ESMs. Accounting for gross transitions is expected to increase land use emissions because it represents simultaneous clearing and regrowth of natural vegetation in different parts of the grid cell, reducing standing carbon stocks. This process cannot be captured by prescribing land cover maps ("net transitions"). Using the MPI-ESM we find that ignoring gross transitions

  3. Predicting the impacts of fishing canals on Floodplain Dynamics in Northern Cameroon using a small-scale sub-grid hydraulic model (United States)

    Shastry, A. R.; Durand, M. T.; Fernandez, A.; Hamilton, I.; Kari, S.; Labara, B.; Laborde, S.; Mark, B. G.; Moritz, M.; Neal, J. C.; Phang, S. C.


    Modeling Regime Shifts in the Logone floodplain (MORSL) is an ongoing interdisciplinary project at The Ohio State University studying the ecological, social and hydrological system of the region. This floodplain, located in Northern Cameroon, is part of the Lake Chad basin. Between September and October the floodplain is inundated by overbank flow from the Logone River, which is important for agriculture and fishing. Fishermen build canals to catch fish during the flood's recession to the river by installing fishnets at the intersection of the canals and the river. Fishing canals thus connect the river to natural depressions of the terrain, which act as seasonal ponds during this part of the year. The annual increase in the number of canals affects hydraulics and hence fishing in the region. In this study, the Bara region (1 km2) of the Logone floodplain, through which the Lorome Mazra flows, is modeled using LISFLOOD-FP, a raster-based model with sub-grid parameterizations of canals. The aim of the study is to find out how small-scale, local features like canals and fishnets govern the flow, so that they can be incorporated in a large-scale model of the floodplain at a coarser spatial resolution. We will also study the effect of an increasing number of canals on the flooding pattern. We use a simplified version of the hydraulic system at a grid-cell size of 30 m, using synthetic topography, parameterized fishing canals, and representing fishnets as trash screens. The inflow at Bara is obtained from a separate, lower-resolution (1-km grid-cell) model run, which is forced by daily discharge records from Katoa, located about 25 km to the south of Bara. The model appropriately captures the rise and recession of the annual flood, supporting use of the LISFLOOD-FP approach. Predicted water levels at specific points in the river, the canals, the depression and the floodplain will be compared to field-measured heights of flood recession in Bara in November 2014.
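The sub-grid canal idea can be sketched with a Manning-conveyance toy model: a raster cell keeps conveying flow through a parameterized canal even when the cell-average water depth is zero. All geometry and roughness values below are hypothetical, not LISFLOOD-FP's actual sub-grid formulation or Bara's measured channel dimensions.

```python
import math

def manning_q(area, wetted_perimeter, slope, n):
    """Manning's equation: Q = A * R^(2/3) * sqrt(S) / n, with R = A / P."""
    if area <= 0.0 or wetted_perimeter <= 0.0:
        return 0.0
    r = area / wetted_perimeter
    return area * r ** (2.0 / 3.0) * math.sqrt(slope) / n

def cell_discharge(depth, cell_width, canal_width, canal_depth, slope, n=0.035):
    """Sub-grid cell conveyance: a rectangular canal incised below the
    cell surface plus sheet flow over the rest of the cell width."""
    # The canal carries flow as soon as any water stands in it.
    d_canal = depth + canal_depth                    # water depth inside the canal
    q_canal = manning_q(d_canal * canal_width,
                        canal_width + 2.0 * d_canal, slope, n)
    # The floodplain portion only conveys once the cell itself is wet.
    q_plain = 0.0
    if depth > 0.0:
        q_plain = manning_q(depth * (cell_width - canal_width),
                            cell_width - canal_width, slope, n)
    return q_canal + q_plain
```

With zero cell-average depth the canal still returns a positive discharge, which is exactly the behavior a coarse cell without sub-grid parameterization cannot represent.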

  4. A new downscaling method for sub-grid turbulence modeling

    Directory of Open Access Journals (Sweden)

    L. Rottner


    Full Text Available In this study we explore a new way to model sub-grid turbulence using particle systems. The ability of particle systems to model small-scale turbulence is evaluated using high-resolution numerical simulations. These high-resolution data are averaged to produce a coarse-grid velocity field, which is then used to drive a complete particle-system-based downscaling. Wind fluctuations and turbulent kinetic energy are compared between the particle simulations and the high-resolution simulation. Despite the simplicity of the physical model used to drive the particles, the results show that the particle system is able to represent the average field. It is shown that this system is able to reproduce much finer turbulent structures than the numerical high-resolution simulations. In addition, this study provides an estimate of the effective spatial and temporal resolution of the numerical models. This highlights the need for higher-resolution simulations in order to evaluate the very fine turbulent structures predicted by the particle systems. Finally, a study of the influence of the forcing scale on the particle system is presented.
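As a minimal illustration of the particle-system idea, the sub-grid velocity fluctuation carried by each particle can be modeled as an Ornstein-Uhlenbeck (Langevin) process around the coarse-grid velocity. The abstract does not specify the physical model driving the particles; the formulation and parameter names below are assumptions.

```python
import numpy as np

def particle_velocities(u_grid, tke, t_lagr, dt, n_steps, rng, n_particles=10000):
    """Ornstein-Uhlenbeck sketch of sub-grid velocity fluctuations:
    du' = -(u'/T_L) dt + sqrt(2 sigma^2 / T_L) dW, with sigma^2 = 2k/3
    (isotropy assumption), k = TKE, T_L = Lagrangian time scale."""
    sigma2 = 2.0 * tke / 3.0
    up = rng.normal(0.0, np.sqrt(sigma2), n_particles)  # start in equilibrium
    for _ in range(n_steps):
        up += -(up / t_lagr) * dt \
              + np.sqrt(2.0 * sigma2 * dt / t_lagr) * rng.normal(size=n_particles)
    return u_grid + up   # instantaneous particle velocities
```

By construction the ensemble mean stays at the coarse-grid value while the fluctuation variance relaxes to 2k/3, so the particles add sub-grid wind variability consistent with a prescribed TKE.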

  5. Design and fabrication of SGS plutonium standards

    Energy Technology Data Exchange (ETDEWEB)

    Hsue, S.T.; Simmonds, S.M.; Longmire, V.L.; Long, S.M.


    This paper describes our experience in fabricating four sets of plutonium segmented gamma scanner (SGS) can standards. The fabrication involves careful planning; meticulous execution in weighing the plutonium oxide while minimizing contamination; chemical analyses by three different national laboratories to obtain accurate and independent plutonium concentrations; vertical scanning to ensure mixing of the plutonium and the diluent; and finally the nondestructive verification measurement. By following these steps, we successfully fabricated four sets, totaling 20 SGS can standards. 4 refs., 5 figs., 3 tabs.

  6. Analytical study on the SGS force around an elliptic Burgers vortex (United States)

    Kobayashi, Hiromichi


    The subgrid-scale (SGS) force around an elliptic Burgers vortex is analytically examined. In turbulence there are many vortex tubes whose cross sections are well approximated by ellipses. In this study, the biaxial elliptic Burgers vortex is produced by adding a compressive and extensional background straining flow to the conventional Burgers vortex. By using a filtering operation, we revealed that the energy transfer by the Reynolds stress term of the Bardina model is negatively correlated with that by the true SGS stress term. However, it has recently been reported that a combination of the Bardina Reynolds term and the eddy viscosity model gives good performance even for coarse LES of turbulent channel flows. To understand this, we discuss several SGS forces: by the true SGS stress tensor, by the eddy viscosity model, by the modified Leonard term and by the Bardina Reynolds term. This work was supported by JSPS KAKENHI Grant Number 26420122.
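The scale-similarity (Bardina) construction discussed above can be sketched in one dimension: filter the resolved field once more and form the generalized central moment. The top-hat filter and periodic 1-D fields below are illustrative assumptions, not the vortex configuration of the paper.

```python
import numpy as np

def box_filter(f, w=5):
    """Periodic top-hat (box) filter of width w grid points."""
    return sum(np.roll(f, s) for s in range(-(w // 2), w - w // 2)) / w

def bardina_stress(u, v, w=5):
    """Scale-similarity (Bardina) SGS stress component:
    tau_ss = bar(ubar * vbar) - bar(ubar) * bar(vbar),
    i.e., the resolved-field analogue of the true SGS stress."""
    ub, vb = box_filter(u, w), box_filter(v, w)
    return box_filter(ub * vb, w) - box_filter(ub, w) * box_filter(vb, w)
```

The stress vanishes identically for a uniform field and is generally nonzero for fluctuating fields, which is the minimal consistency check for any similarity-type closure.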

  7. Combination of Lidar Elevations, Bathymetric Data, and Urban Infrastructure in a Sub-Grid Model for Predicting Inundation in New York City during Hurricane Sandy

    CERN Document Server

    Loftis, Jon Derek; Hamilton, Stuart E; Forrest, David R


    We present the geospatial methods in conjunction with results of a newly developed storm surge and sub-grid inundation model, which was applied in New York City during Hurricane Sandy in 2012. Sub-grid modeling takes a novel approach to partial wetting and drying within grid cells, eschewing the conventional hydrodynamic modeling method by nesting a sub-grid containing high-resolution lidar topography and fine-scale bathymetry within each computational grid cell. In doing so, the sub-grid modeling method is heavily dependent on the building and street configuration provided by the DEM. Spatial comparisons between the sub-grid model and FEMA's maximum inundation extents in New York City yielded an unparalleled absolute mean distance difference of 38 m and an average 75% areal spatial match. An in-depth error analysis reveals that the modeled extent contour is well correlated with the FEMA extent contour in most areas, except in several distinct areas where differences in special features cause sig...

  8. Wide variation in spatial genetic structure between natural populations of the European beech (Fagus sylvatica) and its implications for SGS comparability. (United States)

    Jump, A S; Rico, L; Coll, M; Peñuelas, J


    Identification and quantification of spatial genetic structure (SGS) within populations remains a central element of understanding population structure at the local scale. Understanding such structure can inform on aspects of the species' biology, such as establishment patterns and gene dispersal distance, in addition to sampling design for genetic resource management and conservation. However, recent work has identified that variation in factors such as sampling methodology, population characteristics and marker system can all lead to significant variation in SGS estimates. Consequently, the extent to which estimates of SGS can be relied on to inform on the biology of a species or differentiate between experimental treatments is open to doubt. Following on from a recent report of unusually extensive SGS when assessed using amplified fragment length polymorphisms in the tree Fagus sylvatica, we explored whether this marker system led to similarly high estimates of SGS extent in other apparently similar populations of this species. In the three populations assessed, SGS extent was even stronger than this previously reported maximum, extending up to 360 m, an increase of up to 800% in comparison with the generally accepted maximum of 30-40 m based on the literature. Within this species, wide variation in SGS estimates exists, whether quantified as SGS intensity, extent or the Sp parameter. Consequently, we argue that greater standardization should be applied in sample design and SGS estimation and highlight five steps that can be taken to maximize the comparability between SGS estimates.
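The Sp parameter mentioned above is commonly computed, following Vekemans and Hardy (2004), as Sp = -b / (1 - F1), where b is the regression slope of pairwise kinship on ln(distance) and F1 is the mean kinship in the first distance class. A minimal sketch with synthetic inputs (not the paper's data or marker system):

```python
import numpy as np

def sp_statistic(distances, kinship, first_class_max):
    """Sp = -b / (1 - F1) after Vekemans & Hardy (2004):
    b  = slope of pairwise kinship regressed on ln(pairwise distance);
    F1 = mean kinship among pairs closer than first_class_max."""
    b = np.polyfit(np.log(distances), kinship, 1)[0]
    f1 = kinship[distances < first_class_max].mean()
    return -b / (1.0 - f1)
```

Stronger SGS (steeper kinship decay with distance) yields a larger Sp, which is why the statistic is used to compare intensity across studies with different sampling schemes.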

  9. Sub-Grid Modeling of Electrokinetic Effects in Micro Flows (United States)

    Chen, C. P.


    Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (Lab-on-Chip devices). To successfully design and operate microfluidic systems, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nano dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomalies (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows poses a significant numerical challenge because the double layer (10 nm up to microns) is very thin compared to the channel width (which can be hundreds of microns). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-D geometries. Existing approaches using a "slip" wall velocity and an augmented double layer are difficult to use when the flow geometry is complicated, e.g., flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can be accounted for at the same time. With this
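The "slip" wall-velocity approach mentioned above typically replaces the resolved EDL with the Helmholtz-Smoluchowski slip velocity u = -εζE/μ, applied as an effective wall boundary condition. A minimal sketch, assuming typical water properties (the abstract's sub-grid integration method itself is not reproduced here):

```python
EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def eo_slip_velocity(zeta, e_field, eps_r=78.5, mu=1.0e-3):
    """Helmholtz-Smoluchowski slip velocity u = -eps * zeta * E / mu [m/s].
    zeta    : zeta potential at the wall [V]
    e_field : tangential electric field [V/m]
    eps_r   : relative permittivity of the liquid (water assumed)
    mu      : dynamic viscosity [Pa s] (water assumed)
    Used as an effective wall velocity so the EDL need not be resolved."""
    return -eps_r * EPS0 * zeta * e_field / mu
```

For a typical zeta potential of -50 mV and a 10 kV/m field this gives a slip velocity of roughly 0.35 mm/s, the right order of magnitude for electro-osmotic pumping in glass microchannels.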

  10. The Storm Surge and Sub-Grid Inundation Modeling in New York City during Hurricane Sandy

    Directory of Open Access Journals (Sweden)

    Harry V. Wang


    Full Text Available Hurricane Sandy inflicted heavy damage on New York City and the New Jersey coast as the second costliest storm in history. A large-scale, unstructured-grid storm tide model, the Semi-implicit Eulerian Lagrangian Finite Element (SELFE) model, was used to hindcast water level variation during Hurricane Sandy in the mid-Atlantic portion of the U.S. East Coast. The model was forced by eight tidal constituents at the model's open boundary, 1500 km away from the coast, and by the wind and pressure fields from the atmospheric model Regional Atmospheric Modeling System (RAMS) provided by Weatherflow Inc. Comparisons of the modeled storm tide with the NOAA gauge stations from Montauk, NY, Long Island Sound, encompassing New York Harbor, Atlantic City, NJ, to Duck, NC, were in good agreement, with an overall root mean square error and relative error on the order of 15–20 cm and 5%–7%, respectively. Furthermore, using the large-scale model outputs as boundary conditions, a separate sub-grid model that incorporates LIDAR data for the major portion of New York City was also set up to investigate the detailed inundation process. The model results compared favorably with USGS's Hurricane Sandy Mapper database in terms of timing, local inundation area, and depth of the flooding water. The street-level inundation with water bypassing the city buildings was resolved, and the maximum extent of horizontal inundation was calculated, which was within 30 m of the data-derived estimate by USGS.

  11. The effects of the sub-grid variability of soil and land cover data on agricultural droughts in Germany (United States)

    Kumar, Rohini; Samaniego, Luis; Zink, Matthias


    Simulated soil moisture from land surface or water balance models is increasingly used to characterize and/or monitor the development of agricultural droughts at regional and global scales (e.g., NLDAS, EDO, GLDAS). The skill of these models to accurately replicate hydrologic fluxes and state variables is strongly dependent on the quality of the meteorological forcings, the conceptualization of dominant processes, and the parameterization scheme used to incorporate the variability of land surface properties (e.g., soil, topography, and vegetation) at coarser spatial resolutions (e.g., at least 4 km). The goal of this study is to analyze the effects of the sub-grid variability of soil texture and land cover properties on agricultural drought statistics such as duration, severity, and areal extent. For this purpose, a process-based mesoscale hydrologic model (mHM) is used to create two sets of daily soil moisture fields over Germany at a spatial resolution of (4 × 4) km2 from 1950 to 2011. These simulations differ from each other only in the manner in which the land surface properties are accounted for within the model. In the first set, soil moisture fields are obtained with the multiscale parameter regionalization (MPR) scheme (Samaniego et al. 2010; Kumar et al. 2012), which explicitly takes the sub-grid variability of soil texture and land cover properties into account. In the second set, on the contrary, a single dominant soil and land cover class is used for every grid cell at 4 km. Within each set, the propagation of parameter uncertainty into the soil moisture simulations is also evaluated using an ensemble of the 100 best global parameter sets of mHM (Samaniego et al. 2012). To ensure comparability, both sets of these ensemble simulations are forced with the same fields of meteorological variables (e.g., precipitation, temperature, and potential evapotranspiration).
Results indicate that both sets of model simulations, with and without the sub-grid variability of
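Drought statistics of the kind analyzed here (duration, severity) can be sketched by thresholding an empirical soil-moisture percentile. This is a generic illustration of the bookkeeping, not mHM's actual drought index; the 20th-percentile threshold is an assumption.

```python
import numpy as np

def drought_events(sm, threshold=0.2):
    """Flag agricultural drought where the empirical soil-moisture
    percentile drops below `threshold`. Returns a list of events
    (start_index, duration, severity), severity = sum of percentile deficits."""
    pct = sm.argsort().argsort() / (sm.size - 1)   # empirical percentile (rank-based)
    dry = pct < threshold
    events, i = [], 0
    while i < sm.size:
        if dry[i]:
            j = i
            while j < sm.size and dry[j]:
                j += 1                              # extend the dry spell
            events.append((i, j - i, float((threshold - pct[i:j]).sum())))
            i = j
        else:
            i += 1
    return events
```

Running the two mHM configurations through such bookkeeping is what allows duration, severity, and areal extent to be compared between the dominant-class and sub-grid-variability parameterizations.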

  12. A novel particle SGS model based on differential filter for LES of particle-laden turbulent flows (United States)

    Park, George; Urzay, Javier; Moin, Parviz


    When performing LES of particle-turbulence interactions, proper modelling of the effect of subgrid-scale (SGS) fluid motions on the particle dynamics is critical for accurate prediction of particle dispersion. Existing particle SGS models recover the missing SGS fluid velocities required in the particle equation of motion by assuming stochastic evolution of the SGS fluctuations seen by particles, or by deconvolving the LES solution with an approximate inverse of the filter. In this study, we investigate the use of the differential filter for deconvolution-based particle SGS modelling. Deconvolution with a differential filter is potentially an attractive alternative to existing Padé-filter-based approximate deconvolution techniques. Exact deconvolution can be done trivially with a differential filter, because the filter is defined in inverse-filter form, and the method can be easily extended to unstructured grids. LES of one-way coupled particle-turbulence interaction in isotropic turbulence is performed, and model performance is analysed in terms of particle dispersion statistics. A dynamic procedure for determining the coefficient related to the filter width is under development, and the resulting formulation will be compared to constant-coefficient models. This study was supported by the DOE PSAAP2 Program.
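The appeal of the differential filter is that its inverse is explicit: if ū = (1 - Δ²∂ₓₓ)⁻¹u, then u = ū - Δ²∂ₓₓū, so deconvolution requires no iterative inversion. A 1-D periodic finite-difference sketch (the grid, filter width, and discretization are assumptions, not the study's setup):

```python
import numpy as np

def deconvolve_differential(u_bar, delta, dx):
    """Exact deconvolution for the differential filter:
    (1 - delta^2 d2/dx2) u_bar = u  =>  u = u_bar - delta^2 * u_bar''.
    Second derivative by periodic central differences."""
    d2 = (np.roll(u_bar, -1) - 2.0 * u_bar + np.roll(u_bar, 1)) / dx**2
    return u_bar - delta**2 * d2
```

For a single Fourier mode sin(kx) with k = 1 the deconvolved field is amplified by approximately 1 + δ², recovering the energy the filter removed, which is the property the particle SGS model exploits.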

  13. Autonomous Operation of Hybrid Microgrid with AC and DC Sub-Grids

    DEFF Research Database (Denmark)

    Loh, Poh Chiang; Blaabjerg, Frede


    This paper investigates the active and reactive power sharing of an autonomous hybrid microgrid. Unlike existing microgrids, which are purely ac, the hybrid microgrid studied here comprises dc and ac sub-grids interconnected by power electronic interfaces. The main challenge is to manage the power flow among all the sources distributed throughout the two types of sub-grids, which certainly is tougher than previous efforts developed for only either an ac or a dc microgrid. This wider scope of control has not yet been investigated, and would certainly rely on the coordinated operation of dc sources, ac sources and interlinking converters. Suitable control and normalization schemes are therefore developed for controlling them, with results presented showing the overall performance of the hybrid microgrid.
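The normalization idea can be sketched with droop curves mapped onto a common per-unit loading: frequency droop on the ac side, voltage droop on the dc side, and an interlinking converter that transfers power in proportion to the loading mismatch. All setpoints and ratings below are hypothetical, not the paper's design values.

```python
def ac_loading(f, f0=50.0, f_min=49.0):
    """Normalised ac sub-grid loading from frequency droop: 0 (idle) .. 1 (full)."""
    return (f0 - f) / (f0 - f_min)

def dc_loading(v, v0=400.0, v_min=380.0):
    """Normalised dc sub-grid loading from voltage droop: 0 (idle) .. 1 (full)."""
    return (v0 - v) / (v0 - v_min)

def interlink_power(f, v, p_rated=5e3):
    """Interlinking-converter sketch: shift power toward the more heavily
    loaded sub-grid, proportional to the normalised loading mismatch.
    Positive result: power flows from the dc side to the ac side [W]."""
    return (ac_loading(f) - dc_loading(v)) * p_rated
```

Because both droops are normalized to the same 0..1 range, the two physically different signals (Hz and V) become directly comparable, which is the essence of the coordination scheme.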

  14. Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs. (United States)

    Krueger, S. K.; Belochitski, A.; Moorthi, S.; Bogenschutz, P.; Pincus, R.


    A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation and cloudiness. Unlike other similar methods, only one new prognostic variable, turbulent kinetic energy (TKE), needs to be introduced, making the technique computationally efficient. The SHOC code was adapted for a global model environment from its origins in a cloud-resolving model, and incorporated into NCEP GFS. SHOC was first tested in a non-interactive mode, a configuration where SHOC receives inputs from the host model but its outputs are not returned to the GFS. In this configuration: (a) SGS TKE values produced by GFS SHOC are consistent with those produced by SHOC in a CRM; (b) SGS TKE in GFS SHOC exhibits a well-defined diurnal cycle; (c) there is enhanced boundary layer turbulence in the subtropical stratocumulus and tropical transition-to-cumulus areas; (d) the buoyancy flux diagnosed from the assumed PDF is consistent with the independently calculated Brunt-Vaisala frequency in identifying stable and unstable regions. Next, SHOC was coupled to GFS: turbulent diffusion coefficients computed by SHOC are now used in place of those currently produced by the GFS boundary layer and shallow convection schemes (Han and Pan, 2011), and the condensation and cloud fraction diagnosed from the SGS PDF replace those calculated in the current large-scale cloudiness scheme (Zhao and Carr, 1997).
Ongoing activities consist of debugging the fully coupled GFS/SHOC. Future work will consist of evaluating model performance and tuning the physics if necessary, by performing medium-range NWP forecasts with prescribed initial conditions, and AMIP-type climate
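The PDF-based condensation diagnosis can be sketched for a single-Gaussian PDF of the saturation deficit (SHOC itself assumes a more general joint PDF; the Gaussian form below is a simplification in the Sommeria-Deardorff spirit, with assumed variable names):

```python
import math

def pdf_cloud(qt_mean, qsat, sigma_s):
    """Gaussian assumed-PDF condensation diagnosis:
    Q1 = (qt - qs) / sigma  (normalised saturation deficit);
    cloud fraction  C  = 0.5 * (1 + erf(Q1 / sqrt(2)));
    mean condensate ql = sigma * (Q1 * C + phi(Q1)), phi = Gaussian density."""
    q1 = (qt_mean - qsat) / sigma_s
    c = 0.5 * (1.0 + math.erf(q1 / math.sqrt(2.0)))
    ql = sigma_s * (q1 * c + math.exp(-0.5 * q1 * q1) / math.sqrt(2.0 * math.pi))
    return c, max(ql, 0.0)
```

At exactly saturated mean conditions the scheme diagnoses 50% cloud fraction with a small positive condensate, i.e., partial cloudiness that an all-or-nothing saturation scheme cannot produce.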


  15. ODY MARS GAMMA RAY SPECTROMETER 5 SGS (United States)
    National Aeronautics and Space Administration — The ODY MARS GAMMA RAY SPECTROMETER 5 SGS (SGS) data set is a collection of data tables that contain a gamma spectrum and the associated engineering data that has...

  16. Evaluation of Leray, LANS and Verstappen regularizations in LES, without and with added SGS modeling (United States)

    Winckelmans, G.; Bourgeois, N.; Collet, Y.; Duponcheel, M.


    Regularization approaches (Leray, LANS and Verstappen) for the ``restriction in the production of small-scales'' in turbulence simulations have regained some interest in the LES community, because of their potentially appealing properties due to filtering. Their potential is here investigated using the best possible numerics (dealiased pseudo-spectral code) and on simple problems: transition of the Taylor-Green vortex (TGV) and its ensuing turbulence, and developed homogeneous isotropic turbulence (HIT). The filtered velocity field is obtained using discrete filters, also of various orders (2 and 6). Diagnostics include energy, enstrophy, and spectra. The performance of the regularizations on the TGV is first evaluated in inviscid mode (96^3 Euler), then in viscous mode at Re=1600 (256^3 DNS and 48^3 LES). Although they delay the production of small scales, none of the regularizations can perform LES when the flow has become turbulent: the small scales are still too energized, and thus added subgrid-scale (SGS) modeling is required. The combination of regularization and SGS modeling (here using the RVM multiscale model) is then also evaluated. Finally, 128^3 LES of fully developed HIT at very high Re is also investigated, providing the asymptotic behavior. In particular, it is found that the regularization helps somewhat extend the true inertial subrange obtained with the RVM model.
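Leray regularization replaces the advecting velocity with its filtered counterpart while leaving the advected field unfiltered. A 1-D Burgers sketch with a periodic top-hat filter (the numerics here are illustrative central differences, not the dealiased pseudo-spectral setup of the study):

```python
import numpy as np

def leray_burgers_step(u, dt, dx, nu, w=5):
    """One explicit Euler step of 1-D Burgers with Leray regularisation:
    u_t + u_bar * u_x = nu * u_xx, where u_bar is the top-hat-filtered
    velocity. Only the *advecting* velocity is filtered."""
    u_bar = sum(np.roll(u, s) for s in range(-(w // 2), w - w // 2)) / w
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)          # central first derivative
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2    # central second derivative
    return u - dt * u_bar * ux + dt * nu * uxx
```

Filtering the advecting velocity weakens the steepening nonlinearity, which is the "restriction in the production of small scales" the abstract refers to; as the abstract reports, this delays but does not replace SGS dissipation.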

  17. Evaluation of Leray and Verstappen regularizations in LES, without and with added SGS modeling (United States)

    Winckelmans, G.; Bourgeois, N.; Collet, Y.; Duponcheel, M.


    Regularization approaches (Leray and Verstappen) for the "restriction in the rate of production of small-scales" in turbulence simulations have regained some interest in the LES community. Their potential is here investigated using the best numerics (dealiased pseudo-spectral code) and on two cases: transition of the Taylor-Green vortex (TGV) and its ensuing turbulence, decaying homogeneous isotropic turbulence (HIT). The filtered velocity fields are obtained using discrete filters, also of various orders. Diagnostics include energy, enstrophy and spectra. The performance of the regularizations is first evaluated on the TGV in inviscid mode (96^3); then in viscous mode: 256^3 DNS at Re=1600, 128^3 LES at Re=5000 (compared to 1024^3 DNS). Although they indeed delay the rate of production of small scales, they cannot sustain LES when the flow has become turbulent: the small scales are still too energized. Added subgrid-scale (SGS) modeling is thus required. The combination of regularization and SGS modeling (here using the RVM multiscale model) is then also evaluated. Finally, 128^3 LES of fully developed HIT at very high Re is also investigated, providing the asymptotic behavior. The regularizations help increase the true inertial subrange obtained with the RVM model.

  18. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves (United States)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry


    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF

  19. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    Directory of Open Access Journals (Sweden)

    C. Montzka


    Full Text Available Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller–Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem–van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based
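The similarity-scaling idea can be sketched with the van Genuchten retention curve: each sub-grid location shares the grid-cell reference curve up to a scaling of the suction head. The parameter values below are illustrative, not ROSETTA output, and the scaling convention is a simplified stand-in for the relaxed Warrick form.

```python
def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention: theta(h) for suction head h >= 0,
    theta = theta_r + (theta_s - theta_r) * (1 + (alpha*h)^n)^(-m), m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * h) ** n) ** (-m)

def scaled_theta(h, scale, theta_r, theta_s, alpha, n):
    """Miller-Miller-type similarity sketch: a local sub-grid curve is the
    grid-cell reference curve evaluated at the scaled suction h / scale,
    so one reference WRC plus local scale factors spans the whole cell."""
    return vg_theta(h / scale, theta_r, theta_s, alpha, n)
```

Storing one effective curve per grid cell plus a distribution of scale factors is what lets the data set preserve sub-grid retention variability at a fraction of the cost of storing every local curve.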

  20. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling (United States)

    Sarlak, Hamid


    This paper presents results of a series of numerical simulations studying the aerodynamic characteristics of the low-Reynolds-number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000, and the simulations primarily investigate the role of sub-grid scale (SGS) modeling on the dynamics of the flow generated over the airfoil, which has not been dealt with in great detail in the past. The simulations are increasingly influenced by SGS modeling as the Reynolds number increases, and the effect is visible even at a relatively low chord Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky model gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. The implicit LES offers the closest pressure-distribution predictions to those in the literature.

  1. Modelling sub-grid wetland in the ORCHIDEE global land surface model: evaluation against river discharges and remotely sensed data

    Directory of Open Access Journals (Sweden)

    B. Ringeval


    Full Text Available The quality of the global hydrological simulations performed by land surface models (LSMs) strongly depends on processes that occur at unresolved spatial scales. Approaches such as TOPMODEL have been developed, which allow soil moisture redistribution within each grid cell based upon sub-grid scale topography. Moreover, the coupling between TOPMODEL and an LSM appears as a potential way to simulate wetland extent dynamics and their sensitivity to climate, a recently identified research problem for biogeochemical modelling, including methane emissions. Global evaluation of the coupling between TOPMODEL and an LSM is difficult, and prior attempts have been indirect, based on the evaluation of the simulated river flow. This study presents a new way to evaluate this coupling, within the ORCHIDEE LSM, using remote sensing data of inundated areas. Because of differences in nature between the satellite-derived information (inundation extent) and the variable diagnosed by TOPMODEL/ORCHIDEE (area at maximum soil water content), the evaluation focuses on the spatial distribution of these two quantities as well as on their temporal variation. Despite some difficulties in exactly matching observed localized inundated events, we obtain a rather good agreement in the distribution of these two quantities at a global scale. Floodplains are not accounted for in the model, and this is a major limitation. The difficulty of reproducing the year-to-year variability of the observed inundated area (for instance, the decreasing trend by the end of the 1990s) is also underlined. Classical indirect evaluation based on comparison between simulated and observed river flow is also performed and underlines difficulties in simulating river flow after coupling with TOPMODEL. The relationship between inundation and river flow at the basin scale in the model is analyzed using both methods (evaluation against remote sensing data and river flow). Finally, we discuss the potential of
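The TOPMODEL diagnostic the evaluation relies on (the grid-cell fraction at maximum soil water content) can be sketched from the topographic index distribution: the local deficit is d_i = D + m(λ̄ - λ_i), and the saturated fraction is the share of the cell where d_i ≤ 0. Variable names below are generic TOPMODEL conventions, not ORCHIDEE's code.

```python
import numpy as np

def saturated_fraction(topo_index, mean_deficit, m):
    """TOPMODEL sketch: local storage deficit
    d_i = mean_deficit + m * (mean(topo_index) - topo_index_i);
    the fraction of the cell with d_i <= 0 is at maximum soil water
    content (the quantity compared against satellite inundation extent)."""
    lam_bar = topo_index.mean()
    local_deficit = mean_deficit + m * (lam_bar - topo_index)
    return float((local_deficit <= 0.0).mean())
```

As the cell-mean deficit shrinks (wetter conditions), progressively lower-index locations saturate, which is how a single prognostic moisture state yields a dynamic sub-grid saturated area.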

  2. Assessment of zero-equation SGS models for simulating indoor environment (United States)

    Taghinia, Javad; Rahman, Md Mizanur; Tse, Tim K. T.


    The understanding of air-flow in enclosed spaces plays a key role in designing ventilation systems and indoor environments. From a computational fluid dynamics standpoint, large eddy simulation (LES) offers a suitable means to analyze complex flows with recirculation and streamline-curvature effects, providing more robust and accurate details than Reynolds-averaged Navier-Stokes simulations. This work assesses the performance of two zero-equation sub-grid scale models: the Rahman-Agarwal-Siikonen-Taghinia (RAST) model with a single grid filter and the dynamic Smagorinsky model with grid-filter and test-filter scales. This in turn allows a cross-comparison of the effect of two different LES methods in simulating indoor air-flows with forced and mixed (natural + forced) convection. The RAST model shows better agreement with experiments in wall-bounded non-equilibrium indoor air-flows, owing to its sensitivity to both the shear and vorticity parameters.
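As background to the zero-equation closures compared in this record, a minimal sketch of the algebraic Smagorinsky model (the baseline behind the dynamic variant) on a 2-D grid. The function name and the fixed value of Cs are illustrative, and the RAST model's shear/vorticity sensitization is not reproduced here:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, cs=0.17):
    """Zero-equation (algebraic) Smagorinsky SGS eddy viscosity on a 2-D grid.

    nu_t = (Cs * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij) the magnitude
    of the resolved strain-rate tensor and Delta the grid-filter width.
    """
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)                     # symmetric off-diagonal term
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    delta = np.sqrt(dx * dy)                      # grid-filter width
    return (cs * delta) ** 2 * s_mag
```

For a uniform shear u = gamma * y the strain magnitude reduces to |S| = gamma, so nu_t is constant over the field, a quick sanity check for any implementation.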

  3. Sub-grid combustion modeling for compressible two-phase reacting flows (United States)

    Sankaran, Vaidyanathan


    A generic formulation for modeling turbulent combustion in compressible, high-Reynolds-number, two-phase reacting flows has been developed and validated. A sub-grid mixing/combustion model, the Linear Eddy Mixing (LEM) model, has been extended to compressible flows and used within the framework of Large Eddy Simulation (LES) in this LES-LEM approach. The LES-LEM approach is based on the proposition that the basic mechanistic distinction between convective and molecular effects should be preserved for accurate prediction of complex flow-fields such as those encountered in many combustion systems. Liquid droplets (represented by computational parcels) are tracked using a Lagrangian approach wherein Newton's equations of motion for the discrete particles are integrated explicitly in the Eulerian gas field. The gas-phase LES velocity fields are used to estimate the instantaneous gas velocity at each droplet location. Drag effects of the droplets on the gas phase and heat transfer between the gas and liquid phases are explicitly included; thus, full two-way coupling is achieved between the phases in the simulation. Validation of the compressible LES-LEM approach is conducted by simulating the flow-field in an operational General Electric Aircraft Engines combustor (LM6000). The results predicted using the proposed approach compare well with experiments and with a conventional (G-equation) thin-flame model. Particle-tracking algorithms used in the present study are validated by simulating droplet-laden temporal mixing layers; quantitative and qualitative comparison with the results of spectral DNS shows good agreement. Simulations using the current LES-LEM for a freely propagating partially premixed flame in a droplet-laden isotropic turbulent field correctly capture the flame structure of partially premixed flames. Due to the strong spatial variation of equivalence ratio, a broad flame similar to a premixed flame is realized. The current

  4. Use of fundamental condensation heat transfer experiments for the development of a sub-grid liquid jet condensation model

    Energy Technology Data Exchange (ETDEWEB)

    Buschman, Francis X., E-mail:; Aumiller, David L.


    Highlights:
    • Direct contact condensation data on liquid jets up to 1.7 MPa in pure steam and in the presence of noncondensable gas.
    • Identified a pressure effect on the impact of noncondensables in suppressing condensation heat transfer, not captured in existing data or correlations.
    • Pure-steam data are used to develop a new correlation for condensation heat transfer on subcooled liquid jets.
    • Noncondensable data are used to develop a modification to the renewal-time estimate used in the Young and Bajorek correlation for condensation suppression in the presence of noncondensables.
    • A jet injection boundary condition, using a sub-grid jet condensation model, is developed for COBRA-IE, which provides a more detailed estimate of the condensation rate on the liquid jet and allows the use of jet-specific closure relationships.
    Abstract: Condensation on liquid jets is an important phenomenon for many facets of nuclear power plant transients and analyses, such as containment spray cooling. An experimental facility constructed at the Pennsylvania State University, the High Pressure Liquid Jet Condensation Heat Transfer facility (HPLJCHT), has been used to perform steady-state condensation heat transfer experiments in which the temperature of the liquid jet is measured at different axial locations, allowing the condensation rate to be determined over the jet length. Test data have been obtained in a pure steam environment and with varying concentrations of noncondensable gas. These data extend the available jet condensation data from near atmospheric pressure up to a pressure of 1.7 MPa. An empirical correlation for the liquid-side condensation heat transfer coefficient has been developed based on the data obtained in pure steam. The data obtained with noncondensable gas were used to develop a correlation for the renewal time as used in the condensation suppression model developed by Young and Bajorek. This paper describes a new sub-grid liquid jet

  5. Top2 and Sgs1-Top3 Act Redundantly to Ensure rDNA Replication Termination.

    Directory of Open Access Journals (Sweden)

    Kamilla Mundbjerg


    Full Text Available Faithful DNA replication with correct termination is essential for genome stability and the transmission of genetic information. Here we have investigated the potential roles of Topoisomerase II (Top2) and the RecQ helicase Sgs1 during the late stages of replication. We find that cells lacking Top2 and Sgs1 (or Top3) display two different characteristics during late S/G2 phase: checkpoint activation and accumulation of asymmetric X-structures, both of which are independent of homologous recombination. Our data demonstrate that checkpoint activation is caused by a DNA structure formed at the strongest rDNA replication fork barrier (RFB) during replication termination; consistently, checkpoint activation is dependent on the RFB-binding protein Fob1. In contrast, asymmetric X-structures are formed independently of Fob1 at weaker rDNA replication fork barriers. Both checkpoint activation and the formation of asymmetric X-structures are, however, sensitive to conditions that facilitate fork merging and the progression of replication forks through replication fork barriers. Our data are consistent with a redundant role of Top2 and Sgs1 together with Top3 (Sgs1-Top3) in replication fork merging at rDNA barriers. At the RFB, either Top2 or Sgs1-Top3 is essential to prevent the formation of a checkpoint-activating DNA structure during termination, whereas at weaker rDNA barriers the absence of these enzymes merely delays replication fork merging, causing an accumulation of asymmetric termination structures that are resolved over time.

  6. Predictor-Corrector LU-SGS Discontinuous Galerkin Finite Element Method for Conservation Laws

    Directory of Open Access Journals (Sweden)

    Xinrong Ma


    Full Text Available An efficient implicit predictor-corrector LU-SGS discontinuous Galerkin (DG) approach for the compressible Euler equations on unstructured grids is investigated, adding error compensation for the high-order term. The original LU-SGS and GMRES schemes for the DG method are discussed. The Van Albada limiter is employed to make the scheme monotone. Numerical experiments on transonic inviscid flows around the NACA0012 airfoil, RAE2822 airfoil, and ONERA M6 wing indicate that the present algorithm has the advantages of low storage requirements and strong convergence acceleration. Its computational efficiency is close to that of the GMRES scheme, nearly 2.1 times that of the LU-SGS scheme on unstructured grids for 2D cases, and almost 5.5 times that of RK4 on unstructured grids for 3D cases.
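The LU-SGS family of solvers is built around symmetric lower-upper Gauss-Seidel sweeps. A toy sketch of that kernel on a dense linear system follows; it is illustrative only, not the paper's DG implementation, which works on block systems without assembling or factorizing a global matrix:

```python
import numpy as np

def lu_sgs_solve(a, b, x0=None, sweeps=50):
    """Symmetric Gauss-Seidel iteration (the kernel of LU-SGS) for A x = b.

    A is split as A = L + D + U; each iteration performs a forward sweep
    (lower triangle) followed by a backward sweep (upper triangle), needing
    no matrix factorization or Krylov-subspace storage -- the low-memory
    appeal noted in the abstract relative to GMRES.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n):            # forward (L) sweep
            x[i] = (b[i] - a[i, :i] @ x[:i] - a[i, i+1:] @ x[i+1:]) / a[i, i]
        for i in reversed(range(n)):  # backward (U) sweep
            x[i] = (b[i] - a[i, :i] @ x[:i] - a[i, i+1:] @ x[i+1:]) / a[i, i]
    return x
```

For a diagonally dominant system the sweeps converge to the direct solution; in an implicit CFD solver the same sweeps are applied cell-by-cell each pseudo-time step rather than to convergence.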

  7. Evaluation of a Sub-Grid Topographic Drag Parameterizations for Modeling Surface Wind Speed During Storms Over Complex Terrain in the Northeast U.S. (United States)

    Frediani, M. E.; Hacker, J.; Anagnostou, E. N.; Hopson, T. M.


    This study aims at improving regional simulation of 10-meter wind speed by verifying PBL schemes for storms at different scales, including convective storms, blizzards, tropical storms, and nor'easters over complex terrain in the northeast U.S. We verify a recently proposed sub-grid topographic drag scheme in stormy conditions and compare it with two PBL schemes (Mellor-Yamada and Yonsei University) from WRF-ARW over a region in the northeast U.S. The drag scheme was designed to adjust the surface drag over regions with high subgrid-scale topographic variability. The schemes are compared against surface observations using spatial, temporal, and pattern criteria. The spatial and temporal criteria are defined by season, diurnal cycle, and topography; the pattern criterion is based on clusters derived from cluster analysis. Results show that the drag scheme reduces the positive bias of low wind speeds but over-corrects the high wind speeds, producing a negative bias that grows in magnitude with increasing speed. The other two schemes underestimate the most frequent low-speed mode and overestimate high speeds. The error characteristics of all schemes respond to seasonal and diurnal-cycle changes. The Topo-wind experiment shows the best agreement with the observation quantiles in summer and fall, the best representation of the diurnal cycle in these seasons, and a reduced bias at all surface stations near the coast. In more stable conditions the Topo-wind scheme shows a larger negative bias. The cluster analysis reveals a correlation between bias and mean speed in the Mellor-Yamada and Yonsei University schemes that is not present when the drag scheme is used; with the drag scheme, the bias instead correlates with wind direction, increasing when the meridional wind component is negative. This pattern corresponds to trajectories with more land interaction, with the highest biases found in northwest-circulation clusters.
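The core idea of such a drag scheme, enhancing surface drag where sub-grid terrain variability is high, can be caricatured as follows. The scaling form and the constants c0 and alpha are invented for illustration and are not the WRF parameterization:

```python
import numpy as np

def subgrid_drag_enhancement(z_fine, coarse=25, c0=1.0, alpha=0.01):
    """Illustrative drag-coefficient enhancement from sub-grid terrain variance.

    z_fine: high-resolution terrain heights (2-D array, metres). Each
    coarse x coarse block of fine cells represents one model grid cell; the
    block's height standard deviation (in metres) scales up an
    unresolved-orography drag factor. Flat terrain returns the baseline c0.
    """
    ny, nx = (s // coarse * coarse for s in z_fine.shape)
    blocks = z_fine[:ny, :nx].reshape(ny // coarse, coarse, nx // coarse, coarse)
    sigma = blocks.std(axis=(1, 3))       # per-cell sub-grid terrain std-dev
    return c0 * (1.0 + alpha * sigma)     # hypothetical linear enhancement
```

A real scheme would feed such a factor into the surface-momentum sink of the PBL scheme; the over-correction of high winds reported in the abstract is exactly the risk of making this factor too aggressive.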

  8. Esc2 and Sgs1 act in functionally distinct branches of the homologous recombination repair pathway in Saccharomyces cerevisiae

    DEFF Research Database (Denmark)

    Mankouri, Hocine W; Ngo, Hien-Ping; Hickson, Ian D


    homologous recombination repair (HRR) intermediates. These roles are qualitatively similar to those of Sgs1, the yeast ortholog of the human Bloom's syndrome protein, BLM. However, whereas mutation of either ESC2 or SGS1 leads to the accumulation of unprocessed HRR intermediates in the presence of MMS...

  9. Rmi1 stimulates decatenation of double Holliday junctions during dissolution by Sgs1-Top3

    DEFF Research Database (Denmark)

    Cejka, Petr; Plank, Jody L; Bachrati, Csanad Z


    3 proteins are sufficient to migrate and disentangle a dHJ to produce exclusively non-crossover recombination products, in a reaction termed "dissolution." We show that Rmi1 stimulates dHJ dissolution at low Sgs1-Top3 protein concentrations, although it has no effect on the initial rate of Holliday...


    Atmospheric processes and the associated transport and dispersion of atmospheric pollutants are known to be highly variable in time and space. Current air quality models that characterize atmospheric chemistry effects, e.g. the Community Multi-scale Air Quality (CMAQ), provide vo...

  11. An investigation of the sub-grid variability of trace gases and aerosols for global climate modeling

    Directory of Open Access Journals (Sweden)

    Y. Qian


    Full Text Available One fundamental property and limitation of grid based models is their inability to identify spatial details smaller than the grid cell size. While decades of work have gone into developing sub-grid treatments for clouds and land surface processes in climate models, the quantitative understanding of sub-grid processes and variability for aerosols and their precursors is much poorer. In this study, WRF-Chem is used to simulate the trace gases and aerosols over central Mexico during the 2006 MILAGRO field campaign, with multiple spatial resolutions and emission/terrain scenarios. Our analysis focuses on quantifying the sub-grid variability (SGV of trace gases and aerosols within a typical global climate model grid cell, i.e. 75×75 km2.

    Our results suggest that a simulation with 3-km horizontal grid spacing adequately reproduces the overall transport and mixing of trace gases and aerosols downwind of Mexico City, while 75-km horizontal grid spacing is insufficient to represent local emission and terrain-induced flows along the mountain ridge, subsequently affecting the transport and mixing of plumes from nearby sources. Therefore, the coarse-model grid-cell average may not correctly represent aerosol properties measured over polluted areas. Probability density functions (PDFs) for trace gases and aerosols show that secondary trace gases and aerosols, such as O3, sulfate, ammonium, and nitrate, are more likely to have a relatively uniform probability distribution (i.e. smaller SGV) over a narrow range of concentration values. Mostly inert and long-lived trace gases and aerosols, such as CO and BC, are more likely to have broad and skewed distributions (i.e. larger SGV) over polluted regions. Over remote areas, all trace gases and aerosols are more uniformly distributed than over polluted areas. Both CO and O3 SGV vertical profiles are nearly constant within the PBL during daytime, indicating that trace gases
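The SGV diagnostic described above, the spread of a fine-grid field inside each coarse climate-model cell, can be sketched as follows (a minimal version, assuming a 2-D fine-grid field in which each block x block patch corresponds to one coarse cell, e.g. 25 cells of 3 km inside a 75-km cell):

```python
import numpy as np

def subgrid_variability(field, block=25):
    """Sub-grid variability (SGV) of a fine-grid tracer field.

    Reshapes the fine grid into (coarse_y, block, coarse_x, block) so that
    each block x block patch is one coarse cell, then returns per-cell mean
    (what the coarse model sees) and standard deviation (the sub-grid spread
    the coarse model cannot represent). Edge cells not filling a block are
    dropped.
    """
    ny, nx = (s // block * block for s in field.shape)
    b = field[:ny, :nx].reshape(ny // block, block, nx // block, block)
    return b.mean(axis=(1, 3)), b.std(axis=(1, 3))
```

A uniform field gives zero SGV everywhere; a field alternating between two values inside each coarse cell gives the maximal spread for that range, mirroring the uniform-vs-skewed PDF contrast in the abstract.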

  12. Sgs1's roles in DNA end resection, HJ dissolution, and crossover suppression require a two-step SUMO regulation dependent on Smc5/6. (United States)

    Bermúdez-López, Marcelino; Villoria, María Teresa; Esteras, Miguel; Jarmuz, Adam; Torres-Rosell, Jordi; Clemente-Blanco, Andres; Aragon, Luis


    The RecQ helicase Sgs1 plays critical roles during DNA repair by homologous recombination, from end resection to Holliday junction (HJ) dissolution. Sgs1 has both pro- and anti-recombinogenic roles, and therefore its activity must be tightly regulated. However, the controls involved in recruitment and activation of Sgs1 at damaged sites are unknown. Here we show a two-step role for Smc5/6 in recruiting and activating Sgs1 through SUMOylation. First, auto-SUMOylation of Smc5/6 subunits leads to recruitment of Sgs1 as part of the STR (Sgs1-Top3-Rmi1) complex, mediated by two SUMO-interacting motifs (SIMs) on Sgs1 that specifically recognize SUMOylated Smc5/6. Second, Smc5/6-dependent SUMOylation of Sgs1 and Top3 is required for the efficient function of STR. Sgs1 mutants impaired in recognition of SUMOylated Smc5/6 (sgs1-SIMΔ) or SUMO-dead alleles (sgs1-KR) exhibit unprocessed HJs at damaged replication forks, increased crossover frequencies during double-strand break repair, and severe impairment in DNA end resection. Smc5/6 is a key regulator of Sgs1's recombination functions. © 2016 Bermúdez-López et al.; Published by Cold Spring Harbor Laboratory Press.

  13. Members of the Salivary Gland Surface Protein (SGS) Family Are Major Immunogenic Components of Mosquito Saliva* (United States)

    King, Jonas G.; Vernick, Kenneth D.; Hillyer, Julián F.


    Mosquitoes transmit Plasmodium and certain arboviruses during blood feeding, when they are injected along with saliva. Mosquito saliva interferes with the host's hemostasis and inflammation response and influences the transmission success of some pathogens. One family of mosquito salivary gland proteins, named SGS, is composed of large bacterial-type proteins that in Aedes aegypti were implicated as receptors for Plasmodium on the basal salivary gland surface. Here, we characterize the biology of two SGSs in the malaria mosquito, Anopheles gambiae, and demonstrate their involvement in blood feeding. Western blots and RT-PCR showed that Sgs4 and Sgs5 are produced exclusively in female salivary glands, that expression increases with age and after blood feeding, and that protein levels fluctuate in a circadian manner. Immunohistochemistry showed that SGSs are present in the acinar cells of the distal lateral lobes and in the salivary ducts of the proximal lobes. SDS-PAGE, Western blots, bite blots, and immunization via mosquito bites showed that SGSs are highly immunogenic and form major components of mosquito saliva. Last, Western and bioinformatic analyses suggest that SGSs are secreted via a non-classical pathway that involves cleavage into a 300-kDa soluble fragment and a smaller membrane-bound fragment. Combined, these data strongly suggest that SGSs play an important role in blood feeding. Together with their role in malaria transmission, we propose that SGSs could be used as markers of human exposure to mosquito bites and in the development of disease control strategies. PMID:21965675

  14. Assessment of the t-model as a SGS model for LES of high-Re turbulent flows (United States)

    Chandy, Abhilash; Frankel, Steven


    The recently developed optimal-prediction-based t-model (PNAS, 2007) is quantitatively assessed as an SGS turbulence model for LES of decaying homogeneous turbulence (DHT) and of transition to turbulence in the Taylor-Green vortex (TGV), through comparisons to laboratory measurements and DNS. The t-model is based on the idea that the motion of a vortex at one scale is influenced by the past history of motion of vortices at other scales (``long memory'' effects). t-model predictions are compared to the classic non-dynamic Smagorinsky model. This work represents the t-model's first application to decaying turbulence, with comparison to the active-grid-generated decaying turbulence measurements of Kang et al. (J. Fluid Mech., 2003) at Reλ ≈ 720 and to the Re = 3000 DNS of transition to turbulence in the TGV by Drikakis et al. (J. Turb., 2007). For DHT, the non-dynamic Smagorinsky model is in excellent agreement with measurements for turbulent kinetic energy, though higher-order moments show slight discrepancies; for the TGV, energy decay rates agree reasonably well with DNS. The t-model's predictions are worse than Smagorinsky's at the same grid resolution owing to insufficient resolution of the small scales. Improved results are obtained at higher resolutions, but they remain inferior to Smagorinsky.

  15. Shu proteins promote the formation of homologous recombination intermediates that are processed by Sgs1-Rmi1-Top3

    DEFF Research Database (Denmark)

    Mankouri, Hocine W; Ngo, Hien-Ping; Hickson, Ian D


    CSM2, PSY3, SHU1, and SHU2 (collectively referred to as the SHU genes) were identified in Saccharomyces cerevisiae as four genes in the same epistasis group that suppress various sgs1 and top3 mutant phenotypes when mutated. Although the SHU genes have been implicated in homologous recombination...

  16. 78 FR 31970 - Accreditation and Approval of SGS North America, Inc., as a Commercial Gauger and Laboratory (United States)


    ..., Washington, DC 20229, tel. 202-344-1060. SUPPLEMENTARY INFORMATION: Notice is hereby given pursuant to 19 CFR 151.12 and 19 CFR 151.13, that SGS North America, Inc., 300 George Street, East Alton, IL 62024, has... vegetable oils for customs purposes, in accordance with the provisions of 19 CFR 151.12 and 19 CFR 151.13...

  17. Processing of homologous recombination repair intermediates by the Sgs1-Top3-Rmi1 and Mus81-Mms4 complexes

    DEFF Research Database (Denmark)

    Hickson, Ian D; Mankouri, Hocine W


     structures) following replicative stress. Further characterization of these X structures may reveal why loss of BLM (the human Sgs1 ortholog) leads to the human cancer predisposition disorder, Bloom syndrome. In two recent complementary studies, we examined the nature of the X structures arising in yeast strains...

  18. Clonal growth and fine-scale genetic structure in tanoak (Notholithocarpus densiflorus: Fagaceae) (United States)

    Richard S. Dodd; Wasima Mayer; Alejandro Nettel; Zara. Afzal-Rafii


    The combination of sprouting and reproduction by seed can have important consequences for fine-scale spatial genetic structure (SGS). SGS is an important consideration for species' restoration because it determines the minimum distance among seed trees that maximizes genetic diversity while not prejudicing locally adapted genotypes. Local environmental...

  19. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn


    The chemical processes of gas-phase combustion in wildland fires are complex and occur at length-scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  20. Plasmodium relictum (lineages pSGS1 and pGRW11): complete synchronous sporogony in mosquitoes Culex pipiens pipiens. (United States)

    Kazlauskienė, Rita; Bernotienė, Rasa; Palinauskas, Vaidas; Iezhova, Tatjana A; Valkiūnas, Gediminas


    Plasmodium relictum is a widespread invasive agent of avian malaria, responsible for acute, chronic and debilitating diseases in many species of birds. Recent PCR-based studies have revealed astonishing genetic diversity of avian malaria parasites (genus Plasmodium), with numerous genetic lineages deposited in GenBank. Many studies have addressed the distribution and evolutionary relationships of avian Plasmodium lineages, but information about the patterns of development of different lineages in mosquito vectors remains insufficient. Here we present data on the sporogonic development of 2 widespread mitochondrial cytochrome b (cyt b) lineages of P. relictum (pSGS1 and pGRW11) in the mosquito Culex pipiens pipiens. The genetic distance between these lineages is 0.2%; they fall in a well-supported clade in the phylogenetic tree. Three P. relictum strains were isolated from a common crossbill (Loxia curvirostra, lineage pSGS1), a domestic canary (Serinus canaria domestica, pSGS1) and a house sparrow (Passer domesticus, pGRW11). These strains were multiplied in domestic canaries and used as donors of malarial gametocytes to infect C. p. pipiens. Mosquitoes were allowed to take a blood meal on infected canaries and were then dissected at intervals to study the development of sporogonic stages. All 3 strains developed synchronously and completed sporogony in this vector, with infective sporozoites reported in the salivary glands on day 14 after infection. Ookinetes, oocysts and sporozoites of all strains were morphologically indistinguishable. This study shows that the patterns of sporogonic development of the closely related lineages pSGS1 and pGRW11, and of different strains of the lineage pSGS1 of P. relictum, are similar, indicating that phylogenetic trees based on the cyt b gene can likely be used for predicting the sporogonic development of genetically similar avian malaria lineages in mosquito vectors. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

    Chacón Rebollo, Tomás


    This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not-necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncating this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation by analytically computing the family of eigenfunctions, and we perform a convergence and error analysis. We also present numerical tests that show the stability of the method for an odd number of spectral modes and an improvement of accuracy in the large resolved scales due to the addition of the sub-grid spectral scales.
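A minimal sketch of the VMS-spectral idea for the simplest case, pure diffusion on a single element, where the operator's eigenfunctions are sines and the sub-grid scale is a truncated eigen-expansion of the residual. The paper treats the advection-diffusion operator, whose eigenfunctions differ; this is only the self-adjoint special case:

```python
import numpy as np

def subscale_diffusion(residual, h=1.0, nu=1.0, n_modes=7, n_quad=201):
    """Spectral sub-grid scale for -nu*u'' = r on one element (0, h), u(0)=u(h)=0.

    The eigenfunctions of the operator are phi_k = sin(k*pi*x/h) with
    eigenvalues lambda_k = nu*(k*pi/h)**2; the sub-scale solution is the
    truncated expansion sum_k (r_k / lambda_k) * phi_k, where r_k is the
    sine coefficient of the residual r (computed here by the trapezoid rule).
    Returns x and the sub-scale solution sampled on the quadrature grid.
    """
    x = np.linspace(0.0, h, n_quad)
    dx = x[1] - x[0]
    u = np.zeros_like(x)
    for k in range(1, n_modes + 1):
        phi = np.sin(k * np.pi * x / h)
        f = residual(x) * phi
        r_k = 2.0 / h * dx * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule
        lam = nu * (k * np.pi / h) ** 2                        # eigenvalue
        u += (r_k / lam) * phi
    return x, u
```

For a residual proportional to a single eigenfunction the truncation is exact: r(x) = sin(pi*x) on (0, 1) with nu = 1 gives u(x) = sin(pi*x)/pi**2, so the first mode alone recovers the sub-scale.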

  2. SGS3 Cooperates with RDR6 in Triggering Geminivirus-Induced Gene Silencing and in Suppressing Geminivirus Infection in Nicotiana Benthamiana

    Directory of Open Access Journals (Sweden)

    Fangfang Li


    Full Text Available RNA silencing has an important role in defending against virus infection in plants. Plants deficient in RNA silencing components often show enhanced susceptibility to viral infections. RNA-dependent RNA polymerase (RDR)-mediated antiviral defense has a pivotal role in resistance to many plant viruses. In RDR6-mediated defense against viral infection, a plant-specific RNA-binding protein, Suppressor of Gene Silencing 3 (SGS3), was also found to act against some viruses in Arabidopsis. In this study, we showed that SGS3 from Nicotiana benthamiana (NbSGS3) is required for sense-RNA-induced post-transcriptional gene silencing (S-PTGS) and for initiating sense-RNA-triggered systemic silencing. Further, deficiency of NbSGS3 inhibited geminivirus-induced endogenous gene silencing (GIEGS) and promoted geminivirus infection. During the TRV-mediated silencing of NbSGS3 or N. benthamiana RDR6 (NbRDR6), we found that their expression can be effectively fine-tuned. Plants with knock-down of both NbSGS3 and NbRDR6 almost totally blocked GIEGS and were more susceptible to geminivirus infection. These data suggest that NbSGS3 cooperates with NbRDR6 against GIEGS and geminivirus infection in N. benthamiana, which provides valuable information for breeding geminivirus-resistant plants.

  3. A Rad53 independent function of Rad9 becomes crucial for genome maintenance in the absence of the Recq helicase Sgs1.

    Directory of Open Access Journals (Sweden)

    Ida Nielsen

    Full Text Available The conserved family of RecQ DNA helicases consists of caretaker tumour suppressors that defend genome integrity by acting in several DNA repair pathways that maintain genome stability. In budding yeast, Sgs1 is the sole RecQ helicase, and it has been implicated in checkpoint responses, replisome stability and the dissolution of double Holliday junctions during homologous recombination. In this study we investigate a possible genetic interaction between SGS1 and RAD9 in the cellular response to methyl methane sulphonate (MMS)-induced damage and compare this with the genetic interaction between SGS1 and RAD24. The Rad9 protein, an adaptor for effector kinase activation, plays well-characterized roles in the DNA damage checkpoint response, whereas Rad24 acts as a sensor protein in the same response. Here we unveil novel insights into the cellular response to MMS-induced damage. Specifically, we show a strong synergy between SGS1 and RAD9 for recovery from MMS-induced damage and for suppression of gross chromosomal rearrangements, which is not the case for SGS1 and RAD24. Intriguingly, it is a Rad53-independent function of Rad9 that becomes crucial for genome maintenance in the absence of Sgs1. Despite this, our dissection of the MMS checkpoint response reveals parallel but unequal pathways for Rad53 activation and highlights significant differences between MMS- and hydroxyurea (HU)-induced checkpoint responses in relation to the requirement for the Sgs1-interacting partner Topoisomerase III (Top3). Thus, whereas earlier studies have documented a Top3-independent role of Sgs1 in the HU-induced checkpoint response, we show here that upon MMS treatment Sgs1 and Top3 together define a minor pathway parallel to that of Rad9.

  4. Effect of repeated exposure to Plasmodium relictum (lineage SGS1) on infection dynamics in domestic canaries. (United States)

    Cellier-Holzem, Elise; Esparza-Salas, Rodrigo; Garnier, Stéphane; Sorci, Gabriele


    Parasites are known to exert strong selection pressures on their hosts and, as such, favour the evolution of defence mechanisms. The negative impact of parasites on their hosts can have substantial consequences for population persistence and the epidemiology of infection. In natural populations, however, it is difficult to assess the cost of infection while controlling for other potentially confounding factors. For instance, individuals are repeatedly exposed to a variety of parasite strains, some of which can elicit immunological memory, further protecting the host from subsequent infections. The cost of infection is therefore expected to be particularly strong for primary infections and to decrease for individuals surviving the first infectious episode that are re-exposed to the pathogen. We tested this hypothesis experimentally using avian malaria parasites (Plasmodium relictum, lineage SGS1) and domestic canaries (Serinus canaria) as a model. Hosts were infected with a controlled dose of P. relictum as a primary infection, and control birds were injected with non-infected blood. Changes in haematocrit and body mass were monitored over a 20-day period. A protein of the acute-phase response (haptoglobin) was assessed as a marker of the inflammatory response mounted against the infection. Parasite intensity was also monitored. Surviving birds were then re-infected 37 days after the primary infection. In agreement with the predictions, we found that primary-infected birds paid a substantially higher cost in terms of infection-induced reduction in haematocrit than re-exposed birds. After the secondary infection, re-exposed hosts were also able to clear the infection at a faster rate than after the primary infection. These results have potential consequences for the epidemiology of avian malaria, since birds re-exposed to the pathogen can maintain parasitemia with low fitness costs, allowing the persistence of the pathogen within the host

  5. Holliday junction-containing DNA structures persist in cells lacking Sgs1 or Top3 following exposure to DNA damage

    DEFF Research Database (Denmark)

    Mankouri, Hocine W; Ashton, Thomas M; Hickson, Ian D


    The Sgs1-Rmi1-Top3 "dissolvasome" is required for the maintenance of genome stability and has been implicated in the processing of various types of DNA structures arising during DNA replication. Previous investigations have revealed that unprocessed (X-shaped) homologous recombination repair (HRR... and structurally unrelated Holliday junction (HJ) resolvases, Escherichia coli RusA or human GEN1(1-527), promotes the removal of these X-structures in vivo. Moreover, other types of DNA replication intermediates, including stalled replication forks and non-HRR-dependent X-structures, are refractory to RusA or GEN...

  6. SGS Analysis of the Evolution Equations of the Mixture Fraction and the Progress Variable Variances in the Presence of Spray Combustion

    Directory of Open Access Journals (Sweden)

    H. Meftah


    Full Text Available In this paper, direct numerical simulation databases have been generated to analyze the impact of the propagation of a spray flame on several subgrid-scale (SGS) models dedicated to the closure of the transport equations for the subgrid fluctuations of the mixture fraction Z and the progress variable c. Computations were carried out starting from a previous inert database [22] in which a cold flame was ignited at the center of the mixture when the droplet segregation and evaporation rate were at their highest levels. First, a RANS analysis showed a sharp increase of the mixture fraction fluctuations due to fuel consumption by the flame. Indeed, the local vapour mass fraction then reaches a minimum value, far from the saturation level, which leads to a strong increase of the evaporation rate, accompanied by a diminution of the oxidiser level. In the second part of this paper, a detailed evaluation of the subgrid models closing the variances and dissipation rates of the mixture fraction and the progress variable is carried out. Models selected for their efficiency in inert flows show very good behaviour in reactive flows as well.

  7. Evaluation of the Transport and Diffusion of Pollutants over an Urban Area Using a Local-Scale Advection-Diffusion Model and a Sub-Grid Street Model

    DEFF Research Database (Denmark)

    Salerno, R.; Vignati, E.


    Fifth International Conference on the Development and Application of Computer Techniques to Environmental Studies, Envirosoft/94.

  8. Analysis of spatial genetic structure in an expanding Pinus halepensis population reveals development of fine-scale genetic clustering over time. (United States)

    Troupin, D; Nathan, R; Vendramin, G G


    We analysed the change of spatial genetic structure (SGS) of reproductive individuals over time in an expanding Pinus halepensis population. To our knowledge, this is the first empirical study to analyse the temporal component of SGS by following the dynamics of successive cohorts of the same population over time, rather than analysing different age cohorts at a single time. SGS is influenced by various factors including restricted gene dispersal, microenvironmental selection, mating patterns and the spatial pattern of reproductive individuals. Several factors that affect SGS are expected to vary over time and as adult density increases. Using air photo analysis, tree-ring dating and molecular marker analysis we reconstructed the spread of reproductive individuals over 30 years beginning from five initial individuals. In the early stages, genotypes were distributed randomly in space. Over time and with increasing density, fine-scale (< 20 m) SGS developed and the magnitude of genetic clustering increased. The SGS was strongly affected by the initial spatial distribution and genetic variation of the founding individuals. The development of SGS may be explained by fine-scale environmental heterogeneity and possibly microenvironmental selection. Inbreeding and variation in reproductive success may have enhanced SGS magnitude over time.

  9. Srs2 and Sgs1-Top3 suppress crossovers during double-strand break repair in yeast. (United States)

    Ira, Grzegorz; Malkova, Anna; Liberi, Giordano; Foiani, Marco; Haber, James E


    Very few gene conversions in mitotic cells are associated with crossovers, suggesting that these events are regulated. This may be important for the maintenance of genetic stability. We have analyzed the relationship between homologous recombination and crossing-over in haploid budding yeast and identified factors involved in the regulation of crossover outcomes. Gene conversions unaccompanied by a crossover appear 30 min before conversions accompanied by exchange, indicating that there are two different repair mechanisms in mitotic cells. Crossovers are rare (5%), but deleting the BLM/WRN homolog, SGS1, or the SRS2 helicase increases crossovers 2- to 3-fold. Overexpressing SRS2 nearly eliminates crossovers, whereas overexpression of RAD51 in srs2Delta cells almost completely eliminates the noncrossover recombination pathway. We suggest Sgs1 and its associated topoisomerase Top3 remove double Holliday junction intermediates from a crossover-producing repair pathway, thereby reducing crossovers. Srs2 promotes the noncrossover synthesis-dependent strand-annealing (SDSA) pathway, apparently by regulating Rad51 binding during strand exchange.

  11. Numerical simulation of the dynamic flow behavior in a bubble column: a study of closures for turbulence and interface forces

    NARCIS (Netherlands)

    Zhang, D.; Deen, N.G.; Kuipers, J.A.M.


    Numerical simulations of the bubbly flow in two square cross-sectioned bubble columns were conducted with the commercial CFD package CFX-4.4. The effect of the model constant used in the sub-grid scale (SGS) model, CS, as well as the interfacial closures for the drag, lift and virtual mass forces
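    The model constant studied here enters through the Smagorinsky closure, in which the SGS eddy viscosity is nu_t = (CS·Delta)²·|S̄|, so predictions scale with the square of CS. A rough numpy sketch of that closure on a uniform grid (one-sided differences at domain edges; the fields and function names are illustrative, not the CFX-4.4 implementation):

```python
import numpy as np

def smagorinsky_viscosity(u, v, w, dx, cs=0.1):
    """Smagorinsky SGS eddy viscosity nu_t = (cs*dx)^2 * |S|,
    with |S| = sqrt(2 S_ij S_ij) from finite differences on a
    uniform grid of spacing dx (filter width taken equal to dx)."""
    vel = (u, v, w)
    # grad[i][j] = d(vel_i)/dx_j
    grad = [[np.gradient(vel[i], dx, axis=j) for j in range(3)]
            for i in range(3)]
    s_mag2 = 0.0
    for i in range(3):
        for j in range(3):
            s_ij = 0.5 * (grad[i][j] + grad[j][i])
            s_mag2 = s_mag2 + 2.0 * s_ij * s_ij
    return (cs * dx) ** 2 * np.sqrt(s_mag2)

rng = np.random.default_rng(0)
u, v, w = rng.random((3, 8, 8, 8))
nu_t = smagorinsky_viscosity(u, v, w, dx=0.1, cs=0.1)
```

    Doubling CS quadruples nu_t, which is why the predicted bubble-plume dynamics are sensitive to the choice of this constant.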

  12. Gas-Solid Turbulent Flow in a Circulating Fluidized Bed Riser; Numerical Study of Binary Particle Mixtures

    NARCIS (Netherlands)

    He, Y; Deen, N.G.; van Sint Annaland, M.; Kuipers, J.A.M.


    A numerical simulation was performed on a turbulent gas-particle multi-phase flow in a circulating fluidized bed riser based on a hard-sphere discrete particle model (DPM) for the particle phase and the Navier-Stokes equations for the gas phase. The sub-grid scale stresses (SGS) were modeled with

  13. A nonlinear structural subgrid-scale closure for compressible MHD Part II: a priori comparison on turbulence simulation data

    CERN Document Server

    Grete, P; Schmidt, W; Schleicher, D R G


    Even though compressible plasma turbulence is encountered in many astrophysical phenomena, its effect is often not well understood. Furthermore, direct numerical simulations are typically not able to reach the extreme parameters of these processes. For this reason, large-eddy simulations (LES), which only simulate large and intermediate scales directly, are employed. The smallest, unresolved scales and the interactions between small and large scales are introduced by means of a subgrid-scale (SGS) model. We propose and verify a new set of nonlinear SGS closures for future application as an SGS model in LES of compressible magnetohydrodynamics (MHD). We use 15 simulations (without explicit SGS model) of forced, isotropic, homogeneous turbulence with varying sonic Mach number Ms = 0.2 to 20 as reference data for the most extensive a priori tests performed so far in the literature. In these tests we explicitly filter the reference data and compare the performance of the new closures against th...

  14. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour


    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  15. Arabidopsis RecQsim, a plant-specific member of the RecQ helicase family, can suppress the MMS hypersensitivity of the yeast sgs1 mutant

    NARCIS (Netherlands)

    Bagherieh-Najjar, MB; de Vries, OMH; Kroon, JTM; Wright, EL; Elborough, KM; Hille, J; Dijkwel, PP

    The Arabidopsis genome contains seven genes that belong to the RecQ family of ATP-dependent DNA helicases. RecQ members in Saccharomyces cerevisiae (SGS1) and man (WRN, BLM and RecQL4) are involved in DNA recombination, repair and genome stability maintenance, but little is known about the function

  16. A fusion tag to fold on: the S-layer protein SgsE confers improved folding kinetics to translationally fused enhanced green fluorescent protein. (United States)

    Ristl, Robin; Kainz, Birgit; Stadlmayr, Gerhard; Schuster, Heinrich; Pum, Dietmar; Messner, Paul; Obinger, Christian; Schaffer, Christina


    Genetic fusion of two proteins frequently induces beneficial effects on the proteins, such as increased solubility, in addition to combining two protein functions. Here, we study the effects of the bacterial surface layer protein SgsE from Geobacillus stearothermophilus NRS 2004/3a on the folding of a C-terminally fused enhanced green fluorescent protein (EGFP) moiety. Although GFPs are generally unable to adopt a functional conformation in the bacterial periplasm of Escherichia coli cells, we observed periplasmic fluorescence from a chimera of a 150-amino-acid N-terminal truncation of SgsE and EGFP. Based on this finding, unfolding and refolding kinetics of different S-layer-EGFP chimeras, a maltose binding protein-EGFP chimera, and sole EGFP were monitored using green fluorescence as an indicator of the folded protein state. Calculated apparent rate constants for unfolding and refolding indicated different folding pathways for EGFP depending on the fusion partner used, and a clearly stabilizing effect was observed for the SgsE_C fusion moiety. Thermal stability, as determined by differential scanning calorimetry, and unfolding equilibria were found to be independent of the fused partner. We conclude that the stabilizing effect SgsE_C exerts on EGFP is due to a reduction of degrees of freedom for folding of EGFP in the fused state.

  17. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map (United States)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong


    The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small scale anisotropy in the scalar field originated from the imposed mean gradient is captured. The sensitivity of the synthetic fields on the input spectra is assessed by
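    The SGS quantities extracted here are defined by the filtering operation itself, e.g. q = bar(u·z) − bar(u)·bar(z) for the SGS scalar flux. A hedged sketch using a sharp spectral cutoff filter on synthetic sinusoids (illustrative stand-ins, not the MTLM fields); with this filter the flux vanishes identically whenever all modes of u and z lie below half the cutoff wavenumber:

```python
import numpy as np

def cutoff_filter(field, kc):
    """Sharp spectral cutoff: zero all Fourier modes with |k_i| > kc
    along any axis (periodic field assumed)."""
    fhat = np.fft.fftn(field)
    for axis in range(field.ndim):
        k = np.fft.fftfreq(field.shape[axis]) * field.shape[axis]
        shape = [1] * field.ndim
        shape[axis] = field.shape[axis]
        fhat *= (np.abs(k) <= kc).reshape(shape)
    return np.real(np.fft.ifftn(fhat))

def sgs_scalar_flux(u, z, kc=8):
    """SGS scalar flux along one direction: bar(u z) - bar(u) bar(z)."""
    return cutoff_filter(u * z, kc) - cutoff_filter(u, kc) * cutoff_filter(z, kc)

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
q_resolved = sgs_scalar_flux(np.sin(2 * x), np.cos(3 * x), kc=8)   # fully resolved
q_sgs = sgs_scalar_flux(np.sin(6 * x), np.cos(6 * x), kc=8)        # product unresolved
```

    The first flux is zero to machine precision (all product modes survive the filter), while the second is not, since sin(6x)·cos(6x) generates a wavenumber-12 mode that the kc = 8 filter removes.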

  18. Survival and growth of yeast without telomere capping by Cdc13 in the absence of Sgs1, Exo1, and Rad9.

    Directory of Open Access Journals (Sweden)

    Hien-Ping Ngo


    Full Text Available Maintenance of telomere capping is absolutely essential to the survival of eukaryotic cells. Telomere capping proteins, such as Cdc13 and POT1, are essential for the viability of budding yeast and mammalian cells, respectively. Here we identify, for the first time, three genetic modifications that allow budding yeast cells to survive without telomere capping by Cdc13. We found that simultaneous inactivation of Sgs1, Exo1, and Rad9, three DNA damage response (DDR) proteins, is sufficient to allow cell division in the absence of Cdc13. Quantitative amplification of ssDNA (QAOS) was used to show that the RecQ helicase Sgs1 plays an important role in the resection of uncapped telomeres, especially in the absence of the checkpoint protein Rad9. Strikingly, simultaneous deletion of SGS1 and the nuclease EXO1 further reduces resection at uncapped telomeres and, together with deletion of RAD9, permits cell survival without CDC13. Pulsed-field gel electrophoresis studies show that cdc13-1 rad9Delta sgs1Delta exo1Delta strains can maintain linear chromosomes despite the absence of telomere capping by Cdc13. However, with continued passage, the telomeres of such strains eventually become short and are maintained by recombination-based mechanisms. Remarkably, cdc13Delta rad9Delta sgs1Delta exo1Delta strains, lacking any Cdc13 gene product, are viable and can grow indefinitely. Our work has uncovered a critical role for RecQ helicases in limiting the division of cells with uncapped telomeres, and this may provide one explanation for increased tumorigenesis in human diseases associated with mutations of RecQ helicases. Our results reveal the plasticity of the telomere cap and indicate that the essential role of telomere capping is to counteract specific aspects of the DDR.

  19. An SGS3-like protein functions in RNA-directed DNA methylation and transcriptional gene silencing in Arabidopsis

    KAUST Repository

    Zheng, Zhimin


    RNA-directed DNA methylation (RdDM) is an important epigenetic mechanism for silencing transgenes and endogenous repetitive sequences such as transposons. The RD29A promoter-driven LUCIFERASE transgene and its corresponding endogenous RD29A gene are hypermethylated and silenced in the Arabidopsis DNA demethylase mutant ros1. By screening for second-site suppressors of ros1, we identified the RDM12 locus. The rdm12 mutation releases the silencing of the RD29A-LUC transgene and the endogenous RD29A gene by reducing the promoter DNA methylation. The rdm12 mutation also reduces DNA methylation at endogenous RdDM target loci, including transposons and other repetitive sequences. In addition, the rdm12 mutation affects the levels of small interfering RNAs (siRNAs) from some of the RdDM target loci. RDM12 encodes a protein with XS and coiled-coil domains, and is similar to SGS3, which is a partner protein of RDR6 and can bind to double-stranded RNAs with a 5′ overhang, and is required for several post-transcriptional gene silencing pathways. Our results show that RDM12 is a component of the RdDM pathway, and suggest that RdDM may involve double-stranded RNAs with a 5′ overhang and the partnering between RDM12 and RDR2. © 2010 Blackwell Publishing Ltd.

  20. Heteroduplex DNA position defines the roles of the Sgs1, Srs2, and Mph1 helicases in promoting distinct recombination outcomes.

    Directory of Open Access Journals (Sweden)

    Katrina Mitchel

    Full Text Available The contributions of the Sgs1, Mph1, and Srs2 DNA helicases during mitotic double-strand break (DSB) repair in yeast were investigated using a gap-repair assay. A diverged chromosomal substrate was used as a repair template for the gapped plasmid, allowing mismatch-containing heteroduplex DNA (hDNA) formed during recombination to be monitored. Overall DSB repair efficiencies and the proportions of crossovers (COs) versus noncrossovers (NCOs) were determined in wild-type and helicase-defective strains, allowing the efficiency of CO and NCO production in each background to be calculated. In addition, the products of individual NCO events were sequenced to determine the location of hDNA. Because hDNA position is expected to differ depending on whether a NCO is produced by synthesis-dependent strand annealing (SDSA) or through a Holliday junction (HJ)-containing intermediate, its position allows the underlying molecular mechanism to be inferred. Results demonstrate that each helicase reduces the proportion of CO recombinants, but that each does so in a fundamentally different way. Mph1 does not affect the overall efficiency of gap repair, and its loss alters the CO-NCO ratio by promoting SDSA at the expense of HJ-containing intermediates. By contrast, Sgs1 and Srs2 are each required for efficient gap repair, strongly promoting NCO formation and having little effect on CO efficiency. hDNA analyses suggest that all three helicases promote SDSA, and that Sgs1 and Srs2 additionally dismantle HJ-containing intermediates. The hDNA data are consistent with the proposed role of Sgs1 in the dissolution of double HJs, and we propose that Srs2 dismantles nicked HJs.

  2. Large Eddy Simulation of Turbulent Flows in Wind Energy

    DEFF Research Database (Denmark)

    Chivaee, Hamid Sarlak

    This research is devoted to Large Eddy Simulation (LES) and, to a lesser extent, wind tunnel measurements of turbulent flows in wind energy. It starts with an introduction to the LES technique associated with the solution of the incompressible Navier-Stokes equations, discretized using a finite volume method. The study is followed by a detailed investigation of Sub-Grid Scale (SGS) modeling. New SGS models are implemented into the computing code, and the effect of the SGS models is examined for different applications. Fully developed boundary layer flows are investigated at low and high Reynolds numbers, and thereafter, fully-developed infinite wind farm boundary layer simulations are performed. Sources of inaccuracy in the simulations are investigated, and it is found that high Reynolds number flows are more sensitive to the choice of the SGS model than their low Reynolds number counterparts...

  3. DYPTOP: a cost-efficient TOPMODEL implementation to simulate sub-grid spatio-temporal dynamics of global wetlands and peatlands

    Directory of Open Access Journals (Sweden)

    B. D. Stocker


    TOPMODEL (DYPTOP), which predicts the extent of inundation based on a computationally efficient TOPMODEL implementation. This approach rests on an empirical, grid-cell-specific relationship between the mean soil water balance and the flooded area. DYPTOP combines the simulated inundation extent and its temporal persistency with criteria for the ecosystem water balance and the modelled peatland-specific soil carbon balance to predict the global distribution of peatlands. We apply DYPTOP in combination with the LPX-Bern DGVM and benchmark the global-scale distribution, extent, and seasonality of inundation against satellite data. DYPTOP successfully predicts the spatial distribution and extent of wetlands and major boreal and tropical peatland complexes and reveals the governing limitations to peatland occurrence across the globe. Peatlands covering large boreal lowlands are reproduced only when accounting for a positive feedback induced by the enhanced mean soil water holding capacity in peatland-dominated regions. DYPTOP is designed to minimize input data requirements, optimizes computational efficiency and allows for a modular adoption in Earth system models.

  4. Fine-scale spatial genetic structure in predominantly selfing plants with limited seed dispersal: A rule or exception?

    Directory of Open Access Journals (Sweden)

    Sergei Volis


    Full Text Available Gene flow at a fine scale is still poorly understood despite its recognized importance for plant population demographic and genetic processes. We tested the hypothesis that the intensity of gene flow will be lower and the strength of spatial genetic structure (SGS) will be higher in more peripheral populations because of lower population density. The study was performed on the predominantly selfing Avena sterilis and included: (1) direct measurement of dispersal in a controlled environment; and (2) analyses of SGS in three natural populations, sampled in linear transects at fixed increasing inter-plant distances. We found that in A. sterilis major seed dispersal is by gravity in the close (less than 2 m) vicinity of the mother plant, with a minor additional effect of wind. Analysis of SGS with six nuclear SSRs revealed a significant autocorrelation for the distance class of 1 m only in the most peripheral desert population, while in the two core populations with Mediterranean conditions, no genetic structure was found. Our results support the hypothesis that the intensity of SGS increases from the species core to periphery as a result of decreased within-population gene flow related to low plant density. Our findings also show that predominant self-pollination and highly localized seed dispersal lead to SGS at a very fine scale, but only if plant density is not too high.
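    The autocorrelation-by-distance-class analysis described above can be sketched as a simple correlogram: standardize a per-plant genetic score, then average pairwise products within inter-plant distance bins. This is only a crude stand-in for the kinship-based SGS statistics actually applied to SSR genotypes (function and inputs are illustrative):

```python
import numpy as np

def correlogram(coords, values, bin_edges):
    """Mean pairwise product of a standardized per-individual score,
    grouped by inter-individual distance class (NaN for empty bins)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    v = (values - values.mean()) / values.std()
    prod = np.outer(v, v)
    iu = np.triu_indices(len(values), k=1)   # each pair counted once
    d, prod = d[iu], prod[iu]
    out = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (d >= lo) & (d < hi)
        out.append(prod[sel].mean() if sel.any() else np.nan)
    return out
```

    A positive value in the shortest distance class that decays toward zero (or below) at larger distances is the signature of fine-scale SGS.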

  5. Spatial Scales of Genetic Structure in Free-Standing and Strangler Figs (Ficus, Moraceae) Inhabiting Neotropical Forests.

    Directory of Open Access Journals (Sweden)

    Katrin Heer

    Full Text Available Wind-borne pollinating wasps (Agaonidae) can transport fig (Ficus sp., Moraceae) pollen over enormous distances (> 100 km). Because of their extensive breeding areas, Neotropical figs are expected to exhibit weak patterns of genetic structure at local and regional scales. We evaluated genetic structure at the regional to continental scale (Panama, Costa Rica, and Peru) for the free-standing fig species Ficus insipida. Genetic differentiation was detected only at distances > 300 km (Jost's Dest = 0.68 ± 0.07 & FST = 0.30 ± 0.03) between Mesoamerican and Amazonian sites, and evidence for phylogeographic structure (RST >> permuted RST) was only significant in comparisons between Central and South America. Further, we assessed local-scale spatial genetic structure (SGS, d ≤ 8 km) in Panama and developed an agent-based model parameterized with data from F. insipida to estimate minimum pollination distances, which determine the contribution of pollen dispersal to SGS. The local-scale data for F. insipida were compared to SGS data collected for an additional free-standing fig, F. yoponensis (subgenus Pharmacosycea), and two species of strangler figs, F. citrifolia and F. obtusifolia (subgenus Urostigma), sampled in Panama. All four species displayed significant SGS (mean Sp = 0.014 ± 0.012). Model simulations indicated that most pollination events likely occur at distances >> 1 km, largely ruling out spatially limited pollen dispersal as the determinant of SGS in F. insipida and, by extension, the other fig species. Our results are consistent with the view that Ficus develops fine-scale SGS primarily as a result of localized seed dispersal and/or clumped seedling establishment despite extensive long-distance pollen dispersal. We discuss several ecological and life history factors that could have species- or subgenus-specific impacts on the genetic structure of Neotropical figs.

  6. Subgrid-scale turbulence in shock-boundary layer flows (United States)

    Jammalamadaka, Avinash; Jaberi, Farhad


    Data generated by direct numerical simulation (DNS) for a Mach 2.75 zero-pressure gradient turbulent boundary layer interacting with shocks of different intensities are used for a priori analysis of subgrid-scale (SGS) turbulence and various terms in the compressible filtered Navier-Stokes equations. The numerical method used for DNS is based on a hybrid scheme that uses a non-dissipative central scheme in the shock-free turbulent regions and a robust monotonicity-preserving scheme in the shock regions. The behavior of SGS stresses and their components, namely Leonard, Cross and Reynolds components, is examined in various regions of the flow for different shock intensities and filter widths. The backscatter in various regions of the flow is found to be significant only instantaneously, while the ensemble-averaged statistics indicate no significant backscatter. The budgets for the SGS kinetic energy equation are examined for a better understanding of shock-turbulence interactions at the subgrid level and also with the aim of providing useful information for one-equation LES models. A term-by-term analysis of SGS terms in the filtered total energy equation indicate that while each term in this equation is significant by itself, the net contribution by all of them is relatively small. This observation is consistent with our a posteriori analysis.
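    The Leonard, Cross, and Reynolds components examined here follow from splitting each velocity into filtered and residual parts, u = bar(u) + u'. A minimal 1-D periodic sketch of the classical (non-Galilean-invariant) decomposition, whose three parts sum exactly to the SGS stress (synthetic fields, not the DNS data):

```python
import numpy as np

def box_filter(f, width=5):
    """Periodic top-hat filter along the last axis."""
    return sum(np.roll(f, s, axis=-1)
               for s in range(-(width // 2), width // 2 + 1)) / width

def sgs_stress_decomposition(u, v, width=5):
    """Leonard/Cross/Reynolds split of tau = bar(uv) - bar(u)bar(v),
    using u = bar(u) + u'."""
    ub, vb = box_filter(u, width), box_filter(v, width)
    up, vp = u - ub, v - vb
    leonard = box_filter(ub * vb, width) - ub * vb
    cross = box_filter(ub * vp + up * vb, width)
    reynolds = box_filter(up * vp, width)
    tau = box_filter(u * v, width) - ub * vb
    return leonard, cross, reynolds, tau

rng = np.random.default_rng(2)
L, C, R, tau = sgs_stress_decomposition(rng.random(64), rng.random(64))
```

    By construction L + C + R equals tau up to round-off, which is a convenient sanity check when post-processing filtered DNS data.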

  7. The Sheep Grimace Scale as an indicator of post-operative distress and pain in laboratory sheep.

    Directory of Open Access Journals (Sweden)

    C Häger

    Full Text Available The EU Directive 2010/63/EU changed the requirements regarding the use of laboratory animals and raised important issues related to assessing the severity of all procedures undertaken on laboratory animals. However, quantifiable parameters to assess severity are rare, and improved assessment strategies need to be developed. Hence, a Sheep Grimace Scale (SGS was herein established by observing and interpreting sheep facial expressions as a consequence of pain and distress following unilateral tibia osteotomy. The animals were clinically investigated and scored five days before surgery and at 1, 3, 7, 10, 14 and 17 days afterwards. Additionally, cortisol levels in the saliva of the sheep were determined at the respective time points. For the SGS, video recording was performed, and pictures of the sheep were randomized and scored by blinded observers. Osteotomy in sheep resulted in an increased clinical severity score from days 1 to 17 post-surgery and elevated salivary cortisol levels one day post-surgery. An analysis of facial expressions revealed a significantly increased SGS on the day of surgery until day 3 post-surgery; this elevated level was sustained until day 17. Clinical severity and SGS scores correlated positively with a Pearson´s correlation coefficient of 0.47. Further investigations regarding the applicability of the SGS revealed a high inter-observer reliability with an intraclass correlation coefficient of 0.92 and an accuracy of 68.2%. In conclusion, the SGS represents a valuable approach for severity assessment that may help support and refine a widely used welfare assessment for sheep during experimental procedures, thereby meeting legislation requirements and minimizing the occurrence of unrecognized distress in animal experimentation.

  8. A new mixed subgrid-scale model for large eddy simulation of turbulent drag-reducing flows of viscoelastic fluids (United States)

    Li, Feng-Chen; Wang, Lu; Cai, Wei-Hua


    A mixed subgrid-scale (SGS) model based on coherent structures and temporal approximate deconvolution (MCT) is proposed for turbulent drag-reducing flows of viscoelastic fluids. The main idea of the MCT SGS model is to perform spatial filtering for the momentum equation and temporal filtering for the conformation tensor transport equation of turbulent flow of viscoelastic fluid, respectively. The MCT model is suitable for large eddy simulation (LES) of turbulent drag-reducing flows of viscoelastic fluids in engineering applications since the model parameters can be easily obtained. The LES of forced homogeneous isotropic turbulence (FHIT) with polymer additives and turbulent channel flow with surfactant additives based on MCT SGS model shows excellent agreements with direct numerical simulation (DNS) results. Compared with the LES results using the temporal approximate deconvolution model (TADM) for FHIT with polymer additives, this mixed SGS model MCT behaves better, regarding the enhancement of calculating parameters such as the Reynolds number. For scientific and engineering research, turbulent flows at high Reynolds numbers are expected, so the MCT model can be a more suitable model for the LES of turbulent drag-reducing flows of viscoelastic fluid with polymer or surfactant additives. Project supported by the China Postdoctoral Science Foundation (Grant No. 2011M500652), the National Natural Science Foundation of China (Grant Nos. 51276046 and 51206033), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20112302110020).

  9. The subgrid-scale scalar variance under supercritical pressure conditions (United States)

    Masi, Enrica; Bellan, Josette


    To model the subgrid-scale (SGS) scalar variance under supercritical-pressure conditions, an equation is first derived for it. This equation is considerably more complex than its equivalent for atmospheric-pressure conditions. Using a previously created direct numerical simulation (DNS) database of transitional states obtained for binary-species systems in the context of temporal mixing layers, the activity of terms in this equation is evaluated, and it is found that some of these new terms have magnitude comparable to that of governing terms in the classical equation. Most prominent among these new terms are those expressing the variation of diffusivity with thermodynamic variables and Soret terms having dissipative effects. Since models are not available for these new terms that would enable solving the SGS scalar variance equation, the adopted strategy is to directly model the SGS scalar variance. Two models are investigated for this quantity, both developed in the context of compressible flows. The first one is based on an approximate deconvolution approach and the second one is a gradient-like model which relies on a dynamic procedure using the Leonard term expansion. Both models are successful in reproducing the SGS scalar variance extracted from the filtered DNS database, and moreover, when used in the framework of a probability density function (PDF) approach in conjunction with the β-PDF, they excellently reproduce a filtered quantity which is a function of the scalar. For the dynamic model, the proportionality coefficient spans a small range of values through the layer cross-stream coordinate, boding well for the stability of large eddy simulations using this model.
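    The gradient-like model mentioned above has the generic algebraic form Var_sgs ≈ C·Delta²·|grad(bar(Z))|², with the coefficient obtained dynamically in the paper via the Leonard-term expansion. A sketch with a fixed, assumed coefficient in place of the dynamic procedure (all names are illustrative):

```python
import numpy as np

def gradient_model_variance(z_bar, dx, c=0.09):
    """Gradient-type model for SGS scalar variance:
    var ~ c * dx^2 * |grad(z_bar)|^2, on a uniform grid of spacing dx.
    The constant c is an assumed placeholder; the paper's dynamic
    procedure would compute it from the Leonard term instead."""
    grads = np.gradient(z_bar, dx)
    if z_bar.ndim == 1:
        grads = [grads]          # np.gradient returns a bare array in 1-D
    mag2 = sum(g * g for g in grads)
    return c * dx ** 2 * mag2
```

    The modeled variance vanishes where the filtered scalar is uniform and grows with the resolved gradient, mirroring the behavior the DNS comparisons test.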

  10. Large eddy simulations of turbulent reacting jets (United States)

    Garrick, Sean Clifford

    The "filtered density function" methodology is implemented for large eddy simulation (LES) of three-dimensional planar and round jet flows, under both non-reacting and chemically reacting conditions. In this methodology, the effects of the unresolved scalar fluctuations are taken into account by considering the probability density function (PDF) of the sub-grid scale (SGS) scalar quantities in a stochastic manner. The influences of scalar mixing and convection within the sub-grid are taken into account via conventional methods. The FDF transport equation is solved numerically via a Lagrangian Monte Carlo scheme in which the solutions of equivalent stochastic differential equations (SDEs) are obtained. The consistency of the approach, the convergence of the FDF solution, and the performance of the closures employed in the FDF transport equation are assessed by comparisons with results obtained by conventional LES via a finite difference method (LES-FD). In non-reacting flows, the FDF solution yields results similar to those via LES-FD for the first two SGS moments. The advantage of the FDF methodology is demonstrated by its use in LES of reacting flows. In the absence of a closure for the SGS scalar fluctuations, the LES-FD results are significantly different from those obtained by the FDF. The FDF is also appraised by comparative assessments against experimental data for a non-heat-releasing turbulent round jet involving the ozone-nitric oxide chemical reaction.

  11. Large Eddy Simulations of a Premixed Jet Combustor Using Flamelet-Generated Manifolds: Effects of Heat Loss and Subgrid-Scale Models

    KAUST Repository

    Hernandez Perez, Francisco E.


    Large eddy simulations of a turbulent premixed jet flame in a confined chamber were conducted using the flamelet-generated manifold technique for chemistry tabulation. The configuration is characterized by an off-center nozzle having an inner diameter of 10 mm, supplying a lean methane-air mixture with an equivalence ratio of 0.71 and a mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the manifold via burner-stabilized flamelets, and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed probability density functions. Comparisons between numerical results and measured data show that a considerable improvement in the prediction of temperature is achieved when heat losses are included in the manifold, as compared to the adiabatic one. Additional improvement in the temperature predictions is obtained by incorporating radiative heat losses. Moreover, further enhancements in the LES predictions are achieved by employing SGS models based on transport equations, such as the SGS turbulence kinetic energy equation with dynamic coefficients. While the numerical results display good agreement up to a distance of 4 nozzle diameters downstream of the nozzle exit, the results become less satisfactory farther downstream, suggesting that further improvements in the modeling are required, in particular a more accurate model for the SGS variance of the progress variable.
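
    The presumed-PDF closure used for the SGS turbulence-chemistry interaction can be illustrated compactly: given the SGS mean and variance of the progress variable, a β-PDF is constructed and any tabulated quantity is filtered by integrating it against that PDF. The sketch below is a generic, hedged illustration; the rate function and parameter values are invented, not taken from the paper.

```python
import math
import numpy as np

def beta_pdf(c, mean, var):
    """Beta PDF on (0, 1) with prescribed mean and variance.
    Requires 0 < var < mean * (1 - mean)."""
    g = mean * (1.0 - mean) / var - 1.0
    a, b = mean * g, (1.0 - mean) * g
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return np.exp(log_norm + (a - 1.0) * np.log(c) + (b - 1.0) * np.log(1.0 - c))

def filtered_value(func, mean, var, n=4000):
    """Filtered (PDF-weighted) value of func(c) for a given SGS mean/variance."""
    c = np.linspace(1e-6, 1.0 - 1e-6, n)
    p = beta_pdf(c, mean, var)
    return float(np.sum(func(c) * p) * (c[1] - c[0]))

# illustrative "reaction-rate-like" function of the progress variable
rate = lambda c: c**2 * (1.0 - c)
print(filtered_value(rate, mean=0.5, var=0.05))
```

In an actual flamelet-generated manifold, `func` would be a table lookup in the manifold rather than an analytic expression.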

  12. Development and Validation of the Body-Focused Shame and Guilt Scale (United States)

    Weingarden, Hilary; Renshaw, Keith D.; Tangney, June P.; Wilhelm, Sabine


    Body shame is described as central in clinical literature on body dysmorphic disorder (BDD). However, empirical investigations of body shame within BDD are rare. One potential reason for the scarcity of such research may be that existing measures of body shame focus on eating and weight-based content. Within BDD, however, body shame likely focuses more broadly on shame felt in response to perceived appearance flaws in one’s body parts. We describe the development and validation of the Body-Focused Shame and Guilt Scale (BF-SGS), a measure of BDD-relevant body shame, across two studies: a two time-point study of undergraduates, and a follow-up study in two Internet-recruited clinical samples (BDD, obsessive compulsive disorder) and healthy controls. Across both studies, the BF-SGS shame subscale demonstrated strong reliability and construct validity, with Study 2 providing initial clinical norms. PMID:26640760

  13. Estimation of turbulence dissipation rate by Large eddy PIV method in an agitated vessel

    Directory of Open Access Journals (Sweden)

    Kysela Bohuš


    The distribution of the turbulent kinetic energy dissipation rate is important for the design of mixing apparatuses in the chemical industry. The experimental velocity measurement methods generally used in the complex geometry of an agitated vessel cannot resolve the small scales close to those at which turbulent dissipation takes place. Therefore, the particle image velocimetry (PIV) measurement method, improved by the large eddy PIV approach, was used. The large eddy PIV method is based on modeling the smallest eddies with a sub-grid scale (SGS) model. This method is analogous to numerical calculations using large eddy simulation (LES), and the same SGS models are used. In this work the basic Smagorinsky model was employed and compared with a power-law approximation. Time-resolved PIV data were processed by the large eddy PIV approach, and the obtained results for the turbulent kinetic energy dissipation rate were compared at selected points for several operating conditions (impeller speed, operating liquid viscosity).
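
    The large eddy PIV estimate described above can be sketched as follows: the dissipation rate is evaluated from the resolved strain rate plus a Smagorinsky eddy viscosity, ε ≈ 2(ν + ν_sgs) S_ij S_ij with ν_sgs = (C_s Δ)² |S|. This is a minimal 2D illustration on a synthetic field standing in for a PIV snapshot; the Smagorinsky constant and grid spacing are assumed values.

```python
import numpy as np

def dissipation_rate_les_piv(u, v, dx, nu, cs=0.17):
    """Local dissipation rate from a 2D PIV velocity field via the large eddy
    PIV approach with a static Smagorinsky SGS model (Cs assumed;
    dx is the PIV vector spacing, nu the kinematic viscosity)."""
    dudx, dudy = np.gradient(u, dx, dx)
    dvdx, dvdy = np.gradient(v, dx, dx)
    # resolved strain-rate tensor components (2D subset)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    ss = s11**2 + s22**2 + 2 * s12**2           # S_ij S_ij
    nu_sgs = (cs * dx) ** 2 * np.sqrt(2 * ss)   # Smagorinsky eddy viscosity
    return 2 * (nu + nu_sgs) * ss               # eps = 2 (nu + nu_sgs) SijSij

# synthetic vortex-like field standing in for a PIV snapshot
n, dx = 64, 1e-3                  # 64x64 vectors, 1 mm spacing (illustrative)
L = n * dx
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * X / L) * np.cos(2 * np.pi * Y / L)
v = -np.cos(2 * np.pi * X / L) * np.sin(2 * np.pi * Y / L)
eps = dissipation_rate_les_piv(u, v, dx, nu=1e-6)
print(float(eps.mean()))
```

A dynamic or power-law variant would replace the fixed `cs` with a locally computed coefficient.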

  14. Increased fire frequency promotes stronger spatial genetic structure and natural selection at regional and local scales in Pinus halepensis Mill. (United States)

    Budde, Katharina B; González-Martínez, Santiago C; Navascués, Miguel; Burgarella, Concetta; Mosca, Elena; Lorenzo, Zaida; Zabal-Aguirre, Mario; Vendramin, Giovanni G; Verdú, Miguel; Pausas, Juli G; Heuertz, Myriam


    The recurrence of wildfires is predicted to increase due to global climate change, resulting in severe impacts on biodiversity and ecosystem functioning. Recurrent fires can drive plant adaptation and reduce genetic diversity; however, the underlying population genetic processes have not been studied in detail. In this study, the neutral and adaptive evolutionary effects of contrasting fire regimes were examined in the keystone tree species Pinus halepensis Mill. (Aleppo pine), a fire-adapted conifer. The genetic diversity, demographic history and spatial genetic structure were assessed at local (within-population) and regional scales for populations exposed to different crown fire frequencies. Eight natural P. halepensis stands were sampled in the east of the Iberian Peninsula, five of them in a region exposed to frequent crown fires (HiFi) and three of them in an adjacent region with a low frequency of crown fires (LoFi). Samples were genotyped at nine neutral simple sequence repeats (SSRs) and at 251 single nucleotide polymorphisms (SNPs) from coding regions, some of them potentially important for fire adaptation. Fire regime had no effects on genetic diversity or demographic history. Three high-differentiation outlier SNPs were identified between HiFi and LoFi stands, suggesting fire-related selection at the regional scale. At the local scale, fine-scale spatial genetic structure (SGS) was overall weak, as expected for a wind-pollinated and wind-dispersed tree species. HiFi stands displayed a stronger SGS than LoFi stands at SNPs, which probably reflected the simultaneous post-fire recruitment of co-dispersed related seeds. SNPs with exceptionally strong SGS, a proxy for microenvironmental selection, were only reliably identified under the HiFi regime. An increase in fire frequency, as predicted under global climate change, can promote increased SGS with stronger family structures and alter natural selection in P. halepensis and in plants with similar life history traits.

  15. Evaluation of Subgrid-Scale Transport of Hydrometeors in a PDF-based Scheme using High-Resolution CRM Simulations (United States)

    Wong, M.; Ovchinnikov, M.; Wang, M.; Larson, V. E.


    In current climate models, the model resolution is too coarse to explicitly resolve deep convective systems. Parameterization schemes are therefore needed to represent the physical processes at the sub-grid scale. Recently, an approach based on assumed probability density functions (PDFs) has been developed to help unify the various parameterization schemes used in current global models. In particular, a unified parameterization scheme called the Cloud Layers Unified By Binormals (CLUBB) scheme has been developed and tested successfully for shallow boundary-layer clouds. CLUBB's implementation in the Community Atmosphere Model, version 5 (CAM5) is also being extended to treat deep convection cases, but parameterizing subgrid-scale vertical transport of hydrometeors remains a challenge. To investigate the roots of the problem and possible solutions, we generate a high-resolution benchmark simulation of a deep convection case using a cloud-resolving model (CRM) called System for Atmospheric Modeling (SAM). We use the high-resolution 3D CRM results to assess the prognostic and diagnostic higher-order moments in CLUBB that relate to the subgrid-scale transport of hydrometeors. We also analyze the heat and moisture budgets in terms of CLUBB variables from the SAM benchmark simulation. The results from this study will be used to devise a better representation of vertical subgrid-scale transport of hydrometeors by utilizing the sub-grid variability information from CLUBB.

  16. Vertical Velocities in Cumulus Convection: Implications for Climate and Prospects for Realistic Simulation at Cloud Scale (United States)

    Donner, Leo


    Cumulus mass fluxes are essential controls on the interactions between cumulus convection and large-scale flows. Cumulus parameterizations have generally been built around them, and these parameterizations are basic components of climate models. Several important questions in climate science depend also on cumulus vertical velocities. Interactions between aerosols and convection comprise a prominent example, and scale-aware cumulus parameterizations that require explicit information about cumulus areas are another. Basic progress on these problems requires realistic characterization of cumulus vertical velocities from observations and models. Recent deployments of dual-Doppler radars are providing unprecedented observations, which can be compared against cloud-resolving models (CRMs). The CRMs can subsequently be analyzed to develop and evaluate parameterizations of vertical velocities in climate models. Vertical velocities from several cloud models will be compared against observations in this presentation. CRM vertical velocities will be found to depend strongly on model resolution and treatment of sub-grid turbulence and microphysics. Although many current state-of-science CRMs do not simulate vertical velocities well, recent experiments with these models suggest that with appropriate treatments of sub-grid turbulence and microphysics robustly realistic modeling of cumulus vertical velocities is possible.

  17. Lagrangian filtered density function for LES-based stochastic modelling of turbulent dispersed flows

    CERN Document Server

    Innocenti, A; Chibbaro, S


    The Eulerian-Lagrangian approach based on Large-Eddy Simulation (LES) is one of the most promising and viable numerical tools to study turbulent dispersed flows when the computational cost of Direct Numerical Simulation (DNS) becomes too expensive. The applicability of this approach is however limited if the effects of the Sub-Grid Scales (SGS) of the flow on particle dynamics are neglected. In this paper, we propose to take these effects into account by means of a Lagrangian stochastic SGS model for the equations of particle motion. The model extends to particle-laden flows the velocity-filtered density function method originally developed for reactive flows. The underlying filtered density function is simulated through a Lagrangian Monte Carlo procedure that solves for a set of Stochastic Differential Equations (SDEs) along individual particle trajectories. The resulting model is tested for the reference case of turbulent channel flow, using a hybrid algorithm in which the fluid velocity field is provided b...
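
    A minimal sketch of the Lagrangian Monte Carlo idea: the SGS velocity seen by a particle is advanced by the Euler-Maruyama discretization of a Langevin-type SDE. The drift and diffusion used here form a generic Ornstein-Uhlenbeck closure, not the exact filtered-density-function model of the paper; the T_sgs, C0 and ε values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(u, u_filt, t_sgs, c0, eps, dt):
    """One Euler-Maruyama step of a Langevin-type SGS velocity model:
        du = -(u - u_filt)/T_sgs dt + sqrt(C0*eps) dW
    Illustrative closure only; the paper's drift and diffusion
    coefficients differ."""
    dw = rng.normal(0.0, np.sqrt(dt), size=u.shape)
    return u + (u_filt - u) / t_sgs * dt + np.sqrt(c0 * eps) * dw

# relax an ensemble of particle velocities toward a resolved LES velocity
n_particles, dt = 10_000, 1e-3
u = rng.normal(0.0, 1.0, n_particles)   # initial SGS velocity seen by particles
for _ in range(2000):
    u = langevin_step(u, u_filt=0.0, t_sgs=0.05, c0=2.1, eps=1.0, dt=dt)
# stationary variance of this linear SDE is C0*eps*T_sgs/2 = 0.0525
print(float(u.var()))
```

In a hybrid solver, `u_filt` would come from the Eulerian LES field interpolated to each particle position.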

  18. A priori study of subgrid-scale features in turbulent Rayleigh-Bénard convection (United States)

    Dabbagh, F.; Trias, F. X.; Gorobets, A.; Oliva, A.


    At the crossroads between flow topology analysis and turbulence modeling, a priori studies are a reliable tool to understand the underlying physics of the subgrid-scale (SGS) motions in turbulent flows. In this paper, properties of the SGS features in the framework of a large-eddy simulation are studied for a turbulent Rayleigh-Bénard convection (RBC). To do so, data from direct numerical simulation (DNS) of a turbulent air-filled RBC in a rectangular cavity of aspect ratio unity and π spanwise open-ended distance are used at two Rayleigh numbers Ra ∈ {10^8, 10^10} [Dabbagh et al., "On the evolution of flow topology in turbulent Rayleigh-Bénard convection," Phys. Fluids 28, 115105 (2016)]. First, DNS at Ra = 10^8 is used to assess the performance of eddy-viscosity models such as QR, Wall-Adapting Local Eddy-viscosity (WALE), and the recent S3PQR-models proposed by Trias et al. ["Building proper invariants for eddy-viscosity subgrid-scale models," Phys. Fluids 27, 065103 (2015)]. The outcomes imply that the eddy-viscosity modeling smooths the coarse-grained viscous straining and retrieves fairly well the effect of the kinetic unfiltered scales in order to reproduce the coherent large scales. However, these models fail to approach the exact evolution of the SGS heat flux and are incapable of reproducing the dominant rotational enstrophy pertaining to buoyant production. Afterwards, the key ingredients of eddy-viscosity, νt, and eddy-diffusivity, κt, are calculated a priori and revealed prevalent positive values to maintain a turbulent wind essentially driven by the mean buoyant force at the sidewalls. The topological analysis suggests that the effective turbulent diffusion paradigm and the hypothesis of a constant turbulent Prandtl number are only applicable in the large-scale strain-dominated areas in the bulk. It is shown that the bulk-dominated rotational structures of vortex-stretching (and its synchronous viscous dissipative structures) hold

  19. Stability and accuracy of relative scale factor estimates for Superconducting Gravimeters (United States)

    Wziontek, H.; Cordoba, B.; Crossley, D.; Wilmes, H.; Wolf, P.; Serna, J. M.; Warburton, R.


    Superconducting gravimeters (SG) are known to be the most sensitive and most stable gravimeters. However, reliably determining the scale factor calibration and its stability with the required precision of better than 0.1% is still an open issue. The relative comparison of temporal gravity variations due to the Earth's tides recorded with other calibrated gravimeters is one method to obtain the SG scale factor. Usually absolute gravimeters (AG) are used for such a comparison, and the stability of the scale factor can be deduced by repeated observations over a limited period, or by comparison with precise tidal models. In recent work it was shown that spring gravimeters may not be stable enough to transfer the calibration between SGs. A promising alternative is to transfer the scale factor with a well-calibrated, moveable SG. To assess the prospects of such an approach, the coherence of records from dual-sphere SGs and from two SGs operated side by side at the stations Bad Homburg and Wettzell (Germany) and other GGP sites is analysed. To determine and remove the instrumental drift, a reference time series from the combination with AG measurements is used. The reproducibility of the scale factor and the achievable precision are investigated for comparison periods of different lengths, and conclusions are drawn regarding the use of AG and the future application of the moveable iGrav™ SG.
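
    Schematically, a relative scale-factor determination reduces to a least-squares fit of the reference gravity record against the SG output, with an extra term absorbing the instrumental drift. The sketch below uses a synthetic semidiurnal tide; the noise level, drift rate and nominal scale factor are invented for illustration.

```python
import numpy as np

def estimate_scale_factor(sg_volts, ref_gravity, t):
    """Least-squares estimate of an SG scale factor (gravity units per volt)
    plus a linear instrumental drift: fit ref = s*volts + d*t + offset."""
    A = np.column_stack([sg_volts, t, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, ref_gravity, rcond=None)
    resid = ref_gravity - A @ coef
    # 1-sigma uncertainty of the scale factor from the residual variance
    cov = np.linalg.inv(A.T @ A) * resid.var(ddof=3)
    return coef[0], np.sqrt(cov[0, 0])

# synthetic record: true scale -700 nm/s^2 per volt, small linear drift
t = np.linspace(0.0, 30.0, 5000)                    # 30 days of samples
tide = 800.0 * np.sin(2 * np.pi * t / 0.5175)       # semidiurnal tide, nm/s^2
volts = tide / -700.0
ref = tide + 2.0 * t + np.random.default_rng(1).normal(0.0, 5.0, t.size)
s, sigma = estimate_scale_factor(volts, ref, t)
print(s, sigma)   # s should recover roughly -700
```

In practice the reference series comes from repeated AG observations or a calibrated companion SG, and longer comparison periods shrink the uncertainty.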

  20. Final Technical Report. Project Boeing SGS

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Thomas E. [The Boeing Company, Seattle, WA (United States)


    Boeing and its partner, PJM Interconnection, teamed to bring advanced “defense-grade” technologies for cyber security to the US regional power grid through demonstration in PJM’s energy management environment. Under this cooperative project with the Department of Energy, Boeing and PJM have developed and demonstrated a host of technologies specifically tailored to the needs of PJM and the electric sector as a whole. The team has demonstrated to the energy industry a combination of processes, techniques and technologies that have been successfully implemented in the commercial, defense, and intelligence communities to identify, mitigate and continuously monitor the cyber security of critical systems. Guided by the results of a Cyber Security Risk-Based Assessment completed in Phase I, the Boeing-PJM team has completed multiple iterations through the Phase II Development and Phase III Deployment phases. Multiple cyber security solutions have been completed across a variety of controls including: Application Security, Enhanced Malware Detection, Security Incident and Event Management (SIEM) Optimization, Continuous Vulnerability Monitoring, SCADA Monitoring/Intrusion Detection, Operational Resiliency, Cyber Range simulations and hands on cyber security personnel training. All of the developed and demonstrated solutions are suitable for replication across the electric sector and/or the energy sector as a whole. 
Benefits identified include: improved malware and intrusion detection capability on critical SCADA networks, including behavioral-based alerts, resulting in improved zero-day threat protection; improved Security Incident and Event Management system, resulting in better threat visibility and thus increasing the likelihood of detecting a serious event; improved malware detection and zero-day threat response capability; improved ability to systematically evaluate and secure in-house and vendor-sourced software applications; improved ability to continuously monitor and maintain secure configuration of network devices, resulting in reduced vulnerabilities for potential exploitation; improved overall cyber security situational awareness through the integration of multiple discrete security technologies into a single cyber security reporting console; improved ability to maintain the resiliency of critical systems in the face of a targeted cyber attack or other significant event; and improved ability to model complex networks for penetration testing and advanced training of cyber security personnel.

  1. Identification and characterization of the merozoite surface protein 1 (msp1) gene in a host-generalist avian malaria parasite, Plasmodium relictum (lineages SGS1 and GRW4) with the use of blood transcriptome. (United States)

    Hellgren, Olof; Kutzer, Megan; Bensch, Staffan; Valkiūnas, Gediminas; Palinauskas, Vaidas


    The merozoite surface protein 1 (msp1) is one of the most studied vaccine candidate genes in mammalian Plasmodium spp., used for investigations of epidemiology, population structures, and immunity to infections. However, methodological difficulties have impeded the use of nuclear markers such as msp1 in Plasmodium parasites causing avian malaria. Data from an infection transcriptome of the host-generalist avian malaria parasite Plasmodium relictum were used to identify and characterize the msp1 gene from two different isolates (mtDNA lineages SGS1 and GRW4). The aim was to investigate whether the msp1 gene in avian malaria species shares the properties of the msp1 gene in Plasmodium falciparum in terms of block variability, conserved anchor points and repeat motifs, and further to investigate the degree to which the gene might be informative in avian malaria parasites for population and epidemiological studies. Reads from 454 sequencing of birds infected with avian malaria were used to develop Sanger sequencing protocols for the msp1 gene of P. relictum. Genetic variability between variable and conserved blocks of the gene was compared within and between avian malaria parasite species, including P. falciparum. Genetic variability of the msp1 gene in P. relictum was compared with six other nuclear genes and the mtDNA gene cytochrome b. The msp1 gene of P. relictum shares the same general pattern of variable and conserved blocks as found in P. falciparum, although the variable blocks exhibited less variability than in P. falciparum. The variation across the gene blocks in P. falciparum spanned from being as conserved as within-species variation in P. relictum to being as variable as between the two avian malaria species (P. relictum and Plasmodium gallinaceum) in the variable blocks. In P. relictum the highly conserved p19 region of the peptide was identified, which included two epidermal growth factor-like domains and a fully conserved GPI anchor point. 
This

  2. Between-site differences in the scale of dispersal and gene flow in red oak.

    Directory of Open Access Journals (Sweden)

    Emily V Moran

    BACKGROUND: Nut-bearing trees, including oaks (Quercus spp.), are considered to be highly dispersal limited, leading to concerns about their ability to colonize new sites or migrate in response to climate change. However, estimating seed dispersal is challenging in species that are secondarily dispersed by animals, and differences in disperser abundance or behavior could lead to large spatio-temporal variation in dispersal ability. Parentage and dispersal analyses combining genetic and ecological data provide accurate estimates of current dispersal, while spatial genetic structure (SGS) can shed light on past patterns of dispersal and establishment. METHODOLOGY AND PRINCIPAL FINDINGS: In this study, we estimate seed and pollen dispersal and parentage for two mixed-species red oak populations using a hierarchical Bayesian approach. We compare these results to those of a genetic ML parentage model. We also test whether observed patterns of SGS in three size cohorts are consistent with known site history and current dispersal patterns. We find that, while pollen dispersal is extensive at both sites, the scale of seed dispersal differs substantially. Parentage results differ between models due to additional data included in the Bayesian model and differing genotyping error assumptions, but both indicate between-site dispersal differences. Patterns of SGS in large adults, small adults, and seedlings are consistent with known site history (farmed vs. selectively harvested) and with long-term differences in seed dispersal. This difference is consistent with predator/disperser satiation due to higher acorn production at the low-dispersal site. While this site-to-site variation results in substantial differences in asymptotic spread rates, dispersal for both sites is substantially lower than required to track latitudinal temperature shifts. 
CONCLUSIONS: Animal-dispersed trees can exhibit considerable spatial variation in seed dispersal, although patterns may

  3. Improving the representation of river-groundwater interactions in land surface modeling at the regional scale: Observational evidence and parameterization applied in the Community Land Model

    KAUST Repository

    Zampieri, Matteo


    Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in the land-vegetation-atmosphere interactions and the ecosystem dynamics. In regional-scale climate applications land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the recharge from losing streams to groundwater. Through the analysis of observed soil moisture data from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS, we provide evidence that the regional-scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (i.e., Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of re-infiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM, showing the ability to reproduce the soil moisture and surface temperature spatial variability that relates to the river distribution at the regional scale. The CLM with this new parameterization is used to evaluate impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.

  4. The sediment and phosphorus transport in a large scale study (United States)

    Bauer, Miroslav; Krása, Josef; Dostál, Tomáš; Jáchymová, Barbora


    In the context of the Water Framework Directive (2000/60/EC), there is a demand to improve the quality of water bodies. Pollution of flowing or stagnant water bodies comes from point and diffuse sources. The task of the project is to find the balance between point sources (mainly urban areas) and diffuse sources (nitrogen from drainage, and phosphorus from soil erosion and sediment transport) at the scale of the Moldau catchment. The area of interest, the Moldau river catchment (29,500 km2), has been modelled with the fully distributed WaTEM/SEDEM model. The model estimates soil erosion as well as sediment and phosphorus transport through the river network. The results are combined with estimates of bound nitrogen originating from drainage systems in the agricultural landscape. The modelling has been done at three levels of accuracy. The simulation scale itself is defined by a 10 m element resolution, with a net of critical points approximately every 300 m along the river network (116,000 points). Subsequently, results were aggregated for sub-catchments of 4th order (ca 5-15 km2 each, almost 3,000 individual sub-catchments) and sub-catchments of 3rd order (ca 400 sub-catchments). Every water reservoir in the system larger than 0.25 ha has been included, amounting to more than 12,000 reservoirs. The presented approach will be further used by the Moldau river catchment managers for planning the protection and elimination of pollution in the Moldau river catchment. This will localize 3,000 highly endangered hot spots that significantly threaten the water bodies. In these localities, detailed modelling and design of protection measures will be done. The research activities were supported by the QJ330118, SGS14/180/OHK1/3T/11 and SGS17/090/OHK1/3T/11 grants.

  5. Workshop on Human Activity at Scale in Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Melissa R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Coletti, Mark A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kennedy, Joseph H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nair, Sujithkumar S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Omitaomu, Olufemi A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)


    Changing human activity within a geographical location may have a significant influence on the global climate, but that activity must be parameterized in such a way as to allow these high-resolution sub-grid processes to affect global climate within that modeling framework. Additionally, we must have tools that provide decision support and inform local and regional policies regarding mitigation of and adaptation to climate change. The development of next-generation earth system models, that can produce actionable results with minimum uncertainties, depends on understanding global climate change and human activity interactions at policy implementation scales. Unfortunately, at best we currently have only limited schemes for relating high-resolution sectoral emissions to real-time weather, which ultimately becomes part of larger regions and the well-mixed atmosphere. Moreover, even our understanding of meteorological processes at these scales is imperfect. This workshop addresses these shortcomings by providing a forum for discussion of what we know about these processes, what we can model, where we have gaps in these areas and how we can rise to the challenge to fill these gaps.

  6. Cosmological dark turbulence and scaling relations in self-gravitating systems (United States)

    Nakamichi, A.; Morikawa, M.

    Many scaling relations have been observed for self-gravitating systems (SGS) in the universe. We explore a consistent understanding of them from a simple principle based on the proposal that the collision-less dark matter (DM) fluid turns into a turbulent state, i.e. dark turbulence, after crossing the caustic surface in the non-linear stage. After deriving Kolmogorov scaling laws from the Navier-Stokes and Jeans equations by the method used in solving the Smoluchowski coagulation equation, we apply this to several observations such as the scale-dependent velocity dispersion, the mass-luminosity ratio, and the mass-angular momentum relation. They all point to a concordant value for the constant energy flow per unit mass: 0.3 cm2/s3, which may be understood as the speed of the hierarchical coalescence process in the cosmic structure formation.
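
    The quoted energy flow per unit mass can be turned into a quick order-of-magnitude check: Kolmogorov scaling gives a velocity dispersion σ ~ (εL)^(1/3). The values below are purely illustrative.

```python
# Order-of-magnitude check of sigma ~ (eps * L)^(1/3) implied by a constant
# energy flow per unit mass eps ~ 0.3 cm^2/s^3 (illustrative values only).
eps = 0.3e-4                 # 0.3 cm^2/s^3 expressed in m^2/s^3
kpc = 3.086e19               # metres per kiloparsec
sigma = {L: (eps * L * kpc) ** (1.0 / 3.0) for L in (1, 10, 100)}
for L, s in sigma.items():
    print(f"L = {L:4d} kpc  ->  sigma ~ {s / 1e3:6.0f} km/s")
```

At L ~ 10 kpc this yields a dispersion of order 200 km/s, comparable to galactic rotation speeds, which is the kind of concordance the abstract refers to.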

  7. Intercomparison of different subgrid-scale models for the Large Eddy Simulation of the diurnal evolution of the atmospheric boundary layer during the Wangara experiment (United States)

    Dall'Ozzo, C.; Carissimo, B.; Musson-Genon, L.; Dupont, E.; Milliez, M.


    The study of a whole diurnal cycle of the atmospheric boundary layer evolving through unstable, neutral and stable states is essential to test a model applicable to the dispersion of pollutants. Consequently, a LES of a diurnal cycle is performed and compared to observations from the Wangara experiment (Day 33-34). All simulations are done with Code_Saturne [1], an open-source CFD code. The synthetic eddy method (SEM) [2] is implemented to initialize turbulence at the beginning of the simulation. Two different subgrid-scale (SGS) models are tested: the Smagorinsky model [3],[4] and the dynamical Wong and Lilly model [5]. The first one, the most classical, uses a Smagorinsky constant Cs to parameterize the dynamical turbulent viscosity, while the second one relies on a variable C. Cs remains insensitive to the atmospheric stability level, in contrast to the parameter C determined by the Wong and Lilly model, which is based on minimizing the error between the resolved turbulent stress tensor (Lij) and the difference of the SGS stress tensors at two different filter scales (Mij). Furthermore, the thermal eddy diffusivity, as opposed to the Smagorinsky model, is calculated with a dynamical Prandtl number determination. The results are compared with previous simulations from Basu et al. (2008) [6], using a locally averaged scale-dependent dynamic (LASDD) SGS model, and with previous RANS simulations. The accuracy in reproducing the experimental atmospheric conditions is discussed, especially regarding the night-time low-level jet formation. In addition, the benefit of using a coupled radiative model is discussed.
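
    The dynamic procedure sketched above determines the coefficient by a least-squares contraction of the Leonard stress L_ij with M_ij, roughly C = ⟨L_ij M_ij⟩ / (2⟨M_ij M_ij⟩). The following is a hedged 2D illustration with a 3x3 top-hat test filter; the Wong and Lilly model differs in detail, and the field here is synthetic.

```python
import numpy as np

def box_filter(f):
    """3x3 top-hat test filter on a periodic grid (test filter width ~2*Delta)."""
    out = np.zeros_like(f)
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            out += np.roll(np.roll(f, i, axis=0), j, axis=1)
    return out / 9.0

def dynamic_coefficient(u, v, dx):
    """Germano/Lilly-type least-squares dynamic coefficient for a 2D field.
    Illustrative only; not the exact Wong and Lilly formulation."""
    def strain(a, b):
        dadx, dady = np.gradient(a, dx, dx)
        dbdx, dbdy = np.gradient(b, dx, dx)
        s11, s22, s12 = dadx, dbdy, 0.5 * (dady + dbdx)
        mag = np.sqrt(2 * (s11**2 + s22**2 + 2 * s12**2))
        return s11, s22, s12, mag

    s11, s22, s12, smag = strain(u, v)
    uf, vf = box_filter(u), box_filter(v)
    S11, S22, S12, Smag = strain(uf, vf)

    # Leonard stress: resolved stress between grid and test filter levels
    l11 = box_filter(u * u) - uf * uf
    l22 = box_filter(v * v) - vf * vf
    l12 = box_filter(u * v) - uf * vf
    # M_ij for a test-to-grid filter-width ratio of 2
    m11 = box_filter(dx**2 * smag * s11) - (2 * dx) ** 2 * Smag * S11
    m22 = box_filter(dx**2 * smag * s22) - (2 * dx) ** 2 * Smag * S22
    m12 = box_filter(dx**2 * smag * s12) - (2 * dx) ** 2 * Smag * S12

    lm = l11 * m11 + l22 * m22 + 2 * l12 * m12
    mm = m11 * m11 + m22 * m22 + 2 * m12 * m12
    return 0.5 * lm.mean() / mm.mean()   # volume-averaged dynamic coefficient

n, dx = 64, 1.0 / 64
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y) + 0.1 * rng.normal(size=(n, n))
v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y) + 0.1 * rng.normal(size=(n, n))
c_dyn = dynamic_coefficient(u, v, dx)
print(float(c_dyn))
```

In a stability-aware model like Wong and Lilly's, this contraction is evaluated locally (with averaging or clipping for stability), so the coefficient responds to the atmospheric stability state, unlike a fixed Cs.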

  8. Simple lattice Boltzmann subgrid-scale model for convectional flows with high Rayleigh numbers within an enclosed circular annular cavity (United States)

    Chen, Sheng; Tölke, Jonas; Krafczyk, Manfred


    Natural convection within an enclosed circular annular cavity formed by two concentric vertical cylinders is of fundamental interest and practical importance. Generally, the assumption of axisymmetric thermal flow is adopted for simulating such natural convection, and this assumption remains valid even for some turbulent convection. Usually the Rayleigh numbers (Ra) of realistic flows are very high. However, work on designing suitable and efficient lattice Boltzmann (LB) models for such flows is quite rare. To bridge the gap, in this paper a simple LB subgrid-scale (SGS) model, based on our recent work [S. Chen, J. Tölke, and M. Krafczyk, Phys. Rev. E 79, 016704 (2009); S. Chen, J. Tölke, S. Geller, and M. Krafczyk, Phys. Rev. E 78, 046703 (2008)], is proposed for simulating convectional flow with high Ra within an enclosed circular annular cavity. The key parameter for the SGS model can be evaluated quite easily and efficiently by the present model. The numerical experiments demonstrate that the present model works well for a large range of Ra and Prandtl numbers (Pr). Though in the present study a widely used static Smagorinsky turbulence model is adopted to demonstrate how to develop an LB SGS model for simulating axisymmetric thermal flows with high Ra, other state-of-the-art turbulence models can be incorporated into the present model in the same way. In addition, the present model can be extended straightforwardly to simulate other axisymmetric convectional flows with high Ra, for example turbulent convection with internal volumetric heat generation in a vertical cylinder, which is an important simplified representation of a nuclear reactor.
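
    How a static Smagorinsky eddy viscosity enters a BGK-type LB relaxation time can be sketched as follows; this is a generic construction in lattice units, and the strain-rate magnitude here is a placeholder value rather than one evaluated from non-equilibrium moments as an LB code would do:

```python
# Generic sketch: folding a Smagorinsky eddy viscosity into a BGK
# lattice Boltzmann relaxation time (lattice units, cs^2 = 1/3).
# nu0, c_smag, delta and s_mag below are illustrative placeholders.
CS2 = 1.0 / 3.0        # lattice speed of sound squared

def effective_tau(nu0, c_smag, delta, s_mag):
    nu_t = (c_smag * delta) ** 2 * s_mag   # Smagorinsky eddy viscosity
    return (nu0 + nu_t) / CS2 + 0.5        # from nu = cs^2 * (tau - 0.5)

tau = effective_tau(nu0=1e-4, c_smag=0.1, delta=1.0, s_mag=0.05)
```

    The turbulence model thus reduces to a local adjustment of the relaxation time, which is why such SGS extensions are cheap in LB codes.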

  9. Large eddy simulation of zero-pressure-gradient turbulent boundary layer based on different scaling laws (United States)

    Cheng, Wan; Samtaney, Ravi


    We present results of large eddy simulation (LES) for a smooth-wall, zero-pressure-gradient turbulent boundary layer. We employ the stretched vortex sub-grid-scale model in the simulations, augmented by a wall model. Our wall model is based on the virtual-wall model introduced by Chung & Pullin (J. Fluid Mech 2009). An essential component of their wall model is an ODE governing the local wall-normal velocity gradient, obtained using an inner-scaling ansatz. We test two variants of the wall model based on different similarity laws: one is based on a log law and the other on a power law. The specific form of power-law scaling utilized is that proposed by George & Castillo (Appl. Mech. Rev. 1997), dubbed the ``GC Law''. Turbulent inflow conditions are generated by a recycling method, applying the scaling laws corresponding to the two variants of the wall model together with a uniform way of determining the inlet friction velocity. For Reynolds numbers based on momentum thickness, Reθ, ranging from 10^4 to 10^12, it is found that the velocity profiles generally follow the log-law form rather than the power law. In the large Reynolds number asymptotic regime, LES based on the two scaling laws show little difference in boundary layer thickness and turbulence intensities. Supported by a KAUST funded project on large eddy simulation of turbulent flows. The IBM Blue Gene P Shaheen at KAUST was utilized for the simulations.
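
    The log-law variant of a friction-velocity determination can be illustrated with a simple fixed-point iteration; the constants and flow numbers below are illustrative, not those of the simulations:

```python
import math

# Sketch: recover the friction velocity u_tau from a log-law profile
# U(y) = u_tau * ((1/kappa) * ln(y * u_tau / nu) + B) by fixed-point
# iteration. kappa, B and the flow numbers are illustrative choices.
KAPPA, B = 0.41, 5.0

def u_tau_from_loglaw(U, y, nu, guess=0.05, iters=100):
    u_t = guess
    for _ in range(iters):
        u_t = U / (math.log(y * u_t / nu) / KAPPA + B)
    return u_t

u_tau = u_tau_from_loglaw(U=10.0, y=0.1, nu=1.5e-5)
```

    Swapping the log law for a power-law similarity form changes only the expression inside the iteration, which is essentially how the two wall-model variants differ.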

  10. Incorporating channel geometric uncertainty into a regional scale flood inundation model (United States)

    Neal, Jeffrey; Odoni, Nick; Trigg, Mark; Freer, Jim; Bates, Paul


    Models that simulate the dynamics of river and floodplain water surface elevations over large regions have a wide range of applications including regional scale flood risk estimation and simulating wetland inundation dynamics, while potential emerging applications include estimating river discharge from level observations as part of a data assimilation system. The river routing schemes used by global land surface models are often relatively simple in that they are based on wave speed, kinematic and diffusive physics. However, as the research on large scale river modelling matures, approaches are being developed that resemble scaled-up versions of the hydrodynamic models traditionally applied to rivers at the reach scale. These developments are not surprising given that such models can be significantly more accurate than traditional routing schemes at simulating water surface elevation. This presentation builds on the work of Neal et al. (2012) who adapted a reach scale dynamic flood inundation model for large scale application with the addition of a sub-grid parameterisation for channel flow. The scheme was shown to be numerically stable and scalable, with the aid of some simple test cases, before it was applied to an 800 km reach of the River Niger that includes the complex waterways and lakes of the Niger Inland Delta in Mali. However, the model was significantly less accurate at low to moderate flows than at high flow due, in part, to assuming that the channel geometry was rectangular. Furthermore, this made it difficult to calibrate channel parameters with water levels during typical flow conditions. This presentation will describe an extension of this sub-grid model that allows the channel shape to be defined as an exponent of width, along with a regression based approach to approximate the wetted perimeter length for the new geometry. By treating the geometry in this way uncertainty in the channel shape can be considered as a model parameter, which for the
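
    One way to sketch a non-rectangular sub-grid channel of the kind described is with a hypothetical power-law width-depth relation; p = 0 recovers the rectangular case the original model assumed, and this is an illustration, not the paper's exact formulation:

```python
import math

# Illustrative power-law channel: width varies with depth as
# w(z) = w_bank * (z / h_bank)**p. Flow area comes from integrating
# the width; the wetted perimeter is summed numerically from the
# half-width profile. All parameter values below are hypothetical.
def channel_area(depth, w_bank, h_bank, p):
    # integral of w_bank * (z / h_bank)**p dz from 0 to depth
    return w_bank * depth * (depth / h_bank) ** p / (p + 1.0)

def wetted_perimeter(depth, w_bank, h_bank, p, n=10000):
    if p == 0:                      # rectangle: bed plus two vertical walls
        return w_bank + 2.0 * depth
    dz = depth / n
    half = lambda z: 0.5 * w_bank * (z / h_bank) ** p
    per = 0.0
    for i in range(n):
        dx = half((i + 1) * dz) - half(i * dz)
        per += 2.0 * math.hypot(dx, dz)   # both banks
    return per

area_rect = channel_area(2.0, w_bank=10.0, h_bank=2.0, p=0.0)   # 20 m^2
```

    Treating the exponent p as a calibration parameter is the sense in which channel-shape uncertainty becomes a model parameter.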

  11. Advanced subgrid-scale modeling for convection-dominated species transport at fluid interfaces with application to mass transfer from rising bubbles (United States)

    Weiner, Andre; Bothe, Dieter


    This paper presents a novel subgrid scale (SGS) model for simulating convection-dominated species transport at deformable fluid interfaces. One possible application is the Direct Numerical Simulation (DNS) of mass transfer from rising bubbles. The transport of a dissolving gas along the bubble-liquid interface is determined by two transport phenomena: convection in the streamwise direction and diffusion in the interface-normal direction. The convective transport for technical bubble sizes is several orders of magnitude stronger, leading to a thin concentration boundary layer around the bubble. A true DNS, fully resolving both hydrodynamic and mass transfer length scales, results in infeasible computational costs. Our approach is therefore a DNS of the flow field combined with an SGS model to compute the mass transfer between bubble and liquid. An appropriate model function is used to compute the numerical fluxes on all cell faces of an interface cell. This makes it possible to predict the mass transfer correctly even if the concentration boundary layer is fully contained in a single cell layer around the interface. We show that the SGS model reduces the resolution requirements at the interface by a factor of ten and more. The integral flux correction is also applicable to other thin boundary layer problems. Two flow regimes are investigated to validate the model. A semi-analytical solution for creeping flow is used to assess local and global mass transfer quantities. For higher Reynolds numbers ranging from Re = 100 to Re = 460 and Péclet numbers between Pe = 10^4 and Pe = 4·10^6, we compare the global Sherwood number against correlations from the literature. In terms of accuracy, the predicted mass transfer never deviates more than 4% from the reference values.
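
    For orientation on the magnitudes involved, a classical boundary-layer estimate for the global Sherwood number of a bubble with a mobile interface is Sh ≈ (2/√π)·√Pe; this is a textbook correlation used here for illustration only, not the paper's SGS model:

```python
import math

# Hedged sketch: classical mobile-interface estimate Sh ~ (2/sqrt(pi)) * sqrt(Pe),
# evaluated at the ends of the Peclet-number range studied above.
# This is a generic correlation, not the SGS model of the paper.
def sherwood_mobile_interface(peclet):
    return 2.0 / math.sqrt(math.pi) * math.sqrt(peclet)

sh_low = sherwood_mobile_interface(1.0e4)    # lower end of the Pe range
sh_high = sherwood_mobile_interface(4.0e6)   # upper end of the Pe range
```

    The square-root growth of Sh with Pe is precisely why the concentration boundary layer becomes so thin, and why an SGS flux correction pays off.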

  12. Large Eddy Simulation (LES) for IC Engine Flows

    Directory of Open Access Journals (Sweden)

    Kuo Tang-Wei


    Full Text Available Numerical computations are carried out using an engineering-level Large Eddy Simulation (LES model that is provided by a commercial CFD code CONVERGE. The analytical framework and experimental setup consist of a single cylinder engine with Transparent Combustion Chamber (TCC under motored conditions. A rigorous working procedure for comparing and analyzing the results from simulation and high speed Particle Image Velocimetry (PIV experiments is documented in this work. The following aspects of LES are analyzed using this procedure: number of cycles required for convergence with adequate accuracy; effect of mesh size, time step, sub-grid-scale (SGS turbulence models and boundary condition treatments; application of the proper orthogonal decomposition (POD technique.
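
    The snapshot-POD step of the analysis procedure can be sketched with an SVD; the random matrix below stands in for actual PIV or LES velocity cycles:

```python
import numpy as np

# Snapshot POD via SVD: stack each velocity snapshot as a column,
# subtract the ensemble mean, and decompose the fluctuations.
# The synthetic "snapshots" are random data standing in for real cycles.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((500, 40))      # 500 grid points, 40 cycles

mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field
modes, sing_vals, _ = np.linalg.svd(fluct, full_matrices=False)
energy = sing_vals ** 2 / np.sum(sing_vals ** 2)   # fractional modal energy
```

    The cumulative sum of `energy` is what convergence studies of cycle count typically track: how many cycles are needed before the leading modes stabilize.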

  13. Exploring the Limits of the Dynamic Procedure for Modeling Subgrid-Scale Stresses in LES of Inhomogeneous Flows. (United States)

    Le, A.-T.; Kim, J.; Coleman, G.


    One of the primary reasons dynamic subgrid-scale (SGS) models are more successful than those that are `hand-tuned' is thought to be their insensitivity to numerical and modeling parameters. Jiménez has recently demonstrated that large-eddy simulations (LES) of decaying isotropic turbulence using a dynamic Smagorinsky model yield correct decay rates -- even when the model is subjected to a range of artificial perturbations. The objective of the present study is to determine to what extent this `self-adjusting' feature of dynamic SGS models is found in LES of inhomogeneous flows. The effects of numerical and modeling parameters on the accuracy of LES solutions of fully developed and developing turbulent channel flow are studied, using a spectral code and various dynamic models (including those of Lilly et al. and Meneveau et al.); other modeling parameters tested include the filter-width ratio and the effective magnitude of the Smagorinsky coefficient. Numerical parameters include the form of the convective term and the type of test filter (sharp-cutoff versus tophat). The resulting LES statistics are found to be surprisingly sensitive to the various parameter choices, which implies that more care than is needed for homogeneous-flow simulations must be exercised when performing LES of inhomogeneous flows.
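
    A schematic one-dimensional version of the least-squares dynamic procedure (Lilly's form), with a top-hat test filter of twice the grid width; sign and averaging conventions vary between implementations, so this is orientation only and not the exact formulation used in the study:

```python
import numpy as np

# Schematic 1-D least-squares (Lilly-type) dynamic procedure:
# C*Delta^2 = <L M> / <M M>, with Leonard term L = filt(u u) - filt(u) filt(u).
# Field, filter, and conventions are illustrative; real codes differ in detail.
def tophat(u):
    # discrete top-hat test filter of width 2 * dx (periodic)
    return 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.3 * np.sin(5.0 * x + 1.0)       # synthetic resolved field

dudx = np.gradient(u, dx)
leonard = tophat(u * u) - tophat(u) ** 2           # resolved (Leonard) stress
s_grid = np.abs(dudx) * dudx                       # grid-level |S| S
dudx_t = np.gradient(tophat(u), dx)
s_test = np.abs(dudx_t) * dudx_t                   # test-level |S~| S~
m = 2.0 * dx ** 2 * (4.0 * s_test - tophat(s_grid))  # (2 Delta)^2 = 4 Delta^2

c_delta2 = np.mean(leonard * m) / np.mean(m * m)   # averaged dynamic coefficient
```

    The choice of averaging region for the numerator and denominator (here the whole domain) is exactly one of the modeling parameters whose influence the study probes.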

  14. Thermodynamics, maximum power, and the dynamics of preferential river flow structures at the continental scale

    Directory of Open Access Journals (Sweden)

    A. Kleidon


    Full Text Available The organization of drainage basins shows some reproducible phenomena, as exemplified by self-similar fractal river network structures and typical scaling laws, and these have been related to energetic optimization principles, such as minimization of stream power, minimum energy expenditure or maximum "access". Here we describe the organization and dynamics of drainage systems using thermodynamics, focusing on the generation, dissipation and transfer of free energy associated with river flow and sediment transport. We argue that the organization of drainage basins reflects the fundamental tendency of natural systems to deplete driving gradients as fast as possible through the maximization of free energy generation, thereby accelerating the dynamics of the system. This effectively results in the maximization of sediment export to deplete topographic gradients as fast as possible and potentially involves large-scale feedbacks to continental uplift. We illustrate this thermodynamic description with a set of three highly simplified models related to water and sediment flow and describe the mechanisms and feedbacks involved in the evolution and dynamics of the associated structures. We close by discussing how this thermodynamic perspective is consistent with previous approaches and the implications that such a thermodynamic description has for the understanding and prediction of sub-grid scale organization of drainage systems and preferential flow structures in general.

  15. A nonlinear structural subgrid-scale closure for compressible MHD Part I: derivation and energy dissipation properties

    CERN Document Server

    Vlaykov, Dimitar G; Schmidt, Wolfram; Schleicher, Dominik R G


    Compressible magnetohydrodynamic (MHD) turbulence is ubiquitous in astrophysical phenomena ranging from the intergalactic to the stellar scales. In studying these phenomena, numerical simulations are nearly inescapable, due to the large degree of nonlinearity involved. However, the dynamical ranges of these phenomena are much larger than what is computationally accessible. In large eddy simulations (LES), the resulting limited-resolution effects are addressed explicitly by introducing into the equations of motion additional terms associated with the unresolved, subgrid-scale (SGS) dynamics. This renders the system unclosed. We derive a set of nonlinear structural closures for the ideal MHD LES equations with particular emphasis on the effects of compressibility. The closures are based on a gradient expansion of the finite-resolution operator (W.K. Yeo CUP 1993, ed. Galperin & Orszag) and require no assumptions about the nature of the flow or magnetic field. Thus the scope of their applicability ranges from the sub- to ...

  16. Assessment of the Suitability of a Global Hydrodynamic Model in Simulating a Regional-scale Extreme Flood at Finer Spatial Resolutions (United States)

    Mateo, C. M. R.; Yamazaki, D.; Kim, H.; Champathong, A.; Oki, T.


    Global river models (GRMs) are essential for large-scale predictions and impact analyses. However, they have limited capability in providing accurate flood information at fine resolution for practical purposes. Hyperresolution (~1 km resolution) modelling is believed to improve the representation of topographical constraints, which consequently results in better predictions of surface water flows and flood inundation at regional to global scales. While numerous studies have shown that finer resolutions improve the predictions of catchment-scale floods using local-scale hydrodynamic models, the impact of finer spatial resolution on predictions of large-scale floods using GRMs is rarely examined. In this study, we assessed the suitability of a state-of-the-art hydrodynamic GRM, CaMa-Flood, for the hyperresolution simulation of a regional-scale flood. The impacts of finer spatial resolution and of the representation of sub-grid processes on simulating the 2011 immense flooding in the Chao Phraya River Basin, Thailand were investigated. River maps ranging from 30-arcsecond (~1 km) to 5-arcminute (~10 km) spatial resolutions were generated from 90 m resolution HydroSHEDS maps and SRTM3 DEM. Simulations were executed at each spatial resolution with the new multi-directional downstream connectivity (MDC) scheme in CaMa-Flood turned on and off. While the predictive capability of the model slightly improved with finer spatial resolution when the MDC scheme was turned on, it significantly declined when the MDC scheme was turned off; bias increased by 35% and the NSE coefficient decreased by 60%. These findings indicate that GRMs which assume single-downstream-grid flows are not suitable for hyperresolution modelling because of their limited capability to realistically represent floodplain connectivity. When simulating large-scale floods, the MDC scheme is necessary for the following functions: providing additional storage for overbank flows, enhancing connectivity between floodplains which allows more realistic
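
    The two skill scores used in the comparison, percent bias and the Nash-Sutcliffe efficiency (NSE), can be computed as follows; the discharge series are made-up illustrative numbers:

```python
# Percent bias and Nash-Sutcliffe efficiency for a simulated vs an
# observed discharge series. The series below are illustrative values.
def pbias(sim, obs):
    return 100.0 * sum(s - o for s, o in zip(sim, obs)) / sum(obs)

def nse(sim, obs):
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [100.0, 150.0, 300.0, 250.0, 120.0]
sim = [110.0, 160.0, 280.0, 270.0, 130.0]
```

    NSE = 1 is a perfect fit and NSE = 0 means the model is no better than the observed mean, which is why a 60% drop in NSE signals a substantial loss of skill.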

  17. CFD analysis of bubble microlayer and growth in subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Owoeye, Eyitayo James; Schubring, DuWanye


    Highlights: • A new LES-microlayer model is introduced. • Analogous to the unresolved SGS in LES, analysis of the bubble microlayer was performed. • The thickness of the bubble microlayer was computed at both steady and transient states. • The macroscale two-phase behavior was captured with VOF coupled with AMR. • Numerical validations were performed for both the micro- and macro-region analyses. - Abstract: A numerical study of single bubble growth in turbulent subcooled flow boiling was carried out. The macro- and micro-regions of the bubble were analyzed by introducing an LES-microlayer model. Analogous to the unresolved sub-grid scale (SGS) in LES, a microlayer analysis was performed to capture the unresolved thermal scales for the micro-region heat transfer by deriving equations for the microlayer thickness at steady and transient states. The phase change in the macro-region was based on the Volume-of-Fluid (VOF) interface-tracking method coupled with adaptive mesh refinement (AMR). Large Eddy Simulation (LES) was used to model the turbulence characteristics. The numerical model was validated against multiple experimental data sets from the open literature. This study includes parametric variations that cover the operating conditions of boiling water reactors (BWR) and pressurized water reactors (PWR). The numerical model was used to study the microlayer thickness, growth rate, dynamics, and distortion of the bubble.

  18. On the Effect of an Anisotropy-Resolving Subgrid-Scale Model on Turbulent Vortex Motions (United States)


    expression coincides with the modified Leonard stress proposed by Germano et al. (1991). In this model, the SGS turbulence energy kSGS may be evaluated as... Germano subgrid-scale closure method. Phys. Fluids A, Vol. 4, pp. 633-635. Morinishi, Y. and Vasilyev, O.V. (2001), A recommended modification to the

  19. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.


    At present, continental to global scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  20. Large-scale simulation of karst processes - parameter estimation, model evaluation and quantification of uncertainty (United States)

    Hartmann, A. J.


    Heterogeneity is an intrinsic property of karst systems. It results in complex hydrological behavior that is characterized by an interplay of diffuse and concentrated flow and transport. In large-scale hydrological models, these processes are usually not considered. Instead, average or representative values are chosen for each of the simulated grid cells, omitting many aspects of their sub-grid variability. In karst regions, this may lead to unreliable predictions when those models are used for assessing future water resources availability, floods or droughts, or when they are used for recommendations for more sustainable water management. In this contribution I present a large-scale groundwater recharge model (0.25° x 0.25° resolution) that takes karst hydrological processes into account by using statistical distribution functions to express subsurface heterogeneity. The model is applied over Europe's and the Mediterranean's carbonate rock regions (about 25% of the total area). As no measurements of the variability of subsurface properties are available at this scale, a parameter estimation procedure, which uses latent heat flux and soil moisture observations and quantifies the remaining uncertainty, was applied. The model is evaluated by sensitivity analysis, comparison to other large-scale models without karst processes included, and independent recharge observations. Using historic data (2002-2012) I can show that recharge rates vary strongly over Europe and the Mediterranean. In regions with little information for parameter estimation there is a larger prediction uncertainty (for instance in desert regions). Evaluation with independent recharge estimates shows that, on average, the model provides acceptable estimates, while the other large scale models under-estimate karstic recharge. The results of the sensitivity analysis corroborate the importance of including karst heterogeneity in the model, as the distribution shape factor is the most sensitive parameter for

  1. Concepts of scale and scaling (United States)

    Jianguo Wu; Harbin Li


    The relationship between pattern and process is of great interest in all natural and social sciences, and scale is an integral part of this relationship. It is now well documented that biophysical and socioeconomic patterns and processes operate on a wide range of spatial and temporal scales. In particular, the scale multiplicity and scale dependence of pattern,...

  2. Large scale hydrological studies for the benefit of water resources management - looking up or down? (United States)

    Tallaksen, Lena M.


    Hydrological information at the macro scale has become increasingly available through the establishment of global archives of hydrological observations (e.g. the Global Runoff Data Centre) and the development of hydrological models for the purpose of water resource assessments and climate change impact studies at the global and continental scale. As such, it has contributed to improved knowledge of the present state of global water resources and variability across large spatial domains, the role of terrestrial hydrology in earth system models, and the influence of climate variability and change on continental hydrology, including extremes. Recent advances include, among others, improved representation of subsurface hydrology and land-surface atmosphere feedback processes. Models are further adapted to multiple sources of input data, including remote sensing products, which in turn has facilitated the development of global and continental scale flood and drought monitoring and forecasting systems (e.g. the European Flood Awareness System and the Global Integrated Drought Monitoring and Prediction System). Nevertheless, there are several challenges related to large-scale modelling due to limited data for ground truth (e.g. soil moisture, groundwater, streamflow), large differences in data availability and quality across regions, sub-grid variability, downscaled and bias-corrected climate data as driving force, etc. These limitations have called into question the usefulness of large-scale model simulations for water resource management and policy making at various scales. Still, one can argue that such models represent a useful source of information, particularly for continental-scale hydrological assessments and evidence-based policy making at the EU level, as up-to-date, consistent hydrological data are not easily available across national borders. Transfer of knowledge across scales is essential to improve hydrologic predictions at different spatial scales in an ever

  3. Explicit filtering in large eddy simulation using a discontinuous Galerkin method (United States)

    Brazell, Matthew J.

    The discontinuous Galerkin (DG) method is a formulation of the finite element method (FEM). DG provides the ability for a high order of accuracy in complex geometries, and allows for highly efficient parallelization algorithms. These attributes make the DG method attractive for solving the Navier-Stokes equations for large eddy simulation (LES). The main goal of this work is to investigate the feasibility of adopting an explicit filter in the numerical solution of the Navier-Stokes equations with DG. Explicit filtering has been shown to increase the numerical stability of under-resolved simulations and is needed for LES with dynamic sub-grid scale (SGS) models. The explicit filter takes advantage of DG's framework, where the solution is approximated using a polynomial basis and the higher modes of the solution correspond to a higher order polynomial basis. By removing high order modes, the filtered solution contains low order frequency content, much like an explicit low pass filter. The explicit filter implementation is tested on a simple 1-D solver with an initial condition that has some similarity to turbulent flows. The explicit filter does restrict the resolution as well as remove accumulated energy in the higher modes from aliasing. However, the explicit filter is unable to remove numerical errors causing numerical dissipation. A second test case solves the 3-D Navier-Stokes equations for the Taylor-Green vortex flow (TGV). The TGV is useful for SGS model testing because it is initially laminar and transitions into a fully turbulent flow. The SGS models investigated include the constant coefficient Smagorinsky model, the dynamic Smagorinsky model, and the dynamic Heinz model. The constant coefficient Smagorinsky model is over-dissipative; this is generally not desirable, though it does add stability. The dynamic Smagorinsky model generally performs better, especially during the laminar-turbulent transition region, as expected. The dynamic Heinz model which is
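
    The modal cutoff filter described can be sketched on a single element using a Legendre basis; the element solution and cutoff order below are illustrative choices:

```python
import numpy as np
from numpy.polynomial import legendre

# Modal cutoff filter on one DG element: project a nodal solution onto
# Legendre polynomials, zero the coefficients above a cutoff order, and
# transform back. The sample profile and cutoff are illustrative.
p_order = 8
nodes = np.cos(np.pi * np.arange(p_order + 1) / p_order)  # Lobatto-type points
u = np.tanh(5.0 * nodes)                  # sharp profile on [-1, 1]

coeffs = legendre.legfit(nodes, u, p_order)   # modal (Legendre) coefficients
cutoff = 4
filtered_coeffs = coeffs.copy()
filtered_coeffs[cutoff + 1:] = 0.0            # discard the high-order modes
u_filtered = legendre.legval(nodes, filtered_coeffs)
```

    In practice a smooth exponential damping of the high modes is often preferred to the hard cutoff shown here, but the modal mechanism is the same.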

  4. Impacts of small-scale variability on the determination of bulk thermal diffusivity in snowpacks (United States)

    Oldroyd, H. J.; Higgins, C. W.; Huwald, H.; Selker, J. S.; Parlange, M. B.


    Thermal diffusivity of snow is an important physical property associated with key hydrological phenomena such as snowmelt and heat and water vapor exchange with the atmosphere. These phenomena have broad implications in studies of climate and heat and water budgets on many scales. Furthermore, sub-grid scale phenomena may enhance these heat and mass exchanges in the snowpack due to its porous nature. We hypothesize that the heat transfer effects of these small-scale variabilities may be seen as an increased bulk thermal diffusivity of the snow. Direct measurements of snow thermal diffusivity require coupled measurements of thermal conductivity and density, which are nonstationary due to snow metamorphism. Furthermore, thermal conductivity measurements are typically obtained with specialized heating probes or plates, and snow density measurements require digging snow pits. Therefore, direct measurements are difficult to obtain with high enough temporal resolution for direct comparisons with atmospheric conditions. This study uses highly resolved temperature measurements from the Plaine Morte glacier in Switzerland as initial and boundary conditions to numerically solve the 1D heat equation and iteratively optimize for thermal diffusivity. The method uses flux boundary conditions to constrain thermal diffusivity such that spuriously high values are eliminated. Additionally, a t-test ensuring statistical significance between solutions of varied thermal diffusivity further constrains thermal diffusivity and eliminates spuriously low values. The results show that time-resolved thermal diffusivity can be determined from easily implemented and inexpensive temperature measurements of seasonal snow, in good agreement with widely used parameterizations based on snow density. This high time resolution further affords the ability to explore possible turbulence-induced enhancements to heat and mass transfer in the snow.
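
    The inversion idea can be sketched as a forward 1-D heat-equation solve plus a parameter sweep; here the "observed" interior temperature series is manufactured with a known diffusivity, which the sweep recovers, and all grid and forcing values are illustrative rather than taken from the study:

```python
import numpy as np

# Sketch: solve the 1-D heat equation with prescribed top/bottom
# temperatures, then pick the diffusivity that best reproduces a
# mid-depth sensor. The "observations" are manufactured with a known
# alpha; all numbers below are illustrative.
def solve_interior(alpha, t_init, top, bottom, dz, dt):
    t = t_init.copy()
    series = []
    for k in range(len(top)):
        t[0], t[-1] = top[k], bottom[k]                  # boundary data
        t[1:-1] += alpha * dt / dz ** 2 * (t[2:] - 2.0 * t[1:-1] + t[:-2])
        series.append(t[len(t) // 2])                    # mid-depth sensor
    return np.array(series)

dz, dt, nz, nt = 0.02, 60.0, 11, 2000
true_alpha = 2.0e-7                                      # m^2/s, typical snow
t0 = np.zeros(nz)
top = np.sin(2.0 * np.pi * np.arange(nt) / 500.0)        # synthetic forcing
bottom = np.zeros(nt)
obs = solve_interior(true_alpha, t0, top, bottom, dz, dt)

alphas = np.linspace(0.5e-7, 5.0e-7, 46)
errors = [np.sum((solve_interior(a, t0, top, bottom, dz, dt) - obs) ** 2)
          for a in alphas]
best_alpha = alphas[int(np.argmin(errors))]
```

    The explicit scheme is stable here because alpha*dt/dz^2 stays well below 0.5 for every candidate diffusivity in the sweep.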

  5. Maslowian Scale. (United States)

    Falk, C.; And Others

    The development of the Maslowian Scale, a method of revealing a picture of one's needs and concerns based on Abraham Maslow's levels of self-actualization, is described. This paper also explains how the scale is supported by the theories of L. Kohlberg, C. Rogers, and T. Rusk. After a literature search, a list of statements was generated…

  6. Helicity scalings

    Energy Technology Data Exchange (ETDEWEB)

    Plunian, F [ISTerre, CNRS, Universite Joseph Fourier, Grenoble (France); Lessinnes, T; Carati, D [Physique Statistique et Plasmas, Universite Libre de Bruxelles (Belgium); Stepanov, R, E-mail: [Institute of Continuous Media Mechanics of the Russian Academy of Science, Perm (Russian Federation)


    Using a helical shell model of turbulence, Chen et al. (2003) showed that both helicity and energy dissipate at the Kolmogorov scale, independently of any helicity input. This contradicts a previous paper by Ditlevsen and Giuliani (2001) in which, using a GOY shell model of turbulence, they found that helicity dissipates at a scale larger than the Kolmogorov scale, one that does depend on the helicity input. In a recent paper by Lessinnes et al. (2011), we showed that this discrepancy is due to the fact that in the GOY shell model only one helical mode (+ or -) is present at each scale, instead of both modes as in the helical shell model. Then, using the GOY model, the near cancellation of the helicity flux between the + and - modes cannot occur at small scales, as it should in true turbulence. We review the main results with a focus on the numerical procedure needed to obtain accurate statistics.

  7. Framing scales and scaling frames

    NARCIS (Netherlands)

    Lieshout, van M.; Dewulf, A.; Aarts, M.N.C.; Termeer, C.J.A.M.


    Policy problems are not just out there. Actors highlight different aspects of a situation as problematic and situate the problem on different scales. In this study we will analyse the way actors apply scales in their talk (or texts) to frame the complex decision-making process of the establishment

  8. A new simple h-mesh adaptation algorithm for standard Smagorinsky LES: a first step of Taylor scale as a refinement variable

    Directory of Open Access Journals (Sweden)

    S Kaennakham


    Full Text Available The interaction between discretization error and modeling error has led to some doubts in adopting Solution Adaptive Grid (SAG) strategies with LES. Existing SAG approaches contain undesired aspects making them complicated and less convenient to apply to real engineering applications. In this work, a new refinement algorithm is proposed, aiming to enhance the efficiency of the SAG methodology in terms of simplicity of definition, reduced reliance on user judgment, suitability for standard Smagorinsky LES, and computational affordability. The construction of a new refinement variable as a function of the Taylor scale, corresponding to the kinetic energy balance requirement of the Smagorinsky SGS model, is presented. The numerical study is carried out for a turbulent plane jet in two dimensions. It is found that result quality can be effectively improved, with a significant reduction in CPU time compared to fixed-grid cases.
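
    A refinement variable built on the Taylor microscale could look like the following sketch, using the common estimate lambda = sqrt(10·nu·k/eps); the threshold ratio and all flow numbers are hypothetical tuning choices, not the paper's calibration:

```python
import math

# Taylor-microscale estimate and a hypothetical refinement criterion:
# refine a cell whose size exceeds the local Taylor scale. The ratio
# and the sample flow values are illustrative.
def taylor_scale(nu, k, eps):
    """lambda = sqrt(10 * nu * k / eps), with k the turbulent kinetic
    energy and eps the dissipation rate."""
    return math.sqrt(10.0 * nu * k / eps)

def needs_refinement(cell_size, nu, k, eps, ratio=1.0):
    return cell_size > ratio * taylor_scale(nu, k, eps)

lam = taylor_scale(nu=1.5e-5, k=0.5, eps=0.1)   # a few centimetres here
```

    Tying refinement to the Taylor scale links the mesh directly to the kinetic-energy balance that the Smagorinsky model is supposed to satisfy, which is the motivation given above.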

  9. Scaling down

    Directory of Open Access Journals (Sweden)

    Ronald L Breiger


    Full Text Available While “scaling up” is a lively topic in network science and Big Data analysis today, my purpose in this essay is to articulate an alternative problem, that of “scaling down,” which I believe will also require increased attention in coming years. “Scaling down” is the problem of how macro-level features of Big Data affect, shape, and evoke lower-level features and processes. I identify four aspects of this problem: the extent to which findings from studies of Facebook and other Big-Data platforms apply to human behavior at the scale of church suppers and department politics where we spend much of our lives; the extent to which the mathematics of scaling might be consistent with behavioral principles, moving beyond a “universal” theory of networks to the study of variation within and between networks; and how a large social field, including its history and culture, shapes the typical representations, interactions, and strategies at local levels in a text or social network.

  10. Scaling Rules! (United States)

    Malkinson, Dan; Wittenberg, Lea


    Scaling is a fundamental issue in any spatially or temporally hierarchical system. Defining domains and identifying the boundaries of the hierarchical levels can be a challenging task. Hierarchical systems may be broadly classified into two categories: compartmental and continuous. Examples of compartmental systems include governments, companies, computerized networks, biological taxonomies and others. In such systems the compartments, and hence the various levels and their constituents, are easily delineated. In contrast, in continuous systems, such as geomorphological, ecological or climatological ones, detecting the boundaries of the various levels may be difficult. We propose that in continuous hierarchical systems a transition from one functional scale to another is associated with increased system variance. Crossing from the domain of one scale to the domain of another is associated with a transition or substitution of the dominant drivers operating in the system. Accordingly, we suggest that crossing this boundary is characterized by increased variance, or a "variance leap", which stabilizes until the next domain or hierarchy level is crossed. To assess this, we compiled sediment yield data from studies conducted at various spatial scales and in different environments. The studies were partitioned into those conducted in undisturbed environments and those conducted in disturbed environments, specifically by wildfires. The studies were conducted on plots as small as 1 m2 and in watersheds larger than 555,000 ha. Regressing sediment yield against plot size, and incrementally calculating the variance in the systems, enabled us to detect domains where variance values were exceedingly high. We propose that scale-crossing occurs at these domains, and the systems transition from one hierarchical level to another. Moreover, the degree of the "variance leaps" characterizes the degree of connectivity among the scales.
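A minimal sketch of the "variance leap" idea: regress log sediment yield against log plot area, then scan the variance of the residuals across scale windows and flag windows where it jumps. The synthetic data, window width, and 3x-median threshold below are hypothetical choices, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical sediment-yield survey: log10(area) spanning small plots to large
# watersheds, with extra scatter injected near two assumed "domain" boundaries
log_area = np.sort(rng.uniform(0, 9.5, 400))
noise = np.where((np.abs(log_area - 3.0) < 0.3) | (np.abs(log_area - 6.5) < 0.3), 1.5, 0.3)
log_yield = -0.4 * log_area + rng.normal(0.0, noise)

def windowed_variance(x, y, width=0.5):
    """Variance of detrended log-yield in sliding windows along log-area."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    centers = np.arange(x.min() + width, x.max() - width, 0.1)
    var = np.array([resid[np.abs(x - c) < width].var() for c in centers])
    return centers, var

centers, var = windowed_variance(log_area, log_yield)
# "variance leaps": windows whose variance exceeds, say, 3x the median
leaps = centers[var > 3.0 * np.median(var)]
print(leaps)
```

The flagged window centers mark candidate scale-domain boundaries.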

  11. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth (United States)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.


    The EC-Earth earth system model has recently been developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence, and therefore affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to the decadal (5-year), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases and improves the climate-change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2 m temperature and rainfall are also shown over transitional land-surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecast skill are enhanced over

  12. Nuclear scales

    Energy Technology Data Exchange (ETDEWEB)

    Friar, J.L.


    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the π-γ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted.

  13. Cloud-scale model intercomparison of chemical constituent transport in deep convection

    Directory of Open Access Journals (Sweden)

    M. C. Barth


    Full Text Available Transport and scavenging of chemical constituents in deep convection is important to understanding the composition of the troposphere and therefore chemistry-climate and air quality issues. High resolution cloud chemistry models have been shown to represent convective processing of trace gases quite well. To improve the representation of sub-grid convective transport and wet deposition in large-scale models, general characteristics, such as species mass flux, from the high resolution cloud chemistry models can be used. However, it is important to understand how these models behave when simulating the same storm. The intercomparison described here examines transport of six species. CO and O3, which are primarily transported, show good agreement among models and compare well with observations. Models that included lightning production of NOx reasonably predict NOx mixing ratios in the anvil compared with observations, but the NOx variability is much larger than that seen for CO and O3. Predicted anvil mixing ratios of the soluble species, HNO3, H2O2, and CH2O, exhibit significant differences among models, attributed to different schemes in these models of cloud processing including the role of the ice phase, the impact of cloud-modified photolysis rates on the chemistry, and the representation of the species chemical reactivity. The lack of measurements of these species in the convective outflow region does not allow us to evaluate the model results with observations.

  14. Spatiotemporal Variability of Turbulence Kinetic Energy Budgets in the Convective Boundary Layer over Both Simple and Complex Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Raj K. [Pacific Northwest National Laboratory, Richland, Washington; Berg, Larry K. [Pacific Northwest National Laboratory, Richland, Washington; Pekour, Mikhail [Pacific Northwest National Laboratory, Richland, Washington; Shaw, William J. [Pacific Northwest National Laboratory, Richland, Washington; Kosovic, Branko [National Center for Atmospheric Research, Boulder, Colorado; Mirocha, Jeffrey D. [Lawrence Livermore National Laboratory, Livermore, California; Ennis, Brandon L. [Sandia National Laboratories, Albuquerque, New Mexico


    The assumption of sub-grid scale (SGS) horizontal homogeneity within a model grid cell, which forms the basis of SGS turbulence closures used by mesoscale models, becomes increasingly tenuous as grid spacing is reduced to a few kilometers or less, such as in many emerging high-resolution applications. Herein, we use the turbulence kinetic energy (TKE) budget equation to study the spatio-temporal variability over two types of terrain, complex (Columbia Basin Wind Energy Study [CBWES] site, north-eastern Oregon) and flat (Scaled Wind Farm Technologies [SWiFT] site, west Texas), using the Weather Research and Forecasting (WRF) model. In each case six nested domains (three each for mesoscale and large-eddy simulation [LES]) are used to downscale the horizontal grid spacing from 10 km to 10 m within the WRF model framework. The model output was used to calculate the TKE budget terms in vertical and horizontal planes, as well as averages over the grid cells contained in each of the four quadrants (a quarter area) of the LES domain. The budget terms calculated along the planes and the mean profiles of the budget terms show larger spatial variability at the CBWES site than at the SWiFT site. The horizontal derivatives of the shear-production term were found to contribute 45% and 15% of the total shear production at the CBWES and SWiFT sites, respectively, indicating that the horizontal derivatives in the budget equation should not be ignored in mesoscale model parameterizations, especially for cases with complex terrain at scales below 10 km.
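The split quantified above can be illustrated with a toy calculation: given mean-flow and Reynolds-stress fields, compare the shear-production contributions arising from vertical versus horizontal derivatives of the mean velocity. The analytic fields and constant stresses below are hypothetical; an actual analysis would use WRF-LES output instead.

```python
import numpy as np

# toy 2-D (x, z) mean-flow slab: over flat terrain U varies mainly with z,
# over complex terrain it also varies with x (hypothetical fields)
x = np.linspace(0.0, 1000.0, 101)   # [m]
z = np.linspace(10.0, 200.0, 50)    # [m]
X, Z = np.meshgrid(x, z, indexing="ij")
U = 8.0 * np.log(Z / 0.1) / np.log(100.0 / 0.1) + 0.5 * np.sin(2 * np.pi * X / 500.0)

# hypothetical (constant) Reynolds stresses [m^2/s^2]
uw, uu = -0.3, 1.2

dUdx = np.gradient(U, x, axis=0)
dUdz = np.gradient(U, z, axis=1)

# shear-production contributions: P = -<u'w'> dU/dz - <u'u'> dU/dx
P_vert = -uw * dUdz
P_horiz = -uu * dUdx
frac_horiz = np.abs(P_horiz).mean() / (np.abs(P_horiz).mean() + np.abs(P_vert).mean())
print(f"horizontal-derivative share of shear production: {frac_horiz:.2f}")
```

For a strictly horizontally homogeneous flow the dU/dx term vanishes, which is exactly the assumption that breaks down over complex terrain.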

  15. Application of Multiscale Parameterization Framework for the Large Scale Hydrologic Modeling (United States)

    Kumar, R.; Samaniego, L. E.; Livneh, B.; Attinger, S.


    In recent decades there has been increasing interest in the development and application of large-scale hydrologic models to support the management of regional water resources, as well as for flood forecasting and drought monitoring. However, the reliable prediction of distributed hydrologic states (i.e. soil moisture, runoff, evapotranspiration) for large river basins (i.e. ≥ 100 000 km2) requires a robust parameterization technique that avoids scale-dependent issues, reduces the over-parameterization problem, and allows the transferability of model parameters across locations (e.g. to ungauged basins). In this study, we show the ability of the recently developed Multiscale Parameter Regionalization (MPR) technique (Samaniego et al., 2010), integrated within a grid-based hydrologic model (mHM), to address the above problems. The MPR technique explicitly accounts for sub-grid variability of basin physical characteristics by linking them to model parameters at a much finer spatial resolution (e.g. 100-500 m) than the model pixels (> 1 km). The multiscale parameterization framework was tested in four large river basins: two in Central Europe (the Rhine and the Elbe), and two in North America (the Ohio and the Red). Model runs were performed at a 3 h time step on four spatial resolutions, ranging from a grid size of approximately 7 km to 50 km, for the period 1960 to 2000. Results indicated that it is possible to transfer an a priori set of global parameters, estimated in a relatively small German river basin (the Neckar, 10 000 km2), to all four large river basins, including the remote North American basins. The values of Nash-Sutcliffe efficiency for the daily and monthly streamflow simulations were, on average, above 0.80. Similar results were obtained from simulations at four spatial resolutions (0.0625°, 0.125°, 0.25°, and 0.5°), which indicated the possibility for the cross-scale
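A minimal sketch of the MPR idea: evaluate a transfer function on a fine-resolution basin attribute, then upscale the resulting parameter field to the coarser model grid. The linear transfer function, arithmetic-mean upscaling operator, and field names are illustrative assumptions; mHM uses parameter-specific forms.

```python
import numpy as np

def mpr_parameter(fine_field, betas, upscale=16):
    """Multiscale Parameter Regionalization sketch: apply a transfer
    function at fine resolution, then upscale to the model grid.
    The linear transfer function and arithmetic-mean operator are
    illustrative choices, not mHM's actual per-parameter forms."""
    # transfer function at ~100-500 m resolution (global coefficients betas)
    p_fine = betas[0] + betas[1] * fine_field
    # block-average up to the >1 km model grid
    n = fine_field.shape[0] // upscale
    blocks = p_fine[:n * upscale, :n * upscale].reshape(n, upscale, n, upscale)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(1)
sand_fraction = rng.uniform(0.1, 0.9, size=(64, 64))   # hypothetical basin map
params = mpr_parameter(sand_fraction, betas=(0.05, 0.4))
print(params.shape)
```

Because the global coefficients (betas) rather than the gridded parameters are calibrated, the same coefficients can be reused at any model resolution or in another basin, which is the transferability property described above.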

  16. Wall-resolved Large Eddy Simulation of a flow through a square-edged orifice in a round pipe at Re = 25,000

    Energy Technology Data Exchange (ETDEWEB)

    Benhamadouche, S.; Arenas, M.; Malouf, W.J.


    Highlights: • Wall-resolved LES can predict the flow through a square-edged orifice at Re = 25,000. • LES results are compared with the available experimental data and ISO 5167-2. • Pressure loss and discharge coefficients are in very good agreement with ISO 5167-2. • The present wall-resolved LES could be used as reference data for RANS validation. - Abstract: The orifice plate is a pressure differential device frequently used for flow measurements in pipes across different industries. The present study demonstrates the accuracy obtainable using a wall-resolved Large Eddy Simulation (LES) approach to predict the velocity, the Reynolds stresses, the pressure loss and the discharge coefficient for a flow through a square-edged orifice in a round pipe at a Reynolds number of 25,000. The ratio of the orifice diameter to the pipe diameter is β = 0.62, and the ratio of the orifice thickness to the pipe diameter is 0.11. The mesh is sized using refinement criteria at the wall and preliminary RANS results to ensure that the solution is resolved beyond an estimated Taylor micro-scale. The inlet condition is simulated using a recycling method, and the LES is run with a dynamic Smagorinsky sub-grid scale (SGS) model. The sensitivity to the SGS model and to the pressure–velocity coupling is shown to be small in the present study. The LES is compared with the available experimental data and ISO 5167-2. In general, the LES shows good agreement with the velocity from the experimental data. The profiles of the Reynolds stresses are similar, but an offset is observed in the diagonal stresses. The pressure loss and discharge coefficients are shown to be in very good agreement with the predictions of ISO 5167-2. Therefore, the wall-resolved LES is shown to be highly accurate in simulating the flow across a square-edged orifice.


    Energy Technology Data Exchange (ETDEWEB)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.


    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For a full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide high-fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for the simulation of long transient scenarios in nuclear accidents, despite extraordinary advances in high-performance scientific computing over the past decades. The major issue is the inability to parallelize the computation in time, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high-fidelity simulation-driven approach to model the sub-grid scale (SGS) effect in Coarse-Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as to containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation is shown to achieve a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.

  18. Lagrangian filtered density function for LES-based stochastic modelling of turbulent particle-laden flows (United States)

    Innocenti, Alessio; Marchioli, Cristian; Chibbaro, Sergio


    The Eulerian-Lagrangian approach based on Large-Eddy Simulation (LES) is one of the most promising and viable numerical tools to study particle-laden turbulent flows, when the computational cost of Direct Numerical Simulation (DNS) becomes too expensive. The applicability of this approach is however limited if the effects of the Sub-Grid Scales (SGSs) of the flow on particle dynamics are neglected. In this paper, we propose to take these effects into account by means of a Lagrangian stochastic SGS model for the equations of particle motion. The model extends to particle-laden flows the velocity-filtered density function method originally developed for reactive flows. The underlying filtered density function is simulated through a Lagrangian Monte Carlo procedure that solves a set of Stochastic Differential Equations (SDEs) along individual particle trajectories. The resulting model is tested for the reference case of turbulent channel flow, using a hybrid algorithm in which the fluid velocity field is provided by LES and then used to advance the SDEs in time. The model consistency is assessed in the limit of particles with zero inertia, when "duplicate fields" are available from both the Eulerian LES and the Lagrangian tracking. Tests with inertial particles were performed to examine the capability of the model to capture the particle preferential concentration and near-wall segregation. Upon comparison with DNS-based statistics, our results show improved accuracy and considerably reduced errors with respect to the case in which no SGS model is used in the equations of particle motion.
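A minimal sketch of the Lagrangian Monte Carlo ingredient described above: an Euler-Maruyama integration of a generic Langevin-type SDE for the fluid velocity seen by an inertial particle, coupled to Stokes drag. The constants and the simple OU-type drift/diffusion are assumptions for illustration; the paper's filtered-density-function closure is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama integration of a generic Langevin-type model for the
# fluid velocity "seen" by a particle (hypothetical constants)
dt, n_steps = 1e-3, 20000
T_L = 0.05          # Lagrangian SGS time scale [s]
C0, eps = 2.1, 0.4  # Kolmogorov constant, dissipation rate [m^2/s^3]
tau_p = 0.02        # particle response time [s]
U_filtered = 1.0    # resolved (LES) fluid velocity at the particle [m/s]

u_seen, v_p = U_filtered, 0.0
v_hist = np.empty(n_steps)
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    # OU-type relaxation toward the filtered velocity + stochastic SGS forcing
    u_seen += -(u_seen - U_filtered) / T_L * dt + np.sqrt(C0 * eps) * dW
    # Stokes drag on an inertial particle
    v_p += (u_seen - v_p) / tau_p * dt
    v_hist[i] = v_p

print(f"mean particle velocity: {v_hist[5000:].mean():.3f} m/s")
```

In a full hybrid LES computation, U_filtered would be interpolated from the Eulerian grid to each particle position at every step, and one such SDE would be advanced per particle.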

  19. Assessment of subgrid-scale models with a large-eddy simulation-dedicated experimental database: The pulsatile impinging jet in turbulent cross-flow (United States)

    Baya Toda, Hubert; Cabrit, Olivier; Truffin, Karine; Bruneaux, Gilles; Nicoud, Franck


    Large-Eddy Simulation (LES) in complex geometries and industrial applications like piston engines, gas turbines, or aircraft engines requires the use of advanced subgrid-scale (SGS) models able to take into account the main flow features and the turbulence anisotropy. Keeping this goal in mind, this paper reports a LES-dedicated experiment of a pulsatile hot-jet impinging a flat-plate in the presence of a cold turbulent cross-flow. Unlike commonly used academic test cases, this configuration involves different flow features encountered in complex configurations: shear/rotating regions, stagnation point, wall-turbulence, and the propagation of a vortex ring along the wall. This experiment was also designed with the aim to use quantitative and nonintrusive optical diagnostics such as Particle Image Velocimetry, and to easily perform a LES involving a relatively simple geometry and well-controlled boundary conditions. Hence, two eddy-viscosity-based SGS models are investigated: the dynamic Smagorinsky model [M. Germano, U. Piomelli, P. Moin, and W. Cabot, "A dynamic subgrid-scale eddy viscosity model," Phys. Fluids A 3(7), 1760-1765 (1991)] and the σ-model [F. Nicoud, H. B. Toda, O. Cabrit, S. Bose, and J. Lee, "Using singular values to build a subgrid-scale model for large eddy simulations," Phys. Fluids 23(8), 085106 (2011)]. Both models give similar results during the first phase of the experiment. However, it was found that the dynamic Smagorinsky model could not accurately predict the vortex-ring propagation, while the σ-model provides a better agreement with the experimental measurements. Setting aside the implementation of the dynamic procedure (implemented here in its simplest form, i.e., without averaging over homogeneous directions and with clipping of negative values to ensure numerical stability), it is suggested that the mitigated predictions of the dynamic Smagorinsky model are due to the dynamic constant, which strongly depends on the mesh resolution
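The σ-model referenced above builds its eddy viscosity from the singular values σ1 ≥ σ2 ≥ σ3 of the resolved velocity-gradient tensor, νt = (CσΔ)² σ3(σ1−σ2)(σ2−σ3)/σ1², which vanishes by construction in two-component and pure-shear flows. A sketch (the constant Cσ ≈ 1.35 and the test matrix are illustrative):

```python
import numpy as np

def sigma_model_nu_t(grad_u, delta, c_sigma=1.35):
    """Eddy viscosity of the sigma-model (Nicoud et al., 2011):
    nu_t = (C_sigma * Delta)^2 * s3*(s1 - s2)*(s2 - s3) / s1^2,
    with s1 >= s2 >= s3 the singular values of the resolved
    velocity-gradient tensor grad_u."""
    s = np.linalg.svd(grad_u, compute_uv=False)   # sorted descending
    s1, s2, s3 = s
    if s1 == 0.0:
        return 0.0
    d_sigma = s3 * (s1 - s2) * (s2 - s3) / s1**2
    return (c_sigma * delta) ** 2 * d_sigma

# pure shear: the two smallest singular values vanish, so nu_t vanishes,
# one of the model's built-in properties
shear = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
print(sigma_model_nu_t(shear, delta=0.01))
```

Unlike the dynamic Smagorinsky constant, Cσ is fixed, so no test-filtering, averaging, or clipping procedure is needed.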

  20. A Novel Multi-Scale Domain Overlapping CFD/STH Coupling Methodology for Multi-Dimensional Flows Relevant to Nuclear Applications (United States)

    Grunloh, Timothy P.

    The objective of this dissertation is to develop a 3-D domain-overlapping coupling method that leverages the superior flow field resolution of the Computational Fluid Dynamics (CFD) code STAR-CCM+ and the fast execution of the System Thermal Hydraulic (STH) code TRACE to efficiently and accurately model thermal hydraulic transport properties in nuclear power plants under complex conditions of regulatory and economic importance. The primary contribution is the novel Stabilized Inertial Domain Overlapping (SIDO) coupling method, which allows for on-the-fly correction of TRACE solutions for local pressures and velocity profiles inside multi-dimensional regions based on the results of the CFD simulation. The method is found to outperform the more frequently used domain decomposition coupling methods. An STH code such as TRACE is designed to simulate large, diverse component networks, requiring simplifications to the fluid flow equations for reasonable execution times. Empirical correlations are therefore required for many sub-grid processes. The coarse grids used by TRACE diminish sensitivity to small-scale geometric details such as Reactor Pressure Vessel (RPV) internals. A CFD code such as STAR-CCM+ uses much finer computational meshes that are sensitive to the geometric details of reactor internals. In turbulent flows, it is infeasible to fully resolve the flow solution, but the correlations used to model turbulence operate at a much lower level, closer to first principles. The CFD code can therefore resolve smaller-scale flow processes. The development of a 3-D coupling method was carried out with the intention of improving predictive capabilities of transport properties in the downcomer and lower plenum regions of an RPV in reactor safety calculations. These regions are responsible for the multi-dimensional mixing effects that determine the distribution at the core inlet of quantities with reactivity implications, such as fluid temperature and dissolved neutron absorber concentration.

  1. Learner Autonomy Scale: A Scale Development Study (United States)

    Orakci, Senol; Gelisli, Yücel


    The goal of the study is to develop a scale named "Learner Autonomy Scale" (LAS) for determining the learner autonomy of students toward English lessons. The proposed scale, composed of 29 items, was applied to two study groups in Turkey. The group of Exploratory Factor Analysis that aims to determine the psychometric properties…

  2. Turbulence characteristics in a free wake of an actuator disk: comparisons between a rotating and a non-rotating actuator disk in uniform inflow (United States)

    Olivares-Espinosa, H.; Breton, S.-P.; Masson, C.; Dufresne, L.


    An Actuator Disk (AD) model is implemented in the CFD platform OpenFOAM® with the purpose of studying the characteristics of the turbulent flow in the wake of the rotor of a horizontal-axis wind turbine. This AD model is based on blade-element theory and employs airfoil data to calculate the distribution of forces over the disk of a conceptual 5 MW offshore wind turbine. A uniform, non-turbulent inflow is used, so turbulence is produced only in the wake of the AD. Computations are performed using Large-Eddy Simulation (LES) to capture the unsteady fluctuations in the flow, and a classic Smagorinsky Sub-Grid Scale (SGS) model is employed to represent the unresolved motions. This new AD implementation makes use of a control system to adjust the rotational velocity of the rotor (below rated power) to the local conditions of the wind flow. The preliminary results show that the wake characteristics are influenced by the force distribution on the disk when compared to the wake produced by a uniformly loaded AD. Also, we observe that the simulated rotor reacts correctly to the introduction of the control system, although operating below optimal power.

  3. Dynamic scaling analysis of the long-range RKKY Ising spin glass DyxY1-xRu2Si2 (United States)

    Tabata, Y.; Waki, T.; Nakamura, H.


    Dynamic scaling analyses of the linear and nonlinear ac susceptibilities in a model magnet of the long-range Ruderman-Kittel-Kasuya-Yosida (RKKY) Ising spin glass (SG), Dy0.103Y0.897Ru2Si2, were examined. The obtained set of critical exponents, γ ≈ 1, β ≈ 1, δ ≈ 2, and zν ≈ 3.4, indicates that the SG phase transition belongs to a universality class different from that of either the canonical (Heisenberg) or the short-range Ising SGs. The analyses also reveal a finite-temperature SG transition with the same critical exponents under a magnetic field, with the phase-transition line described by Tg(H) = Tg(0)(1 - A H^(2/φ)), with φ ≈ 2. The crossover exponent φ obeys the scaling relation φ = γ + β within the margin of error. These results strongly suggest spontaneous replica-symmetry breaking (RSB) with a non- or marginal-mean-field universality class in the long-range RKKY Ising SG.

  4. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study from March 3rd, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere.
With the uncertainties from large-scale forcing data and observations specified, we compare SCM results with observations and find that models have large biases in cloud properties that cannot be fully explained by the uncertainty in the large-scale forcing

  5. A robust and quick method to validate large scale flood inundation modelling with SAR remote sensing (United States)

    Schumann, G. J.; Neal, J. C.; Bates, P. D.


    With flood frequency likely to increase as a result of altered precipitation patterns triggered by climate change, there is a growing demand for more data and, at the same time, improved flood inundation modeling. The aim is to develop more reliable flood forecasting systems over large scales that account for errors and inconsistencies in observations, modeling, and output. Over the last few decades, there have been major advances in the fields of remote sensing, particularly microwave remote sensing, and flood inundation modeling. At the same time both research communities are attempting to roll out their products on a continental to global scale. In a first attempt to harmonize both research efforts on a very large scale, a two-dimensional flood model has been built for the Niger Inland Delta basin in northwest Africa on a 700 km reach of the Niger River, an area similar in size to the UK. This scale demands a different approach to traditional 2D model structuring, and we have implemented a simplified version of the shallow water equations as developed in [1], complemented with a sub-grid structure for simulating flows in a channel much smaller than the actual grid resolution of the model. This integration makes it possible to model flood flows in two dimensions at efficient computational speeds without sacrificing channel resolution on coarse model grids. Using gauged daily flows, the model was applied to simulate the wetting and drying of the Inland Delta floodplain for 7 years from 2002 to 2008, taking less than 30 minutes to simulate 365 days at 1 km resolution. In these rather data-poor regions of the world, and at this type of scale, verification of flood modeling is realistically only feasible with wide-swath or global-mode remotely sensed imagery. Validation of the Niger model was carried out using sequential global mode SAR images over the period 2006/7.
This scale not only requires different types of models and
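The simplified shallow-water formulation mentioned above (the inertial scheme of [1], with a semi-implicit Manning friction term) reduces, per cell face, to a one-line flux update. A sketch of that update, with illustrative depth, slope, and roughness values:

```python
def inertial_flux_update(q, h, slope, dt, n_manning=0.03, g=9.81):
    """One explicit step of the simplified (inertial) shallow-water flux,
    with semi-implicit Manning friction:
        q_new = (q - g*h*dt*slope) / (1 + g*h*dt*n^2*|q| / h^(10/3))
    q: unit-width discharge [m^2/s]; h: flow depth [m];
    slope: water-surface slope [-]."""
    num = q - g * h * dt * slope
    den = 1.0 + g * h * dt * n_manning**2 * abs(q) / h**(10.0 / 3.0)
    return num / den

# spin up from rest toward the Manning steady state q = h^(5/3) * sqrt(|S|) / n
q, h, dt = 0.0, 2.0, 10.0
for _ in range(1000):
    q = inertial_flux_update(q, h, slope=-1e-4, dt=dt)
print(f"steady unit-width discharge: {q:.3f} m^2/s")
```

The semi-implicit friction term keeps the update stable even as the depth becomes small, which is what allows the long time steps that make basin-scale runs cheap.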

  6. Scaling of Metabolic Scaling within Physical Limits

    Directory of Open Access Journals (Sweden)

    Douglas S. Glazier


    Full Text Available Both the slope and elevation of scaling relationships between log metabolic rate and log body size vary taxonomically and in relation to physiological or developmental state, ecological lifestyle and environmental conditions. Here I discuss how the recently proposed metabolic-level boundaries hypothesis (MLBH) provides a useful conceptual framework for explaining and predicting much, but not all, of this variation. This hypothesis is based on three major assumptions: (1) various processes related to body volume and surface area exert state-dependent effects on the scaling slope for metabolic rate in relation to body mass; (2) the elevation and slope of metabolic scaling relationships are linked; and (3) both intrinsic (anatomical, biochemical and physiological) and extrinsic (ecological) factors can affect metabolic scaling. According to the MLBH, the diversity of metabolic scaling relationships occurs within physical boundary limits related to body volume and surface area. Within these limits, specific metabolic scaling slopes can be predicted from the metabolic level (or scaling elevation) of a species or group of species. In essence, metabolic scaling itself scales with metabolic level, which is in turn contingent on various intrinsic and extrinsic conditions operating in physiological or evolutionary time. The MLBH represents a “meta-mechanism” or collection of multiple, specific mechanisms that have contingent, state-dependent effects. As such, the MLBH is Darwinian in approach (the theory of natural selection is also meta-mechanistic), in contrast to currently influential metabolic scaling theory that is Newtonian in approach (i.e., based on unitary deterministic laws). Furthermore, the MLBH can be viewed as part of a more general theory that includes other mechanisms that may also affect metabolic scaling.
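Although the MLBH itself is conceptual, the two quantities it links, scaling slope and elevation, are ordinarily estimated by log-log regression of metabolic rate on body mass (R = aM^b). A sketch with synthetic data (the exponent b = 0.75, prefactor, and noise level are arbitrary choices, not claims about any taxon):

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical interspecific data following R = a * M^b with b = 0.75
mass = 10 ** rng.uniform(-3, 3, 200)                      # body mass [kg]
rate = 0.02 * mass ** 0.75 * 10 ** rng.normal(0, 0.05, 200)

# scaling slope (b) and elevation (log10 a) from a log-log regression
b, log_a = np.polyfit(np.log10(mass), np.log10(rate), 1)
print(f"slope b = {b:.2f}, elevation log10(a) = {log_a:.2f}")
```

Under the MLBH, one would fit such regressions separately for groups differing in metabolic level and examine how the fitted slope co-varies with the fitted elevation.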

  7. Black Pineleaf Scale (FIDL) (United States)

    Katharine A. Sheehan; Mario A. Melendez; Shana Westfall


    The black pineleaf scale (Nuculaspis californica (Coleman)) belongs to a group of sucking insects called armored scales. Concealed under their protective shells, these scales insert their mouthparts into their hosts, removing sap and, possibly, injecting toxic enzymes secreted in the saliva. Armored scales are important pests of agricultural and ornamental plants;...

  8. Numerical Dissipation and Subgrid Scale Modeling for Separated Flows at Moderate Reynolds Numbers (United States)

    Cadieux, Francois; Domaradzki, Julian Andrzej


    Flows in rotating machinery, for unmanned and micro aerial vehicles, wind turbines, and propellers consist of different flow regimes. First, a laminar boundary layer is followed by a laminar separation bubble with a shear layer on top of it that experiences transition to turbulence. The separated turbulent flow then reattaches and evolves downstream from a nonequilibrium turbulent boundary layer to an equilibrium one. In previous work, the capability of LES to reduce the resolution requirements down to 1 % of DNS resolution for such flows was demonstrated (Cadieux et al., JFE 136-6). However, under-resolved DNS agreed better with the benchmark DNS than simulations with explicit SGS modeling because numerical dissipation and filtering alone acted as a surrogate SGS dissipation. In the present work numerical viscosity is quantified using a new method proposed recently by Schranner et al. and its effects are analyzed and compared to turbulent eddy viscosities of explicit SGS models. The effect of different SGS models on a simulation of the same flow using a non-dissipative code is also explored. Supported by NSF.
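The study above compares numerical viscosity against the turbulent eddy viscosity of explicit SGS models. As a hedged illustration of what such an eddy viscosity looks like (a textbook sketch, not the code used in the work), the classical Smagorinsky closure sets nu_t = (Cs * Delta)^2 |S|, where |S| is the resolved strain-rate magnitude; the function name and the constant Cs = 0.17 below are illustrative assumptions:

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs * delta)**2 * |S| for a 2-D field.

    |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor built
    from the velocity gradients; delta is the filter width and cs a typical
    textbook value of the Smagorinsky constant.
    """
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)  # symmetric off-diagonal strain component
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * s_mag
```

For pure shear du/dy = 1 with delta = 1 this gives nu_t = 0.17^2, about 0.029; quantifying when a code's numerical dissipation rivals such values is precisely the comparison the abstract describes.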

  9. Towards Cloud-Resolving European-Scale Climate Simulations using a fully GPU-enabled Prototype of the COSMO Regional Model (United States)

    Leutwyler, David; Fuhrer, Oliver; Cumming, Benjamin; Lapillonne, Xavier; Gysi, Tobias; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph


    The representation of moist convection is a major shortcoming of current global and regional climate models. State-of-the-art global models usually operate at grid spacings of 10-300 km, and therefore cannot fully resolve the relevant upscale and downscale energy cascades. Therefore parametrization of the relevant sub-grid scale processes is required. Several studies have shown that this approach entails major uncertainties for precipitation processes, which raises concerns about the model's ability to represent precipitation statistics and associated feedback processes, as well as their sensitivities to large-scale conditions. Further refining the model resolution to the kilometer scale allows representing these processes much closer to first principles and thus should yield an improved representation of the water cycle including the drivers of extreme events. Although cloud-resolving simulations are very useful tools for climate simulations and numerical weather prediction, their high horizontal resolution and consequently the small time steps needed challenge current supercomputers to model large domains and long time scales. The recent innovations in the domain of hybrid supercomputers have led to mixed node designs with a conventional CPU and an accelerator such as a graphics processing unit (GPU). GPUs relax the necessity for cache coherency and complex memory hierarchies, but offer higher system memory bandwidth. This is highly beneficial for low-compute-intensity codes such as stencil-based atmospheric models. However, to efficiently exploit these hybrid architectures, climate models need to be ported and/or redesigned. Within the framework of the Swiss High Performance High Productivity Computing initiative (HP2C), a project to port the COSMO model to hybrid architectures has recently come to an end. The product of these efforts is a version of COSMO with improved performance on traditional x86-based clusters as well as on hybrid architectures with GPUs.

  10. On Quantitative Rorschach Scales. (United States)

    Haggard, Ernest A.


    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  11. Atlantic Salmon Scale Measurements (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Scales are collected annually from smolt trapping operations in Maine as well as other sampling opportunities (e.g. marine surveys, fishery sampling etc.). Scale...

  12. Scale Space Hierarchy

    NARCIS (Netherlands)

    Kuijper, Arjan; Florack, L.M.J.; Viergever, M.A.


    We investigate the deep structure of a scale space image. We concentrate on scale space critical points - points with vanishing gradient with respect to both spatial and scale direction. We show that these points are always saddle points. They turn out to be extremely useful, since the

  13. Landscape-scale water balance monitoring with an iGrav superconducting gravimeter in a field enclosure (United States)

    Güntner, Andreas; Reich, Marvin; Mikolaj, Michal; Creutzfeldt, Benjamin; Schroeder, Stephan; Wziontek, Hartmut


    In spite of the fundamental role of the landscape water balance for the Earth's water and energy cycles, monitoring the water balance and its components beyond the point scale is notoriously difficult due to the multitude of flow and storage processes and their spatial heterogeneity. Here, we present the first field deployment of an iGrav superconducting gravimeter (SG) in a minimized enclosure for long-term integrative monitoring of water storage changes. Results of the field SG on a grassland site under wet-temperate climate conditions were compared to data provided by a nearby SG located in the controlled environment of an observatory building. The field system proves to provide gravity time series that are similarly precise as those of the observatory SG. At the same time, the field SG is more sensitive to hydrological variations than the observatory SG. We demonstrate that the gravity variations observed by the field setup are almost independent of the depth below the terrain surface where water storage changes occur (contrary to SGs in buildings), and thus the field SG system directly observes the total water storage change, i.e., the water balance, in its surroundings in an integrative way. We provide a framework to single out the water balance components actual evapotranspiration and lateral subsurface discharge from the gravity time series on annual to daily timescales. With about 99 and 85 % of the gravity signal due to local water storage changes originating within a radius of 4000 and 200 m around the instrument, respectively, this setup paves the way towards gravimetry as a continuous hydrological field-monitoring technique at the landscape scale.
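A back-of-the-envelope check, not taken from the paper, of why a field SG integrates the local water balance: for a laterally extensive layer of stored water, the Bouguer plate approximation delta_g = 2*pi*G*rho*h predicts roughly 0.42 microGal per centimetre of water, well within SG precision:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0   # density of water, kg m^-3

def bouguer_plate_ugal_per_cm():
    """Gravity effect of an infinite slab of water, in microGal per cm.

    delta_g = 2 * pi * G * rho * h, converted with
    1 m s^-2 = 1e8 microGal and 1 cm = 0.01 m of water column.
    """
    dg_per_metre = 2.0 * math.pi * G * RHO_WATER  # s^-2 per metre of water
    return dg_per_metre * 1e8 * 0.01
```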

  14. Landscape-scale water balance monitoring with an iGrav superconducting gravimeter in a field enclosure

    Directory of Open Access Journals (Sweden)

    A. Güntner


    Full Text Available In spite of the fundamental role of the landscape water balance for the Earth's water and energy cycles, monitoring the water balance and its components beyond the point scale is notoriously difficult due to the multitude of flow and storage processes and their spatial heterogeneity. Here, we present the first field deployment of an iGrav superconducting gravimeter (SG) in a minimized enclosure for long-term integrative monitoring of water storage changes. Results of the field SG on a grassland site under wet–temperate climate conditions were compared to data provided by a nearby SG located in the controlled environment of an observatory building. The field system proves to provide gravity time series that are similarly precise as those of the observatory SG. At the same time, the field SG is more sensitive to hydrological variations than the observatory SG. We demonstrate that the gravity variations observed by the field setup are almost independent of the depth below the terrain surface where water storage changes occur (contrary to SGs in buildings), and thus the field SG system directly observes the total water storage change, i.e., the water balance, in its surroundings in an integrative way. We provide a framework to single out the water balance components actual evapotranspiration and lateral subsurface discharge from the gravity time series on annual to daily timescales. With about 99 and 85 % of the gravity signal due to local water storage changes originating within a radius of 4000 and 200 m around the instrument, respectively, this setup paves the way towards gravimetry as a continuous hydrological field-monitoring technique at the landscape scale.

  15. Why PUB needs scaling (United States)

    Lovejoy, S.; Schertzer, D.; Hubert, P.; Mouchel, J. M.; Benjoudhi, H.; Tchigurinskaya, Y.; Gaume, E.; Vesseire, J.-M.


    Hydrological fields display an extreme variability over a wide range of space-time scales. This variability is beyond the scope of classical mathematical and modeling methods which are forced to combine homogeneity assumptions with scale truncations and subgrid parameterizations. These ad hoc procedures nevertheless lead to complex numerical codes: they are difficult to transfer from one basin to another one, or even to verify with data at a different scale. Tuning the model parameters is hazardous: “predictions” are often reduced to fitting existing observations and are in any case essentially limited to the narrow range of space-time scales over which the parameters have been estimated. In contrast, in recent scaling approaches heterogeneity and uncertainty at all scales are no longer obstacles. The variability is viewed as a consequence of a scale symmetry which must first be elucidated and then exploited: small scale homogeneity assumptions are replaced by small scale heterogeneity assumptions which are verified from data covering wide ranges of scale. PUB provides an unprecedented opportunity not only to test scaling concepts and techniques, but also to develop them further. Indeed, PUB can be restated in the following manner: given partial knowledge of the input (atmospheric states, dynamics and fluxes) and of the media (basin) over a given range of scales, what can we predict for the output (streamflow and water quality) and over which range of scales? We illustrate this state of the art with examples taken from various projects involving precipitation and stream flow collectively spanning the range of scales from centimeters to planetary scales in space, from seconds to tens of years in time.

  16. Physical modelling of interactions between interfaces and turbulence; Modelisation physique des interactions entre interfaces et turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Toutant, A


    The complex interactions between interfaces and turbulence strongly impact the flow properties. Unfortunately, Direct Numerical Simulations (DNS) have to entail a number of degrees of freedom proportional to the third power of the Reynolds number to correctly describe the flow behaviour. This extremely hard constraint makes it impossible to use DNS for industrial applications. Our strategy consists of using and improving the DNS method in order to develop the Interfaces and Sub-grid Scales (ISS) concept. ISS is a two-phase equivalent to the single-phase Large Eddy Simulation (LES) concept. The challenge of ISS is to integrate the two-way coupling phenomenon into sub-grid models. Applying a space filter, we have exhibited correlations or sub-grid terms that require closures. We have shown that, in two-phase flows, the presence of a discontinuity leads to specific sub-grid terms. Comparing the maximum of the norm of the sub-grid terms with the maximum of the norm of the advection tensor, we have found that sub-grid terms related to interfacial forces and viscous effects are negligible. Consequently, in the momentum balance, only the sub-grid terms related to inertia have to be closed. Thanks to a priori tests performed on several DNS data, we demonstrate that the scale similarity hypothesis, reinterpreted near discontinuity, provides sub-grid models that take into account the two-way coupling phenomenon. These models correspond to the first step of our work. Indeed, in this step, interfaces are smooth, and interactions between interfaces and turbulence occur in a transition zone where each physical variable varies sharply but continuously. The next challenge has been to determine the jump conditions across the sharp equivalent interface corresponding to the sub-grid models of the transition zone. We have used the matched asymptotic expansion method to obtain the jump conditions. The first tests on the velocity of the sharp equivalent interface are very promising (author)

  17. Scale and scaling in agronomy and environmental sciences (United States)

    Scale is of paramount importance in environmental studies, engineering, and design. The unique course covers the following topics: scale and scaling, methods and theories, scaling in soils and other porous media, scaling in plants and crops; scaling in landscapes and watersheds, and scaling in agro...

  18. Cranfield situation awareness scale:


    Dennehy, K.


    Training to enhance situation awareness depends upon having satisfactory quantitative methods for measuring situation awareness. Until the development of the Cranfield-SAS, there was no direct subjective rating scale to measure the situation awareness of student (ab initio) civil pilots (see appendix 4 for an overview of the measurement guidelines for scale development). The development of the scale was part of the requirements for a Ph.D. at Cranfield Un...

  19. Fractal Characteristics Analysis of Blackouts in Interconnected Power Grid

    DEFF Research Database (Denmark)

    Wang, Feng; Li, Lijuan; Li, Canbing


    The power failure models are a key to understanding the mechanism of large-scale blackouts. In this letter, the similarity of blackouts in interconnected power grids (IPGs) and their sub-grids is discovered by fractal characteristics analysis to simplify the failure models of the IPG. The distribution characteristics of blackouts in various sub-grids are demonstrated based on the Kolmogorov-Smirnov (KS) test. The fractal dimensions (FDs) of the IPG and its sub-grids are then obtained by using the KS test and the maximum likelihood estimation (MLE). The blackouts data in China were used...
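The KS-test-plus-MLE step can be sketched as a generic power-law tail fit in the spirit of the letter. This is not the authors' code, and the synthetic "blackout sizes" below are purely illustrative:

```python
import numpy as np

def powerlaw_mle_alpha(x, xmin):
    """MLE exponent for a continuous power-law tail p(x) ~ x**(-alpha), x >= xmin."""
    x = np.asarray(x, dtype=float)
    tail = x[x >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

def ks_statistic(x, xmin, alpha):
    """Kolmogorov-Smirnov distance between the empirical tail CDF and the fit."""
    x = np.asarray(x, dtype=float)
    tail = np.sort(x[x >= xmin])
    emp = np.arange(1, len(tail) + 1) / len(tail)
    model = 1.0 - (tail / xmin) ** (1.0 - alpha)  # fitted power-law CDF
    return np.max(np.abs(emp - model))

# Synthetic "blackout sizes" drawn from a power law (inverse-transform sampling)
rng = np.random.default_rng(0)
u = rng.uniform(size=20000)
sizes = 1.0 * u ** (-1.0 / 1.5)   # true alpha = 2.5, xmin = 1

alpha_hat = powerlaw_mle_alpha(sizes, xmin=1.0)
d = ks_statistic(sizes, 1.0, alpha_hat)
```

For data actually drawn from the model, alpha_hat recovers the true exponent and d stays near zero; for real blackout records, ranking d across sub-grids is what a KS-based comparison of distributions amounts to.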

  20. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo


    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...

  1. Novel Reading Maturity Scale. (United States)

    Reich, Carol

    Designed to assess the maturity level of the novels which students read, the Novel Reading Maturity Scale (NRMS) is based on the notion that fiction of high quality is characterized by a number of themes or topics. The list of 22 topics in NRMS came from a survey of several guides on books for teenagers. To explore the reliability of the scale,…

  2. The career distress scale

    DEFF Research Database (Denmark)

    Creed, Peter; Hood, Michelle; Praskova, Anna


    Career distress is a common and painful outcome of many negative career experiences, such as career indecision, career compromise, and discovering career barriers. However, there are very few scales devised to assess career distress, and the two existing scales identified have psychometric weakne...

  3. Biological scaling and physics

    Indian Academy of Sciences (India)


    Kleiber scaling in the previous paragraph has the growth in Q kept smaller in spite of the high power dependence on R in eq. (1). The arguments that have been advanced so far by previous authors swing to extremes at either end. Thus, the rich variety and diversity in biology, including of scaling exponents ...

  4. The RRR Scale. (United States)

    Christensen, K. Eleanor

    The School Readiness Rating Scale was developed to help teachers organize their suggestions to parents about how parents can help their children prepare for beginning reading experiences. The scale surveys five important aspects of readiness for beginning reading: visual perception, visual motor perception, auditory perception and discrimination,…

  5. Genome-Scale Models

    DEFF Research Database (Denmark)

    Bergdahl, Basti; Sonnenschein, Nikolaus; Machado, Daniel


    An introduction to genome-scale models, how to build and use them, will be given in this chapter. Genome-scale models have become an important part of systems biology and metabolic engineering, and are increasingly used in research, both in academia and in industry, both for modeling chemical...

  6. The Fatherhood Scale (United States)

    Dick, Gary L.


    This article reports on the initial validation of the Fatherhood Scale (FS), a 64-item instrument designed to measure the type of relationship a male adult had with his father while growing up. The FS was validated using a convenience sample of 311 males. The assessment packet contained a demographic form, the Conflict Tactics Scale (2),…

  7. Ensemble Pulsar Time Scale (United States)

    Yin, Dong-shan; Gao, Yu-ping; Zhao, Shu-hong


    Millisecond pulsars can generate another type of time scale that is totally independent of the atomic time scale, because the physical mechanisms of the pulsar time scale and the atomic time scale are quite different from each other. Usually the pulsar timing observations are not evenly sampled, and the intervals between two data points range from several hours to more than half a month. Furthermore, these data sets are sparse. All this makes it difficult to generate an ensemble pulsar time scale. Hence, a new algorithm to calculate the ensemble pulsar time scale is proposed. Firstly, a cubic spline interpolation is used to densify the data set, and make the intervals between data points uniform. Then, the Vondrak filter is employed to smooth the data set, and get rid of the high-frequency noises, and finally the weighted average method is adopted to generate the ensemble pulsar time scale. The newly released NANOGRAV (North American Nanohertz Observatory for Gravitational Waves) 9-year data set is used to generate the ensemble pulsar time scale. This data set includes the 9-year observational data of 37 millisecond pulsars observed by the 100-meter Green Bank telescope and the 305-meter Arecibo telescope. It is found that the algorithm used in this paper can effectively reduce the influence of the noises in pulsar timing residuals, and improve the long-term stability of the ensemble pulsar time scale. Results indicate that the long-term (> 1 yr) stability of the ensemble pulsar time scale is better than 3.4 × 10^-15.
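The three steps of the proposed algorithm can be sketched as below. This is an illustrative reconstruction, not the authors' code: in particular, a simple moving average stands in for the Vondrak filter, and all function names and data are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def densify(t, residuals, t_uniform):
    """Step 1: cubic-spline interpolation of unevenly sampled timing
    residuals onto a uniform time grid."""
    return CubicSpline(t, residuals)(t_uniform)

def smooth(residuals, window=5):
    """Step 2: low-pass smoothing to suppress high-frequency noise (a
    moving average is used here in place of the Vondrak filter)."""
    kernel = np.ones(window) / window
    return np.convolve(residuals, kernel, mode="same")

def ensemble(residuals_by_pulsar):
    """Step 3: inverse-variance weighted average over pulsars, giving the
    ensemble time-scale correction at each grid point."""
    r = np.asarray(residuals_by_pulsar, dtype=float)
    w = 1.0 / np.var(r, axis=1, keepdims=True)  # weight noisy pulsars down
    return np.sum(w * r, axis=0) / np.sum(w)
```

Each pulsar's residual series is first densified and smoothed individually, then combined; the inverse-variance weights are what lets a quiet pulsar dominate the ensemble.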

  8. NPP Grassland: Central Plains Experimental Range (SGS), USA, 1939-1990, R1 (United States)

    National Aeronautics and Space Administration — This data set records the productivity of a semiarid shortgrass prairie steppe located in the Central Plains Experimental Reserve (CPER)/Pawnee National Grassland in...

  9. Large-scale structure

    CERN Document Server

    White, S D M


    Abstract. Recent observational surveys have made substantial progress in quantifying the structure of the Universe on large scales. Galaxy density and galaxy velocity fields show deviations from the predictions of a homogeneous and isotropic world model on scales approaching one percent of the current horizon scale. A comparison of the amplitudes in density and in velocity provides the first direct dynamical evidence in favour of a high mean density similar to that required for closure. The fluctuations observed on these scales have the amplitude predicted by the standard Cold Dark Matter (CDM) model when this model is normalised to agree with the microwave background fluctuations measured on much larger scales by the COBE satellite. However, a CDM model with this amplitude appears inconsistent with observational data on smaller scales. In addition it predicts a scale dependence of fluctuation amplitude which disagrees with that observed for galaxies in the APM survey of two million faint galaxi...

  10. Small scale optics

    CERN Document Server

    Yupapin, Preecha


    The behavior of light in small scale optics or nano/micro optical devices has shown promising results, which can be used for basic and applied research, especially in nanoelectronics. Small Scale Optics presents the use of optical nonlinear behaviors for spins, antennae, and whispering gallery modes within micro/nano devices and circuits, which can be used in many applications. This book proposes a new design for a small scale optical device-a microring resonator device. Most chapters are based on the proposed device, which uses a configuration known as a PANDA ring resonator. Analytical and nu

  11. Small-scale Biorefining

    NARCIS (Netherlands)

    Visser, de C.L.M.; Ree, van R.


    One promising way to accelerate the market implementation of integrated biorefineries is to promote small (regional) biorefinery initiatives. Small-scale biorefineries require relatively low initial investments, and therefore often do not face the financing problems that larger facilities face. They

  12. Coma scales: a historical review


    Bordini, Ana Luisa; Luiz, Thiago F.; Fernandes, Maurício; Arruda, Walter O.; Teive, Hélio A.G.


    OBJECTIVE: To describe the most important coma scales developed in the last fifty years. METHOD: A review of the literature between 1969 and 2009 in the Medline and Scielo databases was carried out using the following keywords: coma scales, coma, disorders of consciousness, coma score and levels of coma. RESULTS: Five main scales were found in chronological order: the Jouvet coma scale, the Moscow coma scale, the Glasgow coma scale (GCS), the Bozza-Marrubini scale and the FOUR score (Full Out...

  13. Universities scale like cities.

    Directory of Open Access Journals (Sweden)

    Anthony F J van Raan

    Full Text Available Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power law scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionately it is a place of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show behavior similar to that of cities in the distribution of the 'gross university income' in terms of total number of citations over 'size' in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities--the top-100 European universities--we are also able to distinguish between properties of universities with as well as without selection of one specific local property, the quality of a university in terms of its average field-normalized citation impact. It also reveals an interesting observation concerning the working of a crucial property in networked systems, preferential attachment.
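The scaling relations discussed here have the form Y = Y0 * N**beta, and the exponent is conventionally estimated by least squares in log-log space; a minimal sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def scaling_exponent(size, output):
    """Fit output = y0 * size**beta by ordinary least squares on logs.

    Returns (beta, y0); beta > 1 indicates superlinear scaling, i.e. the
    disproportionate growth of wealth/innovation (or citations) with size.
    """
    beta, log_y0 = np.polyfit(np.log(size), np.log(output), 1)
    return beta, np.exp(log_y0)
```

Deviations of individual cities or universities from this fitted line are the "local properties" the abstract refers to.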

  14. Cardinal scales for health evaluation

    DEFF Research Database (Denmark)

    Harvey, Charles; Østerdal, Lars Peter Raahave


    Policy studies often evaluate health for an individual or for a population by using measurement scales that are ordinal scales or expected-utility scales. This paper develops scales of a different type, commonly called cardinal scales, that measure changes in health. Also, we argue that cardinal...

  15. No-Scale Inflation

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V.; Olive, Keith A.


    Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation naturally yields predictions similar to those of the Starobinsky model based on $R + R^2$ gravity, with a tilted spectrum of scalar perturbations: $n_s \sim 0.96$, and small values of the tensor-to-scalar perturbation ratio $r < 0.1$, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.

  16. Wavelets, vibrations and scalings

    CERN Document Server

    Meyer, Yves


    Physicists and mathematicians are intensely studying fractal sets of fractal curves. Mandelbrot advocated modeling of real-life signals by fractal or multifractal functions. One example is fractional Brownian motion, where large-scale behavior is related to a corresponding infrared divergence. Self-similarities and scaling laws play a key role in this new area. There is a widely accepted belief that wavelet analysis should provide the best available tool to unveil such scaling laws. And orthonormal wavelet bases are the only existing bases which are structurally invariant through dyadic dilations. This book discusses the relevance of wavelet analysis to problems in which self-similarities are important. Among the conclusions drawn are the following: 1) A weak form of self-similarity can be given a simple characterization through size estimates on wavelet coefficients, and 2) Wavelet bases can be tuned in order to provide a sharper characterization of this self-similarity. A pioneer of the wavelet "saga", Meye...

  17. Urban Scaling in Europe

    CERN Document Server

    Bettencourt, Luis M A


    Over the last decades, in disciplines as diverse as economics, geography, and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical expectations, the small number of cities in each nation and their natural variability preclude drawing strong conclusions. We demonstrate how this problem can be overcome so that cities from different urban systems can be pooled together to construct larger datasets. This leads to a simple statistical procedure to identify urban scaling relations, which then clearly emerge as a property of European cities. We compare the predictions of urban scaling to Zipf's law for the size distribution of cities and show that while the for...

  18. Elders Health Empowerment Scale (United States)


    Introduction: Empowerment refers to patient skills that allow them to become primary decision-makers in control of daily self-management of health problems. As important as the concept is, particularly for elders with chronic diseases, few available instruments have been validated for use with Spanish-speaking people. Objective: Translate and adapt the Health Empowerment Scale (HES) for a Spanish-speaking older adult sample and perform its psychometric validation. Methods: The HES was adapted based on the Diabetes Empowerment Scale-Short Form. Where "diabetes" was mentioned in the original tool, it was replaced with "health" terms to cover all kinds of conditions that could affect health empowerment. Statistical and psychometric analyses were conducted on 648 urban-dwelling seniors. Results: The HES had an acceptable internal consistency with a Cronbach's α of 0.89. The convergent validity was supported by significant Pearson correlations between the HES total and item scores and the General Self Efficacy Scale (r= 0.77), Swedish Rheumatic Disease Empowerment Scale (r= 0.69) and Making Decisions Empowerment Scale (r= 0.70). Construct validity was evaluated using item analysis, half-split test and corrected item-to-total correlation coefficients, with good internal consistency (α> 0.8). The content validity was supported by Scale and Item Content Validity Index of 0.98 and 1.0, respectively. Conclusions: HES had acceptable face validity and reliability coefficients, which, added to its ease of administration and users' unbiased comprehension, could make it a suitable tool in evaluating elders' outpatient empowerment-based medical education programs. PMID:25767307
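The internal-consistency statistic reported above, Cronbach's α, is computed directly from an item-score matrix; a minimal sketch, not tied to the HES data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score);
    values near 1 indicate that items measure a common construct.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Perfectly correlated items give α = 1; fully independent items drive α toward 0.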

  19. Scaling up Telemedicine

    DEFF Research Database (Denmark)

    Christensen, Jannie Kristine Bang; Nielsen, Jeppe Agger; Gustafsson, Jeppe

    through negotiating, mobilizing coalitions, and legitimacy building. To illustrate and further develop this conceptualization, we build on insights from a longitudinal case study (2008-2014) and provide a rich empirical account of how a Danish telemedicine pilot was transformed into a large-scale telemedicine project through simultaneous translation and theorization efforts in a cross-sectorial, politicized social context. Although we focus on upscaling as a bottom up process (from pilot to large scale), we argue that translation and theorization, and associated political behavior occurs in a broader...

  20. SI - Small Scale Advantages


    Nordström, Marie; Kallin Westin, Lena


    Not being part of a larger SI-organisation has both advantages and disadvantages. In this paper we try to illustrate the advantages of doing SI small scale. In a large scale SI-organisation the supervisors are often not teachers themselves and/or not familiar with the practices of a specific course. To have teaching staff supervising an SI project completely focused on one course is favourable in many ways. The decision to introduce SI was taken by the department of Computing Science to support...

  1. Vineland Adaptive Behavior Scales. (United States)

    Icabone, Dona G.


    This article describes the Vineland Adaptive Behavior Scales, a general assessment of personal and social sufficiency of individuals from birth through adulthood to determine areas of strength and weakness. The instrument assesses communication, daily living skills, socialization, and motor skills. Its administration, standardization, reliability,…

  2. Symbolic Multidimensional Scaling

    NARCIS (Netherlands)

    P.J.F. Groenen (Patrick); Y. Terada


    Multidimensional scaling (MDS) is a technique that visualizes dissimilarities between pairs of objects as distances between points in a low dimensional space. In symbolic MDS, a dissimilarity is not just a value but can represent an interval or even a histogram. Here,

  3. Build an Interplanetary Scale. (United States)

    Matthews, Catherine; And Others


    Describes an activity in which students use a bathroom scale and a long board to see how their weight changes on other planets and the moon. Materials list, procedures, tables of planet radii, comparative values, and gravitational ratios are provided. (DDR)
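The activity's arithmetic reduces to multiplying a scale reading by each body's surface-gravity ratio. A minimal sketch (the ratios below are rounded textbook values, not taken from the article):

```python
# Approximate surface gravity relative to Earth (rounded values, assumed).
GRAVITY_RATIO = {"Moon": 0.17, "Mars": 0.38, "Jupiter": 2.5, "Earth": 1.0}

def scale_reading(earth_reading: float, body: str) -> float:
    """What a bathroom scale would read on another body, given its Earth reading."""
    return earth_reading * GRAVITY_RATIO[body]

print(scale_reading(50.0, "Mars"))  # 19.0
```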

  4. An Estimated Income Scale. (United States)

    Nicholson, Everard

    The decision to develop an estimated income scale arose from a wish to prove or disprove the statement that colleges like Brown University may be headed toward a situation where the student body will consist of the rich and the poor, the traditional group of middle class having been eliminated. As the research proceeded, it became evident that an…

  5. Sawtooth Period Scaling

    CERN Document Server

    Connor, J W; Hastie, R J; Zocco, A


    We discuss the role of neoclassical resistivity and local magnetic shear in the prediction of the sawtooth period in tokamaks. When collisional detrapping of electrons is considered, the value of the safety factor on axis, $q(t,0)$, evolves on a new time scale, $\tau_{*}=\tau_{\eta}\,$…

  6. Scales of mantle heterogeneity (United States)

    Moore, J. C.; Akber-Knutson, S.; Konter, J.; Kellogg, J.; Hart, S.; Kellogg, L. H.; Romanowicz, B.


    A long-standing question in mantle dynamics concerns the scale of heterogeneity in the mantle. Mantle convection tends to both destroy (through stirring) and create (through melt extraction and subduction) heterogeneity in bulk and trace element composition. Over time, these competing processes create variations in geochemical composition along mid-oceanic ridges and among oceanic islands, spanning a range of scales from extremely long wavelength (for example, the DUPAL anomaly) to very small scale (for example, variations amongst melt inclusions). While geochemical data and seismic observations can be used to constrain the length scales of mantle heterogeneity, dynamical mixing calculations can illustrate the processes and timescales involved in stirring and mixing. At the Summer 2004 CIDER workshop on Relating Geochemical and Seismological Heterogeneity in the Earth's Mantle, an interdisciplinary group evaluated scales of heterogeneity in the Earth's mantle using a combined analysis of geochemical data, seismological data and results of numerical models of mixing. We mined the PetDB database for isotopic data from glass and whole rock analyses for the Mid-Atlantic Ridge (MAR) and the East Pacific Rise (EPR), projecting them along the ridge length. We examined Sr isotope variability along the East Pacific rise by looking at the difference in Sr ratio between adjacent samples as a function of distance between the samples. The East Pacific Rise exhibits an overall bowl shape of normal MORB characteristics, with higher values in the higher latitudes (there is, however, an unfortunate gap in sampling, roughly 2000 km long). These background characteristics are punctuated with spikes in values at various locations, some, but not all of which are associated with off-axis volcanism. A Lomb-Scargle periodogram for unevenly spaced data was utilized to construct a power spectrum of the scale lengths of heterogeneity along both ridges. 
Using the same isotopic systems (Sr, Nd
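The Lomb-Scargle periodogram mentioned above is the standard tool for spectra of unevenly spaced samples. A minimal self-contained sketch of the classical formulation, applied to synthetic data rather than the ridge isotope profiles:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for unevenly sampled data."""
    y = y - y.mean()
    power = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        # Time offset tau makes the periodogram invariant to time shifts.
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 200))   # unevenly spaced sample positions
y = np.sin(2 * np.pi * 0.2 * t)         # signal with frequency 0.2 (period 5)
freqs = np.linspace(0.01, 0.5, 500)
best = freqs[np.argmax(lomb_scargle(t, y, freqs))]
print(best)  # close to 0.2
```

In the study's setting, `t` would be distance along the ridge and `y` the isotope ratio, with peaks in the power spectrum identifying dominant heterogeneity length scales.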

  7. Evolution of Scale Worms

    DEFF Research Database (Denmark)

    Gonzalez, Brett Christopher

    …caves, and the interstitium, recovering six monophyletic clades within Aphroditiformia: Acoetidae, Aphroditidae, Eulepethidae, Iphionidae, Polynoidae, and Sigalionidae (inclusive of the former ‘Pisionidae’ and ‘Pholoidae’), respectively. Tracing of morphological character evolution showed a high degree of adaptability and convergent evolution between relatively closely related scale worms. While some morphological and behavioral modifications in cave polynoids reflected troglomorphism, other modifications like eye loss were found to stem from a common ancestor inhabiting the deep sea, further corroborating the deep-sea ancestry of scale worm cave fauna. In conclusion, while morphological characterization across Aphroditiformia appears deceptively easy due to the presence of elytra, convergent evolution during multiple early radiations across wide-ranging habitats has confounded our ability to reconstruct…

  8. Rolling at small scales

    DEFF Research Database (Denmark)

    Nielsen, Kim L.; Niordson, Christian F.; Hutchinson, John W.


    The rolling process is widely used in the metal forming industry and has been so for many years. However, the process has attracted renewed interest as it has recently been adapted to very small scales, where conventional plasticity theory cannot accurately predict the material response. It is well-established that gradient effects play a role at the micron scale, and the objective of this study is to demonstrate how strain gradient hardening affects the rolling process. Specifically, the paper addresses how the applied roll torque, roll forces, and the contact conditions are modified by strain gradient plasticity… the power input to the process. The contact traction is also affected, particularly for sheet thicknesses on the order of 10 μm and below. The influences of the length parameter and the friction coefficient are emphasized, and the results are presented for multiple sheet reductions and roll sizes.

  9. Dynamo Scaling Relationships (United States)

    Augustson, Kyle; Mathis, Stéphane; Brun, Sacha; Toomre, Juri


    This paper provides a brief look at dynamo scaling relationships for the degree of equipartition between magnetic and kinetic energies. Two simple models are examined: one assumes magnetostrophy, while the other includes the effects of inertia. These models are then compared to a suite of convective dynamo simulations of the convective core of a main-sequence B-type star and applied to its later evolutionary stages.



    Sujitha Mary; Alaguraj, V.; Krishnaswamy, S


    Aggregation is an inherent property of proteins. Both ordered and disordered proteins have a tendency to aggregate. Protein folding itself starts from partially folded intermediates. The formation of native structures from these intermediates may be called constructive aggregation. We describe the design of an intrinsic aggregation scale and its efficiency in finding hot-spots for constructive aggregation. In this paper, we are proposing a new aspect of aggregation, wherein...

  11. Indian scales and inventories


    Venkatesan, S


    This conceptual, perspective and review paper on Indian scales and inventories begins with clarification on the historical and contemporary meanings of psychometry before linking itself to the burgeoning field of clinimetrics in their applications to the practice of clinical psychology and psychiatry. Clinimetrics is explained as a changing paradigm in the design, administration, and interpretation of quantitative tests, techniques or procedures applied to measurement of clinical variables, t...

  12. Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Fingersh, Lee J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Loth, Eric [University of Virginia; Kaminski, Meghan [University of Virginia; Qin, Chao [University of Virginia; Griffith, D. Todd [Sandia National Laboratories


    A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows sub-scale turbines to capture the key blade dynamics and aeroelastic deflections of their full-scale counterparts. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust, and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling, with blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts, including dynamic behaviors and aeroelastic deflections (such as flutter), at an extremely small fraction of the full-scale cost.
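The reported factor-of-two reduction in test wind speed is consistent with Froude-type similarity, where velocity scales with the square root of the geometric scale. A quick sanity check (the scaling relations below are the standard Froude assumptions, not formulas quoted from the paper):

```python
# Gravo-aeroelastic (Froude) similarity: V ~ sqrt(L), m ~ L^3 (assumed relations).
s = 0.25                      # geometric scale: 100 m blade scaled to 25 m
wind_speed_factor = s ** 0.5  # velocity ratio under Froude scaling
mass_factor = s ** 3          # nominal mass ratio before the ultralight adjustment
print(wind_speed_factor)      # 0.5 -> test wind speeds halved, as reported
```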

  13. Scale effects in necking

    Directory of Open Access Journals (Sweden)

    MacGillivray H.


    Geometrically similar specimens spanning a scale range of 100:1 are tested quasi-statically to failure. Images of neck development are acquired using optical means for large specimens, and in-situ scanning electron microscope testing for small specimens, to examine the dependence of neck geometry on a broad range of specimen sizes. Size effects typically arise when the smallest specimen dimension is on the order of a microstructural length (e.g. grain size, dislocation mean free path, etc.), or in the presence of significant plastic strain gradients, which increase the density of geometrically necessary dislocations. This study was carried out for the purpose of investigating scale dependence in models used for predicting dynamic deformation and damage to very high strains for ballistic impact applications, such as the Goldthorpe path-dependent failure model, which includes temperature and strain-rate dependence but does not account for specimen size or a dependence on microstructural lengths. Although the experiments show that neck geometry does not exhibit a clear dependence on specimen size across the range of length scales tested, the statistical variation due to microstructural variations was found to increase monotonically with decreasing size, becoming significant for the smallest (0.35 mm diameter) size, allowing a limit to be identified for reliable model calibration.

  14. H2@Scale Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, Mark


    'H2@Scale' is a concept based on the opportunity for hydrogen to act as an intermediate between energy sources and uses. Hydrogen has the potential to be used like the primary intermediate in use today, electricity, because it too is fungible. This presentation summarizes the H2@Scale analysis efforts performed during the first third of 2017. Results of technical potential uses and supply options are summarized and show that the technical potential demand for hydrogen is 60 million metric tons per year and that the U.S. has sufficient domestic resources to meet that demand. A high-level infrastructure analysis is also presented that shows an 85% increase in energy on the grid if all hydrogen is produced from grid electricity. However, a preliminary spatial assessment shows that supply is sufficient in most counties across the U.S. The presentation also shows plans for analysis of the economic potential of the H2@Scale concept. Those plans involve developing supply and demand curves for potential hydrogen generation options and comparing them to other options for use of that hydrogen.

  15. Micro-Scale Thermoacoustics (United States)

    Offner, Avshalom; Ramon, Guy Z.


    Thermoacoustic phenomena - conversion of heat to acoustic oscillations - may be harnessed for construction of reliable, practically maintenance-free engines and heat pumps. Specifically, miniaturization of thermoacoustic devices holds great promise for cooling of micro-electronic components. However, as device size is pushed down to the micrometer scale, it is expected that non-negligible slip effects will exist at the solid-fluid interface. Accordingly, new theoretical models for thermoacoustic engines and heat pumps were derived, accounting for a slip boundary condition. These models are essential for the design process of micro-scale thermoacoustic devices that will operate under ultrasonic frequencies. Stability curves for engines - representing the onset of self-sustained oscillations - were calculated with both no-slip and slip boundary conditions, revealing improvement in the performance of engines with slip at the resonance frequency range applicable for micro-scale devices. Maximum achievable temperature difference curves for thermoacoustic heat pumps were calculated, revealing the negative effect of slip on the ability to pump heat up a temperature gradient. The authors acknowledge the support from the Nancy and Stephen Grand Technion Energy Program (GTEP).

  16. Mechanism for salt scaling (United States)

    Valenza, John J., II

    Salt scaling is superficial damage caused by freezing a saline solution on the surface of a cementitious body. The damage consists of the removal of small chips or flakes of binder. The discovery of this phenomenon in the early 1950s prompted hundreds of experimental studies, which clearly elucidated the characteristics of this damage. In particular it was shown that a pessimum salt concentration exists, where a moderate salt concentration (~3%) results in the most damage. Despite the numerous studies, the mechanism responsible for salt scaling has not been identified. In this work it is shown that salt scaling is a result of the large thermal expansion mismatch between ice and the cementitious body, and that the mechanism responsible for damage is analogous to glue-spalling. When ice forms on a cementitious body a bi-material composite is formed. The thermal expansion coefficient of the ice is ~5 times that of the underlying body, so when the temperature of the composite is lowered below the melting point, the ice goes into tension. Once this stress exceeds the strength of the ice, cracks initiate in the ice and propagate into the surface of the cementitious body, removing a flake of material. The glue-spall mechanism accounts for all of the characteristics of salt scaling. In particular, a theoretical analysis is presented which shows that the pessimum concentration is a consequence of the effect of brine pockets on the mechanical properties of ice, and that the damage morphology is accounted for by fracture mechanics. Finally, empirical evidence is presented that proves that the glue-spall mechanism is the primary cause of salt scaling. The primary experimental tool used in this study is a novel warping experiment, where a pool of liquid is formed on top of a thin (~3 mm) plate of cement paste. Stresses in the plate, including thermal expansion mismatch, result in warping of the plate, which is easily detected. This technique revealed the existence of
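The thermal-mismatch argument can be checked with rough numbers. The material constants below are generic literature-style values assumed for illustration, not figures taken from the study:

```python
# Order-of-magnitude estimate of the tensile stress in an ice layer bonded
# to cement paste when the composite is cooled below the melting point.
E_ice = 9e9          # Pa, Young's modulus of ice (assumed)
alpha_ice = 51e-6    # 1/K, thermal expansion of ice (assumed, ~5x the paste's)
alpha_paste = 10e-6  # 1/K, thermal expansion of cement paste (assumed)
dT = 20.0            # K of cooling below freezing
stress = E_ice * (alpha_ice - alpha_paste) * dT
print(f"{stress / 1e6:.1f} MPa")  # ~7.4 MPa, well above a ~1 MPa ice tensile strength
```

Even this crude estimate puts the ice far into the failure regime, which is the qualitative point of the glue-spall mechanism.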

  17. The Practicality of Behavioral Observation Scales, Behavioral Expectation Scales, and Trait Scales. (United States)

    Wiersma, Uco; Latham, Gary P.


    The practicality of three appraisal instruments was measured in terms of user preference, namely, behavioral observation scales (BOS), behavioral expectation scales (BES), and trait scales. In all instances, BOS were preferred to BES, and in all but two instances, BOS were viewed as superior to trait scales. (Author/ABB)

  18. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair


    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space-efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
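To make the inequality-join problem concrete, here is the naive nested-loop formulation that IEJoin is designed to beat. This toy sketch (with made-up records and field names) shows only the O(|R|·|S|) baseline, not the IEJoin algorithm itself, which replaces the double loop with sorted arrays and bit-arrays:

```python
def naive_inequality_join(R, S):
    """All pairs (r, s) with r['dur'] > s['dur'] and r['cost'] < s['cost']."""
    return [(r["id"], s["id"]) for r in R for s in S
            if r["dur"] > s["dur"] and r["cost"] < s["cost"]]

R = [{"id": "r1", "dur": 100, "cost": 6}, {"id": "r2", "dur": 140, "cost": 11}]
S = [{"id": "s1", "dur": 100, "cost": 9}, {"id": "s2", "dur": 90, "cost": 10}]
print(naive_inequality_join(R, S))  # [('r1', 's2')]
```

Because both predicates are inequalities, no hash- or sort-merge equality join applies directly, which is why a dedicated algorithm pays off at scale.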

  19. Scaling CouchDB

    CERN Document Server

    Holt, Bradley


    This practical guide offers a short course on scaling CouchDB to meet the capacity needs of your distributed application. Through a series of scenario-based examples, this book lets you explore several methods for creating a system that can accommodate growth and meet expected demand. In the process, you learn about several tools that can help you with replication, load balancing, clustering, and load testing and monitoring. Apply performance tips for tuning your database; replicate data using Futon and CouchDB's RESTful interface; distribute CouchDB's workload through load balancing; learn option

  20. Scales on the scalp

    Directory of Open Access Journals (Sweden)

    Jamil A


    A five-year-old boy presented with a six-week history of scales, flaking and crusting of the scalp. He had mild pruritus but no pain. He did not have a history of atopy and there were no pets at home. Examination of the scalp showed thick, yellowish dry crusts on the vertex and parietal areas, and the hair was adhered to the scalp in clumps. There was non-scarring alopecia and mild erythema (Figures 1 & 2). There was no cervical or occipital lymphadenopathy. The patient's nails and skin in other parts of the body were normal.

  1. Challenging comparison of stroke scales

    Directory of Open Access Journals (Sweden)

    Kavian Ghandehari


    Stroke scales can be classified as clinicometric scales and functional impairment/handicap scales. All studies describing stroke scales were reviewed using internet search engines, with the final search performed on January 1, 2013. The following string of keywords was entered into the search engines: stroke, scale, score and disability. Despite the advantages of the modified National Institutes of Health Stroke Scale and the Scandinavian Stroke Scale compared to the NIHSS, including their simplicity and lower inter-rater variability, most stroke neurologists around the world continue using the NIHSS. The modified Rankin scale (mRS) and Barthel index (BI) are widely used functional impairment and disability scales. The distinction between grades of the mRS is poorly defined. The Asian stroke disability scale is a simplified functional impairment/handicap scale which is as valid as the mRS and BI. At present, the NIHSS, mRS and BI are the routine stroke scales simply because physicians have worked with them for more than two decades, although that alone is not an acceptable reason. On the other hand, the results of previous stroke trials, which are the basis of stroke management guidelines, were derived using these scales.

  2. The birth satisfaction scale. (United States)

    Martin, Caroline Hollins; Fleming, Valerie


    The purpose of this paper is to develop a psychometric scale--the birth satisfaction scale (BSS)--for assessing women's birth perceptions. Literature review and transcribed research-based perceived birth satisfaction and dissatisfaction expression statements were converted into a scored questionnaire. Three overarching themes were identified: service provision (home assessment, birth environment, support, relationships with health care professionals); personal attributes (ability to cope during labour, feeling in control, childbirth preparation, relationship with baby); and stress experienced during labour (distress, obstetric injuries, receiving sufficient medical care, obstetric intervention, pain, long labour and baby's health). Women construct their birth experience differently. Views are directed by personal beliefs, reactions, emotions and reflections, which alter in relation to mood, humour, disposition, frame of mind and company kept. Nevertheless, healthcare professionals can use BSS to assess women's birth satisfaction and dissatisfaction. Scores measure their service quality experiences. Scores provide a global measure of care that women perceived they received during labour. Finding out more about what causes birth satisfaction and dissatisfaction helps maternity care professionals improve intra-natal care standards and allocate resources effectively. An attempt has been made to capture birth satisfaction's generalised meaning and incorporate it into an evidence-based measuring tool.

  3. Small scale sanitation technologies. (United States)

    Green, W; Ho, G


    Small scale systems can improve the sustainability of sanitation systems as they more easily close the water and nutrient loops. They also provide alternate solutions to centrally managed large scale infrastructures. Appropriate sanitation provision can improve the lives of people with inadequate sanitation through health benefits and reuse products, as well as reduce ecological impacts. In the literature there seems to be no compilation of a wide range of available onsite sanitation systems around the world that encompasses black and greywater treatment plus stand-alone dry and urine separation toilet systems. Seventy technologies have been identified and classified according to the different waste source streams. Sub-classification based on major treatment methods included aerobic digestion, composting and vermicomposting, anaerobic digestion, sand/soil/peat filtration and constructed wetlands. Potential users or suppliers of sanitation systems can choose from a wide range of technologies and examine the different treatment principles used in them. Sanitation systems need to be selected according to the local social, economic and environmental conditions and should aim to be sustainable.

  4. The Unintentional Procrastination Scale. (United States)

    Fernie, Bruce A; Bharucha, Zinnia; Nikčević, Ana V; Spada, Marcantonio M


    Procrastination refers to the delay or postponement of a task or decision and is often conceptualised as a failure of self-regulation. Recent research has suggested that procrastination could be delineated into two domains: intentional and unintentional. In this two-study paper, we aimed to develop a measure of unintentional procrastination (named the Unintentional Procrastination Scale or the 'UPS') and test whether this would be a stronger marker of psychopathology than intentional and general procrastination. In Study 1, a community sample of 139 participants completed a questionnaire that consisted of several items pertaining to unintentional procrastination that had been derived from theory, previous research, and clinical experience. Responses were subjected to a principal components analysis and assessment of internal consistency. In Study 2, a community sample of 155 participants completed the newly developed scale, along with measures of general and intentional procrastination, metacognitions about procrastination, and negative affect. Data from the UPS were subjected to confirmatory factor analysis and revised accordingly. The UPS was then validated using correlation and regression analyses. The six-item UPS possesses construct and divergent validity and good internal consistency. The UPS appears to be a stronger marker of psychopathology than the pre-existing measures of procrastination used in this study. Results from the regression models suggest that both negative affect and metacognitions about procrastination differentiate between general, intentional, and unintentional procrastination. The UPS is brief, has good psychometric properties, and has strong associations with negative affect, suggesting it has value as a research and clinical tool.
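Internal consistency of a short scale like the six-item UPS is typically quantified with Cronbach's alpha, which can be computed directly from item scores. A minimal sketch with toy scores (not data from the studies):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists (one list per scale item)."""
    k = len(items)
    var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Three items scored by four respondents (made-up numbers).
items = [[1, 2, 3, 4], [2, 2, 3, 5], [1, 3, 3, 4]]
print(round(cronbach_alpha(items), 3))  # 0.947
```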

  5. Grid origin affects scaling of species across spatial scales.

    NARCIS (Netherlands)

    Witte, J.P.M.; He, F.; Groen, C.L.G.


    Aim: Distribution maps of species based on a grid are useful for investigating relationships between scale and the number or area of occupied grid cells. A species is scaled up simply by merging occupied grid cells on the observation grid to successively coarser cells. Scale-occupancy relationships
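The merging of occupied cells onto coarser grids, and the grid-origin sensitivity the title refers to, can be sketched directly (hypothetical cell coordinates; the function names are our own):

```python
def coarsen(occupied, factor, origin=(0, 0)):
    """Merge occupied (row, col) cells onto a grid 'factor' times coarser,
    measured from a given grid origin."""
    r0, c0 = origin
    return {((r - r0) // factor, (c - c0) // factor) for r, c in occupied}

obs = {(0, 1), (1, 0)}                      # two occupied fine-grid cells
print(len(coarsen(obs, 2)))                 # 1: both fall in coarse cell (0, 0)
print(len(coarsen(obs, 2, origin=(1, 1))))  # 2: shifting the origin splits them
```

The same set of observations yields different occupancy counts at the coarse scale depending solely on where the grid lines fall, which is exactly the origin effect the paper investigates.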

  6. Scale in Education Research: Towards a Multi-Scale Methodology (United States)

    Noyes, Andrew


    This article explores some theoretical and methodological problems concerned with scale in education research through a critique of a recent mixed-method project. The project was framed by scale metaphors drawn from the physical and earth sciences and I consider how recent thinking around scale, for example, in ecosystems and human geography might…

  7. Earthquake impact scale (United States)

    Wald, D.J.; Jaiswal, K.S.; Marano, K.D.; Bausch, D.


    With the advent of the USGS prompt assessment of global earthquakes for response (PAGER) system, which rapidly assesses earthquake impacts, U.S. and international earthquake responders are reconsidering their automatic alert and activation levels and response procedures. To help facilitate rapid and appropriate earthquake response, an Earthquake Impact Scale (EIS) is proposed on the basis of two complementary criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is generally more appropriate for global events, particularly in developing countries. Simple thresholds, derived from the systematic analysis of past earthquake impact and associated response levels, are quite effective in communicating predicted impact and response needed after an event through alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1,000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses reaching $1M, $100M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness predominate in countries in which local building practices typically lend themselves to high collapse and casualty rates, and these impacts lend themselves to prioritization for international response. In contrast, financial and overall societal impacts often trigger the level of response in regions or countries in which prevalent earthquake-resistant construction practices greatly reduce building collapse and resulting fatalities. Any newly devised alert, whether economic- or casualty-based, should be intuitive and consistent with established lexicons and procedures. 
Useful alerts should
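The dual thresholds described above map naturally onto a small lookup. A sketch of the published EIS thresholds (the function name and interface are our own, not from the paper):

```python
def eis_alert(fatalities=None, losses_usd=None):
    """PAGER-style alert color from estimated fatalities or economic losses."""
    if fatalities is not None:
        levels = [(1000, "red"), (100, "orange"), (1, "yellow")]
        value = fatalities
    else:
        levels = [(1e9, "red"), (1e8, "orange"), (1e6, "yellow")]
        value = losses_usd
    for threshold, color in levels:
        if value >= threshold:
            return color
    return "green"

print(eis_alert(fatalities=150))  # orange
print(eis_alert(losses_usd=2e9))  # red
```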

  8. On Scale and Fields

    DEFF Research Database (Denmark)

    Kadish, David


    …is being undertaken with the adoption of interventionist strategies in urban agricultural practices like seed bombing and guerrilla gardening. At the same time, there is a proliferation of media-connected and miniature autonomous drones and robotics. Might this combination be the foundation for a novel… This paper explores thematic parallels between artistic and agricultural practices in the postwar period to establish a link to media art and cultural practices that are currently emerging in urban agriculture. Industrial agriculture has roots in the post-WWII abundance of mechanical and chemical… …scale agricultural systems that range from spreading pests and diseases to poor global distribution of concentrated regional food wealth. That the conversion of vegetatively diverse farmland into monochromatic fields was popularized at the same time as the arrival of colour field paintings like Barnett Newman…

  9. Galactic-scale civilization (United States)

    Kuiper, T. B. H.


    Evolutionary arguments are presented in favor of the existence of civilization on a galactic scale. Patterns of physical, chemical, biological, social and cultural evolution leading to increasing levels of complexity are pointed out and explained thermodynamically in terms of the maximization of free energy dissipation in the environment of the organized system. The possibility of the evolution of a global and then a galactic human civilization is considered, and probabilities that the galaxy is presently in its colonization state and that life could have evolved to its present state on earth are discussed. Fermi's paradox of the absence of extraterrestrials in light of the probability of their existence is noted, and a variety of possible explanations is indicated. Finally, it is argued that although mankind may be the first occurrence of intelligence in the galaxy, it is unjustified to presume that this is so.

  10. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred


    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out… Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSHP technology is put into perspective with respect to alternatives, and a short discussion of the barriers and breakthrough of the technology is given. …model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors…

  11. Scaling MongoDB

    CERN Document Server

    Chodorow, Kristina


    Create a MongoDB cluster that will grow to meet the needs of your application. With this short and concise book, you'll get guidelines for setting up and using clusters to store a large volume of data, and learn how to access the data efficiently. In the process, you'll understand how to make your application work with a distributed database system. Scaling MongoDB will help you: set up a MongoDB cluster through sharding; work with a cluster to query and update data; operate, monitor, and backup your cluster; plan your application to deal with outages. By following the advice in this book, you'll

  12. The neighborhood as scale

    Directory of Open Access Journals (Sweden)

    Francisco Clébio Rodrigues Lopes


    Full Text Available This article is a theoretical and practical essay developed from post-graduation studies. Its goal is to analyze the role of the neighborhood as a necessary mediation for understanding the linkages between the city, the region and the wider society. The neighborhood of Parangaba, located in Fortaleza-CE, is used as the empirical framework. We draw on the notions of reproduction, daily life and scale developed by Marxist authors. Subsequently, we researched newspapers, interviewed residents and collected statistical data from official bodies. We concluded that there is a spatiality of reproduction which may be captured at the most banal level, and that the neighborhood, as the qualitative domain of this spatiality, deserves investigation.

  13. Indian scales and inventories. (United States)

    Venkatesan, S


    This conceptual, perspective and review paper on Indian scales and inventories begins with clarification on the historical and contemporary meanings of psychometry before linking itself to the burgeoning field of clinimetrics in their applications to the practice of clinical psychology and psychiatry. Clinimetrics is explained as a changing paradigm in the design, administration, and interpretation of quantitative tests, techniques or procedures applied to measurement of clinical variables, traits and processes. As an illustrative sample, this article assembles a bibliographic survey of about 105 out of 2582 research papers (4.07%) scanned through 51 back dated volumes covering 185 issues related to clinimetry as reviewed across a span of over fifty years (1958-2009) in the Indian Journal of Psychiatry. A content analysis of the contributions across distinct categories of mental measurements is explained before linkages are proposed for future directions along these lines.

  14. ScaleUp America Communities (United States)

    Small Business Administration — SBA’s new ScaleUp America Initiative is designed to help small firms with high potential “scale up” and grow their businesses so that they will provide more jobs and...

  15. Northeast Snowfall Impact Scale (NESIS) (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — While the Fujita and Saffir-Simpson Scales characterize tornadoes and hurricanes respectively, there is no widely used scale to classify snowstorms. The Northeast...

  16. Scale setting in lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Sommer, Rainer [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC


    The principles of scale setting in lattice QCD as well as the advantages and disadvantages of various commonly used scales are discussed. After listing criteria for good scales, I concentrate on the main presently used ones with an emphasis on scales derived from the Yang-Mills gradient flow. For these I discuss discretisation errors, statistical precision and mass effects. A short review on numerical results also brings me to an unpleasant disagreement which remains to be explained.

  17. An ordinal metrical scale built on a fuzzy nominal scale (United States)

    Benoit, E.


    Measurement theory defines a measurement as a mapping from a set of empirical property manifestations to a set of abstract property values called symbols. Ordinal metrical scales were introduced within the context of psychophysics as a way to solve the problem of multidimensional scaling. Usually the distances used to define such scales are based on the hypothesis that symbols are vectors of numbers and that each component is expressed on an interval scale or a ratio scale. A recent paper introduced a distance-based scale that represents manifestations from an empirical world as fuzzy subsets of lexical terms. This approach supposes only the existence of a fuzzy nominal scale and allows a choice from a wider set of distances with which to build ordinal metrical scales. This paper focuses on the knowledge source used to choose a scale definition, taking metrical scales built on a fuzzy nominal scale as an example. It then opens a discussion on the reality of some distances in the empirical world.

  18. Improving the spatial resolution of air-quality modelling at a European scale - development and evaluation of the Air Quality Re-gridder Model (AQR v1.1) (United States)

    Theobald, Mark R.; Simpson, David; Vieno, Massimo


    Currently, atmospheric chemistry and transport models (ACTMs) applied at a European scale to assess air-quality impacts lack the spatial resolution necessary to simulate fine-scale spatial variability. This spatial variability is especially important when assessing the impacts on human health or ecosystems of short-lived pollutants, such as nitrogen dioxide (NO2) or ammonia (NH3). In order to simulate this spatial variability, the Air Quality Re-gridder (AQR) model has been developed to estimate the spatial distributions (at a spatial resolution of 1 × 1 km2) of annual mean atmospheric concentrations within the grid squares of an ACTM (in this case with a spatial resolution of 50 × 50 km2). This is done as a post-processing step by combining the coarse-resolution ACTM concentrations with high-spatial-resolution emission data and simple parameterisations of atmospheric dispersion. The AQR model was tested for two European sub-domains (the Netherlands and central Scotland) and evaluated using NO2 and NH3 concentration data from monitoring networks within each domain. A statistical comparison of the performance of the two models shows that AQR gives a substantial improvement on the predictions of the ACTM, both reducing mean model error (from 61 to 41 % for NO2 and from 42 to 27 % for NH3) and increasing the spatial correlation (r) with the measured concentrations (from 0.0 to 0.39 for NO2 and from 0.74 to 0.84 for NH3). This improvement was greatest for monitoring locations close to pollutant sources. Although the model ideally requires high-spatial-resolution emission data, which are not available for the whole of Europe, the use of a Europe-wide emission dataset with a lower spatial resolution also gave an improvement on the ACTM predictions for the two test domains. The AQR model provides an easy-to-use and robust method to estimate sub-grid variability that can potentially be extended to different timescales and pollutants.
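
    The core re-gridding idea described above can be sketched in a few lines: spread each coarse-cell annual mean over its fine sub-cells in proportion to fine-scale emissions while conserving the coarse-cell mean. The function name, the uniform-background split and the `background_frac` parameter below are illustrative assumptions, not the actual AQR parameterisation (which uses dispersion kernels rather than a direct emission weighting).

```python
def regrid(coarse_mean, fine_emissions, background_frac=0.5):
    """Downscale one coarse-cell annual mean onto its fine sub-cells.

    A fraction of the concentration is treated as uniform background;
    the remainder is redistributed in proportion to fine-scale
    emissions. The average over the fine cells is constrained to
    equal the coarse-cell mean.
    """
    n = len(fine_emissions)
    total = sum(fine_emissions)
    background = background_frac * coarse_mean
    local = (1.0 - background_frac) * coarse_mean
    if total == 0:
        # no local sources: the cell is all background
        return [coarse_mean] * n
    # weight each sub-cell by its emission share, scaled by n so the
    # mean of the local term stays equal to `local`
    return [background + local * n * e / total for e in fine_emissions]
```

    For example, a coarse mean of 10 split over three sub-cells with emissions 0, 1 and 3 yields 5.0, 8.75 and 16.25, whose mean is still 10: near-source sub-cells are enhanced, remote ones reduced.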

  19. Scale issues in tourism development (United States)

    Sinji Yang; Lori Pennington-Gray; Donald F. Holecek


    Proponents of Alternative Tourism overwhelmingly believe that alternative forms of tourism development need to be small in scale. Inasmuch as tourists' demand has great power to shape the market, the issues surrounding the tourism development scale deserve further consideration. This paper discusses the implications and effects of the tourism development scale on...

  20. Coma scales: a historical review

    Directory of Open Access Journals (Sweden)

    Ana Luisa Bordini


    Full Text Available OBJECTIVE: To describe the most important coma scales developed in the last fifty years. METHOD: A review of the literature between 1969 and 2009 in the Medline and Scielo databases was carried out using the following keywords: coma scales, coma, disorders of consciousness, coma score and levels of coma. RESULTS: Five main scales were found in chronological order: the Jouvet coma scale, the Moscow coma scale, the Glasgow coma scale (GCS, the Bozza-Marrubini scale and the FOUR score (Full Outline of UnResponsiveness, as well as other scales that have had less impact and are rarely used outside their country of origin. DISCUSSION: Of the five main scales, the GCS is by far the most widely used. It is easy to apply and very suitable for cases of traumatic brain injury (TBI. However, it has shortcomings, such as the fact that the speech component in intubated patients cannot be tested. While the Jouvet scale is quite sensitive, particularly for levels of consciousness closer to normal levels, it is difficult to use. The Moscow scale has good predictive value but is little used by the medical community. The FOUR score is easy to apply and provides more neurological details than the Glasgow scale.

  1. Death Anxiety Scales: A Dialogue. (United States)

    Lester, David; Templer, Donald


    Presents dialog among David Lester, author of first critical survey of death anxiety measures, developer of scales, and researcher about suicide and fear of death; Donald Templer, Death Anxiety Scale (DAS) creator; and journal editor. Lester and Templer discuss origins, uses, results, limitations, and future of death anxiety scales and research on…

  2. Solar system to scale (United States)

    Gerwig López, Susanne


    One of the most important successes in astronomical observation has been to determine the limits of the Solar System. The first man able to measure the Earth-Sun distance with only a small error, in the second century BC, is said to have been the Greek astronomer Aristarchus of Samos. Thanks to Newton's law of universal gravitation, it became possible to measure, within a small margin of error, the distances between the Sun and the planets. Twelve-year-old students are very interested in everything related to the universe, but it seems too difficult for them to imagine and understand the real distances among the different celestial bodies. To teach the differences between the inner and outer planets, and how far away the outer ones are, I had my pupils work on the sizes and distances in our solar system by constructing it to scale. The purpose is to reproduce our solar system to scale on a cardboard. The procedure is very easy and simple. Students in the first year of ESO (12 years old) receive the instructions on a sheet of paper (things they need: a black cardboard, a pair of scissors, colored pencils, a ruler, adhesive tape, glue, the photocopies of the planets and satellites, and the measurements they have to use). On another photocopy they get pictures of the edge of the Sun, the planets, dwarf planets and some satellites, which they have to color, cut out and stick on the cardboard. This activity is planned as a science project for both Spanish and bilingual-learning students; depending on the group, they receive the instructions in Spanish or in English. When the time is over, the students bring their cardboard works to class. They obtain a final mark (passing, good or excellent) depending on the accuracy of the measurements, the position of all the celestial bodies, the asteroid belts, personal contributions, etc. If any of the students have not followed the instructions, they get the chance to remake it properly, in order not
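
    The heart of the activity above is a single proportional step: choose how many centimetres represent one astronomical unit and multiply. A minimal sketch (the AU values are standard rounded mean orbital distances; the function name is mine, not from the project sheet):

```python
# Approximate mean distances from the Sun in astronomical units (AU).
DISTANCES_AU = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.0, "Mars": 1.52,
    "Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.2, "Neptune": 30.1,
}

def scaled_distances_cm(cm_per_au):
    """Position of each planet on the cardboard for a chosen scale."""
    return {name: au * cm_per_au for name, au in DISTANCES_AU.items()}
```

    At 3 cm per AU, Earth sits 3 cm from the Sun while Neptune lands about 90 cm away, which makes vivid why the four inner planets crowd together on the cardboard.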


    Energy Technology Data Exchange (ETDEWEB)



    Space-based interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost-phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3, at the cost of heavier and more expensive Kinetic Kill Vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them another 20%. Interceptor ranges fall rapidly with theater missile range. Constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay is appropriate. The SBI KKV technology would appear to be common to space- and surface-based boost-phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short-range theater missiles, current technology appears adequate for pressing rogue ICBM, accidental, and unauthorized launches.

  4. The Bereavement Guilt Scale. (United States)

    Li, Jie; Stroebe, Magaret; Chan, Cecilia L W; Chow, Amy Y M


    The rationale, development, and validation of the Bereavement Guilt Scale (BGS) are described in this article. The BGS was based on a theoretically developed, multidimensional conceptualization of guilt. Part 1 describes the generation of the item pool, derived from in-depth interviews and a review of the scientific literature. Part 2 details the statistical analyses for further item selection (Sample 1, N = 273). Part 3 covers the psychometric properties of the emergent BGS (Sample 2, N = 600, and Sample 3, N = 479). Confirmatory factor analysis indicated that a five-factor model fit the data best. Correlations of BGS scores with depression, anxiety, self-esteem, self-forgiveness, and mode of death were consistent with theoretical predictions, supporting the construct validity of the measure. Internal consistency and test-retest reliability were also supported. Thus, initial testing suggests that the BGS is a valid tool to assess multiple components of bereavement guilt. Further psychometric testing across cultures is recommended.

  5. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination but will unfortunately also significantly increase the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. Transition from large-scale to small-scale dynamo. (United States)

    Ponty, Y; Plunian, F


    The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher order effects. Beyond the onset we find that this generalized α effect scales as O(Rm(-1)), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo occurs even if the large-scale field is artificially suppressed.

  7. Linking the Grain Scale to Experimental Measurements and Other Scales (United States)

    Vogler, Tracy


    A number of physical processes occur at the scale of grains that can have a profound influence on the behavior of materials under shock loading. Examples include inelastic deformation, pore collapse, fracture, friction, and internal wave reflections. In some cases such as the initiation of energetics and brittle fracture, these processes can have first-order effects on the behavior of materials: the emergent behavior from the grain scale is the dominant one. In other cases, many aspects of the bulk behavior can be described by a continuum description, but some details of the behavior are missed by continuum descriptions. The multi-scale model paradigm envisions flow of information from smaller scales (atomic, dislocation, etc.) to the grain or mesoscale and then up to the continuum scale. A significant challenge in this approach is the need to validate each step. For the grain scale, diagnosing behavior is challenging because of the small spatial and temporal scales involved. Spatially resolved diagnostics have begun to shed light on these processes, and, more recently, advanced light sources have started to be used to probe behavior at the grain scale. In this talk, I will discuss some interesting phenomena that occur at the grain scale in shock loading, experimental approaches to probe the grain scale, and efforts to link the grain scale to smaller and larger scales. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE.

  8. [Correlations between Beck's suicidal ideation scale, suicidal risk assessment scale RSD and Hamilton's depression rating scale]. (United States)

    Ducher, J-L; Dalery, J


    Most people who attempt suicide talk about it beforehand; recognition of suicidal risk is therefore not impossible. Beck's suicidal ideation scale and Ducher's suicidal risk assessment scale (RSD) are common tools to help practitioners in this task. These scales and the Hamilton depression scale were included in an international, multicentric, phase IV, double-blind study with two parallel groups administered a fixed dose of fluvoxamine or fluoxetine for six weeks. This allowed examination of the correlations between these scales and the relations which could exist between suicidal risk, depression and anxiety. (a) Relationships between Beck's suicidal ideation scale, the suicidal risk assessment scale RSD and Hamilton's depression scale before treatment. Before treatment, the analysis was conducted with 108 male and female depressive outpatients aged 18 or over. Results revealed a significant positive correlation (Pearson's correlation coefficient r equal to 0.69) between Beck's scale and the RSD. These scales correlate less consistently with Hamilton's depression scale (Beck/Hamilton: r=0.34, p=0.0004; RSD/Hamilton: r=0.35, p=0.0002). We observed that the clinical anxiety scale by Snaith is also strongly correlated with these two suicidal risk assessment scales (Beck/CAS: r=0.48; RSD/CAS: r=0.35, p=0.0005). Besides, the item "suicide" of Hamilton's depression scale accounts for more than a third of the variability of Beck's suicidal ideation scale and of the suicidal risk assessment scale RSD. According to these results, the suicidal risk evaluated by these two scales seems to be correlated with anxiety as much as with depression. On the other hand, the Clinical Global Impression is only fairly correlated with Beck's suicidal ideation scale (r=0.22, p=0.02), unlike the suicidal risk assessment scale RSD (r=0.42). (b) Relationships between Beck's suicidal ideation scale, the suicidal risk assessment scale RSD and Hamilton's depression scale under treatment. The follow-up under
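
    The statistics quoted above are ordinary Pearson product-moment correlations. As a reminder of what r measures (the data below are made up for illustration, not from the study), a minimal sketch:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # covariance term and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

    r runs from -1 (perfect inverse relation) through 0 (no linear relation) to +1 (perfect direct relation); a value such as 0.69 between two scales indicates a strong but far from perfect agreement.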

  9. The Scales of Injustice

    Directory of Open Access Journals (Sweden)

    Charles Blattberg


    Full Text Available This paper criticises four major approaches to criminal law – consequentialism, retributivism, abolitionism, and “mixed” pluralism – each of which, in its own fashion, affirms the celebrated emblem of the “scales of justice.” The argument is that there is a better way of dealing with the tensions that often arise between the various legal purposes than by merely balancing them against each other. It consists, essentially, of striving to genuinely reconcile those purposes, a goal which is shown to require taking a new, “patriotic” approach to law.

  10. Excitable scale free networks (United States)

    Copelli, M.; Campos, P. R. A.


    When a simple excitable system is continuously stimulated by a Poissonian external source, the response function (mean activity versus stimulus rate) generally shows a linear saturating shape. This is experimentally verified in some classes of sensory neurons, which accordingly present a small dynamic range (defined as the interval of stimulus intensity which can be appropriately coded by the mean activity of the excitable element), usually about one or two decades only. The brain, on the other hand, can handle a significantly broader range of stimulus intensity, and a collective phenomenon involving the interaction among excitable neurons has been suggested to account for the enhancement of the dynamic range. Since the role of the pattern of such interactions is still unclear, here we investigate the performance of a scale-free (SF) network topology in this dynamic range problem. Specifically, we study the transfer function of disordered SF networks of excitable Greenberg-Hastings cellular automata. We observe that the dynamic range is maximum when the coupling among the elements is critical, corroborating a general reasoning recently proposed. Although the maximum dynamic range yielded by general SF networks is slightly worse than that of random networks, for special SF networks which lack loops the enhancement of the dynamic range can be dramatic, reaching nearly five decades. In order to understand the role of loops on the transfer function we propose a simple model in which the density of loops in the network can be gradually increased, and show that this is accompanied by a gradual decrease of dynamic range.
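
    The collective mechanism described above is straightforward to prototype. The sketch below is an assumption-laden stand-in (a ring lattice instead of a scale-free graph, and illustrative parameter choices): it runs Greenberg-Hastings excitable cellular automata on a network and returns the mean excited fraction for a given external stimulus probability. Sweeping the stimulus rate and reading off where the response reaches 10 % and 90 % of saturation yields the dynamic range discussed in the abstract.

```python
import random

def greenberg_hastings(adj, p_stim, n_states=3, steps=300, seed=1):
    """Mean excited fraction of a Greenberg-Hastings automaton on a graph.

    States: 0 = quiescent, 1 = excited, 2..n_states-1 = refractory.
    A quiescent node fires if external input arrives (probability
    p_stim per step, mimicking a Poissonian stimulus) or if any
    neighbour was excited on the previous step.
    """
    rng = random.Random(seed)
    n = len(adj)
    state = [0] * n
    excited = 0
    for _ in range(steps):
        nxt = []
        for i in range(n):
            if state[i] == 0:
                fire = rng.random() < p_stim or any(state[j] == 1 for j in adj[i])
                nxt.append(1 if fire else 0)
            else:
                nxt.append((state[i] + 1) % n_states)  # advance refractory cycle
        state = nxt
        excited += state.count(1)
    return excited / (n * steps)

# Ring lattice of 100 nodes as a simple stand-in topology.
ring = [[(i - 1) % 100, (i + 1) % 100] for i in range(100)]
```

    With no external drive the network stays silent, and at strong drive the response saturates near 1/n_states because each firing is followed by a refractory period; the topology, and in particular the density of loops highlighted in the abstract, shapes how the response stretches between those limits.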

  11. Integrating Local Scale Drainage Measures in Meso Scale Catchment Modelling

    Directory of Open Access Journals (Sweden)

    Sandra Hellmers


    Full Text Available This article presents a methodology to optimize the integration of local-scale drainage measures in catchment modelling. The methodology makes it possible to zoom into the processes (physically, spatially and temporally) where detailed physically based computation is required, and to zoom out where lumped, conceptualized approaches are applied. It allows the definition of parameters and computation procedures on different spatial and temporal scales. Three methods are developed to integrate features of local-scale drainage measures in catchment modelling: (1) different types of local drainage measures are spatially integrated in catchment modelling by a data mapping; (2) interlinked drainage features between data objects are enabled on the meso, local and micro scales; (3) a method for modelling multiple interlinked layers on the micro scale is developed. For the computation of flow routing on the meso scale, the results of the local-scale measures are aggregated according to their contributing inlet in the network structure. The implementation of the methods is realized in a semi-distributed rainfall-runoff model. The implemented micro-scale approach is validated against a physical laboratory model to confirm the credibility of the model. A study of a river catchment of 88 km2 illustrated the applicability of the model on the regional scale.

  12. Scaling Effects on Materials Tribology: From Macro to Micro Scale (United States)

    Stoyanov, Pantcho; Chromik, Richard R.


    The tribological study of materials inherently involves the interaction of surface asperities at the micro to nanoscopic length scales. This is the case for large scale engineering applications with sliding contacts, where the real area of contact is made up of small contacting asperities that make up only a fraction of the apparent area of contact. This is why researchers have sought to create idealized experiments of single asperity contacts in the field of nanotribology. At the same time, small scale engineering structures known as micro- and nano-electromechanical systems (MEMS and NEMS) have been developed, where the apparent area of contact approaches the length scale of the asperities, meaning the real area of contact for these devices may be only a few asperities. This is essentially the field of microtribology, where the contact size and/or forces involved have pushed the nature of the interaction between two surfaces towards the regime where the scale of the interaction approaches that of the natural length scale of the features on the surface. This paper provides a review of microtribology with the purpose to understand how tribological processes are different at the smaller length scales compared to macrotribology. Studies of the interfacial phenomena at the macroscopic length scales (e.g., using in situ tribometry) will be discussed and correlated with new findings and methodologies at the micro-length scale. PMID:28772909

  14. Plague and Climate: Scales Matter (United States)

    Ben Ari, Tamara; Neerinckx, Simon; Gage, Kenneth L.; Kreppel, Katharina; Laudisoit, Anne; Leirs, Herwig; Stenseth, Nils Chr.


    Plague is enzootic in wildlife populations of small mammals in central and eastern Asia, Africa, South and North America, and has been recognized recently as a reemerging threat to humans. Its causative agent Yersinia pestis relies on wild rodent hosts and flea vectors for its maintenance in nature. Climate influences all three components (i.e., bacteria, vectors, and hosts) of the plague system and is a likely factor to explain some of plague's variability from small and regional to large scales. Here, we review effects of climate variables on plague hosts and vectors from individual or population scales to studies on the whole plague system at a large scale. Upscaled versions of small-scale processes are often invoked to explain plague variability in time and space at larger scales, presumably because similar scale-independent mechanisms underlie these relationships. This linearity assumption is discussed in the light of recent research that suggests some of its limitations. PMID:21949648

  15. H2@Scale Workshop Report

    Energy Technology Data Exchange (ETDEWEB)

    Pivovar, Bryan


    Final report from the H2@Scale Workshop held November 16-17, 2016, at the National Renewable Energy Laboratory in Golden, Colorado. The U.S. Department of Energy's National Renewable Energy Laboratory hosted a technology workshop to identify the current barriers and research needs of the H2@Scale concept. H2@Scale is a concept regarding the potential for wide-scale impact of hydrogen produced from diverse domestic resources to enhance U.S. energy security and enable growth of innovative technologies and domestic industries. Feedback received from a diverse set of stakeholders at the workshop will guide the development of an H2@Scale roadmap for research, development, and early stage demonstration activities that can enable hydrogen as an energy carrier at a national scale.

  16. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori


    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical process in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: Enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale Offers engineers and designers a new point of view, liberating creative and inno...

  17. A Short Boredom Proneness Scale. (United States)

    Struk, Andriy A; Carriere, Jonathan S A; Cheyne, J Allan; Danckert, James


    It has been evident for some time that the Boredom Proneness Scale (BPS), a commonly used measure of trait boredom, does not constitute a single scale. Factor analytic studies have identified anything from two to seven factors, prompting Vodanovich and colleagues to propose an alternative two-factor short form, the Boredom Proneness Scale-Short Form (BPS-SR). The present study further investigates the factor structure and validity of both the BPS and the BPS-SR. The two-factor solution obtained for the BPS-SR appears to be an artifact of the wording of reverse-scored items. These same items may also have contributed to the earlier complexity and inconsistency of results for the full BPS. An eight-item scale of only consistently worded items (i.e., those not requiring reverse scoring) was developed. This new scale demonstrated unidimensionality, and the scale score had good internal consistency and construct validity comparable to the original BPS score.

  18. Natural Scales in Geographical Patterns (United States)

    Menezes, Telmo; Roth, Camille


    Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.
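
    The pipeline the abstract describes (threshold a movement network at increasing distance percentiles, partition it, and look for discontinuities across the partition sequence) can be sketched in pure Python. This is a hypothetical illustration, not the authors' code: it substitutes connected components for community detection, and flags any change in the partition count as a candidate phase transition.

```python
import math

def count_components(n, edges):
    """Count connected components of an n-node graph via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(i) for i in range(n)})

def natural_scales(points, moves, n_percentiles=20):
    """Partition counts of the movement network thresholded at increasing
    distance percentiles, plus the percentile steps where the count changes
    (candidate phase transitions)."""
    dists = sorted(math.dist(points[a], points[b]) for a, b in moves)
    counts = []
    for p in range(1, n_percentiles + 1):
        idx = max(0, min(len(dists), len(dists) * p // n_percentiles) - 1)
        cutoff = dists[idx]
        edges = [(a, b) for a, b in moves
                 if math.dist(points[a], points[b]) <= cutoff]
        counts.append(count_components(len(points), edges))
    jumps = [i for i in range(1, len(counts)) if counts[i] != counts[i - 1]]
    return counts, jumps
```

    On toy data with two tight clusters joined by one long-distance move, the partition count stays at 2 for short-range percentiles and drops to 1 once the long link is admitted, which is the kind of discontinuity the paper detects.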

  19. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo


    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  20. A medium scale mobile rainfall simulator for experiments on soil erosion and soil hydrology (United States)

    Kavka, Petr; Dostál, Tomáš; Iserloh, Thomas; Davidová, Tereza; Krása, Josef; David, Václav; Vopravil, Jan; Khel, Tomáš; Bauer, Miroslav


    Numerous types of rainfall simulators (RS) have been used to study the behaviour of surface runoff and sediment transport caused by rainfall. It has been documented that reproducibility and knowledge of the test conditions are essential for gathering the necessary, comparable data. Medium- to large-scale field rainfall simulators are therefore very desirable. Such devices are nevertheless time-consuming and labour-intensive to operate, and their particular weakness is high water consumption. A new, compact and mobile medium-scale rainfall simulator has been developed in close cooperation between CTU Prague and the Research Institute of Soil Conservation. The main idea was to develop a device that can easily be handled by 4 persons, transported on a trailer behind an off-road car, and operated independently of additional water and energy sources. Therefore, a special construction fixed on a standard trailer has been developed. It consists of an aggregate to produce power, an electric pump, and a water tank with a capacity of up to 1000 l. The pump can work in reverse mode, which allows the water tank to be filled from any source, including a stream or pond. The capacity of the tank is normally sufficient for experiments with durations up to 30 minutes. The RS itself consists of a folding arm, which carries 4 nozzles (SS Full Jet 40WSQ) controlled by electromagnetic valves, whose opening intervals allow the desired rainfall intensity to be set. A simple logic unit allows various schemes of operation of the individual nozzles to be programmed, to keep pressure fluctuation in the system low. The arm is first unfolded to a total length of 9.6 m and then lifted, using a simple crab winch, to its operating position 2.3-2.65 m above the terrain surface. The distance between individual nozzles has been optimized at 2.4 m based on a number of calibration experiments. 
There is also dedicated space on the trailer for transporting the metal sheets and collector (for the experimental plot), additional equipment, tools and

  1. Towards filtered drag force model for non-cohesive and cohesive particle-gas flows (United States)

    Ozel, Ali; Gu, Yile; Milioli, Christian C.; Kolehmainen, Jari; Sundaresan, Sankaran


    Euler-Lagrange simulations of gas-solid flows in unbounded domains have been performed to study sub-grid modeling of the filtered drag force for non-cohesive and cohesive particles. The filtered drag forces under various microstructures and flow conditions were analyzed in terms of two sub-grid quantities: the sub-grid drift velocity, which stems from the sub-grid correlation between the local fluid velocity and the local particle volume fraction, and the scalar variance of the solid volume fraction, which measures the degree of local inhomogeneity of the volume fraction within a filter volume. The results show that the drift velocity and the scalar variance exert systematic effects on the filtered drag force. Effects of particle and domain sizes, gravitational accelerations, and mass loadings on the filtered drag are also studied, and it is shown that these effects can be captured by both sub-grid quantities. Additionally, the effect of cohesion through the van der Waals interaction on the filtered drag force is investigated, and it is found that there is no significant difference in the dependence of the filtered drag coefficient of cohesive and non-cohesive particles on the sub-grid drift velocity or the scalar variance of solid volume fraction. The predictability of the sub-grid quantities was assessed by correlation coefficient analyses in an a priori manner, and the drift velocity was found to be superior. However, the drift velocity is not available in "coarse-grid" simulations, and a specific closure is needed. A dynamic scale-similarity approach was used to model the drift velocity, but the predictability of that model is not entirely satisfactory. It is concluded that one must develop a more elaborate model for estimating the drift velocity in "coarse-grid" simulations.
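
    The two sub-grid quantities named in the abstract can be written down directly. The following is a minimal pure-Python sketch (not the authors' code, and one-dimensional for clarity): given fine-grid values of solid volume fraction and gas velocity within a single filter volume, it computes the drift velocity as the difference between the phase-weighted and plain filter averages of the gas velocity, and the scalar variance of the volume fraction.

```python
def filtered_subgrid_quantities(phi, u_gas):
    """Sub-grid quantities from fine-grid cells inside one filter volume:
    drift velocity (sub-grid correlation between fluid velocity and solid
    volume fraction) and scalar variance of the solid volume fraction."""
    n = len(phi)
    mean_phi = sum(phi) / n
    mean_u = sum(u_gas) / n
    # phase-weighted average: the fluid velocity "seen" by the particles
    u_seen = sum(p * u for p, u in zip(phi, u_gas)) / sum(phi)
    drift = u_seen - mean_u                      # zero if phi and u are uncorrelated
    variance = sum(p * p for p in phi) / n - mean_phi ** 2
    return drift, variance
```

    With uniform volume fraction the drift velocity and variance both vanish; when high-phi cells coincide with low gas velocity (clustering), the drift velocity is negative, which is the systematic effect on the filtered drag the abstract describes.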

  2. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris


    Provides cutting-edge research in large-scale data analytics from diverse scientific areas; surveys varied subject areas and reports on individual results of research in the field; shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field.

  3. Transistor scaling with novel materials


    Meikei Ieong; Vijay Narayanan; Dinkar Singh; Anna Topol; Victor Chan; Zhibin Ren


    Complementary metal-oxide-semiconductor (CMOS) transistor scaling will continue for at least another decade. However, innovation in transistor structures and integration of novel materials are needed to sustain this performance trend. Here we discuss the challenges and opportunities of transistor scaling for the next five to ten years.

  4. A Scale of Mobbing Impacts (United States)

    Yaman, Erkan


    The aim of this research was to develop the Mobbing Impacts Scale and to examine its validity and reliability analyses. The sample of study consisted of 509 teachers from Sakarya. In this study construct validity, internal consistency, test-retest reliabilities and item analysis of the scale were examined. As a result of factor analysis for…

  5. Voice, Schooling, Inequality, and Scale (United States)

    Collins, James


    The rich studies in this collection show that the investigation of voice requires analysis of "recognition" across layered spatial-temporal and sociolinguistic scales. I argue that the concepts of voice, recognition, and scale provide insight into contemporary educational inequality and that their study benefits, in turn, from paying attention to…

  6. Understanding Scale: Powers of Ten (United States)

    Jones, M. Gail; Taylor, Amy; Minogue, James; Broadwell, Bethany; Wiebe, Eric; Carter, Glenda


    The classic film "Powers of Ten" is often employed to catalyze the building of more accurate conceptions of scale, yet its effectiveness is largely unknown. This study examines the impact of the film on students' concepts of size and scale. Twenty-two middle school students and six science teachers participated. Students completed pre- and…


    NARCIS (Netherlands)



    The conditions for a scaling behaviour from the fragmentation process leading to slow protons are discussed. The scaling referred to implies that the fragmentation functions depend on the light-cone momentum fraction only. It is shown that differences in the fragmentation functions for valence- and

  8. Multi-scale brain networks

    CERN Document Server

    Betzel, Richard F


    The network architecture of the human brain has become a feature of increasing interest to the neuroscientific community, largely because of its potential to illuminate human cognition, its variation over development and aging, and its alteration in disease or injury. Traditional tools and approaches to study this architecture have largely focused on single scales -- of topology, time, and space. Expanding beyond this narrow view, we focus this review on pertinent questions and novel methodological advances for the multi-scale brain. We separate our exposition into content related to multi-scale topological structure, multi-scale temporal structure, and multi-scale spatial structure. In each case, we recount empirical evidence for such structures, survey network-based methodological approaches to reveal these structures, and outline current frontiers and open questions. Although predominantly peppered with examples from human neuroimaging, we hope that this account will offer an accessible guide to any neuros...

  9. The QT Scale: A Weight Scale Measuring the QTc Interval. (United States)

    Couderc, Jean-Philippe; Beshaw, Connor; Niu, Xiaodan; Serrano-Finetti, Ernesto; Casas, Oscar; Pallas-Areny, Ramon; Rosero, Spencer; Zareba, Wojciech


    Despite the strong evidence of the clinical utility of QTc prolongation as a surrogate marker of cardiac risk, QTc measurement is not part of clinical routine either in hospital or in physician offices. We evaluated a novel device ("the QT scale") to measure heart rate (HR) and QTc interval. The QT scale is a weight scale embedding an ECG acquisition system with four limb sensors (feet and hands: lead I, II, and III). We evaluated the reliability of the QT scale in healthy subjects (cohort 1) and cardiac patients (cohorts 2 and 3), considering a learning cohort (cohort 2) and two validation cohorts. The QT scale and the standard 12-lead recorder were compared using the intraclass correlation coefficient (ICC) in cohorts 2 and 3. Absolute values of heart rate and QTc intervals between manual and automatic measurements using ECGs from the QT scale and a clinical device were compared in cohort 1. We enrolled 16 subjects in cohort 1 (8 w, 8 m; 32 ± 8 vs 34 ± 10 years, P = 0.7), 51 patients in cohort 2 (13 w, 38 m; 61 ± 16 vs 58 ± 18 years, P = 0.6), and 13 AF patients in cohort 3 (4 w, 9 m; 63 ± 10 vs 64 ± 10 years, P = 0.9). Similar automatic heart rate and QTc were delivered by the scale and the clinical device in cohort 1: paired differences in RR and QTc were -7 ± 34 milliseconds (P = 0.37) and 3.4 ± 28.6 milliseconds (P = 0.64), respectively. Measurement stability was slightly lower in ECGs from the QT scale than in those from the clinical device (ICC: 91% vs 80%) in cohort 3. The "QT scale device" delivers valid heart rate and QTc interval measurements. © 2016 Wiley Periodicals, Inc.
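
    As a rough illustration of the agreement statistic reported here, the following computes a one-way random-effects ICC(1,1) from paired device readings. This is a hedged sketch: the abstract does not state which ICC form the authors used, and the data in the usage note are invented.

```python
def icc_1_1(pairs):
    """One-way random-effects ICC(1,1) for n subjects each measured by
    k = 2 devices (e.g. QTc in ms from the QT scale vs a 12-lead recorder).
    A plausible stand-in; the paper's exact ICC variant is not specified."""
    k = 2
    n = len(pairs)
    grand = sum(sum(p) for p in pairs) / (n * k)
    means = [sum(p) / k for p in pairs]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for p, m in zip(pairs, means) for x in p) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

    Identical readings from both devices give an ICC of 1.0; small per-device disagreements pull it below 1, which is how the 80-91% figures in the abstract should be read.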

  10. Scaling effect and its impact on wavelength-scale microlenses (United States)

    Kim, Myun-Sik; Scharf, Toralf; Herzig, Hans Peter; Voelkel, Reinhard


    We revisit the scaling laws in micro-optical systems to highlight new phenomena arising beyond the conventional optical regime, especially when the size of the system approaches the operational wavelength. Our goal is to visualize the impact of the scaling effect in the micrometer-sized domain. First, we will show where the conventional optical regime fades away and unexpected responses arise. We will show this by using a ball lens as an example. Second, we discuss the scaling effect in the Fresnel number of lens systems. Moving toward wavelength-scale microlenses, a specific value of the Fresnel number leads to a giant focal shift with strong focal power. Our study will give comprehensive insights into the birth of unanticipated phenomena in miniaturized optical systems.
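
    The regime change the abstract describes is conventionally tracked by the Fresnel number, N = a²/(λf) for aperture radius a, wavelength λ, and focal length f; large N means classical ray-like focusing, while N of order 1 is where strong focal shifts appear. A small numeric sketch (the lens dimensions below are illustrative assumptions, not values from the paper):

```python
def fresnel_number(radius, wavelength, focal_length):
    """Fresnel number N = a^2 / (lambda * f). As N approaches ~1 the
    system leaves the conventional optical regime and large focal
    shifts can appear."""
    return radius ** 2 / (wavelength * focal_length)

# Illustrative comparison: a "large" microlens vs a wavelength-scale one
n_large = fresnel_number(50e-6, 0.5e-6, 500e-6)   # 50 um radius -> N = 10
n_small = fresnel_number(5e-6, 0.5e-6, 50e-6)     # 5 um radius  -> N = 1
```
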

  11. A Figurine and its Scale, a Scale and its Figurine

    Directory of Open Access Journals (Sweden)

    Fotis Ifantidis


    I was taught to think of archaeological photography as faceless: a to-scale, accurate depiction of ancient artefacts and sites. But these rules only apply to one part of archaeological photography, the 'official' one.

  12. Weyl current, scale-invariant inflation, and Planck scale generation (United States)

    Ferreira, Pedro G.; Hill, Christopher T.; Ross, Graham G.


    Scalar fields, ϕi, can be coupled nonminimally to curvature and satisfy the general criteria: (i) the theory has no mass input parameters, including MP=0 ; (ii) the ϕi have arbitrary values and gradients, but undergo a general expansion and relaxation to constant values that satisfy a nontrivial constraint, K (ϕi)=constant; (iii) this constraint breaks scale symmetry spontaneously, and the Planck mass is dynamically generated; (iv) there can be adequate inflation associated with slow roll in a scale-invariant potential subject to the constraint; (v) the final vacuum can have a small to vanishing cosmological constant; (vi) large hierarchies in vacuum expectation values can naturally form; (vii) there is a harmless dilaton which naturally eludes the usual constraints on massless scalars. These models are governed by a global Weyl scale symmetry and its conserved current, Kμ. At the quantum level the Weyl scale symmetry can be maintained by an invariant specification of renormalized quantities.

  13. Scaling limits of a model for selection at two scales (United States)

    Luo, Shishi; Mattingly, Jonathan C.


    The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval [0,1] with dependence on a single parameter, λ. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on λ and the behavior of the initial data around 1. The second scaling leads to a measure-valued Fleming-Viot process, an infinite dimensional stochastic process that is frequently associated with a population genetics.

  14. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K


    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  15. Dimensional scaling in chemical physics

    CERN Document Server

    Avery, John; Goscinski, Osvaldo


    Dimensional scaling offers a new approach to quantum dynamical correlations. This is the first book dealing with dimensional scaling methods in the quantum theory of atoms and molecules. Appropriately, it is a multiauthor production, derived chiefly from papers presented at a workshop held in June 1991 at the Ørsted Institute in Copenhagen. Although focused on dimensional scaling, the volume includes contributions on other unorthodox methods for treating nonseparable dynamical problems and electronic correlation. In shaping the book, the editors serve three needs: an introductory tutorial for this still fledgling field; a guide to the literature; and an inventory of current research results and prospects. Part I treats basic aspects of dimensional scaling. Addressed to readers entirely unfamiliar with the subject, it provides both a qualitative overview, and a tour of elementary quantum mechanics. Part II surveys the research frontier. The eight chapters exemplify current techniques and outline results. Part...

  16. Scaling of graphene integrated circuits. (United States)

    Bianchi, Massimiliano; Guerriero, Erica; Fiocco, Marco; Alberti, Ruggero; Polloni, Laura; Behnam, Ashkan; Carrion, Enrique A; Pop, Eric; Sordan, Roman


    The influence of transistor size reduction (scaling) on the speed of realistic multi-stage integrated circuits (ICs) represents the main performance metric of a given transistor technology. Despite extensive interest in graphene electronics, scaling efforts have so far focused on individual transistors rather than multi-stage ICs. Here we study the scaling of graphene ICs based on transistors from 3.3 to 0.5 μm gate lengths and with different channel widths, access lengths, and lead thicknesses. The shortest gate delay of 31 ps per stage was obtained in sub-micron graphene ring oscillators (ROs) oscillating at 4.3 GHz, which is the highest oscillation frequency obtained in any strictly low-dimensional material to date. We also derived the fundamental Johnson limit, showing that scaled graphene ICs could be used at high frequencies in applications with small voltage swing.

  17. Scaling behaviour of entropy estimates (United States)

    Schürmann, Thomas


    Entropy estimation of information sources is highly non-trivial for symbol sequences with strong long-range correlations. The rabbit sequence, related to the symbolic dynamics of the nonlinear circle map at the critical point as well as the logistic map at the Feigenbaum point, is known to produce long memory tails. For both dynamical systems the scaling behaviour of the block entropy of order n has been shown to increase ∝log n. In contrast to such probabilistic concepts, we investigate the scaling behaviour of certain non-probabilistic entropy estimation schemes suggested by Lempel and Ziv (LZ) in the context of algorithmic complexity and data compression. These are applied in a sequential manner with the scaling variable being the length N of the sequence. We determine the scaling law for the LZ entropy estimate applied to the case of the critical circle map and the logistic map at the Feigenbaum point in a binary partition.
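
    The scheme can be sketched concretely: generate the rabbit (Fibonacci) word, parse it with the Lempel-Ziv (1976) scheme, and form the sequential entropy estimate h(N) ≈ c(N)·log₂N / N in bits per symbol. This is a minimal illustration of the approach under standard definitions, not the authors' exact implementation.

```python
import math

def fibonacci_word(n):
    """First n symbols of the rabbit (Fibonacci) sequence, via s_k = s_{k-1} + s_{k-2}."""
    a, b = "0", "01"
    while len(b) < n:
        a, b = b, b + a
    return b[:n]

def lz76_phrases(s):
    """Number of phrases in the LZ76 parsing of s: each phrase is the
    shortest substring not occurring in the preceding text."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def lz_entropy_estimate(s):
    """Sequential LZ entropy estimate in bits/symbol: h_N = c(N) * log2(N) / N."""
    n = len(s)
    return lz76_phrases(s) * math.log2(n) / n
```

    Because the rabbit sequence has zero entropy rate, the estimate decays toward zero as the sequence length N grows, and the rate of that decay is exactly the scaling behaviour under study.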

  18. Pilot Scale Advanced Fogging Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Demmer, Rick L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Fox, Don T. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Archiblad, Kip E. [Idaho National Lab. (INL), Idaho Falls, ID (United States)


    Experiments in 2006 developed a useful fog solution using three different chemical constituents. Optimization of the fog recipe and use of commercially available equipment were identified as needs that had not been addressed. During 2012 development work it was noted that low concentrations of the components hampered coverage and drying in the United Kingdom’s National Nuclear Laboratory’s testing much more so than was evident in the 2006 tests. In fiscal year 2014 the Idaho National Laboratory undertook a systematic optimization of the fogging formulation and conducted a non-radioactive, pilot scale demonstration using commercially available fogging equipment. While not as sophisticated as the equipment used in earlier testing, the new approach is much less expensive and readily available for smaller scale operations. Pilot scale testing was important to validate new equipment of an appropriate scale, optimize the chemistry of the fogging solution, and to realize the conceptual approach.

  19. String no-scale supergravity

    CERN Document Server

    López, J L


    We explore the postulates of string no-scale supergravity in the context of free-fermionic string models. The requirements of vanishing vacuum energy, flat directions of the scalar potential, and a stable no-scale mechanism impose strong restrictions on possible string no-scale models, which must possess only two or three moduli, and a constrained massless spectrum. All soft-supersymmetry-breaking parameters involving untwisted fields are given explicitly and those involving twisted fields are conjectured. This class of models contains no free parameters, i.e., in principle all supersymmetric particle masses and interactions are completely determined. A computerized search for free-fermionic models with the desired properties yields a candidate SU(5)×U(1) model, and evidence that all such models contain extra (10, 10-bar) matter representations that allow gauge coupling unification at the string scale. Our candidate model possesses a novel assignment of supersymmetry breaking scalar masses which gives v...

  20. Hidden scale invariance of metals

    DEFF Research Database (Denmark)

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.


    Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general “hidden” scale invariance… of iron and phosphorus are shown to increase at elevated pressures. Finally, we discuss how scale invariance explains the Grüneisen equation of state and a number of well-known empirical melting and freezing rules…

  1. Scale issues in remote sensing

    CERN Document Server

    Weng, Qihao


    This book provides up-to-date developments, methods, and techniques in the field of GIS and remote sensing and features articles from internationally renowned authorities on three interrelated perspectives of scaling issues: scale in land surface properties, land surface patterns, and land surface processes. The book is ideal as a professional reference for practicing geographic information scientists and remote sensing engineers as well as a supplemental reading for graduate level students.

  2. Scaling exponents of star polymers


    von Ferber, Christian; Holovatch, Yurij


    We review recent results of the field theoretical renormalization group analysis on the scaling properties of star polymers. We give a brief account of how the numerical values of the exponents governing the scaling of star polymers were obtained as well as provide some examples of the phenomena governed by these exponents. In particular we treat the interaction between star polymers in a good solvent, the Brownian motion near absorbing polymers, and diffusion-controlled reactions involving p...

  3. Two-Dimensional Vernier Scale (United States)

    Juday, Richard D.


    Modified vernier scale gives accurate two-dimensional coordinates from maps, drawings, or cathode-ray-tube displays. Movable circular overlay rests on fixed rectangular-grid overlay. Pitch of circles nine-tenths that of grid and, for greatest accuracy, radii of circles large compared with pitch of grid. Scale enables user to interpolate between finest divisions of regularly spaced rule simply by observing which mark on auxiliary vernier rule aligns with mark on primary rule.

  4. Normalization of emotion control scale

    Directory of Open Access Journals (Sweden)

    Hojatoolah Tahmasebian


    Background: Emotion control skill teaches individuals how to identify their emotions and how to express and control them in various situations. The aim of this study was to normalize and measure the internal and external validity and reliability of the emotion control test. Methods: This standardization study was carried out on a statistical population comprising all pupils, students, teachers, nurses and university professors in Kermanshah in 2012, using Williams' emotion control scale. The subjects were 1,500 people (810 females and 690 males) selected by stratified random sampling. Williams' (1997) Emotion Control Scale was used to collect the required data. The Emotion Control Scale is a tool for measuring the degree of control people have over their emotions. The scale has four subscales: anger, depressed mood, anxiety and positive affect. The collected data were analyzed with SPSS using correlation and Cronbach's alpha tests. Results: The internal consistency of the questionnaire, reported by Cronbach's alpha, was acceptable for the emotion control scale, and the correlations between the subscales of the test and between the items of the questionnaire were significant at the 0.01 level. Conclusion: The validity of the emotion control scale among pupils, students, teachers and nurses in Iran is in an acceptable range, and the test items were correlated with each other, making them appropriate for measuring emotion control.
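
    The internal-consistency statistic used in the study, Cronbach's alpha, follows directly from item variances: alpha = k/(k-1) · (1 - Σ var(item) / var(total)). A minimal sketch with invented scores (not the study's data):

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for rows = one list of k item scores per respondent."""
    k = len(rows[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

    Perfectly parallel items give alpha = 1; partially correlated items give a value between 0 and 1, with values above ~0.7 conventionally read as acceptable internal consistency.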

  5. The Menopause Rating Scale (MRS scale: A methodological review

    Directory of Open Access Journals (Sweden)

    Strelow Frank


    Background: This paper compiles data from different sources to give a first comprehensive picture of the psychometric and other methodological characteristics of the Menopause Rating Scale (MRS). The scale was designed and standardized as a self-administered scale (a) to assess symptoms/complaints of aging women under different conditions, (b) to evaluate the severity of symptoms over time, and (c) to measure changes pre- and post menopause replacement therapy. The scale has become widely used (available in 10 languages). Method: A large multinational survey (9 countries on 4 continents) from 2001/2002 is the basis for in-depth analyses of the reliability and validity of the MRS. Additional small convenience samples were used to get first impressions about test-retest reliability. The data were centrally analyzed. Data from a postmarketing HRT study were used to estimate discriminative validity. Results: Reliability measures (consistency and test-retest stability) were found to be good across countries, although the sample size for test-retest reliability was small. Validity: the internal structure of the MRS was sufficiently similar across countries to conclude that the scale really measures the same phenomenon in symptomatic women. The sub-score and total score correlations were high (0.7-0.9) but lower among the sub-scales (0.5-0.7), which suggests that the subscales are not fully independent. Norm values from different populations were presented, showing that a direct comparison between Europe and North America is possible, but caution is recommended with comparisons of data from Latin America and Indonesia. This will not, however, affect intra-individual comparisons within clinical trials. The comparison with the Kupperman Index showed sufficiently good correlations, illustrating adept criterion-oriented validity. The same is true for the comparison with the generic quality-of-life scale SF-36, where also a sufficiently close association

  6. The Internet Gaming Disorder Scale. (United States)

    Lemmens, Jeroen S; Valkenburg, Patti M; Gentile, Douglas A


    Recently, the American Psychiatric Association included Internet gaming disorder (IGD) in the appendix of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). The main aim of the current study was to test the reliability and validity of 4 survey instruments to measure IGD on the basis of the 9 criteria from the DSM-5: a long (27-item) and short (9-item) polytomous scale and a long (27-item) and short (9-item) dichotomous scale. The psychometric properties of these scales were tested among a representative sample of 2,444 Dutch adolescents and adults, ages 13-40 years. Confirmatory factor analyses demonstrated that the structural validity (i.e., the dimensional structure) of all scales was satisfactory. Both types of assessment (polytomous and dichotomous) were also reliable (i.e., internally consistent) and showed good criterion-related validity, as indicated by positive correlations with time spent playing games, loneliness, and aggression and negative correlations with self-esteem, prosocial behavior, and life satisfaction. The dichotomous 9-item IGD scale showed solid psychometric properties and was the most practical scale for diagnostic purposes. Latent class analysis of this dichotomous scale indicated that 3 groups could be discerned: normal gamers, risky gamers, and disordered gamers. On the basis of the number of people in this last group, the prevalence of IGD among 13- through 40-year-olds in the Netherlands is approximately 4%. If the DSM-5 threshold for diagnosis (experiencing 5 or more criteria) is applied, the prevalence of disordered gamers is more than 5%. (c) 2015 APA, all rights reserved.

  7. Copper atomic-scale transistors

    Directory of Open Access Journals (Sweden)

    Fangqing Xie


    We investigated copper as a working material for metallic atomic-scale transistors and confirmed that copper atomic-scale transistors can be fabricated and operated electrochemically in a copper electrolyte (CuSO4 + H2SO4 in bi-distilled water) under ambient conditions with three microelectrodes (source, drain and gate). The electrochemical switching-on potential of the atomic-scale transistor is below 350 mV, and the switching-off potential is between 0 and −170 mV. The switching-on current is above 1 μA, which is compatible with semiconductor transistor devices. Both the sign and the amplitude of the voltage applied across the source and drain electrodes (Ubias) influence the switching rate of the transistor and the copper deposition on the electrodes, and correspondingly shift the electrochemical operation potential. The copper atomic-scale transistors can be switched using a function generator without a computer-controlled feedback switching mechanism. The copper atomic-scale transistors, with only one or two atoms at the narrowest constriction, were switched between 0 and 1 G0 (G0 = 2e²/h, with e being the electron charge and h being Planck's constant) or 2 G0 by the function generator. The switching rate can reach up to 10 Hz. The copper atomic-scale transistor demonstrates volatile/non-volatile dual functionalities. Such an optimal merging of logic with memory may open a perspective for processor-in-memory and logic-in-memory architectures, using copper as an alternative working material besides silver for fully metallic atomic-scale transistors.
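
    The conductance quantum G0 that the transistor switches in units of can be evaluated directly from the exact SI values of e and h (2019 redefinition); the sketch below is a generic numeric check, not code from the paper.

```python
# Conductance quantum G0 = 2e^2/h, the unit in which a single-atom
# constriction switches; e and h are exact in the 2019 SI.
E_CHARGE = 1.602176634e-19   # elementary charge, C
PLANCK_H = 6.62607015e-34    # Planck constant, J*s

G0 = 2 * E_CHARGE ** 2 / PLANCK_H   # ~7.748e-5 S
R0 = 1 / G0                         # ~12.906 kOhm per fully open channel
```

    A constriction switching between 0 and 1 G0 thus toggles between an open circuit and a resistance of roughly 12.9 kΩ, which is why the μA-level on-currents quoted in the abstract are compatible with semiconductor logic levels.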

  8. SETI and astrobiology: The Rio Scale and the London Scale (United States)

    Almár, Iván


The public reaction to a discovery, the character of the corresponding risk communication, as well as the possible impact on science and society all depend on the character of the phenomenon discovered, on the method of discovery, on the distance to the phenomenon and, last but not least, on the reliability of the announcement itself. The Rio Scale - proposed together with Jill Tarter just a decade ago at an IAA symposium in Rio de Janeiro - attempts to quantify the relative importance of such a “low probability, high consequence event”, namely the announcement of an ETI discovery. After the publication of the book “The Eerie Silence” by Paul Davies it is necessary to examine how the recently suggested possible “technosignatures” or “technomarkers” mentioned in this book could be evaluated by the Rio Scale. The new London Scale, proposed at the Royal Society meeting in January 2010, in London, is a similar attempt to quantify the impact of an announcement regarding the discovery of ET life on an analogous ordinal scale between zero and ten. Here again the new concept of a “shadow biosphere” raised in this book deserves special attention, since a “weird form of life” found on Earth would not necessarily have an extraterrestrial origin; nevertheless, it might be an important discovery in itself. Several arguments are presented that the methods, aims and targets of the “search for ET life” and the “search for ET intelligence” have recently been converging. The new question is raised of whether a unification of these two scales is necessary as a consequence of the convergence of the two subjects. Finally, it is suggested that experts in the social sciences should take the structure of the respective scales into consideration when investigating, case by case, the possible effects of such discoveries on society.

  9. Scale Construction: Motivation and Relationship Scale in Education

    Directory of Open Access Journals (Sweden)

    Yunus Emre Demir


Full Text Available The aim of this study is to analyze the validity and reliability of the Turkish version of the Motivation and Relationship Scale (MRS; Raufelder, Drury, Jagenow, Hoferichter & Bukowski, 2013). Participants were 526 secondary school students. The results of confirmatory factor analysis showed that the 21 items loaded on three factors and that the three-dimensional model fit the data well (χ2 = 640.04, df = 185, RMSEA = .068, NNFI = .90, CFI = .91, IFI = .91, SRMR = .079, GFI = .90, AGFI = .87). Overall, the findings demonstrated that the adapted MRS is a valid and reliable instrument for measuring secondary school students’ motivation in Turkey.

  10. Re-scaling Landscape. Re-scaling Identity

    Directory of Open Access Journals (Sweden)

    Julia Sulina


Full Text Available To understand the bonds that cultural groups living in Estonia have with their cultural landscape, and why they identify themselves with a particular territory (region), the general process by which the landscape figures in their identity needs to be analysed. The scales of landscape and regional identity of cultural groups are examined as belonging to different historical periods of social formation, including the present day, taking into account the relationship between identity and physical setting as well as the results of questionnaires and previous studies. The tendency is that, as society becomes more open, it is influenced by globalisation, new technologies and freedom of movement, changing the scales of both identities and landscapes.

  11. Scaling laws of Rydberg excitons (United States)

    Heckötter, J.; Freitag, M.; Fröhlich, D.; Aßmann, M.; Bayer, M.; Semina, M. A.; Glazov, M. M.


Rydberg atoms have attracted considerable interest due to their strong interactions with each other and with external fields. They demonstrate characteristic scaling laws in dependence on the principal quantum number n for features such as the magnetic field for level crossing or the electric field of dissociation. Recently, the observation of excitons in highly excited states has allowed studying Rydberg physics in cuprous oxide crystals. Fundamentally different insights may be expected for Rydberg excitons, as the crystal environment and associated symmetry reduction compared to vacuum give not only optical access to many more states within an exciton multiplet but also extend the Hamiltonian for describing the exciton beyond the hydrogen model. Here we study experimentally and theoretically the scaling of several parameters of Rydberg excitons with n, for some of which we indeed find laws different from those of atoms. For others we find identical scaling laws with n, even though their origin may be distinctly different from the atomic case. At zero field the energy splitting of a particular multiplet n scales as n^-3 due to crystal-specific terms in the Hamiltonian, e.g., from the valence band structure. From absorption spectra in magnetic field we find for the first crossing of levels with adjacent principal quantum numbers a Br ∝ n^-4 dependence of the resonance field strength, Br, due to the dominant paramagnetic term, unlike for atoms, for which the diamagnetic contribution is decisive, resulting in a Br ∝ n^-6 dependence. By contrast, the resonance electric field strength shows a scaling as Er ∝ n^-5, as for Rydberg atoms. Also, similar to atoms with the exception of hydrogen, we observe anticrossings between states belonging to multiplets with different principal quantum numbers at these resonances. The energy splittings at the avoided crossings scale roughly as n^-4, again due to crystal-specific features in the exciton Hamiltonian. The data also allow us to
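The power-law scalings quoted in this record can be applied as simple extrapolations; a minimal sketch (the reference value and quantum numbers below are illustrative placeholders, not measurements from the paper):

```python
# Power-law scaling laws for Rydberg excitons in cuprous oxide:
# multiplet splitting ~ n^-3, first level-crossing field Br ~ n^-4
# (atoms: Br ~ n^-6), resonance electric field Er ~ n^-5.
def scale(value_at_n0, n0, n, exponent):
    """Extrapolate a quantity obeying value ~ n**exponent from a reference point."""
    return value_at_n0 * (n / n0) ** exponent

# Hypothetical reference: Br = 1.0 (arbitrary units) at n0 = 10.
print(scale(1.0, 10, 20, -4))  # exciton law: (20/10)**-4 = 0.0625
print(scale(1.0, 10, 20, -6))  # atomic law:  (20/10)**-6 = 0.015625
```

Comparing the two outputs shows why the exponent matters experimentally: doubling n weakens the crossing field 16-fold under the exciton law but 64-fold under the atomic law.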

  12. Featured Invention: Laser Scaling Device (United States)

    Dunn, Carol Anne


    In September 2003, NASA signed a nonexclusive license agreement with Armor Forensics, a subsidiary of Armor Holdings, Inc., for the laser scaling device under the Innovative Partnerships Program. Coupled with a measuring program, also developed by NASA, the unit provides crime scene investigators with the ability to shoot photographs at scale without having to physically enter the scene, analyzing details such as bloodspatter patterns and graffiti. This ability keeps the scene's components intact and pristine for the collection of information and evidence. The laser scaling device elegantly solved a pressing problem for NASA's shuttle operations team and also provided industry with a useful tool. For NASA, the laser scaling device is still used to measure divots or damage to the shuttle's external tank and other structures around the launchpad. When the invention also met similar needs within industry, the Innovative Partnerships Program provided information to Armor Forensics for licensing and marketing the laser scaling device. Jeff Kohler, technology transfer agent at Kennedy, added, "We also invited a representative from the FBI's special photography unit to Kennedy to meet with Armor Forensics and the innovator. Eventually the FBI ended up purchasing some units. Armor Forensics is also beginning to receive interest from DoD [Department of Defense] for use in military crime scene investigations overseas."

  13. Multi-scale brain networks. (United States)

    Betzel, Richard F; Bassett, Danielle S


    The network architecture of the human brain has become a feature of increasing interest to the neuroscientific community, largely because of its potential to illuminate human cognition, its variation over development and aging, and its alteration in disease or injury. Traditional tools and approaches to study this architecture have largely focused on single scales-of topology, time, and space. Expanding beyond this narrow view, we focus this review on pertinent questions and novel methodological advances for the multi-scale brain. We separate our exposition into content related to multi-scale topological structure, multi-scale temporal structure, and multi-scale spatial structure. In each case, we recount empirical evidence for such structures, survey network-based methodological approaches to reveal these structures, and outline current frontiers and open questions. Although predominantly peppered with examples from human neuroimaging, we hope that this account will offer an accessible guide to any neuroscientist aiming to measure, characterize, and understand the full richness of the brain's multiscale network structure-irrespective of species, imaging modality, or spatial resolution. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Scaling Effect In Trade Network (United States)

    Konar, M.; Lin, X.; Rushforth, R.; Ruddell, B. L.; Reimer, J.


    Scaling is an important issue in the physical sciences. Economic trade is increasingly of interest to the scientific community due to the natural resources (e.g. water, carbon, nutrients, etc.) embodied in traded commodities. Trade refers to the spatial and temporal redistribution of commodities, and is typically measured annually between countries. However, commodity exchange networks occur at many different scales, though data availability at finer temporal and spatial resolution is rare. Exchange networks may prove an important adaptation measure to cope with future climate and economic shocks. As such, it is essential to understand how commodity exchange networks scale, so that we can understand opportunities and roadblocks to the spatial and temporal redistribution of goods and services. To this end, we present an empirical analysis of trade systems across three spatial scales: global, sub-national in the United States, and county-scale in the United States. We compare and contrast the network properties, the self-sufficiency ratio, and performance of the gravity model of trade for these three exchange systems.
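The gravity model of trade evaluated in the abstract above predicts pairwise flows from the economic masses of two regions and the distance between them; a minimal sketch (the constant k, the masses, and the distance exponent beta are illustrative placeholders, not parameters fit in the study):

```python
def gravity_flow(mass_i, mass_j, distance, k=1.0, beta=1.0):
    """Gravity model of trade: predicted flow F_ij = k * M_i * M_j / d_ij**beta."""
    return k * mass_i * mass_j / distance ** beta

# Toy example: regions with economic masses 300 and 200 (arbitrary units), 100 km apart.
print(gravity_flow(300, 200, 100))  # 600.0
```

Testing how well this simple form fits observed flows at the global, state, and county scales is one way the performance comparison described above can be framed.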

  15. Piezoelectricity of green carp scales (United States)

    Jiang, H. Y.; Yen, F.; Huang, C. W.; Mei, R. B.; Chen, L.


    Piezoelectricity takes part in multiple important functions and processes in biomaterials often vital to the survival of organisms. Here, we investigate the piezoelectric properties of fish scales of green carp by directly examining their morphology at nanometer levels. Two types of regions are found to comprise the scales, a smooth one and a rough one. The smooth region is comprised of a ridge and trough pattern and the rough region characterized by a flat base with an elevated mosaic of crescents. Piezoelectricity is found on the ridges and base regions of the scales. From clear distinctions between the composition of the inner and outer surfaces of the scales, we identify the piezoelectricity to originate from the presence of hydroxyapatite which only exists on the surface of the fish scales. Our findings reveal a different mechanism of how green carp are sensitive to their surroundings and should be helpful to studies related to the electromechanical properties of marine life and the development of bio-inspired materials.

  16. Allometric scaling in-vitro (United States)

    Ahluwalia, Arti


    About two decades ago, West and coworkers established a model which predicts that metabolic rate follows a three quarter power relationship with the mass of an organism, based on the premise that tissues are supplied nutrients through a fractal distribution network. Quarter power scaling is widely considered a universal law of biology and it is generally accepted that were in-vitro cultures to obey allometric metabolic scaling, they would have more predictive potential and could, for instance, provide a viable substitute for animals in research. This paper outlines a theoretical and computational framework for establishing quarter power scaling in three-dimensional spherical constructs in-vitro, starting where fractal distribution ends. Allometric scaling in non-vascular spherical tissue constructs was assessed using models of Michaelis Menten oxygen consumption and diffusion. The models demonstrate that physiological scaling is maintained when about 5 to 60% of the construct is exposed to oxygen concentrations less than the Michaelis Menten constant, with a significant concentration gradient in the sphere. The results have important implications for the design of downscaled in-vitro systems with physiological relevance.
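The Michaelis-Menten consumption kinetics used in the models above give an oxygen uptake rate that is first order at low concentration and saturates at high concentration; a minimal sketch (the Vmax and Km values are illustrative, not those of the paper):

```python
def michaelis_menten(c, v_max, k_m):
    """Oxygen consumption rate at local concentration c (saturating kinetics)."""
    return v_max * c / (k_m + c)

v_max, k_m = 1.0, 0.05  # illustrative units
# At c == Km the rate is half-maximal; in the physiological-scaling regime
# described above, part of the construct sits below Km, where the rate is
# approximately first order in c.
print(michaelis_menten(k_m, v_max, k_m))       # 0.5
print(michaelis_menten(10 * k_m, v_max, k_m))  # ~0.909
```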

  17. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.


, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from the micro-scale...

  18. Magnetotransport on the nano scale (United States)

    Willke, Philip; Kotzott, Thomas; Pruschke, Thomas; Wenderoth, Martin


    Transport experiments in strong magnetic fields show a variety of fascinating phenomena like the quantum Hall effect, weak localization or the giant magnetoresistance. Often they originate from the atomic-scale structure inaccessible to macroscopic magnetotransport experiments. To connect spatial information with transport properties, various advanced scanning probe methods have been developed. Capable of ultimate spatial resolution, scanning tunnelling potentiometry has been used to determine the resistance of atomic-scale defects such as steps and interfaces. Here we combine this technique with magnetic fields and thus transfer magnetotransport experiments to the atomic scale. Monitoring the local voltage drop in epitaxial graphene, we show how the magnetic field controls the electric field components. We find that scattering processes at localized defects are independent of the strong magnetic field while monolayer and bilayer graphene sheets show a locally varying conductivity and charge carrier concentration differing from the macroscopic average.

  19. Flavor hierarchies from dynamical scales

    Energy Technology Data Exchange (ETDEWEB)

    Panico, Giuliano [IFAE and BIST, Universitat Autònoma de Barcelona,Bellaterra, Barcelona, 08193 (Spain); Pomarol, Alex [IFAE and BIST, Universitat Autònoma de Barcelona,Bellaterra, Barcelona, 08193 (Spain); CERN, Theory Division,Geneva 23, CH-1211 (Switzerland); Dept. de Física, Universitat Autònoma de Barcelona,Bellaterra, Barcelona, 08193 (Spain)


    One main obstacle for any beyond the SM (BSM) scenario solving the hierarchy problem is its potentially large contributions to electric dipole moments. An elegant way to avoid this problem is to have the light SM fermions couple to the BSM sector only through bilinears, f̄f. This possibility can be neatly implemented in composite Higgs models. We study the implications of dynamically generating the fermion Yukawa couplings at different scales, relating larger scales to lighter SM fermions. We show that all flavor and CP-violating constraints can be easily accommodated for a BSM scale of few TeV, without requiring any extra symmetry. Contributions to B physics are mainly mediated by the top, giving a predictive pattern of deviations in ΔF=2 and ΔF=1 flavor observables that could be seen in future experiments.

  20. Flavor hierarchies from dynamical scales

    CERN Document Server

    Panico, Giuliano


    One main obstacle for any beyond the SM (BSM) scenario solving the hierarchy problem is its potentially large contributions to electric dipole moments. An elegant way to avoid this problem is to have the light SM fermions couple to the BSM sector only through bilinears, $\\bar ff$. This possibility can be neatly implemented in composite Higgs models. We study the implications of dynamically generating the fermion Yukawa couplings at different scales, relating larger scales to lighter SM fermions. We show that all flavor and CP-violating constraints can be easily accommodated for a BSM scale of few TeV, without requiring any extra symmetry. Contributions to B physics are mainly mediated by the top, giving a predictive pattern of deviations in $\\Delta F=2$ and $\\Delta F=1$ flavor observables that could be seen in future experiments.

  1. The Huntington's Disease Dysphagia Scale. (United States)

    Heemskerk, Anne-Wil; Verbist, Berit M; Marinus, Johan; Heijnen, Bas; Sjögren, Elisabeth V; Roos, Raymund A C


    Little is known about the swallowing disturbances of patients with Huntington's disease; therefore, we developed the Huntington's Disease Dysphagia Scale. The scale was developed in four stages: (1) item generation, (2) comprehension testing, (3) evaluation of reliability, (4) item reduction and validity testing. The questionnaire was presented twice to 50 Huntington's disease patients and their caregivers. The Kruskal-Wallis test was used to evaluate whether the severity of swallowing difficulties increased with advancing disease. Pearson's correlation coefficient was used to examine the construct validity with the Swallowing Disturbance Questionnaire. The final version contained 11 items with five response options and exhibited a Cronbach's alpha coefficient of 0.728. The severity of swallowing difficulties was significantly higher in more advanced Huntington's disease. The correlation with the Swallowing Disturbance Questionnaire was 0.734. We developed a valid and reliable 11-item scale to measure the severity of dysphagia in Huntington's disease. © 2014 International Parkinson and Movement Disorder Society.

  2. Scale problems in reporting landscape pattern at the regional scale (United States)

    R.V. O' Neill; C.T. Hunsaker; S.P. Timmins; B.L. Jackson; K.B. Jones; Kurt H. Riitters; James D. Wickham


Remotely sensed data for Southeastern United States (Standard Federal Region 4) are used to examine the scale problems involved in reporting landscape pattern for a large, heterogeneous region. Frequency distributions of landscape indices illustrate problems associated with the grain or resolution of the data. Grain should be 2 to 5 times smaller than the...

  3. Reconciling theories for metabolic scaling. (United States)

    Maino, James L; Kearney, Michael R; Nisbet, Roger M; Kooijman, Sebastiaan A L M


    Metabolic theory specifies constraints on the metabolic organisation of individual organisms. These constraints have important implications for biological processes ranging from the scale of molecules all the way to the level of populations, communities and ecosystems, with their application to the latter emerging as the field of metabolic ecology. While ecologists continue to use individual metabolism to identify constraints in ecological processes, the topic of metabolic scaling remains controversial. Much of the current interest and controversy in metabolic theory relates to recent ideas about the role of supply networks in constraining energy supply to cells. We show that an alternative explanation for physicochemical constraints on individual metabolism, as formalised by dynamic energy budget (DEB) theory, can contribute to the theoretical underpinning of metabolic ecology, while increasing coherence between intra- and interspecific scaling relationships. In particular, we emphasise how the DEB theory considers constraints on the storage and use of assimilated nutrients and derive an equation for the scaling of metabolic rate for adult heterotrophs without relying on optimisation arguments or implying cellular nutrient supply limitation. Using realistic data on growth and reproduction from the literature, we parameterise the curve for respiration and compare the a priori prediction against a mammalian data set for respiration. Because the DEB theory mechanism for metabolic scaling is based on the universal process of acquiring and using pools of stored metabolites (a basal feature of life), it applies to all organisms irrespective of the nature of metabolic transport to cells. Although the DEB mechanism does not necessarily contradict insight from transport-based models, the mechanism offers an explanation for differences between the intra- and interspecific scaling of biological rates with mass, suggesting novel tests of the respective hypotheses. 

  4. Cavitation erosion size scale effects (United States)

    Rao, P. V.; Buckley, D. H.


    Size scaling in cavitation erosion is a major problem confronting the design engineers of modern high speed machinery. An overview and erosion data analysis presented in this paper indicate that the size scale exponent n in the erosion rate relationship as a function of the size or diameter can vary from 1.7 to 4.9 depending on the type of device used. There is, however, a general agreement as to the values of n if the correlations are made with constant cavitation number.
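The size-scale relationship described above, erosion rate ∝ D^n with n between about 1.7 and 4.9, implies large differences when a device is scaled up; a minimal sketch (the diameters and chosen exponents are illustrative):

```python
def erosion_rate_ratio(d_ref, d_new, n):
    """Size-scale effect: erosion rate ~ D**n, so scaling the diameter from
    d_ref to d_new multiplies the erosion rate by (d_new / d_ref)**n."""
    return (d_new / d_ref) ** n

# Doubling the characteristic diameter, at the two ends of the reported range:
print(erosion_rate_ratio(1.0, 2.0, 1.7))  # ~3.25x
print(erosion_rate_ratio(1.0, 2.0, 4.9))  # ~29.9x
```

The spread between these two ratios illustrates why the abstract stresses that n depends on the type of device used.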

  5. Pelamis WEC - intermediate scale demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Yemm, R.


    This report describes the successful building and commissioning of an intermediate 1/7th scale model of the Pelamis Wave Energy Converter (WEC) and its testing in the wave climate of the Firth of Forth. Details are given of the design of the semi-submerged articulated structure of cylindrical elements linked by hinged joints. The specific programme objectives and conclusions, development issues addressed, and key remaining risks are discussed along with development milestones to be passed before the Pelamis WEC is ready for full-scale prototype testing.

  6. Integral equations on time scales

    CERN Document Server

    Georgiev, Svetlin G


    This book offers the reader an overview of recent developments of integral equations on time scales. It also contains elegant analytical and numerical methods. This book is primarily intended for senior undergraduate students and beginning graduate students of engineering and science courses. The students in mathematical and physical sciences will find many sections of direct relevance. The book contains nine chapters and each chapter is pedagogically organized. This book is specially designed for those who wish to understand integral equations on time scales without having extensive mathematical background.

  7. Development of Peace Image Scale


    野中,陽一朗; 蘆田, 智絵; 石井,眞治


Attempts to explore the educational value of peace education have been carried out recently. In the present study, we pointed out that there is a lack of a standard scale to define and measure the concept of "peace", and conducted three investigations with university students to develop a peace image scale. Study 1 examined the factor structures constituting a peace image, and their reliability, based on the results of a meaning analysis using word-association tests for the word "peace". The pea...

  8. Continuously-Variable Vernier Scale (United States)

    Miller, Irvin M.


    Easily fabricated device increases precision in reading graphical data. Continuously-variable vernier scale (CV VS) designed to provide greater accuracy to scientists and technologists in reading numerical values from graphical data. Placed on graph and used to interpolate coordinate value of point on curve or plotted point on figure within division on each coordinate axis. Requires neither measurement of line segments where projection of point intersects division nor calculation to quantify projected value. Very flexible device constructed with any kind of scale. Very easy to use, requiring no special equipment of any kind, and saves considerable amount of time if numerous points to be evaluated.

  9. Managing Small-scale Fisheries

    International Development Research Centre (IDRC) Digital Library (Canada)

    However, a glance through current fisheries literature reveals a perplexing array of perspectives and prescriptions to achieve this goal. There are few simple solutions for the problems that fisheries science and management address anywhere in the world. This is particularly so for small-scale fisheries, which this book is ...

  10. Scaling up of renewable chemicals. (United States)

    Sanford, Karl; Chotani, Gopal; Danielson, Nathan; Zahn, James A


    The transition of promising technologies for production of renewable chemicals from a laboratory scale to commercial scale is often difficult and expensive. As a result the timeframe estimated for commercialization is typically underestimated resulting in much slower penetration of these promising new methods and products into the chemical industries. The theme of 'sugar is the next oil' connects biological, chemical, and thermochemical conversions of renewable feedstocks to products that are drop-in replacements for petroleum derived chemicals or are new to market chemicals/materials. The latter typically offer a functionality advantage and can command higher prices that result in less severe scale-up challenges. However, for drop-in replacements, price is of paramount importance and competitive capital and operating expenditures are a prerequisite for success. Hence, scale-up of relevant technologies must be interfaced with effective and efficient management of both cell and steel factories. Details involved in all aspects of manufacturing, such as utilities, sterility, product recovery and purification, regulatory requirements, and emissions must be managed successfully. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Scaling of pressurized fluidized beds

    Energy Technology Data Exchange (ETDEWEB)

    Guralnik, S.; Glicksman, L.R.


The project has two primary objectives. The first is to verify a set of hydrodynamic scaling relationships for commercial pressurized fluidized bed combustors (PFBC). The second objective is to investigate solids mixing in pressurized bubbling fluidized beds. American Electric Power's (AEP) Tidd combined-cycle demonstration plant will provide time-varying pressure drop data to serve as the basis for the scaling verification. The verification will involve demonstrating that a properly scaled cold model and the Tidd PFBC exhibit hydrodynamically similar behavior. An important issue in PFBC design is the spacing of fuel feed ports. The feed spacing is dictated by the fuel distribution and the mixing characteristics within the bed. After completing the scaling verification, the cold model will be used to study the characteristics of PFBCs. A thermal tracer technique will be utilized to study mixing both near the fuel feed region and in the far field. The results allow the coal feed and distributor to be designed for optimal heating.

  12. Developing a Teacher Characteristics Scale (United States)

    Yaratan, Hüseyin; Muezzin, Emre


    It is a known fact that every profession needs to be developed during its practice. To be able to acquire this we need to know the characteristics of teachers related to their professional development. For this purpose this study tries to develop a scale to measure teacher characteristics which would help in designing in-service training programs…

  13. Learning From the Furniture Scale

    DEFF Research Database (Denmark)

    Hvejsel, Marie Frier; Kirkegaard, Poul Henning


tangibility allowing experimentation with the ‘principles’ of architectural construction. In the present paper we explore this dual tectonic potential of the furniture scale as an epistemological foundation in architectural education. In this matter, we discuss the conduct of a master-level course where we...

  14. Animal coloration: sexy spider scales. (United States)

    Taylor, Lisa A; McGraw, Kevin J


    Many male jumping spiders display vibrant colors that are used in visual communication. A recent microscopic study on a jumping spider from Singapore shows that three-layered 'scale sandwiches' of chitin and air are responsible for producing their brilliant iridescent body coloration.

  15. Structural Similitude and Scaling Laws (United States)

    Simitses, George J.


    Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis and numerous and careful experimental evaluations of components and prototype, in order to achieve the necessary system reliability, performance and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, system expected performance and identification of components and their connections as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the entire system characteristics. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large component static and dynamic testing. Such tests are extremely difficult, time consuming and definitely absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum. Full-scale large component testing is necessary in

  16. Multiphysics of Fractures across Scales (United States)

    Pyrak-Nolte, L. J.


Remote monitoring of fluid flow in fractured rock faces challenges because fractures are topologically complex, span a range of length scales, and are routinely altered due to physical and chemical processes. A long-standing goal has been to find a link between fluid flow supported by a fracture and the seismic response of that fracture. This link requires a relationship between intrinsic fracture properties and macroscopic scattered wave fields. Furthermore, such a link among multiphysical properties of fractures should be retained as the scale of observation changes. Recently, Pyrak-Nolte and Nolte (Nature Comm., 2016) demonstrated, numerically, that a scaling relationship exists between fluid flow and fracture specific stiffness, linked through the topology of the fracture void geometry (i.e. fracture void space and contact area spatial distributions). This scaling relationship holds for fractures with either random or spatially correlated aperture distributions. To extend these results, a heuristic numerical study was performed to determine if fracture specific stiffness determined from seismic wave attenuation (defined through a displacement-discontinuity boundary condition) corresponds to static stiffness based on deformation measurements. In the long wavelength limit, static and dynamic stiffness are closely connected. As the scattering conditions of the fracture move out of the long-wavelength limit, a frequency-dependent stiffness is defined that captures low-order corrections, extending the regime of applicability of the displacement discontinuity model. The displacement discontinuity theory has a built-in scaling parameter that ensures some set of discontinuities will be optimal for detection as different wavelengths sample different subsets of fractures. Future studies will extend these concepts to fracture networks. Acknowledgments: The U.S. Department of Energy, Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences

  17. Scale dependence of deuteron electrodisintegration (United States)

    More, S. N.; Bogner, S. K.; Furnstahl, R. J.


    Background: Isolating nuclear structure properties from knock-out reactions in a process-independent manner requires a controlled factorization, which is always to some degree scale and scheme dependent. Understanding this dependence is important for robust extractions from experiment, to correctly use the structure information in other processes, and to understand the impact of approximations for both. Purpose: We seek insight into scale dependence by exploring a model calculation of deuteron electrodisintegration, which provides a simple and clean theoretical laboratory. Methods: By considering various kinematic regions of the longitudinal structure function, we can examine how the components—the initial deuteron wave function, the current operator, and the final-state interactions (FSIs)—combine at different scales. We use the similarity renormalization group to evolve each component. Results: When evolved to different resolutions, the ingredients are all modified, but how they combine depends strongly on the kinematic region. In some regions, for example, the FSIs are largely unaffected by evolution, while elsewhere FSIs are greatly reduced. For certain kinematics, the impulse approximation at a high renormalization group resolution gives an intuitive picture in terms of a one-body current breaking up a short-range correlated neutron-proton pair, although FSIs distort this simple picture. With evolution to low resolution, however, the cross section is unchanged but a very different and arguably simpler intuitive picture emerges, with the evolved current efficiently represented at low momentum through derivative expansions or low-rank singular value decompositions. Conclusions: The underlying physics of deuteron electrodisintegration is scale dependent and not just kinematics dependent. As a result, intuition about physics such as the role of short-range correlations or D-state mixing in particular kinematic regimes can be strongly scale dependent.

  18. The theory of n-scales (United States)

    Dündar, Furkan Semih


    We provide a theory of n-scales, previously referred to as n-dimensional time scales. In previous approaches to the theory of time scales, multi-dimensional scales were taken as the product space of two time scales [1, 2]. n-scales make the mathematical structure more flexible and appropriate for real-world applications in physics and related fields. Here we define an n-scale as an arbitrary closed subset of ℝⁿ. Modified forward and backward jump operators, Δ-derivatives and Δ-integrals on n-scales are defined.
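
    The jump operators mentioned in the abstract have a simple concrete form in one dimension. The sketch below shows only the classical forward and backward jump operators on a finite time scale; the paper's modified n-scale operators on closed subsets of ℝⁿ would act componentwise, and this is an illustrative assumption, not the paper's formal definitions.

```python
# Classical forward/backward jump operators on a finite time scale T,
# the one-dimensional building block of an n-scale.
# Illustrative sketch only, not the paper's modified definitions.

def sigma(T, t):
    """Forward jump: smallest point of T strictly greater than t,
    or t itself when t = max(T)."""
    later = [s for s in T if s > t]
    return min(later) if later else t

def rho(T, t):
    """Backward jump: largest point of T strictly less than t,
    or t itself when t = min(T)."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else t

T = [0, 1, 2, 5, 8]        # a closed subset of R with isolated points
mu = sigma(T, 2) - 2       # graininess at t = 2: the gap to the next point
```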


    Directory of Open Access Journals (Sweden)



    Full Text Available An adaptation of McConahay, Harder and Batts' (1981) Modern Racism Scale is presented for the Chilean population, and its psychometric properties (reliability and validity) are studied, along with its relationship with other relevant psychosocial variables in studies on prejudice and ethnic discrimination (authoritarianism, religiousness, political position, etc.), as well as with other forms of prejudice (gender stereotypes and homophobia). The sample consisted of 120 participants, students of psychology, resident in the city of Antofagasta (a geographical zone with a high number of Latin-American immigrants). Our findings show that the scale seems to be a reliable instrument to measure prejudice towards Bolivian immigrants in our social environment. Likewise, important differences are detected among subjects with high and low scores in the psychosocial variables used.

  20. Scale invariance in road networks. (United States)

    Kalapala, Vamsi; Sanwalani, Vishal; Clauset, Aaron; Moore, Cristopher


    We study the topological and geographic structure of the national road networks of the United States, England, and Denmark. By transforming these networks into their dual representation, where roads are vertices and an edge connects two vertices if the corresponding roads ever intersect, we show that they exhibit both topological and geographic scale invariance. That is, we show that for sufficiently large geographic areas, the dual degree distribution follows a power law with exponent 2.2 ≤ α ≤ 2.4, and that journeys, regardless of their length, have a largely identical structure. To explain these properties, we introduce and analyze a simple fractal model of road placement that reproduces the observed structure, and suggests a testable connection between the scaling exponent and the fractal dimensions governing the placement of roads and intersections.
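
    An exponent range like the one quoted above can be estimated from a degree sample with the standard continuous maximum-likelihood estimator for power laws. A minimal sketch on synthetic data follows; the sample is illustrative, not the authors' road-network data.

```python
import numpy as np

def powerlaw_mle(values, xmin):
    """Continuous maximum-likelihood estimate of alpha for
    p(x) ~ x**(-alpha), restricted to x >= xmin:
    alpha_hat = 1 + n / sum(ln(x_i / xmin))."""
    x = np.asarray([v for v in values if v >= xmin], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# synthetic dual-degree sample from a power law with alpha = 2.3,
# drawn by inverse-CDF sampling (illustrative data only)
rng = np.random.default_rng(0)
alpha_true, xmin = 2.3, 1.0
u = rng.uniform(size=20000)
samples = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
alpha_hat = powerlaw_mle(samples, xmin)   # close to 2.3
```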

  1. Scaling in public transport networks

    Directory of Open Access Journals (Sweden)

    C. von Ferber


    Full Text Available We analyse the statistical properties of public transport networks. These networks are defined by a set of public transport routes (bus lines) and the stations serviced by these. For larger networks these appear to possess a scale-free structure, as demonstrated, e.g., by the Zipf-law distribution of the number of routes servicing a given station, or by the distribution of the number of stations which can be visited from a chosen one without changing the means of transport. Moreover, a rather particular feature of the public transport network is that many routes service common subsets of stations. We discuss the possibility of new scaling laws that govern intrinsic properties of such subsets.

  2. Meso scale flextensional piezoelectric actuators (United States)

    York, Peter A.; Jafferis, Noah T.; Wood, Robert J.


    We present an ultra-thin meso scale piezoelectric actuator consisting of a piezoceramic beam and a carbon fiber displacement-amplification frame. We show that the actuator can be designed to achieve a wide range of force/displacement characteristics on the mN/μm scales. The best performing design achieved a free displacement of 106 μm and a blocked force of 73 mN, yielding a total energy density of 0.51 J kg⁻¹ for the 7.6 mg system. We describe a printed circuit MEMS process for fabricating the actuator that incorporates laser micromachining, chemical vapor deposition, and precision carbon fiber lamination. Lastly, we report the incorporation of the actuator into a microgripper and describe other promising application opportunities in micro-optics and micro-laser systems.

  3. Frequency scaling for angle gathers

    KAUST Repository

    Zuberi, M. A H


    Angle gathers provide an extra dimension to analyze the velocity after migration. Space-shift and time-shift imaging conditions are two methods used to obtain angle gathers, but both are computationally expensive. By scaling the time-lag axis of the time-shifted images, the computational cost of the time-shift imaging condition can be considerably reduced. In imaging, and even more so in full waveform inversion, frequency-domain Helmholtz solvers are used more often than conventional time-domain extrapolators to solve for the wavefields. In such cases, we do not need to extend the image; instead, we scale the frequency axis of the frequency-domain image to obtain the angle gathers more efficiently. Application to synthetic data demonstrates these features.

  4. Establishing an Information Avoidance Scale. (United States)

    Howell, Jennifer L; Shepperd, James A


    People differ in their openness to different types of information and some information may evoke greater avoidance than does other information. We developed an 8-item measure of people's tendency to avoid learning information. The flexible instrument can function as both a predictor and outcome measure. The results from 4 studies involving 7 samples and 4,393 participants reveal that scores on the measure are generally internally consistent, remain relatively stable across time, and correlate modestly with measures of similar constructs and with avoidance behavior. The measure is adaptable to a variety of types of information (e.g., health outcomes, attractiveness feedback) and is internally consistent in several distinct populations (e.g., high school students, college students, U.S. adults, low-socioeconomic-status adults). Discussion centers on potential uses for the scale and an online supplement discusses a 2-item version of the scale. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Impedance Scaling and Impedance Control

    Energy Technology Data Exchange (ETDEWEB)

    Chou, W.; Griffin, J.


    When a machine becomes really large, such as the Very Large Hadron Collider (VLHC), of which the circumference could reach the order of megameters, beam instability could be an essential bottleneck. This paper studies the scaling of the instability threshold vs. machine size when the coupling impedance scales in a "normal" way. It is shown that the beam would be intrinsically unstable for the VLHC. As a possible solution to this problem, it is proposed to introduce local impedance inserts for controlling the machine impedance. In the longitudinal plane, this could be done by using a heavily detuned rf cavity (e.g., a biconical structure), which could provide large imaginary impedance with the right sign (i.e., inductive or capacitive) while keeping the real part small. In the transverse direction, a carefully designed variation of the cross section of a beam pipe could generate negative impedance that would partially compensate the transverse impedance in one plane.

  6. Metabolic scaling in solid tumours (United States)

    Milotti, E.; Vyshemirsky, V.; Sega, M.; Stella, S.; Chignola, R.


    Tumour metabolism is an outstanding topic of cancer research, as it determines the growth rate and the global activity of tumours. Recently, by combining the diffusion of oxygen, nutrients, and metabolites in the extracellular environment, and the internal motions that mix live and dead cells, we derived a growth law of solid tumours which is linked to parameters at the cellular level. Here we use this growth law to obtain a metabolic scaling law for solid tumours, which is obeyed by tumours of different histotypes both in vitro and in vivo, and we display its relation with the fractal dimension of the distribution of live cells in the tumour mass. The scaling behaviour is related to measurable parameters, with potential applications in the clinical practice.

  7. Small-scale classification schemes

    DEFF Research Database (Denmark)

    Hertzum, Morten


    Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification… While coordination mechanisms focus on how classification schemes enable cooperation among people pursuing a common goal, boundary objects embrace the implicit consequences of classification schemes in situations involving conflicting goals. Moreover, the requirements specification focused on functional… This difference between the written requirements specification and the oral discussions at the meetings may help explain software engineers' general preference for people, rather than documents, as their information sources.

  8. Source Code Analysis Laboratory (SCALe) (United States)


    …products (including services) and processes. The agency has also published ISO/IEC 17025:2005, General Requirements for the Competence of Testing… SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 also operate in accordance with ISO 9001. • NIST National… assessed by the accreditation body against all of the requirements of ISO/IEC 17025:2005, General requirements for the competence of testing and…

  9. Tuning the Cepheid distance scale (United States)

    Mateo, Mario


    Ongoing observational programs (both from the ground and space) will provide a significantly larger sample of galaxies with well-studied Cepheids both within the Local Group and in more distant galaxies. Recent efforts in the calibration of the Cepheid distance scale utilizing Cepheids in star clusters in the Galaxy and in the Magellanic Clouds are described. Some of the significant advantages of utilizing LMC Cepheids in particular are emphasized, and the current status of the field is summarized.

  10. Scaling Exponents in Financial Markets (United States)

    Kim, Kyungsik; Kim, Cheol-Hyun; Kim, Soo Yong


    We study the dynamical behavior of four exchange rates in foreign exchange markets. A detrended fluctuation analysis (DFA) is applied to detect the long-range correlation embedded in the non-stationary time series. For our case, it is found that there exists a persistent long-range correlation in volatilities, which implies a deviation from the efficient market hypothesis. In particular, a crossover is shown to exist in the scaling behaviors of the volatilities.
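
    As a rough sketch of the DFA procedure referenced above (assuming standard first-order DFA; the series and window sizes are illustrative, not the exchange-rate data): integrate the demeaned series, detrend it in windows of size s, and read the scaling exponent off the slope of log F(s) versus log s. Uncorrelated noise should give an exponent near 0.5, while persistent long-range correlation gives a larger value.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis.
    Returns the fluctuation function F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n = len(y) // s                      # number of full windows
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)     # linear detrend per window
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

# white noise has no long-range correlation, so the DFA exponent
# (slope of log F vs log s) should come out near 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
scales = [16, 32, 64, 128, 256]
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```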

  11. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics


    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  12. Latest Developments in SLD Scaling (United States)

    Tsao, Jen-Ching; Anderson, David N.


    Scaling methods have been shown previously to work well for supercooled large droplet (SLD) main ice shapes. However, feather sizes for some conditions have not been well represented by scale tests. To determine if there are fundamental differences between the development of feathers for appendix C and SLD conditions, this study used time-sequenced photographs, viewing along the span of the model during icing sprays. An airspeed of 100 kt, cloud water drop MVDs of 30 and 140 microns, and stagnation freezing fractions of 0.30 and 0.50 were tested in the NASA Glenn Icing Research Tunnel using an unswept 91-cm-chord NACA0012 airfoil model mounted at 0° AOA. The photos indicated that the feathers that developed in a distinct region downstream of the leading-edge ice determined the horn location and angle. The angle at which feathers grew from the surface was also measured; results are shown for an airspeed of 150 kt, an MVD of 30 microns, and stagnation freezing fractions of 0.30 to 0.60. Feather angles were found to depend strongly on the stagnation freezing fraction, and were independent of either chordwise position on the model or time into the spray. Feather angles also correlated well with horn angles. For these tests, there did not appear to be fundamental differences between the physics of SLD and appendix C icing; therefore, for these conditions similarity parameters used for appendix C scaling appear to be valid for SLD scaling as well. Further investigation into the cause for the large feather structures observed for some SLD conditions will continue.

  13. The Principle of Social Scaling

    Directory of Open Access Journals (Sweden)

    Paulo L. dos Santos


    Full Text Available This paper identifies a general class of economic processes capable of generating the first-moment constraints implicit in the observed cross-sectional distributions of a number of economic variables: processes of social scaling. Across a variety of settings, the outcomes of economic competition reflect the normalization of individual values of certain economic quantities by average or social measures of themselves. The resulting socioreferential processes establish systematic interdependences among individual values of important economic variables, which under certain conditions take the form of emergent first-moment constraints on their distributions. The paper postulates a principle describing this systemic regulation of socially scaled variables and illustrates its empirical purchase by showing how capital- and labor-market competition can give rise to patterns of social scaling that help account for the observed distributions of Tobin’s q and wage income. The paper’s discussion embodies a distinctive approach to understanding and investigating empirically the relationship between individual agency and structural determinations in complex economic systems and motivates the development of observational foundations for aggregative, macrolevel economic analysis.

  14. Development of emotional stability scale

    Directory of Open Access Journals (Sweden)

    M Chaturvedi


    Full Text Available Background: Emotional stability remains the central theme in personality studies. The concept of stable emotional behavior at any level is that which reflects the fruits of normal emotional development. The study aims at the development of an emotional stability scale. Materials and Methods: Based on the available literature, the components of emotional stability were identified and 250 items were developed, covering each component. Two-stage elimination of items was carried out, i.e. through judges' opinions and item analysis. Results: Fifty items with the highest 't' values, covering 5 dimensions of emotional stability, viz. pessimism vs. optimism, anxiety vs. calm, aggression vs. tolerance, dependence vs. autonomy, and apathy vs. empathy, were retained in the final scale. Reliability as checked by Cronbach's alpha was 0.81 and by the split-half method it was 0.79. Content validity and construct validity were checked. Norms are given in the form of cumulative percentages. Conclusion: Based on psychometric principles, a 50-item, self-administered 5-point Likert-type rating scale was developed for the measurement of emotional stability.
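
    Reliability figures like the Cronbach's alpha of 0.81 quoted above follow from a standard formula. A minimal sketch with synthetic item scores is shown below; the dimensions and data are illustrative only, not the emotional stability data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# synthetic respondents: 10 items driven by one latent trait plus noise,
# so the items cohere and alpha comes out high (illustrative only)
rng = np.random.default_rng(1)
trait = rng.standard_normal((200, 1))
items = trait + 0.5 * rng.standard_normal((200, 10))
alpha = cronbach_alpha(items)
```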

  15. Non-relativistic scale anomalies

    Energy Technology Data Exchange (ETDEWEB)

    Arav, Igal [Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University,55 Haim Levanon street, Tel-Aviv, 69978 (Israel); Chapman, Shira [Perimeter Institute for Theoretical Physics,31 Caroline Street North, ON N2L 2Y5 (Canada); Oz, Yaron [Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University,55 Haim Levanon street, Tel-Aviv, 69978 (Israel)


    We extend the cohomological analysis in arXiv:1410.5831 of anisotropic Lifshitz scale anomalies. We consider non-relativistic theories with a dynamical critical exponent z=2 with or without non-relativistic boosts and a particle number symmetry. We distinguish between cases depending on whether the time direction does or does not induce a foliation structure. We analyse both 1+1 and 2+1 spacetime dimensions. In 1+1 dimensions we find no scale anomalies with Galilean boost symmetries. The anomalies in 2+1 dimensions with Galilean boosts and a foliation structure are all B-type and are identical to the Lifshitz case in the purely spatial sector. With Galilean boosts and without a foliation structure we find also an A-type scale anomaly. There is an infinite ladder of B-type anomalies in the absence of a foliation structure with or without Galilean boosts. We discuss the relation between the existence of a foliation structure and the causality of the field theory.

  16. Temporal scaling in information propagation. (United States)

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi


    For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
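
    The reported power-law decay can be illustrated with a short fit; all numbers below are made up for illustration and are not the study's estimates.

```python
import numpy as np

# hypothetical illustration of the reported scaling: the probability
# p(dt) that a message propagates decays with the time latency dt
# since the users' latest interaction as p(dt) = p0 * dt**(-beta);
# p0 and beta here are invented values, not the paper's
p0, beta = 0.1, 0.5
latency = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])   # e.g. days
p = p0 * latency ** (-beta)

# recover the exponent by linear regression in log-log space
slope, intercept = np.polyfit(np.log(latency), np.log(p), 1)
beta_hat = -slope          # equals beta up to floating-point error
```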

  18. Mineral scale management. Part 1, Case studies (United States)

    Peter W. Hart; Alan W. Rudie


    Mineral scale increases operating costs, extends downtime, and increases maintenance requirements. This paper presents several successful case studies detailing how mills have eliminated scale. Cases presented include calcium carbonate scale in a white liquor strainer, calcium oxalate scale in the D0 stage of the bleach plant, enzymatic treatment of brown stock to...

  19. Family health climate scale (FHC-scale): development and validation. (United States)

    Niermann, Christina; Krapf, Fabian; Renner, Britta; Reiner, Miriam; Woll, Alexander


    The family environment is important for explaining individual health behaviour. While previous research mostly focused on influences among family members and dyadic interactions (parent-child), the purpose of this study was to develop a new measure, the Family Health Climate Scale (FHC-Scale), using a family-based approach. The FHC is an attribute of the whole family and describes an aspect of the family environment that is related to health and health behaviour. Specifically, a questionnaire measuring the FHC (a) for nutrition (FHC-NU) and (b) for activity behaviour (FHC-PA) was developed and validated. In Study 1 (N=787) the FHC scales were refined and validated. The sample was randomly divided into two subsamples. With random sample I exploratory factor analyses were conducted and items were selected according to their psychometric quality. In a second step, confirmatory factor analyses were conducted using the random sample II. In Study 2 (N=210 parental couples) the construct validity was tested by correlating the FHC to self-determined motivation of healthy eating and physical activity as well as the families' food environment and joint physical activities. Exploratory factor analyses with random sample I (Study 1) revealed a four (FHC-NU) and a three (FHC-PA) factor model. These models were cross-validated with random sample II and demonstrated an acceptable fit [FHC-PA: χ² = 222.69, df = 74, p…] … families were developed. The use of different informants' ratings demonstrated that the FHC is a family level variable. The results confirm the high relevance of the FHC for individuals' health behaviour. The FHC and the measurement instruments are useful for examining health-related aspects of the family environment.

  20. Brane World Models Need Low String Scale

    CERN Document Server

    Antoniadis, Ignatios; Calmet, Xavier


    Models with large extra dimensions offer the possibility of the Planck scale being of order the electroweak scale, thus alleviating the gauge hierarchy problem. We show that these models suffer from a breakdown of unitarity at around three quarters of the low effective Planck scale. An obvious candidate to fix the unitarity problem is string theory. We therefore argue that it is necessary for the string scale to appear below the effective Planck scale and that the first signature of such models would be string resonances. We further translate experimental bounds on the string scale into bounds on the effective Planck scale.

  1. Evaluating the impact of farm scale innovation at catchment scale (United States)

    van Breda, Phelia; De Clercq, Willem; Vlok, Pieter; Querner, Erik


    Hydrological modelling lends itself to other disciplines very well, normally as a process based system that acts as a catalogue of events taking place. These hydrological models are spatial-temporal in their design and are generally well suited for what-if situations in other disciplines. Scaling should therefore be a function of the purpose of the modelling. Process is always linked with scale or support but the temporal resolution can affect the results if the spatial scale is not suitable. The use of hydrological response units tends to lump area around physical features but disregards farm boundaries. Farm boundaries are often the more crucial uppermost resolution needed to gain more value from hydrological modelling. In the Letaba Catchment of South Africa, we find a generous portion of landuses, different models of ownership, different farming systems ranging from large commercial farms to small subsistence farming. All of these have the same basic right to water but water distribution in the catchment is somewhat of a problem. Since water quantity is also a problem, the water supply systems need to take into account that valuable production areas not be left without water. Clearly hydrological modelling should therefore be sensitive to specific landuse. As a measure of productivity, a system of small farmer production evaluation was designed. This activity presents a dynamic system outside hydrological modelling that is generally not being considered inside hydrological modelling but depends on hydrological modelling. For sustainable development, a number of important concepts needed to be aligned with activities in this region, and the regulatory actions also need to be adhered to. This study aimed at aligning the activities in a region to the vision and objectives of the regulatory authorities. South Africa's system of socio-economic development planning is complex and mostly ineffective. 
There are many regulatory authorities involved, often with unclear

  2. Development of a Facebook Addiction Scale. (United States)

    Andreassen, Cecilie Schou; Torsheim, Torbjørn; Brunborg, Geir Scott; Pallesen, Ståle


    The Bergen Facebook Addiction Scale (BFAS), initially a pool of 18 items, three reflecting each of the six core elements of addiction (salience, mood modification, tolerance, withdrawal, conflict, and relapse), was constructed and administered to 423 students together with several other standardized self-report scales (Addictive Tendencies Scale, Online Sociability Scale, Facebook Attitude Scale, NEO-FFI, BIS/BAS scales, and Sleep questions). That item within each of the six addiction elements with the highest corrected item-total correlation was retained in the final scale. The factor structure of the scale was good (RMSEA = .046, CFI = .99) and coefficient alpha was .83. The 3-week test-retest reliability coefficient was .82. The scores converged with scores for other scales of Facebook activity. Also, they were positively related to Neuroticism and Extraversion, and negatively related to Conscientiousness. High scores on the new scale were associated with delayed bedtimes and rising times.
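
    The item-selection rule described above (retain, within each addiction element, the item with the highest corrected item-total correlation) can be sketched as follows; the synthetic three-item element below is illustrative, not the BFAS data.

```python
import numpy as np

def corrected_item_total(scores):
    """Corrected item-total correlation for each column of an
    (n_respondents, k_items) matrix: each item is correlated with
    the summed score of the remaining items (itself excluded)."""
    scores = np.asarray(scores, dtype=float)
    total = scores.sum(axis=1)
    return np.array([np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
                     for j in range(scores.shape[1])])

# three synthetic items measuring one element with increasing noise
# (illustrative data only)
rng = np.random.default_rng(2)
core = rng.standard_normal(300)
items = np.column_stack([core + s * rng.standard_normal(300)
                         for s in (0.3, 0.8, 1.5)])
r = corrected_item_total(items)
best_item = int(np.argmax(r))   # the least noisy item is retained
```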

  3. Euthanasia attitude; A comparison of two scales


    Aghababaei, Naser; Farahani, Hojjatollah; Hatami, Javad


    The main purposes of the present study were to see how the term "euthanasia" influences people's support for or opposition to euthanasia, and to see how euthanasia attitude relates to religious orientation and personality factors. In this study two different euthanasia attitude scales were compared. 197 students were selected to fill out either the Euthanasia Attitude Scale (EAS) or Wasserman's Attitude Towards Euthanasia scale (ATE scale). The former scale includes the term "euthanasia", the...

  4. A scale invariance criterion for LES parametrizations

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs


    Full Text Available Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant. The criterion also proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.
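
    The criterion can be illustrated in a drastically simplified setting: a symbolic check that the 1-D continuity equation keeps its form under a scaling transformation x → λx, t → λ^(a+1)t, u → λ^a u. This is a toy analogue of the paper's analysis, not its hydrostatic system; the travelling-wave solution and exponent convention are illustrative choices.

```python
import sympy as sp

x, t, lam, a, c = sp.symbols('x t lam a c', positive=True)
f = sp.Function('f')

# a generic solution of the 1-D continuity equation
# rho_t + (rho*u)_x = 0 with constant velocity u = c:
# any profile advected at speed c
rho = f(x - c * t)
u = c
base = sp.diff(rho, t) + sp.diff(rho * u, x)
assert sp.simplify(base) == 0

# scaling transformation: x -> lam*x, t -> lam**(a+1)*t, u -> lam**a*u
rho_s = rho.subs({x: lam * x, t: lam**(a + 1) * t}, simultaneous=True)
u_s = lam**a * c
residual = sp.simplify(sp.diff(rho_s, t) + sp.diff(rho_s * u_s, x))
# both terms acquire the same factor lam**(a+1), so they still cancel:
# the equation's structure is unchanged by the transformation
```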

  5. The Origin of Scales and Scaling Laws in Star Formation (United States)

    Guszejnov, David; Hopkins, Philip; Grudich, Michael


    Star formation is one of the key processes of cosmic evolution as it influences phenomena from the formation of galaxies to the formation of planets, and the development of life. Unfortunately, there is no comprehensive theory of star formation, despite intense effort on both the theoretical and observational sides, due to the large amount of complicated, non-linear physics involved (e.g. MHD, gravity, radiation). A possible approach is to formulate simple, easily testable models that allow us to draw a clear connection between phenomena and physical processes. In the first part of the talk I will focus on the origin of the IMF peak, the characteristic scale of stars. There is debate in the literature about whether the initial conditions of isothermal turbulence could set the IMF peak. Using detailed numerical simulations, I will demonstrate that this is not the case: the initial conditions are "forgotten" through the fragmentation cascade. Additional physics (e.g. feedback) is required to set the IMF peak. In the second part I will use simulated galaxies from the Feedback in Realistic Environments (FIRE) project to show that most star formation theories are unable to reproduce the near-universal IMF peak of the Milky Way. Finally, I will present analytic arguments (supported by simulations) that a large number of observables (e.g. IMF slope) are the consequences of scale-free structure formation and are (to first order) unsuitable for differentiating between star formation theories.

  6. Preliminary Scaling Estimate for Select Small Scale Mixing Demonstration Tests

    Energy Technology Data Exchange (ETDEWEB)

    Wells, Beric E.; Fort, James A.; Gauglitz, Phillip A.; Rector, David R.; Schonewill, Philip P.


    The Hanford Site double-shell tank (DST) system provides the staging location for waste that will be transferred to the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Specific WTP acceptance criteria for waste feed delivery describe the physical and chemical characteristics of the waste that must be met before the waste is transferred from the DSTs to the WTP. One of the more challenging requirements relates to the sampling and characterization of the undissolved solids (UDS) in a waste feed DST because the waste contains solid particles that settle and their concentration and relative proportion can change during the transfer of the waste in individual batches. A key uncertainty in the waste feed delivery system is the potential variation in UDS transferred in individual batches in comparison to an initial sample used for evaluating the acceptance criteria. To address this uncertainty, a number of small-scale mixing tests have been conducted as part of Washington River Protection Solutions’ Small Scale Mixing Demonstration (SSMD) project to determine the performance of the DST mixing and sampling systems.

  7. Fractional Scaling Analysis for IRIS pressurizer reduced scale experiments

    Energy Technology Data Exchange (ETDEWEB)

    Bezerra da Silva, Mario Augusto, E-mail: [Departamento de Energia Nuclear - Centro de Tecnologia e Geociencias, Universidade Federal de Pernambuco, Av. Prof. Luiz Freire, 1000, 50740-540 Recife, PE (Brazil); Brayner de Oliveira Lira, Carlos Alberto, E-mail: cabol@ufpe.b [Departamento de Energia Nuclear - Centro de Tecnologia e Geociencias, Universidade Federal de Pernambuco, Av. Prof. Luiz Freire, 1000, 50740-540 Recife, PE (Brazil); Oliveira Barroso, Antonio Carlos de, E-mail: barroso@ipen.b [Instituto de Pesquisas Energeticas e Nucleares - Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes, 2242, 05508-900 Cidade Universitaria, Sao Paulo (Brazil)


    About twenty organizations joined a consortium led by Westinghouse to develop an integral, modular and medium-size pressurized water reactor (PWR), known as the international reactor innovative and secure (IRIS), which is characterized by having most of its components inside the pressure vessel, eliminating or minimizing the probability of severe accidents. The pressurizer is responsible for pressure control in PWRs. In conventional pressurizers, a small continuous flow is maintained by the spray system. This mini-flow mixes the reactor coolant with the pressurizer water, keeping occasional differences in boron concentration within acceptable limits. The IRIS pressurizer has neither surge lines nor sprays; instead, surge and recirculation orifices promote a circulation flow between the primary system and the pressurizer, avoiding transients when outsurges occur. The construction of models is a routine practice in engineering, supported by similarity rules. A new method of scaling systems, Fractional Scaling Analysis, has been successfully used to analyze pressure variations, considering the most relevant agents of change. The aim of this analysis is to obtain the initial boron concentration ratio and the volumetric flows that ensure similar behavior of boron dispersion in a prototype and its model.

  8. High-resolution sequence stratigraphy of fluvio-deltaic systems: Prospects of system-wide chronostratigraphic correlation

    NARCIS (Netherlands)

    Dalman, R.A.F.; Weltje, G.J.; Karamitopoulos, P.


    A basin-scale numerical model with a sub-grid parameterization of fluvio–deltaic processes and stratigraphy was used to study the relation between alluvial sedimentation and marine deltaic deposition under conditions of time-invariant forcing. The experiments show that delta evolution is governed by

  9. Water flow at all scales

    DEFF Research Database (Denmark)

    Sand-Jensen, K.


    Continuous water flow is a unique feature of streams and distinguishes them from all other ecosystems. The main flow is always downstream but it varies in time and space and can be difficult to measure and describe. The interest of hydrologists, geologists, biologists and farmers in water flow......, and its physical impact, depends on whether the main focus is on the entire stream system, the adjacent fields, the individual reaches or the habitats of different species. It is important to learn how to manage flow at all scales, in order to understand the ecology of streams and the biology...

  10. Drift-Scale Radionuclide Transport

    Energy Technology Data Exchange (ETDEWEB)

    J. Houseworth


    The purpose of this model report is to document the drift scale radionuclide transport model, taking into account the effects of emplacement drifts on flow and transport in the vicinity of the drift, which are not captured in the mountain-scale unsaturated zone (UZ) flow and transport models "UZ Flow Models and Submodels" (BSC 2004 [DIRS 169861]), "Radionuclide Transport Models Under Ambient Conditions" (BSC 2004 [DIRS 164500]), and "Particle Tracking Model and Abstraction of Transport Process" (BSC 2004 [DIRS 170041]). The drift scale radionuclide transport model is intended to be used as an alternative model for comparison with the engineered barrier system (EBS) radionuclide transport model "EBS Radionuclide Transport Abstraction" (BSC 2004 [DIRS 169868]). For that purpose, two alternative models have been developed for drift-scale radionuclide transport. One of the alternative models is a dual-continuum flow and transport model called the drift shadow model. The effects of variations in the flow field and fracture-matrix interaction in the vicinity of a waste emplacement drift are investigated through sensitivity studies using the drift shadow model (Houseworth et al. 2003 [DIRS 164394]). In this model, the flow is significantly perturbed (reduced) beneath the waste emplacement drifts. However, comparisons of transport in this perturbed flow field with transport in an unperturbed flow field show similar results if the transport is initiated in the rock matrix. This has led to a second alternative model, called the fracture-matrix partitioning model, that focuses on the partitioning of radionuclide transport between the fractures and matrix upon exiting the waste emplacement drift. The fracture-matrix partitioning model computes the partitioning, between fractures and matrix, of diffusive radionuclide transport from the invert (for drifts without seepage) into the rock water

  11. A small scale honey dehydrator


    Gill, R.S.; Hans, V. S.; Singh, Sukhmeet; Pal Singh, Parm; Dhaliwal, S. S.


    A small scale honey dehydrator has been designed, developed, and tested to reduce moisture content of honey below 17 %. Experiments have been conducted for honey dehydration by using drying air at ambient temperature, 30 and 40 °C and water at 35, 40 and 45 °C. In this dehydrator, hot water has been circulated in a water jacket around the honey container to heat honey. The heated honey has been pumped through a sieve to form honey streams through which drying air passes for moisture removal. ...

  12. Time scales in spectator fragmentation (United States)

    Schwarz, C.; Fritz, S.; Bassini, R.; Begemann-Blaich, M.; Gaff-Ejakov, S. J.; Gourio, D.; Groß, C.; Immé, G.; Iori, I.; Kleinevoß, U.; Kunde, G. J.; Kunze, W. D.; Lynen, U.; Maddalena, V.; Mahi, M.; Möhlenkamp, T.; Moroni, A.; Müller, W. F. J.; Nociforo, G.; Ocker, B.; Ohed, T.; Pertruzzelli, F.; Pochodzalla, J.; Raciti, G.; Riccobene, G.; Romano, F. P.; Saija, A.; Schnittker, M.; Schüttauf, A.; Seidel, W.; Serfling, V.; Sfienti, C.; Trautmann, W.; Trzcinski, A.; Verde, G.; Wörner, A.; Xi, Hongfei; Zwieglinski, B.


    Proton-proton correlations and correlations of p-alpha, d-alpha, and t-alpha pairs from spectator decays following Au + Au collisions at 1000 AMeV have been measured with a highly efficient detector hodoscope. The constructed correlation functions indicate a moderate expansion and low breakup densities, similar to assumptions made in statistical multifragmentation models. In agreement with a volume breakup, rather short time scales were deduced by employing directional cuts in the proton-proton correlations. PACS numbers: 25.70.Pq, 21.65.+f, 25.70.Mn

  13. JavaScript at scale

    CERN Document Server

    Boduch, Adam


    Have you ever come up against an application that felt like it was built on sand? Maybe you've been tasked with creating an application that needs to last longer than a year before a complete re-write? If so, JavaScript at Scale is your missing documentation for maintaining scalable architectures. There's no prerequisite framework knowledge required for this book, however, most concepts presented throughout are adaptations of components found in frameworks such as Backbone, AngularJS, or Ember. All code examples are presented using ECMAScript 6 syntax, to make sure your applications are ready

  14. Scaling the Baltic Sea environment

    DEFF Research Database (Denmark)

    Larsen, Henrik Gutzon


    The Baltic Sea environment has since the early 1970s passed through several phases of spatial objectification in which the ostensibly well-defined semi-enclosed sea has been framed and reframed as a geographical object for intergovernmental environmental politics. Based on a historical analysis......-scientific logic, but should rather be seen as temporal outcomes of scale framing processes, processes that are accentuated by contemporary conceptions of the environment (or nature) in terms of multi-scalar ecosystems. This has implications for how an environmental concern is perceived and politically addressed....

  15. The Scales of Gravitational Lensing

    Directory of Open Access Journals (Sweden)

    Francesco De Paolis


    Full Text Available After exactly a century since the formulation of the general theory of relativity, the phenomenon of gravitational lensing remains an extremely powerful method of investigation in astrophysics and cosmology. Indeed, it is used to study the distribution of the stellar component in the Milky Way, to study dark matter and dark energy on very large scales, and even to discover exoplanets. Moreover, thanks to technological developments, it will allow the measurement of the physical parameters (mass, angular momentum and electric charge) of supermassive black holes in the centers of our own and nearby galaxies.

  16. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg


    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro...... peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...


    KAUST Repository

    Fiscaletti, Daniele


    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales, already observed in other works, depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and with a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way throughout the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has additionally been investigated.

  18. The SCALE-UP Project (United States)

    Beichner, Robert


    The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large-enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students not only to acquire content, but also to practice important 21st-century skills like problem solving, communication, and teamwork. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large-enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicate highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).

  19. Dynamic scaling in natural swarms (United States)

    Cavagna, Andrea; Conti, Daniele; Creato, Chiara; Del Castello, Lorenzo; Giardina, Irene; Grigera, Tomas S.; Melillo, Stefania; Parisi, Leonardo; Viale, Massimiliano


    Collective behaviour in biological systems presents theoretical challenges beyond the borders of classical statistical physics. The lack of concepts such as scaling and renormalization is particularly problematic, as it forces us to negotiate details whose relevance is often hard to assess. In an attempt to improve this situation, we present here experimental evidence of the emergence of dynamic scaling laws in natural swarms of midges. We find that spatio-temporal correlation functions in different swarms can be rescaled by using a single characteristic time, which grows with the correlation length with a dynamical critical exponent z ~ 1, a value not found in any other standard statistical model. To check whether out-of-equilibrium effects may be responsible for this anomalous exponent, we run simulations of the simplest model of self-propelled particles and find z ~ 2, suggesting that natural swarms belong to a novel dynamic universality class. This conclusion is strengthened by experimental evidence of the presence of non-dissipative modes in the relaxation, indicating that previously overlooked inertial effects are needed to describe swarm dynamics. The absence of a purely dissipative regime suggests that natural swarms undergo a near-critical censorship of hydrodynamics.
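    The core quantitative claim above is that a single characteristic time rescales the correlation functions, with the time growing as a power of the correlation length, τ ~ ξ^z. A minimal sketch (illustrative synthetic data, not the authors' analysis code) of how such a dynamical exponent is estimated from (ξ, τ) pairs:

    ```python
    import numpy as np

    # Illustrative sketch: estimate a dynamical critical exponent z from the
    # relation tau ~ xi**z. The data below are synthetic (an assumption for
    # demonstration), generated with z_true = 1, cf. z ~ 1 in the abstract.
    rng = np.random.default_rng(0)

    z_true = 1.0
    xi = np.logspace(0, 2, 20)        # correlation lengths (arbitrary units)
    tau = xi**z_true * np.exp(rng.normal(0.0, 0.05, xi.size))  # noisy power law

    # z is the slope of log(tau) versus log(xi).
    z_fit, log_prefactor = np.polyfit(np.log(xi), np.log(tau), 1)
    print(round(z_fit, 2))
    ```

    In the same spirit, repeating this fit on correlation data from a model with purely dissipative dynamics would yield z ≈ 2, the discrepancy the abstract highlights.
    
    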

  20. Three Scales of Acephalous Organization

    Directory of Open Access Journals (Sweden)

    Victor MacGill


    Full Text Available Dominance-based hierarchies have been taken for granted as the way we structure our organizations, but they are a part of a paradigm that has put our whole existence in peril. There is an urgent need to explore alternative paradigms that take us away from dystopic futures towards preferred, life enhancing paradigms based on wellbeing. One of the alternative ways of organizing ourselves that avoids much of the structural violence of existing organizations is the acephalous group (operating without any structured, ongoing leadership. Decision making becomes distributed, transitory and self-selecting. Such groups are not always appropriate and have their strengths and weaknesses, but they can be a more effective, humane way of organizing ourselves and can open windows to new ways of being. Acephalous groups operate at many different scales and adapt their structure accordingly. For this reason, a comparison of small, medium and large-scale acephalous groups reveals some of the dynamics involved in acephalous functioning and provides a useful overview of these emergent forms of organization and foreshadows the role they may play in future.

  1. Goethite Bench-scale and Large-scale Preparation Tests

    Energy Technology Data Exchange (ETDEWEB)

    Josephson, Gary B.; Westsik, Joseph H.


    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of the environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the

  2. High Order Numerical Methods for LES of Turbulent Flows with Shocks (United States)

    Kotov, D. V.; Yee, H. C.; Hadjadj, A.; Wray, A.; Sjögreen, B.


    Simulation of turbulent flows with shocks employing explicit subgrid-scale (SGS) filtering may encounter a loss of accuracy in the vicinity of a shock. In this work we perform a comparative study of different approaches to reduce this loss of accuracy within the framework of the dynamic Germano SGS model. One of the possible approaches is to apply Harten's subcell resolution procedure to locate and sharpen the shock, and to use a one-sided test filter at the grid points adjacent to the exact shock location. The other considered approach is local disabling of the SGS terms in the vicinity of the shock location. In this study we use a canonical shock-turbulence interaction problem for comparison of the considered modifications of the SGS filtering procedure. For the considered test case both approaches show a similar improvement in the accuracy near the shock.
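    The second approach described above, locally disabling the SGS terms near the shock, can be sketched in one dimension with a simple gradient-based shock sensor (the sensor form and threshold here are hypothetical illustrations, not the paper's implementation):

    ```python
    import numpy as np

    # Minimal 1-D sketch of locally disabling SGS terms near a shock:
    # a nondimensional density-gradient sensor flags cells at the jump,
    # and the SGS eddy viscosity is zeroed there.

    def sgs_mask(rho, dx, threshold=1.0):
        """Return 1.0 away from shocks and 0.0 in cells where the
        (hypothetical) sensor |d rho/dx| * dx / rho exceeds `threshold`."""
        grad = np.gradient(rho, dx)
        sensor = np.abs(grad) * dx / rho
        return np.where(sensor > threshold, 0.0, 1.0)

    x = np.linspace(0.0, 1.0, 101)
    dx = x[1] - x[0]
    rho = np.where(x < 0.5, 1.0, 4.0)        # idealized stationary shock at x = 0.5
    nu_sgs = 1e-3 * sgs_mask(rho, dx, 0.1)   # eddy viscosity switched off at the jump

    print(int((nu_sgs == 0.0).sum()))        # number of cells where SGS is disabled
    ```

    In a real LES the masked region would instead be detected by the scheme's own shock sensor (e.g. the subcell-resolution procedure mentioned above) and the test-filtering stencil adjusted accordingly.
    
    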

  3. Rating on life valuation scale

    Directory of Open Access Journals (Sweden)

    Lapčević Mirjana


    Full Text Available Introduction: The World Health Organization (WHO) Articles of Association define health as a state of complete physical, mental and social well-being and not merely the absence of disease. Under this definition, the concept of health is broadened to include public and personal needs, the motives and psychological nature of a person, education, culture, tradition, religion, etc. These needs do not all have the same rank on a life valuation scale. Objective: The objective of our study was the ranking of the 6 most important values of life out of 12 suggested. Method: A questionnaire on the life valuation scale was used in our study. The questionnaire was created by the Serbian Medical Association and the Department of General Medicine, School of Medicine, University of Belgrade. It covered 10% of all citizens aged 25 to 64 years in 18 places in Serbia, including the Belgrade commune of Vozdovac. The survey was carried out in health institutions and in citizens’ residences in 1995/96 by doctors, nurses and field nurses. Results: A total of 14,801 citizens were questioned in Serbia (42.57% men, 57.25% women), and 852 citizens in the Vozdovac commune (34.62% men, 65.38% women). People value things in their lives differently. On the basis of life value scoring, the most important thing in people's lives was health. In Serbia, the public rank of health was 4.79%, and 4.4% in the Vozdovac commune. Relations in the family were in second place, and engagement in politics was in last place. Conclusion: The results for the whole of Serbia and for the Vozdovac commune do not differ significantly from each other, and both demonstrate that people attach the greatest importance to health on the scale of proposed values, with family relationships in second place and political activity in last place. The high ranking of health and family relationships shows that general practitioners in Serbia take an important part in primary

  4. Scaling of program fitness spaces. (United States)

    Langdon, W B


    We investigate the distribution of fitness of programs, concentrating on those represented as parse trees and, particularly, on how such distributions scale with the size of the programs. By using a combination of enumeration and Monte Carlo sampling on a large number of problems from three very different areas, we suggest that, in general, once some minimum size threshold has been exceeded, the distribution of performance is approximately independent of program length. We prove this for both linear programs and simple side-effect-free parse trees. We give the density of solutions to the parity problems in program trees composed of XOR building blocks. Limited experiments with programs including side effects and iteration suggest that a similar result may also hold for this wider class of programs.
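    The XOR/parity result above has a simple Monte Carlo illustration (a toy construction assumed here for demonstration, not the paper's experimental setup): a random "program" built purely from XOR of two boolean inputs computes 2-bit parity exactly when each input is read an odd number of times, and the fraction of sampled programs that solve the problem stabilizes as program length grows.

    ```python
    import numpy as np

    # Toy sketch: sample random XOR-only programs over two boolean inputs and
    # check that the solution density is roughly independent of program length.
    rng = np.random.default_rng(1)

    def solves_parity(prog):
        # prog[i] is the input index read by term i; XOR-ing all terms computes
        # parity a ^ b iff both inputs occur an odd number of times.
        counts = np.bincount(prog, minlength=2)
        return bool((counts % 2 == 1).all())

    fracs = []
    for length in (4, 8, 16, 32):
        hits = sum(
            solves_parity(rng.integers(0, 2, size=length)) for _ in range(2000)
        )
        fracs.append(hits / 2000)
    print(fracs)  # each entry stays close to 0.5, independent of length
    ```

    For even lengths the exact solution density is 1/2 regardless of length, a small-scale analogue of the length-independence claimed for general program spaces.
    
    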

  5. Bacterial Communities: Interactions to Scale (United States)

    Stubbendieck, Reed M.; Vargas-Bautista, Carol; Straight, Paul D.


    In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities. PMID:27551280

  6. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.


    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  7. Scaling Theory of Polyelectrolyte Nanogels (United States)

    Qu, Li-Jian


    The present paper develops the scaling theory of polyelectrolyte nanogels in dilute and semidilute solutions. The dependences of the nanogel dimension on branching topology, charge fraction, subchain length, segment number, and solution concentration are obtained. For a single polyelectrolyte nanogel in salt-free solution, the nanogel may be swollen by the Coulombic repulsion (the so-called polyelectrolyte regime) or by the osmotic counterion pressure (the so-called osmotic regime). Characteristics and boundaries of the different regimes of a single polyelectrolyte nanogel are summarized. In dilute solution, nanogels in the polyelectrolyte regime order with increasing concentration, while nanogels in the osmotic regime always remain randomly distributed. The different concentration dependences of nanogel size in the polyelectrolyte and osmotic regimes are also explored. Supported by China Earthquake Administration under Grant No. 20150112 and National Natural Science Foundation of China under Grant No. 21504014

  8. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P


    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real-world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  9. Cognitive Reserve Scale and ageing

    Directory of Open Access Journals (Sweden)

    Irene León


    Full Text Available The construct of cognitive reserve attempts to explain why some individuals with brain impairment, and some people during normal ageing, can solve cognitive tasks better than expected. This study aimed to estimate cognitive reserve in a healthy sample of people aged 65 years and over, with special attention to its influence on cognitive performance. For this purpose, it used the Cognitive Reserve Scale (CRS) and a neuropsychological battery that included tests of attention and memory. The results revealed that women obtained higher total CRS raw scores than men. Moreover, the CRS predicted the learning curve and short-term and long-term memory, but not attentional and working memory performance. Thus, the CRS offers a new proxy of cognitive reserve based on cognitively stimulating activities performed by healthy elderly people. Following an active lifestyle throughout life was associated with better intellectual performance and positive effects on relevant aspects of quality of life.

  10. [Virginia Apgar and her scale]. (United States)

    van Gijn, Jan; Gijselhart, Joost P


    Virginia Apgar (1909-1974), born in New Jersey, managed to continue medical school despite the financial crisis of 1929, trained briefly in surgery and subsequently became one of the first specialists in anaesthesiology. In 1949 she was appointed to a professorship, the first woman to reach this rank at Columbia University in New York. She then dedicated herself to obstetric anaesthesiology and devised the well-known scale for the initial assessment of newborn babies according to 5 criteria. From 1959 she worked for the National Foundation for Infantile Paralysis (now the March of Dimes), expanding its activities from the prevention of poliomyelitis to other aspects of preventive child care, such as rubella vaccination and testing for rhesus antagonism. She remained single; in her private life she enjoyed fly fishing, took lessons in aviation and was an accomplished violinist.

  11. Significant scales in community structure. (United States)

    Traag, V A; Krings, G; Van Dooren, P


    Many complex networks show signs of modular structure, uncovered by community detection. Although many methods succeed in revealing various partitions, it remains difficult to detect at what scale a partition is significant. This problem is most evident in multi-resolution methods. We here introduce an efficient method for scanning for resolutions in one such method. Additionally, we introduce the notion of "significance" of a partition, based on subgraph probabilities. Significance is independent of the exact method used, so it could also be applied in other methods, and can be interpreted as the gain in encoding a graph by making use of a partition. Using significance, we can determine "good" resolution parameters, which we demonstrate on benchmark networks. Moreover, optimizing significance itself also shows excellent performance. We demonstrate our method on voting data from the European Parliament. Our analysis suggests that the European Parliament has become increasingly ideologically divided and that nationality plays no role.

  12. Scaling, Microstructure and Dynamic Fracture

    Energy Technology Data Exchange (ETDEWEB)

    Minich, R W; Kumar, M; Schwarz, A; Cazamias, J


    The relationship between pullback velocity and impact velocity is studied for different microstructures in Cu. A size distribution of potential nucleation sites is derived under the conditions of an applied stochastic stress field. The size distribution depends on the flow stress, leading to a connection between the plastic flow appropriate to a given microstructure and the nucleation rate. The pullback velocity in turn depends on the nucleation rate, resulting in a prediction for the relationship between pullback velocity and flow stress. The theory is compared to observations from Cu-on-Cu gas-gun experiments (10-50 GPa) for a diverse set of microstructures. The scaling law is incorporated into a 1D finite difference code and is shown to reproduce the experimental data with one adjustable parameter that depends only on a nucleation exponent, Λ.

  13. Size scaling of static friction. (United States)

    Braun, O M; Manini, Nicola; Tosatti, Erio


    Sliding friction across a thin soft lubricant film typically occurs by stick slip, the lubricant fully solidifying at stick, yielding and flowing at slip. The static friction force per unit area preceding slip is known from molecular dynamics (MD) simulations to decrease with increasing contact area. That makes the large-size fate of stick slip unclear and unknown; its possible vanishing is important as it would herald smooth sliding with a dramatic drop of kinetic friction at large size. Here we formulate a scaling law of the static friction force, which for a soft lubricant is predicted to decrease as f_m + Δf/A^γ for increasing contact area A, with γ > 0. Our main finding is that the value of f_m, controlling the survival of stick slip at large size, can be evaluated by simulations of comparably small size. MD simulations of soft lubricant sliding are presented, which verify this theory.
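    The scaling law above, f(A) = f_m + Δf·A^(−γ), can be illustrated with synthetic data (the constants below are hypothetical, not the paper's fitted values): given the plateau value f_m, the exponent γ and amplitude Δf follow from a log-log fit of f − f_m against the contact area.

    ```python
    import numpy as np

    # Illustrative sketch with assumed constants: synthetic data following
    # f(A) = f_m + df * A**(-gamma), and recovery of gamma and df by a
    # log-log fit of f - f_m against contact area A.
    f_m, df, gamma = 0.2, 1.0, 0.5
    A = np.logspace(1, 4, 30)            # contact areas (arbitrary units)
    f = f_m + df * A**(-gamma)           # static friction force per unit area

    slope, log_df = np.polyfit(np.log(A), np.log(f - f_m), 1)
    print(round(-slope, 2), round(float(np.exp(log_df)), 2))  # recovers gamma and df
    ```

    In practice f_m is not known in advance; the paper's point is precisely that the large-A plateau can be extrapolated from simulations at comparably small sizes.
    
    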

  14. Large scale nanopatterning of graphene

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, P.L., E-mail: [Research Institute for Technical Physics and Materials Science (MFA) of HAS, Korean-Hungarian Joint Laboratory for Nanosciences, Budapest H-1525, P.O. Box 49 (Hungary); Budapest University of Technology and Economics (BUTE), Department of Physics, Solid State Physics Laboratory, Budapest H-1521, P.O. Box 91 (Hungary); Tovari, E.; Csonka, S. [Budapest University of Technology and Economics (BUTE), Department of Physics, Solid State Physics Laboratory, Budapest H-1521, P.O. Box 91 (Hungary); Kamaras, K. [Research Institute for Solid State Physics and Optics of HAS, Budapest H-1525, P.O. Box 49 (Hungary); Horvath, Z.E.; Biro, L.P. [Research Institute for Technical Physics and Materials Science (MFA) of HAS, Korean-Hungarian Joint Laboratory for Nanosciences, Budapest H-1525, P.O. Box 49 (Hungary)


    Recently, we have shown that the shaping of atomically perfect zig-zag oriented edges can be performed by exploiting the orientation dependent oxidation in graphene, by annealing the samples in inert atmosphere, where the oxygen source is the SiO₂ substrate itself. In the present study, we showed that the large scale patterning of graphene using a conventional lithography technique can be combined with the control of crystallographic orientation and edge shaping. We applied electron beam lithography (EBL) followed by low energy O⁺/Ar⁺ plasma etching for patterning mechanically exfoliated graphene flakes. As AFM imaging of the samples revealed, the controlled oxidation transformed the originally circular holes to polygonal shape with edges parallel to the zig-zag direction, showing the possibility of atomically precise, large area patterning of graphene.

  15. Bacterial Communities: Interactions to Scale

    Directory of Open Access Journals (Sweden)

    Reed M. Stubbendieck


    Full Text Available In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities.

  16. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso


    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  17. Scaling and the Smoluchowski equations (United States)

    Goodisman, J.; Chaiken, J.


    The Smoluchowski equations, which describe coalescence growth, take into account combination reactions between a j-mer and a k-mer to form a (j+k)-mer, but not breakup of larger clusters to smaller ones. All combination reactions are assumed to be second order, with rate constants K_jk. The K_jk are said to scale if K_{λj,γk} = λ^μ γ^ν K_jk for j ≤ k. It can then be shown that, for large k, the number density or population of k-mers is given by A k^a e^(-bk), where A is a normalization constant (a function of a, b, and time), a = -(μ+ν), and b^(μ+ν-1) depends linearly on time. We prove this in a simple, transparent manner. We also discuss the origin of odd-even population oscillations for small k. A common scaling arises from the ballistic model, which assumes that the velocity of a k-mer is proportional to 1/√m_k (Maxwell distribution), i.e., thermal equilibrium. This does not hold for the nascent distribution of clusters produced from monomers by reactive collisions. By direct calculation, invoking conservation of momentum in collisions, we show that, for this distribution, velocities are proportional to m_k^(-0.577). This leads to μ+ν = 0.090, intermediate between the ballistic (0.167) and diffusive (0.000) results. These results are discussed in light of the existence of systems in the experimental literature which apparently correspond to very negative values of μ+ν.
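
As a sketch of the setup (not the authors' calculation), the scaling property of a homogeneous kernel and a mass-conserving truncated integration of the Smoluchowski equations can be written as follows; the exponents μ, ν and all numerical values are illustrative:

```python
MU, NU = 0.3, 0.7  # illustrative homogeneity exponents, applied for j <= k

def K(j, k):
    """Homogeneous rate constant K_jk = j**MU * k**NU for j <= k,
    extended symmetrically, so K_{lam*j, gam*k} = lam**MU * gam**NU * K_jk."""
    j, k = min(j, k), max(j, k)
    return j ** MU * k ** NU

def step(n, dt):
    """One explicit Euler step of the truncated Smoluchowski equations.
    n[k] is the number density of k-mers (n[0] unused); reactions that
    would produce clusters beyond the truncation are dropped, so the
    total mass sum(k * n[k]) is conserved."""
    N = len(n) - 1
    dn = [0.0] * (N + 1)
    for k in range(1, N + 1):
        gain = 0.5 * sum(K(j, k - j) * n[j] * n[k - j] for j in range(1, k))
        loss = n[k] * sum(K(k, j) * n[j] for j in range(1, N - k + 1))
        dn[k] = gain - loss
    return [x + dt * d for x, d in zip(n, dn)]

# start from monomers only and let dimers, trimers, ... build up
n = [0.0, 1.0] + [0.0] * 19
for _ in range(100):
    n = step(n, 0.01)
```

The large-k populations of such a run can then be compared against the predicted A k^a e^(-bk) form.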

  18. Large-scale galaxy bias (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian


    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  19. Mechanics over micro and nano scales

    CERN Document Server

    Chakraborty, Suman


    Discusses the fundamentals of mechanics over micro and nano scales at a level accessible to multi-disciplinary researchers, with a balance of mathematical details and physical principles Covers life sciences and chemistry for use in emerging applications related to mechanics over small scales Demonstrates the explicit interconnection between various scale issues and the mechanics of miniaturized systems

  20. Economies of Scale and Rural Schools. (United States)

    Tholkes, Robert J.; Sederberg, Charles H.


    Economies of scale frequently have been advanced as a rationale for rural school consolidation. This article defines the economies of scale principle; describes its application to public education; and reviews selected studies, 1959-86, from a rural education perspective. Notes the possible overstatement of economies of scale in some studies.…

  1. Development of a Media Literacy Skills Scale (United States)

    Eristi, Bahadir; Erdem, Cahit


    This study aims to develop a reliable and valid scale to identify the levels of media users' media literacy skills. The scale development process was carried out in nine steps as recommended in the literature. Before the scale was administered, the items were reviewed by field experts and language experts and a pilot study was carried out.…

  2. Scale dependent inference in landscape genetics (United States)

    Samuel A. Cushman; Erin L. Landguth


    Ecological relationships between patterns and processes are highly scale dependent. This paper reports the first formal exploration of how changing scale of research away from the scale of the processes governing gene flow affects the results of landscape genetic analysis. We used an individual-based, spatially explicit simulation model to generate patterns of genetic...

  3. An Aesthetic Value Scale of the Rorschach. (United States)

    Insua, Ana Maria


    An aesthetic value scale of the Rorschach cards was built by the successive interval method. This scale was compared with the ratings obtained by means of the Semantic Differential Scales and was found to successfully differentiate sexes in their judgment of card attractiveness. (Author)

  4. Why Online Education Will Attain Full Scale (United States)

    Sener, John


    Online higher education has attained scale and is poised to take the next step in its growth. Although significant obstacles to a full scale adoption of online education remain, we will see full scale adoption of online higher education within the next five to ten years. Practically all higher education students will experience online education in…

  6. Cross-scale analysis of fire regimes (United States)

    Donald A. Falk; Carol Miller; Donald McKenzie; Anne E. Black


    Cross-scale spatial and temporal perspectives are important for studying contagious landscape disturbances such as fire, which are controlled by myriad processes operating at different scales. We examine fire regimes in forests of western North America, focusing on how observed patterns of fire frequency change across spatial scales. To quantify changes in fire...

  7. Developing a new apathy measurement scale: Dimensional Apathy Scale. (United States)

    Radakovic, Ratko; Abrahams, Sharon


    Apathy is both a symptom and syndrome prevalent in neurodegenerative disease, including motor system disorders, that affects motivation to display goal-directed functions. Levy and Dubois (2006) suggested three apathetic subtypes, Cognitive, Emotional-affective and Auto-activation, all with discrete neural correlates and functional impairments. The aim of this study was to create a new apathy measure, the Dimensional Apathy Scale (DAS), which assesses apathetic subtypes and is suitable for use in patient groups with motor dysfunction. 311 healthy participants (mean=37.4, S.D.=15.0) completed a 45-item questionnaire. Horn's parallel analysis of principal factors and Exploratory Factor Analysis resulted in 4 factors (Executive, Emotional, Cognitive Initiation and Behavioural Initiation) that account for 28.9% of the total variance. Twenty-four items were subsequently extracted to form 3 subscales: Executive, Emotional and Behavioural/Cognitive Initiation. The subscale items show good internal consistency reliability. A weak to moderate relationship was found with depression using Beck's Depression Inventory-II. The DAS is a well-constructed method for assessing multidimensional apathy, suitable for application to investigate this syndrome in different disease pathologies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
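
The internal-consistency check mentioned above is typically Cronbach's alpha; a minimal sketch with invented item scores (not the DAS data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each item being the list
    of scores given by the same respondents in the same order.
    Illustrative helper; the example scores are made up."""
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# three respondents answering two items on a 1-3 scale
alpha = cronbach_alpha([[1, 2, 3], [1, 3, 2]])
```

Values near 1 indicate that the items of a subscale covary strongly, i.e. good internal consistency.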

  8. Developing and testing attitude scales around IT. (United States)

    Ward, Rod; Glogowska, Margaret; Pollard, Katherine; Moule, Pam


    Information technology (IT) is an integral component of the healthcare delivery arsenal. However, not all professionals are happy or comfortable with such technology. To assess professionals' attitudes to IT-use in the workplace, a new questionnaire, the Information Technology Attitude Scales for Health (ITASH), which comprises three scales that can be used to measure the attitudes of UK health professionals, has been developed. Here, the authors describe existing scales, why a new scale was required, and how analysing data from a questionnaire using exploratory factor analysis determined the components of the three scales: efficiency of care; education, training and development; and control.

  9. Computational applications of DNA physical scales

    DEFF Research Database (Denmark)

    Baldi, Pierre; Chauvin, Yves; Brunak, Søren


    The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propeller twist, and trinucleotide scales such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example we construct a strand invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models ...

  10. Kibble-Zurek scaling in holography (United States)

    Natsuume, Makoto; Okamura, Takashi


    The Kibble-Zurek (KZ) mechanism describes the generations of topological defects when a system undergoes a second-order phase transition via quenches. We study the holographic KZ scaling using holographic superconductors. The scaling can be understood analytically from a scaling analysis of the bulk action. The argument is reminiscent of the scaling analysis of the mean-field theory but is more subtle and is not entirely obvious. This is because the scaling is not the one of the original bulk theory but is an emergent one that appears only at the critical point. The analysis is also useful to determine the dynamic critical exponent z.


    Directory of Open Access Journals (Sweden)

    Lukas CECHURA


    Full Text Available The paper analyses scale efficiency in European pork production. The analysis shows significant differences in the exploitation of economies of scale among EU member states. In particular, old member states exhibit increasing returns to scale whereas most new member states show either constant or decreasing returns to scale. The differences among old and new member states are also pronounced from a dynamic perspective: whereas the old member states improved their productivity in pork production through scale efficiency, its impact in new member states was rather negative.

  12. Scaling solutions for dilaton quantum gravity

    Energy Technology Data Exchange (ETDEWEB)

    Henz, T.; Pawlowski, J.M.; Wetterich, C.


    Scaling solutions for the effective action in dilaton quantum gravity are investigated within the functional renormalization group approach. We find numerical solutions that connect ultraviolet and infrared fixed points as the ratio between scalar field and renormalization scale k is varied. In the Einstein frame the quantum effective action corresponding to the scaling solutions becomes independent of k. The field equations derived from this effective action can be used directly for cosmology. Scale symmetry is spontaneously broken by a non-vanishing cosmological value of the scalar field. For the cosmology corresponding to our scaling solutions, inflation arises naturally. The effective cosmological constant becomes dynamical and vanishes asymptotically as time goes to infinity.

  14. Scaling laws for coastal overwash morphology (United States)

    Lazarus, Eli D.


    Overwash is a physical process of coastal sediment transport driven by storm events and is essential to landscape resilience in low-lying barrier environments. This work establishes a comprehensive set of scaling laws for overwash morphology: unifying quantitative descriptions with which to compare overwash features by their morphological attributes across case examples. Such scaling laws also help relate overwash features to other morphodynamic phenomena. Here morphometric data from a physical experiment are compared with data from natural examples of overwash features. The resulting scaling relationships indicate scale invariance spanning several orders of magnitude. Furthermore, these new relationships for overwash morphology align with classic scaling laws for fluvial drainages and alluvial fans.

  15. The modified procedures in coercivity scaling

    Directory of Open Access Journals (Sweden)

    Najgebauer Mariusz


    Full Text Available The paper presents a scaling approach to the analysis of coercivity. The Widom-based procedure of coercivity scaling has been tested for non-oriented electrical steel. Because the results were insufficient, the scaling procedure was improved following the method proposed by Van den Bossche. The modified procedure of coercivity scaling gave better results in comparison to the original approach. The influence of particular parameters, and of the range of measurement data used in the estimations, on the final result of the coercivity scaling is discussed.

  16. Fluctuation scaling, Taylor's law, and crime. (United States)

    Hanley, Quentin S; Khatun, Suniya; Yosef, Amal; Dyer, Rachel-May


    Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers and is unique in reporting on all processes contributing to the observed variance and is either robust to or exhibits signs of many types of data manipulation.
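
Taylor's law states variance = c · mean^α, so α is the slope of a log-log regression; a sketch with synthetic numbers (the exponent 1.29 mirrors the burglary-like clustering above but is not the paper's data):

```python
import math

def taylor_exponent(means, variances):
    """Least-squares slope of log(variance) against log(mean),
    i.e. the Taylor's law exponent alpha."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# synthetic group statistics obeying variance = 1.3 * mean**1.29 exactly
means = [2.0, 5.0, 11.0, 23.0, 47.0]
variances = [1.3 * m ** 1.29 for m in means]
alpha = taylor_exponent(means, variances)
```

α = 1 corresponds to Poisson-like counts, while α > 1 signals the temporal clustering reported for burglary.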

  18. Statistical and Judgmental Criteria for Scale Purification

    DEFF Research Database (Denmark)

    Wieland, Andreas; Durach, Christian F.; Kembro, Joakim


    ... of scale purification, to critically analyze the current state of scale purification in supply chain management (SCM) research and to provide suggestions for advancing the scale-purification process. Design/methodology/approach: A framework for making scale-purification decisions is developed and used to analyze and critically reflect on the application of scale purification in leading SCM journals. Findings: This research highlights the need for rigorous scale-purification decisions based on both statistical and judgmental criteria. By applying the proposed framework to the SCM discipline, a lack of methodological rigor and coherence is identified when it comes to current purification practices in empirical SCM research. Suggestions for methodological improvements are provided. Research limitations/implications: The framework and additional suggestions will help to advance the knowledge about scale ...

  19. Scaling Laws, Earthquakes, Chaos and Predictions (United States)

    Allègre, C. J.; Le Mouel, J.; Narteau, C.


    The scaling organization of fracture tectonics (S.O.F.T.) model developed by Allegre et al. (1995) is based on an energy splitting combined with a renormalization group approach. This approach is a link between physical approaches, multiblock approaches (like Burridge-Knopoff) and scaling approaches to earthquakes. Its basis is to use a scaling transfer mechanism and to compute, for each scale, the probability of failure. We define a critical point by the convergence of the failure probabilities across different scales. Depending on the parameter values and the scaling transfer laws, we compute different cases: some with precursors followed by large earthquakes; some without precursors but with a large event; some with many aftershocks and some with few; and some with pure creep without quakes. These models suggest a trail for predictions: the study of various parameters at different scales (electromagnetic signals at various frequencies, or seismic noise at various frequencies).
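
The idea of propagating a failure probability from scale to scale can be illustrated with a toy renormalization rule (a simple 2-out-of-3 majority rule, not the specific S.O.F.T. transfer law):

```python
def renormalize(p):
    """Failure probability one scale up if a block fails when at least
    2 of its 3 sub-blocks fail: p' = 3p^2(1-p) + p^3 = 3p^2 - 2p^3."""
    return 3 * p ** 2 - 2 * p ** 3

def flow(p, n_scales=40):
    """Iterate the transfer rule across n_scales scales."""
    for _ in range(n_scales):
        p = renormalize(p)
    return p
```

The fixed points are 0, 1/2 and 1: starting below the critical value 1/2 the failure probability flows to 0 (no large event), above it to 1 (system-wide failure), mimicking how the convergence of per-scale failure probabilities defines a critical point.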

  20. A small scale honey dehydrator. (United States)

    Gill, R S; Hans, V S; Singh, Sukhmeet; Pal Singh, Parm; Dhaliwal, S S


    A small scale honey dehydrator has been designed, developed, and tested to reduce the moisture content of honey below 17 %. Experiments have been conducted for honey dehydration using drying air at ambient temperature, 30 and 40 °C, and water at 35, 40 and 45 °C. In this dehydrator, hot water has been circulated in a water jacket around the honey container to heat honey. The heated honey has been pumped through a sieve to form honey streams through which drying air passes for moisture removal. The honey streams help in increasing the exposed surface area of honey in contact with drying air, thus resulting in faster dehydration of honey. The maximum drying rate per square meter of honey exposed to drying air was found to be 197.0 g/(h·m²), corresponding to drying air and water temperatures of 40 and 45 °C respectively, whereas it was minimum (74.8 g/(h·m²)) for drying air at ambient temperature (8-17 °C) and water at 35 °C. The energy cost of reducing the honey moisture content from 25.2 to 16.4 % was Rs. 6.20 to Rs. 17.36 (US$ 0.10 to US$ 0.28; one US$ = 62.00 Indian rupees in February 2014) per kilogram of honey.

  1. Rating scales and Rasch measurement. (United States)

    Andrich, David


    Assessments with ratings in ordered categories have become ubiquitous in health, biological and social sciences. Ratings are used when a measuring instrument of the kind found in the natural sciences is not available to assess some property in terms of degree - for example, greater or smaller, better or worse, or stronger or weaker. The handling of ratings has ranged from the very elementary to the highly sophisticated. In an elementary form, and assumed in classical test theory, the ratings are scored with successive integers and treated as measurements; in a sophisticated form, and used in modern test theory, the ratings are characterized by probabilistic response models with parameters for persons and the rating categories. Within modern test theory, two paradigms, similar in many details but incompatible on crucial points, have emerged. For the purposes of this article, these are termed the statistical modeling and experimental measurement paradigms. Rather than reviewing a compendium of available methods and models for analyzing ratings in detail, the article focuses on the incompatible differences between these two paradigms, with implications for choice of model and inferences. It shows that the differences have implications for different roles for substantive researchers and psychometricians in designing instruments with rating scales. To illustrate these differences, an example is provided.
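
In the modern (Rasch) paradigm mentioned above, ordered categories are modeled probabilistically; a sketch of category probabilities under the rating scale model, with invented person, item and threshold parameters:

```python
import math

def category_probs(theta, delta, taus):
    """Category probabilities P(X = x), x = 0..len(taus), under the
    rating scale model: P(X = x) is proportional to
    exp(x*(theta - delta) - sum of the first x thresholds).
    theta: person location, delta: item location, taus: thresholds.
    The numeric values used below are invented for illustration."""
    weights = []
    cum_tau = 0.0
    for x in range(len(taus) + 1):
        if x > 0:
            cum_tau += taus[x - 1]
        weights.append(math.exp(x * (theta - delta) - cum_tau))
    total = sum(weights)
    return [w / total for w in weights]

# a person located well above the item favours the top category
probs = category_probs(theta=3.0, delta=0.0, taus=[-1.0, 0.0, 1.0])
```

Unlike the classical-test-theory practice of scoring categories with successive integers, the category probabilities here follow from explicit person and threshold parameters.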

  2. Binary Multidimensional Scaling for Hashing. (United States)

    Huang, Yameng; Lin, Zhouchen


    Hashing is a useful technique for fast nearest neighbor search due to its low storage cost and fast query speed. Unsupervised hashing aims at learning binary hash codes for the original features so that the pairwise distances can be best preserved. While several works have targeted this task, the results are not satisfactory, mainly due to the oversimplified model. In this paper, we propose a unified and concise unsupervised hashing framework, called Binary Multidimensional Scaling (BMDS), which is able to learn the hash code for distance preservation in both batch and online mode. In the batch mode, unlike most existing hashing methods, we do not need to simplify the model by predefining the form of the hash map. Instead, we learn the binary codes directly based on the pairwise distances among the normalized original features by Alternating Minimization. This enables a stronger expressive power of the hash map. In the online mode, we consider the holistic distance relationship between the current query example and those we have already learned, rather than only focusing on the current data chunk. It is useful when the data come in a streaming fashion. Empirical results show that while being efficient for training, our algorithm outperforms state-of-the-art methods by a large margin in terms of distance preservation, which is practical for real-world applications.

  3. Universal scaling in sports ranking (United States)

    Deng, Weibing; Li, Wei; Cai, Xu; Bulou, Alain; Wang, Qiuping A.


    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on and so forth. Herewith, we study a specific kind—sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters.
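
A minimal version of such a toy model can be sketched as follows; the sigmoid width and match counts are arbitrary choices, not parameters fitted to the tennis data:

```python
import math
import random

def p_higher_wins(rank_diff, width=8.0):
    """Sigmoidal probability that the higher-ranked player beats an
    opponent rank_diff places below (toy parametrization)."""
    return 1.0 / (1.0 + math.exp(-rank_diff / width))

def play_season(n_players=32, n_matches=5000, seed=1):
    """Accrue one point per win over randomly drawn pairings."""
    rng = random.Random(seed)
    score = [0] * n_players          # index 0 is the best-ranked player
    for _ in range(n_matches):
        a, b = rng.sample(range(n_players), 2)
        hi, lo = min(a, b), max(a, b)
        winner = hi if rng.random() < p_higher_wins(lo - hi) else lo
        score[winner] += 1
    return score

scores = play_season()
```

Sorting the accumulated scores and inspecting their distribution is how such a model's power-law claim can be probed against empirical ranking data.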

  4. Scaling Agile Infrastructure to People

    CERN Document Server

    Jones, B; Traylen, S; Arias, N Barrientos


    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow ...

  5. Large-Scale Sequence Comparison. (United States)

    Lal, Devi; Verma, Mansi


    There are millions of sequences deposited in genomic databases, and it is an important task to categorize them according to their structural and functional roles. Sequence comparison is a prerequisite for proper categorization of both DNA and protein sequences, and helps in assigning a putative or hypothetical structure and function to a given sequence. There are various methods available for comparing sequences, alignment being first and foremost for sequences with a small number of base pairs as well as for large-scale genome comparison. Various tools are available for performing pairwise large sequence comparison. The best known tools either perform global alignment or generate local alignments between the two sequences. In this chapter we first provide basic information regarding sequence comparison. This is followed by the description of the PAM and BLOSUM matrices that form the basis of sequence comparison. We also give a practical overview of currently available methods such as BLAST and FASTA, followed by a description and overview of tools available for genome comparison including LAGAN, MumMER, BLASTZ, and AVID.
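
As a concrete illustration of the global alignment mentioned above (the match/mismatch/gap scores here are a simple illustrative choice, not a PAM or BLOSUM matrix):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score between sequences a and b,
    computed with a rolling row of the dynamic-programming matrix."""
    prev = [j * gap for j in range(len(b) + 1)]   # aligning "" against b[:j]
    for i in range(1, len(a) + 1):
        cur = [i * gap] + [0] * len(b)            # aligning a[:i] against ""
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur[j] = max(diag, prev[j] + gap, cur[j - 1] + gap)
        prev = cur
    return prev[-1]

score = nw_score("GATTACA", "GCATGCU")
```

Tools such as BLAST and FASTA build heuristics on top of this dynamic-programming core to make large-scale comparison tractable.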

  6. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas


    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representing the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  7. Universal scaling in sports ranking

    CERN Document Server

    Deng, Weibing; Cai, Xu; Bulou, Alain; Wang, Qiuping A


    Ranking is a ubiquitous phenomenon in human society. By clicking through the web pages of Forbes, you may find all kinds of rankings, such as the world's most powerful people, the world's richest people, top-paid tennis stars, and so on. Here we study a specific kind, sports ranking systems, in which players' scores and prize money are calculated based on their performances in various tournaments. A typical example is tennis. It is found that the distributions of both scores and prize money follow universal power laws, with exponents nearly identical for most sports fields. In order to understand the origin of this universal scaling we focus on tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player will top the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simul...
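    A toy model of the kind the abstract describes - a win probability that grows sigmoidally with the rank difference - can be sketched as a short simulation. This is an illustrative reconstruction, not the authors' code; the rate constant `k` and the one-point-per-win scoring rule are assumptions:

```python
import math
import random

def win_prob(rank_diff, k=0.05):
    """P(higher-ranked player beats lower-ranked one); sigmoidal in rank difference."""
    return 1.0 / (1.0 + math.exp(-k * rank_diff))

def simulate_season(n_players=100, n_matches=20000, k=0.05, seed=1):
    """Accumulate per-player scores over random pairwise matches."""
    random.seed(seed)
    scores = [0] * n_players                # player i holds rank i + 1 (index 0 = top)
    for _ in range(n_matches):
        i, j = random.sample(range(n_players), 2)
        hi, lo = min(i, j), max(i, j)       # smaller index = higher rank
        winner = hi if random.random() < win_prob(lo - hi, k) else lo
        scores[winner] += 1                 # one point per win (assumed scoring rule)
    return scores

scores = simulate_season()
```

    Under this rule the score distribution becomes heavy-tailed: highly ranked players accumulate disproportionately many points, qualitatively matching the power laws the abstract reports.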

  8. Reactive/Adsorptive transport in (partially-) saturated porous media: from pore scale to core scale

    NARCIS (Netherlands)

    Raoof, A.


    Pore-scale modeling provides opportunities to study transport phenomena in fundamental ways because detailed information is available at the microscopic pore scale. This offers the best hope for bridging the traditional gap that exists between pore scale and macro (lab) scale description of the

  9. Meso-scale machining capabilities and issues

    Energy Technology Data Exchange (ETDEWEB)



    Meso-scale manufacturing processes are bridging the gap between silicon-based MEMS processes and conventional miniature machining. These processes can fabricate two- and three-dimensional parts having micron-size features in traditional materials such as stainless steels, rare earth magnets, ceramics, and glass. Meso-scale processes that are currently available include focused ion beam sputtering, micro-milling, micro-turning, excimer laser ablation, femto-second laser ablation, and micro electro discharge machining. These meso-scale processes employ subtractive machining technologies (i.e., material removal), unlike LIGA, which is an additive meso-scale process. Meso-scale processes have different material capabilities and machining performance specifications. Machining performance specifications of interest include minimum feature size, feature tolerance, feature location accuracy, surface finish, and material removal rate. Sandia National Laboratories is developing meso-scale electro-mechanical components, which require meso-scale parts that move relative to one another. The meso-scale parts fabricated by subtractive meso-scale manufacturing processes have unique tribology issues because of the variety of materials and the surface conditions produced by the different meso-scale manufacturing processes.

  10. Scale selection for supervised image segmentation

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M J; Loog, Marco


    Finding the right scales for feature extraction is crucial for supervised image segmentation based on pixel classification. There are many scale selection methods in the literature; among them the one proposed by Lindeberg is widely used for image structures such as blobs, edges and ridges. Those...... unsupervised scale selection paradigms and present a supervised alternative. In particular, the so-called max rule is proposed, which selects a scale for each pixel to have the largest confidence in the classification across the scales. In interpreting the classifier as a complex image filter, we can relate...... our approach back to Lindeberg's original proposal. In the experiments, the max rule is applied to artificial and real-world image segmentation tasks, which is shown to choose the right scales for different problems and lead to better segmentation results....

  11. Scales used in research and applications

    Directory of Open Access Journals (Sweden)

    Wanderson Lyrio Bermudes


    Full Text Available In scientific research we always seek excellence in methodology, since defining the best method is as important as choosing the scale to be used. This study aims to identify the types of scales used in research and their applications. The four most common types of scale are nominal, ordinal, interval, and ratio. Among the attitude scales used in scientific research, we highlight the Thurstone and Likert scales. The Thurstone scale is used to measure a probable human attitude without indicating its intensity. The Likert scale consists of five items ranging from complete disagreement to total agreement with a given statement. It differs from the Thurstone scale in the degree of intensity covered by its answers, and it has been more widely used.

  12. Development of the Holistic Nursing Competence Scale. (United States)

    Takase, Miyuki; Teraoka, Sachiko


    This study developed a scale to measure the nursing competence of Japanese registered nurses and to test its psychometric properties. Following the derivation of scale items and pilot testing, the final version of the scale was administered to 331 nurses to establish its internal consistency, as well as its construct and criterion-related validity. Using an exploratory and a confirmatory factor analysis, 36 items with a five-factor structure were retained to form the Holistic Nursing Competence Scale. These factors illustrate nurses' general aptitude and their competencies in staff education and management, ethical practice, the provision of nursing care, and professional development. The Scale has a positive correlation with the length of clinical experience. The Cronbach's alpha coefficient was 0.967. The Scale is a reliable and valid measure, helping both nurses and organizations to correctly evaluate nurses' competence and identify their needs for professional development. © 2011 Blackwell Publishing Asia Pty Ltd.
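    The internal-consistency figure quoted (Cronbach's alpha of 0.967) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch with hypothetical responses, not the study's data:

```python
def cronbach_alpha(items):
    """items: list of k per-item score lists, each with one entry per respondent."""
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent across all items
    totals = [sum(item[r] for item in items) for r in range(n)]
    return k / (k - 1) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))

# Hypothetical responses: 3 items, 4 respondents, perfectly consistent answers.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

    Perfectly correlated items give alpha = 1; values near 0.97, as reported, indicate very high (arguably redundant) internal consistency.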

  13. Universal geometrical scaling of the elliptic flow

    Directory of Open Access Journals (Sweden)

    Andrés C.


    Full Text Available The presence of scaling variables in experimental observables provides very valuable indications of the dynamics underlying a given physical process. In recent years, the search for geometric scaling, that is, the presence of a scaling variable which encodes all geometrical information of the collision as well as other external quantities such as the total energy, has been very active. This is motivated, in part, by its being one of the genuine predictions of the Color Glass Condensate formalism for saturation of partonic densities. Here we extend these previous findings to the case of experimental data on elliptic flow. We find an excellent scaling for all centralities and energies, from RHIC to LHC, with a simple generalization of the scaling previously found for other observables and systems. Interestingly, the case of photons, difficult to reconcile in most formalisms, nicely fits the scaling curve. We discuss the possible interpretations of this finding in terms of initial- or final-state effects.

  14. Time scales, their users, and leap seconds (United States)

    Seidelmann, P. Kenneth; Seago, John H.


    Numerous time scales exist to address specific user requirements. Accurate dynamical time scales (barycentric, geocentric and terrestrial) have been developed based on the theory of relativity. A family of time scales has been developed based on the rotation of the Earth that includes Universal Time (specifically UT1), which serves as the traditional astronomical basis of civil time. International Atomic Time (TAI) is also maintained as a fundamental time scale based on the output of atomic frequency standards. Coordinated Universal Time (UTC) is an atomic scale for worldwide civil timekeeping, referenced to TAI, but with epoch adjustments via so-called leap seconds to remain within one second of UT1. A review of the development of the time scales, the status of the leap-second issue, and user considerations and perspectives are discussed. A description of some more recent applications for time usage is included.

  15. Scaling Consumers' Purchase Involvement: A New Approach

    Directory of Open Access Journals (Sweden)

    Jörg Kraigher-Krainer


    Full Text Available A two-dimensional scale, called the ECID Scale, is presented in this paper. The scale is based on a comprehensive model and captures the two antecedent factors of purchase-related involvement, namely whether motivation is intrinsic or extrinsic and whether risk is perceived as low or high. The procedure of scale development and item selection is described. The scale performs well in terms of validity, reliability, and objectivity despite using a small set of items (four each), allowing simultaneous measurement of up to ten purchases per respondent. The procedure for administering the scale is described so that it can now easily be applied by both scholars and practitioners. Finally, managerial implications of the data obtained from its application, which provide insights into possible strategic marketing conclusions, are discussed.

  16. Further validation of the Indecisiveness Scale. (United States)

    Gayton, W F; Clavin, R H; Clavin, S L; Broida, J


    Scores on the Indecisiveness Scale have been shown to be correlated with scores on measures of obsessive-compulsive tendencies and perfectionism for women. This study examined the validity of the Indecisiveness Scale with 41 men whose mean age was 21.1 yr. Indecisiveness scores were significantly correlated with scores on measures of obsessive-compulsive tendencies and perfectionism. Also, undeclared majors had a significantly higher mean on the Indecisiveness Scale than did declared majors.

  17. Urban Scaling of Cities in the Netherlands


    van Raan, Anthony F. J.; Gerwin van der Meulen; Willem Goedhart


    We investigated the socioeconomic scaling behavior of all cities with more than 50,000 inhabitants in the Netherlands and found significant superlinear scaling of the gross urban product with population size. Of these cities, 22 major cities have urban agglomerations and urban areas defined by the Netherlands Central Bureau of Statistics. For these major cities we investigated the superlinear scaling for three separate modalities: the cities defined as municipalities, their urban agglomeratio...
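    Superlinear scaling of this kind is typically estimated by ordinary least squares on log-transformed data, fitting Y = a * N^beta and finding beta > 1. A sketch on synthetic data (not the Dutch figures, which the abstract does not tabulate):

```python
import math

def scaling_exponent(populations, outputs):
    """OLS slope of log(output) against log(population), i.e. the exponent beta."""
    xs = [math.log(n) for n in populations]
    ys = [math.log(y) for y in outputs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic cities following Y = 2 * N**1.15 exactly (beta = 1.15, superlinear).
pops = [50_000, 100_000, 250_000, 500_000, 1_000_000]
beta = scaling_exponent(pops, [2 * n ** 1.15 for n in pops])
```

    On noiseless power-law data the regression recovers the exponent exactly; with real urban data the fit additionally yields confidence intervals on beta, which decide whether the deviation from 1 is significant.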

  18. Resource Complementarity and IT Economies of Scale

    DEFF Research Database (Denmark)

    Woudstra, Ulco; Berghout, Egon; Tan, Chee-Wee


    In this study, we explore economies of scale for IT infrastructure and application services. An in-depth appreciation of economies of scale is imperative for an adequate understanding of the impact of IT investments. Our findings indicate that even low IT spending organizations can make...... a difference by devoting at least 60% of their total IT budget on IT infrastructure in order to foster economies of scale and extract strategic benefits....

  19. Euthanasia attitude; A comparison of two scales. (United States)

    Aghababaei, Naser; Farahani, Hojjatollah; Hatami, Javad


    The main purposes of the present study were to see how the term "euthanasia" influences people's support for or opposition to euthanasia, and to see how euthanasia attitude relates to religious orientation and personality factors. In this study two different euthanasia attitude scales were compared. 197 students were selected to fill out either the Euthanasia Attitude Scale (EAS) or Wasserman's Attitude Towards Euthanasia scale (ATE scale). The former scale includes the term "euthanasia"; the latter does not. All participants filled out 50 items of the International Personality Item Pool, 16 items of the HEXACO openness scale, and 14 items of the Religious Orientation Scale-Revised. Results indicated that even though the two groups were not different in terms of gender, age, education, religiosity and personality, the mean score on the ATE scale was significantly higher than that of the EAS. Euthanasia attitude was negatively correlated with religiosity and conscientiousness, and positively correlated with psychoticism and openness. It can be concluded that analyzing the attitude towards euthanasia with the EAS rather than the ATE scale results in lower levels of opposition to euthanasia. This study raises the question of whether euthanasia attitude scales should contain definitions and concepts of euthanasia or should describe cases of it.

  20. Scale Mismatches in Management of Urban Landscapes

    Directory of Open Access Journals (Sweden)

    Sara T. Borgström


    Full Text Available Urban landscapes constitute the future environment for most of the world's human population. An increased understanding of the urbanization process and of the effects of urbanization at multiple scales is, therefore, key to ensuring human well-being. In many conventional natural resource management regimes, incomplete knowledge of ecosystem dynamics and institutional constraints often leads to institutional management frameworks that do not match the scale of ecological patterns and processes. In this paper, we argue that scale mismatches are particularly pronounced in urban landscapes. Urban green spaces provide numerous important ecosystem services to urban citizens, and the management of these urban green spaces, including recognition of scales, is crucial to the well-being of the citizens. From a qualitative study of the current management practices in five urban green spaces within the Greater Stockholm Metropolitan Area, Sweden, we found that (1) several spatial, temporal, and functional scales are recognized, but the cross-scale interactions are often neglected, and (2) spatial and temporal meso-scales are seldom given priority. One potential effect of the neglect of ecological cross-scale interactions in these highly fragmented landscapes is a gradual reduction in the capacity of the ecosystems to provide ecosystem services. Two important strategies for overcoming urban scale mismatches are suggested: (1) development of an integrative view of the whole urban social-ecological landscape, and (2) creation of adaptive governance systems to support practical management.

  1. Socially responsible marketing decisions - scale development

    Directory of Open Access Journals (Sweden)

    Dina Lončarić


    Full Text Available The purpose of this research is to develop a measurement scale for evaluating the implementation level of the concept of social responsibility in taking marketing decisions, in accordance with a paradigm of the quality-of-life marketing. A new scale of "socially responsible marketing decisions" has been formed and its content validity, reliability and dimensionality have been analyzed. The scale has been tested on a sample of the most successful Croatian firms. The research results lead us to conclude that the scale has satisfactory psychometric characteristics but that it is necessary to improve it by generating new items and by testing it on a greater number of samples.

  2. A numerical exercise in musical scales (United States)

    Hartmann, George C.


    This paper investigates why the 12-note scale, having equal intervals, seems to be the best representation of scales constructed from purely harmonic intervals. Could other equal-temperament scales with more or fewer than 12 notes serve just as well? The investigation is done by displaying the difference between a set of harmonic notes and equal-interval scales having n notes per octave. The difference is small when n equals 12, but also when n equals 19 and 29. The number density of notes per unit frequency interval is also investigated.
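    The comparison described - how closely an n-note equal-tempered octave approximates a set of purely harmonic intervals - can be sketched numerically. The set of just ratios below is an assumption for illustration; the paper's exact set of harmonic notes may differ:

```python
import math

# A common set of just intervals: fifth, fourth, major/minor thirds and sixths.
JUST_RATIOS = [3 / 2, 4 / 3, 5 / 4, 6 / 5, 5 / 3, 8 / 5]

def tet_error(n):
    """Mean absolute error, in cents, of the closest n-TET note to each just ratio."""
    step = 1200 / n                       # one equal-temperament step, in cents
    errs = []
    for r in JUST_RATIOS:
        cents = 1200 * math.log2(r)       # size of the just interval, in cents
        errs.append(abs(cents - round(cents / step) * step))
    return sum(errs) / len(errs)
```

    With this interval set the error dips sharply at n = 12 relative to its neighbours, and dips again for certain larger n, in line with the paper's finding that some equal temperaments with more than 12 notes also approximate the harmonic intervals well.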

  3. Ergodicity breakdown and scaling from single sequences

    Energy Technology Data Exchange (ETDEWEB)

    Kalashyan, Armen K. [Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, TX 76203-1427 (United States); Buiatti, Marco [Laboratoire de Neurophysique et Physiologie, CNRS UMR 8119 Universite Rene Descartes - Paris 5 45, rue des Saints Peres, 75270 Paris Cedex 06 (France); Cognitive Neuroimaging Unit - INSERM U562, Service Hospitalier Frederic Joliot, CEA/DRM/DSV, 4 Place du general Leclerc, 91401 Orsay Cedex (France); Grigolini, Paolo [Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, TX 76203-1427 (United States); Dipartimento di Fisica ' E.Fermi' - Universita di Pisa and INFM, Largo Pontecorvo 3, 56127 Pisa (Italy); Istituto dei Processi Chimico, Fisici del CNR Area della Ricerca di Pisa, Via G. Moruzzi 1, 56124 Pisa (Italy)], E-mail:


    In the ergodic regime, several methods efficiently estimate the temporal scaling of time series characterized by long-range power-law correlations by converting them into diffusion processes. However, in the condition of ergodicity breakdown, the same methods give ambiguous results. We show that in such regime, two different scaling behaviors emerge depending on the age of the windows used for the estimation. We explain the ambiguity of the estimation methods by the different influence of the two scaling behaviors on each method. Our results suggest that aging drastically alters the scaling properties of non-ergodic processes.

  4. Hierarchical Scaling in Systems of Natural Cities

    CERN Document Server

    Chen, Yanguang


    Hierarchies can be modeled by a set of exponential functions, from which we can derive a set of power laws indicative of scaling. These scaling laws are followed by many natural and social phenomena such as cities, earthquakes, and rivers. This paper is devoted to revealing the scaling patterns in systems of natural cities by reconstructing the hierarchy with cascade structure. The cities of America, Britain, France, and Germany are taken as examples to make empirical analyses. The hierarchical scaling relations can be well fitted to the data points within the scaling ranges of the size and area of the natural cities. The size-number and area-number scaling exponents are close to 1, and the allometric scaling exponent is slightly less than 1. The results suggest that natural cities follow hierarchical scaling laws and hierarchical conservation law. Zipf's law proved to be one of the indications of the hierarchical scaling, and the primate law of city-size distribution represents a local pattern and can be mer...
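    The cascade construction described - counts growing and sizes shrinking geometrically down the hierarchy - yields a power law whose exponent is the ratio of the two logarithms. A minimal sketch with assumed common ratios (rn = rs = 2, which gives the size-number exponent close to 1 reported in the abstract):

```python
import math

def cascade(levels, rn=2.0, rs=2.0, top_size=1_000_000):
    """Hierarchical cascade: level m holds rn**m cities of size top_size / rs**m."""
    sizes = [top_size / rs ** m for m in range(levels)]
    counts = [rn ** m for m in range(levels)]
    return sizes, counts

def log_log_slope(sizes, counts):
    """OLS slope of log(count) vs log(size); for a cascade it equals -ln(rn)/ln(rs)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(c) for c in counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

sizes, counts = cascade(8)
D = -log_log_slope(sizes, counts)   # hierarchical size-number scaling exponent
```

    The negative slope of the log-log relation is exactly ln(rn)/ln(rs); choosing rn = rs reproduces the exponent of 1 that the empirical analysis finds for the natural cities.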

  5. Framing scales and scaling frames : the politics of scale and its implications for the governance of the Dutch intensive agriculture

    NARCIS (Netherlands)

    Lieshout, van M.


    With this thesis, I aim to get a better understanding of scale framing in interaction, and the implications of scale framing for the nature and course of governance processes about complex problems. In chapter 1, I introduce the starting points: the conceptual framework, the research aim, the

  6. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling (United States)

    Bronstert, Axel; Heistermann, Maik; Francke, Till


    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the question of the appropriate scale to apply depends on the overall question under study. It is therefore not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth looking at the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global scale approaches and models are increasingly in operation, and the question therefore arises to what extent and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies the process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales: Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research, based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data on state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on

  7. Reinterpreting aircraft measurements in anisotropic scaling turbulence

    Directory of Open Access Journals (Sweden)

    S. J. Hovde


    Full Text Available Due to both systematic and turbulent induced vertical fluctuations, the interpretation of atmospheric aircraft measurements requires a theory of turbulence. Until now virtually all the relevant theories have been isotropic or "quasi isotropic" in the sense that their exponents are the same in all directions. However almost all the available data on the vertical structure shows that it is scaling but with exponents different from the horizontal: the turbulence is scaling but anisotropic. In this paper, we show how such turbulence can lead to spurious breaks in the scaling and to the spurious appearance of the vertical scaling exponent at large horizontal lags.

    We demonstrate this using 16 legs of Gulfstream 4 aircraft data near the top of the troposphere, following isobars each between 500 and 3200 km in length. First we show that over wide ranges of scale, the horizontal spectra of the aircraft altitude are nearly k^-5/3. In addition, we show that the altitude and pressure fluctuations along these fractal trajectories have a high degree of coherence with the measured wind (especially with its longitudinal component). There is also a strong phase relation between the altitude, pressure and wind fluctuations; for scales less than ≈40 km (on average) the wind fluctuations lead the pressure and altitude, whereas for larger scales the pressure fluctuations lead the wind. At the same transition scale, there is a break in the wind spectrum which we argue is caused by the aircraft starting to accurately follow isobars at the larger scales. In comparison, the temperature and humidity have low coherencies and phases, and there are no apparent scale breaks, reinforcing the hypothesis that it is the aircraft trajectory that is causally linked to the scale breaks in the wind measurements.

    Using spectra and structure functions for the wind, we then estimate their exponents (β, H) at small (5/3, 1/3) and large scales (2
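    The exponent pairs quoted are consistent with the standard relation between the spectral exponent β and the first-order structure-function exponent H for scaling processes, β = 1 + 2H; this consistency check is ours, not the paper's:

```python
def beta_from_H(H):
    """Spectral exponent implied by the first-order structure-function exponent H
    (valid for 0 < H < 1, ignoring intermittency corrections)."""
    return 1 + 2 * H
```

    The small-scale pair (β, H) = (5/3, 1/3) satisfies the relation exactly (the Kolmogorov values), and a large-scale β of 2 corresponds to H = 1/2.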

  8. Statistics for Locally Scaled Point Patterns

    DEFF Research Database (Denmark)

    Prokesová, Michaela; Hahn, Ute; Vedel Jensen, Eva B.


    scale factor. The main emphasis of the present paper is on analysis of such models. Statistical methods are developed for estimation of scaling function and template parameters as well as for model validation. The proposed methods are assessed by simulation and used in the analysis of a vegetation...


    African Journals Online (AJOL)


    function indicating that there is room for expansion in output and productivity of yam farmers in Edo State. This can be ... Keywords: Allocative Efficiency, Elasticity of Production, Return to Scale, Yam. INTRODUCTION .... 1, 2 or 3) where the respondent is operating (Olukosi and Ogungbile 1989). • Likert scale was used to ...

  10. Scaling service delivery in a failed state

    NARCIS (Netherlands)

    Muilerman, Sander; Vellema, Sietze


    The increased use of sustainability standards in the international trade in cocoa challenges companies to find effective modes of service delivery to large numbers of small-scale farmers. A case study of the Sustainable Tree Crops Program targeting the small-scale cocoa producers in Côte d’Ivoire

  11. White Mango Scale, Aulacaspis tubercularis , Distribution and ...

    African Journals Online (AJOL)

    Mango is attacked by many insect pests which reduce the quality and productivity of the crop. Among the insect pests attacking the mango plant, white mango scale is the most devastating. White mango scale has been reported since 2010 from the Guto Gida district of East Wollega zone. The distribution and severity of white ...

  12. Further Validation of the Relational Ethics Scale. (United States)

    Hargrave, Terry D.; Bomba, Anne K.


    Conducted two studies to examine effects of marital status and age on Relational Ethics Scale. Study One indicated that scale was reliable and valid among single, never married young adults (n=162). Study Two examined differences between scores for this population and original normative sample. Findings suggest that ethical issues with…

  13. Ecology. Invariants, scaling laws, and ecological complexity. (United States)

    Marquet, P A


    There has been much debate about scaling laws in nature. It is believed that as body size increases the number of individuals in the population decreases. As Marquet explains in his Perspective, an elegant new study in two totally separate stream communities (Schmid et al.) confirms that this scaling law holds across more than 400 species of invertebrates.

  14. Scaling Science | IDRC - International Development Research Centre

    International Development Research Centre (IDRC) Digital Library (Canada)


    Feb 23, 2018 ... The scaling of research and innovation that creates social impact is a priority for IDRC and the development community broadly, but how best to achieve impact at scale is far from straightforward. While we can learn a great deal from standard private sector models, these paradigms are designed to achieve ...

  15. Student Engagement Scale: Development, Reliability and Validity (United States)

    Gunuc, Selim; Kuzu, Abdullah


    In this study, the purpose was to develop a student engagement scale for higher education. The participants were 805 students. In the process of developing the item pool regarding the scale, related literature was examined in detail and interviews were held. Six factors--valuing, sense of belonging, cognitive engagement, peer relationships…

  16. Scaling science | IDRC - International Development Research Centre

    International Development Research Centre (IDRC) Digital Library (Canada)


    Dec 7, 2017 ... Scaling our impact IDRC is committed to supporting the generation, identification, and testing of scalable ideas and innovation, as highlighted in Objective 1 of the Centre's Strategic Plan. With this agenda in mind, we're focussed on advancing our understanding of how scaling up research and innovation ...

  17. Visuomotor Dissociation in Cerebral Scaling of Size

    NARCIS (Netherlands)

    Potgieser, Adriaan R. E.; de Jong, Bauke M.


    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in



    Miller, James C.; Coble, Keith H.; Vergara, Oscar


    Economies of scale are investigated and the impacts of farm payment limitations for producers of cotton and soybeans in Mississippi are evaluated. Limits proposed by the Senate following the recent farm bill debate are overlaid on estimates of the scale economies for the cost of producing these crops to determine the different impacts on farm efficiency and welfare benefits.

  19. Perception of Parents Scale: Development and Validation. (United States)

    Wintre, Maxine Gallander; Yaffe, Marvin

    This paper describes the development and validation of the Perception of Parents Scale (POPS), which was designed to measure the transformation in parent-child relations from the initial positions of authority and obedience to the mature position of mutual reciprocity. A 51-item, 4-point Likert scale was designed. Items were divided into three…

  20. A Review of Reading Motivation Scales (United States)

    Davis, Marcia H.; Tonks, Stephen M.; Hock, Michael; Wang, Wenhao; Rodriguez, Aldo


    Reading motivation is a critical contributor to reading achievement and has the potential to influence its development. Educators, researchers, and evaluators need to select the best reading motivation scales for their research and classroom. The goals of this review were to identify a set of reading motivation student self-report scales used in…

  1. No-scale SUGRA SO(10) Inflation

    Indian Academy of Sciences (India)

    Ila Garg


    Oct 9, 2017 ... Higgs fields for the inflaton. A no-scale SUGRA model of inflation based on the SU(5) GUT using the 24, 5 and 5-bar Higgs in the superpotential has been constructed [14]. In the present work, we study inflation in a renormalizable grand unified theory based on the SO(10) gauge group with no-scale SUGRA.

  2. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, show how such ...

  3. A Clinimetric Overview of Scar Assessment Scales

    NARCIS (Netherlands)

    van der Wal, M.B.A.; Verhaegen, P.D.H.M.; Middelkoop, E.; van Zuijlen, P.P.M.


    Standardized validated evaluation instruments are mandatory to increase the level of evidence in scar management. Scar assessment scales are potentially suitable for this purpose, but the most appropriate scale still needs to be determined. This review will elaborate on several clinically relevant

  4. SCALE Code System 6.2.2

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jessee, Matthew Anderson [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)


    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  5. The New Environmental Paradigm Scale: A Reexamination. (United States)

    Geller, Jack M.; Lasley, Paul


    Explains how the New Environmental Paradigm Scale (NEP) is used to examine and measure paradigmatic shifts in the public's orientation toward the physical environment. Study findings across three different populations confirm the dimensionality of a three-factor model. An appendix contains the NEP scale and item numbers. (ML)

  6. Scale-sensitive governance of the environment

    NARCIS (Netherlands)

    Padt, F.; Opdam, P.F.M.; Polman, N.B.P.; Termeer, C.J.A.M.


    Sensitivity to scales is one of the key challenges in environmental governance. Climate change, food production, energy supply, and natural resource management are examples of environmental challenges that stretch across scales and require action at multiple levels. Governance systems are typically

  7. The one scale that rules them all (United States)

    Ouellette, Jennifer


    There are very real constraints on how large a complex organism can grow. This is the essence of all modern-day scaling laws, and the subject of Geoffrey West's provocative new book Scale: the Universal Laws of Life and Death in Organisms, Cities and Companies

  8. Large-Scale Reform Comes of Age (United States)

    Fullan, Michael


    This article reviews the history of large-scale education reform and makes the case that large-scale or whole system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that, while the pressure for reform was mounting, there were very few examples of deliberate or…

  9. Cardinal Scales for Public Health Evaluation

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Policy studies often evaluate health for a population by summing the individuals' health as measured by a scale that is ordinal or that depends on risk attitudes. We develop a method using a different type of preferences, called preference intensity or cardinal preferences, to construct scales...

  10. Adjustment of the Internal Tax Scale

    CERN Multimedia


    In application of Article R V 2.03 of the Staff Regulations, the internal tax scale has been adjusted with effect on 1 January 2012. The new scale may be consulted via the CERN Admin e-guide.  The notification of internal annual tax certificate for the financial year 2012 takes into account this adjustment. HR Department (Tel. 73907)

  11. Anchoring the Panic Disorder Severity Scale (United States)

    Keough, Meghan E.; Porter, Eliora; Kredlow, M. Alexandra; Worthington, John J.; Hoge, Elizabeth A.; Pollack, Mark H.; Shear, M. Katherine; Simon, Naomi M.


    The Panic Disorder Severity Scale (PDSS) is a clinician-administered measure of panic disorder symptom severity widely used in clinical research. This investigation sought to provide clinically meaningful anchor points for the PDSS both in terms of clinical severity as measured by the Clinical Global Impression-Severity Scale (CGI-S) and to extend…

  12. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.


    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  13. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998

  14. Assessing wildfire risks at multiple spatial scales (United States)

    Justin Fitch


    In continuation of the efforts to advance wildfire science and develop tools for wildland fire managers, a spatial wildfire risk assessment was carried out using Classification and Regression Tree analysis (CART) and Geographic Information Systems (GIS). The analysis was performed at two scales. The small-scale assessment covered the entire state of New Mexico, while...

  15. Transdisciplinary Application of Cross-Scale Resilience (United States)

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are re...

  16. Facilitating Internet-Scale Code Retrieval (United States)

    Bajracharya, Sushil Krishna


    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  17. Scale invariant Volkov–Akulov supergravity

    Directory of Open Access Journals (Sweden)

    S. Ferrara


    A scale invariant goldstino theory coupled to supergravity is obtained as a standard supergravity dual of a rigidly scale-invariant higher-curvature supergravity with a nilpotent chiral scalar curvature. The bosonic part of this theory describes a massless scalaron and a massive axion in a de Sitter Universe.

  18. Multiscaling behavior of atomic-scale friction (United States)

    Jannesar, M.; Jamali, T.; Sadeghi, A.; Movahed, S. M. S.; Fesler, G.; Meyer, E.; Khoshnevisan, B.; Jafari, G. R.


    The scaling behavior of friction between rough surfaces is a well-known phenomenon. It might be asked whether such a scaling feature also exists for friction at an atomic scale despite the absence of roughness on atomically flat surfaces. Indeed, other types of fluctuations, e.g., thermal and instrumental fluctuations, become appreciable at this length scale and can lead to scaling behavior of the measured atomic-scale friction. We investigate this using the lateral force exerted on the tip of an atomic force microscope (AFM) when the tip is dragged over the clean NaCl (001) surface in ultra-high vacuum at room temperature. Here the focus is on the fluctuations of the lateral force profile rather than its saw-tooth trend; we first eliminate the trend using the singular value decomposition technique and then explore the scaling behavior of the detrended data, which contains only fluctuations, using the multifractal detrended fluctuation analysis. The results demonstrate a scaling behavior for the friction data ranging from 0.2 to 2 nm with the Hurst exponent H = 0.61 ± 0.02 at a 1σ confidence interval. Moreover, the dependence of the generalized Hurst exponent, h(q), on the index variable q confirms the multifractal or multiscaling behavior of the nanofriction data. These results prove that fluctuation of nanofriction empirical data has a multifractal behavior which deviates from white noise.
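
    At q = 2, the multifractal detrended fluctuation analysis mentioned above reduces to ordinary DFA, whose fluctuation function F(s) ~ s^H yields the Hurst exponent as a log-log slope. A compact sketch follows; the synthetic white-noise series, window scales, and seed are illustrative assumptions, not the AFM measurements from the study:

```python
import numpy as np

def dfa_hurst(x, scales):
    """Monofractal DFA: Hurst exponent from the slope of log F(s) vs log s."""
    profile = np.cumsum(x - np.mean(x))           # integrated, mean-removed signal
    F = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)          # local linear detrend per window
            rms2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms2)))          # fluctuation function at scale s
    H, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return H

rng = np.random.default_rng(1)
x = rng.normal(size=2 ** 14)                      # uncorrelated noise, H should be ~0.5
scales = np.array([16, 32, 64, 128, 256])
print(round(dfa_hurst(x, scales), 2))
```

For uncorrelated noise the estimate comes out near 0.5, so an empirical H of 0.61, as reported above, indicates persistent correlations that deviate from white noise.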

  19. Scaling laws predict global microbial diversity. (United States)

    Locey, Kenneth J; Lennon, Jay T


    Scaling laws underpin unifying theories of biodiversity and are among the most predictively powerful relationships in biology. However, scaling laws developed for plants and animals often go untested or fail to hold for microorganisms. As a result, it is unclear whether scaling laws of biodiversity will span evolutionarily distant domains of life that encompass all modes of metabolism and scales of abundance. Using a global-scale compilation of ∼35,000 sites and ∼5.6 × 10⁶ species, including the largest ever inventory of high-throughput molecular data and one of the largest compilations of plant and animal community data, we show similar rates of scaling in commonness and rarity across microorganisms and macroscopic plants and animals. We document a universal dominance scaling law that holds across 30 orders of magnitude, an unprecedented expanse that predicts the abundance of dominant ocean bacteria. In combining this scaling law with the lognormal model of biodiversity, we predict that Earth is home to upward of 1 trillion (10¹²) microbial species. Microbial biodiversity seems greater than ever anticipated yet predictable from the smallest to the largest microbiome.
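
    A dominance scaling law of the kind described above, N_max ∝ N^b, amounts to a straight line in log-log space, so the exponent can be recovered by ordinary least squares. A minimal sketch on synthetic data (the exponent 0.9 and the noise level are assumptions for illustration, not the study's fitted values):

```python
import numpy as np

# Generate synthetic sites: total abundance N spanning 8 orders of magnitude,
# with dominant-species abundance N_max following an assumed power law.
rng = np.random.default_rng(0)
N = np.logspace(2, 10, 50)
b_true = 0.9
Nmax = 10 ** (b_true * np.log10(N) + rng.normal(0, 0.05, N.size))

# Fit log10(N_max) = b * log10(N) + c; the slope b is the scaling exponent.
slope, intercept = np.polyfit(np.log10(N), np.log10(Nmax), 1)
print(round(slope, 2))   # recovered exponent, close to the assumed 0.9
```

Extrapolating such a fit to the global total abundance is, in essence, how a scaling law can "predict the abundance of dominant ocean bacteria" from smaller-scale surveys.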

  20. Negative Life Events Scale for Students (NLESS) (United States)

    Buri, John R.; Cromett, Cristina E.; Post, Maria C.; Landis, Anna Marie; Alliegro, Marissa C.


    Rationale is presented for the derivation of a new measure of stressful life events for use with students [Negative Life Events Scale for Students (NLESS)]. Ten stressful life events questionnaires were reviewed, and the more than 600 items mentioned in these scales were culled based on the following criteria: (a) only long-term and unpleasant…

  1. Scale Length of the Galactic Thin Disk

    Indian Academy of Sciences (India)


    synthetic stellar population model, gives strong evidence that the Galactic thin disk density scale length, hR, ... be preferred to investigate the stellar distribution, especially at large distances from the Sun. In this paper, we present ... city gradient according to age metallicity and age scale height relations. In the model, the key ...

  2. Scaling Research Results: Design and Evaluation | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Scaling Research Results: Design and Evaluation. Canada's International Development Research Centre (IDRC) supports research to seek scalable solutions to improve the lives of people in the developing world. While there is general understanding of the meaning of "scaling up" within the domain of research for ...

  3. Kalman plus weights: a time scale algorithm (United States)

    Greenhall, C. A.


    KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
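
    The weighting rule described above, BTSE weights inversely proportional to the clocks' white-FM variances, can be sketched as a normalized weighted average. The variances and offsets below are made-up illustrative numbers, and the Kalman-filter stage that produces the frequency estimates is omitted:

```python
import numpy as np

def btse_weights(white_fm_variances):
    """Weights inversely proportional to white-FM variance, normalized to sum to 1."""
    inv = 1.0 / np.asarray(white_fm_variances, dtype=float)
    return inv / inv.sum()

def ensemble_offset(clock_offsets, weights):
    """Ensemble time as the weighted average of per-clock offset estimates."""
    return float(np.dot(weights, clock_offsets))

variances = [1e-22, 4e-22, 9e-22]   # assumed white-FM variances (s^2)
offsets = [12e-9, 15e-9, 9e-9]      # assumed per-clock offset estimates (s)
w = btse_weights(variances)
print(w)                            # the quietest clock gets the largest weight
print(ensemble_offset(offsets, w))
```

The design intent is that the ensemble is steadier than any single member: noisy clocks contribute little, while the best clock dominates without being trusted exclusively.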

  4. Designing the Nuclear Energy Attitude Scale. (United States)

    Calhoun, Lawrence; And Others


    Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)

  5. Scaling and metastable behavior in uniaxial ferroelectrics

    NARCIS (Netherlands)

    Fernández del Castillo, J.R.; Noheda, B.; Cereceda, N.; Gonzalo, J.A.; Iglesias, T.; Przeslawski, J.


    Improved experimental resolution and computer aided data analysis of hysteresis loops at T≈TC in uniaxial ferroelectrics triglycine sulfate (ordinary critical point), and triglycine selenate (quasitricritical point) show that scaling holds in a wide range of scaled fields spanning many orders of

  6. Strontium Removal: Full-Scale Ohio Demonstrations (United States)

    The objectives of this presentation are to present a brief overview of past bench-scale research to evaluate the impact of lime softening on strontium removal from drinking water and present full-scale drinking water treatment studies on the impact of lime softening and ion exchange sof...

  7. Price Discrimination, Economies of Scale, and Profits. (United States)

    Park, Donghyun


    Demonstrates that it is possible for economies of scale to induce a price-discriminating monopolist to sell in an unprofitable market where the average cost always exceeds the price. States that higher profits in the profitable market caused by economies of scale may exceed losses incurred in the unprofitable market. (CMK)

  8. The minimum scale of grooving on faults

    NARCIS (Netherlands)

    Candela, T.; Brodsky, E.E.


    At the field scale, nearly all fault surfaces contain grooves generated as one side of the fault slips past the other. Grooves are so common that they are one of the key indicators of principal slip surfaces. Here, we show that at sufficiently small scales, grooves do not exist on fault surfaces. A

  9. Developing a Scale for Learner Autonomy Support (United States)

    Oguz, Aytunga


    The aim of the present study is to develop a scale to determine how necessary the primary and secondary school teachers view the learner autonomy support behaviours and how much they perform these behaviours. The study group was composed of 324 primary and secondary school teachers. The process of developing the scale involved a literature scan,…

  10. Characterizing Soil Cracking at the Field Scale (United States)

    Physical characterization of the soil cracking has always been a major challenge in scaling soil water interaction to the field level. This scaling would allow for the soil water flow in the field to be modeled in two distinct pools: across the soil matrix and in preferential flows thus tackling maj...

  11. Supervised scale-regularized linear convolutionary filters

    DEFF Research Database (Denmark)

    Loog, Marco; Lauze, Francois Bernard


    benefit from some form of regularization and, secondly, arguing that the problem of scale has not been taken care of in a very satisfactory manner, we come to a combined resolution of both of these shortcomings by proposing a technique that we coin scale regularization. This regularization problem can...

  12. Sample-Starved Large Scale Network Analysis (United States)


    Applications to materials science. 2.1 Foundational principles for large scale inference on structure of covariance: We developed general principles for ... concise but accessible format. These principles are applicable to large-scale complex network applications arising in genomics, connectomics, eco-informatics ... available to estimate or detect patterns in the matrix. Subject terms: multivariate dependency structure; multivariate spatio-temporal prediction

  13. Multiscaling behavior of atomic-scale friction. (United States)

    Jannesar, M; Jamali, T; Sadeghi, A; Movahed, S M S; Fesler, G; Meyer, E; Khoshnevisan, B; Jafari, G R


    The scaling behavior of friction between rough surfaces is a well-known phenomenon. It might be asked whether such a scaling feature also exists for friction at an atomic scale despite the absence of roughness on atomically flat surfaces. Indeed, other types of fluctuations, e.g., thermal and instrumental fluctuations, become appreciable at this length scale and can lead to scaling behavior of the measured atomic-scale friction. We investigate this using the lateral force exerted on the tip of an atomic force microscope (AFM) when the tip is dragged over the clean NaCl (001) surface in ultra-high vacuum at room temperature. Here the focus is on the fluctuations of the lateral force profile rather than its saw-tooth trend; we first eliminate the trend using the singular value decomposition technique and then explore the scaling behavior of the detrended data, which contains only fluctuations, using the multifractal detrended fluctuation analysis. The results demonstrate a scaling behavior for the friction data ranging from 0.2 to 2 nm with the Hurst exponent H=0.61±0.02 at a 1σ confidence interval. Moreover, the dependence of the generalized Hurst exponent, h(q), on the index variable q confirms the multifractal or multiscaling behavior of the nanofriction data. These results prove that fluctuation of nanofriction empirical data has a multifractal behavior which deviates from white noise.

  14. Developing a News Media Literacy Scale (United States)

    Ashley, Seth; Maksl, Adam; Craft, Stephanie


    Using a framework previously applied to other areas of media literacy, this study developed and assessed a measurement scale focused specifically on critical news media literacy. Our scale appears to successfully measure news media literacy as we have conceptualized it based on previous research, demonstrated through assessments of content,…

  15. Continuous Road Network Generalization throughout All Scales

    NARCIS (Netherlands)

    Suba, R.; Meijers, B.M.; van Oosterom, P.J.M.


    Until now, road network generalization has mainly been applied to the task of generalizing from one fixed source scale to another fixed target scale. These actions result in large differences in content and representation, e.g., a sudden change of the representation of road segments from areas to

  16. Scaling analysis of meteorite shower mass distributions

    DEFF Research Database (Denmark)

    Oddershede, Lene; Meibom, A.; Bohr, Jakob


    Meteorite showers are the remains of extraterrestrial objects which are captured by the gravitational field of the Earth. We have analyzed the mass distribution of fragments from 16 meteorite showers for scaling. The distributions exhibit distinct scaling behavior over several orders of magnitude...

  17. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren


    that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...

  18. Scaling and four-quark fragmentation

    NARCIS (Netherlands)

    Scholten, O.; Bosveld, G. D.


    The conditions for a scaling behaviour from the fragmentation process leading to slow protons are discussed. The scaling referred to implies that the fragmentation functions depend on the light-cone momentum fraction only. It is shown that differences in the fragmentation functions for valence- and

  19. Literature review of research on environment rating scales

    DEFF Research Database (Denmark)

    Næsby, Torben


    The literature review covers research on the internationally used evaluation methods ECERS (Early Childhood Environment Rating Scale) and ITERS (Infant/Toddler Environment Rating Scale), both of which are instruments for measuring quality and tools for evaluating and developing quality in...

  20. On the Density Scaling of Liquid Dynamics (United States)


    squalane in Fig. 1, which is representative of the literature results for dielectric relaxation times of supercooled liquids and polymers. Many liquids ... reduced viscosity (filled symbols) of squalane. The data extend over many decades of viscosity, therefore both quantities scale with identical scaling

  1. Numerical Methods and Turbulence Modeling for LES of Piston Engines: Impact on Flow Motion and Combustion

    Directory of Open Access Journals (Sweden)

    Misdariis A.


    In this article, Large Eddy Simulations (LES) of Spark Ignition (SI) engines are performed to evaluate the impact of the numerical set-up on the predicted flow motion and combustion process. Due to the high complexity and computational cost of such simulations, the classical set-up commonly includes "low" order numerical schemes (typically first- or second-order accurate in time and space) as well as simple turbulence models (such as the well known constant coefficient Smagorinsky model (Smagorinsky J. (1963) Mon. Weather Rev. 91, 99-164)). The scope of this paper is to evaluate the feasibility and the potential benefits of using high precision methods for engine simulations, relying on higher order numerical methods and state-of-the-art Sub-Grid-Scale (SGS) models. For this purpose, two high order convection schemes from the Two-step Taylor Galerkin (TTG) family (Colin and Rudgyard (2000) J. Comput. Phys. 162, 338-371) and several SGS turbulence models, namely Dynamic Smagorinsky (Germano et al. (1991) Phys. Fluids 3, 1760-1765) and sigma (Baya Toda et al. (2010) Proc. Summer Program 2010, Stanford, Center for Turbulence Research, NASA Ames/Stanford Univ., pp. 193-202), are considered to improve the accuracy of the classically used Lax-Wendroff (LW) (Lax and Wendroff (1964) Commun. Pure Appl. Math. 17, 381-398) - Smagorinsky set-up. This evaluation is performed considering two different engine configurations from IFP Energies nouvelles. The first one is the naturally aspirated four-valve spark-ignited F7P engine, which benefits from an exhaustive experimental and numerical characterization. The second one, called Ecosural, is a highly supercharged spark-ignited engine. Unique realizations of engine cycles have been simulated for each set-up starting from the same initial conditions and the comparison is made with experimental and previous numerical results for the F7P configuration. For the Ecosural engine, experimental results are not available yet and only
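
    The constant-coefficient Smagorinsky closure that serves as the baseline above computes an eddy viscosity nu_t = (Cs Δ)² |S|, with |S| = sqrt(2 S_ij S_ij) the strain-rate magnitude. A minimal 2D finite-difference sketch follows; the grid, the test velocity field, and Cs = 0.18 are illustrative assumptions, not the engine configuration from the paper:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.18):
    """Eddy viscosity nu_t = (Cs*Delta)^2 * |S| on a uniform 2D grid (Delta = dx)."""
    dudx, dudy = np.gradient(u, dx, dx)   # derivatives along axis 0 (x) and axis 1 (y)
    dvdx, dvdy = np.gradient(v, dx, dx)
    S11, S22 = dudx, dvdy                 # normal strain-rate components
    S12 = 0.5 * (dudy + dvdx)             # shear component
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
    return (Cs * dx) ** 2 * S_mag

n, dx = 32, 1.0 / 32
x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
u = np.sin(2 * np.pi * y)                 # simple shear-like test field
v = np.zeros((n, n))
nu_t = smagorinsky_nu_t(u, v, dx)
print(nu_t.shape)
```

The dynamic Smagorinsky and sigma models compared in the paper replace the fixed Cs with a coefficient computed from the resolved field, which is what makes them better behaved near walls and in laminar regions.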

  2. Modifying patch-scale connectivity to initiate landscape change: an experimental approach to link scales (United States)

    Peters, D. P.; Herrick, J.; Okin, G. S.; Pillsbury, F. C.; Duniway, M.; Vivoni, E. R.; Sala, O.; Havstad, K.; Monger, H. C.; Yao, J.; Anderson, J.


    Nonlinear interactions and feedbacks across spatial and temporal scales are common features of biological and physical systems. These emergent behaviors often result in surprises that challenge the ability of scientists to understand and predict system behavior at one scale based on information at finer or broader scales. Changes in ecosystem states under directional changes in climate represent a class of challenging dynamics of particular significance in many terrestrial ecosystems of the world. We are focusing on one system of global relevance and importance (conversion of arid grasslands to degraded shrublands). We are using a novel, multi-scale manipulative experiment to understand the key processes governing state changes, and to test specific hypotheses about how patterns and processes interact across scales to potentially reverse shrublands to grasslands or to other alternative states. We are using this experiment combined with simulation models to address two questions: (1) At what spatial scales do fine-scale processes propagate to exhibit broad-scale impacts? (2) At what spatial scales do broad-scale drivers overwhelm fine-scale processes? In this experiment, we initiate grass-soil feedbacks via the redistribution of resources at the plant and patch scale using Connectivity Modifiers (ConMods). These patterns are expected to propagate through time and space to influence grass dominance at the landscape scale with implications for regional scale land-atmosphere interactions. Initial results show that ConMods are effective in reducing horizontal water redistribution, and increasing local water availability to result in recruitment and growth of grasses and other herbaceous plants. We are integrating this information with a suite of process-based ecosystem-hydrologic-aeolian-atmospheric simulation models to investigate threshold dynamics and feedbacks across scales, and to predict alternative states under climate change. We believe this cross-scale approach

  3. Scale invariance from phase transitions to turbulence

    CERN Document Server

    Lesne, Annick


    During a century, from the Van der Waals mean field description (1874) of gases to the introduction of renormalization group (RG) techniques (1970), thermodynamics and statistical physics were just unable to account for the incredible universality which was observed in numerous critical phenomena. The great success of RG techniques is not only to solve perfectly this challenge of critical behaviour in thermal transitions but to introduce extremely useful tools in a wide field of daily situations where a system exhibits scale invariance. The introduction of scaling, scale invariance and universality concepts has been a significant turn in modern physics and more generally in natural sciences. Since then, a new "physics of scaling laws and critical exponents", rooted in scaling approaches, allows quantitative descriptions of numerous phenomena, ranging from phase transitions to earthquakes, polymer conformations, heartbeat rhythm, diffusion, interface growth and roughening, DNA sequence, dynamical systems, chaos ...

  4. Modified dispersion relations, inflation, and scale invariance (United States)

    Bianco, Stefano; Friedhoff, Victor Nicolai; Wilson-Ewing, Edward


    For a certain type of modified dispersion relations, the vacuum quantum state for very short wavelength cosmological perturbations is scale-invariant, and it has been suggested that this may be the source of the scale-invariance observed in the temperature anisotropies in the cosmic microwave background. We point out that for this scenario to be possible, it is necessary to redshift these short wavelength modes to cosmological scales in such a way that the scale-invariance is not lost. This requires nontrivial background dynamics before the onset of standard radiation-dominated cosmology; we demonstrate that one possible solution is inflation with a sufficiently large Hubble rate, and for this, slow roll is not necessary. In addition, we also show that if the slow-roll condition is added to inflation with a large Hubble rate, then for any power law modified dispersion relation quantum vacuum fluctuations become nearly scale-invariant when they exit the Hubble radius.

  5. Scale-locality of magnetohydrodynamic turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Aluie, Hussein [Los Alamos National Laboratory; Eyink, Gregory L [JOHNS HOPKINS UNIV.


    We investigate the scale-locality of cascades of conserved invariants at high kinetic and magnetic Reynolds numbers in the 'inertial-inductive range' of magnetohydrodynamic (MHD) turbulence, where velocity and magnetic field increments exhibit suitable power-law scaling. We prove that fluxes of total energy and cross-helicity - or, equivalently, fluxes of Elsaesser energies - are dominated by the contributions of local triads. Corresponding spectral transfers are also scale-local when defined using octave wavenumber bands. Flux and transfer of magnetic helicity may be dominated by nonlocal triads. The magnetic stretching term also may be dominated by non-local triads but we prove that it can convert energy only between velocity and magnetic modes at comparable scales. We explain the disagreement with numerical studies that have claimed conversion nonlocally between disparate scales. We present supporting data from a 1024³ simulation of forced MHD turbulence.

  6. Lagrangian scale of particle dispersion in turbulence. (United States)

    Xia, Hua; Francois, Nicolas; Punzmann, Horst; Shats, Michael


    Transport of mass, heat and momentum in turbulent flows by far exceeds that in stable laminar fluid motions. As turbulence is a state of a flow dominated by a hierarchy of scales, it is not clear which of these scales mostly affects particle dispersion. Also, it is not uncommon that turbulence coexists with coherent vortices. Here we report on Lagrangian statistics in laboratory two-dimensional turbulence. Our results provide direct experimental evidence that fluid particle dispersion is determined by a single measurable Lagrangian scale related to the forcing scale. These experiments offer a new way of predicting dispersion in turbulent flows in which one of the low energy scales possesses temporal coherency. The results are applicable to oceanographic and atmospheric data, such as those obtained from trajectories of free-drifting instruments in the ocean.

  7. Neural scaling laws for an uncertain world

    CERN Document Server

    Howard, Marc W


    The Weber-Fechner law describes the form of psychological space in many behavioral experiments involving perception of one-dimensional physical quantities. If the physical quantity is expressed using multiple neural receptors, then placing receptive fields evenly along a logarithmic scale naturally leads to the psychological Weber-Fechner law. In the visual system, the spacing and width of extrafoveal receptive fields are consistent with logarithmic scaling. Other sets of neural "receptors" appear to show the same qualitative properties, suggesting that this form of neural scaling reflects a solution to a very general problem. This paper argues that these neural scaling laws enable the brain to represent information about the world efficiently without making any assumptions about the statistics of the world. This analysis suggests that the organization of neural scales to represent one-dimensional quantities, including more abstract quantities such as numerosity, time, and allocentric space, should have a uni...

  8. Testing Asteroseismic Scaling Relations with Interferometry

    Directory of Open Access Journals (Sweden)

    White T. R.


    The asteroseismic scaling relations for the frequency of maximum oscillation power, νmax, and the large frequency separation, Δν, provide an easy way to directly determine the masses and radii of stars with detected solar-like oscillations. With the vast amount of data available from the CoRoT and Kepler missions, the convenience of the scaling relations has resulted in their widespread use. But how valid are the scaling relations when applied to red giants, which have a substantially different structure than the Sun? Verifying the scaling relations empirically requires independent measurements. We report on the current state and future prospects of interferometric tests of the scaling relations.

  9. Detecting Critical Scales in Fragmented Landscapes

    Directory of Open Access Journals (Sweden)

    Timothy Keitt


    Full Text Available We develop methods for quantifying habitat connectivity at multiple scales and assigning conservation priority to habitat patches based on their contribution to connectivity. By representing the habitat mosaic as a mathematical "graph," we show that percolation theory can be used to quantify connectivity at multiple scales from empirical landscape data. Our results indicate that connectivity of landscapes is highly scale dependent, exhibiting a marked transition at a characteristic distance and varying significantly for organisms with different dispersal behavior. More importantly, we show that the sensitivity and importance of landscape pattern is also scale dependent, peaking at scales associated with the percolation transition. In addition, the sensitivity analysis allows us to identify critical "stepping stone" patches that, when removed from the landscape, cause large changes in connectivity.
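    The graph-theoretic idea can be sketched as follows (our illustration, not the authors' code): habitat patches are nodes, two patches are joined when they lie within an organism's dispersal distance d, and sweeping d reveals the scale-dependent connectivity and its percolation-like transition.

```python
import numpy as np

# Sketch: number of connected components of a distance-threshold graph over
# habitat patches, computed with a small union-find.
def n_components(points, d):
    """Count connected clusters when patches within distance d are linked."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= d:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

# Four hypothetical patches: two tight pairs separated by a gap.
patches = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0], [6.0, 0.0]])
print(n_components(patches, 1.5))  # 2 clusters at small dispersal distance
print(n_components(patches, 4.5))  # 1 cluster once the gap is bridged
```

The abrupt drop from two clusters to one as d crosses the gap width is a toy version of the marked transition at a characteristic distance described above.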

  10. Organization and scaling in water supply networks (United States)

    Cheng, Likwan; Karney, Bryan W.


    Public water supply is one of society's most vital resources and most costly infrastructures. Traditional concepts of these networks capture their engineering identity as isolated, deterministic hydraulic units, but overlook their physics identity as related entities in a probabilistic, geographic ensemble, characterized by size organization and property scaling. Although discoveries of allometric scaling in natural supply networks (organisms and rivers) raised the prospect for similar findings in anthropogenic supplies, so far such a finding has not been reported in public water or related civic resource supplies. Examining a large empirical ensemble spanning a wide size range, we show that water supply networks possess self-organized size abundance and theory-explained allometric scaling in spatial, infrastructural, and resource- and emission-flow properties. These discoveries establish scaling physics for water supply networks and may lead to novel applications in resource- and jurisdiction-scale water governance.

  11. MLDS: Maximum Likelihood Difference Scaling in R

    Directory of Open Access Journals (Sweden)

    Kenneth Knoblauch


    Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
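    The observer's decision rule underlying difference scaling can be sketched as follows (an assumed illustrative form in Python, not the MLDS package itself, which is written in R): the two perceptual intervals are compared through an internal scale ψ, with Gaussian decision noise.

```python
import numpy as np

# Sketch of the difference-scaling decision model described above
# (assumed form): the observer judges whether interval (c,d) exceeds (a,b)
# on the internal perceptual scale psi, perturbed by noise of s.d. sigma.
def choose_second_pair(psi, a, b, c, d, sigma, rng):
    """Return True if the observer judges interval (c,d) larger than (a,b)."""
    delta = (psi(d) - psi(c)) - (psi(b) - psi(a))
    return delta + rng.normal(0.0, sigma) > 0

rng = np.random.default_rng(0)
psi = lambda x: x ** 2   # hypothetical nonlinear perceptual scale

# Equal physical intervals (0.1,0.3) and (0.7,0.9) map to unequal perceptual
# intervals under a nonlinear psi; MLDS recovers psi from many such trials
# by maximum likelihood.
print(choose_second_pair(psi, 0.1, 0.3, 0.7, 0.9, sigma=1e-9, rng=rng))
```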

  12. Large Scale Metal Additive Techniques Review

    Energy Technology Data Exchange (ETDEWEB)

    Nycz, Andrzej [ORNL; Adediran, Adeola I [ORNL; Noakes, Mark W [ORNL; Love, Lonnie J [ORNL


    In recent years, additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with its polymer counterpart. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology with an emphasis on expanding the geometric limits.

  13. Scale Dependence of Dark Energy Antigravity (United States)

    Perivolaropoulos, L.


    We investigate the effects of negative pressure induced by dark energy (cosmological constant or quintessence) on the dynamics at various astrophysical scales. Negative pressure induces a repulsive term (antigravity) in Newton's law which dominates on large scales. Assuming a value of the cosmological constant consistent with the recent SnIa data, we determine the critical scale r_c beyond which antigravity dominates the dynamics (r_c ~ 1 Mpc) and discuss some of the dynamical effects implied. We show that dynamically induced mass estimates on the scale of the Local Group and beyond are significantly modified due to negative pressure. We also briefly discuss possible dynamical tests (e.g., effects on the local Hubble flow) that can be applied on relatively small scales (a few Mpc) to determine the density and equation of state of dark energy.
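    The critical scale follows from balancing the Newtonian attraction GM/r² against the repulsive cosmological-constant term Λc²r/3, giving r_c = (3GM/Λc²)^(1/3). A back-of-the-envelope sketch with assumed illustrative values:

```python
# Assumed constants for a rough estimate (SI units).
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m s^-1
LAMBDA = 1.1e-52       # m^-2, roughly the observed cosmological constant
M_SUN = 1.989e30       # kg
MPC = 3.086e22         # m

def critical_radius(mass_kg):
    """r_c = (3 G M / (Lambda c^2))**(1/3): antigravity dominates beyond r_c."""
    return (3.0 * G * mass_kg / (LAMBDA * C ** 2)) ** (1.0 / 3.0)

# For a Local Group-like mass of ~1e12 solar masses, r_c comes out at about
# 1 Mpc, consistent with the scale quoted in the abstract.
print(critical_radius(1e12 * M_SUN) / MPC)
```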

  14. The Relationship Satisfaction scale – Psychometric properties

    Directory of Open Access Journals (Sweden)

    Espen Røysamb


    Full Text Available The aim of this study was to establish the psychometric properties of the new Relationship Satisfaction (RS) scale. Two population-based samples were used: the Norwegian Mother and Child Cohort Study (MoBa; N = 117,178) and the Quality of Life study (N = 347). Convergent and discriminant validity were investigated in relation to the Quality of Marriage Index (QMI), the Satisfaction With Life Scale (SWLS), relationship satisfaction of the partner, Big Five personality traits (IPIP50), and future relationship dissolution. The full scale with ten items (RS10) and a short version with five items (RS5) showed good psychometric properties. The scale has high internal and test-retest reliability and high structural, convergent, and discriminant validity. Measurement invariance across gender was established. Additionally, predictive validity was evidenced by prediction of future relationship dissolution. We conclude that the RS scale is highly useful as a generic measure of global relationship satisfaction.


  15. Usage of Dissimilarity Measures and Multidimensional Scaling for Large Scale Solar Data Analysis (United States)

    Banda, Juan M.; Angryk, Rafal


    National Aeronautics and Space Administration — This work describes the...

  16. Improving the coastal record of tsunamis in the ESI-07 scale: Tsunami Environmental Effects Scale (TEE-16 scale)

    Energy Technology Data Exchange (ETDEWEB)

    Lario, J.; Bardaji, T.; Silva, P.G.; Zazo, C.; Goy, J.L.


    This paper discusses possibilities to improve the Environmental Seismic Intensity Scale (ESI-07 scale), a scale based on the effects of earthquakes in the environment. This scale comprises twelve intensity degrees and considers primary and secondary effects, one of which is the occurrence of tsunamis. Terminology and physical tsunami parameters corresponding to different intensity levels are often misleading and confusing. The present work proposes: i) a revised and updated catalogue of environmental and geological effects of tsunamis, gathering all the available information on Tsunami Environmental Effects (TEEs) produced by recent earthquake-tsunamis; ii) a specific intensity scale (TEE-16) for the effects of tsunamis in the natural environment at coastal areas. The proposed scale could be used in future tsunami events and in historic and paleo-tsunami studies. The new TEE-16 scale incorporates the size-specific parameters already considered in the ESI-07 scale, such as wave height, run-up and inland extension of inundation, and a comprehensive and more accurate terminology that covers all the different intensity levels identifiable in the geological record (intensities VI-XII). The TEE-16 scale integrates the description and quantification of the potential sedimentary and erosional features (beach scours, transported boulders and classical tsunamites) derived from different tsunami events at diverse coastal environments (e.g., beaches, estuaries, rocky cliffs). This new approach represents an innovative advance in relation to the tsunami descriptions provided by the ESI-07 scale, and allows the full application of the proposed scale in paleoseismological studies. The analysis of the revised and updated tsunami environmental damage suggests that local intensities recorded in coastal areas do not correlate well with the TEE-16 intensity (normally higher), but show a good correlation with the earthquake magnitude (Mw). Tsunamis generated by earthquakes can then be...

  17. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.


    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small-scale) and from low-pass filtered (large-scale) velocity fields tends to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  18. The minimum scale of grooving on faults (United States)

    Candela, T.; Brodsky, E. E.


    The roughness of fault surfaces is the fingerprint of past slip events and a major parameter controlling the resistance to slip. The most obvious slip indicators and records of tractions are the grooves and striations with elongate axes in the direction of slip. We focus on this roughness feature by analyzing the micro-roughness of slip surfaces from natural and experimental fault zones at scales of several millimeters down to one micron. For each topographic map acquired by White Light Interferometry, an average Fourier spectrum is computed in the slip-parallel and slip-perpendicular directions, seeking to define the scale dependence of the roughness anisotropy. We show that natural and experimental fault surfaces have a minimum scale of grooving at 4-500 micrometers. Below this scale, fault surfaces are isotropic. We have systematically measured this minimum scale of grooving on 42 topographic maps of eight different natural fault zones and 25 topographic maps of nine experimental fault zones. Our results are interpreted in terms of the aspect ratio H/L, with H the average asperity height and L the observation scale. This aspect ratio is proportional to the strain necessary to completely flatten the asperities. H/L systematically increases as L decreases. The transition between anisotropic and isotropic roughness is well predicted by a critical aspect ratio. As the scale of observation decreases, the grooves become steeper, and once they reach a critical aspect ratio they fail. At all scales, evidence of failure of the slip surfaces is observed, and we interpret the minimum scale of grooving as a manifestation of the change in deformation mode from brittle- to plastic-dominated. As the scale of observation decreases, the aspect ratio of the grooves increases and the resulting higher stress concentrations at micro-asperities favor plasticity. The transition is dependent on the rock properties and faulting history, and for each fault one unique critical aspect ratio

  19. Mokken scale analysis : Between the Guttman scale and parametric item response theory

    NARCIS (Netherlands)

    van Schuur, Wijbrandt H.


    This article introduces a model of ordinal unidimensional measurement known as Mokken scale analysis. Mokken scaling is based on principles of Item Response Theory (IRT) that originated in the Guttman scale. I compare the Mokken model with both Classical Test Theory (reliability or factor analysis)

  20. The Denver II Scales and the Griffiths Scales of Mental Development ...

    African Journals Online (AJOL)

    The general aim of the study was to investigate the use of the Denver II and the Griffiths Scales on a pre-school black Xhosa-speaking sample. Specifically, the aim was to investigate the relationship between the Denver II Scales and the Griffiths Scales, in order to provide the first step in establishing the validity of the Denver ...

  1. Do Balanced Scales Assess Bipolar Constructs? The Case of the STAI Scales (United States)

    Vautier, Stephane; Pohl, Steffi


    Balanced scales, that is, scales based on items whose content is either negatively or positively polarized, are often used in the hope of measuring a bipolar construct. Research has shown that usually balanced scales do not yield 1-dimensional measurements. This threatens their construct validity. The authors show how to test bipolarity while…

  2. Multi-scale biomedical systems: measurement challenges (United States)

    Summers, R.


    Multi-scale biomedical systems are those that represent interactions in materials, sensors, and systems from a holistic perspective. It is possible to view such multi-scale activity using measurement of spatial scale or time scale, though in this paper only the former is considered. The biomedical application paradigm comprises interactions that range from quantum biological phenomena at scales of 10^-12 for one individual to epidemiological studies of disease spread in populations that in a pandemic lead to measurement at a scale of 10^+7. It is clear that there are measurement challenges at either end of this spatial scale, but those challenges that relate to the use of new technologies that deal with big data and health service delivery at the point of care are also considered. The measurement challenges lead to the use, in many cases, of model-based measurement and the adoption of virtual engineering. It is these measurement challenges that will be uncovered in this paper.

  3. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay


    Full Text Available The study of dynamic equations on time scales is a new area of mathematics. Time scale calculus builds a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and nabla derivatives: the delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Here there are two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other because the vertical deviations between the regression equations and the observed values differ for the forward and backward jump operators. We also estimated coefficients for the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We believe that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.

  4. Multi-scale gravity and cosmology (United States)

    Calcagni, Gianluca


    The gravitational dynamics and cosmological implications of three classes of recently introduced multi-scale spacetimes (with, respectively, ordinary, weighted and q-derivatives) are discussed. These spacetimes are non-Riemannian: the metric structure is accompanied by an independent measure-differential structure with the characteristics of a multi-fractal, namely, different dimensionality at different scales and, at ultra-short distances, a discrete symmetry known as discrete scale invariance. Under this minimal paradigm, five general features arise: (a) the big-bang singularity can be replaced by a finite bounce, (b) the cosmological constant problem is reinterpreted, since accelerating phases can be mimicked by the change of geometry with the time scale, without invoking a slowly rolling scalar field, (c) the discreteness of geometry at Planckian scales can leave an observable imprint of logarithmic oscillations in cosmological spectra and (d) give rise to an alternative mechanism to inflation or (e) to a fully analytic model of cyclic mild inflation, where near scale invariance of the perturbation spectrum can be produced without strong acceleration. Various properties of the models and exact dynamical solutions are discussed. In particular, the multi-scale geometry with weighted derivatives is shown to be a Weyl integrable spacetime.

  5. Continuous Road Network Generalization throughout All Scales

    Directory of Open Access Journals (Sweden)

    Radan Šuba


    Full Text Available Until now, road network generalization has mainly been applied to the task of generalizing from one fixed source scale to another fixed target scale. These actions result in large differences in content and representation, e.g., a sudden change of the representation of road segments from areas to lines, which may confuse users. Therefore, we aim at the continuous generalization of a road network for the whole range, from the large scale, where roads are represented as areas, to mid- and small scales, where roads are represented progressively more frequently as lines. As a consequence of this process, there is an intermediate scale range where at the same time some roads will be represented as areas, while others will be represented as lines. We propose a new data model together with a specific data structure where for all map objects, a range of valid map scales is stored. This model is based on the integrated and explicit representation of: (1) a planar area partition; and (2) a linear road network. This enables the generalization process to include the knowledge and understanding of a linear network. This paper further discusses the actual generalization options and algorithms for populating this data structure with high quality vario-scale cartographic content.

  6. Development of Islamic Spiritual Health Scale (ISHS). (United States)

    Khorashadizadeh, Fatemeh; Heydari, Abbas; Nabavi, Fatemeh Heshmati; Mazlom, Seyed Reza; Ebrahimi, Mahdi; Esmaili, Habibollah


    To develop and psychometrically assess a spiritual health scale based on the Islamic view in Iran. The cross-sectional study was conducted at Imam Ali and Quem hospitals in Mashhad and Imam Ali and Imam Reza hospitals in Bojnurd, Iran, from 2015 to 2016. In the first stage, an 81-item Likert-type scale was developed using a qualitative approach. The second stage comprised the quantitative component. The scale's impact factor, content validity ratio, content validity index, face validity and exploratory factor analysis were calculated. Test-retest and internal consistency were used to examine the reliability of the instrument. Data analysis was done using SPSS 11. Of the 81 items in the scale, those with an impact factor above 1.5, content validity ratio above 0.62, and content validity index above 0.79 were considered valid and the rest were discarded, resulting in a 61-item scale. Exploratory factor analysis reduced the list of items to 30, which were divided into seven groups with a minimum eigenvalue of 1 for each factor. According to the scatter plot, attributes of the concept of spiritual health included love of the Creator, duty-based life, religious rationality, psychological balance, and attention to the afterlife. Internal reliability of the scale, calculated by Cronbach's alpha coefficient, was 0.91. There was solid evidence of the strong factor structure and reliability of the Islamic Spiritual Health Scale, which provides a unique way to assess the spiritual health of Muslims.

  7. Scaled CMOS Technology Reliability Users Guide (United States)

    White, Mark


    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing-mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope β = 1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is

  8. Nurse competence scale: development and psychometric testing. (United States)

    Meretoja, Riitta; Isoaho, Hannu; Leino-Kilpi, Helena


    Self-assessment assists nurses to maintain and improve their practice by identifying their strengths and areas that may need to be further developed. Professional competence profiles encourage them to take an active part in the learning process of continuing education. Although competence recognition offers a way to motivate practising nurses to produce quality care, few measuring tools are available for this purpose. This paper describes the development and testing of the Nurse Competence Scale, an instrument with which the level of nurse competence can be assessed in different hospital work environments. The categories of the Nurse Competence Scale were derived from Benner's From Novice to Expert competency framework. A seven-step approach, including literature review and six expert groups, was used to identify and validate the indicators of nurse competence. After a pilot test, psychometric testing of the Nurse Competence Scale (content, construct and concurrent validity, and internal consistency) was undertaken with 498 nurses. The 73-item scale consists of seven categories, with responses in a visual analogue scale format. The frequency of using competencies was additionally tested with a four-point scale. Self-assessed overall scores indicated a high level of competence across categories. The Nurse Competence Scale data were normally distributed. The higher the frequency of using competencies, the higher was the self-assessed level of competence. Age and length of work experience had a positive but not very strong correlation with level of competence. According to the item analysis, the categories of the Nurse Competence Scale showed good internal consistency. The results provide strong evidence of the reliability and validity of the Nurse Competence Scale.

  9. Validity of four pain intensity rating scales. (United States)

    Ferreira-Valente, Maria Alexandra; Pais-Ribeiro, José Luís; Jensen, Mark P


    The Visual Analogue Scale (VAS), Numerical Rating Scale (NRS), Verbal Rating Scale (VRS), and the Faces Pain Scale-Revised (FPS-R) are among the most commonly used measures of pain intensity in clinical and research settings. Although evidence supports their validity as measures of pain intensity, few studies have compared them with respect to the critical validity criteria of responsivity, and no experiment has directly compared all 4 measures in the same study. The current study compared the relative validity of VAS, NRS, VRS, and FPS-R for detecting differences in painful stimulus intensity and differences between men and women in response to experimentally induced pain. One hundred twenty-seven subjects underwent four 20-second cold pressor trials with temperature order counterbalanced across 1°C, 3°C, 5°C, and 7°C and rated pain intensity using all 4 scales. Results showed statistically significant differences in pain intensity between temperatures for each scale, with lower temperatures resulting in higher pain intensity. The order of responsivity was as follows: NRS, VAS, VRS, and FPS-R. However, there were relatively small differences in the responsivity between scales. A statistically significant sex main effect was also found for the NRS, VRS, and FPS-R. The findings are consistent with previous studies supporting the validity of each scale. The most support emerged for the NRS as being both (1) most responsive and (2) able to detect sex differences in pain intensity. The results also provide support for the validity of the scales for use in Portuguese samples. Copyright © 2011 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  10. Rating scales for observer performance studies (United States)

    Nishikawa, Robert M.; Jiang, Yulei; Metz, Charles E.


    We compared the performance of radiologists reading a set of screening mammograms with and without CADe as measured by the BI-RADS assessment scale to that measured by a 9-point rating scale. Eight MQSA radiologists read 300 screening mammograms, of which 66 cases contained at least one cancer and 234 were normal based on two-year follow-up. Both without and then with CADe, the radiologists gave their BI-RADS assessment for each case and, for each suspicious lesion in the image, reported their confidence on a 9-point scale (1=no evidence for recall; 5=equivocal; 9=overwhelming evidence for recall) that the lesion needed to be worked up. The radiologists were instructed to read the cases as they would clinically. We used MRMC ROC analysis employing PROPROC curve fitting to analyze the data, once for the BI-RADS data and again for that collected on the 9-point scale. Given that the radiologists were reading screening mammograms and were instructed to read in their normal clinical manner, not all radiologists used the full BI-RADS scale. Two radiologists used only BI-RADS 0,1 and 2, three used the full scale, and three used the full scale but employed categories 3, 4 and 5 sparingly. This mimics what occurs clinically, according to the literature. The BI-RADS and the 9-point rating scales gave similar results in terms of AUC. However, the 95% CIs of the estimates of AUC were substantially smaller for the 9-point scale.
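    The AUC underlying such an ROC comparison can be computed directly from ordinal ratings as a Mann-Whitney statistic: the probability that a randomly chosen cancer case receives a higher rating than a randomly chosen normal case, with ties counting one half. A minimal sketch with hypothetical ratings (not the study's data):

```python
# Sketch (our illustration): empirical AUC from ordinal ratings via the
# Mann-Whitney statistic, counting ties as half a win.
def empirical_auc(ratings_pos, ratings_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in ratings_pos for n in ratings_neg)
    return wins / (len(ratings_pos) * len(ratings_neg))

cancers = [7, 9, 5, 8]   # hypothetical 9-point ratings for cancer cases
normals = [2, 5, 3, 1]   # hypothetical ratings for normal cases
print(empirical_auc(cancers, normals))  # 0.96875
```

The same computation applies whether the ordinal input is a 9-point confidence scale or BI-RADS categories, which is why the two scales can be compared on AUC, though a coarser scale with few used categories yields fewer distinct operating points on the ROC curve.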

  11. New Empirical Earthquake Source‐Scaling Laws

    KAUST Repository

    Thingbaijam, Kiran Kumar S.


    We develop new empirical scaling laws for rupture width W, rupture length L, rupture area A, and average slip D, based on a large database of rupture models. The database incorporates recent earthquake source models in a wide magnitude range (M 5.4–9.2) and events of various faulting styles. We apply general orthogonal regression, instead of ordinary least-squares regression, to account for measurement errors of all variables and to obtain mutually self-consistent relationships. We observe that L grows more rapidly with M compared to W. The fault-aspect ratio (L/W) tends to increase with fault dip, which generally increases from reverse-faulting, to normal-faulting, to strike-slip events. At the same time, subduction-interface earthquakes have significantly higher W (hence a larger rupture area A) compared to other faulting regimes. For strike-slip events, the growth of W with M is strongly inhibited, whereas the scaling of L agrees with the L-model behavior (D correlated with L). However, at a regional scale for which seismogenic depth is essentially fixed, the scaling behavior corresponds to the W-model (D not correlated with L). Self-similar M-log A scaling behavior is observed to be consistent for all cases, except for normal-faulting events. Interestingly, the ratio D/W (a proxy for average stress drop) tends to increase with M, except for shallow crustal reverse-faulting events, suggesting the possibility of scale-dependent stress drop. The observed variations in source-scaling properties for different faulting regimes can be interpreted in terms of geological and seismological factors. We find substantial differences between our new scaling relationships and those of previous studies. Therefore, our study provides critical updates on source-scaling relations needed in seismic-tsunami-hazard analysis and engineering applications.
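    For one predictor with equal error variances, general orthogonal regression reduces to total least squares, which minimizes perpendicular rather than vertical distances and can be computed via the SVD. A sketch of this reduction (our illustration, not the authors' code):

```python
import numpy as np

# Sketch of orthogonal (total least squares) regression for one predictor,
# assuming equal error variances on both variables.
def orthogonal_fit(x, y):
    """Fit y = a + b*x minimizing perpendicular (not vertical) distances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.column_stack([x - x.mean(), y - y.mean()])
    # Principal direction of the centered cloud = first right singular vector.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    direction = vt[0]
    b = direction[1] / direction[0]
    a = y.mean() - b * x.mean()
    return a, b

# Noise-free check: points on y = 1 + 2x are recovered exactly.
a, b = orthogonal_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(round(a, 6), round(b, 6))  # 1.0 2.0
```

Unlike ordinary least squares, this fit is symmetric in x and y, which is what makes a family of such relationships (e.g., L-M and W-M) mutually self-consistent.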

  12. Length and Time Scales in Continental Drift (United States)

    Phillips, B. R.; Bunge, H.


    Nonlinear feedback between continents and the mantle through thermal blanketing has long been surmised as a mechanism for continental drift and Wilson cycles. Paleomagnetism provides ample evidence for large scale (10,000 km) continental motion on time scales of several hundred million years, indicative of large scale mantle circulation. While much has been learned about the interactions between continents and mantle flow from analog and numerical modeling studies in two and three dimensions, a rigorous sensitivity study on the effects of continents in high resolution 3D spherical mantle convection models has yet to be pursued. As a result, a quantitative understanding of the scales of continental motion as they relate to relevant fluid dynamic processes is lacking. Here we focus on the effect of continental size. Continents covering 30% of the surface are representative of a supercontinent such as Pangea, smaller continents (10% of Earth's surface) are representative of present day Asia, and still smaller continents (3% of Earth's surface) are similar to present day Antarctica. These continents are introduced into simple end-member mantle flow regimes characterized by combinations of bottom or internal heating and uniform or layered mantle viscosity. We find that large scale mantle structure, and correspondingly the large scale displacement of continents, depends not only on mantle heating mode and radial viscosity structure, but also on continental size. Supercontinents promote heterogeneity on the largest scales (spherical harmonic degree one), especially when combined with strong bottom heating and a high viscosity lower mantle. Degree one heterogeneities in turn drive cyclical continental motion, with continents moving from the hot to the cold hemisphere on time scales of several hundred million years. Smaller continents are unable to initiate degree one convection. As a result, their motion is governed by shorter length and time scales. We apply these

  13. Rasch analysis of the participation scale (P-scale): usefulness of the P-scale to a rehabilitation services network. (United States)

    Souza, Mariana Angélica Peixoto; Coster, Wendy Jane; Mancini, Marisa Cotta; Dutra, Fabiana Caetano Martins Silva; Kramer, Jessica; Sampaio, Rosana Ferreira


A person's participation is acknowledged as an important outcome of the rehabilitation process. The Participation Scale (P-Scale) is an instrument designed to assess the participation of individuals with a health condition or disability. The scale was developed in an effort to better describe the participation of people living in middle-income and low-income countries. The aim of this study was to use Rasch analysis to examine whether the Participation Scale is suitable for assessing the perceived ability of patients with diverse levels of function to take part in participation situations. The sample comprised 302 patients from a public rehabilitation services network. Participants had orthopaedic or neurological health conditions, were at least 18 years old, and completed the Participation Scale. Rasch analysis was conducted using the Winsteps software. The mean age of all participants was 45.5 years (standard deviation = 14.4), 52% were male, 86% had orthopaedic conditions, and 52% had chronic symptoms. Rasch analysis was performed using a dichotomous rating scale, and only one item showed misfit. Dimensionality analysis supported the existence of only one Rasch dimension. The person separation index was 1.51, and the item separation index was 6.38. Items N2 and N14 showed Differential Item Functioning between men and women. Items N6 and N12 showed Differential Item Functioning between acute and chronic conditions. The item difficulty range was -1.78 to 2.09 logits, while the sample ability range was -2.41 to 4.61 logits. The P-Scale was found to be useful as a screening tool for participation problems reported by patients in a rehabilitation context, despite some issues that should be addressed to further improve the scale.
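The dichotomous Rasch model behind this analysis has a simple closed form: the probability that a person endorses an item depends only on the difference between person ability and item difficulty, both expressed in logits. A minimal sketch (the item difficulties span the reported range, but the specific values and the ability value are illustrative, not taken from the study's item-level data):

```python
import math

def rasch_p(theta, b):
    """Probability that a person of ability theta endorses a
    dichotomous item of difficulty b (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Illustrative item difficulties spanning the reported range
# (-1.78 to 2.09 logits) and a person of moderate ability.
items = [-1.78, 0.0, 2.09]
theta = 0.5
probs = [rasch_p(theta, b) for b in items]
# Easier items (lower b) yield higher endorsement probabilities.
```

When theta equals b the endorsement probability is exactly 0.5, which is how item difficulty and person ability end up on the same logit scale.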

  14. Grizzly bear habitat selection is scale dependent. (United States)

    Ciarniello, Lana M; Boyce, Mark S; Seip, Dale R; Heard, Douglas C


The purpose of our study is to show how ecologists' interpretation of habitat selection by grizzly bears (Ursus arctos) is altered by the scale of observation and also how management questions would be best addressed using predetermined scales of analysis. Using resource selection functions (RSF) we examined how variation in the spatial extent of availability affected our interpretation of habitat selection by grizzly bears inhabiting mountain and plateau landscapes. We estimated separate models for females and males using three spatial extents: within the study area, within the home range, and within predetermined movement buffers. We employed two methods for evaluating the effects of scale on our RSF designs. First, we chose a priori six candidate models, estimated at each scale, and ranked them using the Akaike Information Criterion (AIC). Using this method, results changed among scales for males but not for females. For female bears, models that included the full suite of covariates predicted habitat use best at each scale. For male bears that resided in the mountains, models based on forest successional stages ranked highest at the study-wide and home range extents, whereas models containing covariates based on terrain features ranked highest at the buffer extent. For male bears on the plateau, each scale estimated a different highest-ranked model. Second, we examined differences among model coefficients across the three scales for one candidate model. We found that both the magnitude and direction of coefficients were dependent upon the scale examined; results varied between landscapes, scales, and sexes. Greenness, reflecting lush green vegetation, was a strong predictor of the presence of female bears in both landscapes and males that resided in the mountains. Male bears on the plateau were the only animals to select areas that exposed them to a high risk of mortality by humans. Our results show that grizzly bear habitat selection is scale dependent. Further, the
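Ranking a fixed candidate set by AIC, as done here at each spatial extent, reduces to comparing 2k - 2*lnL across fitted models and looking at the differences from the best model. A minimal sketch in which the model names echo the abstract but the log-likelihoods and parameter counts are invented for illustration:

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion for a fitted model with k
    estimated parameters; lower values indicate better support."""
    return 2 * k - 2 * log_likelihood

# Hypothetical fitted candidate models:
# (name, maximized log-likelihood, number of parameters)
candidates = [
    ("full suite of covariates", -1510.2, 12),
    ("forest successional stages", -1525.7, 5),
    ("terrain features", -1531.4, 4),
]

# Rank from best (lowest AIC) to worst, then compute delta-AIC
# relative to the top-ranked model.
ranked = sorted(candidates, key=lambda m: aic(m[1], m[2]))
best_aic = aic(ranked[0][1], ranked[0][2])
delta_aic = [aic(ll, k) - best_aic for _, ll, k in ranked]
```

With invented numbers like these, delta-AIC values above roughly 10 are conventionally read as essentially no support for the lower-ranked model, which is the sense in which the abstract's rankings "changed among scales".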

  15. Network robustness under large-scale attacks

    CERN Document Server

    Zhou, Qing; Liu, Ruifang; Cui, Shuguang


    Network Robustness under Large-Scale Attacks provides the analysis of network robustness under attacks, with a focus on large-scale correlated physical attacks. The book begins with a thorough overview of the latest research and techniques to analyze the network responses to different types of attacks over various network topologies and connection models. It then introduces a new large-scale physical attack model coined as area attack, under which a new network robustness measure is introduced and applied to study the network responses. With this book, readers will learn the necessary tools to evaluate how a complex network responds to random and possibly correlated attacks.
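The two attack types the book contrasts can be illustrated on a toy graph: a random attack removes uniformly sampled nodes, while a correlated "area" attack removes a contiguous block. A minimal pure-Python sketch measuring robustness as the fraction of the original nodes left in the largest surviving connected component (the topology and sizes are arbitrary choices for illustration, not taken from the book):

```python
import random

def largest_component_fraction(adj, removed):
    """Fraction of the original nodes that lie in the largest
    connected component after deleting the nodes in `removed`."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        # Depth-first search over surviving nodes only.
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best / len(adj)

# Toy topology: a ring lattice with long-range chords.
n = 100
adj = {i: {(i - 1) % n, (i + 1) % n, (i + 10) % n, (i - 10) % n}
       for i in range(n)}

random.seed(0)
removed = set(random.sample(range(n), 30))  # random (uncorrelated) attack
area = set(range(30))                       # correlated "area" attack
frac_random = largest_component_fraction(adj, removed)
frac_area = largest_component_fraction(adj, area)
```

On this toy graph the area attack leaves one intact block of 70 nodes, so its surviving fraction is exactly 0.7; the random attack can do no better and may fragment the survivors further, which is the kind of comparison a robustness measure is meant to capture.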

  16. Scaling and universality in magnetocaloric materials

    DEFF Research Database (Denmark)

    Smith, Anders; Nielsen, Kaspar Kirstein; Bahl, Christian R. H.


fields are not universal, showing significant variation for models in the same universality class. As regards the adiabatic temperature change, it is not determined exclusively by the singular part of the free energy and its derivatives. We show that the field dependence of the adiabatic temperature...... itself. However, this is only true in the critical region near Tc and for small fields; for finite fields, scaling with constant exponents in general breaks down, even at Tc. The field dependence can then be described by field-dependent scaling exponents. We show that the scaling exponents at finite...
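The field-dependent exponents referred to here are conventionally defined through a local logarithmic derivative of the entropy change. A common form in the magnetocaloric scaling literature (stated here as standard background, not quoted from this paper) is:

```latex
% Local field-dependence exponent of the magnetic entropy change
\[
  \Delta S_M \propto H^{\,n}, \qquad
  n(T,H) = \frac{d \ln |\Delta S_M|}{d \ln H} .
\]
% Under constant-exponent critical scaling, at T = T_c this local
% exponent reduces to a combination of the critical exponents:
\[
  n(T_c) = 1 + \frac{\beta - 1}{\beta + \gamma},
\]
% e.g. mean-field values (\beta = 1/2, \gamma = 1) give n(T_c) = 2/3.
```

The abstract's point is that at finite fields n acquires a field dependence of its own, so the constant-exponent relation holds only as a small-field limit near Tc.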

  17. Development of a Chinese Superstitious Belief Scale. (United States)

    Huang, Li-Shia; Teng, Ching-I


Traditional Western superstitious beliefs, such as black cats and the number 13 bringing bad luck, may not be applicable to different cultures. This study develops a Chinese Superstitious Belief Scale by conducting two studies with 363 and 395 participants, respectively. Exploratory factor analysis was used to construct the scale and then structural equation modeling was applied to verify its reliability and validity. The scale contains six dimensions: Homonym, Traditional customs, Power of crystal, Horoscope, Feng-shui, and Luck for gambling. Findings are helpful for understanding the difference between Chinese superstitions and traditional Western superstitions and permit subsequent development of sociopsychological theories on correlates and effects of Chinese superstitions.

  18. Corroded scale analysis from water distribution pipes

    Directory of Open Access Journals (Sweden)

    Rajaković-Ognjanović Vladana N.


Full Text Available The subject of this study was the steel pipes that are part of Belgrade's drinking water supply network. To investigate the mutual effects of corrosion and water quality, the corrosion scales on the pipes were analyzed. The aim was to improve control of corrosion processes and to prevent corrosion from degrading water quality. The instrumental methods used to characterize the corrosion scales were: scanning electron microscopy (SEM), for the investigation of the sample surfaces and the microstructure of the corroded scales; X-ray diffraction (XRD), for the analysis of the solid phases present inside the scales; and the BET adsorption isotherm, for surface area determination. Depending on the composition of water next to the pipe surface, corrosion of iron results in the formation of different compounds and solid phases. The composition and structure of the iron scales in drinking water distribution pipes depend on the type of metal and the composition of the aqueous phase. Their formation is probably governed by several factors, including water quality parameters such as pH, alkalinity, buffer intensity, natural organic matter (NOM) concentration, and dissolved oxygen (DO) concentration. Factors such as water flow patterns, seasonal fluctuations in temperature, and microbiological activity, as well as water treatment practices such as the application of corrosion inhibitors, can also influence corrosion scale formation and growth. Therefore, the corrosion scales found in iron and steel pipes are expected to have unique features for each site. Compounds found in iron corrosion scales often include goethite, lepidocrocite, magnetite, hematite, ferrous oxide, siderite, ferrous hydroxide, ferric hydroxide, ferrihydrite, calcium carbonate and green rusts.
Iron scales have characteristic features that include: corroded floor, porous core that contains

  19. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid


A highly accessible reference offering a broad range of topics and insights on large-scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large-scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issues

  20. Reionization: Characteristic Scales, Topology And Observability

    Energy Technology Data Exchange (ETDEWEB)

    Iliev, Ilian T.; /Canadian Inst. Theor. Astrophys. /Zurich U.; Shapiro, Paul R.; /Texas U., Astron. Dept.; Mellema, Garrelt; /Stockholm Observ.; Pen, Ue-Li; McDonald, Patrick; /Canadian Inst. Theor. Astrophys.; Alvarez, Marcelo A.; /KIPAC, Menlo Park


Recently, numerical simulations of the process of reionization of the universe at z > 6 have made a qualitative leap forward, reaching sufficient sizes and dynamic range to determine the characteristic scales of this process. This has allowed the first realistic predictions for a variety of observational signatures. We discuss recent results from large-scale radiative transfer and structure formation simulations on the observability of high-redshift Ly-α sources. We also briefly discuss the dependence of the characteristic scales and topology of the ionized and neutral patches on the reionization parameters.