WorldWideScience

Sample records for sub-grid scale sgs

  1. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    Science.gov (United States)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to capture the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced, for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used for the convection-driven dynamo in a rotating plane layer and spherical shell using the finite element method. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, a numerical dynamo model based on a spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by convolution with a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981); a Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference, along with unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results by comparing these simulations, and the role of the small-scale fields in shaping the large-scale fields through the SGS terms in the LES.
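
    The horizontal filtering step can be illustrated with a short sketch: each spherical harmonic coefficient of degree l is attenuated by a Gaussian in l. This is a minimal approximation in the spirit of the Jekeli (1981) filter; the attenuation form, the `l_filter` parameter, and the dictionary layout are illustrative assumptions, not Calypso's implementation.

```python
import numpy as np

def gaussian_filter_sh(coeffs, l_filter):
    """Apply an isotropic Gaussian low-pass filter to spherical harmonic
    coefficients keyed by (l, m), attenuating each degree l.
    Illustrative approximation, not the exact Jekeli recursion."""
    filtered = {}
    for (l, m), c in coeffs.items():
        # Gaussian attenuation in spectral space; l_filter sets the cutoff degree
        filtered[(l, m)] = c * np.exp(-l * (l + 1) / (2.0 * l_filter**2))
    return filtered

# toy coefficients: large-scale modes pass, small-scale modes are damped
coeffs = {(0, 0): 1.0, (10, 3): 0.5, (100, 20): 0.2}
smooth = gaussian_filter_sh(coeffs, l_filter=30)
```

    Low-degree (large-scale) coefficients are nearly untouched while high-degree coefficients are strongly damped, which is the behavior the LES filter needs.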

  2. Sub-grid scale combustion models for large eddy simulation of unsteady premixed flame propagation around obstacles.

    Science.gov (United States)

    Di Sarli, Valeria; Di Benedetto, Almerinda; Russo, Gennaro

    2010-08-15

    In this work, an assessment of different sub-grid scale (sgs) combustion models proposed for large eddy simulation (LES) of steady turbulent premixed combustion (Colin et al., Phys. Fluids 12 (2000) 1843-1863; Flohr and Pitsch, Proc. CTR Summer Program, 2000, pp. 61-82; Kim and Menon, Combust. Sci. Technol. 160 (2000) 119-150; Charlette et al., Combust. Flame 131 (2002) 159-180; Pitsch and Duchamp de Lageneste, Proc. Combust. Inst. 29 (2002) 2001-2008) was performed to identify the model that best predicts unsteady flame propagation in gas explosions. Numerical results were compared to the experimental data of Patel et al. (Proc. Combust. Inst. 29 (2002) 1849-1854) for a premixed deflagrating flame in a vented chamber in the presence of three sequential obstacles. It is found that all sgs combustion models are able to reproduce the experiment qualitatively, in terms of the steps of flame acceleration and deceleration around each obstacle and the shape of the propagating flame. Without adjusting any constants or parameters, the sgs model by Charlette et al. also provides satisfactory quantitative predictions of flame speed and pressure peak. Conversely, the sgs combustion models other than that of Charlette et al. give correct predictions only after ad hoc tuning of constants and parameters. Copyright 2010 Elsevier B.V. All rights reserved.

  3. Numerical aspects of drift kinetic turbulence: Ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    KAUST Repository

    Samtaney, Ravi

    2012-01-01

    We present a numerical method based on an Eulerian approach to solve the Vlasov-Poisson system for 4D drift kinetic turbulence. Our numerical approach uses a conservative formulation with high-order (fourth and higher) evaluation of the numerical fluxes coupled with a fourth-order accurate Poisson solver. The fluxes are computed using a low-dissipation high-order upwind differencing method or a tuned high-resolution finite difference method with no numerical dissipation. Numerical results are presented for the case of imposed ion temperature and density gradients. Different forms of controlled regularization to achieve a well-posed system are used to obtain convergent resolved simulations. The regularization of the equations is achieved by means of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism to the Vlasov equation and derive sub-grid-scale (SGS) terms analogous to the Reynolds stress terms in hydrodynamic turbulence. We present a priori quantifications of these SGS terms in resolved simulations of drift-kinetic turbulence by applying a sharp filter. © 2012 IOP Publishing Ltd.
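
    The a priori quantification described above can be sketched in one dimension: a sharp (Fourier cutoff) filter is applied, and the SGS term is the difference between the filtered product and the product of filtered fields, analogous to a Reynolds stress. The fields, wavenumbers, and cutoff below are illustrative, not the paper's 4D drift-kinetic setup.

```python
import numpy as np

def sharp_filter(f, k_cut):
    """Sharp spectral (Fourier cutoff) filter: zero all modes with |k| > k_cut."""
    fk = np.fft.fft(f)
    k = np.fft.fftfreq(f.size, d=1.0 / f.size)  # integer wavenumbers
    fk[np.abs(k) > k_cut] = 0.0
    return np.real(np.fft.ifft(fk))

# a priori SGS "stress": tau = bar(u*v) - bar(u)*bar(v)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(20 * x)   # resolved + sub-filter content
v = np.cos(x) + 0.2 * np.cos(25 * x)
tau = sharp_filter(u * v, k_cut=10) - sharp_filter(u, 10) * sharp_filter(v, 10)
```

    The residual tau is nonzero precisely because the sub-filter modes of u and v beat together into a resolved-scale mode, which is the effect an SGS closure must represent.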

  4. Numerical aspects of drift kinetic turbulence: ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    International Nuclear Information System (INIS)

    Samtaney, Ravi

    2012-01-01

    We present a numerical method based on an Eulerian approach to solve the Vlasov-Poisson system for 4D drift kinetic turbulence. Our numerical approach uses a conservative formulation with high-order (fourth and higher) evaluation of the numerical fluxes coupled with a fourth-order accurate Poisson solver. The fluxes are computed using a low-dissipation high-order upwind differencing method or a tuned high-resolution finite difference method with no numerical dissipation. Numerical results are presented for the case of imposed ion temperature and density gradients. Different forms of controlled regularization to achieve a well-posed system are used to obtain convergent resolved simulations. The regularization of the equations is achieved by means of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism to the Vlasov equation and derive sub-grid-scale (SGS) terms analogous to the Reynolds stress terms in hydrodynamic turbulence. We present a priori quantifications of these SGS terms in resolved simulations of drift-kinetic turbulence by applying a sharp filter.

  5. Simulations of mixing in Inertial Confinement Fusion with front tracking and sub-grid scale models

    Science.gov (United States)

    Rana, Verinder; Lim, Hyunkyung; Melvin, Jeremy; Cheng, Baolian; Glimm, James; Sharp, David

    2015-11-01

    We present two related results. The first discusses the Richtmyer-Meshkov instability (RMI) and Rayleigh-Taylor instability (RTI) and their evolution in Inertial Confinement Fusion simulations. We show the evolution of the RMI into the late-time RTI under transport effects and front tracking. The sub-grid scale models help capture the interaction of turbulence with diffusive processes. The second assesses the effects of concentration on the physics model and examines the mixing properties in the low Reynolds number hot spot. We discuss the effect of concentration on the Schmidt number. The simulation results are produced using the University of Chicago code FLASH and Stony Brook University's front tracking algorithm.

  6. Sub-grid-scale effects on short-wave instability in magnetized Hall-MHD plasma

    International Nuclear Information System (INIS)

    Miura, H.; Nakajima, N.

    2010-11-01

    Aiming to clarify the effects of short-wave modes on the nonlinear evolution/saturation of the ballooning instability in the Large Helical Device, fully three-dimensional simulations of the single-fluid MHD and Hall MHD equations are carried out. A moderate parallel heat conductivity plays an important role in both kinds of simulations. In the single-fluid MHD simulations, the parallel heat conduction effectively suppresses short-wave ballooning modes, but it turns out that the suppression is insufficient in comparison to an experimental result. In the Hall MHD simulations, the parallel heat conduction triggers a rapid growth of the parallel flow and enhances nonlinear couplings. A comparison between the single-fluid and Hall MHD simulations reveals that the Hall MHD model does not necessarily improve the saturated pressure profile, and that we may need a further extension of the model. We also find, by comparing two Hall MHD simulations with different numerical resolutions, that sub-grid scales of the Hall term should be modeled to mimic an inverse energy transfer in wave number space. (author)

  7. A SUB-GRID VOLUME-OF-FLUIDS (VOF) MODEL FOR MIXING IN RESOLVED SCALE AND IN UNRESOLVED SCALE COMPUTATIONS

    International Nuclear Information System (INIS)

    Vold, Erik L.; Scannapieco, Tony J.

    2007-01-01

    A sub-grid mix model based on a volume-of-fluids (VOF) representation is described for computational simulations of the transient mixing between reactive fluids, in which the atomically mixed components enter into the reactivity. The multi-fluid model allows each fluid species to have independent values for density, energy, pressure and temperature, as well as independent velocities and volume fractions. Fluid volume fractions are further divided into mix components to represent their 'mixedness' for more accurate prediction of reactivity. Time-dependent conversion from unmixed volume fractions (denoted cf) to atomically mixed (af) fluids by diffusive processes is represented in resolved scale simulations with the volume fractions (cf, af mix). In unresolved scale simulations, the transition to atomically mixed materials begins with a conversion from unmixed material to a sub-grid volume fraction (pf). This fraction represents the unresolved small scales in the fluids, heterogeneously mixed by turbulent or multi-phase mixing processes, and this fraction then proceeds in a second step to the atomically mixed fraction by diffusion (cf, pf, af mix). Species velocities are evaluated with a species drift flux, ρ_i u_di = ρ_i (u_i - u), used to describe the fluid mixing sources in several closure options. A simple example of mixing fluids during 'interfacial deceleration mixing' with a small amount of diffusion illustrates the generation of atomically mixed fluids in two cases, for resolved scale simulations and for unresolved scale simulations. Application to reactive mixing, including Inertial Confinement Fusion (ICF), is planned for future work.
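
    The two-stage bookkeeping (unmixed cf, heterogeneously mixed pf, atomically mixed af) can be sketched as a pair of first-order conversion rates. The rate constants and the explicit time stepping are illustrative assumptions, not the closures of the actual model.

```python
def advance_mix(cf, pf, af, k_turb, k_diff, dt):
    """One explicit step of the two-stage conversion: unmixed (cf) ->
    heterogeneously mixed sub-grid fraction (pf) -> atomically mixed (af).
    Rates k_turb and k_diff are illustrative placeholders."""
    d_cp = k_turb * cf * dt   # turbulent / multi-phase mixing of cf into pf
    d_pa = k_diff * pf * dt   # diffusive conversion of pf into af
    return cf - d_cp, pf + d_cp - d_pa, af + d_pa

cf, pf, af = 1.0, 0.0, 0.0    # start fully unmixed
for _ in range(1000):
    cf, pf, af = advance_mix(cf, pf, af, k_turb=0.5, k_diff=0.2, dt=0.01)
# the three volume fractions always sum to 1 (conservation)
```

    The structure mirrors the (cf, pf, af) chain of the abstract: material must pass through the heterogeneously mixed state before it can react as atomically mixed fluid.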

  9. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    Directory of Open Access Journals (Sweden)

    Sam Ali Al

    2015-01-01

    The performance of sub-grid scale models is studied by simulating a separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities, obtained by using large-eddy simulations at different mesh resolutions, are compared with direct numerical simulation data. The effectiveness of modeling the wall stresses by using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly resolved large-eddy simulations and direct numerical simulation data regardless of the sub-grid scale model. However, the agreement is less satisfactory on a relatively coarse grid without any wall model, and the differences between sub-grid scale models become distinguishable. Using a local wall model recovered the basic flow topology and significantly reduced the differences between the coarse-mesh large-eddy simulations and the direct numerical simulation data. The results show that the ability of the local wall model to predict the separation zone depends strongly on how it is implemented.
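
    A local log-law wall model of the kind tested above boils down to inverting the log law for the friction velocity at the first off-wall grid point. The constants, the fixed-point iteration, and the sample numbers below are a minimal sketch, not the paper's implementation.

```python
import math

def wall_stress_loglaw(u, y, nu, kappa=0.41, B=5.2, rho=1.0):
    """Estimate wall shear stress from the log law
    u/u_tau = (1/kappa) * ln(y * u_tau / nu) + B
    by fixed-point iteration on the friction velocity u_tau."""
    u_tau = max(1e-6, nu / y)  # initial guess (y+ ~ 1)
    for _ in range(100):
        u_tau = u / ((1.0 / kappa) * math.log(y * u_tau / nu) + B)
    return rho * u_tau**2      # tau_w = rho * u_tau^2

# tangential velocity 15 at height 0.05 above the wall, nu = 1e-5 (arbitrary units)
tau_w = wall_stress_loglaw(u=15.0, y=0.05, nu=1e-5)
```

    The returned stress is then applied as a boundary flux instead of resolving the near-wall gradient, which is why the coarse-grid LES recovers the correct flow topology.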

  10. Application of a Steady Meandering River with Piers Using a Lattice Boltzmann Sub-Grid Model in Curvilinear Coordinate Grid

    Directory of Open Access Journals (Sweden)

    Liping Chen

    2018-05-01

    A sub-grid multiple relaxation time (MRT) lattice Boltzmann model in curvilinear coordinates is applied to simulate an artificial meandering river. The method is based on the D2Q9 model, and the standard Smagorinsky sub-grid scale (SGS) model is introduced to simulate meandering flows. The interpolation-supplemented lattice Boltzmann method (ISLBM) and the non-equilibrium extrapolation method are used for second-order accuracy and boundary conditions. The proposed model was validated against a meandering channel with a 180° bend and applied to a steady curved river with piers. Excellent agreement between the simulated results and previous computational and experimental data was found, showing that the MRT lattice Boltzmann method coupled with a Smagorinsky sub-grid scale (SGS) model on a curvilinear coordinate grid is capable of simulating practical meandering flows.
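
    In a lattice Boltzmann code the Smagorinsky SGS model enters as an eddy viscosity added to the molecular one, which in turn sets the local relaxation time. The sketch below uses single-relaxation-time lattice units with c_s² = 1/3 (so ν = (τ − 1/2)/3); the Smagorinsky constant and the strain-rate input are illustrative, and an MRT scheme would adjust its shear-related relaxation rates analogously.

```python
def smagorinsky_tau(nu0, strain_norm, c_smag=0.16, delta=1.0):
    """Effective relaxation time for a D2Q9 lattice with a Smagorinsky
    eddy viscosity. Lattice units with cs^2 = 1/3, so nu = (tau - 1/2)/3.
    strain_norm is |S|, the resolved strain-rate magnitude."""
    nu_t = (c_smag * delta) ** 2 * strain_norm   # sub-grid eddy viscosity
    return 3.0 * (nu0 + nu_t) + 0.5

tau_lam = smagorinsky_tau(nu0=1.0 / 30.0, strain_norm=0.0)  # no SGS viscosity
tau_sgs = smagorinsky_tau(nu0=1.0 / 30.0, strain_norm=0.5)  # strained region
```

    Regions of strong resolved shear get a larger effective relaxation time, i.e. more dissipation, which stabilizes the coarse simulation of the meandering flow.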

  11. Impact of Sub-grid Soil Textural Properties on Simulations of Hydrological Fluxes at the Continental Scale Mississippi River Basin

    Science.gov (United States)

    Kumar, R.; Samaniego, L. E.; Livneh, B.

    2013-12-01

    Knowledge of soil hydraulic properties such as porosity and saturated hydraulic conductivity is required to accurately model the dynamics of near-surface hydrological processes (e.g. evapotranspiration and root-zone soil moisture dynamics) and provide reliable estimates of regional water and energy budgets. Soil hydraulic properties are commonly derived from pedo-transfer functions using soil textural information recorded during surveys, such as the fractions of sand and clay, bulk density, and organic matter content. Typically, large-scale land-surface models are parameterized using a relatively coarse soil map with little or no information on parametric sub-grid variability. In this study we analyze the impact of sub-grid soil variability on simulated hydrological fluxes over the Mississippi River Basin (≈3,240,000 km²) at multiple spatio-temporal resolutions. A set of numerical experiments was conducted with the distributed mesoscale hydrologic model (mHM) using two soil datasets: (a) the Digital General Soil Map of the United States, or STATSGO2 (1:250 000), and (b) the recently collated Harmonized World Soil Database based on the FAO-UNESCO Soil Map of the World (1:5 000 000). mHM was parameterized with the multi-scale regionalization technique that derives distributed soil hydraulic properties via pedo-transfer functions and regional coefficients. Within the experimental framework, 3-hourly model simulations were conducted at four spatial resolutions ranging from 0.125° to 1°, using meteorological datasets from the NLDAS-2 project for the period 1980-2012. Preliminary results indicate that the model was able to capture observed streamflow behavior reasonably well with both soil datasets in the major sub-basins (i.e. the Missouri, the Upper Mississippi, the Ohio, the Red, and the Arkansas). However, the spatio-temporal patterns of simulated water fluxes and states (e.g. soil moisture, evapotranspiration) from the two simulations showed marked
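
    A pedo-transfer function of the kind described maps texture fractions to hydraulic properties through simple regressions. The sketch below uses Cosby-type functional forms with illustrative placeholder coefficients, which are assumptions, not the regressions used in mHM.

```python
def pedotransfer(sand_pct, clay_pct):
    """Illustrative Cosby-type pedo-transfer regression: derive porosity [-]
    and saturated hydraulic conductivity [cm/h] from sand and clay
    percentages. Coefficients are placeholders for illustration only."""
    porosity = 0.489 - 0.00126 * sand_pct
    k_sat = 10.0 ** (-0.60 + 0.0126 * sand_pct - 0.0064 * clay_pct)
    return porosity, k_sat

phi, ks = pedotransfer(sand_pct=40.0, clay_pct=20.0)
```

    Because the mapping is nonlinear in texture, applying it to a coarse-grid mean texture and averaging it over sub-grid texture classes give different effective parameters, which is the sub-grid effect the study quantifies.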

  12. Urban runoff (URO) process for MODFLOW 2005: simulation of sub-grid scale urban hydrologic processes in Broward County, FL

    Science.gov (United States)

    Decker, Jeremy D.; Hughes, J.D.

    2013-01-01

    Climate change and sea-level rise could cause substantial changes in urban runoff and flooding in low-lying coastal landscapes. A major challenge for local government officials and decision makers is to translate the potential global effects of climate change into actionable and cost-effective adaptation and mitigation strategies at county and municipal scales. A MODFLOW process is used to represent sub-grid scale hydrology in urban settings to help address these issues. Coupled interception, surface water, depression, and unsaturated zone storage are represented. A two-dimensional diffusive wave approximation is used to represent overland flow. Three different options for representing infiltration and recharge are presented. Additional features include structure, barrier, and culvert flow between adjacent cells; specified stage boundaries; critical flow boundaries; source/sink surface-water terms; and bi-directional runoff to the MODFLOW Surface-Water Routing process. Some abilities of the Urban RunOff (URO) process are demonstrated with a synthetic problem using four land uses and varying cell coverages. Precipitation from a hypothetical storm was applied, and cell-by-cell surface-water depth, groundwater level, infiltration rate, and groundwater recharge rate are shown. Results indicate the URO process has the ability to produce time-varying, water-content-dependent infiltration and leakage, and successfully interacts with MODFLOW.
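
    The diffusive wave approximation mentioned above replaces the full momentum equation with Manning's equation driven by the water-surface slope. A minimal 1-D explicit sketch (not the URO implementation; the upwind depth choice and sample values are assumptions):

```python
import numpy as np

def diffusive_wave_step(h, z, dx, dt, n_mann=0.03):
    """One explicit 1-D diffusive-wave step: flux between adjacent cells from
    Manning's equation with the water-surface slope as the friction slope."""
    eta = h + z                                   # water-surface elevation
    slope = -np.diff(eta) / dx                    # positive -> flow to the right
    h_face = np.maximum(np.minimum(h[:-1], h[1:]), 0.0)  # conveyance depth at faces
    q = np.sign(slope) * (h_face ** (5.0 / 3.0) / n_mann) * np.sqrt(np.abs(slope))
    dh = np.zeros_like(h)
    dh[:-1] -= q * dt / dx                        # donor cell loses volume
    dh[1:] += q * dt / dx                         # receiver cell gains volume
    return np.maximum(h + dh, 0.0)

h = np.array([0.2, 0.05, 0.0, 0.0])               # ponded depths (m)
z = np.zeros(4)                                   # flat bed
h1 = diffusive_wave_step(h, z, dx=1.0, dt=0.1)
```

    The scheme is mass-conservative by construction (each face flux is subtracted from one cell and added to its neighbor), which is essential when the runoff is passed back to MODFLOW as recharge.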

  13. Sub-grid scale representation of vegetation in global land surface schemes: implications for estimation of the terrestrial carbon sink

    Directory of Open Access Journals (Sweden)

    J. R. Melton

    2014-02-01

    Terrestrial ecosystem models commonly represent vegetation in terms of plant functional types (PFTs) and use their vegetation attributes in calculations of the energy and water balance, as well as to investigate the terrestrial carbon cycle. Sub-grid scale variability of PFTs in these models is represented using different approaches, with the "composite" and "mosaic" approaches being the two end-members. The impact of these two approaches on the global carbon balance has been investigated with the Canadian Terrestrial Ecosystem Model (CTEM v. 1.2) coupled to the Canadian Land Surface Scheme (CLASS v. 3.6). In the composite (single-tile) approach, the vegetation attributes of the different PFTs present in a grid cell are aggregated and used in calculations to determine the resulting physical environmental conditions (soil moisture, soil temperature, etc.) that are common to all PFTs. In the mosaic (multi-tile) approach, energy and water balance calculations are performed separately for each PFT tile, and each tile's physical land surface environmental conditions evolve independently. Pre-industrial equilibrium CLASS-CTEM simulations yield global totals of vegetation biomass, net primary productivity, and soil carbon that compare reasonably well with observation-based estimates and differ by less than 5% between the mosaic and composite configurations. However, on a regional scale the two approaches can differ by > 30%, especially in areas with high heterogeneity in land cover. Simulations over the historical period (1959–2005) show different responses to evolving climate and carbon dioxide concentrations from the two approaches. The cumulative global terrestrial carbon sink estimated over the 1959–2005 period (excluding land use change (LUC) effects) differs by around 5% between the two approaches (96.3 and 101.3 Pg C for the mosaic and composite approaches, respectively) and compares well with the observation-based estimate of 82.2 ± 35 Pg C over the same
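
    The composite/mosaic distinction is an order-of-operations question: aggregate PFT attributes first and compute once, or compute per tile and aggregate the fluxes. Because land-surface fluxes are nonlinear in the attributes, the two orders differ. A toy illustration with an assumed quadratic flux (the flux law and numbers are purely illustrative):

```python
def grid_flux(pfts, forcing, composite=True):
    """Contrast the 'composite' (aggregate attributes, then compute) and
    'mosaic' (compute per PFT tile, then aggregate) approaches with a toy
    nonlinear flux F = attr**2 * forcing. pfts is [(area_fraction, attr), ...]."""
    if composite:
        attr = sum(f * a for f, a in pfts)            # area-weighted attribute
        return attr**2 * forcing                       # one grid-mean computation
    return sum(f * a**2 * forcing for f, a in pfts)    # per-tile, then aggregate

pfts = [(0.5, 1.0), (0.5, 3.0)]   # two PFTs with very different attributes
fc = grid_flux(pfts, forcing=1.0, composite=True)      # (0.5*1 + 0.5*3)^2 = 4.0
fm = grid_flux(pfts, forcing=1.0, composite=False)     # 0.5*1 + 0.5*9 = 5.0
```

    The gap between fc and fm grows with sub-grid heterogeneity, consistent with the abstract's finding that regional differences exceed 30% where land cover is most heterogeneous.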

  14. Modeling lightning-NOx chemistry on a sub-grid scale in a global chemical transport model

    Directory of Open Access Journals (Sweden)

    A. Gressent

    2016-05-01

    For the first time, a plume-in-grid approach is implemented in a chemical transport model (CTM) to parameterize the effects of the nonlinear reactions occurring within highly concentrated NOx plumes from lightning NOx emissions (LNOx) in the upper troposphere. It is characterized by a set of parameters including the plume lifetime, the effective reaction rate constant related to NOx–O3 chemical interactions, and the fractions of NOx conversion into HNO3 within the plume. Parameter estimates were made using the Dynamical Simple Model of Atmospheric Chemical Complexity (DSMACC) box model, simple plume dispersion simulations, and the 3-D Meso-NH non-hydrostatic mesoscale atmospheric model. In order to assess the impact of the LNOx plume approach on the NOx and O3 distributions on a large scale, simulations for the year 2006 were performed using the GEOS-Chem global model with a horizontal resolution of 2° × 2.5°. The implementation of the LNOx parameterization implies an NOx and O3 decrease on a large scale over regions characterized by strong lightning activity (up to 25 and 8%, respectively, over central Africa in July) and a relative increase downwind of LNOx emissions (up to 18 and 2% for NOx and O3, respectively, in July). The calculated variability in NOx and O3 mixing ratios around the mean value, according to the known uncertainties in the parameter estimates, is at a maximum over continental tropical regions, with ΔNOx [-33.1, +29.7] ppt and ΔO3 [-1.56, +2.16] ppb in January, and ΔNOx [-14.3, +21] ppt and ΔO3 [-1.18, +1.93] ppb in July, depending mainly on the determination of the diffusion properties of the atmosphere and the initial NO mixing ratio injected by lightning. This approach allows us (i) to reproduce a more realistic lightning NOx chemistry, leading to better NOx and O3 distributions on the large scale, and (ii) to focus on other improvements to reduce remaining uncertainties from processes
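
    The plume-in-grid bookkeeping can be reduced to its essentials: while the plume persists (up to its lifetime), in-plume NOx is converted to HNO3 at an effective rate; what survives is then released to the grid cell. The rate, lifetime, and amounts below are illustrative placeholders, not the fitted parameters of the study.

```python
import math

def plume_dilution(nox0, tau_plume, k_eff, t):
    """Toy plume-in-grid bookkeeping: first-order conversion of in-plume NOx
    to HNO3 at effective rate k_eff, active only during the plume lifetime
    tau_plume; the surviving NOx is released to the grid cell.
    Returns (nox_remaining, hno3_formed)."""
    t_eff = min(t, tau_plume)          # in-plume chemistry stops at dilution
    remaining = nox0 * math.exp(-k_eff * t_eff)
    return remaining, nox0 - remaining

# 100 units of lightning NOx, 3 h plume lifetime, evaluated well after dilution
nox, hno3 = plume_dilution(nox0=100.0, tau_plume=3.0, k_eff=0.05, t=10.0)
```

    Capping the conversion at the plume lifetime is what makes the grid-scale source smaller than an instantaneously diluted one, producing the large-scale NOx decrease the simulations show.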

  15. Influence of Sub-grid-Scale Isentropic Transports on McRAS Evaluations using ARM-CART SCM Datasets

    Science.gov (United States)

    Sud, Y. C.; Walker, G. K.; Tao, W. K.

    2004-01-01

    In GCM-physics evaluations with the currently available ARM-CART SCM datasets, McRAS produced a very similar character of near-surface errors in simulated temperature and humidity, typically containing warm and moist biases near the surface and cold and dry biases aloft. We argued it must have a common cause, presumably rooted in the model physics. Lack of vertical adjustment of horizontal transport was thought to be a plausible source. Clearly, debarring such a freedom would force the incoming air to diffuse into the grid cell, which would naturally bias the surface air to become warm and moist while the upper air becomes cold and dry, a characteristic feature of the McRAS biases. Since the errors were significantly larger in the two winter cases, which contain potentially more intense episodes of cold and warm advective transports, this further reaffirmed our argument and provided additional motivation to introduce the corrections. When the horizontal advective transports were suitably modified to allow rising and/or sinking following isentropic pathways of subgrid scale motions, the outcome was to cool and dry (or warm and moisten) the lower (or upper) levels. Even crude approximations invoking such a correction reduced the temperature and humidity biases considerably. The tests were performed on all the available ARM-CART SCM cases with consistent outcomes. With the isentropic corrections implemented through two different numerical approximations, virtually identical benefits were derived, further confirming the robustness of our inferences. These results suggest the need for an isentropic advective transport adjustment in a GCM due to subgrid scale motions.

  16. Development of SGS for various waste drums

    International Nuclear Information System (INIS)

    Kim, Ki-Hong; Ryu, Young-Gerl; Kwak, Kyung-Kil; Ji, Yong-Young

    2006-01-01

    A radioactive waste assay system was manufactured to measure the activities of individual nuclides in homogeneous and non-homogeneous waste drums and to avoid worker exposure. After measuring the activities of all individual γ-emitters, the system was programmed to calculate the activities of α and β emitters automatically, and then to calculate the total activity of the drum by utilizing scaling factors (the relationship between α/β emitters and Co-60, Cs-137). In general, SGS (segmented gamma scanning) divides a waste drum into 8 segments vertically, and also 8 sectors within each segment, to minimize the error. SGS can also determine the density of the drum by using several matrix correction methods, such as transmission ratio, differential peak absorption, and mean density correction, individually or in combination. However, NPPs and other nuclear facilities can generate drums of various sizes (100-350 L). To analyze the activities of γ-emitters from various drums, we modified the collimator (horizontal and vertical) and added a detector mover to the existing SGS system. As a result, the measurement error was <12% at a short distance (10 segments, Co-60: 47.87 μCi and Cs-137: 101.16 μCi) and <25% at a long distance (8 segments, same sources). This system can be applied to drums that a TGS system cannot analyze (for example, high density, high activity, and large volume). (author)
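
    The scaling-factor step can be written down directly: hard-to-measure α and β activities are inferred as fixed ratios to the measured key γ-emitters, and everything is summed for the drum total. The scaling factor values below are illustrative assumptions, not calibrated ones.

```python
def drum_activity(gamma_activities, sf_alpha, sf_beta):
    """Total drum activity from measured gamma-emitter activities plus
    alpha/beta contributions inferred via scaling factors (activity ratios
    to key nuclides such as Co-60 and Cs-137). Values in the same units
    as the inputs (here uCi); scaling factors are illustrative."""
    a_gamma = sum(gamma_activities.values())
    a_alpha = sum(sf_alpha.get(n, 0.0) * a for n, a in gamma_activities.items())
    a_beta = sum(sf_beta.get(n, 0.0) * a for n, a in gamma_activities.items())
    return a_gamma + a_alpha + a_beta

total = drum_activity(
    {"Co-60": 47.87, "Cs-137": 101.16},      # measured key nuclides (uCi)
    sf_alpha={"Cs-137": 0.001},               # hypothetical scaling factors
    sf_beta={"Co-60": 0.5, "Cs-137": 1.2})
```

    In practice the scaling factors come from sampling campaigns per waste stream; the assay system only automates this multiplication and summation.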

  17. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    Science.gov (United States)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the power law, has also been demonstrated in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates, and flow over a rough surface with the presence of a model wind turbine, taking for the latter case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of

  18. Sub-Grid Modeling of Electrokinetic Effects in Micro Flows

    Science.gov (United States)

    Chen, C. P.

    2005-01-01

    Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (lab-on-chip devices). To successfully design and operate microfluidic systems, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nano dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomalies (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows pose a significant numerical challenge because the double layer (10 nm up to microns) is very thin compared to the channel width (which can be hundreds of microns). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-D geometries. Existing approaches using a "slip" wall velocity or an augmented double layer are difficult to use when the flow geometry is complicated, e.g. flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can be accounted for at the same time. With this
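
    The scale separation the abstract describes is easy to quantify: the EDL thickness is the Debye length, and the "slip wall" alternative it contrasts with replaces the unresolved EDL by the Helmholtz-Smoluchowski slip velocity. A short sketch with standard constants (the zeta potential and field strength are illustrative assumptions):

```python
import math

def debye_length(c_molar, T=298.15, eps_r=78.5, z=1):
    """EDL thickness (Debye length, m) for a symmetric z:z aqueous electrolyte."""
    eps0 = 8.854e-12          # vacuum permittivity, F/m
    kB = 1.381e-23            # Boltzmann constant, J/K
    e = 1.602e-19             # elementary charge, C
    NA = 6.022e23             # Avogadro's number, 1/mol
    n0 = c_molar * 1000.0 * NA                    # bulk ion density, 1/m^3
    return math.sqrt(eps_r * eps0 * kB * T / (2.0 * n0 * (z * e) ** 2))

def slip_velocity(E, zeta=-0.05, eps_r=78.5, mu=1e-3):
    """Helmholtz-Smoluchowski electro-osmotic slip: u = -eps * zeta * E / mu."""
    return -eps_r * 8.854e-12 * zeta * E / mu

lam = debye_length(c_molar=1e-3)   # ~10 nm for a 1 mM solution
u_s = slip_velocity(E=1e4)         # slip speed for 100 V/cm, m/s
```

    For a 1 mM buffer the EDL is ~10 nm against a channel of hundreds of microns, a four-orders-of-magnitude gap, which is exactly why a sub-grid treatment (rather than direct gridding of the EDL) is attractive.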

  19. SGS Modeling of the Internal Energy Equation in LES of Supersonic Channel Flow

    Science.gov (United States)

    Raghunath, Sriram; Brereton, Giles

    2011-11-01

    DNS of fully-developed turbulent supersonic channel flows (Reτ = 190) at up to Mach 3 indicate that the turbulent heat fluxes depend only weakly on Mach number, while the viscous dissipation and pressure dilatation do so strongly. Moreover, pressure dilatation makes a significant contribution to the internal energy budget at Mach 3 and higher. The balance between these terms is critical to determining the temperature (and so molecular viscosity) from the internal energy equation and so, in LES of these flows, it is essential to use accurate SGS models for the viscous dissipation and the pressure dilatation. In this talk, we present LES results for supersonic channel flow, using SGS models for these terms that are based on the resolved-scale dilatation, an inverse timescale, and SGS momentum fluxes, which intrinsically represent this Mach number effect.

  20. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    Science.gov (United States)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  1. Permafrost sub-grid heterogeneity of soil properties key for 3-D soil processes and future climate projections

    Directory of Open Access Journals (Sweden)

    Christian Beer

    2016-08-01

    Full Text Available There are massive carbon stocks stored in permafrost-affected soils due to the 3-D soil movement process called cryoturbation. For a reliable projection of the past, recent and future Arctic carbon balance, and hence climate, a reliable concept for representing cryoturbation in a land surface model (LSM) is required. The basis of the underlying transport processes is pedon-scale heterogeneity of soil hydrological and thermal properties, as well as of insulating layers such as snow and vegetation. Today we still lack a concept of how to reliably represent pedon-scale properties and processes in an LSM. One possibility could be a statistical approach. This perspective paper demonstrates the importance of sub-grid heterogeneity in permafrost soils as a prerequisite to implementing any lateral transport parametrization. Representing such heterogeneity at the sub-pixel scale of an LSM is the next logical step in model advancement. In a theoretical experiment, heterogeneity of the thermal and hydrological soil properties alone led to a remarkable initial sub-grid range of subsoil temperature of 2 °C, and of active-layer thickness of 150 cm, in East Siberia. These results show the way forward in representing combined lateral and vertical transport of water and soil in LSMs.

  2. Smaller global and regional carbon emissions from gross land use change when considering sub-grid secondary land cohorts in a global dynamic vegetation model

    Science.gov (United States)

    Yue, Chao; Ciais, Philippe; Li, Wei

    2018-02-01

    Several modelling studies reported elevated carbon emissions from historical land use change (ELUC) by including bidirectional transitions on the sub-grid scale (termed gross land use change), dominated by shifting cultivation and other land turnover processes. However, most dynamic global vegetation models (DGVMs) that have implemented gross land use change either do not account for sub-grid secondary lands, or often have only a single secondary land tile over a model grid cell and thus cannot account for various rotation lengths in shifting cultivation and associated secondary forest age dynamics. Therefore, it remains uncertain how realistic the past ELUC estimations are and how estimated ELUC will differ between the two modelling approaches with and without multiple sub-grid secondary land cohorts - in particular secondary forest cohorts. Here we investigated historical ELUC over 1501-2005 by including sub-grid forest age dynamics in a DGVM. We ran two simulations, one with no secondary forests (Sageless) and the other with sub-grid secondary forests of six age classes whose demography is driven by historical land use change (Sage). Estimated global ELUC for 1501-2005 is 176 Pg C in Sage compared to 197 Pg C in Sageless. The lower ELUC values in Sage arise mainly from shifting cultivation in the tropics under an assumed constant rotation length of 15 years, being 27 Pg C in Sage in contrast to 46 Pg C in Sageless. Estimated cumulative ELUC values from wood harvest in the Sage simulation (31 Pg C) are, however, slightly higher than in Sageless (27 Pg C) when the model is forced by reconstructed harvested areas, because the secondary forests targeted in Sage for harvest priority are insufficient to meet the prescribed harvest area, leading to wood harvest being dominated by old primary forests. An alternative approach to quantify wood harvest ELUC, i.e. always harvesting the close-to-mature forests in both Sageless and Sage, yields similar values of 33 Pg C by both

  3. The Storm Surge and Sub-Grid Inundation Modeling in New York City during Hurricane Sandy

    Directory of Open Access Journals (Sweden)

    Harry V. Wang

    2014-03-01

    Full Text Available Hurricane Sandy inflicted heavy damage in New York City and on the New Jersey coast as the second costliest storm in history. A large-scale, unstructured grid storm tide model, the Semi-implicit Eulerian Lagrangian Finite Element (SELFE) model, was used to hindcast water level variation during Hurricane Sandy in the mid-Atlantic portion of the U.S. East Coast. The model was forced by eight tidal constituents at the model’s open boundary, 1500 km away from the coast, and by the wind and pressure fields from the Regional Atmospheric Modeling System (RAMS) atmospheric model provided by Weatherflow Inc. The comparisons of the modeled storm tide with the NOAA gauge stations from Montauk, NY, Long Island Sound, encompassing New York Harbor, Atlantic City, NJ, to Duck, NC, were in good agreement, with an overall root mean square error and relative error on the order of 15–20 cm and 5%–7%, respectively. Furthermore, using the large-scale model outputs as boundary conditions, a separate sub-grid model that incorporates LIDAR data for the major portion of New York City was also set up to investigate the detailed inundation process. The model results compared favorably with the USGS Hurricane Sandy Mapper database in terms of timing, local inundation area, and flood-water depth. A street-level inundation map, with water bypassing the city buildings, was created, and the maximum extent of horizontal inundation was calculated, which was within 30 m of the data-derived estimate by USGS.

  4. Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs.

    Science.gov (United States)

    Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Cheng, A.

    2017-12-01

    A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecast System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity, and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, comparatively few new prognostic variables need to be introduced, making the technique computationally efficient. In the base version of SHOC this is the SGS turbulent kinetic energy (TKE); in the developmental version it is the SGS TKE plus the variances of total water and moist static energy (MSE). SHOC is now incorporated into a version of GFS that will become part of the NOAA Next Generation Global Prediction System, built around NOAA GFDL's FV3 dynamical core, the NOAA Environmental Modeling System (NEMS) coupled modeling infrastructure software, and a set of novel physical parameterizations. Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these quantities. The radiative transfer parameterization uses the cloudiness computed by SHOC. An outstanding problem with the implementation of SHOC in the NCEP global models is excessively large high-level tropical cloudiness. Comparison of the moments of the SGS PDF diagnosed by SHOC with the moments calculated in a GigaLES simulation of a tropical deep convection case (GATE) shows that SHOC diagnoses too-narrow PDF distributions of total cloud water and MSE in the areas of deep convective detrainment. A subsequent sensitivity study of SHOC's diagnosed cloud fraction (CF) to higher-order input moments of the SGS PDF
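    The assumed-PDF diagnosis described above can be illustrated with a much-simplified single-Gaussian sketch: cloud fraction as the probability that sub-grid total water exceeds saturation. SHOC itself predicts a joint (double-Gaussian) PDF of several variables; the function name and parameter values below are illustrative assumptions, not NCEP code.

```python
import math

def cloud_fraction_gaussian(qt_mean, qt_var, q_sat):
    """Diagnose cloud fraction from an assumed Gaussian PDF of total water.

    qt_mean : grid-mean total water mixing ratio (kg/kg)
    qt_var  : sub-grid variance of total water (kg^2/kg^2)
    q_sat   : saturation mixing ratio (kg/kg)

    Cloud fraction = probability that the sub-grid qt exceeds q_sat.
    """
    sigma = math.sqrt(qt_var)
    if sigma == 0.0:
        return 1.0 if qt_mean > q_sat else 0.0
    s = (qt_mean - q_sat) / sigma            # normalized saturation deficit
    return 0.5 * math.erfc(-s / math.sqrt(2.0))

# A cell that is exactly saturated on average is half cloudy:
cf = cloud_fraction_gaussian(qt_mean=8e-3, qt_var=1e-6, q_sat=8e-3)
```

A narrower PDF (smaller `qt_var`) drives the diagnosed cloud fraction toward 0 or 1, which is the sensitivity discussed in the abstract.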

  5. Autonomous Operation of Hybrid Microgrid with AC and DC Sub-Grids

    DEFF Research Database (Denmark)

    Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper investigates the active and reactive power sharing of an autonomous hybrid microgrid. Unlike existing microgrids, which are purely ac, the hybrid microgrid studied here comprises dc and ac sub-grids interconnected by power electronic interfaces. The main challenge is to manage the power flow among all the sources distributed throughout the two types of sub-grids, which is certainly tougher than in previous efforts developed for either a purely ac or a purely dc microgrid. This wider scope of control has not yet been investigated, and relies on the coordinated operation of dc sources, ac sources and interlinking converters. Suitable control and normalization schemes are therefore developed for controlling them, with results presented showing the overall performance of the hybrid microgrid.

  6. The SGS3 protein involved in PTGS finds a family

    Directory of Open Access Journals (Sweden)

    Bateman Alex

    2002-08-01

    Full Text Available Abstract Background Post-transcriptional gene silencing (PTGS) is a recently discovered phenomenon that is an area of intense research interest. Components of the PTGS machinery are being discovered by genetic and bioinformatics approaches, but the picture is not yet complete. Results The gene for the PTGS-impaired Arabidopsis mutant sgs3 was recently cloned and was not found to have similarity to any other known protein. By a detailed analysis of the sequence of SGS3 we have defined three new protein domains: the XH domain, the XS domain and the zf-XS domain, which are shared with a large family of uncharacterised plant proteins. This work implicates these plant proteins in PTGS. Conclusion The enigmatic SGS3 protein has been found to contain two predicted domains in common with a family of plant proteins. The other members of this family have been predicted to be transcription factors; however, this function seems unlikely based on this analysis. A bioinformatics approach has implicated a new family of plant proteins related to SGS3 as potential candidates for PTGS-related functions.

  7. Implement a Sub-grid Turbulent Orographic Form Drag in WRF and its application to Tibetan Plateau

    Science.gov (United States)

    Zhou, X.; Yang, K.; Wang, Y.; Huang, B.

    2017-12-01

    Sub-grid-scale orographic variation exerts turbulent form drag on atmospheric flows. The Weather Research and Forecasting model (WRF) includes a turbulent orographic form drag (TOFD) scheme that adds the stress to the surface layer only. In this study, an alternative TOFD scheme has been incorporated in WRF3.7, which exerts an exponentially decaying drag on each model layer. To investigate the effect of the new scheme, WRF with the old and the new scheme was used to simulate the climate over the complex terrain of the Tibetan Plateau (TP). The two schemes were evaluated in terms of the direct impact (on wind) and the indirect impact (on air temperature, surface pressure and precipitation). In both winter and summer, the new TOFD scheme reduces the mean bias in the surface wind, and clearly reduces the root mean square errors (RMSEs) in comparisons with station measurements (Figure 1). Meanwhile, the 2-m air temperature and surface pressure are also improved (Figure 2), owing to stronger northward transport of warm air across the southern boundary of the TP in winter. The 2-m air temperature is hardly improved in summer, but the precipitation improvement is more obvious, with reduced mean bias and RMSEs. This is due to the weakening, with the new scheme, of the low-level water vapor flux crossing the Himalayan Mountains from South Asia.
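    The key difference between the two schemes is where the orographic stress is deposited. A minimal sketch of the layer-distributed variant follows; the decay height and surface stress value are arbitrary illustrative choices, not the constants of the WRF scheme.

```python
import numpy as np

def tofd_stress_profile(z, tau_surface, decay_height=1500.0):
    """Distribute a turbulent orographic form drag stress over height with an
    exponential decay, instead of applying it all in the surface layer.

    z            : heights of model levels above ground (m)
    tau_surface  : form drag stress at the surface (N/m^2)
    decay_height : assumed e-folding height of the drag (m), a tuning constant
    """
    return tau_surface * np.exp(-np.asarray(z) / decay_height)

z = np.array([10.0, 100.0, 500.0, 1500.0, 5000.0])   # illustrative model levels
tau = tofd_stress_profile(z, tau_surface=0.2)
# The drag tendency felt by each layer is the vertical divergence of this stress.
```

In a surface-only scheme the whole 0.2 N/m^2 acts on the lowest layer; here it tapers smoothly, which changes the simulated low-level wind profile.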

  8. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    Science.gov (United States)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form of Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local scaling parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF
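    The WRC/HCC pair referred to above is the Mualem-van Genuchten model. A minimal sketch follows; the parameter values are typical loam-like illustrations, not entries from the ROSETTA-based data set.

```python
import numpy as np

def van_genuchten_wrc(h, theta_r, theta_s, alpha, n):
    """Water retention curve theta(h) for suction head h > 0 (e.g. in cm)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)        # effective saturation in (0, 1)
    return theta_r + (theta_s - theta_r) * Se

def mualem_hcc(h, Ks, alpha, n, l=0.5):
    """Unsaturated hydraulic conductivity K(h) from Mualem-van Genuchten."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)
    return Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

h = np.logspace(0, 4, 5)    # suction heads from 1 to 10^4 cm
theta = van_genuchten_wrc(h, theta_r=0.05, theta_s=0.43, alpha=0.036, n=1.56)
K = mualem_hcc(h, Ks=25.0, alpha=0.036, n=1.56)  # cm/day, illustrative
```

Miller-type scaling then maps each local curve onto a grid-cell reference curve via a scaling factor applied to h, so one effective parameter set plus per-pixel scaling factors preserves the sub-grid spread.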

  9. The roles of the Saccharomyces cerevisiae RecQ helicase SGS1 in meiotic genome surveillance.

    Directory of Open Access Journals (Sweden)

    Amit Dipak Amin

    2010-11-01

    Full Text Available The Saccharomyces cerevisiae RecQ helicase Sgs1 is essential for mitotic and meiotic genome stability. The stage at which Sgs1 acts during meiosis is subject to debate. Cytological experiments showed that a deletion of SGS1 leads to an increase in synapsis initiation complexes and axial associations, leading to the proposal that it has an early role in unwinding surplus strand invasion events. Physical studies of recombination intermediates implicate it in the dissolution of double Holliday junctions between sister chromatids. In this work, we observed an increase in meiotic recombination between diverged sequences (homeologous recombination) and an increase in unequal sister chromatid events when SGS1 is deleted. The first of these observations is most consistent with an early role of Sgs1 in unwinding inappropriate strand invasion events, while the second is consistent with unwinding or dissolution of recombination intermediates in an Mlh1- and Top3-dependent manner. We also provide data that suggest that Sgs1 is involved in the rejection of 'second strand capture' when sequence divergence is present. Finally, we have identified a novel class of tetrads where non-sister spores (pairs of spores, each containing a centromere marker from a different parent) are inviable. We propose a model for this unusual pattern of viability based on the inability of sgs1 mutants to untangle intertwined chromosomes. Our data suggest that this role of Sgs1 is not dependent on its interaction with Top3. We propose that in the absence of SGS1, chromosomes may sometimes remain entangled at the end of pre-meiotic replication. This, combined with reciprocal crossing over, could lead to physical destruction of the recombined and entangled chromosomes. We hypothesise that Sgs1, acting in concert with the topoisomerase Top2, resolves these structures. This work provides evidence that Sgs1 interacts with various partner proteins to maintain genome stability throughout

  10. Use of fundamental condensation heat transfer experiments for the development of a sub-grid liquid jet condensation model

    Energy Technology Data Exchange (ETDEWEB)

    Buschman, Francis X., E-mail: Francis.Buschman@unnpp.gov; Aumiller, David L.

    2017-02-15

    Highlights:
    • Direct contact condensation data on liquid jets up to 1.7 MPa in pure steam and in the presence of noncondensable gas.
    • Identified a pressure effect on the impact of noncondensables in suppressing condensation heat transfer, not captured in existing data or correlations.
    • Pure steam data are used to develop a new correlation for condensation heat transfer on subcooled liquid jets.
    • Noncondensable data are used to develop a modification to the renewal time estimate used in the Young and Bajorek correlation for condensation suppression in the presence of noncondensables.
    • A jet injection boundary condition, using a sub-grid jet condensation model, is developed for COBRA-IE, which provides a more detailed estimate of the condensation rate on the liquid jet and allows the use of jet-specific closure relationships.
    Abstract: Condensation on liquid jets is an important phenomenon for many different facets of nuclear power plant transients and analyses, such as containment spray cooling. An experimental facility constructed at the Pennsylvania State University, the High Pressure Liquid Jet Condensation Heat Transfer facility (HPLJCHT), has been used to perform steady-state condensation heat transfer experiments in which the temperature of the liquid jet is measured at different axial locations, allowing the condensation rate to be determined over the jet length. Test data have been obtained in a pure steam environment and with varying concentrations of noncondensable gas. These data extend the available jet condensation data from near atmospheric pressure up to a pressure of 1.7 MPa. An empirical correlation for the liquid-side condensation heat transfer coefficient has been developed based on the data obtained in pure steam. The data obtained with noncondensable gas were used to develop a correlation for the renewal time as used in the condensation suppression model developed by Young and Bajorek. This paper describes a new sub-grid liquid jet

  12. Air quality impact of two power plants using a sub-grid

    International Nuclear Information System (INIS)

    Drevet, Jerome; Musson-Genon, Luc

    2012-01-01

    Modeling point source emissions of air pollutants with regional Eulerian models is likely to lead to errors, because a 3D Eulerian model is not able to correctly reproduce the evolution of a plume near its source. To overcome these difficulties, we applied a Gaussian puff model, embedded within a 3D Eulerian model, for an impact assessment of the EDF fossil-fuel-fired power plants of Porcheville and Vitry, Ile-de-France. We simulated an entire year of atmospheric processes for an area covering the Paris region with the Polyphemus platform, with which we conducted various scenarios with or without a Gaussian puff model, referred to as plume-in-grid, to independently handle the major point source emissions in Ile-de-France. Our study focuses on four chemical compounds (NO, NO2, SO2 and O3). The use of a Gaussian model is important particularly for primary compounds with low reactivity such as SO2, especially as industrial stacks are the major source of its emissions. SO2 concentrations simulated using plume-in-grid are closer to the concentrations measured by the stations of the air quality agencies (Associations Agreees de Surveillance de la Qualite de l'Air, AASQA), although they remain largely overestimated. The use of a Gaussian model increases the concentrations near the source and lowers background levels of the various chemical species (except O3). The simulated concentrations may vary by over 30% depending on whether we use the Gaussian model for primary compounds such as SO2 and NO, and by around 2% for secondary compounds such as NO2 and O3. Regarding the impact of the fossil-fuel-fired power plants, simulated concentrations are increased by approximately 1 μg/m3 for SO2 annual averages close to the Porcheville stack and are lowered by about 0.5 μg/m3 far from the sources, highlighting the less diffusive character of the Gaussian model in comparison with the Eulerian model. The integration of a sub-grid Gaussian model offers the advantage of
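    The plume-in-grid approach rests on the analytic Gaussian puff solution, which resolves near-source gradients that an Eulerian grid smears out. A minimal single-puff sketch follows; the fixed dispersion parameters are placeholders (in a real puff model they grow with puff age and atmospheric stability), and this is not the Polyphemus implementation.

```python
import math

def gaussian_puff(x, y, z, q, sigma_x, sigma_y, sigma_z, H=0.0):
    """Concentration (mass/m^3) from a single Gaussian puff of mass q,
    centred horizontally at the origin at stack height H (m), with
    dispersion parameters sigma_* (m). Ground reflection is included
    via an image puff at height -H."""
    norm = q / ((2.0 * math.pi) ** 1.5 * sigma_x * sigma_y * sigma_z)
    horiz = math.exp(-0.5 * (x / sigma_x) ** 2 - 0.5 * (y / sigma_y) ** 2)
    vert = (math.exp(-0.5 * ((z - H) / sigma_z) ** 2)
            + math.exp(-0.5 * ((z + H) / sigma_z) ** 2))
    return norm * horiz * vert

# Illustrative: concentration at the puff centre vs 100 m downwind, at ground level
c_centre = gaussian_puff(0.0, 0.0, 0.0, q=1000.0, sigma_x=50.0, sigma_y=50.0, sigma_z=20.0)
c_100m = gaussian_puff(100.0, 0.0, 0.0, q=1000.0, sigma_x=50.0, sigma_y=50.0, sigma_z=20.0)
```

A plume-in-grid scheme carries many such puffs per stack and hands their mass to the Eulerian grid once they have grown to the grid scale.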

  13. Biosensors and Biofuel Cells based on Vertically Aligned Carbon Nanotubes for Integrated Energy Sensing, Generation, and Storage (SGS) Systems

    Science.gov (United States)

    Pandey, Archana; Prasad, Abhishek; Khin Yap, Yoke

    2010-03-01

    Diabetes is a growing health issue in the nation, so in-situ glucose sensors that can monitor the glucose level in the body are in high demand. Furthermore, it would be exciting if excessive blood sugar could be converted into usable energy and stored in miniature batteries for applications. This will be the basis for an integrated energy sensing, generation, and storage (SGS) system in the future. Here we report the use of functionalized carbon nanotube arrays as glucose sensors, as well as fuel cells that can convert glucose into energy. In principle, these devices can be integrated to detect excessive blood glucose and then convert the glucose into energy. They are also in line with our efforts on miniature 3D microbatteries using CNTs [1]. All these devices will be the basis for future SGS systems. Details of these results will be discussed in the meeting. [1] Wang et al., in 206th Meeting of the Electrochemical Society, October 3-8, Honolulu, Hawaii (2004), Symposium Q1, abstract 1492. Y. K. Yap acknowledges support from DARPA (DAAD17-03-C-0115), USDA (2007-35603-17740), and the Multi-Scale Technologies Institute (MuSTI) at MTU.

  14. High speed corner and gap-seal computations using an LU-SGS scheme

    Science.gov (United States)

    Coirier, William J.

    1989-01-01

    The hybrid Lower-Upper Symmetric Gauss-Seidel (LU-SGS) algorithm was added to a widely used series of 2D/3D Euler/Navier-Stokes solvers and was demonstrated for a particular class of high-speed flows. A limited study was conducted to compare the hybrid LU-SGS for approximate Newton iteration and diagonalized Beam-Warming (DBW) schemes on a work and convergence history basis. The hybrid LU-SGS algorithm is more efficient and easier to implement than the DBW scheme originally present in the code for the cases considered. The code was validated for the hypersonic flow through two mutually perpendicular flat plates and then used to investigate the flow field in and around a simplified scramjet module gap seal configuration. Due to the similarities, the gap seal flow was compared to hypersonic corner flow at the same freestream conditions and Reynolds number.
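    The forward/backward sweep pattern that gives LU-SGS its name can be illustrated on a small linear system. The sketch below is a dense, educational symmetric Gauss-Seidel iteration; the actual LU-SGS scheme applies the same lower/upper sweep idea to approximately factored flux Jacobians on a structured grid, which this toy does not attempt.

```python
import numpy as np

def symmetric_gauss_seidel(A, b, x0=None, sweeps=50):
    """Solve A x = b by symmetric Gauss-Seidel: a forward sweep using the
    lower triangle followed by a backward sweep using the upper triangle.
    Converges for diagonally dominant (or SPD) A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(n):              # forward (lower-triangular) sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):    # backward (upper-triangular) sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x = symmetric_gauss_seidel(A, b)   # converges quickly for this dominant A
```

The appeal in an implicit flow solver is the same as here: each sweep needs only a triangular solve per unknown, with no explicit factorization or matrix storage beyond the diagonal blocks.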

  15. Holliday junction-containing DNA structures persist in cells lacking Sgs1 or Top3 following exposure to DNA damage

    DEFF Research Database (Denmark)

    Mankouri, Hocine W; Ashton, Thomas M; Hickson, Ian D

    2011-01-01

    The Sgs1-Rmi1-Top3 "dissolvasome" is required for the maintenance of genome stability and has been implicated in the processing of various types of DNA structures arising during DNA replication. Previous investigations have revealed that unprocessed (X-shaped) homologous recombination repair (HRR) ... structures arising in Sgs1-deficient strains are eliminated when Sgs1 is reactivated in vivo. We propose that HJ resolvases and Sgs1-Top3-Rmi1 comprise two independent processes to deal with HJ-containing DNA intermediates arising during HRR in S-phase.

  16. An investigation of the sub-grid variability of trace gases and aerosols for global climate modeling

    Directory of Open Access Journals (Sweden)

    Y. Qian

    2010-07-01

    Full Text Available One fundamental property and limitation of grid-based models is their inability to identify spatial details smaller than the grid cell size. While decades of work have gone into developing sub-grid treatments for clouds and land surface processes in climate models, the quantitative understanding of sub-grid processes and variability for aerosols and their precursors is much poorer. In this study, WRF-Chem is used to simulate the trace gases and aerosols over central Mexico during the 2006 MILAGRO field campaign, with multiple spatial resolutions and emission/terrain scenarios. Our analysis focuses on quantifying the sub-grid variability (SGV) of trace gases and aerosols within a typical global climate model grid cell, i.e. 75×75 km2.

    Our results suggest that a simulation with 3-km horizontal grid spacing adequately reproduces the overall transport and mixing of trace gases and aerosols downwind of Mexico City, while 75-km horizontal grid spacing is insufficient to represent local emission and terrain-induced flows along the mountain ridge, subsequently affecting the transport and mixing of plumes from nearby sources. Therefore, the coarse model grid cell average may not correctly represent aerosol properties measured over polluted areas. Probability density functions (PDFs) for trace gases and aerosols show that secondary trace gases and aerosols, such as O3, sulfate, ammonium, and nitrate, are more likely to have a relatively uniform probability distribution (i.e. smaller SGV) over a narrow range of concentration values. Mostly inert and long-lived trace gases and aerosols, such as CO and BC, are more likely to have broad and skewed distributions (i.e. larger SGV) over polluted regions. Over remote areas, all trace gases and aerosols are more uniformly distributed compared to polluted areas. Both CO and O3 SGV vertical profiles are nearly constant within the PBL during daytime, indicating that trace gases
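    The notion of SGV within a coarse cell built from fine-resolution pixels can be made concrete by block-aggregating a fine field, e.g. 3-km pixels inside a 75-km cell. The sketch below uses synthetic lognormal data as a stand-in for a skewed, CO-like tracer; it is an illustration, not the WRF-Chem analysis.

```python
import numpy as np

def subgrid_variability(field, block):
    """Coarse-grain a fine-resolution 2-D field into block x block cells and
    return the per-cell mean and standard deviation. The standard deviation
    is the sub-grid variability (SGV) a coarse model cannot resolve."""
    ny, nx = field.shape
    assert ny % block == 0 and nx % block == 0, "field must tile evenly"
    tiles = field.reshape(ny // block, block, nx // block, block)
    return tiles.mean(axis=(1, 3)), tiles.std(axis=(1, 3))

# e.g. a 2x2 array of 75-km cells, each built from 25x25 pixels of 3 km
rng = np.random.default_rng(0)
fine = rng.lognormal(mean=0.0, sigma=1.0, size=(50, 50))   # skewed tracer field
cell_mean, cell_sgv = subgrid_variability(fine, block=25)
```

Comparing `cell_sgv` between source regions and remote regions is the kind of diagnostic the abstract describes: skewed primary-tracer fields give large SGV, smooth secondary-species fields give small SGV.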

  17. Esc2 and Sgs1 act in functionally distinct branches of the homologous recombination repair pathway in Saccharomyces cerevisiae

    DEFF Research Database (Denmark)

    Mankouri, Hocine W; Ngo, Hien-Ping; Hickson, Ian D

    2009-01-01

    , the accumulation of these structures in esc2 (but not sgs1) mutants is entirely dependent on Mph1, a protein that shows structural similarity to the Fanconi anemia group M protein (FANCM). In the absence of both Esc2 and Sgs1, the intra-S-phase DNA damage checkpoint response is compromised after exposure to MMS...

  18. Numerical aspects of drift kinetic turbulence: Ill-posedness, regularization and a priori estimates of sub-grid-scale terms

    KAUST Repository

    Samtaney, Ravi

    2012-01-01

    of a simple collisional model, by inclusion of an ad-hoc hyperviscosity or artificial viscosity term or by implicit dissipation in upwind schemes. Comparisons between the various methods and regularizations are presented. We apply a filtering formalism

  19. Boundary Conditions and SGS Models for LES of Wall-Bounded Separated Flows: An Application to Engine-Like Geometries

    Directory of Open Access Journals (Sweden)

    Piscaglia F.

    2013-11-01

    Full Text Available The implementation and combination of advanced boundary conditions and subgrid-scale models for Large Eddy Simulation are presented. The goal is to perform reliable cold-flow LES simulations in complex geometries, such as in the cylinders of internal combustion engines. The implementation of an inlet boundary condition for synthetic turbulence generation and of two subgrid-scale models, the local dynamic Smagorinsky model and the Wall-Adapting Local Eddy-viscosity (WALE) SGS model, is described. The WALE model is based on the square of the velocity gradient tensor; it accounts for the effects of both the strain and the rotation rate of the smallest resolved turbulent fluctuations, and it recovers the proper y^3 near-wall scaling for the eddy viscosity without requiring a dynamic procedure; hence, it is expected to be a very reliable model for ICE simulation. Model validation has been performed separately on two steady-state flow benches: a backward-facing step geometry and a simple IC engine geometry with one fixed central valve. A discussion of the completeness of the LES simulation (i.e. LES simulation quality) is given.
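    The WALE eddy viscosity can be sketched pointwise from the resolved velocity-gradient tensor. The constant `Cw` and filter width below are illustrative choices, and this is a single-point sketch rather than a solver implementation; it does, however, reproduce the model's defining property of vanishing in pure shear, which is what yields the correct near-wall behaviour.

```python
import numpy as np

def wale_viscosity(grad_u, delta, Cw=0.325):
    """WALE sub-grid eddy viscosity at one point.

    grad_u : 3x3 resolved velocity gradient tensor, grad_u[i, j] = du_i/dx_j
    delta  : filter width (m)
    """
    g2 = grad_u @ grad_u
    S = 0.5 * (grad_u + grad_u.T)                          # strain-rate tensor
    Sd = 0.5 * (g2 + g2.T) - np.trace(g2) / 3.0 * np.eye(3)  # traceless sym g^2
    SS = np.sum(S * S)
    SdSd = np.sum(Sd * Sd)
    denom = SS ** 2.5 + SdSd ** 1.25
    if denom == 0.0:
        return 0.0
    return (Cw * delta) ** 2 * SdSd ** 1.5 / denom

g_shear = np.zeros((3, 3)); g_shear[0, 1] = 1.0            # pure shear du/dy
nu_shear = wale_viscosity(g_shear, delta=0.01)             # vanishes exactly

g_mixed = np.zeros((3, 3)); g_mixed[0, 1] = 1.0; g_mixed[1, 2] = 1.0
nu_mixed = wale_viscosity(g_mixed, delta=0.01)             # nonzero
```

Because `Sd` is zero for a pure-shear gradient, the eddy viscosity switches off near walls without the damping functions or dynamic procedure a Smagorinsky-type model needs.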

  20. On the influence of cloud fraction diurnal cycle and sub-grid cloud optical thickness variability on all-sky direct aerosol radiative forcing

    International Nuclear Information System (INIS)

    Min, Min; Zhang, Zhibo

    2014-01-01

    The objective of this study is to understand how the cloud fraction diurnal cycle and sub-grid cloud optical thickness variability influence the all-sky direct aerosol radiative forcing (DARF). We focus on the southeast Atlantic region, where transported smoke is often observed above low-level water clouds during burning seasons. We use CALIOP observations to derive the optical properties of aerosols. We developed two diurnal cloud fraction variation models. One is based on sinusoidal fitting of MODIS observations from the Terra and Aqua satellites. The other is based on high-temporal-frequency diurnal cloud fraction observations from SEVIRI on board a geostationary satellite. Both models indicate a strong cloud fraction diurnal cycle over the southeast Atlantic region. Sensitivity studies indicate that using a constant cloud fraction corresponding to the Aqua local equatorial crossing time (1:30 PM) generally leads to an underestimated (less positive) diurnal-mean DARF even when solar diurnal variation is considered. Using the cloud fraction corresponding to the Terra local equatorial crossing time (10:30 AM) generally leads to overestimation. The biases are typically around 10–20%, but can exceed 50%. The influence of sub-grid cloud optical thickness variability on DARF is studied utilizing the cloud optical thickness histogram available in the MODIS Level-3 daily data. Similar to previous studies, we found that the above-cloud smoke in the southeast Atlantic region has a strong warming effect at the top of the atmosphere. However, because of the plane-parallel albedo bias, the warming effect of above-cloud smoke could be significantly overestimated if the grid mean, instead of the full histogram, of cloud optical thickness is used in the computation. This bias generally increases with increasing above-cloud aerosol optical thickness and sub-grid cloud optical thickness inhomogeneity. 
Our results suggest that the cloud diurnal cycle and sub-grid cloud variability are important factors
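The plane-parallel albedo bias in this record follows from the concavity of cloud albedo in optical thickness: averaging the albedo over the sub-grid τ histogram gives a lower (darker) cloud albedo than evaluating the albedo at the grid-mean τ. A minimal sketch of the effect, assuming a simple two-stream-style albedo approximation α(τ) = τ/(τ + 7.7) and a lognormal τ distribution (both are illustrative assumptions, not values from the study):

```python
import numpy as np

def cloud_albedo(tau, g=7.7):
    """Illustrative concave albedo-vs-optical-thickness relation (two-stream style)."""
    return tau / (tau + g)

rng = np.random.default_rng(0)
# Hypothetical sub-grid tau histogram: lognormal, median 10, fairly inhomogeneous
tau = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=100_000)

albedo_of_mean = cloud_albedo(tau.mean())   # plane-parallel: use the grid-mean tau
mean_of_albedo = cloud_albedo(tau).mean()   # histogram-weighted: average the albedos

bias = albedo_of_mean - mean_of_albedo      # positive by Jensen's inequality
print(f"albedo(mean tau) = {albedo_of_mean:.3f}")
print(f"mean albedo      = {mean_of_albedo:.3f}")
print(f"plane-parallel albedo bias = {bias:+.3f}")
```

A too-bright underlying cloud makes absorbing smoke above it appear more warming, which is the direction of the overestimate reported in the abstract.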

  1. A sub-grid, mixture-fraction-based thermodynamic equilibrium model for gas phase combustion in FIRETEC: development and results

    Science.gov (United States)

    M. M. Clark; T. H. Fletcher; R. R. Linn

    2010-01-01

The chemical processes of gas phase combustion in wildland fires are complex and occur at length scales that are not resolved in computational fluid dynamics (CFD) models of landscape-scale wildland fire. A new approach for modelling fire chemistry in HIGRAD/FIRETEC (a landscape-scale CFD wildfire model) applies a mixture-fraction model relying on thermodynamic...

  2. Shu proteins promote the formation of homologous recombination intermediates that are processed by Sgs1-Rmi1-Top3

    DEFF Research Database (Denmark)

    Mankouri, Hocine W; Ngo, Hien-Ping; Hickson, Ian D

    2007-01-01

    CSM2, PSY3, SHU1, and SHU2 (collectively referred to as the SHU genes) were identified in Saccharomyces cerevisiae as four genes in the same epistasis group that suppress various sgs1 and top3 mutant phenotypes when mutated. Although the SHU genes have been implicated in homologous recombination ...

  3. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

Dia, Ben Mansour; Chacón-Rebollo, Tomás

    2015-01-01

A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces.

  4. Evidence that yeast SGS1, DNA2, SRS2, and FOB1 interact to maintain rDNA stability

    International Nuclear Information System (INIS)

    Tao Weitao; Budd, Martin; Campbell, Judith L.

    2003-01-01

We and others have proposed that faulty processing of arrested replication forks leads to increases in recombination and chromosome instability in Saccharomyces cerevisiae. Now we use the ribosomal DNA locus, which is a good model for all stages of DNA replication, to test this hypothesis. We showed previously that DNA replication pausing at the ribosomal DNA replication fork barrier (RFB) is accompanied by the occurrence of double-strand breaks (DSBs) near the RFB. Both pausing and breakage are elevated in the hypomorphic dna2-2 helicase mutant. Deletion of FOB1 suppresses the elevated pausing and DSB formation. Our current work shows that a mutation inactivating Sgs1, the yeast RecQ helicase ortholog, also causes accumulation of stalled replication forks and DSBs at the rDNA RFB. Either deletion of FOB1, which suppresses fork blocking and certain types of rDNA recombination, or an increase in SIR2 gene dosage, which suppresses rDNA recombination, reduces the number of forks persisting at the RFB. Although dna2-2 sgs1Δ double mutants are conditionally lethal, they do not show enhanced rDNA defects compared to sgs1Δ alone. However, surprisingly, the dna2-2 sgs1Δ lethality is suppressed by deletion of FOB1. On the other hand, the dna2-2 sgs1Δ lethality is only partially suppressed by deletion of RAD51. We propose that the replication-associated defects that we document in the rDNA are characteristic of similar events occurring either stochastically throughout the genome or at other regions where replication forks move slowly or stall, such as telomeres, centromeres, or replication slow zones.

  5. Evidence that yeast SGS1, DNA2, SRS2, and FOB1 interact to maintain rDNA stability

    Energy Technology Data Exchange (ETDEWEB)

    Tao Weitao; Budd, Martin; Campbell, Judith L

    2003-11-27

We and others have proposed that faulty processing of arrested replication forks leads to increases in recombination and chromosome instability in Saccharomyces cerevisiae. Now we use the ribosomal DNA locus, which is a good model for all stages of DNA replication, to test this hypothesis. We showed previously that DNA replication pausing at the ribosomal DNA replication fork barrier (RFB) is accompanied by the occurrence of double-strand breaks (DSBs) near the RFB. Both pausing and breakage are elevated in the hypomorphic dna2-2 helicase mutant. Deletion of FOB1 suppresses the elevated pausing and DSB formation. Our current work shows that a mutation inactivating Sgs1, the yeast RecQ helicase ortholog, also causes accumulation of stalled replication forks and DSBs at the rDNA RFB. Either deletion of FOB1, which suppresses fork blocking and certain types of rDNA recombination, or an increase in SIR2 gene dosage, which suppresses rDNA recombination, reduces the number of forks persisting at the RFB. Although dna2-2 sgs1Δ double mutants are conditionally lethal, they do not show enhanced rDNA defects compared to sgs1Δ alone. However, surprisingly, the dna2-2 sgs1Δ lethality is suppressed by deletion of FOB1. On the other hand, the dna2-2 sgs1Δ lethality is only partially suppressed by deletion of RAD51. We propose that the replication-associated defects that we document in the rDNA are characteristic of similar events occurring either stochastically throughout the genome or at other regions where replication forks move slowly or stall, such as telomeres, centromeres, or replication slow zones.

  6. Clonal growth and fine-scale genetic structure in tanoak (Notholithocarpus densiflorus: Fagaceae)

    Science.gov (United States)

    Richard S. Dodd; Wasima Mayer; Alejandro Nettel; Zara Afzal-Rafii

    2013-01-01

    The combination of sprouting and reproduction by seed can have important consequences on fine-scale spatial distribution of genetic structure (SGS). SGS is an important consideration for species’ restoration because it determines the minimum distance among seed trees to maximize genetic diversity while not prejudicing locally adapted genotypes. Local environmental...

  7. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

Chacón Rebollo, Tomás; Dia, Ben Mansour

    2015-01-01

This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales, due to the addition of the sub-grid spectral scales.
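The element-wise spectral construction described in these VMS records can be written schematically as follows; the notation (operator L, residual R, eigenpairs) is assumed here for illustration and is not taken verbatim from the paper. On each element K the sub-grid component solves a local problem driven by the residual of the resolved scale, and expanding in the eigenpairs (λ_k, φ_k) of L on K, orthonormal in the weighted space L²_w(K), gives:

```latex
% Local sub-grid problem on an element K (schematic):
%   L u' = R(\bar{u}) \ \text{in } K, \qquad u' = 0 \ \text{on } \partial K
% Spectral representation and truncation to N modes (the feasible VMS-spectral method):
u'\big|_K
  = \sum_{k=1}^{\infty} \frac{\bigl(R(\bar{u}),\,\phi_k\bigr)_{L^2_w(K)}}{\lambda_k}\,\phi_k
  \;\approx\; \sum_{k=1}^{N} \frac{\bigl(R(\bar{u}),\,\phi_k\bigr)_{L^2_w(K)}}{\lambda_k}\,\phi_k .
```

This form is exact as written only for self-adjoint L; for the non-self-adjoint case the abstract's extension of the spectral theorem supplies the appropriate eigenfunction base in the weighted L2 space.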

  8. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

    Chacón Rebollo, Tomás

    2015-03-01

This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales, due to the addition of the sub-grid spectral scales.

  9. SGS3 Cooperates with RDR6 in Triggering Geminivirus-Induced Gene Silencing and in Suppressing Geminivirus Infection in Nicotiana Benthamiana

    Directory of Open Access Journals (Sweden)

    Fangfang Li

    2017-09-01

RNA silencing has an important role in defending against virus infection in plants. Plants deficient in RNA silencing components often show enhanced susceptibility to viral infections. RNA-dependent RNA polymerase (RDR)-mediated antiviral defense has a pivotal role in resistance to many plant viruses. In RDR6-mediated defense against viral infection, a plant-specific RNA binding protein, Suppressor of Gene Silencing 3 (SGS3), was also found to fight against some viruses in Arabidopsis. In this study, we showed that SGS3 from Nicotiana benthamiana (NbSGS3) is required for sense-RNA-induced post-transcriptional gene silencing (S-PTGS) and for initiating sense-RNA-triggered systemic silencing. Further, the deficiency of NbSGS3 inhibited geminivirus-induced endogenous gene silencing (GIEGS) and promoted geminivirus infection. During TRV-mediated silencing of NbSGS3 or N. benthamiana RDR6 (NbRDR6), we found that their expression can be effectively fine-tuned. Plants with knock-down of both NbSGS3 and NbRDR6 almost totally blocked GIEGS and were more susceptible to geminivirus infection. These data suggest that NbSGS3 cooperates with NbRDR6 against GIEGS and geminivirus infection in N. benthamiana, which provides valuable information for breeding geminivirus-resistant plants.

  10. A Rad53 independent function of Rad9 becomes crucial for genome maintenance in the absence of the Recq helicase Sgs1.

    Directory of Open Access Journals (Sweden)

    Ida Nielsen

The conserved family of RecQ DNA helicases consists of caretaker tumour suppressors that defend genome integrity by acting on several pathways of DNA repair that maintain genome stability. In budding yeast, Sgs1 is the sole RecQ helicase, and it has been implicated in checkpoint responses, replisome stability, and dissolution of double Holliday junctions during homologous recombination. In this study we investigate a possible genetic interaction between SGS1 and RAD9 in the cellular response to methyl methanesulphonate (MMS)-induced damage and compare this with the genetic interaction between SGS1 and RAD24. The Rad9 protein, an adaptor for effector kinase activation, plays well-characterized roles in the DNA damage checkpoint response, whereas Rad24 is characterized as a sensor protein, also in the DNA damage checkpoint response. Here we unveil novel insights into the cellular response to MMS-induced damage. Specifically, we show a strong synergistic functionality between SGS1 and RAD9 for recovery from MMS-induced damage and for suppression of gross chromosomal rearrangements, which is not the case for SGS1 and RAD24. Intriguingly, it is a Rad53-independent function of Rad9 that becomes crucial for genome maintenance in the absence of Sgs1. Despite this, our dissection of the MMS checkpoint response reveals parallel, but unequal, pathways for Rad53 activation and highlights significant differences between MMS- and hydroxyurea (HU)-induced checkpoint responses with relation to the requirement for the Sgs1-interacting partner Topoisomerase III (Top3). Thus, whereas earlier studies have documented a Top3-independent role of Sgs1 in an HU-induced checkpoint response, we show here that upon MMS treatment, Sgs1 and Top3 together define a minor but parallel pathway to that of Rad9.

  11. LOW-MASS GALAXY FORMATION IN COSMOLOGICAL ADAPTIVE MESH REFINEMENT SIMULATIONS: THE EFFECTS OF VARYING THE SUB-GRID PHYSICS PARAMETERS

    International Nuclear Information System (INIS)

    ColIn, Pedro; Vazquez-Semadeni, Enrique; Avila-Reese, Vladimir; Valenzuela, Octavio; Ceverino, Daniel

    2010-01-01

We present numerical simulations aimed at exploring the effects of varying the sub-grid physics parameters on the evolution and the properties of the galaxy formed in a low-mass dark matter halo (∼7 × 10^10 h^-1 M_sun at redshift z = 0). The simulations are run within a cosmological setting with a nominal resolution of 218 pc comoving and are stopped at z = 0.43. For simulations that cannot resolve individual molecular clouds, we propose the criterion that the threshold density for star formation, n_SF, should be chosen such that the column density of the star-forming cells equals the threshold value for molecule formation, N ∼ 10^21 cm^-2, or ∼8 M_sun pc^-2. In all of our simulations, an extended old/intermediate-age stellar halo and a more compact younger stellar disk are formed, and in most cases, the halo's specific angular momentum is slightly larger than that of the galaxy, and sensitive to the SF/feedback parameters. We found that a non-negligible fraction of the halo stars are formed in situ in a spheroidal distribution. Changes in the sub-grid physics parameters affect significantly and in a complex way the evolution and properties of the galaxy: (1) lower threshold densities n_SF produce larger stellar effective radii R_e, less peaked circular velocity curves V_c(R), and greater amounts of low-density and hot gas in the disk mid-plane; (2) when stellar feedback is modeled by temporarily switching off radiative cooling in the star-forming regions, R_e increases (by a factor of ∼2 in our particular model), the circular velocity curve becomes flatter, and a complex multi-phase gaseous disk structure develops; (3) a more efficient local conversion of gas mass to stars, measured by a stellar particle mass distribution biased toward larger values, increases the strength of the feedback energy injection, driving outflows and inducing burstier SF histories; (4) if feedback is too strong, gas loss by galactic outflows - which are easier to produce in low
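The column-density criterion quoted in this record fixes the star-formation threshold density once the cell size is known, via n_SF ≈ N / Δx. A back-of-envelope check at the quoted nominal resolution (treating the 218 pc comoving resolution as the cell size is an assumption for illustration; the paper's adopted n_SF values may differ):

```python
# Star-formation threshold density from the column-density criterion
# n_SF ~ N_threshold / cell_size  (illustrative back-of-envelope check)

PC_IN_CM = 3.086e18      # 1 parsec in cm
N_THRESHOLD = 1.0e21     # cm^-2, threshold for molecule formation (from the abstract)
CELL_SIZE_PC = 218.0     # pc, nominal comoving resolution (assumed as the cell size)

cell_size_cm = CELL_SIZE_PC * PC_IN_CM
n_sf = N_THRESHOLD / cell_size_cm   # cm^-3

print(f"n_SF ~ {n_sf:.2f} cm^-3")   # roughly 1.5 cm^-3 for 218 pc cells
```

Coarser grids would lower this threshold proportionally, consistent with the abstract's point that n_SF must be tied to resolution rather than chosen freely.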

  12. 3' fragment of miR173-programmed RISC-cleaved RNA is protected from degradation in a complex with RISC and SGS3.

    Science.gov (United States)

    Yoshikawa, Manabu; Iki, Taichiro; Tsutsui, Yasuhiro; Miyashita, Kyoko; Poethig, R Scott; Habu, Yoshiki; Ishikawa, Masayuki

    2013-03-05

    trans-acting small interfering RNAs (tasiRNAs) are plant-specific endogenous siRNAs produced via a unique pathway whose first step is the microRNA (miRNA)-programmed RNA-induced silencing complex (RISC)-mediated cleavage of tasiRNA gene (TAS) transcripts. One of the products is subsequently transformed into tasiRNAs by a pathway that requires several factors including SUPPRESSOR OF GENE SILENCING3 (SGS3) and RNA-DEPENDENT RNA POLYMERASE6. Here, using in vitro assembled ARGONAUTE (AGO)1-RISCs, we show that SGS3 is recruited onto RISCs only when they bind target RNA. Following cleavage by miRNA173 (miR173)-programmed RISC, SGS3 was found in complexes containing cleaved TAS2 RNA and RISC. The 3' cleavage fragment (the source of tasiRNAs) was protected from degradation in this complex. Depletion of SGS3 did not affect TAS2 RNA cleavage by miR173-programmed RISC, but did affect the stability of the 3' cleavage fragment. When the 3' nucleotide of 22-nt miR173 was deleted or the corresponding nucleotide in TAS2 RNA was mutated, the complex was not observed and the 3' cleavage fragment was degraded. Importantly, these changes in miR173 or TAS2 RNA are known to lead to a loss of tasiRNA production in vivo. These results suggest that (i) SGS3 associates with AGO1-RISC via the double-stranded RNA formed by the 3'-terminal nucleotides of 22-nt miR173 and corresponding target RNA, which probably protrudes from the AGO1-RISC molecular surface, (ii) SGS3 protects the 3' cleavage fragment of TAS2 RNA from degradation, and (iii) the observed SGS3-dependent stabilization of the 3' fragment of TAS2 RNA is key to tasiRNA production.

  13. SGS Analysis of the Evolution Equations of the Mixture Fraction and the Progress Variable Variances in the Presence of Spray Combustion

    Directory of Open Access Journals (Sweden)

    H. Meftah

    2010-03-01

In this paper, direct numerical simulation databases have been generated to analyze the impact of the propagation of a spray flame on several subgrid-scale (SGS) models dedicated to the closure of the transport equations of the subgrid fluctuations of the mixture fraction Z and the progress variable c. Computations have been carried out starting from a previous inert database [22] in which a cold flame has been ignited in the center of the mixture when the droplet segregation and evaporation rate were at their highest levels. First, a RANS analysis has shown an abrupt increase of the mixture fraction fluctuations due to the fuel consumption by the flame. Indeed, the local vapour mass fraction then reaches a minimum value, far from the saturation level. This leads to a strong increase of the evaporation rate, which is also accompanied by a diminution of the oxidiser level. In the second part of this paper, a detailed evaluation of the subgrid models used to close the variance and dissipation rates of the mixture fraction and the progress variable has been carried out. Models that had been selected for their efficiency in inert flows have shown very good behaviour in the framework of reactive flows.

  14. Controlling meiotic recombinational repair - specifying the roles of ZMMs, Sgs1 and Mus81/Mms4 in crossover formation.

    Directory of Open Access Journals (Sweden)

    Ashwini Oke

    2014-10-01

Crossovers (COs) play a critical role in ensuring proper alignment and segregation of homologous chromosomes during meiosis. How the cell balances recombination between CO and noncrossover (NCO) outcomes is not completely understood. Also lacking is an understanding of what constrains the extent of DNA repair such that multiple events do not arise from a single double-strand break (DSB). Here, by interpreting signatures that result from recombination genome-wide, we find that synaptonemal complex proteins promote crossing over in distinct ways. Our results suggest that Zip3 (RNF212) promotes biased cutting of the double Holliday junction (dHJ) intermediate whereas, surprisingly, Msh4 does not. Moreover, detailed examination of conversion tracts in sgs1 and mms4-md mutants reveals distinct aberrant recombination events involving multiple chromatid invasions. In sgs1 mutants, these multiple invasions are generally multichromatid, involving 3-4 chromatids; in mms4-md mutants the multiple invasions preferentially resolve into one or two chromatids. Our analysis suggests that Mus81/Mms4 (Eme1), rather than just being a minor resolvase for COs, is crucial for both COs and NCOs in preventing chromosome entanglements by removing 3'-flaps to promote second-end capture. Together our results force a reevaluation of how key recombination enzymes collaborate to specify the outcome of meiotic DNA repair.

  15. Reduced fine-scale spatial genetic structure in grazed populations of Dianthus carthusianorum.

    Science.gov (United States)

    Rico, Y; Wagner, H H

    2016-11-01

Strong spatial genetic structure in plant populations can increase homozygosity, reducing genetic diversity and adaptive potential. The strength of spatial genetic structure largely depends on rates of seed dispersal and pollen flow. Seeds without dispersal adaptations are likely to be dispersed over short distances within the vicinity of the mother plant, resulting in spatial clustering of related genotypes (fine-scale spatial genetic structure, hereafter SGS). However, primary seed dispersal by zoochory can promote effective dispersal, increasing the mixing of seeds and influencing SGS within plant populations. In this study, we investigated the effects of seed dispersal by rotational sheep grazing on the strength of SGS and genetic diversity using 11 nuclear microsatellites for 49 populations of the calcareous grassland forb Dianthus carthusianorum. Populations connected by rotational sheep grazing showed significantly weaker SGS and higher genetic diversity than populations in ungrazed grasslands. Independent of grazing treatment, small populations showed significantly stronger SGS and lower genetic diversity than larger populations, likely due to genetic drift. A lack of significant differences in the strength of SGS and genetic diversity between recently colonized and pre-existing populations suggested that populations colonized after the reintroduction of rotational sheep grazing were likely founded by colonists from diverse source populations. We conclude that dispersal by rotational sheep grazing has the potential to considerably reduce SGS within D. carthusianorum populations. Our study highlights the effectiveness of landscape management by rotational sheep grazing in reducing genetic structure at local scales within restored plant populations.

  16. Diagnostic value of 99mTc-pertechnetate salivary gland scintigraphy (SGS) in Sjoegren's syndrome (SS). Comparative study with symptomatic non Sjoegren patients and healthy controls

    International Nuclear Information System (INIS)

    Lobo, G.; Ladron de Guevara, D.; Zerboni, A.; Aguilera, S.

    2002-01-01

The aim of this study was to describe SGS findings in patients with SS and to compare them with symptomatic non-Sjoegren patients and healthy control individuals, estimating the performance of SGS in SS diagnosis. Materials and Methods: Fifty-three control individuals (average age: 53.7 yr, range: 27-83 yr) and 169 patients with subjective xerostomia underwent 99mTc-pertechnetate scintigraphy. The symptomatic group consisted of: 112 patients with Sjoegren's syndrome (average age: 53.7 yr, range: 16-81 yr) according to the modified European Classification Criteria, 42 patients with fibromyalgia (FM) (average age: 48.2 yr, range: 19-76 yr) who presented non-specific chronic sialadenitis or a normal labial biopsy, and 15 patients with keratoconjunctivitis sicca (KS) (average age: 40.9 yr, range: 23-57 yr). SGS was performed following i.v. injection of 10 mCi 99mTc-pertechnetate, in a dynamic acquisition of 60 15-sec frames, with lemon juice given orally at 20 min. Irregular regions of interest (ROI) were drawn over the salivary glands and over the brain for background assessment, building time-activity curves. SGS was classified according to the visual intensity of gland tracer uptake and excretion before and after lemon and curve evaluation as: normal (intensity of gland uptake fourfold background activity, ascending curve with a fast and profound fall after lemon), mild alteration (light decrease in gland uptake or excretion, with a normal curve shape), moderate alteration (evident uptake and excretion decrease with a median Mita curve), and severe alteration (very low or absent uptake, flat or sloped curve). Scintigraphic findings were compared with diagnosis, calculating positive (PPV) and negative (NPV) predictive values for SS. Results: The results of SGS according to diagnosis are presented. The SS group had a higher incidence of severe alterations (p<0.001) than each of the other clinical groups and a lower proportion of mild alterations (p:0.008) and normal scans (p:0.005) than both control and KS patients. The KS

  17. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  18. Coarse grid simulation of bed expansion characteristics of industrial-scale gas–solid bubbling fluidized beds

    NARCIS (Netherlands)

    Wang, J.; van der Hoef, Martin Anton; Kuipers, J.A.M.

    2010-01-01

Two-fluid modeling of the hydrodynamics of industrial-scale gas-fluidized beds remains a long-standing challenge for both engineers and scientists. In this study, we suggest a simple method to modify currently available drag correlations to allow for the effect of unresolved sub-grid scale
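One common way to implement such a coarse-grid correction (a sketch of the general idea, not the authors' specific method) is to scale a homogeneous drag correlation such as Wen & Yu by a heterogeneity factor H_d < 1 that accounts for unresolved particle clustering. The functional form of `heterogeneity_factor` below is purely illustrative:

```python
import math

def wen_yu_drag(eps_g, rho_g, mu_g, d_p, u_slip):
    """Homogeneous Wen & Yu gas-solid drag coefficient beta [kg/(m^3 s)]."""
    eps_s = 1.0 - eps_g
    re = eps_g * rho_g * abs(u_slip) * d_p / mu_g        # particle Reynolds number
    if re < 1000.0:
        cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687)
    else:
        cd = 0.44
    return 0.75 * cd * eps_g * eps_s * rho_g * abs(u_slip) / d_p * eps_g ** (-2.65)

def heterogeneity_factor(eps_g):
    """Purely illustrative sub-grid correction H_d <= 1: strongest drag reduction
    at intermediate voidage, where unresolved clusters are most pronounced."""
    if 0.4 < eps_g < 1.0:
        return 1.0 - 0.7 * math.sin(math.pi * (eps_g - 0.4) / 0.6)
    return 1.0

def corrected_drag(eps_g, rho_g, mu_g, d_p, u_slip):
    """Coarse-grid effective drag = homogeneous drag scaled by H_d."""
    return heterogeneity_factor(eps_g) * wen_yu_drag(eps_g, rho_g, mu_g, d_p, u_slip)

# Air with 75-micron Geldart A-like particles (illustrative values)
beta_h = wen_yu_drag(0.7, 1.2, 1.8e-5, 75e-6, 0.3)
beta_c = corrected_drag(0.7, 1.2, 1.8e-5, 75e-6, 0.3)
print(beta_c < beta_h)   # reduced effective drag on the coarse grid
```

Reducing the effective drag in this way is what lets coarse two-fluid simulations reproduce the bed expansion that fully resolved sub-grid structures would otherwise produce.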

  19. A Rad53 Independent Function of Rad9 Becomes Crucial for Genome Maintenance in the Absence of the RecQ Helicase Sgs1

    DEFF Research Database (Denmark)

    Nielsen, Ida; Bentsen, Iben Bach; Andersen, Anni Hangaard

    2013-01-01

    becomes crucial for genome maintenance in the absence of Sgs1. Despite this, our dissection of the MMS checkpoint response reveals parallel, but unequal pathways for Rad53 activation and highlights significant differences between MMS- and hydroxyurea (HU)-induced checkpoint responses with relation...

  20. Arabidopsis RecQsim, a plant-specific member of the RecQ helicase family, can suppress the MMS hypersensitivity of the yeast sgs1 mutant

    NARCIS (Netherlands)

    Bagherieh-Najjar, MB; de Vries, OMH; Kroon, JTM; Wright, EL; Elborough, KM; Hille, J; Dijkwel, PP

    The Arabidopsis genome contains seven genes that belong to the RecQ family of ATP-dependent DNA helicases. RecQ members in Saccharomyces cerevisiae (SGS1) and man (WRN, BLM and RecQL4) are involved in DNA recombination, repair and genome stability maintenance, but little is known about the function

  1. 3′ fragment of miR173-programmed RISC-cleaved RNA is protected from degradation in a complex with RISC and SGS3

    Science.gov (United States)

    Yoshikawa, Manabu; Iki, Taichiro; Tsutsui, Yasuhiro; Miyashita, Kyoko; Poethig, R. Scott; Habu, Yoshiki; Ishikawa, Masayuki

    2013-01-01

    trans-acting small interfering RNAs (tasiRNAs) are plant-specific endogenous siRNAs produced via a unique pathway whose first step is the microRNA (miRNA)-programmed RNA-induced silencing complex (RISC)–mediated cleavage of tasiRNA gene (TAS) transcripts. One of the products is subsequently transformed into tasiRNAs by a pathway that requires several factors including SUPPRESSOR OF GENE SILENCING3 (SGS3) and RNA-DEPENDENT RNA POLYMERASE6. Here, using in vitro assembled ARGONAUTE (AGO)1–RISCs, we show that SGS3 is recruited onto RISCs only when they bind target RNA. Following cleavage by miRNA173 (miR173)-programmed RISC, SGS3 was found in complexes containing cleaved TAS2 RNA and RISC. The 3′ cleavage fragment (the source of tasiRNAs) was protected from degradation in this complex. Depletion of SGS3 did not affect TAS2 RNA cleavage by miR173-programmed RISC, but did affect the stability of the 3′ cleavage fragment. When the 3′ nucleotide of 22-nt miR173 was deleted or the corresponding nucleotide in TAS2 RNA was mutated, the complex was not observed and the 3′ cleavage fragment was degraded. Importantly, these changes in miR173 or TAS2 RNA are known to lead to a loss of tasiRNA production in vivo. These results suggest that (i) SGS3 associates with AGO1–RISC via the double-stranded RNA formed by the 3′-terminal nucleotides of 22-nt miR173 and corresponding target RNA, which probably protrudes from the AGO1–RISC molecular surface, (ii) SGS3 protects the 3′ cleavage fragment of TAS2 RNA from degradation, and (iii) the observed SGS3-dependent stabilization of the 3′ fragment of TAS2 RNA is key to tasiRNA production. PMID:23417299

  2. Survival and growth of yeast without telomere capping by Cdc13 in the absence of Sgs1, Exo1, and Rad9.

    Directory of Open Access Journals (Sweden)

    Hien-Ping Ngo

    2010-08-01

Maintenance of telomere capping is absolutely essential to the survival of eukaryotic cells. Telomere capping proteins, such as Cdc13 and POT1, are essential for the viability of budding yeast and mammalian cells, respectively. Here we identify, for the first time, three genetic modifications that allow budding yeast cells to survive without telomere capping by Cdc13. We found that simultaneous inactivation of Sgs1, Exo1, and Rad9, three DNA damage response (DDR) proteins, is sufficient to allow cell division in the absence of Cdc13. Quantitative amplification of ssDNA (QAOS) was used to show that the RecQ helicase Sgs1 plays an important role in the resection of uncapped telomeres, especially in the absence of the checkpoint protein Rad9. Strikingly, simultaneous deletion of SGS1 and the nuclease EXO1 further reduces resection at uncapped telomeres and, together with deletion of RAD9, permits cell survival without CDC13. Pulsed-field gel electrophoresis studies show that cdc13-1 rad9Δ sgs1Δ exo1Δ strains can maintain linear chromosomes despite the absence of telomere capping by Cdc13. However, with continued passage, the telomeres of such strains eventually become short and are maintained by recombination-based mechanisms. Remarkably, cdc13Δ rad9Δ sgs1Δ exo1Δ strains, lacking any Cdc13 gene product, are viable and can grow indefinitely. Our work has uncovered a critical role for RecQ helicases in limiting the division of cells with uncapped telomeres, and this may provide one explanation for the increased tumorigenesis in human diseases associated with mutations of RecQ helicases. Our results reveal the plasticity of the telomere cap and indicate that the essential role of telomere capping is to counteract specific aspects of the DDR.

  3. Subgrid-scale stresses and scalar fluxes constructed by the multi-scale turnover Lagrangian map

    Science.gov (United States)

    AL-Bairmani, Sukaina; Li, Yi; Rosales, Carlos; Xie, Zheng-tong

    2017-04-01

The multi-scale turnover Lagrangian map (MTLM) [C. Rosales and C. Meneveau, "Anomalous scaling and intermittency in three-dimensional synthetic turbulence," Phys. Rev. E 78, 016313 (2008)] uses nested multi-scale Lagrangian advection of fluid particles to distort a Gaussian velocity field and, as a result, generate non-Gaussian synthetic velocity fields. Passive scalar fields can be generated with the procedure when the fluid particles carry a scalar property [C. Rosales, "Synthetic three-dimensional turbulent passive scalar fields via the minimal Lagrangian map," Phys. Fluids 23, 075106 (2011)]. The synthetic fields have been shown to possess highly realistic statistics characterizing small-scale intermittency, geometrical structures, and vortex dynamics. In this paper, we present a study of the synthetic fields using the filtering approach. This approach, which has not been pursued so far, provides insights on the potential applications of the synthetic fields in large eddy simulations and subgrid-scale (SGS) modelling. The MTLM method is first generalized to model scalar fields produced by an imposed linear mean profile. We then calculate the subgrid-scale stress, SGS scalar flux, SGS scalar variance, as well as related quantities from the synthetic fields. Comparison with direct numerical simulations (DNSs) shows that the synthetic fields reproduce the probability distributions of the SGS energy and scalar dissipation rather well. Related geometrical statistics also display close agreement with DNS results. The synthetic fields slightly under-estimate the mean SGS energy dissipation and slightly over-predict the mean SGS scalar variance dissipation. In general, the synthetic fields tend to slightly under-estimate the probability of large fluctuations for most quantities we have examined. Small-scale anisotropy in the scalar field originating from the imposed mean gradient is captured. The sensitivity of the synthetic fields to the input spectra is assessed by
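The a-priori filtering approach used in this record reduces, for the SGS stress, to comparing the filtered product of velocities with the product of filtered velocities, τ = filt(u·u) − filt(u)·filt(u). A minimal 1D sketch with a spectral Gaussian filter (the synthetic field and filter width below are assumptions for illustration, not the MTLM fields of the paper):

```python
import numpy as np

def gaussian_filter_periodic(f, delta, dx):
    """Filter a periodic 1D field spectrally with the classical LES Gaussian
    filter of width delta (transfer function exp(-(k*delta)^2/24))."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    g_hat = np.exp(-(k * delta) ** 2 / 24.0)
    return np.real(np.fft.ifft(np.fft.fft(f) * g_hat))

# Synthetic multi-mode periodic "velocity" field on [0, 2*pi)
n = 512
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
rng = np.random.default_rng(1)
u = sum(np.cos(m * x + rng.uniform(0, 2 * np.pi)) / m for m in range(1, 33))

delta = 16 * dx                              # filter width (assumed)
uu_f = gaussian_filter_periodic(u * u, delta, dx)
u_f = gaussian_filter_periodic(u, delta, dx)

tau = uu_f - u_f * u_f                       # SGS stress (1D analogue of tau_ij)
print(f"mean SGS stress = {tau.mean():.4f}")
```

The mean of τ is positive here because filtering removes kinetic energy from the resolved field; in 3D the same construction gives the full tensor τ_ij whose statistics the paper compares against DNS.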

  4. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    Science.gov (United States)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties for any fixed LES resolution.
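    Velocity structure functions of the kind entering the hierarchy above can be estimated directly from a sampled field; a short sketch (the field and separations are illustrative, not DNS data):

```python
import numpy as np

def structure_function(u, p, r):
    """S_p(r) = <|u(x + r) - u(x)|^p> for a periodic 1-D field on a uniform grid;
    r is the separation expressed in grid points."""
    du = np.roll(u, -r) - u
    return np.mean(np.abs(du) ** p)

# Illustrative multi-scale field with a decaying spectrum
n = 4096
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
u = sum(np.sin(m * x + rng.uniform(0, 2 * np.pi)) / m for m in range(1, 64))

# Second-order structure function grows with separation at small scales
s2 = [structure_function(u, 2, r) for r in (1, 4, 16, 64)]
print(s2)
```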

  5. An SGS3-like protein functions in RNA-directed DNA methylation and transcriptional gene silencing in Arabidopsis

    KAUST Repository

    Zheng, Zhimin

    2010-01-06

    RNA-directed DNA methylation (RdDM) is an important epigenetic mechanism for silencing transgenes and endogenous repetitive sequences such as transposons. The RD29A promoter-driven LUCIFERASE transgene and its corresponding endogenous RD29A gene are hypermethylated and silenced in the Arabidopsis DNA demethylase mutant ros1. By screening for second-site suppressors of ros1, we identified the RDM12 locus. The rdm12 mutation releases the silencing of the RD29A-LUC transgene and the endogenous RD29A gene by reducing the promoter DNA methylation. The rdm12 mutation also reduces DNA methylation at endogenous RdDM target loci, including transposons and other repetitive sequences. In addition, the rdm12 mutation affects the levels of small interfering RNAs (siRNAs) from some of the RdDM target loci. RDM12 encodes a protein with XS and coiled-coil domains, and is similar to SGS3, which is a partner protein of RDR6 and can bind to double-stranded RNAs with a 5′ overhang, and is required for several post-transcriptional gene silencing pathways. Our results show that RDM12 is a component of the RdDM pathway, and suggest that RdDM may involve double-stranded RNAs with a 5′ overhang and the partnering between RDM12 and RDR2. © 2010 Blackwell Publishing Ltd.

  6. Heteroduplex DNA position defines the roles of the Sgs1, Srs2, and Mph1 helicases in promoting distinct recombination outcomes.

    Directory of Open Access Journals (Sweden)

    Katrina Mitchel

    Full Text Available The contributions of the Sgs1, Mph1, and Srs2 DNA helicases during mitotic double-strand break (DSB) repair in yeast were investigated using a gap-repair assay. A diverged chromosomal substrate was used as a repair template for the gapped plasmid, allowing mismatch-containing heteroduplex DNA (hDNA) formed during recombination to be monitored. Overall DSB repair efficiencies and the proportions of crossovers (COs) versus noncrossovers (NCOs) were determined in wild-type and helicase-defective strains, allowing the efficiency of CO and NCO production in each background to be calculated. In addition, the products of individual NCO events were sequenced to determine the location of hDNA. Because hDNA position is expected to differ depending on whether a NCO is produced by synthesis-dependent strand annealing (SDSA) or through a Holliday junction (HJ)-containing intermediate, its position allows the underlying molecular mechanism to be inferred. Results demonstrate that each helicase reduces the proportion of CO recombinants, but that each does so in a fundamentally different way. Mph1 does not affect the overall efficiency of gap repair, and its loss alters the CO-NCO ratio by promoting SDSA at the expense of HJ-containing intermediates. By contrast, Sgs1 and Srs2 are each required for efficient gap repair, strongly promoting NCO formation and having little effect on CO efficiency. hDNA analyses suggest that all three helicases promote SDSA, and that Sgs1 and Srs2 additionally dismantle HJ-containing intermediates. The hDNA data are consistent with the proposed role of Sgs1 in the dissolution of double HJs, and we propose that Srs2 dismantles nicked HJs.

  7. Effects of Resolution on the Simulation of Boundary-layer Clouds and the Partition of Kinetic Energy to Subgrid Scales

    Directory of Open Access Journals (Sweden)

    Anning Cheng

    2010-02-01

    Full Text Available Seven boundary-layer cloud cases are simulated with the UCLA-LES (University of California, Los Angeles, large eddy simulation) model with different horizontal and vertical gridspacing to investigate how the results depend on gridspacing. Some variables are more sensitive to horizontal gridspacing, while others are more sensitive to vertical gridspacing, and still others are sensitive to both horizontal and vertical gridspacings with similar or opposite trends. For cloud-related variables having the opposite dependence on horizontal and vertical gridspacings, changing the gridspacing proportionally in both directions gives the appearance of convergence. In this study, we mainly discuss the impact of subgrid-scale (SGS) kinetic energy (KE) on the simulations with coarsening of horizontal and vertical gridspacings. A running-mean operator is used to separate the KE of the high-resolution benchmark simulations into that of the resolved scales of coarse-resolution simulations and that of the SGSs. The diagnosed SGS KE is compared with that parameterized by the Smagorinsky-Lilly SGS scheme at various gridspacings. It is found that the parameterized SGS KE for the coarse-resolution simulations is usually underestimated while the resolved KE is unrealistically large, compared to the benchmark simulations. However, the sum of the resolved and SGS KEs is about the same for simulations with various gridspacings. The partitioning of SGS and resolved heat and moisture transports is consistent with that of SGS and resolved KE, which means that the parameterized transports are underestimated but the resolved-scale transports are overestimated. On the whole, energy shifts to large scales as the horizontal gridspacing becomes coarse; hence the size of clouds and the resolved circulation increase, and the clouds become more stratiform-like with an increase in cloud fraction, cloud liquid-water path and surface precipitation; when coarse vertical gridspacing is used, cloud sizes do not
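    The running-mean separation used above to diagnose resolved versus SGS kinetic energy can be sketched in one dimension (the window width and sample field are illustrative, not from the UCLA-LES runs):

```python
import numpy as np

def running_mean(f, width):
    """Running (top-hat) mean over an odd window, with periodic wrapping."""
    pad = width // 2
    fp = np.concatenate([f[-pad:], f, f[:pad]])
    return np.convolve(fp, np.ones(width) / width, mode="valid")

# Illustrative high-resolution velocity sample
rng = np.random.default_rng(1)
n = 1024
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.4 * np.sin(16 * x) + 0.1 * rng.standard_normal(n)

u_res = running_mean(u, 17)           # scales resolvable on a coarser grid
ke_total = 0.5 * np.mean(u ** 2)      # benchmark KE
ke_res = 0.5 * np.mean(u_res ** 2)    # resolved-scale KE
ke_sgs = ke_total - ke_res            # diagnosed SGS KE (the filtered-out part)
print(ke_res, ke_sgs)
```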

  8. An improved anisotropy-resolving subgrid-scale model for flows in laminar–turbulent transition region

    International Nuclear Information System (INIS)

    Inagaki, Masahide; Abe, Ken-ichi

    2017-01-01

    Highlights: • An anisotropy-resolving subgrid-scale model, covering a wide range of grid resolutions, is improved. • The new model enhances its applicability to flows in the laminar-turbulent transition region. • A mixed-timescale subgrid-scale model is used as the eddy viscosity model. • The proposed model successfully predicts the channel flows at transitional Reynolds numbers. • The influence of the definition of the grid-filter width is also investigated. - Abstract: Some types of mixed subgrid-scale (SGS) models combining an isotropic eddy-viscosity model and a scale-similarity model can be used to effectively improve the accuracy of large eddy simulation (LES) in predicting wall turbulence. Abe (2013) has recently proposed a stabilized mixed model that maintains its computational stability through a unique procedure that prevents the energy transfer between the grid-scale (GS) and SGS components induced by the scale-similarity term. At the same time, since this model can successfully predict the anisotropy of the SGS stress, the predictive performance, particularly at coarse grid resolutions, is remarkably improved in comparison with other mixed models. However, since the stabilized anisotropy-resolving SGS model includes a transport equation of the SGS turbulence energy, k_SGS, containing a production term proportional to the square root of k_SGS, its applicability to flows with both laminar and turbulent regions is not so high. This is because such a production term causes k_SGS to self-reproduce. Consequently, the laminar-turbulent transition region predicted by this model depends on the inflow or initial condition of k_SGS. To resolve these issues, in the present study, the mixed-timescale (MTS) SGS model proposed by Inagaki et al. (2005) is introduced into the stabilized mixed model as the isotropic eddy-viscosity part and the production term in the k_SGS transport equation. In the MTS model, the SGS turbulence energy, k_es, estimated by

  9. Landscape-Level and Fine-Scale Genetic Structure of the Neo tropical Tree Protium spruceanum (Burseraceae)

    International Nuclear Information System (INIS)

    Vieira, F.D.A.; Fajardo, C.G.; De Souza, A.M.; Dulciniea De Carvalho, D.

    2010-01-01

    Knowledge of genetic structure at different scales and its correlation with the current landscape is fundamental for evaluating the importance of evolutionary processes and identifying conservation units. Here, we used allozyme loci to examine the spatial genetic structure (SGS) of 230 individuals of Protium spruceanum, a native canopy-emergent species in five fragments of Brazilian Atlantic forest (1 to 11.8 ha) and four ecological corridors (460 to 1000 m length). Wright's FST statistic and Mantel tests revealed little evidence of significant genetic structure at the landscape scale (FST = 0.027; rM = -0.051, P = .539). At the fine scale, low levels of relatedness within fragments and corridors (Sp = 0.008, P > .05) were observed. Differences in the levels and distribution of SGS at the two spatial scales are discussed in relation to biological and conservation strategies for corridors and forest fragments.

  10. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  11. The influence of Ag content and annealing time on structural and optical properties of SGS antimony-germanate glass doped with Er3+ ions

    Science.gov (United States)

    Zmojda, J.; Kochanowicz, M.; Miluski, P.; Baranowska, A.; Basa, A.; Jadach, R.; Sitarz, M.; Dorosz, D.

    2018-05-01

    A series of erbium-doped SGS antimony-germanate glasses embedding silver (Ag0) nanoparticles (NPs) have been synthesized by a one-step melt-quench thermochemical reduction technique. The effect of NP concentration and annealing time on the structural and photoluminescent (PL) properties was investigated. Raman spectra measured in situ as a function of temperature allow the structural changes in the vicinity of Ag+ ions to be determined and confirm the thermochemical reduction of Ag+ ions by Sb3+ ions. The surface plasmon resonance absorption band was evidenced near 450 nm. The impact of the local field effect generated by Ag0 NPs, and of energy transfer from the surface of silver NPs to trivalent erbium ions, on near-infrared and up-conversion luminescence is described in terms of enhancement and quenching phenomena.

  12. Fine-scale spatial genetic structure in predominantly selfing plants with limited seed dispersal: A rule or exception?

    Directory of Open Access Journals (Sweden)

    Sergei Volis

    2016-04-01

    Full Text Available Gene flow at a fine scale is still poorly understood despite its recognized importance for plant population demographic and genetic processes. We tested the hypothesis that the intensity of gene flow will be lower, and the strength of spatial genetic structure (SGS) higher, in more peripheral populations because of lower population density. The study was performed on the predominantly selfing Avena sterilis and included: (1) direct measurement of dispersal in a controlled environment; and (2) analyses of SGS in three natural populations, sampled in linear transects at fixed increasing inter-plant distances. We found that in A. sterilis major seed dispersal is by gravity in the close vicinity (less than 2 m) of the mother plant, with a minor additional effect of wind. Analysis of SGS with six nuclear SSRs revealed a significant autocorrelation for the distance class of 1 m only in the most peripheral desert population, while in the two core populations with Mediterranean conditions, no genetic structure was found. Our results support the hypothesis that the intensity of SGS increases from the species core to the periphery as a result of decreased within-population gene flow related to low plant density. Our findings also show that predominant self-pollination and highly localized seed dispersal lead to SGS at a very fine scale, but only if plant density is not too high.

  13. A criterion of orthogonality on the assumption and restrictions in subgrid-scale modelling of turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Fang, L. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China); Sun, X.Y. [LMP, Ecole Centrale de Pékin, Beihang University, Beijing 100191 (China); Liu, Y.W., E-mail: liuyangwei@126.com [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Co-Innovation Center for Advanced Aero-Engine, Beihang University, Beijing 100191 (China)

    2016-12-09

    To shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if there are multiple stationary restrictions in a modelling procedure, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on generally guiding the SGS modelling methodology. - Highlights: • The concepts of assumption and restriction in the SGS modelling procedure are defined. • A criterion of orthogonality on the assumption and restrictions is derived. • Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion.

  14. Spatial Scales of Genetic Structure in Free-Standing and Strangler Figs (Ficus, Moraceae) Inhabiting Neotropical Forests.

    Directory of Open Access Journals (Sweden)

    Katrin Heer

    Full Text Available Wind-borne pollinating wasps (Agaonidae) can transport fig (Ficus sp., Moraceae) pollen over enormous distances (> 100 km). Because of their extensive breeding areas, Neotropical figs are expected to exhibit weak patterns of genetic structure at local and regional scales. We evaluated genetic structure at the regional to continental scale (Panama, Costa Rica, and Peru) for the free-standing fig species Ficus insipida. Genetic differentiation was detected only at distances > 300 km (Jost's Dest = 0.68 ± 0.07 and FST = 0.30 ± 0.03 between Mesoamerican and Amazonian sites), and evidence for phylogeographic structure (RST >> permuted RST) was significant only in comparisons between Central and South America. Further, we assessed local-scale spatial genetic structure (SGS, d ≤ 8 km) in Panama and developed an agent-based model parameterized with data from F. insipida to estimate minimum pollination distances, which determine the contribution of pollen dispersal to SGS. The local-scale data for F. insipida were compared to SGS data collected for an additional free-standing fig, F. yoponensis (subgenus Pharmacosycea), and two species of strangler figs, F. citrifolia and F. obtusifolia (subgenus Urostigma), sampled in Panama. All four species displayed significant SGS (mean Sp = 0.014 ± 0.012). Model simulations indicated that most pollination events likely occur at distances >> 1 km, largely ruling out spatially limited pollen dispersal as the determinant of SGS in F. insipida and, by extension, the other fig species. Our results are consistent with the view that Ficus develops fine-scale SGS primarily as a result of localized seed dispersal and/or clumped seedling establishment despite extensive long-distance pollen dispersal. We discuss several ecological and life history factors that could have species- or subgenus-specific impacts on the genetic structure of Neotropical figs.

  15. Subgrid-scale turbulence in shock-boundary layer flows

    Science.gov (United States)

    Jammalamadaka, Avinash; Jaberi, Farhad

    2015-04-01

    Data generated by direct numerical simulation (DNS) for a Mach 2.75 zero-pressure-gradient turbulent boundary layer interacting with shocks of different intensities are used for a priori analysis of subgrid-scale (SGS) turbulence and various terms in the compressible filtered Navier-Stokes equations. The numerical method used for DNS is based on a hybrid scheme that uses a non-dissipative central scheme in the shock-free turbulent regions and a robust monotonicity-preserving scheme in the shock regions. The behavior of the SGS stresses and their components, namely the Leonard, Cross, and Reynolds components, is examined in various regions of the flow for different shock intensities and filter widths. The backscatter in various regions of the flow is found to be significant only instantaneously, while the ensemble-averaged statistics indicate no significant backscatter. The budgets for the SGS kinetic energy equation are examined for a better understanding of shock-turbulence interactions at the subgrid level and also with the aim of providing useful information for one-equation LES models. A term-by-term analysis of SGS terms in the filtered total energy equation indicates that while each term in this equation is significant by itself, the net contribution by all of them is relatively small. This observation is consistent with our a posteriori analysis.
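    The Leonard-Cross-Reynolds decomposition examined above can be reproduced a priori on any sampled field; a 1-D scalar sketch with a Gaussian filter (the synthetic field is illustrative, not the DNS data of the paper):

```python
import numpy as np

def gfilter(f, delta, dx):
    """Gaussian filter of width delta applied spectrally to a periodic 1-D field."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(np.exp(-(k * delta) ** 2 / 24.0) * np.fft.fft(f)))

n = 512
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
rng = np.random.default_rng(2)
u = np.sin(3 * x) + 0.2 * rng.standard_normal(n)   # illustrative field

delta = 10 * dx
ub = gfilter(u, delta, dx)   # resolved part
up = u - ub                  # subfilter fluctuation

tau = gfilter(u * u, delta, dx) - ub * ub          # exact SGS term
leonard = gfilter(ub * ub, delta, dx) - ub * ub    # resolved-resolved interactions
cross = gfilter(2.0 * ub * up, delta, dx)          # resolved-unresolved interactions
reynolds = gfilter(up * up, delta, dx)             # unresolved-unresolved interactions

# The decomposition is exact: tau = Leonard + Cross + Reynolds
print(np.allclose(tau, leonard + cross + reynolds))
```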

  16. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    Science.gov (United States)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESMs) average over considerable sub-grid heterogeneity in land surface properties and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimation of ET implies greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties and neglect lateral transfer of water across heterogeneous landscapes at the global scale. We use a Budyko framework to express ET as a function of P and PET and to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at the global scale shows that the effects of sub-grid heterogeneity are most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate them into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for the existing free-drainage lower boundary condition in GLEAM, and quantify whether, and by how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at the global scale.
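    The averaging bias follows from the concavity of the Budyko curve: computing ET from grid-mean P and PET exceeds the mean of ET computed cell by cell, and the effect is strongest when P and PET are inversely correlated across the landscape. A sketch using the classic Budyko (1974) form (the landscape values below are hypothetical, not from the study):

```python
import numpy as np

def budyko_et(P, PET):
    """Budyko (1974) curve: ET = P * sqrt(phi * tanh(1/phi) * (1 - exp(-phi))),
    with aridity index phi = PET / P."""
    phi = PET / P
    return P * np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

# Hypothetical heterogeneous landscape: P inversely correlated with PET,
# as in steep mountainous terrain (units: mm/yr)
rng = np.random.default_rng(3)
P = rng.uniform(500.0, 2500.0, 1000)
PET = 3000.0 - P + rng.normal(0.0, 100.0, 1000)

et_of_means = budyko_et(P.mean(), PET.mean())   # what a coarse ESM grid cell computes
mean_of_ets = budyko_et(P, PET).mean()          # "truth" with sub-grid heterogeneity

# Averaging over heterogeneity overestimates ET
print(et_of_means, mean_of_ets)
```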

  17. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    Science.gov (United States)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term, which accounts for interactions between resolved scales and unresolved scales, and the Reynolds term, which accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thereby removing the backscatter produced by the dynamic procedure. More advanced SGS models are thus desirable: models that better conform to the physics of the true SGS stress while remaining stable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term: in terms of dissipation spectra, and in terms of the probability density function (pdf) of dissipation in physical space, both positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross-term part built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also produces backscatter. It also has an eddy-viscosity multiscale model part that

  18. Impact of Subgrid Scale Models and Heat Loss on Large Eddy Simulations of a Premixed Jet Burner Using Flamelet-Generated Manifolds

    Science.gov (United States)

    Hernandez Perez, Francisco E.; Im, Hong G.; Lee, Bok Jik; Fancello, Alessio; Donini, Andrea; van Oijen, Jeroen A.; de Goey, L. Philip H.

    2017-11-01

    Large eddy simulations (LES) of a turbulent premixed jet flame in a confined chamber are performed employing the flamelet-generated manifold (FGM) method for tabulation of chemical kinetics and thermochemical properties, as well as the OpenFOAM framework for computational fluid dynamics. The burner has been experimentally studied by Lammel et al. (2011) and features an off-center nozzle, feeding a preheated lean methane-air mixture with an equivalence ratio of 0.71 and mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the FGM tabulation via burner-stabilized flamelets and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed filtered density functions. The impact of heat loss inclusion as well as SGS modeling for both the SGS stresses and SGS variance of progress variable on the numerical results is investigated. Comparisons of the LES results against measurements show a significant improvement in the prediction of temperature when heat losses are incorporated into FGM. While further enhancements in the LES results are accomplished by using SGS models based on transported quantities and/or dynamically computed coefficients as compared to the Smagorinsky model, heat loss inclusion is more relevant. This research was sponsored by King Abdullah University of Science and Technology (KAUST) and made use of computational resources at KAUST Supercomputing Laboratory.

  19. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models present in the literature are the eddy-viscosity-type models. In these models the subgrid-scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e., they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and SGS stress tensor are aligned; and they assume that a local balance exists between the SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy-viscosity-type models. The SSM models, such as that of Bardina et al. and that of Liu et al., assume that scales adjacent in wave number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified Cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e., they are not able to ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that this drawback is due to the fact that such models do not take into account the smallest unresolved scales, where most of the dissipation of SGS turbulent energy takes place. A new scale-similarity LES model that is able to ensure an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor. The coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and in terms of the SGS kinetic energy (computed by solving its balance equation). 
    The
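    The scale-similarity idea (here in Bardina-like form: approximate the SGS term by the modified Leonard term computed from the resolved field alone) can be checked a priori; a 1-D scalar sketch on a synthetic field (all parameters illustrative):

```python
import numpy as np

def gfilter(f, delta, dx):
    """Gaussian filter of width delta applied spectrally to a periodic 1-D field."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(np.exp(-(k * delta) ** 2 / 24.0) * np.fft.fft(f)))

# Illustrative multi-scale field
n = 1024
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
rng = np.random.default_rng(4)
u = sum(np.sin(m * x + rng.uniform(0, 2 * np.pi)) / m for m in range(1, 64))

delta = 16 * dx
ub = gfilter(u, delta, dx)                                # resolved field
tau_true = gfilter(u * u, delta, dx) - ub * ub            # exact SGS term
# Modified Leonard term: same operation applied to the resolved field only
leonard_mod = gfilter(ub * ub, delta, dx) - gfilter(ub, delta, dx) ** 2

# Scale-similarity models exploit the strong pointwise correlation of the two
corr = np.corrcoef(tau_true, leonard_mod)[0, 1]
print(round(corr, 2))
```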

  20. Higher fine-scale genetic structure in peripheral than in core populations of a long-lived and mixed-mating conifer - eastern white cedar (Thuja occidentalis L.)

    Science.gov (United States)

    2012-01-01

    Background Fine-scale or spatial genetic structure (SGS) is one of the key genetic characteristics of plant populations. Several evolutionary and ecological processes and population characteristics influence the level of SGS within plant populations. Higher fine-scale genetic structure may be expected in peripheral than core populations of long-lived forest trees, owing to the differences in the magnitude of operating evolutionary and ecological forces such as gene flow, genetic drift, effective population size and founder effects. We addressed this question using eastern white cedar (Thuja occidentalis) as a model species for declining to endangered long-lived tree species with a mixed-mating system. Results We determined the SGS in two core and two peripheral populations of eastern white cedar from its Maritime Canadian eastern range using six nuclear microsatellite DNA markers. Significant SGS ranging from 15 m to 75 m distance classes was observed in the four studied populations. An analysis of the four populations combined revealed significant positive SGS up to the 45 m distance class. The mean positive significant SGS observed in the peripheral populations was up to six times (up to 90 m) that observed in the core populations (15 m). Spatial autocorrelation coefficients and correlograms of single and sub-sets of populations were statistically significant. The extent of within-population SGS was significantly negatively correlated with all genetic diversity parameters. Significant heterogeneity of within-population SGS was observed for 0-15 m and 61-90 m between core and peripheral populations. Average Sp and gene flow distances were higher in peripheral (Sp = 0.023, σg = 135 m) than in core (Sp = 0.014, σg = 109 m) populations. However, the mean neighborhood size was higher in the core (Nb = 82) than in the peripheral (Nb = 48) populations. Conclusion Eastern white cedar populations have significant fine-scale genetic structure at short distances. 
Peripheral

  1. Higher fine-scale genetic structure in peripheral than in core populations of a long-lived and mixed-mating conifer - eastern white cedar (Thuja occidentalis L.)

    Directory of Open Access Journals (Sweden)

    Pandey Madhav

    2012-04-01

    Full Text Available Abstract Background Fine-scale or spatial genetic structure (SGS) is one of the key genetic characteristics of plant populations. Several evolutionary and ecological processes and population characteristics influence the level of SGS within plant populations. Higher fine-scale genetic structure may be expected in peripheral than core populations of long-lived forest trees, owing to the differences in the magnitude of operating evolutionary and ecological forces such as gene flow, genetic drift, effective population size and founder effects. We addressed this question using eastern white cedar (Thuja occidentalis) as a model species for declining to endangered long-lived tree species with a mixed-mating system. Results We determined the SGS in two core and two peripheral populations of eastern white cedar from its Maritime Canadian eastern range using six nuclear microsatellite DNA markers. Significant SGS ranging from 15 m to 75 m distance classes was observed in the four studied populations. An analysis of the four populations combined revealed significant positive SGS up to the 45 m distance class. The mean positive significant SGS observed in the peripheral populations was up to six times (up to 90 m) that observed in the core populations (15 m). Spatial autocorrelation coefficients and correlograms of single and sub-sets of populations were statistically significant. The extent of within-population SGS was significantly negatively correlated with all genetic diversity parameters. Significant heterogeneity of within-population SGS was observed for 0-15 m and 61-90 m between core and peripheral populations. Average Sp and gene flow distances were higher in peripheral (Sp = 0.023, σg = 135 m) than in core (Sp = 0.014, σg = 109 m) populations. However, the mean neighborhood size was higher in the core (Nb = 82) than in the peripheral (Nb = 48) populations. Conclusion Eastern white cedar populations have significant fine-scale genetic structure at short

  2. Study of subgrid-scale velocity models for reacting and nonreacting flows

    Science.gov (United States)

    Langella, I.; Doan, N. A. K.; Swaminathan, N.; Pope, S. B.

    2018-05-01

    A study is conducted to identify the advantages and limitations of existing large-eddy simulation (LES) closures for the subgrid-scale (SGS) kinetic energy using a database of direct numerical simulations (DNS). The analysis is conducted for both reacting and nonreacting flows, different turbulence conditions, and various filter sizes. A model based on dissipation and diffusion of momentum (the LD-D model) is proposed in this paper based on the observed behavior of four existing models. Our model shows the best overall agreement with DNS statistics. Two main investigations are conducted for both reacting and nonreacting flows: (i) an investigation of the robustness of the model constants, showing that commonly used constants lead to a severe underestimation of the SGS kinetic energy and revealing their dependence on Reynolds number and filter size; and (ii) an investigation of the statistical behavior of the SGS closures, which suggests that the dissipation of momentum is the key parameter to be considered in such closures and that the dilatation effect is important and must be captured correctly in reacting flows. Additional properties of SGS kinetic energy modeling are identified and discussed.
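    A common algebraic baseline for such closures relates the SGS kinetic energy to the resolved strain and the filter width (Yoshizawa-style, k_sgs ≈ C_I Δ² |S̄|²). The sketch below compares this against the a priori value diagnosed by filtering a synthetic 1-D field; the field and the constant C_I = 0.1 are illustrative, and this is not the LD-D model of the paper:

```python
import numpy as np

def gfilter(f, delta, dx):
    """Gaussian filter of width delta applied spectrally to a periodic 1-D field."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(np.exp(-(k * delta) ** 2 / 24.0) * np.fft.fft(f)))

# Illustrative multi-scale field
n = 1024
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
rng = np.random.default_rng(5)
u = sum(np.sin(m * x + rng.uniform(0, 2 * np.pi)) / m for m in range(1, 128))

delta = 12 * dx
ub = gfilter(u, delta, dx)
k_true = 0.5 * (gfilter(u * u, delta, dx) - ub * ub)   # diagnosed SGS kinetic energy
dubdx = np.gradient(ub, dx)                            # resolved "strain" in 1-D
k_model = 0.1 * delta ** 2 * dubdx ** 2                # Yoshizawa-type closure, C_I = 0.1

print(k_true.mean(), k_model.mean())
```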

  3. Higher fine-scale genetic structure in peripheral than in core populations of a long-lived and mixed-mating conifer--eastern white cedar (Thuja occidentalis L.).

    Science.gov (United States)

    Pandey, Madhav; Rajora, Om P

    2012-04-05

    Fine-scale or spatial genetic structure (SGS) is one of the key genetic characteristics of plant populations. Several evolutionary and ecological processes and population characteristics influence the level of SGS within plant populations. Higher fine-scale genetic structure may be expected in peripheral than in core populations of long-lived forest trees, owing to differences in the magnitude of operating evolutionary and ecological forces such as gene flow, genetic drift, effective population size and founder effects. We addressed this question using eastern white cedar (Thuja occidentalis) as a model species for declining to endangered long-lived tree species with a mixed-mating system. We determined the SGS in two core and two peripheral populations of eastern white cedar from its Maritime Canadian eastern range using six nuclear microsatellite DNA markers. Significant SGS ranging from 15 m to 75 m distance classes was observed in the four studied populations. An analysis of the four populations combined revealed significant positive SGS up to the 45 m distance class. The mean positive significant SGS observed in the peripheral populations was up to six times (up to 90 m) that observed in the core populations (15 m). Spatial autocorrelation coefficients and correlograms of single and sub-sets of populations were statistically significant. The extent of within-population SGS was significantly negatively correlated with all genetic diversity parameters. Significant heterogeneity of within-population SGS was observed for 0-15 m and 61-90 m between core and peripheral populations. Average Sp and gene flow distances were higher in peripheral (Sp = 0.023, σg = 135 m) than in core (Sp = 0.014, σg = 109 m) populations. However, the mean neighborhood size was higher in the core (Nb = 82) than in the peripheral (Nb = 48) populations. Eastern white cedar populations have significant fine-scale genetic structure at short distances. Peripheral populations have several
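    The Sp statistic reported above summarizes the strength of fine-scale SGS. A minimal sketch of its computation from pairwise kinship coefficients and distances follows; all values are synthetic, not the study's microsatellite data, and the first-distance-class choice is an assumption.

```python
import numpy as np

def sp_statistic(kinship, dist):
    """Sp = -b_F / (1 - F1), where b_F is the regression slope of pairwise
    kinship on ln(distance) and F1 is the mean kinship in the first distance
    class (here taken as the nearest ~10% of pairs)."""
    b_F = np.polyfit(np.log(dist), kinship, 1)[0]
    F1 = kinship[dist <= np.quantile(dist, 0.1)].mean()
    return -b_F / (1.0 - F1)

# Synthetic decay of kinship with distance; a stronger decay gives a larger Sp.
rng = np.random.default_rng(1)
dist = rng.uniform(5.0, 300.0, 2000)
kinship = 0.05 - 0.012 * np.log(dist) + rng.normal(0.0, 0.005, 2000)
print(round(sp_statistic(kinship, dist), 4))
```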

  4. Large Eddy Simulations of a Premixed Jet Combustor Using Flamelet-Generated Manifolds: Effects of Heat Loss and Subgrid-Scale Models

    KAUST Repository

    Hernandez Perez, Francisco E.; Lee, Bok Jik; Im, Hong G.; Fancello, Alessio; Donini, Andrea; van Oijen, Jeroen A.; de Goey, Philip H.

    2017-01-01

    Large eddy simulations of a turbulent premixed jet flame in a confined chamber were conducted using the flamelet-generated manifold technique for chemistry tabulation. The configuration is characterized by an off-center nozzle having an inner diameter of 10 mm, supplying a lean methane-air mixture with an equivalence ratio of 0.71 and a mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the manifold via burner-stabilized flamelets and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed probability density functions. Comparisons between numerical results and measured data show that a considerable improvement in the prediction of temperature is achieved when heat losses are included in the manifold, as compared to the adiabatic one. Additional improvement in the temperature predictions is obtained by incorporating radiative heat losses. Moreover, further enhancements in the LES predictions are achieved by employing SGS models based on transport equations, such as the SGS turbulence kinetic energy equation with dynamic coefficients. While the numerical results display good agreement up to a distance of 4 nozzle diameters downstream of the nozzle exit, the results become less satisfactory farther downstream, suggesting that further improvements in the modeling are required, among which a more accurate model for the SGS variance of progress variable can be relevant.
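    The presumed-PDF closure mentioned above is commonly implemented as a beta-PDF integration over the flamelet table, with the beta shape parameters set by the filtered mean and SGS variance of the progress variable. The sketch below uses a toy source-term shape and assumed moments, not the paper's manifold.

```python
import numpy as np
from scipy.stats import beta

def filtered_value(table_c, table_val, c_mean, c_var):
    """Presumed beta-PDF closure: average a flamelet-tabulated quantity over
    the SGS distribution of progress variable c in (0, 1). The beta shape
    parameters follow from the filtered mean and variance of c."""
    g = c_mean * (1.0 - c_mean) / c_var - 1.0
    w = beta.pdf(table_c, c_mean * g, (1.0 - c_mean) * g)
    w = w / w.sum()                      # discrete quadrature weights
    return float((table_val * w).sum())

c = np.linspace(1e-6, 1.0 - 1e-6, 401)
source = c * (1.0 - c) ** 2              # toy source-term shape, not a real table
print(filtered_value(c, source, c_mean=0.5, c_var=0.01))
```

Note the SGS variance must satisfy c_var < c_mean*(1 - c_mean) for valid beta parameters, which is one reason an accurate variance model matters.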


  6. Development and Validation of the Body-Focused Shame and Guilt Scale

    Science.gov (United States)

    Weingarden, Hilary; Renshaw, Keith D.; Tangney, June P.; Wilhelm, Sabine

    2015-01-01

    Body shame is described as central in clinical literature on body dysmorphic disorder (BDD). However, empirical investigations of body shame within BDD are rare. One potential reason for the scarcity of such research may be that existing measures of body shame focus on eating and weight-based content. Within BDD, however, body shame likely focuses more broadly on shame felt in response to perceived appearance flaws in one’s body parts. We describe the development and validation of the Body-Focused Shame and Guilt Scale (BF-SGS), a measure of BDD-relevant body shame, across two studies: a two time-point study of undergraduates, and a follow-up study in two Internet-recruited clinical samples (BDD, obsessive compulsive disorder) and healthy controls. Across both studies, the BF-SGS shame subscale demonstrated strong reliability and construct validity, with Study 2 providing initial clinical norms. PMID:26640760
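    Scale-validation studies such as this typically report internal consistency with Cronbach's alpha. A minimal sketch on synthetic item scores follows; the latent-trait setup is invented for illustration and the BF-SGS data are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(0.0, 1.0, 300)                       # shared latent score
items = trait[:, None] + rng.normal(0.0, 0.8, (300, 5))  # 5 correlated items
print(round(cronbach_alpha(items), 2))
```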

  7. Estimation of turbulence dissipation rate by Large eddy PIV method in an agitated vessel

    Directory of Open Access Journals (Sweden)

    Kysela Bohuš

    2015-01-01

    Full Text Available The distribution of the turbulent kinetic energy dissipation rate is important for the design of mixing apparatuses in the chemical industry. Generally used experimental methods of velocity measurement in the complex geometry of an agitated vessel cannot resolve the small scales close to the turbulence dissipation scales. Therefore, the particle image velocimetry (PIV) measurement method improved by the large eddy PIV approach was used. The large eddy PIV method is based on modeling of the smallest eddies by a sub-grid scale (SGS) model. This method is similar to numerical calculations using Large Eddy Simulation (LES), and the same SGS models are used. In this work the basic Smagorinsky model was employed and compared with a power law approximation. Time-resolved PIV data were processed by the large eddy PIV approach and the obtained results for the turbulent kinetic energy dissipation rate were compared at selected points for several operating conditions (impeller speed, operating liquid viscosity).
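    The large eddy PIV estimate described above can be sketched for a 2D vector field with the basic Smagorinsky model, where the dissipation rate follows as eps = (Cs*Delta)^2 |S|^3 with |S| = sqrt(2 Sij Sij). The constant, the synthetic field, and the restriction to in-plane gradients below are all simplifying assumptions.

```python
import numpy as np

def sgs_dissipation_2d(u, v, dx, cs=0.17):
    """Large-eddy-PIV dissipation estimate from a 2D velocity field using the
    Smagorinsky closure restricted to in-plane strain components."""
    dudy, dudx = np.gradient(u, dx)      # axis 0 = y (rows), axis 1 = x (cols)
    dvdy, dvdx = np.gradient(v, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag ** 3

# Synthetic PIV vector map on a 1 mm grid (not measured data).
rng = np.random.default_rng(3)
u = rng.normal(0.0, 0.1, (64, 64))
v = rng.normal(0.0, 0.1, (64, 64))
eps = sgs_dissipation_2d(u, v, dx=1e-3)
print(eps.mean())
```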

  8. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    Science.gov (United States)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    Effect of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ [...]; LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
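    In LBM-based LES, a Smagorinsky closure is typically folded into the local relaxation time, since lattice viscosity and relaxation time are linked by nu = (tau - 1/2)/3 in lattice units. The sketch below shows that bookkeeping only; it is not necessarily the scheme used in this work, and the constants are assumed.

```python
def smagorinsky_tau(nu0, strain_mag, cs=0.1, delta=1.0):
    """BGK (single-relaxation-time) LBM with a Smagorinsky SGS closure: the
    eddy viscosity nu_t = (Cs*Delta)^2 |S| is absorbed into the local
    relaxation time via nu = (tau - 1/2)/3 (lattice units, dt = dx = 1)."""
    nu_t = (cs * delta) ** 2 * strain_mag
    return 3.0 * (nu0 + nu_t) + 0.5

print(smagorinsky_tau(nu0=0.01, strain_mag=0.0))  # no strain: laminar tau = 0.53
print(smagorinsky_tau(nu0=0.01, strain_mag=0.5))  # strained cell relaxes slower
```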

  9. A priori study of subgrid-scale features in turbulent Rayleigh-Bénard convection

    Science.gov (United States)

    Dabbagh, F.; Trias, F. X.; Gorobets, A.; Oliva, A.

    2017-10-01

    At the crossroad between flow topology analysis and turbulence modeling, a priori studies are a reliable tool to understand the underlying physics of the subgrid-scale (SGS) motions in turbulent flows. In this paper, properties of the SGS features in the framework of a large-eddy simulation are studied for a turbulent Rayleigh-Bénard convection (RBC). To do so, data from direct numerical simulation (DNS) of a turbulent air-filled RBC in a rectangular cavity of aspect ratio unity and π spanwise open-ended distance are used at two Rayleigh numbers Ra ∈ {10^8, 10^10} [Dabbagh et al., "On the evolution of flow topology in turbulent Rayleigh-Bénard convection," Phys. Fluids 28, 115105 (2016)]. First, DNS at Ra = 10^8 is used to assess the performance of eddy-viscosity models such as QR, Wall-Adapting Local Eddy-viscosity (WALE), and the recent S3PQR-models proposed by Trias et al. ["Building proper invariants for eddy-viscosity subgrid-scale models," Phys. Fluids 27, 065103 (2015)]. The outcomes imply that the eddy-viscosity modeling smoothes the coarse-grained viscous straining and retrieves fairly well the effect of the kinetic unfiltered scales in order to reproduce the coherent large scales. However, these models fail to approach the exact evolution of the SGS heat flux and are incapable of reproducing well the further dominant rotational enstrophy pertaining to the buoyant production. Afterwards, the key ingredients of eddy-viscosity, νt, and eddy-diffusivity, κt, are calculated a priori and revealed positive prevalent values to maintain a turbulent wind essentially driven by the mean buoyant force at the sidewalls. The topological analysis suggests that the effective turbulent diffusion paradigm and the hypothesis of a constant turbulent Prandtl number are only applicable in the large-scale strain-dominated areas in the bulk. It is shown that the bulk-dominated rotational structures of vortex-stretching (and its synchronous viscous dissipative structures) hold
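    The eddy-viscosity models assessed above map a filtered velocity-gradient tensor to a scalar ν_t. A single-tensor sketch of the WALE model follows (constant and inputs illustrative; the study's a priori tests use full DNS fields); it also shows WALE's known property of vanishing for pure shear.

```python
import numpy as np

def wale_viscosity(grad_u, delta, cw=0.5):
    """Wall-Adapting Local Eddy-viscosity (WALE) model for one 3x3 velocity
    gradient tensor g: nu_t = (Cw*Delta)^2 * (Sd:Sd)^(3/2) /
    ((S:S)^(5/2) + (Sd:Sd)^(5/4)), with Sd the traceless symmetric part
    of g^2 and S the strain-rate tensor."""
    g = np.asarray(grad_u, dtype=float)
    g2 = g @ g
    sd = 0.5 * (g2 + g2.T) - np.trace(g2) * np.eye(3) / 3.0
    s = 0.5 * (g + g.T)
    ss, sdsd = np.sum(s * s), np.sum(sd * sd)
    denom = ss ** 2.5 + sdsd ** 1.25
    return 0.0 if denom == 0.0 else (cw * delta) ** 2 * sdsd ** 1.5 / denom

grad = np.array([[0.0, 1.0, 0.0],   # pure shear du/dy = 1
                 [0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
print(wale_viscosity(grad, delta=0.01))  # prints 0.0: WALE vanishes for pure shear
```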

  10. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    Science.gov (United States)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re ≈ 10^8 for the planetary boundary layer and Re ≈ 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N ~ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model to be linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The
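    The grid-count scaling quoted above can be evaluated directly for the two quoted Reynolds numbers, which makes the case for SGS modeling concrete:

```python
# DNS grid-point estimate N ~ Re^(9/4) for the two regimes in the abstract.
for name, re_num in [("planetary boundary layer", 1e8), ("solar interior", 1e14)]:
    n = re_num ** 2.25
    print(f"{name}: Re ~ {re_num:.0e} -> N ~ {n:.1e} grid points")
```

Even the smaller case implies on the order of 10^18 grid points, far beyond any computer, so the unresolved scales must be modeled rather than simulated.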

  11. Taller de SGS, en Wasserburg, Alemania

    Directory of Open Access Journals (Sweden)

    von Seidlein, P. C.

    1970-05-01

    Full Text Available The program on which this project was planned included the following requirements: 1,000 m2 devoted to production space; 500 m2 for office space; a bar and canteen close to the production zone; and a number of additional zones where technical and social activities could be practised. The further condition was imposed that the various zones should be so designed that they could later be enlarged. The resulting project meets the above specifications and extends horizontally, along a single floor level. This has reduced the cost and improved the communications between the various zones.

  12. Final Technical Report. Project Boeing SGS

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Thomas E. [The Boeing Company, Seattle, WA (United States)

    2014-12-31

    Boeing and its partner, PJM Interconnection, teamed to bring advanced “defense-grade” technologies for cyber security to the US regional power grid through demonstration in PJM’s energy management environment. Under this cooperative project with the Department of Energy, Boeing and PJM have developed and demonstrated a host of technologies specifically tailored to the needs of PJM and the electric sector as a whole. The team has demonstrated to the energy industry a combination of processes, techniques and technologies that have been successfully implemented in the commercial, defense, and intelligence communities to identify, mitigate and continuously monitor the cyber security of critical systems. Guided by the results of a Cyber Security Risk-Based Assessment completed in Phase I, the Boeing-PJM team has completed multiple iterations through the Phase II Development and Phase III Deployment phases. Multiple cyber security solutions have been completed across a variety of controls including: Application Security, Enhanced Malware Detection, Security Incident and Event Management (SIEM) Optimization, Continuous Vulnerability Monitoring, SCADA Monitoring/Intrusion Detection, Operational Resiliency, Cyber Range simulations and hands on cyber security personnel training. All of the developed and demonstrated solutions are suitable for replication across the electric sector and/or the energy sector as a whole. 
Benefits identified include: improved malware and intrusion detection capability on critical SCADA networks, including behavioral-based alerts resulting in improved zero-day threat protection; improved Security Incident and Event Management system resulting in better threat visibility, thus increasing the likelihood of detecting a serious event; improved malware detection and zero-day threat response capability; improved ability to systematically evaluate and secure in-house and vendor-sourced software applications; improved ability to continuously monitor and maintain secure configuration of network devices, resulting in reduced vulnerabilities for potential exploitation; improved overall cyber security situational awareness through the integration of multiple discrete security technologies into a single cyber security reporting console; improved ability to maintain the resiliency of critical systems in the face of a targeted cyber attack or other significant event; and improved ability to model complex networks for penetration testing and advanced training of cyber security personnel.

  13. National-scale estimation of gross forest aboveground carbon loss: a case study of the Democratic Republic of the Congo

    International Nuclear Information System (INIS)

    Tyukavina, A; Potapov, P V; Turubanova, S A; Hansen, M C; Stehman, S V; Baccini, A; Goetz, S J; Laporte, N T; Houghton, R A

    2013-01-01

    Recent advances in remote sensing enable the mapping and monitoring of carbon stocks without relying on extensive in situ measurements. The Democratic Republic of the Congo (DRC) is among the countries where national forest inventories (NFI) are either non-existent or out of date. Here we demonstrate a method for estimating national-scale gross forest aboveground carbon (AGC) loss and associated uncertainties using remotely sensed forest cover loss and biomass carbon density data. Lidar data were used as a surrogate for NFI plot measurements to estimate carbon stocks and AGC loss based on forest type and activity data derived using time-series multispectral imagery. Specifically, DRC forest type and loss from the FACET (Forêts d’Afrique Centrale Evaluées par Télédétection) product, created using Landsat data, were related to carbon data derived from the Geoscience Laser Altimeter System (GLAS). Validation data for FACET forest area loss were created at a 30-m spatial resolution and compared to the 60-m spatial resolution FACET map. We produced two gross AGC loss estimates for the DRC for the last decade (2000-2010): a map-scale estimate (53.3 ± 9.8 Tg C yr^-1) accounting for whole-pixel classification errors in the 60-m resolution FACET forest cover change product, and a sub-grid estimate (72.1 ± 12.7 Tg C yr^-1) that took into account 60-m cells that experienced partial forest loss. Our sub-grid forest cover and AGC loss estimates, which included smaller-scale forest disturbances, exceed published assessments. Results raise the issue of scale in forest cover change mapping and validation, and subsequent impacts on remotely sensed carbon stock change estimation, particularly for smallholder dominated systems such as the DRC. (letter)
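    The distinction between the map-scale and sub-grid estimates can be sketched with synthetic pixels: the map-scale total counts only majority-loss cells as whole-pixel loss, while the sub-grid total weights every cell by its fractional loss. All numbers below are invented for illustration, not the DRC data.

```python
import numpy as np

def agc_loss(loss_fraction, carbon_density, pixel_area_ha, subgrid=True):
    """Gross AGC loss summed over pixels. Map-scale counts a 60-m cell as
    fully lost only when the majority of it is cleared; sub-grid weights
    each cell by its fractional loss."""
    frac = loss_fraction if subgrid else (loss_fraction >= 0.5).astype(float)
    return float(np.sum(frac * carbon_density * pixel_area_ha))

rng = np.random.default_rng(4)
# 3% of cells fully cleared, 7% one-fifth cleared, the rest intact.
loss = rng.choice([0.0, 0.2, 1.0], size=10000, p=[0.90, 0.07, 0.03])
cdens = rng.uniform(50.0, 150.0, size=10000)   # Mg C / ha
area_ha = 0.36                                 # one 60 m x 60 m cell
sub = agc_loss(loss, cdens, area_ha, subgrid=True)
mapscale = agc_loss(loss, cdens, area_ha, subgrid=False)
print(sub, mapscale)  # sub-grid exceeds map-scale, as in the abstract
```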

  14. Estimation of viscous dissipative stresses induced by a mechanical heart valve using PIV data.

    Science.gov (United States)

    Li, Chi-Pei; Lo, Chi-Wen; Lu, Po-Chien

    2010-03-01

    Among the clinical complications of mechanical heart valves (MHVs), hemolysis was previously thought to result from Reynolds stresses in turbulent flows. A more recent hypothesis suggests viscous dissipative stresses at spatial scales similar in size to red blood cells may be related to hemolysis in MHVs, but the resolution of current instrumentation is insufficient to measure the smallest eddy sizes. We studied the St. Jude Medical (SJM) 27 mm valve in the aortic position of a pulsatile circulatory mock loop under physiologic conditions with particle image velocimetry (PIV). Assuming dynamic equilibrium between the resolved and sub-grid-scale (SGS) energy fluxes, the SGS energy flux was calculated from the strain rate tensor computed from the resolved velocity fields, and the SGS stress was determined by the Smagorinsky model, from which the turbulence dissipation rate and then the viscous dissipative stresses were estimated. Our results showed Reynolds stresses up to 80 N/m2 throughout the cardiac cycle, and viscous dissipative stresses below 12 N/m2. The viscous dissipative stresses remain far below the threshold of red blood cell hemolysis, but could potentially damage platelets, implying the need for further study in the phenomenon of MHV hemolytic complications.
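    The final step above, from dissipation rate to a viscous dissipative stress, is commonly taken as sigma = sqrt(rho * mu * eps), the viscous stress at the Kolmogorov scale. The blood properties below are typical literature values and the dissipation rates are illustrative, not the study's measurements.

```python
import numpy as np

def viscous_stress(eps, rho=1060.0, mu=3.5e-3):
    """Kolmogorov-scale viscous stress sigma = sqrt(rho * mu * eps) [Pa],
    the quantity compared against hemolysis/platelet damage thresholds.
    rho [kg/m^3] and mu [Pa s] are typical values for blood (assumed)."""
    return np.sqrt(rho * mu * eps)

for eps in (1.0, 10.0, 40.0):   # dissipation rate, W/kg
    print(f"eps = {eps:5.1f} W/kg -> sigma = {viscous_stress(eps):5.2f} Pa")
```

Even a dissipation rate of ~40 W/kg yields a viscous stress of only about 12 Pa, consistent with the order of magnitude reported above.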

  15. Between-site differences in the scale of dispersal and gene flow in red oak.

    Directory of Open Access Journals (Sweden)

    Emily V Moran

    Full Text Available Nut-bearing trees, including oaks (Quercus spp.), are considered to be highly dispersal limited, leading to concerns about their ability to colonize new sites or migrate in response to climate change. However, estimating seed dispersal is challenging in species that are secondarily dispersed by animals, and differences in disperser abundance or behavior could lead to large spatio-temporal variation in dispersal ability. Parentage and dispersal analyses combining genetic and ecological data provide accurate estimates of current dispersal, while spatial genetic structure (SGS) can shed light on past patterns of dispersal and establishment. In this study, we estimate seed and pollen dispersal and parentage for two mixed-species red oak populations using a hierarchical Bayesian approach. We compare these results to those of a maximum-likelihood (ML) genetic parentage model. We also test whether observed patterns of SGS in three size cohorts are consistent with known site history and current dispersal patterns. We find that, while pollen dispersal is extensive at both sites, the scale of seed dispersal differs substantially. Parentage results differ between models due to the additional data included in the Bayesian model and differing genotyping error assumptions, but both indicate between-site dispersal differences. Patterns of SGS in large adults, small adults, and seedlings are consistent with known site history (farmed vs. selectively harvested), and with long-term differences in seed dispersal. This difference is consistent with predator/disperser satiation due to higher acorn production at the low-dispersal site. While this site-to-site variation results in substantial differences in asymptotic spread rates, dispersal for both sites is substantially lower than required to track latitudinal temperature shifts. Animal-dispersed trees can exhibit considerable spatial variation in seed dispersal, although patterns may be surprisingly constant over time. However, even under

  16. Improving the representation of river-groundwater interactions in land surface modeling at the regional scale: Observational evidence and parameterization applied in the Community Land Model

    KAUST Repository

    Zampieri, Matteo

    2012-02-01

    Groundwater is an important component of the hydrological cycle, included in many land surface models to provide a lower boundary condition for soil moisture, which in turn plays a key role in the land-vegetation-atmosphere interactions and the ecosystem dynamics. In regional-scale climate applications land surface models (LSMs) are commonly coupled to atmospheric models to close the surface energy, mass and carbon balance. LSMs in these applications are used to resolve the momentum, heat, water and carbon vertical fluxes, accounting for the effect of vegetation, soil type and other surface parameters, while lack of adequate resolution prevents using them to resolve horizontal sub-grid processes. Specifically, LSMs resolve the large-scale runoff production associated with infiltration excess and sub-grid groundwater convergence, but they neglect the effect from losing streams to groundwater. Through the analysis of observed data of soil moisture obtained from the Oklahoma Mesoscale Network stations and land surface temperature derived from MODIS we provide evidence that the regional scale soil moisture and surface temperature patterns are affected by the rivers. This is demonstrated on the basis of simulations from a land surface model (i.e., Community Land Model - CLM, version 3.5). We show that the model cannot reproduce the features of the observed soil moisture and temperature spatial patterns that are related to the underlying mechanism of reinfiltration of river water to groundwater. Therefore, we implement a simple parameterization of this process in CLM showing the ability to reproduce the soil moisture and surface temperature spatial variabilities that relate to the river distribution at regional scale. The CLM with this new parameterization is used to evaluate impacts of the improved representation of river-groundwater interactions on the simulated water cycle parameters and the surface energy budget at the regional scale. © 2011 Elsevier B.V.
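    The losing-stream mechanism can be caricatured with a two-bucket model in which river water leaks to groundwater wherever the water table sits below the channel. The linear leakage law, the 5 m depth scale, and all parameter names below are assumptions for illustration only, not the CLM 3.5 parameterization implemented in the study.

```python
def reinfiltration_step(river_storage, groundwater, water_table_depth,
                        k_loss=0.05, dt=1.0):
    """One explicit step of an illustrative losing-stream parameterization:
    leakage grows with water-table depth, capped at depth >= 5 m (assumed).
    Returns updated (river_storage, groundwater); mass is conserved."""
    flux = k_loss * river_storage * min(water_table_depth / 5.0, 1.0) * dt
    return river_storage - flux, groundwater + flux

river, gw = 100.0, 0.0
for _ in range(10):
    river, gw = reinfiltration_step(river, gw, water_table_depth=4.0)
print(round(river, 2), round(gw, 2))  # river water steadily recharges groundwater
```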

  17. Increased fire frequency promotes stronger spatial genetic structure and natural selection at regional and local scales in Pinus halepensis Mill.

    Science.gov (United States)

    Budde, Katharina B; González-Martínez, Santiago C; Navascués, Miguel; Burgarella, Concetta; Mosca, Elena; Lorenzo, Zaida; Zabal-Aguirre, Mario; Vendramin, Giovanni G; Verdú, Miguel; Pausas, Juli G; Heuertz, Myriam

    2017-04-01

    The recurrence of wildfires is predicted to increase due to global climate change, resulting in severe impacts on biodiversity and ecosystem functioning. Recurrent fires can drive plant adaptation and reduce genetic diversity; however, the underlying population genetic processes have not been studied in detail. In this study, the neutral and adaptive evolutionary effects of contrasting fire regimes were examined in the keystone tree species Pinus halepensis Mill. (Aleppo pine), a fire-adapted conifer. The genetic diversity, demographic history and spatial genetic structure were assessed at local (within-population) and regional scales for populations exposed to different crown fire frequencies. Eight natural P. halepensis stands were sampled in the east of the Iberian Peninsula, five of them in a region exposed to frequent crown fires (HiFi) and three of them in an adjacent region with a low frequency of crown fires (LoFi). Samples were genotyped at nine neutral simple sequence repeats (SSRs) and at 251 single nucleotide polymorphisms (SNPs) from coding regions, some of them potentially important for fire adaptation. Fire regime had no effects on genetic diversity or demographic history. Three high-differentiation outlier SNPs were identified between HiFi and LoFi stands, suggesting fire-related selection at the regional scale. At the local scale, fine-scale spatial genetic structure (SGS) was overall weak as expected for a wind-pollinated and wind-dispersed tree species. HiFi stands displayed a stronger SGS than LoFi stands at SNPs, which probably reflected the simultaneous post-fire recruitment of co-dispersed related seeds. SNPs with exceptionally strong SGS, a proxy for microenvironmental selection, were only reliably identified under the HiFi regime. An increasing fire frequency as predicted due to global change can promote increased SGS with stronger family structures and alter natural selection in P. halepensis and in plants with similar life history traits.

  18. Scaling of energy deposition in fast ignition targets

    International Nuclear Information System (INIS)

    Welch, Dale R.; Slutz, Stephen A.; Mehlhorn, Thomas Alan; Campbell, Robert B.

    2005-01-01

    We examine the scaling to ignition of the energy deposition of laser-generated electrons in compressed fast ignition cores. Relevant cores have densities of several hundred g/cm^3, with a few keV initial temperature. As laser intensities increase toward those of ignition systems, on the order of a few 10^21 W/cm^2, the hot electron energies are expected to approach 100 MeV. Most certainly anomalous processes must play a role in the energy transfer, but the exact nature of these processes, as well as a practical way to model them, remain open issues. Traditional explicit PIC methods are limited to low densities on current and anticipated computing platforms, so the study of relevant parameter ranges has so far received little attention. We use LSP to examine a relativistic electron beam (presumed generated from a laser-plasma interaction) of legislated energy and angular distribution injected into a 3D block of compressed DT. Collective effects will determine the stopping, most likely driven by magnetic field filamentation. The scaling of the stopping as a function of block density and temperature, as well as hot electron current and laser intensity, is presented. Sub-grid models may be profitably used, and degenerate effects included, in the solution of this problem.

  19. Workshop on Human Activity at Scale in Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Melissa R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Coletti, Mark A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kennedy, Joseph H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nair, Sujithkumar S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Omitaomu, Olufemi A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-01-26

    Changing human activity within a geographical location may have significant influence on the global climate, but that activity must be parameterized in such a way as to allow these high-resolution sub-grid processes to affect global climate within that modeling framework. Additionally, we must have tools that provide decision support and inform local and regional policies regarding mitigation of and adaptation to climate change. The development of next-generation earth system models, that can produce actionable results with minimum uncertainties, depends on understanding global climate change and human activity interactions at policy implementation scales. Unfortunately, at best we currently have only limited schemes for relating high-resolution sectoral emissions to real-time weather, ultimately to become part of larger regions and well-mixed atmosphere. Moreover, even our understanding of meteorological processes at these scales is imperfect. This workshop addresses these shortcomings by providing a forum for discussion of what we know about these processes, what we can model, where we have gaps in these areas and how we can rise to the challenge to fill these gaps.

  20. THOR: A New Higher-Order Closure Assumed PDF Subgrid-Scale Parameterization; Evaluation and Application to Low Cloud Feedbacks

    Science.gov (United States)

    Firl, G. J.; Randall, D. A.

    2013-12-01

    The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has been shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θ_il) and total non-precipitating water mixing ratio (q_t) and the covariance between the two, and the vertical fluxes of horizontal momentum, θ_il, and q_t. Ten corresponding third-order moments, in addition to the skewnesses of θ_il and q_t, are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θ_il, and q_t. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been
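    Once a two-Gaussian PDF of total water is diagnosed, SGS cloud fraction follows as the weighted probability that q_t exceeds saturation. The sketch below illustrates only that last step; the plume parameters and saturation value are invented, not THOR's diagnosed moments.

```python
from scipy.stats import norm

def cloud_fraction(w1, mu1, sd1, mu2, sd2, qsat):
    """Cloud fraction from an assumed two-Gaussian PDF of total water q_t:
    the plume-weighted probability that q_t exceeds saturation qsat."""
    return (w1 * norm.sf(qsat, mu1, sd1)
            + (1.0 - w1) * norm.sf(qsat, mu2, sd2))

# A weak, broad "updraft" plume beside a dominant, narrow "environment" plume
# (values in g/kg, chosen for illustration).
cf = cloud_fraction(w1=0.2, mu1=9.0, sd1=1.0, mu2=7.0, sd2=0.3, qsat=8.5)
print(round(cf, 3))
```

Here essentially all the cloudiness comes from the minority plume, which is exactly why letting the two plume variances differ, as THOR does, matters for partial cloudiness.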

  1. Application of a new concept for multi-scale interfacial structures to the dam-break case with an obstacle

    Energy Technology Data Exchange (ETDEWEB)

    Hänsch, Susann, E-mail: s.haensch@hzdr.de; Lucas, Dirk; Höhne, Thomas; Krepper, Eckhard

    2014-11-15

    Highlights: • A concept for modeling transitions between different gaseous morphologies is presented. • The Eulerian multi-field model includes dispersed and continuous gas phases. • Interfacial transfer models accounting for free surfaces are formulated within the MUSIG framework. • A new source term for sub-grid waves and instabilities is introduced. - Abstract: New results of a generalized concept developed for the simulation of two-phase flows with multi-scale interfacial structures are presented in this paper. By extending the inhomogeneous Multiple Size Group model, the concept enables transitions between dispersed and continuous gas morphologies, including the appearance and evanescence of one of these particular gas phases. Adequate interfacial transfer formulations, which are consistent with such an approach, are introduced for interfacial area density and drag. A new drag formulation considers shear stresses occurring within the free surface area. The application of the concept to a collapsing water column demonstrates the breakup of continuous gas into a polydispersed phase forming different bubble sizes underneath the free surface. Thus, both resolved free-surface structures and the entrainment of bubbles, together with their coalescence and breakup underneath the surface, can be described at the same time. The simulations have been performed with the CFD code CFX 14.0 and will be compared with experimental images. The paper further investigates the possible improvement of such free-surface simulations by including sub-grid information about small waves and instabilities at the free surface. A comparison of the results will be used for a discussion of possible new mass transfer models between filtered free-surface areas and dispersed bubble size groups as part of the future work.

  2. A dynamic global-coefficient mixed subgrid-scale model for large-eddy simulation of turbulent flows

    International Nuclear Information System (INIS)

    Singh, Satbir; You, Donghyun

    2013-01-01

    Highlights: ► A new SGS model is developed for LES of turbulent flows in complex geometries. ► A dynamic global-coefficient SGS model is coupled with a scale-similarity model. ► Overcomes some of the difficulties associated with eddy-viscosity closures. ► Does not require averaging or clipping of the model coefficient for stabilization. ► The predictive capability is demonstrated in a number of turbulent flow simulations. -- Abstract: A dynamic global-coefficient mixed subgrid-scale eddy-viscosity model for large-eddy simulation of turbulent flows in complex geometries is developed. In the present model, the subgrid-scale stress is decomposed into the modified Leonard stress, cross stress, and subgrid-scale Reynolds stress. The modified Leonard stress is explicitly computed assuming scale similarity, while the cross stress and the subgrid-scale Reynolds stress are modeled using the global-coefficient eddy-viscosity model. The model coefficient is determined by a dynamic procedure based on the global equilibrium between the subgrid-scale dissipation and the viscous dissipation. The new model relieves some of the difficulties associated with an eddy-viscosity closure, such as the nonalignment of the principal axes of the subgrid-scale stress tensor and the strain rate tensor and the anisotropy of turbulent flow fields, while, like other dynamic global-coefficient models, it does not require averaging or clipping of the model coefficient for numerical stabilization. The combination of the global-coefficient eddy-viscosity model and a scale-similarity model is demonstrated to produce improved predictions in a number of turbulent flow simulations.
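
The scale-similarity (modified Leonard) part of the decomposition can be computed directly from the resolved field by applying an explicit test filter twice. Below is a minimal one-dimensional sketch with a periodic top-hat filter; the filter and the sample field are illustrative assumptions, not the authors' discretization.

```python
import numpy as np

def tophat_filter(f, width=3):
    """Periodic top-hat test filter of odd width on a 1-D field."""
    kernel = np.ones(width) / width
    n = len(f)
    idx = (np.arange(n)[:, None] + np.arange(width) - width // 2) % n
    return f[idx] @ kernel

def leonard_stress(u, v):
    """Modified Leonard term L_ij = filt(u_i u_j) - filt(u_i) filt(u_j),
    evaluated here for one component pair on a periodic 1-D grid."""
    return tophat_filter(u * v) - tophat_filter(u) * tophat_filter(v)

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(8.0 * x)   # resolved velocity with a small scale
L11 = leonard_stress(u, u)
```

For a constant field the Leonard term vanishes identically, which is a useful sanity check on any implementation.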

  3. Large Eddy Simulation (LES) for IC Engine Flows

    Directory of Open Access Journals (Sweden)

    Kuo Tang-Wei

    2013-10-01

    Numerical computations are carried out using an engineering-level Large Eddy Simulation (LES) model provided by the commercial CFD code CONVERGE. The analytical framework and experimental setup consist of a single-cylinder engine with a Transparent Combustion Chamber (TCC) under motored conditions. A rigorous working procedure for comparing and analyzing the results from simulation and high-speed Particle Image Velocimetry (PIV) experiments is documented in this work. The following aspects of LES are analyzed using this procedure: the number of cycles required for convergence with adequate accuracy; the effects of mesh size, time step, sub-grid-scale (SGS) turbulence models, and boundary condition treatments; and the application of the proper orthogonal decomposition (POD) technique.
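
The POD analysis mentioned above is typically implemented as a singular value decomposition of a snapshot matrix with one flattened velocity field per column. A generic sketch, not tied to CONVERGE or the TCC data (the random snapshots stand in for PIV/LES fields):

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Snapshot POD via SVD.  `snapshots` holds one flattened velocity
    field per column; returns the leading spatial modes, their singular
    values (energy ordering), temporal coefficients, and the mean field."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    return U[:, :n_modes], s[:n_modes], Vt[:n_modes], mean

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))      # 200 grid points, 20 snapshot fields
modes, sv, coeffs, mean = pod_modes(X, 5)
```

The singular values give the energy ranking of the modes, which is how cycle-to-cycle variability is usually quantified in this kind of engine study.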

  4. Thermodynamics, maximum power, and the dynamics of preferential river flow structures at the continental scale

    Directory of Open Access Journals (Sweden)

    A. Kleidon

    2013-01-01

    The organization of drainage basins shows some reproducible phenomena, as exemplified by self-similar fractal river network structures and typical scaling laws, and these have been related to energetic optimization principles, such as minimization of stream power, minimum energy expenditure or maximum "access". Here we describe the organization and dynamics of drainage systems using thermodynamics, focusing on the generation, dissipation and transfer of free energy associated with river flow and sediment transport. We argue that the organization of drainage basins reflects the fundamental tendency of natural systems to deplete driving gradients as fast as possible through the maximization of free energy generation, thereby accelerating the dynamics of the system. This effectively results in the maximization of sediment export to deplete topographic gradients as fast as possible and potentially involves large-scale feedbacks to continental uplift. We illustrate this thermodynamic description with a set of three highly simplified models related to water and sediment flow and describe the mechanisms and feedbacks involved in the evolution and dynamics of the associated structures. We close by discussing how this thermodynamic perspective is consistent with previous approaches and the implications that such a thermodynamic description has for the understanding and prediction of sub-grid scale organization of drainage systems and preferential flow structures in general.

  5. Stochastic four-way coupling of gas-solid flows for Large Eddy Simulations

    Science.gov (United States)

    Curran, Thomas; Denner, Fabian; van Wachem, Berend

    2017-11-01

    The interaction of solid particles with turbulence has long been a topic of interest for predicting the behavior of industrially relevant flows. For the turbulent fluid phase, Large Eddy Simulation (LES) methods are widely used for their low computational cost, leaving only the sub-grid scales (SGS) of turbulence to be modelled. Although LES has seen great success in predicting the behavior of turbulent single-phase flows, the development of LES for turbulent gas-solid flows is still in its infancy. This contribution aims at constructing a model to describe the four-way coupling of particles in an LES framework, by considering the role particles play in the transport of turbulent kinetic energy across the scales. Firstly, a stochastic model reconstructing the sub-grid velocities for the particle tracking is presented. Secondly, whereas most models treat particle-particle collisions deterministically, we introduce a stochastic model for estimating the collision probability. All results are validated against fully resolved DNS-DPS simulations. The final goal of this contribution is to propose a global stochastic method adapted to two-phase LES simulations in which the number of particles considered can be significantly increased. Financial support from PetroBras is gratefully acknowledged.
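
A standard building block for the kind of stochastic sub-grid velocity reconstruction described above is a Langevin (Ornstein-Uhlenbeck) process, whose timescale and variance would in practice be set from the SGS kinetic energy and dissipation. The sketch below is a generic illustration of that idea, not the authors' model; `tau` and `sigma` are assumed inputs.

```python
import numpy as np

def ou_subgrid_velocity(n_steps, dt, tau, sigma, rng):
    """Euler-Maruyama integration of du' = -(u'/tau) dt + sqrt(2 sigma^2/tau) dW,
    whose stationary distribution is N(0, sigma^2): a simple surrogate for the
    sub-grid velocity seen along a particle path."""
    u = np.zeros(n_steps)
    noise_amp = np.sqrt(2.0 * sigma**2 / tau * dt)
    for i in range(1, n_steps):
        u[i] = u[i - 1] * (1.0 - dt / tau) + noise_amp * rng.standard_normal()
    return u

rng = np.random.default_rng(42)
u_sgs = ou_subgrid_velocity(200_000, 0.01, 1.0, 0.5, rng)
```

The long-run standard deviation of the series approaches `sigma`, so the reconstructed fluctuations carry the prescribed SGS energy.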

  6. Requirements of IFRS 15 "Revenue from Contracts with Customers" and IFRS 16 "Leases" and their comparison with previous IFRS/IAS and the legislation of the Republic of Latvia

    OpenAIRE

    Dābola, Iveta

    2016-01-01

    The topic of this Master's thesis is the requirements of IFRS 15 "Revenue from Contracts with Customers" and IFRS 16 "Leases" and their comparison with previous IFRS/IAS and the legislation of the Republic of Latvia. The aim of the thesis is, based on a study and analysis of the requirements of IFRS 15 "Revenue from Contracts with Customers" and IFRS 16 "Leases", to assess the compliance of Latvian legislation with the requirements of these standards and to offer proposals for resolving specific issues in the Latvian context. The thesis examines the regulatory enactments of the Republic of Latvia and sta...

  7. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
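
The two water-balance screens described above reduce to simple per-basin checks: a long-term runoff coefficient above one means more discharge than precipitation, and losses larger than potential evaporation are likewise physically implausible. A hedged sketch of such a pre-modelling screen (function and variable names are illustrative, not from the study):

```python
def screen_basin(precip, discharge, pot_evap):
    """Flag physically implausible long-term water balances for one basin.
    All inputs are basin-mean fluxes in the same units (e.g. mm/yr)."""
    flags = []
    runoff_coeff = discharge / precip
    if runoff_coeff > 1.0:
        # More water leaves than arrives: possible snow undercatch in precip data
        flags.append("runoff exceeds precipitation")
    if precip - discharge > pot_evap:
        # Apparent losses cannot be explained by evaporation alone
        flags.append("losses exceed potential-evaporation limit")
    return flags
```

Running such checks over every basin before calibration is exactly the kind of model-independent screening the abstract advocates.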

  8. CFD analysis of bubble microlayer and growth in subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Owoeye, Eyitayo James, E-mail: msgenius10@ufl.edu; Schubring, DuWanye, E-mail: dlschubring@ufl.edu

    2016-08-01

    Highlights: • A new LES-microlayer model is introduced. • Analogous to the unresolved SGS in LES, an analysis of the bubble microlayer was performed. • The thickness of the bubble microlayer was computed at both steady and transient states. • The macroscale two-phase behavior was captured with VOF coupled with AMR. • Numerical validations were performed for both the micro- and macro-region analyses. - Abstract: A numerical study of single bubble growth in turbulent subcooled flow boiling was carried out. The macro- and micro-regions of the bubble were analyzed by introducing an LES-microlayer model. Analogous to the unresolved sub-grid scale (SGS) in LES, a microlayer analysis was performed to capture the unresolved thermal scales for the micro-region heat transfer by deriving equations for the microlayer thickness at steady and transient states. The phase change at the macro-region was based on the Volume-of-Fluid (VOF) interface-tracking method coupled with adaptive mesh refinement (AMR). Large Eddy Simulation (LES) was used to model the turbulence characteristics. The numerical model was validated against multiple experimental datasets from the open literature. This study includes parametric variations that cover the operating conditions of boiling water reactors (BWRs) and pressurized water reactors (PWRs). The numerical model was used to study the microlayer thickness, growth rate, dynamics, and distortion of the bubble.

  9. Isolating Numerical Error Effects in LES Using DNS-Derived Sub-Grid Closures

    Science.gov (United States)

    Edoh, Ayaboe; Karagozian, Ann

    2017-11-01

    The prospect of employing an explicitly-defined filter in Large-Eddy Simulations (LES) provides the opportunity to reduce the interaction of numerical/modeling errors and offers the chance to carry out grid-converged assessments, important for model development. By utilizing a quasi a priori evaluation method - wherein the LES is assisted by closures derived from a fully-resolved computation - it then becomes possible to understand the combined impacts of filter construction (e.g., filter width, spectral sharpness) and discretization choice on the solution accuracy. The present work looks at calculations of the compressible LES Navier-Stokes system and considers discrete filtering formulations in conjunction with high-order finite differencing schemes. Accuracy of the overall method construction is compared to a consistently-filtered exact solution, and lessons are extended to a posteriori (i.e., non-assisted) evaluations. Supported by ERC, Inc. (PS150006) and AFOSR (Dr. Chiping Li).
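
An explicitly defined discrete filter of the kind discussed here can be as simple as a symmetric stencil. The sketch below uses the classical three-point (1/4, 1/2, 1/4) filter, a standard textbook choice rather than the specific filter of this work; its transfer function cos²(κ/2) removes the grid-to-grid (Nyquist) oscillation exactly while passing smooth content nearly unchanged.

```python
import numpy as np

def explicit_filter(f):
    """Three-point discrete filter (1/4, 1/2, 1/4) on a periodic grid.
    Transfer function G(kappa) = cos^2(kappa/2): the Nyquist mode
    (kappa = pi) is annihilated, low wavenumbers are barely attenuated."""
    return 0.25 * np.roll(f, 1) + 0.5 * f + 0.25 * np.roll(f, -1)

n = 64
x = 2.0 * np.pi * np.arange(n) / n
smooth = np.sin(x)                   # well-resolved mode, kappa = 2*pi/n
nyquist = (-1.0) ** np.arange(n)     # grid-to-grid oscillation, kappa = pi
```

Applying the filter inside the discretized equations, rather than relying on the grid cutoff alone, is what allows the filter width to be held fixed during a grid-convergence study.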

  10. Large eddy simulation of transitional flow in an idealized stenotic blood vessel: evaluation of subgrid scale models.

    Science.gov (United States)

    Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H

    2014-07-01

    In the present study, we performed large eddy simulation (LES) of axisymmetric and eccentric 75%-stenosed arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost-point immersed boundary method (IBM) (Mark and van Wachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries, is used for simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the post-stenotic region. Supplementary simulations are performed using the Open source Field Operation And Manipulation (OpenFOAM) ("OpenFOAM," http://www.openfoam.org/) solver and the results are in line with those obtained with WenoHemo.
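
Of the three SGS closures compared, the Smagorinsky model is the simplest to state: νt = (CsΔ)²|S̄|, with |S̄| = (2 S̄ij S̄ij)^(1/2) built from the resolved strain rate. A minimal generic sketch (not the WenoHemo implementation; Cs = 0.17 is a conventional value, and the shear-flow example is illustrative):

```python
import numpy as np

def smagorinsky_nut(grad_u, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S|, where
    S is the symmetric part of the resolved velocity-gradient tensor
    and |S| = sqrt(2 S_ij S_ij)."""
    S = 0.5 * (grad_u + grad_u.T)
    S_mag = np.sqrt(2.0 * np.sum(S * S))
    return (cs * delta) ** 2 * S_mag

# Pure shear du/dy = gamma gives |S| = gamma, so nu_t = (Cs*Delta)^2 * gamma
gamma, delta = 2.0, 0.1
grad_u = np.array([[0.0, gamma, 0.0],
                   [0.0, 0.0,   0.0],
                   [0.0, 0.0,   0.0]])
nut = smagorinsky_nut(grad_u, delta)
```

The Vreman and Sigma models replace |S̄| with invariants of the velocity-gradient tensor chosen so that νt vanishes in laminar shear, which is why they can behave differently near transition.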

  11. Multi-scale data assimilation approaches and error characterisation applied to the inverse modelling of atmospheric constituent emission fields

    International Nuclear Information System (INIS)

    Koohkan, Mohammad Reza

    2012-01-01

    Data assimilation in geophysical sciences aims at optimally estimating the state of the system or some parameters of the system's physical model. To do so, data assimilation needs three types of information: observations and background information, a physical/numerical model, and some statistical description that prescribes uncertainties to each component of the system. In my dissertation, new methodologies of data assimilation are used in atmospheric chemistry and physics: the joint use of a 4D-Var with a sub-grid statistical model to consistently account for representativeness errors, accounting for multiple scales in the BLUE estimation principle, and a better estimation of prior errors using objective estimation of hyper-parameters. These three approaches will be specifically applied to inverse modelling problems focusing on the emission fields of tracers or pollutants. First, in order to estimate the emission inventories of carbon monoxide over France, in-situ stations which are impacted by the representativeness errors are used. A sub-grid model is introduced and coupled with a 4D-Var to reduce the representativeness error. Indeed, the results of inverse modelling showed that the 4D-Var routine was not fit to handle the representativeness issues. The coupled data assimilation system led to a much better representation of the CO concentration variability, with a significant improvement of statistical indicators, and more consistent estimation of the CO emission inventory. Second, the evaluation of the potential of the IMS (International Monitoring System) radionuclide network is performed for the inversion of an accidental source. In order to assess the performance of the global network, a multi-scale adaptive grid is optimised using a criterion based on degrees of freedom for the signal (DFS). The results show that several specific regions remain poorly observed by the IMS network. Finally, the inversion of the surface fluxes of Volatile Organic Compounds
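
The BLUE estimation principle referred to above combines a background state x_b with covariance B and observations y with operator H and covariance R. A generic sketch of the standard update, not the thesis's multiscale extension:

```python
import numpy as np

def blue_update(xb, B, y, H, R):
    """Best Linear Unbiased Estimator:
    xa = xb + K (y - H xb),  K = B H^T (H B H^T + R)^{-1}.
    Also returns the analysis-error covariance A = (I - K H) B."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    xa = xb + K @ (y - H @ xb)
    A = (np.eye(len(xb)) - K @ H) @ B
    return xa, A

# Scalar example: equal background and observation error variances,
# so the analysis sits halfway between background and observation.
xb = np.array([1.0]); B = np.array([[1.0]])
y = np.array([3.0]);  H = np.array([[1.0]]); R = np.array([[1.0]])
xa, A = blue_update(xb, B, y, H, R)
```

With equal error variances the gain is 1/2, giving xa = 2.0 and an analysis variance of 0.5, half the background variance.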

  12. Scale Pretesting

    Science.gov (United States)

    Howard, Matt C.

    2018-01-01

    Scale pretests analyze the suitability of individual scale items for further analysis, whether through judging their face validity, wording concerns, and/or other aspects. The current article reviews scale pretests, separated by qualitative and quantitative methods, in order to identify the differences, similarities, and even existence of the…

  13. Development of a new dynamic turbulent model, applications to two-dimensional and plane parallel flows

    International Nuclear Information System (INIS)

    Laval, Jean Philippe

    1999-01-01

    We developed a turbulent model based on an asymptotic development of the Navier-Stokes equations under the hypothesis of non-local interactions at small scales. This model provides expressions for the turbulent Reynolds sub-grid stresses via estimates of the sub-grid velocities rather than velocity correlations, as is usually done. The model involves the coupling of two dynamical equations: one for the resolved scales of motion, which depends upon the Reynolds stresses generated by the sub-grid motions, and one for the sub-grid scales of motion, which can be used to compute the sub-grid Reynolds stresses. The non-locality of interactions at sub-grid scales makes it possible to model their evolution with a linear inhomogeneous equation, where the forcing occurs via the energy cascade from resolved to sub-grid scales. This model was solved using a decomposition of the sub-grid scales on Gabor modes and implemented numerically in 2D with periodic boundary conditions. A particle-in-cell (PIC) method was used to compute the sub-grid scales. The results were compared with results of direct simulations for several typical flows. The model was also applied to plane parallel flows. An analytical study of the equations allows a description of mean velocity profiles in agreement with experimental results and theoretical results based on the symmetries of the Navier-Stokes equations. Possible applications and improvements of the model are discussed in the conclusion. (author)

  14. Maslowian Scale.

    Science.gov (United States)

    Falk, C.; And Others

    The development of the Maslowian Scale, a method of revealing a picture of one's needs and concerns based on Abraham Maslow's levels of self-actualization, is described. This paper also explains how the scale is supported by the theories of L. Kohlberg, C. Rogers, and T. Rusk. After a literature search, a list of statements was generated…

  15. Advanced Scale Conditioning Agent (ASCA) Applications: 2012 Experience Update

    International Nuclear Information System (INIS)

    Little, Michael-J.; Varrin, Robert-D.; Pellman, Aaron-T.; Kreider, Marc A.

    2012-09-01

    ASCAs are a group of dilute chemical treatments for removing deposited corrosion products from the secondary side of PWR steam generators (SGs). Each ASCA formulation is customized to achieve plant-specific goals that can include: - Partial dissolution and structural modification of the tube scale present on free span surfaces through full bundle treatment, lowering the deposit loading and enhancing SG thermal performance levels through creation of a scale structure marked by increased boiling efficiency, - Softening and partial removal of deposits present in the broached flow holes in the tube support plates, reducing the risks of level oscillations and flow-induced vibration (FIV), - Chemical removal of copper from tube scale and tube sheet deposits, reducing the risk of rapid tube corrosion caused by the oxidized conditions promoted by some copper species, and - Dissolution of hardness species from consolidated top-of-tube sheet (TTS) collars to enhance collar removal through water-jetting and other mechanical cleaning techniques. Regardless of the cleaning objectives for a particular plant, all ASCA processes are designed to minimize corrosion, waste disposal costs, and impact on the outage schedule. To date, about 40 ASCA applications have been carried out in four (4) countries. This paper provides an update of the industry experience gained during these applications, including results demonstrating the ability of ASCA processes to meet the goals outlined above. Experience at multiple units, including several repeat ASCA applications, has demonstrated significant heat-transfer benefits (i.e., steam pressure increases of up to 1-2 bar (15-30 psi)). ASCA applications also regularly achieve significant reductions in TSP blockage (i.e., up to 30% absolute increases in available flow area in broached flow holes) and have been successful in eliminating level oscillations caused by excessive broached-hole blockage. (authors)

  16. Application, evaluation and sensitivity analysis of the coupled WRF-CMAQ system from regional to urban scales

    Science.gov (United States)

    Appel, W.; Gilliam, R. C.; Mathur, R.; Roselle, S. J.; Pleim, J. E.; Hogrefe, C.; Pouliot, G.

    2017-12-01

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science chemical transport model (CTM) capable of simulating the emission, transport and fate of numerous air pollutants. Similarly, the Weather Research and Forecasting (WRF) model is a state-of-the-science meteorological model capable of simulating meteorology at many scales (e.g. global to urban). The coupled WRF-CMAQ system integrates these two models in a "two-way" configuration which allows feedback effects between the chemical (e.g. aerosols) and physical (e.g. solar radiation) states of the atmosphere. In addition, the coupled modeling system allows for more frequent communication between the CTM and meteorological model than is typically done in uncoupled WRF-CMAQ simulations. The goal of this modeling exercise is to assess the ability of the coupled WRF-CMAQ system at fine scales (e.g. 4 km to 1 km) through comparison with high space- and time-resolution field measurements, and to compare those results to the traditional regional-scale (e.g. 12 km) simulation. This work will specifically examine several fine-scale simulations over the Eastern United States and the Baltimore, MD/Washington, DC region for 2011, with special emphasis on the period of the DISCOVER-AQ field campaign. In addition to evaluating the model performance at the various scales, the impact of the more frequent time coupling of the CTM and meteorology, aerosol feedback effects and lightning-generated NO at the finer spatial resolutions will be assessed. The effect of simulating sub-grid clouds using several different options (i.e. explicit, parameterized or assimilated) will also be examined, since clouds are particularly important as they can have a large impact on both the meteorology (beyond the clouds themselves) and air quality, and are notoriously difficult to simulate accurately.

  17. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    Science.gov (United States)

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

    Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2) structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  18. Framing scales and scaling frames

    NARCIS (Netherlands)

    van Lieshout, M.; Dewulf, A.; Aarts, N.; Termeer, K.

    2009-01-01

    Policy problems are not just out there. Actors highlight different aspects of a situation as problematic and situate the problem on different scales. In this study we will analyse the way actors apply scales in their talk (or texts) to frame the complex decision-making process of the establishment

  19. Pollen-mediated gene flow and fine-scale spatial genetic structure in Olea europaea subsp. europaea var. sylvestris.

    Science.gov (United States)

    Beghè, D; Piotti, A; Satovic, Z; de la Rosa, R; Belaj, A

    2017-03-01

    Wild olive (Olea europaea subsp. europaea var. sylvestris) is important from an economic and ecological point of view. The effects of anthropogenic activities may lead to the genetic erosion of its genetic patrimony, which has high value for breeding programmes. In particular, the consequences of the introgression from cultivated stands are strongly dependent on the extent of gene flow and therefore this work aims at quantitatively describing contemporary gene flow patterns in wild olive natural populations. The studied wild population is located in an undisturbed forest, in southern Spain, considered one of the few extant hotspots of true oleaster diversity. A total of 225 potential father trees and seeds issued from five mother trees were genotyped by eight microsatellite markers. Levels of contemporary pollen flow, in terms of both pollen immigration rates and within-population dynamics, were measured through paternity analyses. Moreover, the extent of fine-scale spatial genetic structure (SGS) was studied to assess the relative importance of seed and pollen dispersal in shaping the spatial distribution of genetic variation. The results showed that the population under study is characterized by a high genetic diversity, a relatively high pollen immigration rate (0·57), an average within-population pollen dispersal of about 107 m and weak but significant SGS up to 40 m. The population is a mosaic of several intermingled genetic clusters that is likely to be generated by spatially restricted seed dispersal. Moreover, wild oleasters were found to be self-incompatible and preferential mating between some genotypes was revealed. Knowledge of the within-population genetic structure and gene flow dynamics will lead to identifying possible strategies aimed at limiting the effect of anthropogenic activities and improving breeding programmes for the conservation of olive tree forest genetic resources.

  20. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    Science.gov (United States)

    Sperna Weiland, Frederiek C.; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-10-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we focus on large-scale hydrologic modeling and analyze the effect of parameter and rainfall data uncertainty on simulated discharge dynamics with the global hydrologic model PCR-GLOBWB. We use three rainfall data products: the CFSR reanalysis, the ERA-Interim reanalysis, and a combined ERA-40 reanalysis and CRU dataset. Parameter uncertainty is derived from Latin Hypercube Sampling (LHS) using monthly discharge data from five of the largest river systems in the world. Our results demonstrate that the default parameterization of PCR-GLOBWB, derived from global datasets, can be improved by calibrating the model against monthly discharge observations. Yet, it is difficult to find a single parameterization of PCR-GLOBWB that works well for all of the five river basins considered herein and shows consistent performance during both the calibration and evaluation period. Still there may be possibilities for regionalization based on catchment similarities. Our simulations illustrate that parameter uncertainty constitutes only a minor part of predictive uncertainty. Thus, the apparent dichotomy between simulations of global-scale hydrologic behavior and actual data cannot be resolved by simply increasing the model complexity of PCR-GLOBWB and resolving sub-grid processes. Instead, it would be more productive to improve the characterization of global rainfall amounts at spatial resolutions of 0.5° and smaller.
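
Latin Hypercube Sampling, used above to generate the parameter sets, stratifies each parameter's range into N equal-probability bins, draws one sample per bin, and randomly pairs bins across parameters. A generic sketch on the unit hypercube (the mapping to actual PCR-GLOBWB parameter ranges is omitted):

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """Return an (n_samples, n_params) design on [0, 1): each column places
    exactly one point in each of the n_samples equal-width strata."""
    jitter = rng.random((n_samples, n_params))            # position within stratum
    design = (np.arange(n_samples)[:, None] + jitter) / n_samples
    for j in range(n_params):                             # decouple stratum pairing
        design[:, j] = design[rng.permutation(n_samples), j]
    return design

rng = np.random.default_rng(1)
design = latin_hypercube(10, 3, rng)
```

Compared with plain random sampling, this guarantees that each parameter's marginal distribution is covered evenly even for small sample counts, which matters when each sample is an expensive global model run.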

  1. Scaling down

    Directory of Open Access Journals (Sweden)

    Ronald L Breiger

    2015-11-01

    While “scaling up” is a lively topic in network science and Big Data analysis today, my purpose in this essay is to articulate an alternative problem, that of “scaling down,” which I believe will also require increased attention in coming years. “Scaling down” is the problem of how macro-level features of Big Data affect, shape, and evoke lower-level features and processes. I identify four aspects of this problem: the extent to which findings from studies of Facebook and other Big-Data platforms apply to human behavior at the scale of church suppers and department politics where we spend much of our lives; the extent to which the mathematics of scaling might be consistent with behavioral principles, moving beyond a “universal” theory of networks to the study of variation within and between networks; and how a large social field, including its history and culture, shapes the typical representations, interactions, and strategies at local levels in a text or social network.

  2. KNO scaling

    International Nuclear Information System (INIS)

    Golokhvastov, A.I.

    2001-01-01

    A correct version of the KNO scaling of multiplicity distributions is discussed in detail. Some assertions on KNO-scaling violation based on the misinterpretation of experimental data behavior are analyzed. An accurate comparison with experiment is presented for the distributions of negative particles in e+e- annihilation at √S = 3 - 161 GeV, in inelastic pp interactions at √S = 2.4 - 62 GeV and in nucleus-nucleus interactions at p_lab = 4.5 - 520 GeV/c per nucleon. The p-bar p data at √S = 546 GeV are also considered.
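    For orientation, KNO scaling is usually written as a collapse of the multiplicity distributions at different collision energies onto a single energy-independent function of the scaled variable z = n/⟨n⟩. A standard (textbook, not paper-specific) form is:

```latex
\langle n(s) \rangle \, P_n(s) \;=\; \psi\!\left( \frac{n}{\langle n(s) \rangle} \right),
```

    where P_n(s) is the probability of producing n (e.g. negative) particles at c.m. energy √s and ψ is the same function at all energies. The "correct version" discussed in the paper concerns how this continuous relation is applied to the discrete distribution P_n; the precise prescription is given there.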

  3. Scaling satan.

    Science.gov (United States)

    Wilson, K M; Huff, J L

    2001-05-01

    The influence on social behavior of beliefs in Satan and the nature of evil has received little empirical study. Elaine Pagels (1995) in her book, The Origin of Satan, argued that Christians' intolerance toward others is due to their belief in an active Satan. In this study, more than 200 college undergraduates completed the Manitoba Prejudice Scale and the Attitudes Toward Homosexuals Scale (B. Altemeyer, 1988), as well as the Belief in an Active Satan Scale, developed by the authors. The Belief in an Active Satan Scale demonstrated good internal consistency and temporal stability. Correlational analyses revealed that for the female participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men and intolerance toward ethnic minorities. For the male participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men but was not significantly related to intolerance toward ethnic minorities. Results of this research showed that it is possible to meaningfully measure belief in an active Satan and that such beliefs may encourage intolerance toward others.

  4. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    Science.gov (United States)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.

    2017-08-01

    The EC-Earth earth system model has recently been developed to include the dynamics of vegetation. In its original formulation, vegetation variability is represented solely by the Leaf Area Index (LAI), which affects climate primarily by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to the decadal (5-year), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over
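    The exponential dependence of vegetation cover on LAI mentioned above is commonly written as a Lambert-Beer type saturating relation. The sketch below assumes that form with an illustrative extinction coefficient k, which is not necessarily the value used in EC-Earth:

```python
import numpy as np

def effective_veg_cover(lai, k=0.5):
    """Effective fractional vegetation cover as a saturating function of LAI.

    Lambert-Beer type form: cover is 0 for bare ground (LAI = 0) and tends
    asymptotically to 1 for dense canopies. k is an assumed extinction
    coefficient, not necessarily the EC-Earth value.
    """
    lai = np.asarray(lai, dtype=float)
    return 1.0 - np.exp(-k * lai)

# A seasonal LAI cycle yields a time-varying cover, which in turn modulates
# albedo, surface roughness and soil field capacity in the coupled model.
lai_cycle = np.array([0.5, 1.0, 2.5, 4.0, 5.0, 3.0, 1.0])
cover = effective_veg_cover(lai_cycle)
```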

  5. Nuclear scales

    International Nuclear Information System (INIS)

    Friar, J.L.

    1998-01-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the π-γ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted.

  6. Nuclear scales

    Energy Technology Data Exchange (ETDEWEB)

    Friar, J.L.

    1998-12-01

    Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the {pi}-{gamma} force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted.

  7. Spatiotemporal Variability of Turbulence Kinetic Energy Budgets in the Convective Boundary Layer over Both Simple and Complex Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Raj K. [Pacific Northwest National Laboratory, Richland, Washington]; Berg, Larry K. [Pacific Northwest National Laboratory, Richland, Washington]; Pekour, Mikhail [Pacific Northwest National Laboratory, Richland, Washington]; Shaw, William J. [Pacific Northwest National Laboratory, Richland, Washington]; Kosovic, Branko [National Center for Atmospheric Research, Boulder, Colorado]; Mirocha, Jeffrey D. [Lawrence Livermore National Laboratory, Livermore, California]; Ennis, Brandon L. [Sandia National Laboratories, Albuquerque, New Mexico]

    2017-12-01

    The assumption of sub-grid scale (SGS) horizontal homogeneity within a model grid cell, which forms the basis of SGS turbulence closures used by mesoscale models, becomes increasingly tenuous as grid spacing is reduced to a few kilometers or less, such as in many emerging high-resolution applications. Herein, we use the turbulence kinetic energy (TKE) budget equation to study the spatiotemporal variability in two types of terrain—complex (Columbia Basin Wind Energy Study [CBWES] site, north-eastern Oregon) and flat (Scaled Wind Farm Technologies [SWiFT] site, west Texas)—using the Weather Research and Forecasting (WRF) model. In each case six nested domains (three domains each for mesoscale and large-eddy simulation [LES]) are used to downscale the horizontal grid spacing from 10 km to 10 m within the WRF model framework. The model output was used to calculate the values of the TKE budget terms in vertical and horizontal planes as well as the averages of grid cells contained in the four quadrants (a quarter area) of the LES domain. The budget terms calculated along the planes and the mean profiles of the budget terms show larger spatial variability at the CBWES site than at the SWiFT site. The contribution of the horizontal derivatives of the shear-production term was found to be 45% and 15% of the total shear production at the CBWES and SWiFT sites, respectively, indicating that the horizontal-derivative terms in the budget equation should not be ignored in mesoscale model parameterizations, especially for cases with complex terrain at scales below 10 km.
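    As a minimal illustration of the resolved part of such a budget analysis, the sketch below computes resolved TKE and the vertical shear-production term from gridded velocity fields. The variable names and idealized fields are assumptions for illustration, not WRF output, and the plane-averaged form deliberately omits the horizontal-derivative terms that the study shows matter over complex terrain:

```python
import numpy as np

def resolved_tke_and_shear_production(u, v, w, dz):
    """Resolved TKE and vertical shear production -<u'w'> dU/dz per level.

    u, v, w: velocity arrays shaped (nz, ny, nx). Horizontal (plane) averages
    stand in for the ensemble mean. This keeps only the vertical-derivative
    part of shear production; horizontal-derivative contributions are ignored.
    """
    U = u.mean(axis=(1, 2), keepdims=True)
    V = v.mean(axis=(1, 2), keepdims=True)
    W = w.mean(axis=(1, 2), keepdims=True)
    up, vp, wp = u - U, v - V, w - W
    tke = 0.5 * (up**2 + vp**2 + wp**2).mean(axis=(1, 2))
    dUdz = np.gradient(U[:, 0, 0], dz)
    shear_prod = -(up * wp).mean(axis=(1, 2)) * dUdz
    return tke, shear_prod
```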

  8. A high-order Petrov-Galerkin method for the Boltzmann transport equation

    International Nuclear Information System (INIS)

    Pain, C.C.; Candy, A.S.; Piggott, M.D.; Buchan, A.; Eaton, M.D.; Goddard, A.J.H.; Oliveira, C.R.E. de

    2005-01-01

    We describe a new Petrov-Galerkin method using high-order terms to introduce dissipation in a residual-free formulation. The method is developed following both a Taylor series analysis and a variational principle, and the result has much in common with traditional Petrov-Galerkin, Self Adjoint Angular Flux (SAAF) and Even Parity forms of the Boltzmann transport equation. In addition, we consider the subtleties in constructing appropriate boundary conditions. In sub-grid scale (SGS) modelling of fluids the advantages of high-order dissipation are well known. Fourth-order terms, for example, are commonly used as a turbulence model with uniform dissipation. They have been shown to have superior properties to SGS models based upon second-order dissipation or viscosity. Even higher-order forms of dissipation (e.g. 16th-order) can offer further advantages, but are only easily realised by spectral methods because of the solution continuity requirements that these higher-order operators demand. Higher-order operators are more effective, bringing a higher degree of representation to the solution locally. Second-order operators, for example, tend to relax the solution to a linear variation locally, whereas a high-order operator will tend to relax the solution to a second-order polynomial locally. The form of the dissipation is also important. For example, the dissipation may only be applied (as it is in this work) in the streamline direction. While for many problems, for example Large Eddy Simulation (LES), simply adding a second or fourth-order dissipation term is a perfectly satisfactory SGS model, it is well known that a consistent residual-free formulation is required for radiation transport problems. This motivated the consideration of a new Petrov-Galerkin method that is residual-free, but also benefits from the advantageous features that SGS modelling introduces. We close with a demonstration of the advantages of this new discretization method over standard Petrov
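    To illustrate the difference between second- and fourth-order dissipation in the simplest possible setting, the sketch below applies each operator to a 1D periodic field. This is only an illustration of the operators' scale-selective smoothing, not the residual-free Petrov-Galerkin scheme of the paper:

```python
import numpy as np

def dissipate(phi, eps, order, steps=1):
    """Apply explicit second- or fourth-order artificial dissipation.

    order=2: phi += eps * d2(phi)      (relaxes toward locally linear profiles)
    order=4: phi -= eps * d2(d2(phi))  (damps mainly the shortest wavelengths,
             relaxing toward locally quadratic profiles)
    Periodic boundaries via np.roll; eps is a dimensionless coefficient
    (explicit stability requires eps < 1/8 for the fourth-order operator).
    """
    d2 = lambda f: np.roll(f, 1) - 2.0 * f + np.roll(f, -1)
    for _ in range(steps):
        if order == 2:
            phi = phi + eps * d2(phi)
        else:
            phi = phi - eps * d2(d2(phi))
    return phi

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
smooth = np.sin(x)                        # long-wavelength mode
noisy = smooth + 0.1 * np.cos(32 * x)     # add grid-scale (Nyquist) noise
```

    After a few steps the fourth-order operator removes the grid-scale noise while leaving the long-wavelength mode essentially untouched; the second-order operator damps both.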

  9. Molecular scale

    Directory of Open Access Journals (Sweden)

    Christopher H. Childers

    2016-03-01

    Full Text Available This manuscript demonstrates the molecular scale cure rate dependence of di-functional epoxide based thermoset polymers cured with amines. A series of cure heating ramp rates were used to determine the influence of ramp rate on the glass transition temperature (Tg) and sub-Tg transitions and the average free volume hole size in these systems. The networks were comprised of 3,3′-diaminodiphenyl sulfone (33DDS) and diglycidyl ether of bisphenol F (DGEBF) and were cured at ramp rates ranging from 0.5 to 20 °C/min. Differential scanning calorimetry (DSC) and NIR spectroscopy were used to explore the cure ramp rate dependence of the polymer network growth, whereas broadband dielectric spectroscopy (BDS) and free volume hole size measurements were used to interrogate networks’ molecular level structural variations upon curing at variable heating ramp rates. It was found that although the Tg of the polymer matrices was similar, the NIR and DSC measurements revealed a strong correlation for how these networks grow in relation to the cure heating ramp rate. The free volume analysis and BDS results for the cured samples suggest differences in the molecular architecture of the matrix polymers due to cure heating rate dependence.

  10. Wall-resolved Large Eddy Simulation of a flow through a square-edged orifice in a round pipe at Re = 25,000

    Energy Technology Data Exchange (ETDEWEB)

    Benhamadouche, S., E-mail: sofiane.benhamadouche@edf.fr; Arenas, M.; Malouf, W.J.

    2017-02-15

    Highlights: • Wall-resolved LES can predict the flow through a square-edged orifice at Re = 25,000. • LES results are compared with the available experimental data and ISO 5167-2. • Pressure loss and discharge coefficients are in very good agreement with ISO 5167-2. • The present wall-resolved LES could be used as reference data for RANS validation. - Abstract: The orifice plate is a pressure differential device frequently used for flow measurements in pipes across different industries. The present study demonstrates the accuracy obtainable using a wall-resolved Large Eddy Simulation (LES) approach to predict the velocity, the Reynolds stresses, the pressure loss and the discharge coefficient for a flow through a square-edged orifice in a round pipe at a Reynolds number of 25,000. The ratio of the orifice diameter to the pipe diameter is β = 0.62, and the ratio of the orifice thickness to the pipe diameter is 0.11. The mesh is sized using refinement criteria at the wall and preliminary RANS results to ensure that the solution is resolved beyond an estimated Taylor micro-scale. The inlet condition is simulated using a recycling method, and the LES is run with a dynamic Smagorinsky sub-grid scale (SGS) model. The sensitivity to the SGS model and to the pressure–velocity coupling is shown to be small in the present study. The LES is compared with the available experimental data and ISO 5167-2. In general, the LES shows good agreement with the velocity from the experimental data. The profiles of the Reynolds stresses are similar, but an offset is observed in the diagonal stresses. The pressure loss and discharge coefficients are shown to be in very good agreement with the predictions of ISO 5167-2. Therefore, the wall-resolved LES is shown to be highly accurate in simulating the flow across a square-edged orifice.
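    For context, the discharge coefficient ties the measured pressure differential across the plate to the volumetric flow rate. A minimal sketch of the standard incompressible orifice equation is given below; the numerical inputs are illustrative, not the conditions of the study:

```python
import math

def orifice_flow_rate(cd, beta, d_orifice, dp, rho):
    """Volumetric flow rate through an orifice plate (incompressible form).

    Q = Cd / sqrt(1 - beta^4) * A_o * sqrt(2 * dp / rho)
    cd: discharge coefficient; beta: orifice-to-pipe diameter ratio d/D;
    d_orifice: orifice diameter [m]; dp: pressure differential [Pa];
    rho: fluid density [kg/m^3]. Values below are illustrative only.
    """
    area = math.pi * d_orifice**2 / 4.0
    return cd / math.sqrt(1.0 - beta**4) * area * math.sqrt(2.0 * dp / rho)

# Hypothetical water-flow example with beta = 0.62 as in the study geometry
q = orifice_flow_rate(cd=0.61, beta=0.62, d_orifice=0.062, dp=5e3, rho=998.0)
```

    In the paper the comparison runs the other way: the LES supplies the pressure differential for a known flow rate, and the resulting Cd is checked against ISO 5167-2.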

  11. Effect of turbulent model closure and type of inlet boundary condition on a Large Eddy Simulation of a non-reacting jet with co-flow stream

    International Nuclear Information System (INIS)

    Payri, Raul; López, J. Javier; Martí-Aldaraví, Pedro; Giraldo, Jhoan S.

    2016-01-01

    Highlights: • LES in a non-reacting jet with co-flow is performed with OpenFoam. • Smagorinsky (SMAG) and One Equation Eddy (OEE) approaches are compared. • A turbulent pipe is used to generate and map coherent inlet turbulence structure. • Fluctuating inlet boundary condition requires much less computational cost. - Abstract: In this paper, the behavior and turbulence structure of a non-reacting jet with a co-flow stream is described by means of Large Eddy Simulations (LES) carried out with the computational tool OpenFoam. In order to study the influence of the sub-grid scale (SGS) model on the main flow statistics, Smagorinsky (SMAG) and One Equation Eddy (OEE) approaches are used to model the smallest scales involved in the turbulence of the jet. The impact of cell size and turbulent inlet boundary condition on the resulting velocity profiles is analyzed as well. Four different tasks have been performed to accomplish these objectives. Firstly, the simulation of a turbulent pipe, which is necessary to generate and map coherent turbulence structure into the inlet of the non-reacting jet domain. Secondly, a structured mesh based on hexahedrons has been built for the jet and its co-flow. The third task consists of performing four different simulations. In those, mapping statistics from the turbulent pipe is compared with the use of the fluctuating inlet boundary condition available in OpenFoam; OEE and SMAG approaches are contrasted; and the effect of changing cell size is investigated. Finally, as a fourth task, the obtained results are compared with experimental data. As main conclusions of this comparison, it is shown that the fluctuating boundary condition requires much less computational cost, although some inaccuracies were found close to the nozzle. Also, both SGS models are capable of simulating this kind of jet with a co-flow stream accurately.
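    For reference, the Smagorinsky closure compared above models the SGS eddy viscosity from the resolved strain rate. The sketch below assumes the standard constant-coefficient form (OpenFoam's implementation differs in details, and the constant value is an illustrative choice):

```python
import numpy as np

def smagorinsky_nut(grad_u, delta, cs=0.17):
    """SGS eddy viscosity nu_t = (Cs * Delta)^2 * |S|, |S| = sqrt(2 S_ij S_ij).

    grad_u: velocity-gradient tensor du_i/dx_j with shape (..., 3, 3).
    delta: filter width (typically the cell size).
    cs: Smagorinsky constant; values of roughly 0.1-0.2 are used depending
    on the flow, and 0.17 here is an illustrative assumption.
    """
    s = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))      # strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s, axis=(-1, -2)))   # |S|
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dy = 10 1/s on a 1 mm grid
g = np.zeros((3, 3))
g[0, 1] = 10.0
nut = smagorinsky_nut(g, delta=1e-3)
```

    The One Equation Eddy model instead transports the SGS kinetic energy and builds nu_t from it, which is why the two closures can behave differently near the nozzle.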

  12. Wall-resolved Large Eddy Simulation of a flow through a square-edged orifice in a round pipe at Re = 25,000

    International Nuclear Information System (INIS)

    Benhamadouche, S.; Arenas, M.; Malouf, W.J.

    2017-01-01

    Highlights: • Wall-resolved LES can predict the flow through a square-edged orifice at Re = 25,000. • LES results are compared with the available experimental data and ISO 5167-2. • Pressure loss and discharge coefficients are in very good agreement with ISO 5167-2. • The present wall-resolved LES could be used as reference data for RANS validation. - Abstract: The orifice plate is a pressure differential device frequently used for flow measurements in pipes across different industries. The present study demonstrates the accuracy obtainable using a wall-resolved Large Eddy Simulation (LES) approach to predict the velocity, the Reynolds stresses, the pressure loss and the discharge coefficient for a flow through a square-edged orifice in a round pipe at a Reynolds number of 25,000. The ratio of the orifice diameter to the pipe diameter is β = 0.62, and the ratio of the orifice thickness to the pipe diameter is 0.11. The mesh is sized using refinement criteria at the wall and preliminary RANS results to ensure that the solution is resolved beyond an estimated Taylor micro-scale. The inlet condition is simulated using a recycling method, and the LES is run with a dynamic Smagorinsky sub-grid scale (SGS) model. The sensitivity to the SGS model and to the pressure–velocity coupling is shown to be small in the present study. The LES is compared with the available experimental data and ISO 5167-2. In general, the LES shows good agreement with the velocity from the experimental data. The profiles of the Reynolds stresses are similar, but an offset is observed in the diagonal stresses. The pressure loss and discharge coefficients are shown to be in very good agreement with the predictions of ISO 5167-2. Therefore, the wall-resolved LES is shown to be highly accurate in simulating the flow across a square-edged orifice.

  13. HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS

    Energy Technology Data Exchange (ETDEWEB)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.

    2016-06-01

    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For a full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide us with high fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to make the transient computation parallel, thus making the number of time steps required in high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high fidelity simulation-driven approach to model sub-grid scale (SGS) effects in Coarse Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of the deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation is shown to achieve a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.
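    A toy version of the surrogate idea is to fit a statistical correction for a coarse-grid field from paired coarse/high-fidelity data and then apply it to new coarse solutions. Everything below (the synthetic "high-fidelity" profile, the polynomial form of the correction) is an illustrative stand-in, not the model of the paper:

```python
import numpy as np

# Synthetic stand-ins: a coarse-grid temperature profile and a "high-fidelity"
# reference that the coarse model systematically misses (illustrative only).
z = np.linspace(0.0, 1.0, 20)                  # normalized layer depth
t_coarse = 1.0 - z                             # coarse solution
t_hifi = 1.0 - z + 0.15 * np.sin(np.pi * z)    # reference with an interior bump

# Surrogate: least-squares polynomial in z for the discrepancy t_hifi - t_coarse
coeffs = np.polyfit(z, t_hifi - t_coarse, deg=4)

def correct(z_new, t_new):
    """Apply the learned statistical correction to a new coarse solution."""
    return t_new + np.polyval(coeffs, z_new)

t_corrected = correct(z, t_coarse)
```

    The paper's surrogate is trained on DNS data and corrects the energy balance equation itself rather than post-processing the solution, but the train-then-correct workflow is the same.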

  14. Multi-scale enhancement of climate prediction over land by improving the model sensitivity to vegetation variability

    Science.gov (United States)

    Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2017-12-01

    Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal (2-4 months) and weather (4 days) time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover consistently corrects the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in

  15. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    Science.gov (United States)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are

  16. Large Eddy Simulation of Turbulence Modification and Particle Dispersion in a Fully-Developed Pipe Flow

    Science.gov (United States)

    Rani, Sarma; Pratap Vanka, Surya

    1999-11-01

    A LES study of the modification of turbulence in a fully-developed turbulent pipe flow by dispersed heavy particles at Re_τ = 360 is presented. A 64 (radial) x 64 (azimuthal) x 128 (axial) grid has been used. An Eulerian-Lagrangian approach has been used for treating the continuous and the dispersed phases respectively. The particle equation of motion included only the drag force. Three different LES models are used in the continuous fluid simulation: (i) a “No-Model” LES (coarse-grid DNS), (ii) Smagorinsky’s model and (iii) Schumann’s model. The motivation behind employing Schumann’s model is to study the impact of sub-grid-scale (SGS) fluctuations on the particle motion and, in turn, the modulation of those fluctuations by the particles. The effect of particles on fluid turbulence is investigated by tracking 100,000 particles of different diameters. Our studies confirm the preferential concentration of particles in the near wall region. It is observed that the inclusion of two-way coupling reduces the preferential concentration of particles. In addition, it was found that two-way coupling attenuates the fluid turbulence. However, we expect the above trends to differ depending upon the particle diameter, volumetric and mass fractions. The effect of SGS fluctuations on the particle dispersion and turbulence modulation is also being investigated. Other relevant statistics for the continuous and the dispersed phases are collected for the cases of one-way and two-way coupling. These statistics are compared to study the modulation of turbulence by the particles.
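    The drag-only particle equation of motion used above can be integrated per particle as sketched below. The Stokes-drag response time and the numerical values are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def advance_particle(v_p, u_fluid, tau_p, dt):
    """One step of the drag-only equation of motion dv_p/dt = (u_fluid - v_p)/tau_p.

    Exact exponential update assuming the fluid velocity seen by the particle
    is constant over the step, so it is stable for any dt. For Stokes drag,
    tau_p = rho_p * d^2 / (18 * mu).
    """
    decay = np.exp(-dt / tau_p)
    return u_fluid + (v_p - u_fluid) * decay

# Hypothetical heavy particle relaxing toward the local fluid velocity
tau_p = 5e-3            # assumed particle response time [s]
v = 0.0
for _ in range(100):    # 100 steps of 1 ms
    v = advance_particle(v, u_fluid=1.0, tau_p=tau_p, dt=1e-3)
```

    In the one-way coupled case `u_fluid` is simply the resolved (plus, for Schumann's model, modeled SGS) fluid velocity at the particle position; two-way coupling feeds the drag reaction back into the fluid momentum equation.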

  17. Integrative monitoring of water storage variations at the landscape-scale with an iGrav superconducting gravimeter in a field enclosure

    Science.gov (United States)

    Guntner, A.; Reich, M.; Mikolaj, M.; Creutzfeldt, B.; Schroeder, S.; Wziontek, H.

    2017-12-01

    In spite of the fundamental role of the landscape water balance for the Earth's water and energy cycles, monitoring the water balance and related storage dynamics beyond the point scale is notoriously difficult due to the multitude of flow and storage processes and their spatial heterogeneity. We present the first outdoor deployment of an iGrav superconducting gravimeter (SG) in a minimized field enclosure on a wet-temperate grassland site for integrative monitoring of water storage changes. It is shown that the system performs as precisely as SGs that have hitherto been deployed in observatory buildings, but with higher sensitivity to hydrological variations in the surroundings of the instrument. Gravity variations observed by the field setup are almost independent of the depth below the terrain surface where water storage changes occur, and thus the field SG system directly observes the total water storage change in an integrative way. We provide a framework to single out the water balance components actual evapotranspiration and lateral subsurface discharge from the gravity time series on annual to daily time scales. With about 99% and 85% of the gravity signal originating within a radius of 4000 and 200 meters around the instrument, respectively, the setup paves the way toward gravimetry as a continuous hydrological field monitoring technique for water storage dynamics at the landscape scale.

  18. A parametric study of quasi-2D LES on Low-Reynolds-number transitional flows past an airfoil

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, W.; Xu, H.; Khalid, M. [National Research Council (NRC), Inst. for Aerospace Research (IAR), Ottawa, Ontario (Canada)]. E-mail: Weixing.Yuan@nrc-cnrc.gc.ca

    2004-07-01

    Low-Reynolds-number aerodynamic performance of small-sized air vehicles is an area of increasing interest. In this study, we investigate low-Reynolds-number flows past an SD7003 airfoil to understand substantial viscous features of laminar separation and transitional flow followed by the intractable behavior of reattachment. In order to satisfy the three-dimensional (3D) requirement of the code, a simple '3D wing' is constructed from a two-dimensional (2D) airfoil and only four grid points are used in the spanwise direction. A parametric study of quasi-2D LES on the low-Reynolds-number airfoil flows at Re = 60,000 is performed. Effects of grid resolution and sub-grid scale (SGS) models are investigated. Although three-dimensional effects cannot be accurately captured, the quasi-2D LES calculations do reveal some important flow characteristics such as leading edge laminar separation and vortex shedding from the primary laminar separation bubble on the low-Reynolds-number airfoil. (author)

  19. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases on cloud properties which could not be fully explained by the uncertainty from the large-scale forcing

  20. The main building of the 'Societe generale de surveillance (SGS)'. Transparency and modernity.; Le siege de la SGS. Transparence et modernite.

    Energy Technology Data Exchange (ETDEWEB)

    Jaques, A.

    2000-07-01

    The construction of the new building that will house the headquarters of the 'Societe generale de surveillance' in Geneva (Switzerland) is presented. It is based on two main priorities: transparency and minimum environmental impact. The building, realised with a metallic structure, has fully transparent facades composed of a thermally insulating exterior double pane and a movable interior single pane. In order to respect the urban environment where the building sits, the engineers had to avoid any combustion heating plant and cooling towers on the roof. Given the proximity of the lake, a reversible heat pump was chosen as the energy source for the air-conditioning system, with the lake water as the cold source. Two large tanks containing hot water and ice help to reduce temperature fluctuations and limit the temperature of the heated water returned to the lake when operating in cooling mode.

  1. Scaling of Metabolic Scaling within Physical Limits

    Directory of Open Access Journals (Sweden)

    Douglas S. Glazier

    2014-10-01

    Both the slope and elevation of scaling relationships between log metabolic rate and log body size vary taxonomically and in relation to physiological or developmental state, ecological lifestyle and environmental conditions. Here I discuss how the recently proposed metabolic-level boundaries hypothesis (MLBH) provides a useful conceptual framework for explaining and predicting much, but not all, of this variation. This hypothesis is based on three major assumptions: (1) various processes related to body volume and surface area exert state-dependent effects on the scaling slope for metabolic rate in relation to body mass; (2) the elevation and slope of metabolic scaling relationships are linked; and (3) both intrinsic (anatomical, biochemical and physiological) and extrinsic (ecological) factors can affect metabolic scaling. According to the MLBH, the diversity of metabolic scaling relationships occurs within physical boundary limits related to body volume and surface area. Within these limits, specific metabolic scaling slopes can be predicted from the metabolic level (or scaling elevation) of a species or group of species. In essence, metabolic scaling itself scales with metabolic level, which is in turn contingent on various intrinsic and extrinsic conditions operating in physiological or evolutionary time. The MLBH represents a “meta-mechanism” or collection of multiple, specific mechanisms that have contingent, state-dependent effects. As such, the MLBH is Darwinian in approach (the theory of natural selection is also meta-mechanistic), in contrast to currently influential metabolic scaling theory that is Newtonian in approach (i.e., based on unitary deterministic laws). Furthermore, the MLBH can be viewed as part of a more general theory that includes other mechanisms that may also affect metabolic scaling.

  2. Flux scaling: Ultimate regime

    Indian Academy of Sciences (India)

    Flux scaling: Ultimate regime. With the Nusselt number and the mixing-length scales, we obtain the Nusselt number and Reynolds number (w'd/ν) scalings expected to occur at extremely high Ra in Rayleigh-Benard convection: the ultimate regime.

  3. To scale or not to scale

    DEFF Research Database (Denmark)

    Svendsen, Morten Bo Søndergaard; Christensen, Emil Aputsiaq Flindt; Steffensen, John Fleng

    2017-01-01

    Conventionally, dynamic energy budget (DEB) models operate with animals that have maintenance rates scaling with their body volume, and assimilation rates scaling with body surface area. However, when applying such criteria for the individual in a population level model, the emergent behaviour...

  4. Atlantic Salmon Scale Measurements

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Scales are collected annually from smolt trapping operations in Maine as well as other sampling opportunities (e.g. marine surveys, fishery sampling etc.). Scale...

  5. Application of large-scale sequencing to marker discovery in plants

    Indian Academy of Sciences (India)

    2012-10-15

    Oct 15, 2012 ... mate-pair libraries (large insert libraries), RNA-Seq data, reduced ... range of different applications for SGS have been developed and applied to marker ..... duced by human selection for desirable grain qualities. A total of 399 ...

  6. Concepts of scale

    NARCIS (Netherlands)

    Padt, F.J.G.; Arts, B.J.M.

    2014-01-01

    This chapter provides some clarity to the scale debate. It bridges a variety of approaches, definitions and jargons used in various disciplines in order to provide common ground for a concept of scale as a basis for scale-sensitive governance of the environment. The chapter introduces the concept of

  7. Physical modelling of interactions between interfaces and turbulence; Modelisation physique des interactions entre interfaces et turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Toutant, A

    2006-12-15

    The complex interactions between interfaces and turbulence strongly impact the flow properties. Unfortunately, Direct Numerical Simulations (DNS) have to entail a number of degrees of freedom proportional to the third power of the Reynolds number to correctly describe the flow behaviour. This extremely hard constraint makes it impossible to use DNS for industrial applications. Our strategy consists in using and improving DNS method in order to develop the Interfaces and Sub-grid Scales concept. ISS is a two-phase equivalent to the single-phase Large Eddy Simulation (LES) concept. The challenge of ISS is to integrate the two-way coupling phenomenon into sub-grid models. Applying a space filter, we have exhibited correlations or sub-grid terms that require closures. We have shown that, in two-phase flows, the presence of a discontinuity leads to specific sub-grid terms. Comparing the maximum of the norm of the sub-grid terms with the maximum of the norm of the advection tensor, we have found that sub-grid terms related to interfacial forces and viscous effect are negligible. Consequently, in the momentum balance, only the sub-grid terms related to inertia have to be closed. Thanks to a priori tests performed on several DNS data, we demonstrate that the scale similarity hypothesis, reinterpreted near discontinuity, provides sub-grid models that take into account the two-way coupling phenomenon. These models correspond to the first step of our work. Indeed, in this step, interfaces are smooth and, interactions between interfaces and turbulence occur in a transition zone where each physical variable varies sharply but continuously. The next challenge has been to determine the jump conditions across the sharp equivalent interface corresponding to the sub-grid models of the transition zone. We have used the matched asymptotic expansion method to obtain the jump conditions. The first tests on the velocity of the sharp equivalent interface are very promising (author)
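The scale similarity hypothesis used above for the sub-grid closures can be illustrated with a minimal single-phase sketch (a Bardina-type model on a 1D periodic field): the unresolved stress is estimated by applying the filter a second time to the already-resolved field. This is a schematic illustration, not the author's two-phase ISS closure; the box-filter width and the test field are arbitrary choices made for the example.

```python
import numpy as np

def box_filter(u, width=5):
    """Top-hat (box) filter of odd width with periodic boundaries."""
    kernel = np.ones(width) / width
    pad = width // 2
    up = np.concatenate([u[-pad:], u, u[:pad]])   # periodic padding
    return np.convolve(up, kernel, mode='valid')

def scale_similarity_sgs(u, width=5):
    """Scale-similarity estimate of the SGS stress from the resolved
    field only: tau ~ filter(ub*ub) - filter(ub)*filter(ub),
    where ub is the (already filtered) resolved field."""
    ub = box_filter(u, width)
    return box_filter(ub * ub, width) - box_filter(ub, width) * box_filter(ub, width)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(8 * x)   # large-scale plus smaller-scale content
tau_model = scale_similarity_sgs(u)
```

Because both filter applications use the same uniform weights, the modeled stress is a local-variance-like quantity: it vanishes for a constant field and grows where the resolved field has unresolved-looking structure.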

  8. Optimal renormalization scales and commensurate scale relations

    International Nuclear Information System (INIS)

    Brodsky, S.J.; Lu, H.J.

    1996-01-01

    Commensurate scale relations relate observables to observables and thus are independent of theoretical conventions, such as the choice of intermediate renormalization scheme. The physical quantities are related at commensurate scales which satisfy a transitivity rule which ensures that predictions are independent of the choice of an intermediate renormalization scheme. QCD can thus be tested in a new and precise way by checking that the observables track both in their relative normalization and in their commensurate scale dependence. For example, the radiative corrections to the Bjorken sum rule at a given momentum transfer Q can be predicted from measurements of the e+e− annihilation cross section at a corresponding commensurate energy scale √s ∝ Q, thus generalizing Crewther's relation to non-conformal QCD. The coefficients that appear in this perturbative expansion take the form of a simple geometric series and thus have no renormalon divergent behavior. The authors also discuss scale-fixed relations between the threshold corrections to the heavy quark production cross section in e+e− annihilation and the heavy quark coupling α_V which is measurable in lattice gauge theory

  9. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4 % of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared. Then a state of the art of the electrolysis modules currently available was drawn up. A review of the large scale electrolysis plants that have been installed in the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers are discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)

  10. Scaling of differential equations

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    The book serves both as a reference for various scaled models with corresponding dimensionless numbers, and as a resource for learning the art of scaling. A special feature of the book is the emphasis on how to create software for scaled models, based on existing software for unscaled models. Scaling (or non-dimensionalization) is a mathematical technique that greatly simplifies the setting of input parameters in numerical simulations. Moreover, scaling enhances the understanding of how different physical processes interact in a differential equation model. Compared to the existing literature, where the topic of scaling is frequently encountered, but very often in only a brief and shallow setting, the present book gives much more thorough explanations of how to reason about finding the right scales. This process is highly problem dependent, and therefore the book features a lot of worked examples, from very simple ODEs to systems of PDEs, especially from fluid mechanics. The text is easily accessible and exam...

  11. Landscape-scale water balance monitoring with an iGrav superconducting gravimeter in a field enclosure

    Science.gov (United States)

    Güntner, Andreas; Reich, Marvin; Mikolaj, Michal; Creutzfeldt, Benjamin; Schroeder, Stephan; Wziontek, Hartmut

    2017-06-01

    In spite of the fundamental role of the landscape water balance for the Earth's water and energy cycles, monitoring the water balance and its components beyond the point scale is notoriously difficult due to the multitude of flow and storage processes and their spatial heterogeneity. Here, we present the first field deployment of an iGrav superconducting gravimeter (SG) in a minimized enclosure for long-term integrative monitoring of water storage changes. Results of the field SG on a grassland site under wet-temperate climate conditions were compared to data provided by a nearby SG located in the controlled environment of an observatory building. The field system proves to provide gravity time series that are similarly precise as those of the observatory SG. At the same time, the field SG is more sensitive to hydrological variations than the observatory SG. We demonstrate that the gravity variations observed by the field setup are almost independent of the depth below the terrain surface where water storage changes occur (contrary to SGs in buildings), and thus the field SG system directly observes the total water storage change, i.e., the water balance, in its surroundings in an integrative way. We provide a framework to single out the water balance components actual evapotranspiration and lateral subsurface discharge from the gravity time series on annual to daily timescales. With about 99 and 85 % of the gravity signal due to local water storage changes originating within a radius of 4000 and 200 m around the instrument, respectively, this setup paves the road towards gravimetry as a continuous hydrological field-monitoring technique at the landscape scale.

  13. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  14. Scale and scaling in agronomy and environmental sciences

    Science.gov (United States)

    Scale is of paramount importance in environmental studies, engineering, and design. The unique course covers the following topics: scale and scaling, methods and theories, scaling in soils and other porous media, scaling in plants and crops; scaling in landscapes and watersheds, and scaling in agro...

  15. Scale of Critical Thinking

    OpenAIRE

    Semerci, Nuriye

    2000-01-01

    The main purpose of this study is to develop a scale for critical thinking. The Scale of Critical Thinking was administered to 200 students. The scale contains 55 items in total, four of which are negative and 51 of which are positive. The KMO (Kaiser-Meyer-Olkin) value is 0.75, the Bartlett test value is 7145.41, and the Cronbach alpha value is 0.90.
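The Cronbach alpha reported above is computed from the item-score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch on toy data (not the study's 200-student dataset):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)

# Two perfectly consistent items -> alpha = 1
scores = np.array([[1, 1], [2, 2], [3, 3]])
alpha = cronbach_alpha(scores)
```

Items that always move together give alpha near 1; items that vary independently drive alpha toward 0 (or below), which is why an alpha of 0.90 on 55 items indicates high internal consistency.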

  16. Fractal Characteristics Analysis of Blackouts in Interconnected Power Grid

    DEFF Research Database (Denmark)

    Wang, Feng; Li, Lijuan; Li, Canbing

    2018-01-01

    Power failure models are key to understanding the mechanism of large scale blackouts. In this letter, the similarity of blackouts in interconnected power grids (IPGs) and their sub-grids is discovered by fractal characteristics analysis to simplify the failure models of the IPG. The distribution characteristics of blackouts in various sub-grids are demonstrated based on the Kolmogorov-Smirnov (KS) test. The fractal dimensions (FDs) of the IPG and its sub-grids are then obtained by using the KS test and the maximum likelihood estimation (MLE). The blackouts data in China were used
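The MLE step mentioned above has a closed form for a continuous power-law (fractal-like) size distribution, alpha_hat = 1 + n / sum(ln(x_i / x_min)), and the KS distance compares the empirical CDF with the fitted one. A sketch on synthetic data (not the Chinese blackout data used in the letter; the sample size and exponent are invented):

```python
import numpy as np

def fit_powerlaw_mle(x, xmin):
    """Continuous power-law MLE for p(x) ~ x^(-alpha), x >= xmin."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

def ks_distance(x, xmin, alpha):
    """KS distance between the empirical CDF and the fitted power law."""
    x = np.sort(np.asarray(x)[np.asarray(x) >= xmin])
    n = len(x)
    cdf_model = 1.0 - (x / xmin) ** (1.0 - alpha)
    cdf_emp = np.arange(1, n + 1) / n
    return np.max(np.abs(cdf_emp - cdf_model))

# Synthetic sample with true alpha = 2.5, xmin = 1 (inverse-CDF sampling)
rng = np.random.default_rng(0)
u = rng.random(20000)
x = (1.0 - u) ** (-1.0 / 1.5)

alpha_hat = fit_powerlaw_mle(x, xmin=1.0)
D = ks_distance(x, 1.0, alpha_hat)
```

On synthetic data the estimator recovers the true exponent closely, and a small KS distance indicates the fitted distribution is consistent with the sample.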

  17. Feasibility study on applicability of direct contact heat transfer SGs for FBRs

    International Nuclear Information System (INIS)

    Kinoshita, Izumi; Nishi, Yoshihisa; Furuya, Masahiro

    1997-01-01

    As a candidate for an innovative steam generator (SG) for fast breeder reactors, a heat exchanger based on direct contact heat transfer between a melting alloy and water was proposed. The objectives of this study are to establish the technical feasibility of this concept, to evaluate the heat transfer characteristics of direct contact heat transfer, and to estimate the size and volume of this SG. The main results are as follows. (1) In the case of a sodium tube failure, steam and water are not expected to enter the primary sodium provided appropriate countermeasures are taken. (2) Under the temperature and pressure conditions of an SG for FBRs, phenomena such as vapor explosion do not take place in this SG concept. (3) As a result of material compatibility tests and analysis, 9Cr-1Mo steel and 2 1/4Cr-1Mo steel are considered candidate structural materials. (4) The production of oxides by the chemical reaction between the melting alloy and water is mitigated by dissolving hydrogen gas in the feed water. (5) The fundamental direct contact heat transfer characteristics between a melting alloy and water were obtained in two regions: the evaporating region and the superheating region. The effect of the system pressure on the heat transfer characteristics and the required degree of superheat of the melting alloy above the water saturation temperature were evaluated in direct contact heat transfer experiments in which water was injected into a high temperature melting alloy. (6) Owing to the high performance of direct contact heat transfer, a compact steam generation section can be expected. However, because of the characteristics of a direct contact heat exchanger, achieving high efficiency was difficult; to make good use of this SG concept, the efficiency must be improved. (author)

  18. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  19. Pre-Kindergarten Scale.

    Science.gov (United States)

    Flynn, Tim

    This 25-item scale for rating prekindergarten children concerns personal and cognitive skills. Directions for using the scale are provided. Personal skills include personal hygiene, communication skills, eating habits, relationships with the teacher, peer relations, and personal behavior. Cognitive skills rated are verbal skills, object…

  20. Scales of Progress

    Science.gov (United States)

    Jung, Lee Ann

    2018-01-01

    What is Goal Attainment Scaling? In this article, Lee Ann Jung defines it as a way to measure a student's progress toward an individualized goal. Instead of measuring a skill at a set time (for instance, on a test or other assignment), Goal Attainment Scaling tracks the steps a student takes over the course of a year in a targeted skill. Together,…

  1. Magnetron injection gun scaling

    International Nuclear Information System (INIS)

    Lawson, W.

    1988-01-01

    Existing analytic design equations for magnetron injection guns (MIG's) are approximated to obtain a set of scaling laws. The constraints are chosen to examine the maximum peak power capabilities of MIG's. The scaling laws are compared with exact solutions of the design equations and are supported by MIG simulations

  2. Image scaling curve generation

    NARCIS (Netherlands)

    2012-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  3. Image scaling curve generation.

    NARCIS (Netherlands)

    2011-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  4. Scales and erosion

    Science.gov (United States)

    There is a need to develop scale explicit understanding of erosion to overcome existing conceptual and methodological flaws in our modelling methods currently applied to understand the process of erosion, transport and deposition at the catchment scale. These models need to be based on a sound under...

  5. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  6. Allometric Scaling in Biology

    Science.gov (United States)

    Banavar, Jayanth

    2009-03-01

    The unity of life is expressed not only in the universal basis of inheritance and energetics at the molecular level, but also in the pervasive scaling of traits with body size at the whole-organism level. More than 75 years ago, Kleiber and Brody and Proctor independently showed that the metabolic rates, B, of mammals and birds scale as the three-quarter power of their mass, M. Subsequent studies showed that most biological rates and times scale as M^(-1/4) and M^(1/4) respectively, and that these so called quarter-power scaling relations hold for a variety of organisms, from unicellular prokaryotes and eukaryotes to trees and mammals. The wide applicability of Kleiber's law, across the 22 orders of magnitude of body mass from minute bacteria to giant whales and sequoias, raises the hope that there is some simple general explanation that underlies the incredible diversity of form and function. We will present a general theoretical framework for understanding the relationship between metabolic rate, B, and body mass, M. We show how the pervasive quarter-power biological scaling relations arise naturally from optimal directed resource supply systems. This framework robustly predicts that: 1) whole organism power and resource supply rate, B, scale as M^(3/4); 2) most other rates, such as heart rate and maximal population growth rate, scale as M^(-1/4); 3) most biological times, such as blood circulation time and lifespan, scale as M^(1/4); and 4) the average velocity of flow through the network, v, such as the speed of blood and oxygen delivery, scales as M^(1/12). Our framework is valid even when there is no underlying network. Our theory is applicable to unicellular organisms as well as to large animals and plants. This work was carried out in collaboration with Amos Maritan along with Jim Brown, John Damuth, Melanie Moses, Andrea Rinaldo, and Geoff West.
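The quarter-power relations above lend themselves to back-of-envelope predictions. A small sketch (the mass ratio is an illustrative value, not from the talk): for two organisms whose masses differ by a factor m, rates scale as m^(-1/4) and times as m^(1/4), so their product, e.g. total heartbeats per lifetime, is roughly mass-invariant.

```python
# Quarter-power allometric predictions for two organisms whose masses
# differ by a factor m (m = 1e5 is an illustrative value, roughly a
# mouse-to-elephant mass ratio).
def allometric_ratio(mass_ratio, exponent):
    return mass_ratio ** exponent

m = 1e5
metabolic = allometric_ratio(m, 0.75)    # whole-organism rate B ~ M^(3/4)
heart_rate = allometric_ratio(m, -0.25)  # biological rates ~ M^(-1/4)
lifespan = allometric_ratio(m, 0.25)     # biological times ~ M^(1/4)

# rates * times ~ M^0: heartbeats per lifetime come out roughly mass-invariant
beats_invariant = heart_rate * lifespan
```

With these exponents, a 10^5-fold increase in mass raises total metabolic rate by about 5600-fold while slowing rates and stretching times by the same factor of roughly 18, leaving their product unchanged.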

  7. Small scale optics

    CERN Document Server

    Yupapin, Preecha

    2013-01-01

    The behavior of light in small scale optics or nano/micro optical devices has shown promising results, which can be used for basic and applied research, especially in nanoelectronics. Small Scale Optics presents the use of optical nonlinear behaviors for spins, antennae, and whispering gallery modes within micro/nano devices and circuits, which can be used in many applications. This book proposes a new design for a small scale optical device-a microring resonator device. Most chapters are based on the proposed device, which uses a configuration know as a PANDA ring resonator. Analytical and nu

  8. Scale-relativistic cosmology

    International Nuclear Information System (INIS)

    Nottale, Laurent

    2003-01-01

    The principle of relativity, when it is applied to scale transformations, leads to the suggestion of a generalization of fundamental dilation laws. These new special scale-relativistic resolution transformations involve log-Lorentz factors and lead to the occurrence of a minimal and of a maximal length-scale in nature, which are invariant under dilations. The minimal length-scale, that replaces the zero from the viewpoint of its physical properties, is identified with the Planck length l_P, and the maximal scale, that replaces infinity, is identified with the cosmic scale L = Λ^(-1/2), where Λ is the cosmological constant. The new interpretation of the Planck scale has several implications for the structure and history of the early Universe: we consider the questions of the origin, of the status of physical laws at very early times, of the horizon/causality problem and of fluctuations at the recombination epoch. The new interpretation of the cosmic scale has consequences for our knowledge of the present universe, concerning in particular Mach's principle, the large number coincidence, the problem of the vacuum energy density, and the nature and value of the cosmological constant. The value (theoretically predicted ten years ago) of the scaled cosmological constant Ω_Λ = 0.75 ± 0.15 is now supported by several different experiments (Hubble diagram of supernovae, Boomerang measurements, gravitational lensing by clusters of galaxies). The scale-relativity framework also allows one to suggest a solution to the missing mass problem, and to make theoretical predictions of fundamental energy scales, thanks to the interpretation of new structures in scale space: fractal/classical transitions as Compton lengths, mass-coupling relations and the critical value 4π^2 of inverse couplings. Among them, we find a structure at 3.27 ± 0.26 × 10^20 eV, which agrees closely with the observed highest energy cosmic rays at 3.2 ± 0.9 × 10^20 eV, and another at 5.3 × 10^(-3) eV, which corresponds to the

  9. Beyond KNO scaling

    International Nuclear Information System (INIS)

    Hegyi, S.

    1998-01-01

    A generalization of the Koba-Nielsen-Olesen scaling law of the multiplicity distributions P(n) is presented. It consists of a change in the normalization point of P(n) compensated by a suitable change in the renormalized parameters and a rescaling. The iterative repetition of the transformation yields the sequence of higher-order moment distributions of P(n). Each member of this sequence may exhibit data collapsing behavior in case of violation of the original KNO scaling hypothesis. It is shown that the iterative procedure can be viewed as varying the collision energy, i.e. the moment distributions of P(n) can represent the pattern of pre-asymptotic KNO scaling violation. The fixed points of the iteration will be determined and a consistency test based on Feynman scaling is to be given. (author)

  10. Understanding scaling laws

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1986-01-01

    Accelerator scaling laws, how they can be generated, and how they are used are discussed. A scaling law is a relation between machine parameters and beam parameters. An alternative point of view is that a scaling law is an imposed relation between the equations of motion and the initial conditions. The relation between the parameters is obtained by requiring the beam to be matched. (A beam is said to be matched if the phase-space distribution function is a function of single-particle invariants of the motion.) Because of this restriction, the number of independent parameters describing the system is reduced. Using simple models for bunched- and unbunched-beam situations, scaling laws are shown to determine the general behavior of beams in accelerators. Such knowledge is useful in design studies for new machines such as high-brightness linacs. The simple model presented shows much of the same behavior as a more detailed RFQ model

  11. Scale-Dependent Grasp

    OpenAIRE

    Kaneko, Makoto; Shirai, Tatsuya; Tsuji, Toshio

    2000-01-01

    This paper discusses the scale-dependent grasp. Suppose that a human approaches an object initially placed on a table and finally achieves an enveloping grasp. Under such initial and final conditions, he (or she) unconsciously changes the grasp strategy according to the size of objects, even though they have similar geometry. We call the grasp planning the scale-dependent grasp. We find that grasp patterns are also changed according to the surface friction and the geometry of cross section in additi...

  12. Fast ignition breakeven scaling

    International Nuclear Information System (INIS)

    Slutz, Stephen A.; Vesey, Roger Alan

    2005-01-01

    A series of numerical simulations have been performed to determine scaling laws for fast ignition break even of a hot spot formed by energetic particles created by a short pulse laser. Hot spot break even is defined to be when the fusion yield is equal to the total energy deposited in the hot spot through both the initial compression and the subsequent heating. In these simulations, only a small portion of a previously compressed mass of deuterium-tritium fuel is heated on a short time scale, i.e., the hot spot is tamped by the cold dense fuel which surrounds it. The hot spot tamping reduces the minimum energy required to obtain break even as compared to the situation where the entire fuel mass is heated, as was assumed in a previous study [S. A. Slutz, R. A. Vesey, I. Shoemaker, T. A. Mehlhorn, and K. Cochrane, Phys. Plasmas 7, 3483 (2004)]. The minimum energy required to obtain hot spot break even is given approximately by the scaling law E_T = 7.5(ρ/100)^(-1.87) kJ for tamped hot spots, as compared to the previously reported scaling of E_UT = 15.3(ρ/100)^(-1.5) kJ for untamped hot spots. The size of the compressed fuel mass and the focusability of the particles generated by the short pulse laser determine which scaling law to use for an experiment designed to achieve hot spot break even
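    The two quoted scaling laws can be evaluated directly. A minimal sketch, assuming ρ is the compressed fuel density normalized as ρ/100 in the abstract's formulas (the units are an assumption, not stated in the record):

    ```python
    # Breakeven scaling laws quoted in the abstract (Slutz & Vesey).
    # rho is the compressed fuel density; the abstract normalizes it by 100.

    def e_tamped_kj(rho):
        """Tamped hot-spot breakeven energy: E_T = 7.5 * (rho/100)^(-1.87) kJ."""
        return 7.5 * (rho / 100.0) ** -1.87

    def e_untamped_kj(rho):
        """Untamped scaling from the earlier study: E_UT = 15.3 * (rho/100)^(-1.5) kJ."""
        return 15.3 * (rho / 100.0) ** -1.5

    for rho in (100.0, 300.0, 500.0):
        print(f"rho={rho:5.0f}: tamped {e_tamped_kj(rho):6.2f} kJ, "
              f"untamped {e_untamped_kj(rho):6.2f} kJ")
    ```

    At ρ = 100 the laws give 7.5 kJ and 15.3 kJ respectively, showing the factor-of-two benefit of tamping; the steeper tamped exponent means the gap widens at higher compression.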

  13. Scales of gravity

    International Nuclear Information System (INIS)

    Dvali, Gia; Kolanovic, Marko; Nitti, Francesco; Gabadadze, Gregory

    2002-01-01

    We propose a framework in which the quantum gravity scale can be as low as 10^-3 eV. The key assumption is that the standard model ultraviolet cutoff is much higher than the quantum gravity scale. This ensures that we observe conventional weak gravity. We construct an explicit brane-world model in which the brane-localized standard model is coupled to strong 5D gravity of infinite-volume flat extra space. Because of the high ultraviolet scale, the standard model fields generate a large graviton kinetic term on the brane. This kinetic term 'shields' the standard model from the strong bulk gravity. As a result, an observer on the brane sees weak 4D gravity up to astronomically large distances beyond which gravity becomes five dimensional. Modeling quantum gravity above its scale by the closed string spectrum we show that the shielding phenomenon protects the standard model from an apparent phenomenological catastrophe due to the exponentially large number of light string states. The collider experiments, astrophysics, cosmology and gravity measurements independently point to the same lower bound on the quantum gravity scale, 10^-3 eV. For this value the model has experimental signatures both for colliders and for submillimeter gravity measurements. Black holes reveal certain interesting properties in this framework

  14. Universities scale like cities.

    Directory of Open Access Journals (Sweden)

    Anthony F J van Raan

    Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power law scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionally they are places of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show a similar behavior as cities in the distribution of the 'gross university income' in terms of total number of citations over 'size' in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities--the top-100 European universities--we are also able to distinguish between properties of universities with as well as without selection of one specific local property, the quality of a university in terms of its average field-normalized citation impact. It also reveals an interesting observation concerning the working of a crucial property in networked systems, preferential attachment.
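    The superlinear scaling described in this record, Y = a·N^β with β > 1, is typically estimated by ordinary least squares in log-log space. A self-contained sketch on synthetic data (the numbers are illustrative, not from the paper):

    ```python
    # Fit a power law Y = a * N**beta by linear regression on (log N, log Y).
    import math

    def fit_power_law(sizes, outputs):
        """Return (a, beta) estimated by least squares in log-log space."""
        xs = [math.log(n) for n in sizes]
        ys = [math.log(y) for y in outputs]
        k = len(xs)
        mx, my = sum(xs) / k, sum(ys) / k
        beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))
        a = math.exp(my - beta * mx)
        return a, beta

    # Synthetic noise-free data generated with a = 2.0 and beta = 1.2
    # (beta > 1: superlinear, as in urban/university scaling).
    sizes = [1_000, 5_000, 20_000, 100_000]
    outputs = [2.0 * n ** 1.2 for n in sizes]
    a, beta = fit_power_law(sizes, outputs)
    print(round(a, 3), round(beta, 3))
    ```

    With real data the residuals from this fit are exactly the "deviations from the expected behavior" that the paper attributes to local properties such as field composition and citation impact.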

  15. Universities scale like cities.

    Science.gov (United States)

    van Raan, Anthony F J

    2013-01-01

    Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power law scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionally they are places of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show a similar behavior as cities in the distribution of the 'gross university income' in terms of total number of citations over 'size' in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities--the top-100 European universities--we are also able to distinguish between properties of universities with as well as without selection of one specific local property, the quality of a university in terms of its average field-normalized citation impact. It also reveals an interesting observation concerning the working of a crucial property in networked systems, preferential attachment.

  16. Child Development Program Evaluation Scale.

    Science.gov (United States)

    Fiene, Richard J.

    The Child Development Program Evaluation Scale (CDPES) is actually two scales in one, a licensing scale and a quality scale. Licensing predictor items have been found to predict overall compliance of child day care centers with state regulations in four states. Quality scale items have been found to predict the overall quality of child day care…

  17. The INES scale

    International Nuclear Information System (INIS)

    2014-01-01

    This document presents the International Nuclear Event Scale (INES) which has been created to classify nuclear and radiological events in terms of severity. This scale comprises eight levels from 0 to 7, from a slight but noticeable shift with respect to nominal operation to a major accident. Criteria used for incident and accident classification are indicated; they are based on consequences outside the site, consequences within the site, degradation of in-depth defence. The benefit and weaknesses of this scale are briefly indicated. The major concerned actors are the IAEA, the NEA and the ASN. Some key figures are given (number of declared events and incidents), and a ranking of the main nuclear events is proposed with a brief description of the event: Chernobyl, Fukushima, Kyshtym, Three Mile Island, Sellafield, Tokaimura, Saint-Laurent-des-Eaux. Countries which have adopted INES are indicated, as well as the number of incidents reports in France to the ASN

  18. Wavelets, vibrations and scalings

    CERN Document Server

    Meyer, Yves

    1997-01-01

    Physicists and mathematicians are intensely studying fractal sets of fractal curves. Mandelbrot advocated modeling of real-life signals by fractal or multifractal functions. One example is fractional Brownian motion, where large-scale behavior is related to a corresponding infrared divergence. Self-similarities and scaling laws play a key role in this new area. There is a widely accepted belief that wavelet analysis should provide the best available tool to unveil such scaling laws. And orthonormal wavelet bases are the only existing bases which are structurally invariant through dyadic dilations. This book discusses the relevance of wavelet analysis to problems in which self-similarities are important. Among the conclusions drawn are the following: 1) A weak form of self-similarity can be given a simple characterization through size estimates on wavelet coefficients, and 2) Wavelet bases can be tuned in order to provide a sharper characterization of this self-similarity. A pioneer of the wavelet "saga", Meye...

  19. No-Scale Inflation

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V.; Olive, Keith A.

    2016-01-01

    Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation yields naturally predictions similar to those of the Starobinsky model based on $R + R^2$ gravity, with a tilted spectrum of scalar perturbations: $n_s \sim 0.96$, and small values of the tensor-to-scalar perturbation ratio $r < 0.1$, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.

  20. No-scale inflation

    Science.gov (United States)

    Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.

    2016-05-01

    Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation yields naturally predictions similar to those of the Starobinsky model based on R + R^2 gravity, with a tilted spectrum of scalar perturbations: n_s ∼ 0.96, and small values of the tensor-to-scalar perturbation ratio r < 0.1, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.

  1. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

    We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex, even, and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...

  2. Finite size scaling theory

    International Nuclear Information System (INIS)

    Rittenberg, V.

    1983-01-01

    Fisher's finite-size scaling describes the crossover from the singular behaviour of thermodynamic quantities at the critical point to the analytic behaviour of the finite system. Recent extensions of the method--the transfer matrix technique and the Hamiltonian formalism--are discussed in this paper. The method is presented, with equations deriving the scaling function, critical temperature, and exponent ν. As an application of the method, a 3-state Hamiltonian with Z_3 global symmetry is studied. Diagonalization of the Hamiltonian for finite chains allows one to estimate the critical exponents, and also to discover new phase transitions at lower temperatures. The critical points λ and indices ν estimated by finite-size scaling are given

  3. Spatial ecology across scales.

    Science.gov (United States)

    Hastings, Alan; Petrovskii, Sergei; Morozov, Andrew

    2011-04-23

    The international conference 'Models in population dynamics and ecology 2010: animal movement, dispersal and spatial ecology' took place at the University of Leicester, UK, on 1-3 September 2010, focusing on mathematical approaches to spatial population dynamics and emphasizing cross-scale issues. Exciting new developments in scaling up from individual level movement to descriptions of this movement at the macroscopic level highlighted the importance of mechanistic approaches, with different descriptions at the microscopic level leading to different ecological outcomes. At higher levels of organization, different macroscopic descriptions of movement also led to different properties at the ecosystem and larger scales. New developments from Lévy flight descriptions to the incorporation of new methods from physics and elsewhere are revitalizing research in spatial ecology, which will both increase understanding of fundamental ecological processes and lead to tools for better management.

  4. X and Y scaling

    International Nuclear Information System (INIS)

    West, G.B.

    1988-01-01

    Although much of the intuition for interpreting the high energy data as scattering from structureless constituents came from nuclear physics (and to a lesser extent atomic physics), virtually no data existed for nuclear targets in the non-relativistic regime until relatively recently. It is therefore not so surprising that, in spite of the fact that the basic nuclear physics has been well understood for a very long time, the corresponding non-relativistic scaling law was not written down until after the relativistic one, relevant to particle physics, had been explored. Of course, to the extent that these scaling laws simply reflect quasi-elastic scattering of the probe from the constituents, they contain little new physics once the nature of the constituents is known and understood. On the other hand, deviations from scaling represent corrections to the impulse approximation and can reflect important dynamical and coherent features of the system. Furthermore, as will be discussed in detail here, the scaling curve itself represents the single particle momentum distribution of constituents inside the target. It is therefore prudent to plot the data in terms of a suitable scaling variable since this immediately focuses attention on the dominant physics. Extraneous physics, such as Rutherford scattering in the case of electrons, or magnetic scattering in the case of thermal neutrons, is factored out and the use of a scaling variable (such as y) automatically takes into account the fact that the target is a bound state of well-defined constituents. In this talk I shall concentrate almost entirely on non-relativistic systems. Although the formalism applies equally well to both electron scattering from nuclei and thermal neutron scattering from liquids, I shall, because of my background, usually be thinking of the former. On the other hand I shall completely ignore spin considerations so, ironically, the results actually apply more to the latter case!

  5. Elders Health Empowerment Scale

    Science.gov (United States)

    2014-01-01

    Introduction: Empowerment refers to patient skills that allow them to become primary decision-makers in control of daily self-management of health problems. As important as the concept is, particularly for elders with chronic diseases, few available instruments have been validated for use with Spanish speaking people. Objective: Translate and adapt the Health Empowerment Scale (HES) for a Spanish-speaking older adults sample and perform its psychometric validation. Methods: The HES was adapted based on the Diabetes Empowerment Scale-Short Form. Where "diabetes" was mentioned in the original tool, it was replaced with "health" terms to cover all kinds of conditions that could affect health empowerment. Statistical and psychometric analyses were conducted on 648 urban-dwelling seniors. Results: The HES had an acceptable internal consistency with a Cronbach's α of 0.89. The convergent validity was supported by significant Pearson correlations between the HES total and item scores and the General Self Efficacy Scale (r = 0.77), Swedish Rheumatic Disease Empowerment Scale (r = 0.69) and Making Decisions Empowerment Scale (r = 0.70). Construct validity was evaluated using item analysis, split-half testing and corrected item-total correlation coefficients, with good internal consistency (α > 0.8). The content validity was supported by Scale and Item Content Validity Indices of 0.98 and 1.0, respectively. Conclusions: The HES had acceptable face validity and reliability coefficients, which, added to its ease of administration and users' unbiased comprehension, could make it a suitable tool for evaluating elders' outpatient empowerment-based medical education programs. PMID:25767307
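    The internal-consistency statistic reported in this record (Cronbach's α = 0.89) can be computed directly from a respondents-by-items score matrix. A minimal sketch with made-up data (the formula is the standard one; the numbers are not from the study):

    ```python
    # Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/variance of totals),
    # where k is the number of items. Toy data, for illustration only.

    def cronbach_alpha(scores):
        """scores: list of rows, one per respondent, each with k item scores."""
        k = len(scores[0])

        def var(xs):  # population variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        item_vars = [var([row[j] for row in scores]) for j in range(k)]
        totals = [sum(row) for row in scores]
        return k / (k - 1) * (1 - sum(item_vars) / var(totals))

    data = [  # 4 respondents x 4 items on a Likert-style scale
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 5, 5, 4],
        [3, 3, 4, 3],
    ]
    print(round(cronbach_alpha(data), 3))
    ```

    Values above roughly 0.8, as reported for the HES, are conventionally read as good internal consistency.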

  6. Do singing-ground surveys reflect american woodcock abundance in the western Great Lakes region?

    Science.gov (United States)

    Matthew R. Nelson,; Andersen, David E.

    2013-01-01

    The Singing-ground Survey (SGS) is the primary monitoring tool used to assess population status and trends of American woodcock (Scolopax minor). Like most broad-scale surveys, the SGS cannot be directly validated because there are no independent estimates of abundance of displaying male American woodcock at an appropriate spatial scale. Furthermore, because locations of individual SGS routes have generally remained stationary since the SGS was standardized in 1968, it is not known whether routes adequately represent the landscapes they were intended to represent. To indirectly validate the SGS, we evaluated whether 1) counts of displaying male American woodcock on SGS routes related to land-cover types known to be related to American woodcock abundance, 2) changes in counts of displaying male American woodcock through time were related to changes in land cover along SGS routes, and 3) land-cover type composition along SGS routes was similar to land-cover type composition of the surrounding landscape. In Wisconsin and Minnesota, USA, counts along SGS routes reflected known American woodcock-habitat relations. Increases in the number of woodcock heard along SGS routes over a 13-year period in Wisconsin were related to increasing amounts of early successional forest, decreasing amounts of mature forest, and increasing dispersion and interspersion of cover types. Finally, the cover types most strongly associated with American woodcock abundance were represented along SGS routes in proportion to their composition of the broader landscape. Taken together, these results suggest that in the western Great Lakes region, the SGS likely provides a reliable tool for monitoring relative abundance and population trends of breeding, male American woodcock.

  7. Scaling law systematics

    International Nuclear Information System (INIS)

    Pfirsch, D.; Duechs, D.F.

    1985-01-01

    A number of statistical implications of empirical scaling laws in the form of power products obtained by linear regression are analysed. The sensitivity of the error against a change of exponents is described by a sensitivity factor and the uncertainty of predictions by a 'range of predictions factor'. Inner relations in the statistical material are discussed, as well as the consequences of discarding variables. A recipe is given for the computations to be done. The whole is exemplified by considering scaling laws for the electron energy confinement time of ohmically heated tokamak plasmas. (author)

  8. Tokamak confinement scaling laws

    International Nuclear Information System (INIS)

    Connor, J.

    1998-01-01

    The scaling of energy confinement with engineering parameters, such as plasma current and major radius, is important for establishing the size of an ignited fusion device. Tokamaks exhibit a variety of modes of operation with different confinement properties. At present there is no adequate first principles theory to predict tokamak energy confinement and the empirical scaling method is the preferred approach to designing next step tokamaks. This paper reviews a number of robust theoretical concepts, such as dimensional analysis and stability boundaries, which provide a framework for characterising and understanding tokamak confinement and, therefore, generate more confidence in using empirical laws for extrapolation to future devices. (author)

  9. Rolling at small scales

    DEFF Research Database (Denmark)

    Nielsen, Kim L.; Niordson, Christian F.; Hutchinson, John W.

    2016-01-01

    The rolling process is widely used in the metal forming industry and has been so for many years. However, the process has attracted renewed interest as it recently has been adapted to very small scales where conventional plasticity theory cannot accurately predict the material response. Metals are known to be stronger when large strain gradients appear over a few microns; hence, the forces involved in the rolling process are expected to increase relatively at these smaller scales. In the present numerical analysis, a steady-state modeling technique that enables convergence without...

  10. Scaling up Telemedicine

    DEFF Research Database (Denmark)

    Christensen, Jannie Kristine Bang; Nielsen, Jeppe Agger; Gustafsson, Jeppe

    ... through negotiating, mobilizing coalitions, and legitimacy building. To illustrate and further develop this conceptualization, we build on insights from a longitudinal case study (2008-2014) and provide a rich empirical account of how a Danish telemedicine pilot was transformed into a large-scale telemedicine project through simultaneous translation and theorization efforts in a cross-sectorial, politicized social context. Although we focus on upscaling as a bottom-up process (from pilot to large scale), we argue that translation and theorization, and associated political behavior, occur in a broader...

  11. SCALE system driver

    International Nuclear Information System (INIS)

    Petrie, L.M.

    1984-01-01

    The SCALE driver was designed to allow implementation of a modular code system consisting of control modules, which determine the calculation path, and functional modules, which perform the basic calculations. The user can either select a control module and have that module determine the execution path, or the user can select functional modules directly by input

  12. Scaling violation in QCD

    International Nuclear Information System (INIS)

    Furmanski, W.

    1981-08-01

    The effects of scaling violation in QCD are discussed in the perturbative scheme, based on the factorization of mass singularities in the light-like gauge. Some recent applications including the next-to-leading corrections are presented (large p_T scattering, numerical analysis of the leptoproduction data). A proposal is made for extending the method to the higher twist sector. (author)

  13. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented

  14. Civic Engagement Scale

    Directory of Open Access Journals (Sweden)

    Amy Doolittle

    2013-07-01

    This study reports on the development and validation of the Civic Engagement Scale (CES). This scale was developed to be easily administered and useful to educators who are seeking to measure the attitudes and behaviors that have been affected by a service-learning experience. The instrument was administered as a validation study in a purposive sample of social work and education majors at three universities (N = 513) with a return of 354 (69%). After the reliability and validity analysis was completed, the Attitude subscale was left with eight items and a Cronbach's alpha of .91. The Behavior subscale was left with six items and a Cronbach's alpha of .85. Principal component analysis indicated a two-dimensional scale with high loadings on both factors (mean factor loading for the attitude factor = .79, and mean factor loading for the behavior factor = .77). These results indicate that the CES is strong enough to recommend its use in educational settings. Preliminary use has demonstrated that this scale will be useful to researchers seeking to better understand the relationship of attitudes and behaviors with civic engagement in the service-learning setting. The primary limitations of this research are that the sample was limited to social work and education majors who were primarily White (n = 312, 88.1%) and female (n = 294, 83.1%). Therefore, further research would be needed to generalize this research to other populations.

  15. Difficulty scaling through incongruity

    NARCIS (Netherlands)

    Lankveld, van G.; Spronck, P.; Rauterberg, G.W.M.; Mateas, M.; Darken, C.

    2008-01-01

    In this paper we discuss our work on using the incongruity measure from psychological literature to scale the difficulty level of a game online to the capabilities of the human player. Our approach has been implemented in a small game called Glove.

  16. Symbolic Multidimensional Scaling

    NARCIS (Netherlands)

    P.J.F. Groenen (Patrick); Y. Terada

    2015-01-01

    Multidimensional scaling (MDS) is a technique that visualizes dissimilarities between pairs of objects as distances between points in a low dimensional space. In symbolic MDS, a dissimilarity is not just a value but can represent an interval or even a histogram. Here,

  17. Cardinal scales for health evaluation

    DEFF Research Database (Denmark)

    Harvey, Charles; Østerdal, Lars Peter Raahave

    2010-01-01

    Policy studies often evaluate health for an individual or for a population by using measurement scales that are ordinal scales or expected-utility scales. This paper develops scales of a different type, commonly called cardinal scales, that measure changes in health. Also, we argue that cardinal scales provide a meaningful and useful means of evaluating health policies. Thus, we develop a means of using the perspective of early neoclassical welfare economics as an alternative to ordinalist and expected-utility perspectives.

  18. SCALE Code System

    Energy Technology Data Exchange (ETDEWEB)

    Jessee, Matthew Anderson [ORNL

    2016-04-01

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include:
    • ENDF/B-VII.1 nuclear data libraries CE and MG with enhanced group structures,
    • Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
    • Covariance data for fission product yields and decay constants,
    • Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
    • Parallel calculations with KENO,
    • Problem-dependent temperature corrections for CE calculations,
    • CE shielding and criticality accident alarm system analysis with MAVRIC,
    • CE

  19. Evolution of Scale Worms

    DEFF Research Database (Denmark)

    Gonzalez, Brett Christopher

    ) caves, and the interstitium, recovering six monophyletic clades within Aphroditiformia: Acoetidae, Aphroditidae, Eulepethidae, Iphionidae, Polynoidae, and Sigalionidae (inclusive of the former 'Pisionidae' and 'Pholoidae'), respectively. Tracing of morphological character evolution showed a high degree of adaptability and convergent evolution between relatively closely related scale worms. While some morphological and behavioral modifications in cave polynoids reflected troglomorphism, other modifications like eye loss were found to stem from a common ancestor inhabiting the deep sea, further corroborating the deep sea ancestry of scale worm cave fauna. In conclusion, while morphological characterization across Aphroditiformia appears deceptively easy due to the presence of elytra, convergent evolution during multiple early radiations across wide ranging habitats has confounded our ability to reconstruct...

  20. Multiple time scale dynamics

    CERN Document Server

    Kuehn, Christian

    2015-01-01

    This book provides an introduction to dynamical systems with multiple time scales. The approach it takes is to provide an overview of key areas, particularly topics that are less available in introductory form. The broad range of topics included makes it accessible for students and researchers new to the field to gain a quick and thorough overview. The first of its kind, this book merges a wide variety of different mathematical techniques into a more unified framework. The book is highly illustrated with many examples and exercises and an extensive bibliography. The target audience of this book is senior undergraduates, graduate students, and researchers interested in using multiple time scale dynamics theory in nonlinear science, either from a theoretical or a mathematical modeling perspective.

  1. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test with a view to ensuring the safety of light water reactors was started in fiscal 1976 based on the special account act for power source development promotion measures by the entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April, 1980. Thereupon, the large-scale reflood test is now included in this program. It consists of two tests using a cylindrical core testing apparatus for examining the overall system effect and a plate core testing apparatus for testing individual effects. Each apparatus is composed of the mock-ups of pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  2. Dissolution of sulfate scales

    Energy Technology Data Exchange (ETDEWEB)

    Hen, J.

    1991-11-26

    This patent describes a composition for the removal of sulfate scale from surfaces. It comprises: an aqueous solution of about 0.1 to 1.0 molar concentration of an aminopolycarboxylic acid (APCA) containing 1 to 4 amino groups or a salt thereof, and about 0.1 to 1.0 molar concentration of a second component which is diethylenetriaminepenta (methylenephosphonic acid) (DTPMP) or a salt thereof, or aminotri (methylenephosphonic acid) (ATMP) or a salt thereof, as an internal phase enveloped by a hydrocarbon membrane phase which is itself emulsified in an external aqueous phase, the hydrocarbon membrane phase containing a complexing agent weaker for the cations of the sulfate scale than the APCA and DTPMP or ATMP, any complexing agent for the cations in the external aqueous phase being weaker than that in the hydrocarbon membrane phase.

  3. Density scaling for multiplets

    International Nuclear Information System (INIS)

    Nagy, A

    2011-01-01

    Generalized Kohn-Sham equations are presented for lowest-lying multiplets. The way of treating non-integer particle numbers is coupled with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.

  4. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  5. Urban Scaling in Europe

    Science.gov (United States)

    2016-03-16

    …the structural funds for regional development and cohesion. Until recently, several systems of territorial units have coexisted in European… for European MAs versus population. See text and figures 1–7, electronic supplementary material, figures S1–S8 for additional details… scale as expected, although with wide confidence intervals (table 1). The urbanized area of Spanish cities appears superlinear, contrary to theory.

  6. Scaling up Copy Detection

    OpenAIRE

    Li, Xian; Dong, Xin Luna; Lyons, Kenneth B.; Meng, Weiyi; Srivastava, Divesh

    2015-01-01

    Recent research shows that copying is prevalent for Deep-Web data and considering copying can significantly improve truth finding from conflicting values. However, existing copy detection techniques do not scale for large sizes and numbers of data sources, so truth finding can be slowed down by one to two orders of magnitude compared with the corresponding techniques that do not consider copying. In this paper, we study how to improve scalability of copy detection on structured data. Ou...

  7. Beyond the Planck Scale

    International Nuclear Information System (INIS)

    Giddings, Steven B.

    2009-01-01

    I outline motivations for believing that important quantum gravity effects lie beyond the Planck scale at both higher energies and longer distances and times. These motivations arise in part from the study of ultra-high energy scattering, and also from considerations in cosmology. I briefly summarize some inferences about such ultra-planckian physics, and clues we might pursue towards the principles of a more fundamental theory addressing the known puzzles and paradoxes of quantum gravity.

  8. Accurate scaling on multiplicity

    International Nuclear Information System (INIS)

    Golokhvastov, A.I.

    1989-01-01

    The commonly used formula of KNO scaling, P_n = Ψ(n/⟨n⟩)/⟨n⟩, for discrete distributions (multiplicity distributions) is shown to contradict mathematically the normalization condition ΣP_n = 1. The effect is essential even at ISR energies. A consistent generalization of the concept of similarity for multiplicity distributions is obtained. The multiplicity distributions of negative particles in pp and also e⁺e⁻ inelastic interactions are similar over the whole studied energy range. Collider data are discussed. 14 refs.; 8 figs
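The normalization clash described in this record can be illustrated numerically: if the scaling function Ψ is normalized to unit integral, the discrete sum ΣP_n with P_n = Ψ(n/⟨n⟩)/⟨n⟩ only approaches 1 as ⟨n⟩ grows. A minimal sketch; the exponential Ψ is an arbitrary illustrative choice, not the paper's:

```python
import math

def kno_sum(nbar, psi, nmax=100000):
    """Sum of P_n = psi(n/nbar)/nbar over integer multiplicities n."""
    return sum(psi(n / nbar) / nbar for n in range(nmax))

# a scaling function normalized so that its integral over z in [0, inf) is 1
psi = lambda z: math.exp(-z)

for nbar in (2.0, 5.0, 50.0):
    print(nbar, kno_sum(nbar, psi))
```

The deviation from 1 shrinks roughly like 1/(2⟨n⟩), which is why the effect is still noticeable at ISR-era multiplicities.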

  9. Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines

    Energy Technology Data Exchange (ETDEWEB)

    Fingersh, Lee J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Loth, Eric [University of Virginia; Kaminski, Meghan [University of Virginia; Qin, Chao [University of Virginia; Griffith, D. Todd [Sandia National Laboratories

    2017-06-09

    A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows their sub-scale turbines to capture the key blade dynamics and aeroelastic deflections. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling, with blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts including dynamic behaviors and aeroelastic deflections (including flutter) at an extremely small fraction of the full-scale cost.
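The similarity ratios behind gravo-aeroelastic (Froude-type) scaling can be sketched as follows; the factor-of-two wind speed reduction quoted in the record falls out of the 25% geometric scale. This is an illustrative sketch of the general dimensional argument only, not the paper's full methodology (which additionally constrains blade mass and stiffness distributions):

```python
import math

def froude_scale(length_ratio):
    """Similarity ratios when gravity is held fixed (Froude scaling):
    velocity ~ sqrt(L), time ~ sqrt(L), mass ~ L^3 at equal density."""
    L = length_ratio
    return {
        "length": L,
        "velocity": math.sqrt(L),
        "time": math.sqrt(L),
        "mass": L ** 3,
    }

r = froude_scale(0.25)   # 25% geometric scaling, as for SNL100-03 -> CART3
print(r["velocity"])     # 0.5 -> test wind speeds halved
```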

  10. The ''invisible'' radioactive scale

    International Nuclear Information System (INIS)

    Bjoernstad, T.; Ramsoey, T.

    1999-04-01

    Production and up-concentration of naturally occurring radioactive material (NORM) in the petroleum industry has attracted steadily increasing attention during the last 15 years. Most production engineers today associate this radioactivity with precipitates (scales) and sludges in production tubing, pumps, valves, separators, settling tanks etc., wherever water is being transported or treated. 226 Ra and 228 Ra are the most well known radioactive constituents in scale. Surprisingly little known is the radioactive contamination by 210 Pb and the progeny 210 Bi and 210 Po. These are found in combination with 226 Ra in ordinary scale, often in a layer of non-radioactive metallic lead in water transportation systems, but also in pure gas and condensate handling systems ''unsupported'' by 226 Ra, due to transportation and decay of the noble gas 222 Rn in NG/LNG. This latter contamination may be rather thin, in some cases virtually invisible. When, in addition, the radiation energies are too low to be detectable on the equipment's outer surface, its existence has for most people in the industry been a secret. The report discusses transportation and deposition mechanisms and detection methods, and provides some examples of measured results from the North Sea on equipment sent for maintenance. It is concluded that a regular measurement program for this type of contamination should be mandatory in all dismantling processes for transportation and handling equipment for fluids and gases, offshore and onshore

  11. Micro-Scale Thermoacoustics

    Science.gov (United States)

    Offner, Avshalom; Ramon, Guy Z.

    2016-11-01

    Thermoacoustic phenomena - conversion of heat to acoustic oscillations - may be harnessed for construction of reliable, practically maintenance-free engines and heat pumps. Specifically, miniaturization of thermoacoustic devices holds great promise for cooling of micro-electronic components. However, as device size is pushed down to the micrometer scale, it is expected that non-negligible slip effects will exist at the solid-fluid interface. Accordingly, new theoretical models for thermoacoustic engines and heat pumps were derived, accounting for a slip boundary condition. These models are essential for the design process of micro-scale thermoacoustic devices that will operate under ultrasonic frequencies. Stability curves for engines - representing the onset of self-sustained oscillations - were calculated with both no-slip and slip boundary conditions, revealing improvement in the performance of engines with slip at the resonance frequency range applicable for micro-scale devices. Maximum achievable temperature difference curves for thermoacoustic heat pumps were calculated, revealing the negative effect of slip on the ability to pump heat up a temperature gradient. The authors acknowledge the support from the Nancy and Stephen Grand Technion Energy Program (GTEP).

  12. H2@Scale Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, Mark

    2017-07-12

    'H2@Scale' is a concept based on the opportunity for hydrogen to act as an intermediate between energy sources and uses. Hydrogen has the potential to be used like the primary intermediate in use today, electricity, because it too is fungible. This presentation summarizes the H2@Scale analysis efforts performed during the first third of 2017. Results of technical potential uses and supply options are summarized and show that the technical potential demand for hydrogen is 60 million metric tons per year and that the U.S. has sufficient domestic resources to meet that demand. A high level infrastructure analysis is also presented that shows an 85% increase in energy on the grid if all hydrogen is produced from grid electricity. However, a preliminary spatial assessment shows that supply is sufficient in most counties across the U.S. The presentation also shows plans for analysis of the economic potential for the H2@Scale concept. Those plans involve developing supply and demand curves for potential hydrogen generation options and as compared to other options for use of that hydrogen.

  13. Analysis of Sub-Grid Boundary-Layer Processes Observed by the P-3 Doppler Wind Lidar in Support of the Western Pacific Tropical Cyclone Structure 2008 Experiment

    Science.gov (United States)

    2012-02-02

    …characteristics in the low-level region of intense hurricanes Allen (1980) and Hugo (1989). Mon. Wea. Rev., 139, 1447–1462. …experiment that involved USAF Hurricane Hunter C-130s, the Navy's P-3, the German Falcon aircraft and the Taiwanese DOTSTAR. The P-3 was equipped with… hurricane research with airborne DWLs for the next 5 years. All of this airborne DWL activity is being done with the expectation of under-flying the…

  14. Computational model for turbulent flow around a grid spacer with mixing vane

    International Nuclear Information System (INIS)

    Tsutomu Ikeno; Takeo Kajishima

    2005-01-01

    Turbulent mixing coefficient and pressure drop are important factors in subchannel analysis to predict the onset of DNB. However, universal correlations are difficult to obtain, since these factors are significantly affected by the geometry of the subchannel and of a grid spacer with mixing vane. Therefore, we propose a computational model to estimate these factors. Computational model: To represent the effect of the grid spacer geometry in the computational model, we applied a large eddy simulation (LES) technique coupled with an improved immersed-boundary method. In our previous work (Ikeno et al., NURETH-10), detailed properties of turbulence in a subchannel were successfully investigated by developing the immersed-boundary method in LES. In this study, additional improvements are given: a new one-equation dynamic sub-grid scale (SGS) model is introduced to account for the complex geometry without any artificial modification, and higher-order accuracy is maintained by consistent treatment of boundary conditions for velocity and pressure. Numerical test and discussion: Turbulent mixing coefficient and pressure drop are affected strongly by the arrangement and inclination of the mixing vane. Therefore, computations are carried out for each of the convolute and periodic arrangements, and for each of the 30 degree and 20 degree inclinations. The difference in turbulent mixing coefficient due to these factors is reasonably predicted by our method. (An example of this numerical test is shown in Fig. 1.) Turbulent flow in this problem includes unsteady separation behind the mixing vane and vortex shedding downstream. An anisotropic distribution of turbulent stress also appears in the rod gap. Therefore, our computational model has advantages for assessing the influence of the arrangement and inclination of the mixing vane. With a coarser computational mesh, one can screen several candidates for spacer design; then, with a finer mesh, more quantitative analysis is possible. By such a scheme, we believe this method is useful.
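As a rough illustration of how a dynamic SGS coefficient is determined in general (the classic Germano/Lilly least-squares procedure, not the one-equation model of this record): the resolved field is filtered with a wider test filter, the resolved stress L is formed, and the model coefficient is fitted. Everything below — the 1-D field, filter widths, and grid — is a synthetic assumption for illustration:

```python
import numpy as np

def box_filter(f, w):
    """Periodic top-hat (box) filter of half-width w grid cells."""
    k = np.ones(2 * w + 1) / (2 * w + 1)
    return np.convolve(np.concatenate([f[-w:], f, f[:w]]), k, mode="valid")

def ddx(f, dx):
    """Central difference on a periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

# synthetic "resolved" velocity field on a periodic 1-D grid
n = 256
dx = 2 * np.pi / n
x = np.arange(n) * dx
u = np.sin(x) + 0.3 * np.sin(5 * x + 1.0)

w = 2                                   # test filter is (2w+1) cells wide
delta, delta_t = dx, (2 * w + 1) * dx   # grid and test-filter widths

S = ddx(u, dx)                          # resolved strain rate
uf = box_filter(u, w)                   # test-filtered velocity
Sf = ddx(uf, dx)

# Germano identity: resolved stress L and model difference M,
# coefficient from Lilly's least-squares fit C = <LM>/<MM>
L = box_filter(u * u, w) - uf * uf
M = -2 * (delta_t**2 * np.abs(Sf) * Sf - box_filter(delta**2 * np.abs(S) * S, w))
C = np.sum(L * M) / np.sum(M * M)
print(C)
```

In an actual LES the coefficient would be computed locally (with averaging over homogeneous directions) and clipped; the global fit here just shows the mechanics of the identity.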

  15. High-accuracy CFD prediction methods for fluid and structure temperature fluctuations at T-junction for thermal fatigue evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-07-15

    Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. The conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed based on limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected to be a useful tool for the more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of the proposed numerical methods of simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used as the large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction by creating a mesh with near-wall resolution (NWR), by allocating grid points within the thermal boundary sub-layer. The simulation results show that the distribution of fluid temperature fluctuation intensity and the range of structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of the power spectral density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are
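The peak-frequency comparison described in this record rests on estimating the power spectral density of the temperature signals. A minimal sketch using a plain FFT periodogram; the 5 Hz fluctuation, sampling rate, and noise level of the synthetic temperature record are invented for illustration:

```python
import numpy as np

def psd_peak_frequency(signal, fs):
    """Peak frequency of a one-sided periodogram (DC bin excluded)."""
    sig = signal - signal.mean()
    psd = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    return freqs[1:][np.argmax(psd[1:])]

# synthetic fluid-temperature record: dominant 5 Hz fluctuation plus noise
fs = 200.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
temp = 30 + 2.0 * np.sin(2 * np.pi * 5.0 * t) + 0.3 * rng.standard_normal(t.size)
print(psd_peak_frequency(temp, fs))   # ~5 Hz
```

In practice a Welch-type averaged estimate would be preferred over a raw periodogram to reduce variance.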

  16. Scaling Big Data Cleansing

    KAUST Repository

    Khayyat, Zuhair

    2017-07-31

    Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
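To see why sorted arrays help with inequality joins, consider the simplest case of counting pairs satisfying x < y: sorting one side turns the nested loop into binary searches. This toy sketch illustrates the search-space reduction only; it is not the IEJoin algorithm itself, which additionally uses permutation and bit arrays to handle multi-predicate conditions:

```python
from bisect import bisect_left

def count_lt_pairs(xs, ys):
    """Count pairs (x, y) with x < y using one sort plus binary searches,
    O((n+m) log n), instead of the O(n*m) nested loop."""
    xs_sorted = sorted(xs)
    # bisect_left gives the number of elements of xs strictly less than y
    return sum(bisect_left(xs_sorted, y) for y in ys)

def count_lt_pairs_naive(xs, ys):
    return sum(x < y for x in xs for y in ys)

xs, ys = [3, 1, 4, 1, 5], [2, 7]
print(count_lt_pairs(xs, ys), count_lt_pairs_naive(xs, ys))   # 7 7
```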

  17. Magnetic Scaling in Superconductors

    International Nuclear Information System (INIS)

    Lawrie, I.D.

    1997-01-01

    The Ginzburg-Landau-Wilson superconductor in a magnetic field B is considered in the approximation that magnetic-field fluctuations are neglected. A formulation of perturbation theory is presented in which multiloop calculations fully retaining all Landau levels are tractable. A 2-loop calculation shows that, near the zero-field critical point, the singular part of the free energy scales as F_sing ∼ |t|^(2-α) F(B|t|^(-2ν)), where ν is the coherence-length exponent, a result which has hitherto been assumed on purely dimensional grounds. copyright 1997 The American Physical Society
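The quoted scaling form is the standard hyperscaling argument: the singular free energy scales with the inverse correlation volume, and the field B can enter only through the dimensionless flux Bξ². As a sketch of that dimensional argument (not the paper's 2-loop derivation):

```latex
F_{\mathrm{sing}} \sim \xi^{-d} \sim |t|^{d\nu} = |t|^{2-\alpha},
\qquad \xi \sim |t|^{-\nu}
\;\Rightarrow\; B\,\xi^{2} \sim B\,|t|^{-2\nu},
\quad\text{hence}\quad
F_{\mathrm{sing}} \sim |t|^{2-\alpha}\,\mathcal{F}\!\left(B\,|t|^{-2\nu}\right).
```

Here the hyperscaling relation 2 − α = dν has been used; the paper's contribution is establishing this form by explicit calculation rather than assumption.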

  18. Scaling CouchDB

    CERN Document Server

    Holt, Bradley

    2011-01-01

    This practical guide offers a short course on scaling CouchDB to meet the capacity needs of your distributed application. Through a series of scenario-based examples, this book lets you explore several methods for creating a system that can accommodate growth and meet expected demand. In the process, you learn about several tools that can help you with replication, load balancing, clustering, and load testing and monitoring. Apply performance tips for tuning your database; replicate data, using Futon and CouchDB's RESTful interface; distribute CouchDB's workload through load balancing; learn option…

  19. Scaling in quantum gravity

    Directory of Open Access Journals (Sweden)

    J. Ambjørn

    1995-07-01

    The 2-point function is the natural object in quantum gravity for extracting critical behavior: The exponential falloff of the 2-point function with geodesic distance determines the fractal dimension dH of space-time. The integral of the 2-point function determines the entropy exponent γ, i.e. the fractal structure related to baby universes, while the short distance behavior of the 2-point function connects γ and dH by a quantum gravity version of Fisher's scaling relation. We verify this behavior in the case of 2d gravity by explicit calculation.

  20. Moment magnitude scale

    Energy Technology Data Exchange (ETDEWEB)

    Hanks, T.C.; Kanamori, H.

    1979-05-10

    The nearly coincident forms of the relations between seismic moment M_0 and the magnitudes M_L, M_s, and M_w imply a moment magnitude scale M = (2/3) log M_0 − 10.7 which is uniformly valid for 3 ≲ M_L ≲ 7, 5 ≲ M_s ≲ 7.5, and M_w ≳ 7.5.
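The quoted relation is simple to apply directly. A sketch, with M_0 in dyne-cm as in the original Hanks and Kanamori definition:

```python
from math import log10

def moment_magnitude(m0_dyne_cm):
    """Moment magnitude from seismic moment M0 (in dyne-cm):
    M = (2/3) * log10(M0) - 10.7 (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * log10(m0_dyne_cm) - 10.7

print(moment_magnitude(1e27))   # 7.3 for M0 = 10^27 dyne-cm
```

Note that many modern sources state the same formula with M_0 in N·m, where the constant becomes −6.07.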

  1. Scales on the scalp

    Directory of Open Access Journals (Sweden)

    Jamil A

    2013-05-01

    A five-year-old boy presented with a six-week history of scales, flaking and crusting of the scalp. He had mild pruritus but no pain. He did not have a history of atopy and there were no pets at home. Examination of the scalp showed thick, yellowish dry crusts on the vertex and parietal areas, and the hair was adhered to the scalp in clumps. There was non-scarring alopecia and mild erythema (Figures 1 & 2). There was no cervical or occipital lymphadenopathy. The patient's nails and skin in other parts of the body were normal.

  2. A problem of scale

    International Nuclear Information System (INIS)

    Harrison, L.

    1991-01-01

    Small scale wind energy conversion is finding it even more difficult to realise its huge potential market than grid connected wind power. One of the main reasons for this is that its technical development is carried out in isolated parts of the world with little opportunity for technology transfer: small scale wind energy converters (SWECS) are not born of one technology, but have evolved for different purposes; as a result, the SWECS community has no powerful lobbying force speaking with one voice to promote the technology. There are three distinct areas of application for SWECS: water pumping for domestic and livestock water supplies, irrigation, drainage etc., where no other mechanical means of power is available or viable; battery charging for lighting, TV, radio, and telecommunications in areas far from a grid or road system; and wind-diesel systems, mainly for use on islands where supply of diesel oil is possible, but costly. An attempt is being made to found an association to support the widespread implementation of SWECS and to promote their use. It is intended for Wind Energy for Rural Areas to have a permanent secretariat, based in Holland. (AB)

  3. The Unintentional Procrastination Scale.

    Science.gov (United States)

    Fernie, Bruce A; Bharucha, Zinnia; Nikčević, Ana V; Spada, Marcantonio M

    2017-01-01

    Procrastination refers to the delay or postponement of a task or decision and is often conceptualised as a failure of self-regulation. Recent research has suggested that procrastination could be delineated into two domains: intentional and unintentional. In this two-study paper, we aimed to develop a measure of unintentional procrastination (named the Unintentional Procrastination Scale or the 'UPS') and test whether this would be a stronger marker of psychopathology than intentional and general procrastination. In Study 1, a community sample of 139 participants completed a questionnaire that consisted of several items pertaining to unintentional procrastination that had been derived from theory, previous research, and clinical experience. Responses were subjected to a principal components analysis and assessment of internal consistency. In Study 2, a community sample of 155 participants completed the newly developed scale, along with measures of general and intentional procrastination, metacognitions about procrastination, and negative affect. Data from the UPS were subjected to confirmatory factor analysis and revised accordingly. The UPS was then validated using correlation and regression analyses. The six-item UPS possesses construct and divergent validity and good internal consistency. The UPS appears to be a stronger marker of psychopathology than the pre-existing measures of procrastination used in this study. Results from the regression models suggest that both negative affect and metacognitions about procrastination differentiate between general, intentional, and unintentional procrastination. The UPS is brief, has good psychometric properties, and has strong associations with negative affect, suggesting it has value as a research and clinical tool.

  4. Scaling MongoDB

    CERN Document Server

    Chodorow, Kristina

    2011-01-01

    Create a MongoDB cluster that will grow to meet the needs of your application. With this short and concise book, you'll get guidelines for setting up and using clusters to store a large volume of data, and learn how to access the data efficiently. In the process, you'll understand how to make your application work with a distributed database system. Scaling MongoDB will help you: set up a MongoDB cluster through sharding; work with a cluster to query and update data; operate, monitor, and back up your cluster; plan your application to deal with outages. By following the advice in this book, you'l

  5. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors......). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSHP technology is put into perspective with respect to alternatives, and a short discussion of the barriers and breakthrough of the technology is given....

  6. On Scale and Fields

    DEFF Research Database (Denmark)

    Kadish, David

    2017-01-01

    This paper explores thematic parallels between artistic and agricultural practices in the postwar period to establish a link to media art and cultural practices that are currently emerging in urban agriculture. Industrial agriculture has roots in the post-WWII abundance of mechanical and chemical...... equipment and research. These systems are highly mechanically efficient. With minimal physical labour, they extract ever staggering crop yields from ever poorer soils in shifting climatic conditions. However, the fact of mechanical efficiency is used to mask a set of problems with industrial......-scale agricultural systems that range from spreading pests and diseases to poor global distribution of concentrated regional food wealth. That the conversion of vegetatively diverse farmland into monochromatic fields was popularized at the same time as the arrival of colour field paintings like Barnett Newman...

  7. ScaleUp America Communities

    Data.gov (United States)

    Small Business Administration — SBA’s new ScaleUp America Initiative is designed to help small firms with high potential “scale up” and grow their businesses so that they will provide more jobs and...

  8. MULTIPLE SCALES FOR SUSTAINABLE RESULTS

    Science.gov (United States)

    This session will highlight recent research that incorporates the use of multiple scales and innovative environmental accounting to better inform decisions that affect sustainability, resilience, and vulnerability at all scales. Effective decision-making involves assessment at mu...

  9. Absolute flux scale for radioastronomy

    International Nuclear Information System (INIS)

    Ivanov, V.P.; Stankevich, K.S.

    1986-01-01

    The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the ''artificial moon'' method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized

  10. Northeast Snowfall Impact Scale (NESIS)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — While the Fujita and Saffir-Simpson Scales characterize tornadoes and hurricanes respectively, there is no widely used scale to classify snowstorms. The Northeast...

  11. Scale setting in lattice QCD

    International Nuclear Information System (INIS)

    Sommer, Rainer

    2014-02-01

    The principles of scale setting in lattice QCD as well as the advantages and disadvantages of various commonly used scales are discussed. After listing criteria for good scales, I concentrate on the main presently used ones with an emphasis on scales derived from the Yang-Mills gradient flow. For these I discuss discretisation errors, statistical precision and mass effects. A short review on numerical results also brings me to an unpleasant disagreement which remains to be explained.
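As an illustration of how a gradient-flow scale is extracted in practice: the reference scale t0 is defined implicitly by t²⟨E(t)⟩ = 0.3, so one interpolates the measured flow observable to the target value. The data below are synthetic stand-ins, not lattice results:

```python
import numpy as np

def t0_from_flow(ts, t2E, target=0.3):
    """Flow scale t0: the flow time where t^2 <E(t)> reaches `target`
    (0.3 in the standard definition), by linear interpolation."""
    return float(np.interp(target, t2E, ts))

# synthetic, monotonically rising t^2 <E(t)> data (illustration only)
ts = np.linspace(0.0, 4.0, 81)
t2E = 0.12 * ts                # crosses 0.3 at t = 2.5
print(t0_from_flow(ts, t2E))   # 2.5
```

On real ensembles the statistical error of t0 is then propagated from the measured ⟨E(t)⟩, and the physical value of t0 (or of the related scale w0) fixes the lattice spacing.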

  12. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease; such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are acquired, using which database scaling types and differe...

  13. Scale setting in lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Sommer, Rainer [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2014-02-15

    The principles of scale setting in lattice QCD as well as the advantages and disadvantages of various commonly used scales are discussed. After listing criteria for good scales, I concentrate on the main presently used ones with an emphasis on scales derived from the Yang-Mills gradient flow. For these I discuss discretisation errors, statistical precision and mass effects. A short review on numerical results also brings me to an unpleasant disagreement which remains to be explained.

  14. Scale issues in tourism development

    Science.gov (United States)

    Sinji Yang; Lori Pennington-Gray; Donald F. Holecek

    1998-01-01

    Proponents of Alternative Tourism overwhelmingly believe that alternative forms of tourism development need to be small in scale. Inasmuch as tourists' demand has great power to shape the market, the issues surrounding the tourism development scale deserve further consideration. This paper discusses the implications and effects of the tourism development scale on...

  15. Scaling up: Assessing social impacts at the macro-scale

    International Nuclear Information System (INIS)

    Schirmer, Jacki

    2011-01-01

    Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but it is challenging because multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south east Australia is described.

  16. Scaling of structural failure

    Energy Technology Data Exchange (ETDEWEB)

    Bazant, Z.P. [Northwestern Univ., Evanston, IL (United States); Chen, Er-Ping [Sandia National Lab., Albuquerque, NM (United States)

    1997-01-01

    This article attempts to review the progress achieved in the understanding of scaling and size effect in the failure of structures. Particular emphasis is placed on quasibrittle materials for which the size effect is complicated. Attention is focused on three main types of size effects, namely the statistical size effect due to randomness of strength, the energy release size effect, and the possible size effect due to fractality of fracture or microcracks. Definitive conclusions on the applicability of these theories are drawn. Subsequently, the article discusses the application of the known size effect law for the measurement of material fracture properties, and the modeling of the size effect by the cohesive crack model, nonlocal finite element models and discrete element models. Extensions to compression failure and to the rate-dependent material behavior are also outlined. The damage constitutive law needed for describing a microcracked material in the fracture process zone is discussed. Various applications to quasibrittle materials, including concrete, sea ice, fiber composites, rocks and ceramics are presented.

  17. SPACE BASED INTERCEPTOR SCALING

    Energy Technology Data Exchange (ETDEWEB)

    G. CANAVAN

    2001-02-01

    Space Based Interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3 at the cost of heavier and more expensive Kinetic Kill Vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them another 20%. Interceptor ranges fall rapidly with theater missile range. Constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay is appropriate. The SBI KKV technology would appear to be common for space- and surface-based boost phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short range theater missiles, current technology appears adequate for pressing rogue ICBM, accidental, and unauthorized launches.

  18. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  19. Tipping the scales.

    Science.gov (United States)

    1998-12-01

    In the US, the October 1998 murder of a physician who performed abortions was an outward manifestation of the insidious battle against legal abortion being waged by radical Christian social conservatives seeking to transform the US democracy into a theocracy. This movement has been documented in a publication entitled, "Tipping the Scales: The Christian Right's Legal Crusade Against Choice" produced as a result of a 4-year investigation conducted by The Center for Reproductive Law and Policy. This publication describes how these fundamentalists have used sophisticated legal, lobbying, and communication strategies to further their goals of challenging the separation of church and state, opposing family planning and sexuality education that is not based solely on abstinence, promoting school prayer, and restricting homosexual rights. The movement has resulted in the introduction of more than 300 anti-abortion bills in states, 50 of which have passed in 23 states. Most Christian fundamentalist groups provide free legal representation to abortion clinic terrorists, and some groups solicit women to bring specious malpractice claims against providers. Sophisticated legal tactics are used by these groups to remove the taint of extremism and mask the danger posed to US constitutional principles being posed by "a well-financed and zealous brand of radical lawyers and their supporters."

  20. The Bereavement Guilt Scale.

    Science.gov (United States)

    Li, Jie; Stroebe, Magaret; Chan, Cecilia L W; Chow, Amy Y M

    2017-06-01

    The rationale, development, and validation of the Bereavement Guilt Scale (BGS) are described in this article. The BGS was based on a theoretically developed, multidimensional conceptualization of guilt. Part 1 describes the generation of the item pool, derived from in-depth interviews, and review of the scientific literature. Part 2 details statistical analyses for further item selection (Sample 1, N = 273). Part 3 covers the psychometric properties of the emergent-BGS (Sample 2, N = 600, and Sample 3, N = 479). Confirmatory factor analysis indicated that a five-factor model fit the data best. Correlations of BGS scores with depression, anxiety, self-esteem, self-forgiveness, and mode of death were consistent with theoretical predictions, supporting the construct validity of the measure. The internal consistency and test-retest reliability were also supported. Thus, initial testing or examination suggests that the BGS is a valid tool to assess multiple components of bereavement guilt. Further psychometric testing across cultures is recommended.

  1. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

    A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. Particularly, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length-scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band

  2. Minimum Efficient Scale (MES) and preferred scale of container terminals

    OpenAIRE

    Kaselimi, Evangelia N.; Notteboom, Theo E.; Pallis, Athanasios A.; Farrell, Sheila

    2011-01-01

    Abstract: The decision on the scale of a port terminal affects the terminal's managerial, operational and competitive position in all the phases of its life. It also affects competition structures in the port in which the terminal is operating, and has a potential impact on other terminals. Port authorities and terminal operators need to know the scale of the terminal when engaging in concession agreements. In economic theory the scale of a plant/firm is typically defined in relation to the Mi...

  3. The Scales of Injustice

    Directory of Open Access Journals (Sweden)

    Charles Blattberg

    2008-02-01

    This paper criticises four major approaches to criminal law – consequentialism, retributivism, abolitionism, and “mixed” pluralism – each of which, in its own fashion, affirms the celebrated emblem of the “scales of justice.” The argument is that there is a better way of dealing with the tensions that often arise between the various legal purposes than by merely balancing them against each other. It consists, essentially, of striving to genuinely reconcile those purposes, a goal which is shown to require taking a new, “patriotic” approach to law.

  4. Industrial scale gene synthesis.

    Science.gov (United States)

    Notka, Frank; Liss, Michael; Wagner, Ralf

    2011-01-01

    The most recent developments in the area of deep DNA sequencing and downstream quantitative and functional analysis are rapidly adding a new dimension to understanding biochemical pathways and metabolic interdependencies. These increasing insights pave the way to designing new strategies that address public needs, including environmental applications and therapeutic inventions, or novel cell factories for sustainable and reconcilable energy or chemicals sources. Adding yet another level is building upon nonnaturally occurring networks and pathways. Recent developments in synthetic biology have created economic and reliable options for designing and synthesizing genes, operons, and eventually complete genomes. Meanwhile, high-throughput design and synthesis of extremely comprehensive DNA sequences have evolved into an enabling technology already indispensable in various life science sectors today. Here, we describe the industrial perspective of modern gene synthesis and its relationship with synthetic biology. Gene synthesis contributed significantly to the emergence of synthetic biology by not only providing the genetic material in high quality and quantity but also enabling its assembly, according to engineering design principles, in a standardized format. Synthetic biology on the other hand, added the need for assembling complex circuits and large complexes, thus fostering the development of appropriate methods and expanding the scope of applications. Synthetic biology has also stimulated interdisciplinary collaboration as well as integration of the broader public by addressing socioeconomic, philosophical, ethical, political, and legal opportunities and concerns. The demand-driven technological achievements of gene synthesis and the implemented processes are exemplified by an industrial setting of large-scale gene synthesis, describing production from order to delivery. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Scaling Effects on Materials Tribology: From Macro to Micro Scale.

    Science.gov (United States)

    Stoyanov, Pantcho; Chromik, Richard R

    2017-05-18

    The tribological study of materials inherently involves the interaction of surface asperities at the micro to nanoscopic length scales. This is the case for large scale engineering applications with sliding contacts, where the real area of contact is made up of small contacting asperities that make up only a fraction of the apparent area of contact. This is why researchers have sought to create idealized experiments of single asperity contacts in the field of nanotribology. At the same time, small scale engineering structures known as micro- and nano-electromechanical systems (MEMS and NEMS) have been developed, where the apparent area of contact approaches the length scale of the asperities, meaning the real area of contact for these devices may be only a few asperities. This is essentially the field of microtribology, where the contact size and/or forces involved have pushed the nature of the interaction between two surfaces towards the regime where the scale of the interaction approaches that of the natural length scale of the features on the surface. This paper provides a review of microtribology with the purpose to understand how tribological processes are different at the smaller length scales compared to macrotribology. Studies of the interfacial phenomena at the macroscopic length scales (e.g., using in situ tribometry) will be discussed and correlated with new findings and methodologies at the micro-length scale.

  6. Dynamic critical behaviour and scaling

    International Nuclear Information System (INIS)

    Oezoguz, B.E.

    2001-01-01

    Traditionally, scaling is a property of dynamical systems at thermal equilibrium. In second order phase transitions, scaling behaviour is due to the infinite correlation length around the critical point. In first order phase transitions, however, the correlation length remains finite and a different type of scaling can be observed: all singularities are governed by the volume of the system. Recently, a different type of scaling, namely dynamic scaling, has attracted attention in second order phase transitions. In dynamic scaling, when a system prepared at high temperature is quenched to the critical temperature, it exhibits scaling behaviour. Dynamic scaling has been applied to various spin systems and the validity of the arguments has been shown. In this thesis project, dynamic scaling is first applied to the 4-dimensional Ising spin system, which exhibits a second order phase transition with mean-field critical indices. Secondly, it is shown that although the dynamics is quite different, first order phase transitions also exhibit a type of dynamic scaling.

  7. Plague and Climate: Scales Matter

    Science.gov (United States)

    Ben Ari, Tamara; Neerinckx, Simon; Gage, Kenneth L.; Kreppel, Katharina; Laudisoit, Anne; Leirs, Herwig; Stenseth, Nils Chr.

    2011-01-01

    Plague is enzootic in wildlife populations of small mammals in central and eastern Asia, Africa, South and North America, and has been recognized recently as a reemerging threat to humans. Its causative agent Yersinia pestis relies on wild rodent hosts and flea vectors for its maintenance in nature. Climate influences all three components (i.e., bacteria, vectors, and hosts) of the plague system and is a likely factor to explain some of plague's variability from small and regional to large scales. Here, we review effects of climate variables on plague hosts and vectors from individual or population scales to studies on the whole plague system at a large scale. Upscaled versions of small-scale processes are often invoked to explain plague variability in time and space at larger scales, presumably because similar scale-independent mechanisms underlie these relationships. This linearity assumption is discussed in the light of recent research that suggests some of its limitations. PMID:21949648

  8. H2@Scale Workshop Report

    Energy Technology Data Exchange (ETDEWEB)

    Pivovar, Bryan

    2017-03-31

    Final report from the H2@Scale Workshop held November 16-17, 2016, at the National Renewable Energy Laboratory in Golden, Colorado. The U.S. Department of Energy's National Renewable Energy Laboratory hosted a technology workshop to identify the current barriers and research needs of the H2@Scale concept. H2@Scale is a concept regarding the potential for wide-scale impact of hydrogen produced from diverse domestic resources to enhance U.S. energy security and enable growth of innovative technologies and domestic industries. Feedback received from a diverse set of stakeholders at the workshop will guide the development of an H2@Scale roadmap for research, development, and early stage demonstration activities that can enable hydrogen as an energy carrier at a national scale.

  9. Scaling structure loads for SMA

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Won; Song, Jeong Guk; Jeon, Sang Ho; Lim, Hak Kyu; Lee, Kwang Nam [KEPCO ENC, Yongin (Korea, Republic of)

    2012-10-15

    When a Seismic Margin Analysis (SMA) is conducted, generating new structural loads for the Seismic Margin Earthquake (SME) is time-consuming work. For convenience, EPRI NP 6041 suggests scaling the structure loads. The report recommends this for fixed-base (rock foundation) structures designed using either constant modal damping or modal damping ratios developed for a single material damping. For these cases, the SME loads can easily and accurately be calculated by scaling the spectral accelerations of the individual modes for the new SME response spectra. EPRI NP 6041 provides two simple methodologies for scaling structure seismic loads: the dominant frequency scaling methodology and the mode by mode scaling methodology. Scaling of the existing analysis to develop SME loads is much easier and more efficient than performing a new analysis. This paper compares the results of the two methodologies.
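    The mode by mode scaling methodology described above can be sketched as follows: each mode's load is scaled by the ratio of the SME to design spectral acceleration at that mode's frequency, and the scaled modal responses are recombined. The function name, the use of callables for the spectra, and the SRSS modal combination are illustrative assumptions, not the EPRI NP 6041 procedure itself.

    ```python
    import math

    def scale_modal_loads(modal_loads, mode_freqs, design_spectrum, sme_spectrum):
        """Scale each mode's load vector by the SME/design spectral-acceleration
        ratio at that mode's frequency, then recombine by SRSS
        (square root of the sum of squares). Illustrative sketch only."""
        scaled = []
        for load, f in zip(modal_loads, mode_freqs):
            ratio = sme_spectrum(f) / design_spectrum(f)
            scaled.append([x * ratio for x in load])
        # SRSS modal combination, component by component
        return [math.sqrt(sum(c * c for c in comps)) for comps in zip(*scaled)]
    ```

    With identical design and SME spectra the result reduces to the plain SRSS of the existing modal loads, which is the sanity check one would apply before scaling to a new spectrum.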

  10. Scaling structure loads for SMA

    International Nuclear Information System (INIS)

    Lee, Dong Won; Song, Jeong Guk; Jeon, Sang Ho; Lim, Hak Kyu; Lee, Kwang Nam

    2012-01-01

    When a Seismic Margin Analysis (SMA) is conducted, generating new structural loads for the Seismic Margin Earthquake (SME) is time-consuming work. For convenience, EPRI NP 6041 suggests scaling the structure loads. The report recommends this for fixed-base (rock foundation) structures designed using either constant modal damping or modal damping ratios developed for a single material damping. For these cases, the SME loads can easily and accurately be calculated by scaling the spectral accelerations of the individual modes for the new SME response spectra. EPRI NP 6041 provides two simple methodologies for scaling structure seismic loads: the dominant frequency scaling methodology and the mode by mode scaling methodology. Scaling of the existing analysis to develop SME loads is much easier and more efficient than performing a new analysis. This paper compares the results of the two methodologies.

  11. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical process in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: Enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale Offers engineers and designers a new point of view, liberating creative and inno...

  12. Contact kinematics of biomimetic scales

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Ranajay; Ebrahimi, Hamid; Vaziri, Ashkan, E-mail: vaziri@coe.neu.edu [Department of Mechanical and Industrial Engineering, Northeastern University, Boston, Massachusetts 02115 (United States)

    2014-12-08

    Dermal scales, prevalent across biological groups, considerably boost survival by providing multifunctional advantages. Here, we investigate the nonlinear mechanical effects of biomimetic scale-like attachments on the behavior of an elastic substrate brought about by the contact interaction of scales in pure bending, using qualitative experiments, analytical models, and detailed finite element (FE) analysis. Our results reveal the existence of three distinct kinematic phases of operation spanning linear, nonlinear, and rigid behavior driven by kinematic interactions of scales. The response of the modified elastic beam strongly depends on the size and spatial overlap of rigid scales. The nonlinearity is perceptible even in a relatively small strain regime and without invoking material level complexities of either the scales or the substrate.

  13. Drift Scale THM Model

    International Nuclear Information System (INIS)

    Rutqvist, J.

    2004-01-01

    This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes on seepage and support in the model reports ''Seepage Model for PA Including Drift Collapse'' and ''Abstraction of Drift Seepage'', and to support arguments for exclusion of features, events, and processes (FEPs) in the analysis reports ''Features, Events, and Processes in Unsaturated Zone Flow and Transport'' and ''Features, Events, and Processes: Disruptive Events''. The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the THM coupled processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because a sufficient amount of water must be available within a

  14. Scale symmetry and virial theorem

    International Nuclear Information System (INIS)

    Westenholz, C. von

    1978-01-01

    Scale symmetry (or dilatation invariance) is discussed in terms of Noether's Theorem expressed in terms of a symmetry group action on phase space endowed with a symplectic structure. The conventional conceptual approach expressing invariance of some Hamiltonian under scale transformations is re-expressed in alternate form by infinitesimal automorphisms of the given symplectic structure. That is, the vector field representing scale transformations leaves the symplectic structure invariant. In this model, the conserved quantity or constant of motion related to scale symmetry is the virial. It is shown that the conventional virial theorem can be derived within this framework

  15. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement (''the Task'') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  16. Natural Scales in Geographical Patterns

    Science.gov (United States)

    Menezes, Telmo; Roth, Camille

    2017-04-01

    Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.
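    The abstract above does not specify its parameter-free discontinuity detection algorithm; a minimal sketch of one such heuristic, applied to a scalar descriptor of the partition sequence (e.g., the number of communities found at each distance percentile), might look like the following. The two-times-mean-jump threshold is an assumption for illustration only, not the authors' method.

    ```python
    def detect_discontinuities(series):
        """Flag positions where the jump between consecutive values is
        unusually large. Illustrative heuristic: a jump counts as a
        discontinuity (candidate phase transition) if it exceeds twice
        the mean absolute successive difference of the whole series."""
        diffs = [abs(b - a) for a, b in zip(series, series[1:])]
        threshold = 2 * sum(diffs) / len(diffs)
        return [i + 1 for i, d in enumerate(diffs) if d > threshold]

    # Community counts across increasing distance percentiles (toy data):
    # two sharp drops stand out against an otherwise flat sequence.
    print(detect_discontinuities([10, 10, 9, 10, 4, 4, 4, 1, 1]))  # → [4, 7]
    ```

    On real movement networks the descriptor would come from community detection at each percentile; the plateaus between detected jumps correspond to the "natural scales" the study reports.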

  17. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...
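    A toy sketch of scale selection by maximum likelihood: under i.i.d. Gaussian noise, maximizing the likelihood over candidate scales is equivalent to minimizing the residual sum of squares of the best-fitting model at each scale. The Gaussian-bump model and the function name below are illustrative assumptions; the paper's actual estimator works under a Brownian image model.

    ```python
    import math

    def ml_scale_select(signal, xs, scales):
        """Pick the scale whose best-fitting Gaussian bump minimizes the
        residual sum of squares (maximum likelihood under i.i.d. Gaussian
        noise). Toy illustration of the selection principle only."""
        best_scale, best_rss = None, float("inf")
        for s in scales:
            model = [math.exp(-x * x / (2 * s * s)) for x in xs]
            # Optimal amplitude for this scale via linear least squares
            amp = sum(m * y for m, y in zip(model, signal)) / sum(m * m for m in model)
            rss = sum((y - amp * m) ** 2 for m, y in zip(model, signal))
            if rss < best_rss:
                best_scale, best_rss = s, rss
        return best_scale
    ```

    Applied to a noise-free bump of unit width, the selector recovers the generating scale exactly; with noise, the likelihood landscape flattens and the estimate becomes a distribution over nearby scales.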

  18. Wind Farm parametrization in the mesoscale model WRF

    DEFF Research Database (Denmark)

    Volker, Patrick; Badger, Jake; Hahmann, Andrea N.

    2012-01-01

    , but are parametrized as another sub-grid scale process. In order to appropriately capture the wind farm wake recovery and its direction, two properties are important, among others: the total energy extracted by the wind farm and its velocity deficit distribution. In the considered parametrization the individual turbines produce a thrust dependent on the background velocity; the extracted force is proportional to the turbine area interfacing a grid cell. The sub-grid scale wake expansion is achieved by adding turbulence kinetic energy (proportional to the extracted power) to the flow. For the sub-grid scale velocity deficit, the entrainment from the free atmospheric flow into the wake region, which is responsible for the expansion, is taken into account. Furthermore, since the model horizontal distance is several times... The validity of both wind farm parametrizations has been verified against observational data.
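    The two quantities such a parametrization balances can be sketched per turbine-containing grid cell: a momentum sink from the thrust the turbine exerts on the background flow, and a turbulence-kinetic-energy source proportional to the power extracted. The function name, the coefficient values, and the specific TKE closure (thrust work minus electrical power) are assumptions for illustration, not the actual WRF wind farm scheme.

    ```python
    RHO = 1.225  # air density [kg/m^3]

    def turbine_tendencies(u, ct, cp, rotor_area, cell_area, dz):
        """Per-cell momentum sink and TKE source for one turbine
        (hypothetical simplification of a mesoscale wind farm scheme).
        u: background wind speed [m/s]; ct, cp: thrust/power coefficients;
        areas in m^2, layer thickness dz in m."""
        thrust = 0.5 * RHO * ct * rotor_area * u ** 2           # [N]
        power_extracted = 0.5 * RHO * cp * rotor_area * u ** 3  # [W]
        cell_volume = cell_area * dz
        # Thrust distributed over the cell volume as a velocity tendency
        du_dt = -thrust / (RHO * cell_volume)                   # [m/s^2]
        # Work done against thrust that is not converted to electricity
        # feeds sub-grid turbulence, driving wake expansion
        tke_source = (thrust * u - power_extracted) / (RHO * cell_volume)
        return du_dt, tke_source
    ```

    Because the tendencies scale with `rotor_area`, a turbine whose rotor straddles two cells contributes to each in proportion to the interfacing area, which is the property the abstract emphasizes.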

  19. Wake effects of large offshore wind farms on the mesoscale atmosphere

    DEFF Research Database (Denmark)

    Volker, Patrick; Badger, Jake; Hahmann, Andrea N.

    ...due to the fact that its typical horizontal grid spacing is on the order of 2 km, the energy extracted by the turbine, as well as the wake development inside the turbine-containing grid cells, are not described explicitly, but are parametrized as another sub-grid scale process. In order to appropriately capture the wind farm wake recovery and its direction, two properties are important: the total energy extracted by the wind farm and its velocity deficit distribution. In the considered parametrization the individual turbines apply a thrust dependent on a local sub-grid scale velocity, which is influenced by the up-stream turbines. For the sub-grid scale velocity deficit, the entrainment from the free atmospheric flow into the wake region is taken into account. Furthermore, since the model horizontal distance is several times larger than the turbine diameter, it has been assumed that the generated...

  20. Collider Scaling and Cost Estimation

    International Nuclear Information System (INIS)

    Palmer, R.B.

    1986-01-01

    This paper deals with collider cost and scaling. The main points of discussion are: 1) scaling laws and cost estimation: accelerating gradient requirements, total stored RF energy considerations, peak power considerations, average power consumption; 2) cost optimization; 3) bremsstrahlung considerations; 4) focusing optics: conventional, laser focusing or super disruption. 13 refs

  1. Voice, Schooling, Inequality, and Scale

    Science.gov (United States)

    Collins, James

    2013-01-01

    The rich studies in this collection show that the investigation of voice requires analysis of "recognition" across layered spatial-temporal and sociolinguistic scales. I argue that the concepts of voice, recognition, and scale provide insight into contemporary educational inequality and that their study benefits, in turn, from paying attention to…

  2. Spiritual Competency Scale: Further Analysis

    Science.gov (United States)

    Dailey, Stephanie F.; Robertson, Linda A.; Gill, Carman S.

    2015-01-01

    This article describes a follow-up analysis of the Spiritual Competency Scale, which initially validated ASERVIC's (Association for Spiritual, Ethical and Religious Values in Counseling) spiritual competencies. The study examined whether the factor structure of the Spiritual Competency Scale would be supported by participants (i.e., ASERVIC…

  3. Scaling as an Organizational Method

    DEFF Research Database (Denmark)

    Papazu, Irina Maria Clara Hansen; Nelund, Mette

    2018-01-01

    Organization studies have shown limited interest in the part that scaling plays in organizational responses to climate change and sustainability. Moreover, while scales are viewed as central to the diagnosis of the organizational challenges posed by climate change and sustainability, the role...... turn something as immense as the climate into a small and manageable problem, thus making abstract concepts part of concrete, organizational practice....

  4. The Assertiveness Scale for Children.

    Science.gov (United States)

    Peeler, Elizabeth; Rimmer, Susan M.

    1981-01-01

    Described an assertiveness scale for children developed to assess four dimensions of assertiveness across three categories of interpersonal situations. The scale was administered to elementary and middle school children (N=609) and readministered to students (N=164) to assess test-retest reliability. Test-retest reliability was low while internal…

  5. On the Geologic Time Scale

    NARCIS (Netherlands)

    Gradstein, F.M.; Ogg, J.G.; Hilgen, F.J.

    2012-01-01

    This report summarizes the international divisions and ages in the Geologic Time Scale, published in 2012 (GTS2012). Since 2004, when GTS2004 was detailed, major developments have taken place that directly bear and have considerable impact on the intricate science of geologic time scaling. Precam

  6. A Scale of Mobbing Impacts

    Science.gov (United States)

    Yaman, Erkan

    2012-01-01

    The aim of this research was to develop the Mobbing Impacts Scale and to examine its validity and reliability analyses. The sample of study consisted of 509 teachers from Sakarya. In this study construct validity, internal consistency, test-retest reliabilities and item analysis of the scale were examined. As a result of factor analysis for…

  7. Znamenny scale – fait accompli?

    Directory of Open Access Journals (Sweden)

    Alexei Yaropolov

    2010-12-01

    Full Text Available The author addresses one of the most sensitive topics of Znamenny chant: its scale. He attempts to restore the “burned bridges” between the ideographic and staff notation of the chant by redefining and substantially generalizing the concept of a scale as such. The possibility of artificially constructing many ad hoc scales, some sounding very similar to the scale suggested by the modern staff notation, is a serious argument for regarding the 17th-century “staff-notation based” decipherings (dvoeznamenniki) as a pure game of chance. The constructed scales presented in the paper differ from “keyboard diatonica” and from one another, and resist unified theorizing (a unified nomenclature of degrees, etc.). The author criticizes the practice of uncontrolled use of trivial pitch symbols for deciphering, which ipso facto makes the probabilistic steps of unknown scales look like ill-founded deviations from the diatonic scale steps currently used in common musical education. This practice obscures the right of a remote musical culture to rest on foundations that can be formulated both positively and explicitly, all the more so as the usage of paleographic signs appears rather consistent. Resemblances and differences between musical cultures may then be treated more liberally, since no scale is seen as a norm. The author draws on the writings of the Russian musicologist and organologist Felix Raudonikas.

  8. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  9. Gravitation on large scales

    Science.gov (United States)

    Giraud, E.

    A sample of dwarf and spiral galaxies with extended rotation curves is analysed, assuming that the fraction of dark matter is small. The objective of the paper is to prepare a framework for a theory, based on fundamental principles, that would give fits of the same quality as the phenomenology of dark halos. The following results are obtained: 1) The geodesics of massive systems with low density (Class I galaxies) can be described by the metric ds² = b⁻¹(r)dr² − b(r)dt² + r²dΩ², where b(r) = 1 − (2/c²)(GM/r + Γ_f M^(1/2)). In this expression Γ_f is a new fundamental constant, deduced from rotation curves of galaxies with circular velocity V_c² ≥ 2GM/r for all r. 2) The above metric is deduced from the conformally invariant metric ds² = B⁻¹(r)dr² − B(r)dt² + r²dΩ², where B(r) = 1 − (2/c²)(GM/r + Γ_f M^(1/2) + (1/3)(Γ_f²/G)r), through a linear transform u of the special linear group SL(2, R). 3) The term (2/c²)Γ_f M^(1/2) accounts for the difference between the observed rotation velocity and the Newtonian velocity. The term (2/(3c²))(Γ_f²/G)r is interpreted as a scale invariance between systems of different masses and sizes. 4) The metric B is a vacuum solution around a mass M deduced from the least-action principle applied to the unique action I_a = −2a ∫ (−g)^(1/2) [R_{μκ}R^{μκ} − (1/3)(R^α_α)²] d⁴x built with the conformal Weyl tensor. 5) For galaxies such that there is a radius r₀ at which GM/r₀ = Γ_f M^(1/2) (Class II), the term Γ_f M^(1/2) might be confined by the Newtonian potential, yielding stationary solutions.
    6) The analysed rotation curves of Class II galaxies are indeed well described by metrics of the form b(r) = 1 − (2/c²)(GM/r + (n + 1)Γ₀ M^(1/2)), where n is an integer and Γ₀ = (1/√3)Γ_f. 7) The effective potential is determined and

  10. Dynamic inequalities on time scales

    CERN Document Server

    Agarwal, Ravi; Saker, Samir

    2014-01-01

    This is a monograph devoted to recent research and results on dynamic inequalities on time scales. The study of dynamic inequalities on time scales has been covered extensively in the literature in recent years and has now become a major sub-field in pure and applied mathematics. In particular, this book will cover recent results on integral inequalities, including Young's inequality, Jensen's inequality, Hölder's inequality, Minkowski's inequality, Steffensen's inequality, the Hermite-Hadamard inequality and Čebyšev's inequality. Opial type inequalities on time scales and their extensions with weighted functions, Lyapunov type inequalities, Halanay type inequalities for dynamic equations on time scales, and Wirtinger type inequalities on time scales and their extensions will also be discussed here in detail.

  11. Entanglement scaling in lattice systems

    Energy Technology Data Exchange (ETDEWEB)

    Audenaert, K M R [Institute for Mathematical Sciences, Imperial College London, 53 Prince's Gate, Exhibition Road, London SW7 2PG (United Kingdom); Cramer, M [QOLS, Blackett Laboratory, Imperial College London, Prince Consort Road, London SW7 2BW (United Kingdom); Eisert, J [Institute for Mathematical Sciences, Imperial College London, 53 Prince's Gate, Exhibition Road, London SW7 2PG (United Kingdom); Plenio, M B [Institute for Mathematical Sciences, Imperial College London, 53 Prince's Gate, Exhibition Road, London SW7 2PG (United Kingdom)

    2007-05-15

    We review some recent rigorous results on scaling laws of entanglement properties in quantum many body systems. More specifically, we study the entanglement of a region with its surrounding and determine its scaling behaviour with its size for systems in the ground and thermal states of bosonic and fermionic lattice systems. A theorem connecting entanglement between a region and the rest of the lattice with the surface area of the boundary between the two regions is presented for non-critical systems in arbitrary spatial dimensions. The entanglement scaling in the field limit exhibits a peculiar difference between fermionic and bosonic systems. In one-spatial dimension a logarithmic divergence is recovered for both bosonic and fermionic systems. In two spatial dimensions in the setting of half-spaces however we observe strict area scaling for bosonic systems and a multiplicative logarithmic correction to such an area scaling in fermionic systems. Similar questions may be posed and answered in classical systems.

  12. GUT Scale Fermion Mass Ratios

    International Nuclear Information System (INIS)

    Spinrath, Martin

    2014-01-01

    We present a series of recent works related to group theoretical factors from GUT symmetry breaking which lead to predictions for the ratios of quark and lepton Yukawa couplings at the unification scale. New predictions for the GUT scale ratios y_μ/y_s, y_τ/y_b and y_t/y_b in particular are shown and compared to experimental data. For this comparison it is important to include possibly large supersymmetric threshold corrections. For this reason, the structure of the fermion masses at the GUT scale depends on TeV-scale physics and makes GUT scale physics testable at the LHC. We also discuss how these new predictions might lead to predictions for mixing angles, using the example of the recently measured last missing leptonic mixing angle θ_13, which makes this new class of GUT models also testable in neutrino experiments

  13. Statistical data and results obtained on irradiated transistors 2N.2221 Sesco and 2N.2907 SGS; Donnees de fiabilite et resultats statistiques obtenus sur des transistors 2n.2221 Sesco et 2n.2907 SGS irradies

    Energy Technology Data Exchange (ETDEWEB)

    Blin, A; Le Ber, J

    1966-07-01

    This document provides results obtained on many samples of transistors irradiated in the laboratories of the Institute of Nuclear Physics of Lyon. The physical aspects of the irradiation, the statistical aspects of the study and the reliability under irradiation have been studied, but the emphasis is placed on the statistical analysis. (A.L.B.)

  14. Convergent Validity of Four Innovativeness Scales.

    Science.gov (United States)

    Goldsmith, Ronald E.

    1986-01-01

    Four scales of innovativeness were administered to two samples of undergraduate students: the Open Processing Scale, Innovativeness Scale, innovation subscale of the Jackson Personality Inventory, and Kirton Adaption-Innovation Inventory. Intercorrelations indicated the scales generally exhibited convergent validity. (GDC)

  15. Cardiac Depression Scale: Mokken scaling in heart failure patients

    Directory of Open Access Journals (Sweden)

    Ski Chantal F

    2012-11-01

    Full Text Available Abstract Background There is a high prevalence of depression in patients with heart failure (HF) that is associated with worsening prognosis. The value of using a reliable and valid instrument to measure depression in this population is therefore essential. We validated the Cardiac Depression Scale (CDS) in heart failure patients using a model of ordinal unidimensional measurement known as Mokken scaling. Findings We administered the CDS in face-to-face interviews to 603 patients with HF. Data were analysed using Mokken scale analysis. Items of the CDS formed a statistically significant unidimensional Mokken scale of low strength (H0.8. Conclusions The CDS has a hierarchy of items which can be interpreted in terms of the increasingly serious effects of depression occurring as a result of HF. Identifying an appropriate instrument to measure depression in patients with HF allows for early identification and better medical management.

  16. Scale-by-scale contributions to Lagrangian particle acceleration

    Science.gov (United States)

    Lalescu, Cristian C.; Wilczek, Michael

    2017-11-01

    Fluctuations on a wide range of scales in both space and time are characteristic of turbulence. Lagrangian particles, advected by the flow, probe these fluctuations along their trajectories. In an effort to isolate the influence of the different scales on Lagrangian statistics, we employ direct numerical simulations (DNS) combined with a filtering approach. Specifically, we study the acceleration statistics of tracers advected in filtered fields to characterize the smallest temporal scales of the flow. Emphasis is put on the acceleration variance as a function of filter scale, along with the scaling properties of the relevant terms of the Navier-Stokes equations. We furthermore discuss scaling ranges for higher-order moments of the tracer acceleration, as well as the influence of the choice of filter on the results. Starting from the Lagrangian tracer acceleration as the short time limit of the Lagrangian velocity increment, we also quantify the influence of filtering on Lagrangian intermittency. Our work complements existing experimental results on intermittency and accelerations of finite-sized, neutrally-buoyant particles: for the passive tracers used in our DNS, feedback effects are neglected such that the spatial averaging effect is cleanly isolated.
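The filtering effect described in this record can be illustrated with a one-dimensional toy signal rather than the paper's DNS fields: a "velocity" with a slow and a fast component is smoothed with a Gaussian filter, and the variance of its time derivative (the "acceleration") drops as the fast scales are removed. Everything below (signal, filter width, grid size) is an invented illustration of the general mechanism, not the authors' setup.

```python
import math

# Periodic toy velocity signal: slow mode + weak fast mode.
N = 512
dt = 2 * math.pi / N
u = [math.sin(i * dt) + 0.2 * math.sin(20 * i * dt) for i in range(N)]

def gaussian_filter(sig, sigma):
    """Discrete Gaussian smoothing with periodic wrap-around (sigma in samples)."""
    half = int(4 * sigma)
    kern = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-half, half + 1)]
    s = sum(kern)
    kern = [w / s for w in kern]
    n = len(sig)
    return [sum(kern[j] * sig[(i + j - half) % n] for j in range(len(kern)))
            for i in range(n)]

def accel_variance(sig):
    """Variance of the centred second difference (the 'acceleration')."""
    n = len(sig)
    a = [(sig[(i + 1) % n] - 2 * sig[i] + sig[i - 1]) / dt ** 2 for i in range(n)]
    m = sum(a) / n
    return sum((x - m) ** 2 for x in a) / n

raw = accel_variance(u)
filtered = accel_variance(gaussian_filter(u, sigma=12.0))
print(raw, filtered)  # acceleration variance collapses once fast scales are filtered
```

Because acceleration weights each mode by the square of its frequency, even a weak fast component dominates the unfiltered variance; this is why the acceleration variance is such a sensitive probe of the smallest resolved scales.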

  17. Direct numerical simulation of turbulent velocity-, pressure- and temperature-fields in channel flows

    International Nuclear Information System (INIS)

    Goetzbach, G.

    1977-10-01

    For the simulation of non-stationary, three-dimensional, turbulent flow and temperature fields in channel flows with constant properties, a method is presented which is based on a finite-difference scheme of the complete conservation equations for mass, momentum and enthalpy. The fluxes of momentum and heat within the grid cells are described by sub-grid scale models. The sub-grid scale model for momentum introduced here is for the first time applicable to small Reynolds numbers, rather coarse grids, and channels with space-dependent roughness distributions. (orig.) [de

  18. On inertial range scaling laws

    International Nuclear Information System (INIS)

    Bowman, J.C.

    1994-12-01

    Inertial-range scaling laws for two- and three-dimensional turbulence are re-examined within a unified framework. A new correction to Kolmogorov's k -5/3 scaling is derived for the energy inertial range. A related modification is found to Kraichnan's logarithmically corrected two-dimensional enstrophy cascade law that removes its unexpected divergence at the injection wavenumber. The significance of these corrections is illustrated with steady-state energy spectra from recent high-resolution closure computations. The results also underscore the asymptotic nature of inertial-range scaling laws. Implications for conventional numerical simulations are discussed
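The uncorrected Kolmogorov law referred to above is E(k) = C ε^(2/3) k^(−5/3); a quick numeric check confirms that its log-log slope in the inertial range is exactly −5/3. The constant C and dissipation rate ε below are placeholder values for illustration.

```python
import math

C, eps = 1.5, 1.0  # typical Kolmogorov constant and a unit dissipation rate

def E(k):
    """Kolmogorov inertial-range energy spectrum."""
    return C * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

# Slope between two wavenumbers on a log-log plot:
k1, k2 = 10.0, 1000.0
slope = (math.log(E(k2)) - math.log(E(k1))) / (math.log(k2) - math.log(k1))
print(slope)  # -5/3
```

The corrections discussed in the record perturb this exponent (or add logarithmic factors, as in Kraichnan's two-dimensional enstrophy cascade), which is why high-resolution spectra are needed to distinguish them from the pure power law.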

  19. Geometric scaling as traveling waves

    International Nuclear Information System (INIS)

    Munier, S.; Peschanski, R.

    2003-01-01

    We show the relevance of the nonlinear Fisher and Kolmogorov-Petrovsky-Piscounov (KPP) equation to the problem of high energy evolution of the QCD amplitudes. We explain how the traveling wave solutions of this equation are related to geometric scaling, a phenomenon observed in deep-inelastic scattering experiments. Geometric scaling is for the first time shown to result from an exact solution of nonlinear QCD evolution equations. Using general results on the KPP equation, we compute the velocity of the wave front, which gives the full high energy dependence of the saturation scale

  20. Straight scaling FFAG beam line

    International Nuclear Information System (INIS)

    Lagrange, J.-B.; Planche, T.; Yamakawa, E.; Uesugi, T.; Ishi, Y.; Kuriyama, Y.; Qin, B.; Okabe, K.; Mori, Y.

    2012-01-01

    Fixed field alternating gradient (FFAG) accelerators have recently been the subject of a strong revival. They are usually designed in a circular shape; however, it would be an asset to guide particles with no overall bend in this type of accelerator. An analytical development of a straight FFAG cell which keeps zero chromaticity is presented here. A magnetic field law is thus obtained, called “straight scaling law”, and an experiment has been conducted to confirm this zero-chromatic law. A straight scaling FFAG prototype has been designed and manufactured, and horizontal phase advances at two different energies are measured. Results are analyzed to clarify the straight scaling law.

  1. Compositeness and the Fermi scale

    International Nuclear Information System (INIS)

    Peccei, R.D.

    1984-01-01

    The positive attitude adopted up to now, in view of the non-observation of substructure effects, is that the compositeness scale Λ must be large: Λ ≳ 1 TeV. Such a large value of Λ gives rise to two theoretical problems which I examine here, namely: 1) what dynamics yields light composite quarks and leptons (m_f ≪ Λ), and 2) what relation does the compositeness scale Λ have to the Fermi scale Λ_F = (√2 G_F)^(−1/2) ≈ 250 GeV. (orig./HSI)
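The Fermi scale quoted in this abstract follows directly from the measured Fermi constant; a one-line check reproduces the ≈ 250 GeV figure (the modern value of G_F gives about 246 GeV, the electroweak vacuum expectation value).

```python
import math

# Fermi constant in natural units, GeV^-2 (current PDG value).
G_F = 1.1663787e-5

# Lambda_F = (sqrt(2) * G_F)^(-1/2)
Lambda_F = (math.sqrt(2) * G_F) ** -0.5
print(Lambda_F)  # GeV, approximately 246
```

The abstract's "approx. 250 GeV" is this same quantity quoted to one significant figure.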

  2. Straight scaling FFAG beam line

    Science.gov (United States)

    Lagrange, J.-B.; Planche, T.; Yamakawa, E.; Uesugi, T.; Ishi, Y.; Kuriyama, Y.; Qin, B.; Okabe, K.; Mori, Y.

    2012-11-01

    Fixed field alternating gradient (FFAG) accelerators have recently been the subject of a strong revival. They are usually designed in a circular shape; however, it would be an asset to guide particles with no overall bend in this type of accelerator. An analytical development of a straight FFAG cell which keeps zero chromaticity is presented here. A magnetic field law is thus obtained, called "straight scaling law", and an experiment has been conducted to confirm this zero-chromatic law. A straight scaling FFAG prototype has been designed and manufactured, and horizontal phase advances at two different energies are measured. Results are analyzed to clarify the straight scaling law.

  3. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  4. Stabilized High-order Galerkin Methods Based on a Parameter-free Dynamic SGS Model for LES

    Science.gov (United States)

    2015-01-01

    ...is classically referred to as the "Burgers" equation; the first time it was introduced dates back to the 1915 work by Bateman [H. Bateman, "Some recent researches on the motion of fluids", Mon. Wea. Rev., 43:163–170, 1915].

  5. Development and Marketing of Project Finance & Project Monitoring as New Services – The Case of SGS Zurich

    OpenAIRE

    Tizro, Behrouz

    2010-01-01

    The need to be present and invest in foreign markets beyond companies' own geographical borders necessitates strict supervision. This is neither easy nor inexpensive for investing bodies: they must first decide reasonably on investments which would be feasible before assuming financial undertakings, and second maintain a regular presence at their investment projects and locations. Furthermore, they may not have the necessary resources or required expertise within their own organizations to assi...

  6. Hidden scale invariance of metals

    DEFF Research Database (Denmark)

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.

    2015-01-01

    Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general “hidden” scale invariance of metals, making the condensed part of the thermodynamic phase diagram effectively one dimensional with respect to structure and dynamics. DFT-computed density scaling exponents, related to the Grüneisen parameter, are in good agreement with experimental values for the 16 elements where reliable data were available. Hidden scale invariance is demonstrated in detail for magnesium by showing invariance of structure and dynamics. Computed melting curves of period-three metals follow curves with invariance (isomorphs). The experimental structure factor of magnesium is predicted by assuming scale-invariant...

  7. Parity at the Planck scale

    Science.gov (United States)

    Arzano, Michele; Gubitosi, Giulia; Magueijo, João

    2018-06-01

    We explore the possibility that well known properties of the parity operator, such as its idempotency and unitarity, might break down at the Planck scale. Parity might then do more than just swap right and left polarized states and reverse the sign of spatial momentum k: it might generate superpositions of right and left handed states, as well as mix momenta of different magnitudes. We lay down the general formalism, but also consider the concrete case of the Planck scale kinematics governed by κ-Poincaré symmetries, where some of the general features highlighted appear explicitly. We explore some of the observational implications for cosmological fluctuations. Different power spectra for right handed and left handed tensor modes might actually be a manifestation of deformed parity symmetry at the Planck scale. Moreover, scale-invariance and parity symmetry appear deeply interconnected.

  8. Scaling of graphene integrated circuits.

    Science.gov (United States)

    Bianchi, Massimiliano; Guerriero, Erica; Fiocco, Marco; Alberti, Ruggero; Polloni, Laura; Behnam, Ashkan; Carrion, Enrique A; Pop, Eric; Sordan, Roman

    2015-05-07

    The influence of transistor size reduction (scaling) on the speed of realistic multi-stage integrated circuits (ICs) represents the main performance metric of a given transistor technology. Despite extensive interest in graphene electronics, scaling efforts have so far focused on individual transistors rather than multi-stage ICs. Here we study the scaling of graphene ICs based on transistors from 3.3 to 0.5 μm gate lengths and with different channel widths, access lengths, and lead thicknesses. The shortest gate delay of 31 ps per stage was obtained in sub-micron graphene ring oscillators (ROs) oscillating at 4.3 GHz, the highest oscillation frequency obtained in any strictly low-dimensional material to date. We also derived the fundamental Johnson limit, showing that scaled graphene ICs could be used at high frequencies in applications with small voltage swing.

  9. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  10. Morocco - Small-Scale Fisheries

    Data.gov (United States)

    Millennium Challenge Corporation — The final performance evaluation roadmap for the Small-Scale Fisheries Project (PPA-MCC) is developed using a grid constructed around indicators relating to Project...

  11. Minimum scaling laws in tokamaks

    International Nuclear Information System (INIS)

    Zhang, Y.Z.; Mahajan, S.M.

    1986-10-01

    Scaling laws governing anomalous electron transport in tokamaks with ohmic and/or auxiliary heating are derived using renormalized Vlasov-Ampère equations for low-frequency electromagnetic microturbulence. It is also shown that for pure auxiliary heating (or when auxiliary heating power far exceeds the ohmic power), the energy confinement time scales as τ_E ∼ P_inj^(−1/3), where P_inj is the injected power

  12. Scale issues in remote sensing

    CERN Document Server

    Weng, Qihao

    2014-01-01

    This book provides up-to-date developments, methods, and techniques in the field of GIS and remote sensing and features articles from internationally renowned authorities on three interrelated perspectives of scaling issues: scale in land surface properties, land surface patterns, and land surface processes. The book is ideal as a professional reference for practicing geographic information scientists and remote sensing engineers as well as a supplemental reading for graduate level students.

  13. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series

  14. Normalization of emotion control scale

    Directory of Open Access Journals (Sweden)

    Hojatoolah Tahmasebian

    2014-09-01

    Full Text Available Background: Emotion control skill teaches individuals how to identify their emotions and how to express and control them in various situations. The aim of this study was to normalize and measure the internal and external validity and reliability of an emotion control test. Methods: This standardization study was carried out on a statistical population including all pupils, students, teachers, nurses and university professors in Kermanshah in 2012, using Williams' emotion control scale. The subjects included 1,500 people (810 females and 690 males) who were selected by stratified random sampling. Williams' (1997) emotion control scale was used to collect the required data. The Emotional Control Scale is a tool for measuring the degree of control people have over their emotions. This scale has four subscales: anger, depressed mood, anxiety and positive affect. The collected data were analyzed by SPSS software using correlation and Cronbach's alpha tests. Results: The internal consistency of the questionnaire, reported by Cronbach's alpha, indicated an acceptable internal consistency for the emotional control scale, and the correlation between the subscales of the test and between the items of the questionnaire was significant at the 0.01 confidence level. Conclusion: The validity of the emotion control scale among pupils, students, teachers and nurses in Iran has an acceptable range, and the test items were correlated with each other, thereby making them appropriate for measuring emotion control.

  15. The Menopause Rating Scale (MRS scale: A methodological review

    Directory of Open Access Journals (Sweden)

    Strelow Frank

    2004-09-01

    Full Text Available Abstract Background This paper compiles data from different sources to get a first comprehensive picture of psychometric and other methodological characteristics of the Menopause Rating Scale (MRS scale. The scale was designed and standardized as a self-administered scale to (a to assess symptoms/complaints of aging women under different conditions, (b to evaluate the severity of symptoms over time, and (c to measure changes pre- and postmenopause replacement therapy. The scale became widespread used (available in 10 languages. Method A large multinational survey (9 countries in 4 continents from 2001/ 2002 is the basis for in depth analyses on reliability and validity of the MRS. Additional small convenience samples were used to get first impressions about test-retest reliability. The data were centrally analyzed. Data from a postmarketing HRT study were used to estimate discriminative validity. Results Reliability measures (consistency and test-retest stability were found to be good across countries, although the sample size for test-retest reliability was small. Validity: The internal structure of the MRS across countries was astonishingly similar to conclude that the scale really measures the same phenomenon in symptomatic women. The sub-scores and total score correlations were high (0.7–0.9 but lower among the sub-scales (0.5–0.7. This however suggests that the subscales are not fully independent. Norm values from different populations were presented showing that a direct comparison between Europe and North America is possible, but caution recommended with comparisons of data from Latin America and Indonesia. But this will not affect intra-individual comparisons within clinical trials. The comparison with the Kupperman Index showed sufficiently good correlations, illustrating an adept criterion-oriented validity. The same is true for the comparison with the generic quality-of-life scale SF-36 where also a sufficiently close association

  16. Mokken scaling of the Myocardial Infarction Dimensional Assessment Scale (MIDAS).

    Science.gov (United States)

    Thompson, David R; Watson, Roger

    2011-02-01

    The purpose of this study was to examine the hierarchical and cumulative nature of the 35 items of the Myocardial Infarction Dimensional Assessment Scale (MIDAS), a disease-specific health-related quality of life measure. Data from 668 participants who completed the MIDAS were analysed using the Mokken Scaling Procedure, which is a computer program that searches polychotomous data for hierarchical and cumulative scales on the basis of a range of diagnostic criteria. Fourteen MIDAS items were retained in a Mokken scale and these items included physical activity, insecurity, emotional reaction and dependency items but excluded items related to diet, medication or side-effects. Item difficulty, in item response theory terms, ran from physical activity items (low difficulty) to insecurity, suggesting that the most severe quality of life effect of myocardial infarction is loneliness and isolation. Items from the MIDAS form a strong and reliable Mokken scale, which provides new insight into the relationship between items in the MIDAS and the measurement of quality of life after myocardial infarction. © 2010 Blackwell Publishing Ltd.

  17. Modelling the atmospheric dispersion of foot-and-mouth disease virus for emergency preparedness

    DEFF Research Database (Denmark)

    Sørensen, J.H.; Jensen, C.O.; Mikkelsen, T.

    2001-01-01

    A model system for simulating airborne spread of foot-and-mouth disease (FMD) is described. The system includes a virus production model and the local- and mesoscale atmospheric dispersion model RIMPUFF linked to the LINCOM local-scale flow model. LINCOM is used to calculate the sub-grid scale flow...

  18. The Torino Impact Hazard Scale

    Science.gov (United States)

    Binzel, Richard P.

    2000-04-01

    Newly discovered asteroids and comets have inherent uncertainties in their orbit determinations owing to the natural limits of positional measurement precision and the finite lengths of orbital arcs over which determinations are made. For some objects making predictable future close approaches to the Earth, orbital uncertainties may be such that a collision with the Earth cannot be ruled out. Careful and responsible communication between astronomers and the public is required for reporting these predictions, and a 0-10 point hazard scale, reported inseparably with the date of close encounter, is recommended as a simple and efficient tool for this purpose. The goal of this scale, endorsed as the Torino Impact Hazard Scale, is to place into context the level of public concern that is warranted for any close encounter event within the next century. Concomitant reporting of the close encounter date further conveys the sense of urgency that is warranted. The Torino Scale value for a close approach event is based upon both collision probability and the estimated kinetic energy (collision consequence), where the scale value can change as probability and energy estimates are refined by further data. On the scale, Category 1 corresponds to collision probabilities that are comparable to the current annual chance for any given size impactor. Categories 8-10 correspond to certain (probability >99%) collisions having increasingly dire consequences. While close approaches falling within Category 0 may be no cause for noteworthy public concern, there remains a professional responsibility to further refine orbital parameters for such objects, and a figure of merit is suggested for evaluating such objects. Because impact predictions represent a multi-dimensional problem, there is no unique or perfect translation into a one-dimensional system such as the Torino Scale. These limitations are discussed.

  19. The Internet Gaming Disorder Scale.

    Science.gov (United States)

    Lemmens, Jeroen S; Valkenburg, Patti M; Gentile, Douglas A

    2015-06-01

    Recently, the American Psychiatric Association included Internet gaming disorder (IGD) in the appendix of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). The main aim of the current study was to test the reliability and validity of 4 survey instruments to measure IGD on the basis of the 9 criteria from the DSM-5: a long (27-item) and short (9-item) polytomous scale and a long (27-item) and short (9-item) dichotomous scale. The psychometric properties of these scales were tested among a representative sample of 2,444 Dutch adolescents and adults, ages 13-40 years. Confirmatory factor analyses demonstrated that the structural validity (i.e., the dimensional structure) of all scales was satisfactory. Both types of assessment (polytomous and dichotomous) were also reliable (i.e., internally consistent) and showed good criterion-related validity, as indicated by positive correlations with time spent playing games, loneliness, and aggression and negative correlations with self-esteem, prosocial behavior, and life satisfaction. The dichotomous 9-item IGD scale showed solid psychometric properties and was the most practical scale for diagnostic purposes. Latent class analysis of this dichotomous scale indicated that 3 groups could be discerned: normal gamers, risky gamers, and disordered gamers. On the basis of the number of people in this last group, the prevalence of IGD among 13- through 40-year-olds in the Netherlands is approximately 4%. If the DSM-5 threshold for diagnosis (experiencing 5 or more criteria) is applied, the prevalence of disordered gamers is more than 5%. (c) 2015 APA, all rights reserved).

  20. Fusion power economy of scale

    International Nuclear Information System (INIS)

    Dolan, T.J.

    1993-01-01

    In the next 50 yr, the world will need to develop hundreds of gigawatts of non-fossil-fuel energy sources for production of electricity and fuels. Nuclear fusion can probably provide much of the required energy economically, if large single-unit power plants are acceptable. Large power plants are more common than most people realize: There are already many multiple-unit power plants producing 2 to 5 GW(electric) at a single site. The cost of electricity (COE) from fusion energy is predicted to scale as COE ∼ COE_0 (P/P_0)^(-n), where P is the electrical power, the subscript zero denotes reference values, and the exponent n ∼ 0.36 to 0.7 in various designs. The validity ranges of these scalings are limited and need to be extended by future work. The fusion power economy of scale derives from four interrelated effects: improved operations and maintenance costs; scaling of equipment unit costs; a geometric effect that increases the mass power density; and reduction of the recirculating power fraction. Increased plasma size also relaxes the required confinement parameters: For the same neutron wall loading, larger tokamaks can use lower magnetic fields. Fossil-fuel power plants have a weaker economy of scale than fusion because the fuel costs constitute much of their COE. Solar and wind power plants consist of many small units, so they have little economy of scale. Fission power plants have a strong economy of scale but are unable to exploit it because the maximum unit size is limited by safety concerns. Large, steady-state fusion reactors generating 3 to 6 GW(electric) may be able to produce electricity for 4 to 5 cents/kW·h, which would be competitive with other future energy sources. 38 refs., 6 figs., 6 tabs
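    The economy-of-scale law quoted in this abstract is easy to apply numerically. In the sketch below, the reference COE, reference power, and exponent are illustrative placeholders, not values taken from the paper (which only reports n ∼ 0.36 to 0.7):

    ```python
    def coe(p_gw, coe0=5.0, p0_gw=3.0, n=0.5):
        """Cost of electricity (cents/kWh) from the economy-of-scale law.

        COE ~ COE_0 * (P / P_0)^(-n).  The defaults coe0, p0_gw, and n are
        assumed placeholder values for illustration only.
        """
        return coe0 * (p_gw / p0_gw) ** (-n)

    # Doubling the plant size with n = 0.5 lowers the COE by a factor
    # of 2^-0.5, i.e. roughly 29%.
    print(round(coe(6.0), 2))  # ~3.54 cents/kWh
    ```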

  1. Copper atomic-scale transistors.

    Science.gov (United States)

    Xie, Fangqing; Kavalenka, Maryna N; Röger, Moritz; Albrecht, Daniel; Hölscher, Hendrik; Leuthold, Jürgen; Schimmel, Thomas

    2017-01-01

    We investigated copper as a working material for metallic atomic-scale transistors and confirmed that copper atomic-scale transistors can be fabricated and operated electrochemically in a copper electrolyte (CuSO 4 + H 2 SO 4 ) in bi-distilled water under ambient conditions with three microelectrodes (source, drain and gate). The electrochemical switching-on potential of the atomic-scale transistor is below 350 mV, and the switching-off potential is between 0 and -170 mV. The switching-on current is above 1 μA, which is compatible with semiconductor transistor devices. Both sign and amplitude of the voltage applied across the source and drain electrodes ( U bias ) influence the switching rate of the transistor and the copper deposition on the electrodes, and correspondingly shift the electrochemical operation potential. The copper atomic-scale transistors can be switched using a function generator without a computer-controlled feedback switching mechanism. The copper atomic-scale transistors, with only one or two atoms at the narrowest constriction, were realized to switch between 0 and 1 G 0 ( G 0 = 2e 2 /h; with e being the electron charge, and h being Planck's constant) or 2 G 0 by the function generator. The switching rate can reach up to 10 Hz. The copper atomic-scale transistor demonstrates volatile/non-volatile dual functionalities. Such an optimal merging of the logic with memory may open a perspective for processor-in-memory and logic-in-memory architectures, using copper as an alternative working material besides silver for fully metallic atomic-scale transistors.
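    The conductance quantum G_0 = 2e²/h used above to quantify the switching states can be evaluated directly from the exact (post-2019) SI values of e and h; the snippet below is just that arithmetic:

    ```python
    # Conductance quantum G_0 = 2e^2/h, from the exact SI defining constants.
    E_CHARGE = 1.602176634e-19  # elementary charge e, in coulombs
    PLANCK = 6.62607015e-34     # Planck constant h, in joule-seconds

    G0 = 2 * E_CHARGE ** 2 / PLANCK  # in siemens
    print(G0)        # ~7.748e-05 S
    print(1.0 / G0)  # ~12906 ohm, the resistance of a single 1 G_0 channel
    ```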

  2. Collaborative Research: Lagrangian Modeling of Dispersion in the Planetary Boundary Layer

    National Research Council Canada - National Science Library

    Weil, Jeffrey

    2003-01-01

    ...), using Lagrangian "particle" models coupled with large-eddy simulation (LES) fields. A one-particle model for the mean concentration field was enhanced by a theoretically improved treatment of the LES subgrid-scale (SGS) velocities...

  3. Resiliency Scale (RS): Scale Development, Reliability and Validity Study

    OpenAIRE

    GÜRGAN, Uğur

    2003-01-01

    The purpose of this study was to develop a new Resiliency Scale (RS) for Turkish samples. Various items from major resiliency scales, most of them with some partial changes, were collected, and a pool of 228 items covering almost all possible resilience areas was obtained. This item pool was administered to a college sample of 419. The analysis resulted in a 50-item RS, which was administered to a new college sample of 112 participants. This second sample also received the Rosenba...

  4. Scale Construction: Motivation and Relationship Scale in Education

    Directory of Open Access Journals (Sweden)

    Yunus Emre Demir

    2016-01-01

    Full Text Available The aim of this study is to analyze the validity and reliability of the Turkish version of the Motivation and Relationship Scale (MRS; Raufelder, Drury, Jagenow, Hoferichter & Bukowski, 2013). Participants were 526 secondary school students. The results of a confirmatory factor analysis indicated that the 21 items loaded on three factors and that the three-dimensional model fit well (x2 = 640.04, sd = 185, RMSEA = .068, NNFI = .90, CFI = .91, IFI = .91, SRMR = .079, GFI = .90, AGFI = .87). Overall, the findings demonstrate that the adapted MRS is a valid instrument for measuring secondary school children’s motivation in Turkey.

  5. Scaling laws of Rydberg excitons

    Science.gov (United States)

    Heckötter, J.; Freitag, M.; Fröhlich, D.; Aßmann, M.; Bayer, M.; Semina, M. A.; Glazov, M. M.

    2017-09-01

    Rydberg atoms have attracted considerable interest due to their huge interaction among each other and with external fields. They demonstrate characteristic scaling laws in dependence on the principal quantum number n for features such as the magnetic field for level crossing or the electric field of dissociation. Recently, the observation of excitons in highly excited states has allowed studying Rydberg physics in cuprous oxide crystals. Fundamentally different insights may be expected for Rydberg excitons, as the crystal environment and associated symmetry reduction compared to vacuum give not only optical access to many more states within an exciton multiplet but also extend the Hamiltonian for describing the exciton beyond the hydrogen model. Here we study experimentally and theoretically the scaling of several parameters of Rydberg excitons with n, for some of which we indeed find laws different from those of atoms. For others we find identical scaling laws with n, even though their origin may be distinctly different from the atomic case. At zero field the energy splitting of a particular multiplet n scales as n^-3 due to crystal-specific terms in the Hamiltonian, e.g., from the valence band structure. From absorption spectra in magnetic field we find for the first crossing of levels with adjacent principal quantum numbers a B_r ∝ n^-4 dependence of the resonance field strength B_r, due to the dominant paramagnetic term, unlike for atoms, for which the diamagnetic contribution is decisive, resulting in a B_r ∝ n^-6 dependence. By contrast, the resonance electric field strength shows a scaling E_r ∝ n^-5, as for Rydberg atoms. Also similar to atoms, with the exception of hydrogen, we observe anticrossings between states belonging to multiplets with different principal quantum numbers at these resonances. The energy splittings at the avoided crossings scale roughly as n^-4, again due to crystal-specific features in the exciton Hamiltonian. The data also allow us to
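    The exponents reported in this abstract translate directly into ratios between states at different n. The helper below encodes only the power laws; the prefactors are arbitrary placeholders, and only the exponents come from the text:

    ```python
    def power_law(n, exponent, prefactor=1.0):
        """Evaluate a Rydberg scaling law: value = prefactor * n**exponent."""
        return prefactor * n ** exponent

    # With the excitonic B_r ~ n^-4 law, moving from n = 10 to n = 20
    # reduces the resonance field by a factor of 2^4 = 16; the atomic
    # n^-6 law would instead give a factor of 2^6 = 64.
    print(power_law(10, -4) / power_law(20, -4))  # ~16.0
    print(power_law(10, -6) / power_law(20, -6))  # ~64.0
    ```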

  6. Divertor scaling laws for tokamaks

    International Nuclear Information System (INIS)

    Catto, P.J.; Krasheninnikov, S.I.; Connor, J.W.

    1997-01-01

    The breakdown of two body scaling laws is illustrated by using the two dimensional plasma code UEDGE coupled to an advanced Navier-Stokes neutrals transport package to model attached and detached regimes in a simplified geometry. Two body similarity scalings are used as benchmarks for runs retaining non-two body modifications due to the effects of (i) multi-step processes altering ionization and radiation via the excited states of atomic hydrogen and (ii) three body recombination. Preliminary investigations indicate that two body scaling interpretations of experimental data fail due to (i) multi-step processes when a significant region of the plasma exceeds a plasma density of 10^19 m^-3, or (ii) three body recombination when there is a significant region in which the temperature is ≤1 eV while the plasma density is ≥10^20 m^-3. These studies demonstrate that two body scaling arguments are often inappropriate in the divertor and the first results for alternate scalings are presented. (orig.)

  7. Scales of Natural Flood Management

    Science.gov (United States)

    Nicholson, Alex; Quinn, Paul; Owen, Gareth; Hetherington, David; Piedra Lara, Miguel; O'Donnell, Greg

    2016-04-01

    The scientific field of Natural Flood Management (NFM) is receiving much attention and is now widely seen as a valid solution for sustainably managing flood risk whilst offering significant multiple benefits. However, few examples exist looking at NFM on a large scale (>10 km2). Well-implemented NFM has the effect of restoring more natural catchment hydrological and sedimentological processes, which in turn can have significant flood risk and WFD benefits for catchment waterbodies. These catchment-scale improvements in turn allow more 'natural' processes to be returned to rivers and streams, creating a more resilient system. Although certain NFM interventions may appear distant and disconnected from main-stem waterbodies, they will undoubtedly be contributing to WFD objectives at the catchment waterbody scale. This paper offers examples of NFM and explains how its benefits can be maximised through practical design across many scales (from individual features up to the whole catchment). We present new tools to assist in the selection of measures and their locations, to quantify the flooding benefit at the local catchment scale, and a Flood Impact Model that reflects the impacts of local changes further downstream. The tools will be discussed in the context of our most recent experiences on NFM projects, including river catchments in the north east of England and in Scotland. This work has encouraged a more integrated approach to flood management planning that can use both traditional and novel NFM strategies in an effective and convincing way.

  8. Visions of Atomic Scale Tomography

    International Nuclear Information System (INIS)

    Kelly, T.F.; Miller, Michael K.; Rajan, Krishna; Ringer, S.P.

    2012-01-01

    A microscope, by definition, provides structural and analytical information about objects that are too small to see with the unaided eye. From the very first microscope, efforts to improve its capabilities and push them to ever-finer length scales have been pursued. In this context, it would seem that the concept of an ultimate microscope would have received much attention by now; but has it really ever been defined? Human knowledge extends to structures on a scale much finer than atoms, so it might seem that a proton-scale microscope or a quark-scale microscope would be the ultimate. However, we argue that an atomic-scale microscope is the ultimate for the following reason: the smallest building block for either synthetic structures or natural structures is the atom. Indeed, humans and nature both engineer structures with atoms, not quarks. So far as we know, all building blocks (atoms) of a given type are identical; it is the assembly of the building blocks that makes a useful structure. Thus, would a microscope that determines the position and identity of every atom in a structure with high precision and for large volumes be the ultimate microscope? We argue, yes. In this article, we consider how it could be built, and we ponder the answer to the equally important follow-on questions: who would care if it is built, and what could be achieved with it?

  9. Upscaling the impact of convective overshooting (COV) through BRAMS: a continental and wet-season scale study of the water vapour (WV) budget in the tropical tropopause layer (TTL).

    Science.gov (United States)

    Behera, Abhinna; Rivière, Emmanuel; Marécal, Virginie; Rysman, Jean-François; Claud, Chantal; Burgalat, Jérémie

    2017-04-01

    coincide with the TRO-pico campaign measurements. As a first step, we have already shown that the model with only deep convection (DC) is capable of reproducing the key features of the TTL. Hence, in a second step, keeping all model settings the same, a sub-grid scale parameterization is being developed to reproduce COV in the model. We will then be able to compare these two model atmospheres, which will quantify the impact of COV on the WV budget in the TTL at a continental scale. This on-going work reports on the further progress made in introducing the COV parameterization in BRAMS by incorporating information from satellite-borne and balloon-borne measurements. The preliminary results of the simulation with COV nudging achieved by the date of the EGU assembly will be presented.

  10. Pair plasma relaxation time scales.

    Science.gov (United States)

    Aksenov, A G; Ruffini, R; Vereshchagin, G V

    2010-04-01

    By numerically solving the relativistic Boltzmann equations, we compute the time scale for relaxation to thermal equilibrium for an optically thick electron-positron plasma with baryon loading. We focus on the time scales of electromagnetic interactions. The collisional integrals are obtained directly from the corresponding QED matrix elements. Thermalization time scales are computed for a wide range of values of both the total-energy density (over 10 orders of magnitude) and of the baryonic loading parameter (over 6 orders of magnitude). This also allows us to study such interesting limiting cases as the almost purely electron-positron plasma or electron-proton plasma as well as intermediate cases. These results appear to be important both for laboratory experiments aimed at generating optically thick pair plasmas as well as for astrophysical models in which electron-positron pair plasmas play a relevant role.

  11. Scaling and prescaling in quarkonium

    International Nuclear Information System (INIS)

    Warner, R.C.; Joshi, G.C.

    1979-01-01

    Recent experiments in the upsilon region indicate the quark-mass dependence of quark-antiquark bound-state properties. Classes of quark-antiquark potentials exhibiting scaling of energy-level spacing with quark mass are presented, and the importance of the mass dependence of bound-state properties in investigating the nature of the potential is emphasised. The scaling potentials considered are V = V(m^(1/2) r), which exhibits constant level spacing, and V = b m^α r^β and its generalizations, for which the scaling of energy levels is controlled by the exponents α and β. The class of potentials yielding constant level spacing is shown to be consistent with the interpretation of the state recently observed at 9.46 GeV in e+e− annihilations as a bound state of a new quark and antiquark with e_p = 1/3

  12. The Satisfaction With Life Scale.

    Science.gov (United States)

    Diener, E; Emmons, R A; Larsen, R J; Griffin, S

    1985-02-01

    This article reports the development and validation of a scale to measure global life satisfaction, the Satisfaction With Life Scale (SWLS). Among the various components of subjective well-being, the SWLS is narrowly focused to assess global life satisfaction and does not tap related constructs such as positive affect or loneliness. The SWLS is shown to have favorable psychometric properties, including high internal consistency and high temporal reliability. Scores on the SWLS correlate moderately to highly with other measures of subjective well-being, and correlate predictably with specific personality characteristics. It is noted that the SWLS is suited for use with different age groups, and other potential uses of the scale are discussed.

  13. Bench-scale/field-scale interpretations: Session overview

    International Nuclear Information System (INIS)

    Cunningham, A.B.; Peyton, B.M.

    1995-04-01

    In situ bioremediation involves complex interactions between biological, chemical, and physical processes and requires integration of phenomena operating at scales ranging from that of a microbial cell (10^-6 m) to that of a remediation site (10 to 1000 m). Laboratory investigations of biodegradation are usually performed at a relatively small scale, governed by convenience, cost, and expedience. However, extending the results from a laboratory-scale experimental system to the design and operation of a field-scale system introduces (1) additional mass transport mechanisms and limitations; (2) the presence of multiple phases, contaminants, and competing microorganisms; (3) spatial geologic heterogeneities; and (4) subsurface environmental factors that may inhibit bacterial growth, such as temperature, pH, nutrient, or redox conditions. Field bioremediation rates may be limited by the availability of one of the necessary constituents for biotransformation: substrate, contaminant, electron acceptor, nutrients, or microorganisms capable of degrading the target compound. The factor that limits the rate of bioremediation may not be the same in the laboratory as it is in the field, thereby leading to the development of unsuccessful remediation strategies

  14. Status of xi-scaling

    International Nuclear Information System (INIS)

    Politzer, H.D.

    1977-01-01

    The logic of the xi-scaling analysis of inclusive lepton-hadron scattering is reviewed with the emphasis on clarifying what is assumed and what is predicted. The physics content of several recent papers, which purport to criticize this analysis, in fact confirm its validity and utility. For clarity, concentration is placed on the orthodox operator product analysis of electroproduction, local duality and precocious scaling. Other physics discussed includes the successes of QCD in the rate of charm production in muon inelastic scattering and in the energy--momentum sum rule. Gluons

  15. Shadowing in the scaling region

    International Nuclear Information System (INIS)

    Shaw, G.

    1989-01-01

    The approximate scaling behaviour of the shadowing effect at small χ, recently observed in deep inelastic muon scattering experiments on nuclei, is shown to arise naturally at very large ν in those hadron dominance models, formulated many years ago, which are dual to the parton model. At smaller ν small scale-breaking effects are expected, which should die away as ν increases. The predicted shadowing decreases rapidly with increasing χ, at a rate which is only weakly dependent on the atomic number A for reasonably large A. (orig.)

  16. Learning From the Furniture Scale

    DEFF Research Database (Denmark)

    Hvejsel, Marie Frier; Kirkegaard, Poul Henning

    2018-01-01

    Given its proximity to the human body, the furniture scale holds a particular potential in grasping the fundamental aesthetic potential of architecture to address its inhabitants by means of spatial ‘gestures’. Likewise, it holds a technical germ in realizing this potential given its immediate tangibility, allowing experimentation with the ‘principles’ of architectural construction. In the present paper we explore this dual tectonic potential of the furniture scale as an epistemological foundation in architectural education. In this matter, we discuss the conduct of a master-level course where we

  17. Pelamis WEC - intermediate scale demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Yemm, R.

    2003-07-01

    This report describes the successful building and commissioning of an intermediate 1/7th scale model of the Pelamis Wave Energy Converter (WEC) and its testing in the wave climate of the Firth of Forth. Details are given of the design of the semi-submerged articulated structure of cylindrical elements linked by hinged joints. The specific programme objectives and conclusions, development issues addressed, and key remaining risks are discussed along with development milestones to be passed before the Pelamis WEC is ready for full-scale prototype testing.

  18. Organizational Scale and School Success.

    Science.gov (United States)

    Guthrie, James W.

    1979-01-01

    The relationship between the organizational scale of schooling (school and school district size) and school success is examined. The history of the movement toward larger school units, the evidence of the effects of that movement, and possible research strategies for further investigation of the issue are discussed. (JKS)

  19. Scaling up of renewable chemicals.

    Science.gov (United States)

    Sanford, Karl; Chotani, Gopal; Danielson, Nathan; Zahn, James A

    2016-04-01

    The transition of promising technologies for production of renewable chemicals from a laboratory scale to commercial scale is often difficult and expensive. As a result the timeframe estimated for commercialization is typically underestimated resulting in much slower penetration of these promising new methods and products into the chemical industries. The theme of 'sugar is the next oil' connects biological, chemical, and thermochemical conversions of renewable feedstocks to products that are drop-in replacements for petroleum derived chemicals or are new to market chemicals/materials. The latter typically offer a functionality advantage and can command higher prices that result in less severe scale-up challenges. However, for drop-in replacements, price is of paramount importance and competitive capital and operating expenditures are a prerequisite for success. Hence, scale-up of relevant technologies must be interfaced with effective and efficient management of both cell and steel factories. Details involved in all aspects of manufacturing, such as utilities, sterility, product recovery and purification, regulatory requirements, and emissions must be managed successfully. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Metrology at the nano scale

    International Nuclear Information System (INIS)

    Sheridan, B.; Cumpson, P.; Bailey, M.

    2006-01-01

    Progress in nano technology relies on ever more accurate measurements of quantities such as distance, force and current. Industry has long depended on accurate measurement. In the 19th century, for example, the performance of steam engines was seriously limited by inaccurately made components, a situation that was transformed by Henry Maudsley's screw micrometer calliper. And early in the 20th century, the development of telegraphy relied on improved standards of electrical resistance. Before this, each country had its own standards and cross-border communication was difficult. The same is true today of nano technology if it is to be fully exploited by industry. Principles of measurement that work well at the macroscopic level often become completely unworkable at the nano metre scale - about 100 nm and below. Imaging, for example, is not possible on this scale using optical microscopes, and it is virtually impossible to weigh a nano metre-scale object with any accuracy. In addition to needing more accurate measurements, nano technology also often requires a greater variety of measurements than conventional technology. For example, standard techniques used to make microchips generally need accurate length measurements, but the manufacture of electronics at the molecular scale requires magnetic, electrical, mechanical and chemical measurements as well. (U.K.)

  1. Inviscid criterion for decomposing scales

    Science.gov (United States)

    Zhao, Dongxiao; Aluie, Hussein

    2018-05-01

    The proper scale decomposition in flows with significant density variations is not as straightforward as in incompressible flows, with many possible ways to define a "length scale." A choice can be made according to the so-called inviscid criterion [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009]. It is a kinematic requirement that a scale decomposition yield negligible viscous effects at large enough length scales. It was recently proved [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009] that a Favre decomposition satisfies the inviscid criterion, which is necessary to unravel inertial-range dynamics and the cascade. Here we present numerical demonstrations of those results. We also show that two other commonly used decompositions can violate the inviscid criterion and, therefore, are not suitable for studying inertial-range dynamics in variable-density and compressible turbulence. Our results have a practical modeling implication in showing that viscous terms in Large Eddy Simulations do not need to be modeled and can be neglected.
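    A Favre (density-weighted) filter of the kind this abstract refers to can be sketched in a few lines. The box filter, grid, and test fields below are illustrative assumptions, not the authors' numerical setup:

    ```python
    import numpy as np

    def box_filter(f, w):
        """Moving-average ("box") filter of width w with periodic boundaries."""
        pad = (w // 2, w - 1 - w // 2)
        kernel = np.ones(w) / w
        return np.convolve(np.pad(f, pad, mode='wrap'), kernel, mode='valid')

    def favre_filter(rho, u, w):
        """Favre-filtered velocity: u_tilde = bar(rho*u) / bar(rho)."""
        return box_filter(rho * u, w) / box_filter(rho, w)

    x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
    rho = 1.0 + 0.5 * np.sin(x)   # variable density
    u = np.cos(3.0 * x)           # velocity field
    u_favre = favre_filter(rho, u, w=9)
    u_reynolds = box_filter(u, w=9)
    # The two filtered velocities differ wherever density varies;
    # they coincide in the constant-density (incompressible) limit.
    print(float(np.max(np.abs(u_favre - u_reynolds))))
    ```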

  2. Geometric scaling in exclusive processes

    International Nuclear Information System (INIS)

    Munier, S.; Wallon, S.

    2003-01-01

    We show that according to the present understanding of the energy evolution of the observables measured in deep-inelastic scattering, the photon-proton scattering amplitude has to exhibit geometric scaling at each impact parameter. We suggest a way to test this experimentally at HERA. A qualitative analysis based on published data is presented and discussed. (orig.)

  3. An Assertiveness Scale for Adolescents.

    Science.gov (United States)

    Lee, Dong Yul; And Others

    1985-01-01

    Developed a 33-item, situation-specific instrument that measures assertiveness of adolescents. Based on data from 682 elementary and secondary school students, adequate reliability and validity of the Assertiveness Scale for Adolescents (ASA) were obtained when tested against several variables about which predictions could be made. (BH)

  4. A Feminist Family Therapy Scale.

    Science.gov (United States)

    Black, Leora; Piercy, Fred P.

    1991-01-01

    Reports on development and psychometric properties of Feminist Family Therapy Scale (FFTS), a 17-item instrument intended to reflect degree to which family therapists conceptualize process of family therapy from feminist-informed perspective. Found that the instrument discriminated between self-identified feminists and nonfeminists, women and men,…

  5. National Image Interpretability Rating Scales

    OpenAIRE

    2003-01-01

    Interactive Media Element This presentation media demonstrates the NIIRS scale and resolution numbers and presents a problem statement to help the student gain an intuitive understanding of the numbers. Last modified: 5/18/2009 ME3XXX Military Applications of Unmanned Air Vehicles/Remotely Operated Aircraft (UAV/ROA)

  6. Animal coloration: sexy spider scales.

    Science.gov (United States)

    Taylor, Lisa A; McGraw, Kevin J

    2007-08-07

    Many male jumping spiders display vibrant colors that are used in visual communication. A recent microscopic study on a jumping spider from Singapore shows that three-layered 'scale sandwiches' of chitin and air are responsible for producing their brilliant iridescent body coloration.

  7. Salzburger State Reactance Scale (SSR Scale): Validation of a Scale Measuring State Reactance.

    Science.gov (United States)

    Sittenthaler, Sandra; Traut-Mattausch, Eva; Steindl, Christina; Jonas, Eva

    This paper describes the construction and empirical evaluation of an instrument for measuring state reactance, the Salzburger State Reactance (SSR) Scale. The results of a confirmatory factor analysis supported a hypothesized three-factor structure: experience of reactance, aggressive behavioral intentions, and negative attitudes. Correlations with divergent and convergent measures support the validity of this structure. The SSR Subscales were strongly related to the other state reactance measures. Moreover, the SSR Subscales showed modest positive correlations with trait measures of reactance. The SSR Subscales correlated only slightly or not at all with neighboring constructs (e.g., autonomy, experience of control). The only exception was fairness scales, which showed moderate correlations with the SSR Subscales. Furthermore, a retest analysis confirmed the temporal stability of the scale. Suggestions for further validation of this questionnaire are discussed.

  8. Scale modelling in LMFBR safety

    International Nuclear Information System (INIS)

    Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.

    1979-01-01

    This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratio of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of a material for the model is one with the same density and stress-strain relationship as the prototype material at the operating temperature. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used to assure that heat transfer between high temperature loading sources and the coolant simulant and non-equilibrium effects in two-phase sources are accounted for. For the geometry and flow conditions of interest, the
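The replica-modelling consequences sketched in this abstract (identical materials, all lengths reduced by a factor lam) can be collected in a small factor table. This follows standard replica-scaling arguments and is a sketch, not a set of results quoted from the paper:

```python
def replica_scale_factors(lam):
    """Scale factors (model/prototype) for replica scaling with identical materials.

    With geometry scaled by lam and the same density and stress-strain behavior,
    pressures, stresses, strains and velocities are preserved, while lengths,
    displacements and times scale with lam.
    """
    return {
        "length": lam,
        "pressure": 1.0,
        "stress": 1.0,
        "strain": 1.0,
        "velocity": 1.0,
        "time": lam,
        "energy": lam**3,   # energy scales with volume at equal energy density
    }

factors = replica_scale_factors(1 / 30)  # e.g. a hypothetical 1/30-scale vessel model
```

The table makes explicit why replica models are attractive: pressures measured on the model apply directly to the prototype, and only times and displacements need rescaling.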

  9. Scale dependence of deuteron electrodisintegration

    Science.gov (United States)

    More, S. N.; Bogner, S. K.; Furnstahl, R. J.

    2017-11-01

    Background: Isolating nuclear structure properties from knock-out reactions in a process-independent manner requires a controlled factorization, which is always to some degree scale and scheme dependent. Understanding this dependence is important for robust extractions from experiment, to correctly use the structure information in other processes, and to understand the impact of approximations for both. Purpose: We seek insight into scale dependence by exploring a model calculation of deuteron electrodisintegration, which provides a simple and clean theoretical laboratory. Methods: By considering various kinematic regions of the longitudinal structure function, we can examine how the components—the initial deuteron wave function, the current operator, and the final-state interactions (FSIs)—combine at different scales. We use the similarity renormalization group to evolve each component. Results: When evolved to different resolutions, the ingredients are all modified, but how they combine depends strongly on the kinematic region. In some regions, for example, the FSIs are largely unaffected by evolution, while elsewhere FSIs are greatly reduced. For certain kinematics, the impulse approximation at a high renormalization group resolution gives an intuitive picture in terms of a one-body current breaking up a short-range correlated neutron-proton pair, although FSIs distort this simple picture. With evolution to low resolution, however, the cross section is unchanged but a very different and arguably simpler intuitive picture emerges, with the evolved current efficiently represented at low momentum through derivative expansions or low-rank singular value decompositions. Conclusions: The underlying physics of deuteron electrodisintegration is scale dependent and not just kinematics dependent. As a result, intuition about physics such as the role of short-range correlations or D -state mixing in particular kinematic regimes can be strongly scale dependent

  10. Scaling Irrational Beliefs in the General Attitude and Belief Scale

    Directory of Open Access Journals (Sweden)

    Lindsay R. Owings

    2013-04-01

    Accurate measurement of key constructs is essential to the continued development of Rational-Emotive Behavior Therapy (REBT). The General Attitude and Belief Scale (GABS), a contemporary inventory of rational and irrational beliefs based on current REBT theory, is one of the most valid and widely used instruments available, and recent research has continued to improve its psychometric standing. In this study of 544 students, item response theory (IRT) methods were used (a) to identify the most informative item in each irrational subscale of the GABS, (b) to determine the level of irrationality represented by each of those items, and (c) to suggest a condensed form of the GABS for further study with clinical populations. Administering only the most psychometrically informative items to clients could result in economies of time and effort. Further research based on the scaling of items could clarify the specific patterns of irrational beliefs associated with particular clinical syndromes.
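The notion of a "most informative item" can be made concrete with the Fisher information of a two-parameter logistic (2PL) IRT item, I(theta) = a^2 * P(theta) * (1 - P(theta)). The item parameters below are hypothetical, not the GABS estimates:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: probability of endorsing the item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# hypothetical (discrimination a, difficulty b) pairs for three items
items = [(0.8, -1.0), (1.6, 0.5), (1.2, 2.0)]

# the most informative item at a given trait level maximizes I(theta)
theta = 0.5
best = max(items, key=lambda ab: item_information(theta, *ab))
```

Information peaks at theta = b and grows with the discrimination a, which is why a short form built from the most informative items can retain most of the full scale's measurement precision.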

  11. Universal Scaling Relations in Scale-Free Structure Formation

    Science.gov (United States)

    Guszejnov, Dávid; Hopkins, Philip F.; Grudić, Michael Y.

    2018-04-01

    A large number of astronomical phenomena exhibit remarkably similar scaling relations. The most well-known of these is the mass distribution dN/dM ∝ M^-2 which (to first order) describes stars, protostellar cores, clumps, giant molecular clouds, star clusters and even dark matter halos. In this paper we propose that this ubiquity is not a coincidence and that it is the generic result of scale-free structure formation where the different scales are uncorrelated. We show that all such systems produce a mass function proportional to M^-2 and a column density distribution with a power law tail of dA/d lnΣ ∝ Σ^-1. In the case where structure formation is controlled by gravity the two-point correlation becomes ξ_2D ∝ R^-1. Furthermore, structures formed by such processes (e.g. young star clusters, DM halos) tend to a ρ ∝ R^-3 density profile. We compare these predictions with observations, analytical fragmentation cascade models, semi-analytical models of gravito-turbulent fragmentation and detailed "full physics" hydrodynamical simulations. We find that these power laws are good first-order descriptions in all cases.
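A quick numerical sanity check of what a dN/dM ∝ M^-2 mass function looks like: draw masses by inverse-CDF sampling and recover the slope from a log-log histogram fit. The sample size and mass range are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_m2(n, m_min=1.0, m_max=1e4):
    """Inverse-CDF sampling from dN/dM proportional to M^-2 on [m_min, m_max]."""
    u = rng.random(n)
    return 1.0 / (1.0 / m_min - u * (1.0 / m_min - 1.0 / m_max))

masses = sample_m2(200_000)

# estimate the mass-function slope in log-log space
bins = np.logspace(0, 4, 30)
counts, edges = np.histogram(masses, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
dndm = counts / np.diff(edges)              # convert counts to dN/dM
good = counts > 10                          # skip noisy, sparsely populated bins
slope = np.polyfit(np.log(centers[good]), np.log(dndm[good]), 1)[0]
```

The fitted slope comes out close to -2, the exponent the abstract attributes to stars, cores, clouds, clusters and halos alike.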

  12. Doppler method leak detection for LMFBR steam generators. Pt. 2. Detection characteristics of bubble in-water using large scale SG model

    International Nuclear Information System (INIS)

    Kumagai, Hiromichi

    2000-01-01

    To prevent the expansion of tube damage and to maintain structural integrity in the steam generators (SGs) of a fast breeder reactor (FBR), it is necessary to detect precisely and immediately the leakage of water from heat transfer tubes. Therefore, an active acoustic method was developed. Previous studies have revealed that, in practical steam generators, the active acoustic method can detect bubbles of 10 l/s within 10 seconds. However to prevent the expansion of damage to neighboring tubes, it is necessary to detect smaller leakages of water from the heat transfer tubes. The Doppler method is designed to detect small leakages and to find the source of a leak before damage spreads to neighboring tubes. The detection sensitivity of the Doppler method and the influence of background noise were investigated experimentally. In-water experiments were performed using an SG full-sector model that simulates actual SGs. The results show that the Doppler method can detect bubbles of 0.1 l/s (equivalent to a water leak rate of about 0.1 g/s) within a few seconds and that the background noise has little effect on water leak detection performance. The Doppler method thus has great potential for the detection of water leakage in SGs. (author)

  13. Frequency scaling for angle gathers

    KAUST Repository

    Zuberi, M. A H; Alkhalifah, Tariq Ali

    2014-01-01

    Angle gathers provide an extra dimension for analyzing the velocity after migration. Space-shift and time-shift imaging conditions are two methods used to obtain angle gathers, but both are reasonably expensive. By scaling the time-lag axis of the time-shifted images, the computational cost of the time-shift imaging condition can be considerably reduced. In imaging, and more so in full waveform inversion, frequency-domain Helmholtz solvers are used more often than conventional time-domain extrapolators to solve for the wavefields. In such cases, we do not need to extend the image; instead, we scale the frequency axis of the frequency-domain image to obtain the angle gathers more efficiently. Applications on synthetic data demonstrate these features.

  14. Scaling in public transport networks

    Directory of Open Access Journals (Sweden)

    C. von Ferber

    2005-01-01

    We analyse the statistical properties of public transport networks. These networks are defined by a set of public transport routes (bus lines) and the stations serviced by these. For larger networks these appear to possess a scale-free structure, as demonstrated e.g. by the Zipf-law distribution of the number of routes servicing a given station, or of the number of stations which can be visited from a chosen one without changing the means of transport. Moreover, a rather particular feature of public transport networks is that many routes service common subsets of stations. We discuss the possibility of new scaling laws that govern intrinsic properties of such subsets.
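The quantity behind the Zipf-law observation, the number of routes servicing a given station, is straightforward to compute from route lists. A toy network is used here for illustration:

```python
from collections import Counter

# toy route set: each route is the ordered list of stations it services
routes = [
    ["A", "B", "C", "D"],
    ["B", "C", "E"],
    ["C", "F"],
    ["A", "C", "G"],
]

# number of distinct routes servicing each station
routes_per_station = Counter(s for route in routes for s in set(route))

# rank-ordered counts, as plotted in a Zipf analysis
ranked = sorted(routes_per_station.values(), reverse=True)
```

For a real network one would fit the tail of `ranked` against rank on log-log axes; here the hub station "C" already dominates, servicing all four routes.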

  15. Holographic models with anisotropic scaling

    Science.gov (United States)

    Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.

    2013-12-01

    We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.

  16. Scaling laws for specialized hohlraums

    International Nuclear Information System (INIS)

    Rosen, M.D.

    1993-01-01

    The author presents scaling laws for the behavior of hohlraums that are somewhat more complex than a simple sphere or cylinder. In particular the author considers hohlraums that are in what has become known as a "primary"-"secondary" configuration, namely geometries in which the laser is absorbed in a primary region of a hohlraum, and only radiation energy is transported to a secondary part of the hohlraum that is shielded from seeing the laser light directly. Such hohlraums have been in use of late for doing LTE opacity experiments on a sample in the secondary and in recently proposed "shimmed" hohlraums that use gold disks on axis to block a capsule's view of the cold laser entrance hole. The temperature/drive of the secondary, derived herein, scales somewhat differently than the drive in simple hohlraums.

  17. Applied multidimensional scaling and unfolding

    CERN Document Server

    Borg, Ingwer; Mair, Patrick

    2018-01-01

    This book introduces multidimensional scaling (MDS) and unfolding as data analysis techniques for applied researchers. MDS is used for the analysis of proximity data on a set of objects, representing the data as distances between points in a geometric space (usually of two dimensions). Unfolding is a related method that maps preference data (typically evaluative ratings of different persons on a set of objects) as distances between two sets of points (representing the persons and the objects, resp.). This second edition has been completely revised to reflect new developments and the coverage of unfolding has also been substantially expanded. Intended for applied researchers whose main interests are in using these methods as tools for building substantive theories, it discusses numerous applications (classical and recent), highlights practical issues (such as evaluating model fit), presents ways to enforce theoretical expectations for the scaling solutions, and addresses the typical mistakes that MDS/unfoldin...

  18. Heavy quark hadron mass scale

    International Nuclear Information System (INIS)

    Anderson, J.T.

    1994-01-01

    Without the spin interactions the hadron masses within a multiplet are degenerate. The light quark hadron degenerate multiplet mass spectrum is extended from the 3-quark ground state multiplets at J^P = 0^-, 1/2^+, 1^- to include the excited states which follow the spinorial decomposition of SU(2)xSU(2). The mass scales for the 4, 5, 6, .. quark hadrons are obtained from the degenerate multiplet mass m_0/M = n^2/α with n = 4, 5, 6, .. The 4, 5, 6, .. quark hadron degenerate multiplet masses follow by splitting of the heavy quark mass scales according to the spinorial decomposition of SU(2)xSU(2). (orig.)

  19. Impedance Scaling and Impedance Control

    International Nuclear Information System (INIS)

    Chou, W.; Griffin, J.

    1997-06-01

    When a machine becomes really large, such as the Very Large Hadron Collider (VLHC), of which the circumference could reach the order of megameters, beam instability could be an essential bottleneck. This paper studies the scaling of the instability threshold vs. machine size when the coupling impedance scales in a ''normal'' way. It is shown that the beam would be intrinsically unstable for the VLHC. As a possible solution to this problem, it is proposed to introduce local impedance inserts for controlling the machine impedance. In the longitudinal plane, this could be done by using a heavily detuned rf cavity (e.g., a biconical structure), which could provide large imaginary impedance with the right sign (i.e., inductive or capacitive) while keeping the real part small. In the transverse direction, a carefully designed variation of the cross section of a beam pipe could generate negative impedance that would partially compensate the transverse impedance in one plane

  20. THE MODERN RACISM SCALE: PSYCHOMETRIC

    Directory of Open Access Journals (Sweden)

    MANUEL CÁRDENAS

    2007-08-01

    An adaptation of McConahay, Hardee and Batts' (1981) Modern Racism Scale is presented for the Chilean population, and its psychometric properties (reliability and validity) are studied, along with its relationship with other relevant psychosocial variables in studies on prejudice and ethnic discrimination (authoritarianism, religiousness, political position, etc.), as well as with other forms of prejudice (gender stereotypes and homophobia). The sample consisted of 120 participants, students of psychology, resident in the city of Antofagasta (a geographical zone with a high number of Latin-American immigrants). Our findings show that the scale seems to be a reliable instrument to measure prejudice towards Bolivian immigrants in our social environment. Likewise, important differences are detected among the subjects with high and low scores on the psychosocial variables used.

  1. Global scale groundwater flow model

    Science.gov (United States)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global scale hydrological models does not include a groundwater flow component that is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minutes resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths with available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global scale groundwater model using existing information, and estimate water table depths within acceptable accuracy in many parts of the world.
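A minimal stand-in for the kind of balance a MODFLOW-style model solves: steady-state 1D groundwater flow with uniform transmissivity and recharge, discretized with central differences and fixed-head boundaries. All parameter values are illustrative, not those of the global model:

```python
import numpy as np

def steady_head_1d(n, dx, T, recharge, h_left, h_right):
    """Solve T * d2h/dx2 + R = 0 on n interior nodes with fixed-head boundaries."""
    A = np.zeros((n, n))
    b = np.full(n, -recharge * dx**2 / T)   # right-hand side from recharge
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    # move known boundary heads to the right-hand side
    b[0] -= h_left
    b[-1] -= h_right
    return np.linalg.solve(A, b)

# toy aquifer: 1 km domain, transmissivity 100 m2/d, recharge 1 mm/d
h = steady_head_1d(n=99, dx=10.0, T=100.0, recharge=0.001, h_left=50.0, h_right=50.0)
```

With equal boundary heads the discrete solution matches the parabolic analytic profile h(x) = h_b + R x (L - x) / (2T), peaking 1.25 m above the boundary heads at the domain center.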

  2. Cosmological origin of mass scales

    International Nuclear Information System (INIS)

    Terazawa, H.

    1981-01-01

    We discuss the possibility that spontaneous breakdown of conformal invariance due to the very existence of our universe originates all the mass (or length) scales ranging from the Planck mass (approx. 10^19 GeV) to the Hubble constant (approx. 10^-42 GeV) and suggest that the photon may have a curvature-dependent mass which is as small as 10^-42 GeV. We also present a possible clue to Dirac's large number hypothesis. (orig.)

  3. Cosmological origin of mass scales

    International Nuclear Information System (INIS)

    Terazawa, Hidezumi.

    1981-02-01

    We discuss the possibility that spontaneous breakdown of conformal invariance due to the very existence of our universe originates all the mass (or length) scales ranging from the Planck mass (~10^19 GeV) to the Hubble constant (~10^-42 GeV) and suggest that the photon may have a curvature-dependent mass which is as small as 10^-42 GeV. We also present a possible clue to Dirac's large number hypothesis. (author)

  4. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  5. Source Code Analysis Laboratory (SCALe)

    Science.gov (United States)

    2012-04-01

    products (including services) and processes. The agency has also published ISO/IEC 17025:2005 General Requirements for the Competence of Testing... SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 also operate in accordance with ISO 9001. • NIST National... assessed by the accreditation body against all of the requirements of ISO/IEC 17025:2005 General requirements for the competence of testing and

  6. Scaling Exponents in Financial Markets

    Science.gov (United States)

    Kim, Kyungsik; Kim, Cheol-Hyun; Kim, Soo Yong

    2007-03-01

    We study the dynamical behavior of four exchange rates in foreign exchange markets. A detrended fluctuation analysis (DFA) is applied to detect the long-range correlation embedded in the non-stationary time series. For our case, it is found that there exists a persistent long-range correlation in volatilities, which implies a deviation from the efficient market hypothesis. In particular, a crossover is shown to exist in the scaling behaviors of the volatilities.
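A minimal sketch of DFA as used above: integrate the demeaned series, detrend it linearly in non-overlapping windows, and track the RMS fluctuation F(s) versus window size s; the scaling exponent alpha comes from a log-log fit (alpha ≈ 0.5 for uncorrelated noise, alpha > 0.5 for persistent correlations). The window sizes and test signal are arbitrary:

```python
import numpy as np

def dfa(x, scales):
    """Minimal first-order DFA: returns the fluctuation F(s) for each scale s."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        sq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)   # linear detrend per window
            sq.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

# white noise should give a scaling exponent alpha close to 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

A crossover of the kind reported in the abstract would show up as two distinct slopes when log F(s) is plotted against log s over a wide range of scales.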

  7. Recent developments in complex scaling

    International Nuclear Information System (INIS)

    Rescigno, T.N.

    1980-01-01

    Some recent developments in the use of complex basis function techniques to study resonance as well as certain types of non-resonant, scattering phenomena are discussed. Complex scaling techniques and other closely related methods have continued to attract the attention of computational physicists and chemists and have now reached a point of development where meaningful calculations on many-electron atoms and molecules are beginning to appear feasible

  8. Turbulence Intensity Scaling: A Fugue

    OpenAIRE

    Basse, Nils T.

    2018-01-01

    We study streamwise turbulence intensity definitions using smooth- and rough-wall pipe flow measurements made in the Princeton Superpipe. Scaling of turbulence intensity with the bulk (and friction) Reynolds number is provided for the definitions. The turbulence intensity is proportional to the square root of the friction factor with the same proportionality constant for smooth- and rough-wall pipe flow. Turbulence intensity definitions providing the best description of the measurements are i...

  9. Transition physics and scaling overview

    International Nuclear Information System (INIS)

    Carlstrom, T.N.

    1996-01-01

    This paper presents an overview of recent experimental progress towards understanding H-mode transition physics and scaling. Terminology and techniques for studying H-mode are reviewed and discussed. The model of shear E x B flow stabilization of edge fluctuations at the L-H transition is gaining wide acceptance and is further supported by observations of edge rotation on a number of new devices. Observations of poloidal asymmetries of edge fluctuations and dephasing of density and potential fluctuations after the transition pose interesting challenges for understanding H-mode physics. Dedicated scans to determine the scaling of the power threshold have now been performed on many machines. A clear B_t dependence is universally observed but dependence on the line averaged density is complicated. Other dependencies are also reported. Studies of the effect of neutrals and error fields on the power threshold are under investigation. The ITER threshold database has matured and offers guidance to the power threshold scaling issues relevant to next-step devices. (author)

  10. A laboratory scale fundamental time?

    International Nuclear Information System (INIS)

    Mendes, R.V.

    2012-01-01

    The existence of a fundamental time (or fundamental length) has been conjectured in many contexts. However, the ''stability of physical theories principle'' seems to be the one that provides, through the tools of algebraic deformation theory, an unambiguous derivation of the stable structures that Nature might have chosen for its algebraic framework. It is well-known that c and ℎ are the deformation parameters that stabilize the Galilean and the Poisson algebra. When the stability principle is applied to the Poincare-Heisenberg algebra, two deformation parameters emerge which define two time (or length) scales. In addition there are, for each of them, a plus or minus sign possibility in the relevant commutators. One of the deformation length scales, related to non-commutativity of momenta, is probably related to the Planck length scale but the other might be much larger and already detectable in laboratory experiments. In this paper, this is used as a working hypothesis to look for physical effects that might settle this question. Phase-space modifications, resonances, interference, electron spin resonance and non-commutative QED are considered. (orig.)

  11. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of a scale of 1000s of processors, to be used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community

  12. Dimensional scaling for quasistationary states

    International Nuclear Information System (INIS)

    Kais, S.; Herschbach, D.R.

    1993-01-01

    Complex energy eigenvalues which specify the location and width of quasibound or resonant states are computed to good approximation by a simple dimensional scaling method. As applied to bound states, the method involves minimizing an effective potential function in appropriately scaled coordinates to obtain exact energies in the D→∞ limit, then computing approximate results for D=3 by a perturbation expansion in 1/D about this limit. For resonant states, the same procedure is used, with the radial coordinate now allowed to be complex. Five examples are treated: the repulsive exponential potential (e^-r); a squelched harmonic oscillator (r^2 e^-r); the inverted Kratzer potential (r^-1 repulsion plus r^-2 attraction); the Lennard-Jones potential (r^-12 repulsion, r^-6 attraction); and quasibound states for the rotational spectrum of the hydrogen molecule (X^1 Σ_g^+, v=0, J=0 to 50). Comparisons with numerical integrations and other methods show that the much simpler dimensional scaling method, carried to second order (terms in 1/D^2), yields good results over an extremely wide range of the ratio of level widths to spacings. Other methods have not yet evaluated the very broad H_2 rotational resonances reported here (J>39), which lie far above the centrifugal barrier.

  13. Transition physics and scaling overview

    International Nuclear Information System (INIS)

    Carlstrom, T.N.

    1995-12-01

    This paper presents an overview of recent experimental progress towards understanding H-mode transition physics and scaling. Terminology and techniques for studying H-mode are reviewed and discussed. The model of shear E x B flow stabilization of edge fluctuations at the L-H transition is gaining wide acceptance and is further supported by observations of edge rotation on a number of new devices. Observations of poloidal asymmetries of edge fluctuations and dephasing of density and potential fluctuations after the transition pose interesting challenges for understanding H-mode physics. Dedicated scans to determine the scaling of the power threshold have now been performed on many machines. A clear B_t dependence is universally observed but dependence on the line averaged density is complicated. Other dependencies are also reported. Studies of the effect of neutrals and error fields on the power threshold are under investigation. The ITER threshold database has matured and offers guidance to the power threshold scaling issues relevant to next-step devices

  14. The Principle of Social Scaling

    Directory of Open Access Journals (Sweden)

    Paulo L. dos Santos

    2017-01-01

    This paper identifies a general class of economic processes capable of generating the first-moment constraints implicit in the observed cross-sectional distributions of a number of economic variables: processes of social scaling. Across a variety of settings, the outcomes of economic competition reflect the normalization of individual values of certain economic quantities by average or social measures of themselves. The resulting socioreferential processes establish systematic interdependences among individual values of important economic variables, which under certain conditions take the form of emergent first-moment constraints on their distributions. The paper postulates a principle describing this systemic regulation of socially scaled variables and illustrates its empirical purchase by showing how capital- and labor-market competition can give rise to patterns of social scaling that help account for the observed distributions of Tobin’s q and wage income. The paper’s discussion embodies a distinctive approach to understanding and investigating empirically the relationship between individual agency and structural determinations in complex economic systems and motivates the development of observational foundations for aggregative, macrolevel economic analysis.

  15. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme, the results of the large scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to lie in a narrow range of two to three orders of magnitude. The test design did not allow fracture zones to be tested individually; this could be improved by testing the high hydraulic conductivity regions specifically. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could be either because there is no hydraulic connection, or because there is a connection but the response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  16. Development of emotional stability scale

    Directory of Open Access Journals (Sweden)

    M Chaturvedi

    2010-01-01

    Full Text Available Background: Emotional stability remains a central theme in personality studies. The concept of stable emotional behavior at any level is that which reflects the fruits of normal emotional development. The study aims at the development of an emotional stability scale. Materials and Methods: Based on the available literature, the components of emotional stability were identified and 250 items were developed, covering each component. Two-stage elimination of items was carried out, i.e., through judges' opinions and item analysis. Results: Fifty items with the highest 't' values, covering 5 dimensions of emotional stability, viz. pessimism vs. optimism, anxiety vs. calm, aggression vs. tolerance, dependence vs. autonomy, and apathy vs. empathy, were retained in the final scale. Reliability as checked by Cronbach's alpha was .81 and by the split-half method it was .79. Content validity and construct validity were checked. Norms are given in the form of cumulative percentages. Conclusion: Based on psychometric principles, a 50-item, self-administered 5-point Likert-type rating scale was developed for the measurement of emotional stability.
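    The two reliability coefficients reported here follow standard formulas. As an illustrative sketch (not the authors' code, and with synthetic data), Cronbach's alpha and a Spearman-Brown corrected split-half coefficient can be computed from an (n_respondents, n_items) score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def split_half(items):
    """Spearman-Brown corrected split-half reliability (odd vs. even items)."""
    items = np.asarray(items, dtype=float)
    a = items[:, ::2].sum(axis=1)               # score on one half
    b = items[:, 1::2].sum(axis=1)              # score on the other half
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)
```

    With reasonably consistent items, both coefficients land in the high range reported in the abstract; the exact values depend entirely on the data.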

  17. Interest in Aesthetic Rhinoplasty Scale.

    Science.gov (United States)

    Naraghi, Mohsen; Atari, Mohammad

    2017-04-01

    Interest in cosmetic surgery is increasing, with rhinoplasty being one of the most popular surgical procedures. It is essential that surgeons identify patients with existing psychological conditions before any procedure. This study aimed to develop and validate the Interest in Aesthetic Rhinoplasty Scale (IARS). Four studies were conducted to develop the IARS and to evaluate different indices of validity (face, content, construct, criterion, and concurrent validities) and reliability (internal consistency, split-half coefficient, and temporal stability) of the scale. The four study samples included a total of 463 participants. Statistical analysis revealed satisfactory psychometric properties in all samples. Scores on the IARS were negatively correlated with self-esteem scores (r = -0.296; p < 0.01) and positively correlated with social dysfunction (r = 0.268; p < 0.01) and depression (r = 0.308; p < 0.01). The internal and test-retest coefficients of consistency were found to be high (α = 0.93; intraclass coefficient = 0.94). Rhinoplasty patients were found to have significantly higher IARS scores than nonpatients (p < 0.001). Findings of the present studies provided evidence for face, content, construct, criterion, and concurrent validities and internal and test-retest reliability of the IARS. This evidence supports the use of the scale in clinical and research settings.

  18. Temporal scaling in information propagation

    Science.gov (United States)

    Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi

    2014-06-01

    For the study of information propagation, one fundamental problem is uncovering the universal laws governing its dynamics. From the microscopic perspective, this problem is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of the information and the interactions between individuals. Although the temporal effect of attractiveness is widely studied, the temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation using a dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability that a message propagates between two individuals decays with the time elapsed since their latest interaction, obeying a power law. Leveraging this scaling law, we propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
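    A power-law decay of this kind, p(t) = c * t^(-alpha), is conventionally estimated by linear regression in log-log space. A minimal sketch on synthetic data (the values of c and alpha below are assumptions for illustration, not numbers from the paper):

```python
import numpy as np

# Hypothetical binned data: propagation probability p(t) versus the
# latency t (hours) since the two individuals' last interaction.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
p = 0.05 * t ** (-0.8)          # synthetic, power-law exponent 0.8 assumed

# Fit the exponent via  log p = log c - alpha * log t
slope, intercept = np.polyfit(np.log(t), np.log(p), 1)
alpha = -slope                  # recovers 0.8 on this noise-free example
```

    On real, noisy latency data the same fit would be applied to empirically binned propagation frequencies.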

  19. Scale-invariant gravity: geometrodynamics

    International Nuclear Information System (INIS)

    Anderson, Edward; Barbour, Julian; Foster, Brendan; Murchadha, Niall O

    2003-01-01

    We present a scale-invariant theory, conformal gravity, which closely resembles the geometrodynamical formulation of general relativity (GR). While previous attempts to create scale-invariant theories of gravity have been based on Weyl's idea of a compensating field, our direct approach dispenses with this and is built by extension of the method of best matching w.r.t. scaling developed in the parallel particle dynamics paper by one of the authors. In spatially compact GR, there is an infinity of degrees of freedom that describe the shape of 3-space which interact with a single volume degree of freedom. In conformal gravity, the shape degrees of freedom remain, but the volume is no longer a dynamical variable. Further theories and formulations related to GR and conformal gravity are presented. Conformal gravity is successfully coupled to scalars and the gauge fields of nature. It should describe the solar system observations as well as GR does, but its cosmology and quantization will be completely different

  20. Statistical and Judgmental Criteria for Scale Purification

    DEFF Research Database (Denmark)

    Wieland, Andreas; Durach, Christian F.; Kembro, Joakim

    2017-01-01

    of scale purification, to critically analyze the current state of scale purification in supply chain management (SCM) research and to provide suggestions for advancing the scale-purification process. Design/methodology/approach A framework for making scale-purification decisions is developed and used...

  1. Scaling properties of foreign exchange volatility

    NARCIS (Netherlands)

    Gençay, R.; Selçuk, F.; Whitcher, B.

    2001-01-01

    In this paper, we investigate the scaling properties of foreign exchange volatility. Our methodology is based on a wavelet multi-scaling approach which decomposes the variance of a time series and the covariance between two time series on a scale-by-scale basis through the application of a discrete wavelet transform.
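    The scale-by-scale variance decomposition described here can be sketched with a plain Haar discrete wavelet transform (an illustrative stand-in; the authors' actual filter and estimator may differ). By orthonormality, the per-scale contributions plus the residual smooth energy sum to the total sample variance:

```python
import numpy as np

def haar_variance_by_scale(x, levels):
    """Split the sample variance of x across dyadic scales with a Haar DWT.

    Returns the variance contribution of each detail level, finest first.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    approx = x - x.mean()                    # work with the demeaned series
    variances = []
    for _ in range(levels):
        approx = approx[: len(approx) // 2 * 2]            # even length
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2)  # wavelet coeffs
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # smooth coeffs
        variances.append(np.sum(detail ** 2) / n)  # this scale's share
    return np.array(variances)
```

    For a fully decomposed, power-of-two-length series, the returned contributions sum to the (population) variance of the input.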

  2. Evaluating the impact of farm scale innovation at catchment scale

    Science.gov (United States)

    van Breda, Phelia; De Clercq, Willem; Vlok, Pieter; Querner, Erik

    2014-05-01

    Hydrological modelling lends itself to other disciplines very well, normally as a process-based system that acts as a catalogue of events taking place. These hydrological models are spatio-temporal in their design and are generally well suited to what-if situations in other disciplines. Scaling should therefore be a function of the purpose of the modelling. Process is always linked with scale or support, but the temporal resolution can affect the results if the spatial scale is not suitable. The use of hydrological response units tends to lump areas around physical features but disregards farm boundaries. Farm boundaries are often the crucial uppermost resolution needed to gain more value from hydrological modelling. In the Letaba Catchment of South Africa, we find a wide variety of land uses, different models of ownership, and different farming systems ranging from large commercial farms to small subsistence farming. All of these have the same basic right to water, but water distribution in the catchment is somewhat of a problem. Since water quantity is also a problem, the water supply systems need to ensure that valuable production areas are not left without water. Clearly, hydrological modelling should therefore be sensitive to specific land use. As a measure of productivity, a system of small-farmer production evaluation was designed. This activity presents a dynamic system that is generally not considered within hydrological modelling but depends on it. For sustainable development, a number of important concepts needed to be aligned with activities in this region, and the regulatory actions also need to be adhered to. This study aimed at aligning the activities in a region with the vision and objectives of the regulatory authorities. South Africa's system of socio-economic development planning is complex and mostly ineffective.
There are many regulatory authorities involved, often with unclear

  3. Development of a Facebook Addiction Scale.

    Science.gov (United States)

    Andreassen, Cecilie Schou; Torsheim, Torbjørn; Brunborg, Geir Scott; Pallesen, Ståle

    2012-04-01

    The Bergen Facebook Addiction Scale (BFAS), initially a pool of 18 items, three reflecting each of the six core elements of addiction (salience, mood modification, tolerance, withdrawal, conflict, and relapse), was constructed and administered to 423 students together with several other standardized self-report scales (Addictive Tendencies Scale, Online Sociability Scale, Facebook Attitude Scale, NEO-FFI, BIS/BAS scales, and Sleep questions). The item with the highest corrected item-total correlation within each of the six addiction elements was retained in the final scale. The factor structure of the scale was good (RMSEA = .046, CFI = .99) and coefficient alpha was .83. The 3-week test-retest reliability coefficient was .82. The scores converged with scores for other scales of Facebook activity. They were also positively related to Neuroticism and Extraversion, and negatively related to Conscientiousness. High scores on the new scale were associated with delayed bedtimes and rising times.
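    The selection criterion used here, corrected item-total correlation, correlates each item with the sum of the remaining items, so that an item is not correlated with itself. A minimal numpy sketch on synthetic data (illustrative only, not the authors' procedure or data):

```python
import numpy as np

def corrected_item_total(items):
    """Correlation of each item with the total score of the *other* items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Synthetic "element" with three candidate items; item 0 tracks the
# latent trait most closely (smallest noise), so it should show the
# highest corrected item-total correlation and be the one retained.
rng = np.random.default_rng(42)
trait = rng.normal(size=300)
items = np.column_stack([trait + s * rng.normal(size=300)
                         for s in (0.2, 1.0, 1.5)])
best = int(np.argmax(corrected_item_total(items)))
```
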

  4. The Origin of Scales and Scaling Laws in Star Formation

    Science.gov (United States)

    Guszejnov, David; Hopkins, Philip; Grudich, Michael

    2018-01-01

    Star formation is one of the key processes of cosmic evolution as it influences phenomena from the formation of galaxies to the formation of planets, and the development of life. Unfortunately, there is no comprehensive theory of star formation, despite intense effort on both the theoretical and observational sides, due to the large amount of complicated, non-linear physics involved (e.g. MHD, gravity, radiation). A possible approach is to formulate simple, easily testable models that allow us to draw a clear connection between phenomena and physical processes. In the first part of the talk I will focus on the origin of the IMF peak, the characteristic scale of stars. There is debate in the literature about whether the initial conditions of isothermal turbulence could set the IMF peak. Using detailed numerical simulations, I will demonstrate that this is not the case: the initial conditions are "forgotten" through the fragmentation cascade. Additional physics (e.g. feedback) is required to set the IMF peak. In the second part I will use simulated galaxies from the Feedback in Realistic Environments (FIRE) project to show that most star formation theories are unable to reproduce the near-universal IMF peak of the Milky Way. Finally, I will present analytic arguments (supported by simulations) that a large number of observables (e.g. the IMF slope) are the consequences of scale-free structure formation and are (to first order) unsuitable for differentiating between star formation theories.

  5. A scale invariance criterion for LES parametrizations

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs

    2015-01-01

    Full Text Available Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant. It further proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by hydrostatic balance, the energy cascade is due to horizontal advection, and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.

  6. Proposing a tornado watch scale

    Science.gov (United States)

    Mason, Jonathan Brock

    This thesis reviews the language used in tornado safety recommendations from various sources, develops a rubric for scaled tornado safety recommendations, and then develops and tests a tornado watch scale. The rubric is used to evaluate tornado refuge/shelter adequacy responses of Tuscaloosa residents gathered following the April 27, 2011 Tuscaloosa, Alabama EF4 tornado. There was a significant difference in the counts of refuge adequacy for Tuscaloosa residents when holding their locations during the April 27th tornado constant and comparing adequacy ratings for weak (EF0-EF1), strong (EF2-EF3) and violent (EF4-EF5) tornadoes. There was also a significant difference when comparing the future tornado refuge plans of those same participants to the adequacy ratings for weak, strong and violent tornadoes. The tornado refuge rubric is then revised into a six-class, hierarchical Tornado Watch Scale (TWS) from Level 0 to Level 5, based on the likelihood of high-impact or low-impact severe weather events containing weak, strong or violent tornadoes. These levels represent maximum expected tornado intensity and include tornado safety recommendations from the tornado refuge rubric. Audio recordings similar to those used in current National Oceanic and Atmospheric Administration (NOAA) weather radio communications were developed to correspond to three levels of the TWS, a current Storm Prediction Center (SPC) tornado watch and a particularly dangerous situation (PDS) tornado watch. These were then used in interviews of Alabama residents to determine how changes to the information contained in the watch statements would affect each participant's tornado safety actions and perception of event danger. Results from interview participants (n=38) indicate a strong preference (97.37%) for the TWS when compared to current tornado watch and PDS tornado watch statements.
Results also show the TWS elicits more adequate safety decisions from participants

  7. NSLINK: NJOY-SCALE-LINK

    International Nuclear Information System (INIS)

    Leege, P.F.A. de

    1991-05-01

    NSLINK is a set of computer codes that couples the NJOY cross-section generation code to the SCALE-3 code system (using the AMPX-2 master library format) while retaining the Nordheim resolved-resonance treatment option. The following codes are included in NSLINK: XLACSR, a stripped-down version of the XLACS-2 code; MILER, which converts NJOY output (GENDF format) to AMPX-2 master format; UNITABR, a revised version of the UNITAB code; and BONAMI, for which certain subroutines are included in the package, replacing some subroutines in the BONAMI code, in order to take into account the combination of Bondarenko and Nordheim resonance treatments. (author). 6 refs., 1 fig

  8. Water flow at all scales

    DEFF Research Database (Denmark)

    Sand-Jensen, K.

    2006-01-01

    Continuous water flow is a unique feature of streams and distinguishes them from all other ecosystems. The main flow is always downstream, but it varies in time and space and can be difficult to measure and describe. The interest of hydrologists, geologists, biologists and farmers in water flow, and its physical impact, depends on whether the main focus is on the entire stream system, the adjacent fields, the individual reaches or the habitats of different species. It is important to learn how to manage flow at all scales, in order to understand the ecology of streams and the biology

  9. Accentuation-suppression and scaling

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik; Bundesen, Claus

    2012-01-01

    The limitations of the visual short-term memory (VSTM) system have become an increasingly popular field of study. One line of inquiry has focused on the way attention selects objects for encoding into VSTM. Using the framework of the Theory of Visual Attention (TVA; Bundesen, 1990, Psychological Review) … a scaling mechanism modulating the decision bias of the observer and also through an accentuation-suppression mechanism that modulates the degree of subjective relevance of objects, contracting attention around fewer, highly relevant objects while suppressing less relevant objects. These mechanisms may

  10. The Scales of Gravitational Lensing

    Directory of Open Access Journals (Sweden)

    Francesco De Paolis

    2016-03-01

    Full Text Available After exactly a century since the formulation of the general theory of relativity, the phenomenon of gravitational lensing is still an extremely powerful method of investigation in astrophysics and cosmology. Indeed, it is used to study the distribution of the stellar component in the Milky Way, to study dark matter and dark energy on very large scales, and even to discover exoplanets. Moreover, thanks to technological developments, it will allow the measurement of the physical parameters (mass, angular momentum and electric charge) of supermassive black holes at the center of our own and nearby galaxies.

  11. JavaScript at scale

    CERN Document Server

    Boduch, Adam

    2015-01-01

    Have you ever come up against an application that felt like it was built on sand? Maybe you've been tasked with creating an application that needs to last longer than a year before a complete re-write? If so, JavaScript at Scale is your missing documentation for maintaining scalable architectures. There's no prerequisite framework knowledge required for this book, however, most concepts presented throughout are adaptations of components found in frameworks such as Backbone, AngularJS, or Ember. All code examples are presented using ECMAScript 6 syntax, to make sure your applications are ready

  12. Scaling law in laboratory astrophysics

    International Nuclear Information System (INIS)

    Xia Jiangfan; Zhang Jie

    2001-01-01

    The use of state-of-the-art lasers makes it possible to produce, in the laboratory, the extreme conditions similar to those in astrophysical processes. The introduction of astrophysics-relevant ideas in laser-plasma interaction experiments is propitious to the understanding of astrophysical phenomena. However, the great difference between laser-produced plasma and astrophysical objects makes it awkward to model the latter by laser-plasma experiments. The author presents the physical reasons for modeling astrophysical plasmas by laser plasmas, connecting these two kinds of plasmas by scaling laws. This allows the creation of experimental test beds where observation and models can be quantitatively compared with laboratory data

  13. Outer scale of atmospheric turbulence

    Science.gov (United States)

    Lukin, Vladimir P.

    2005-10-01

    In the early 1970s, scientists in Italy (A. Consortini, M. Bertolotti, L. Ronchi), the USA (R. Buser, Ochs, S. Clifford) and the USSR (V. Pokasov, V. Lukin) almost simultaneously discovered the phenomenon of deviation from the power law and the effect of saturation for the phase structure function. Over a period of 35 years we have successively investigated the effect of the low-frequency spectral range of atmospheric turbulence on optical characteristics. The influence of turbulence models, as well as of the outer scale of turbulence, on the characteristics of telescopes and laser beam formation systems has also been determined.
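    The saturation of the phase structure function is conventionally modeled by introducing a finite outer scale L0 into the refractive-index power spectrum. A sketch of the standard von Kármán form (a textbook model, not code from this work): below the outer-scale frequency the spectrum flattens, which is what truncates the Kolmogorov power-law growth of the structure function.

```python
import numpy as np

def von_karman_spectrum(kappa, Cn2, L0):
    """Von Karman refractive-index spectrum (kappa in rad/m, L0 in m).

    Reduces to the Kolmogorov -11/3 power law for kappa >> 2*pi/L0 and
    flattens to a constant for kappa << 2*pi/L0.
    """
    kappa0 = 2.0 * np.pi / L0            # outer-scale cutoff frequency
    return 0.033 * Cn2 * (kappa ** 2 + kappa0 ** 2) ** (-11.0 / 6.0)
```

    Passing L0 -> infinity recovers the pure Kolmogorov spectrum, for which the phase structure function grows without bound instead of saturating.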

  14. Scaling of interfacial jump conditions

    International Nuclear Information System (INIS)

    Quezada G, S.; Vazquez R, A.; Espinosa P, G.

    2015-09-01

    To model the behavior of a nuclear reactor accurately, it is necessary to have balance models that take into account the different phenomena occurring in the reactor. These balances have to be coupled together through boundary conditions. The boundary conditions have been studied and different treatments have been given to the interface. This paper gives a brief description of some of the interfacial jump conditions that have been proposed in recent years. The scaling of an interfacial jump condition is also proposed, for coupling the different materials that are in contact within a nuclear reactor. (Author)

  15. Drift-Scale Radionuclide Transport

    International Nuclear Information System (INIS)

    Houseworth, J.

    2004-01-01

    The purpose of this model report is to document the drift scale radionuclide transport model, taking into account the effects of emplacement drifts on flow and transport in the vicinity of the drift, which are not captured in the mountain-scale unsaturated zone (UZ) flow and transport models ''UZ Flow Models and Submodels'' (BSC 2004 [DIRS 169861]), ''Radionuclide Transport Models Under Ambient Conditions'' (BSC 2004 [DIRS 164500]), and ''Particle Tracking Model and Abstraction of Transport Process'' (BSC 2004 [DIRS 170041]). The drift scale radionuclide transport model is intended to be used as an alternative model for comparison with the engineered barrier system (EBS) radionuclide transport model ''EBS Radionuclide Transport Abstraction'' (BSC 2004 [DIRS 169868]). For that purpose, two alternative models have been developed for drift-scale radionuclide transport. One of the alternative models is a dual continuum flow and transport model called the drift shadow model. The effects of variations in the flow field and fracture-matrix interaction in the vicinity of a waste emplacement drift are investigated through sensitivity studies using the drift shadow model (Houseworth et al. 2003 [DIRS 164394]). In this model, the flow is significantly perturbed (reduced) beneath the waste emplacement drifts. However, comparisons of transport in this perturbed flow field with transport in an unperturbed flow field show similar results if the transport is initiated in the rock matrix. This has led to a second alternative model, called the fracture-matrix partitioning model, that focuses on the partitioning of radionuclide transport between the fractures and matrix upon exiting the waste emplacement drift. The fracture-matrix partitioning model computes the partitioning, between fractures and matrix, of diffusive radionuclide transport from the invert (for drifts without seepage) into the rock water. 
The invert is the structure constructed in a drift to provide the floor of the

  16. Water content estimated from point scale to plot scale

    Science.gov (United States)

    Akyurek, Z.; Binley, A. M.; Demir, G.; Abgarmi, B.

    2017-12-01

    Soil moisture controls the partitioning of rainfall into infiltration and runoff. Here we investigate measurements of soil moisture using a range of techniques spanning different spatial scales. In order to understand soil water content in a test basin, 512 km2 in area, in the south of Turkey, a Cosmic Ray CRS200B soil moisture probe was installed at an elevation of 1459 m and an ML3 ThetaProbe (CS616) soil moisture sensor was installed at 5 cm depth to obtain continuous soil moisture. Neutron count measurements were corrected for changes in atmospheric pressure, atmospheric water vapour and the intensity of the incoming neutron flux. The calibration of the volumetric soil moisture was performed; from the laboratory analysis, the bulk density varies between 1.390 and 1.719 g/cm3, and the dominant soil textures are silty clay loam and silt loam. The water content reflectometer was calibrated for soil-specific conditions and soil moisture estimates were also corrected for soil temperature. In order to characterize the subsurface, soil electrical resistivity tomography (ERT) was used. Wenner and Schlumberger array geometries were used, with electrode spacing varied from 1 m to 5 m along 40 m and 200 m profiles. From the inversions of the ERT data it is apparent that within 50 m of the CRS200B the soil is moderately resistive to a depth of 2 m and more conductive at greater depths. At greater distances from the CRS200B, the ERT results indicate more resistive soils. In addition to the ERT surveys, ground penetrating radar surveys using a common mid-point configuration were carried out with 200 MHz antennas. The volumetric soil moisture obtained from GPR appears to overestimate that based on TDR observations. The values obtained from the CS616 (at point scale) and the CRS200B (at mesoscale) are compared with the values obtained at plot scale. For the field study dates (20-22.06.2017) the volumetric moisture content obtained from the CS616 was 25.14%, 25.22% and 25
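    The three neutron-count corrections mentioned (atmospheric pressure, water vapour, incoming flux) are conventionally applied as multiplicative factors. A sketch using the commonly cited COSMOS-style formulation; the coefficients and reference values below are the commonly used defaults, assumed here and not taken from this study:

```python
import numpy as np

def correct_neutron_counts(N_raw, P, h, I,
                           P_ref=1013.25, h_ref=0.0, I_ref=150.0, L=128.0):
    """Standard multiplicative corrections for cosmic-ray neutron counts.

    N_raw : raw counts per interval
    P     : barometric pressure (hPa);  L is the attenuation length (hPa)
    h     : absolute humidity (g/m^3)
    I     : incoming neutron flux from a reference monitor (counts)
    """
    f_p = np.exp((P - P_ref) / L)        # pressure correction
    f_wv = 1.0 + 0.0054 * (h - h_ref)    # water-vapour correction
    f_i = I_ref / I                      # incoming-flux correction
    return N_raw * f_p * f_wv * f_i
```

    The corrected counts are then converted to volumetric water content through a site calibration function, which is what the laboratory bulk-density and texture analysis above feeds into.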

  17. The SCALE-UP Project

    Science.gov (United States)

    Beichner, Robert

    2015-03-01

    The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students to learn and succeed not only in acquiring content, but also to practice important 21st century skills like problem solving, communication, and teamsmanship. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicates highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).

  18. The Regret/Disappointment Scale

    Directory of Open Access Journals (Sweden)

    Francesco Marcatto

    2008-01-01

    Full Text Available The present article investigates the effectiveness of methods traditionally used to distinguish between the emotions of regret and disappointment and presents a new method, the Regret and Disappointment Scale (RDS), for assessing the two emotions in decision making research. The validity of the RDS was tested in three studies. Study 1 used two scenarios, one prototypical of regret and the other of disappointment, to test and compare traditional methods ("How much regret do you feel?" and "How much disappointment do you feel?") with the RDS. Results showed that only the RDS clearly differentiated between the constructs of regret and disappointment. Study 2 confirmed the validity of the RDS in a real-life scenario, in which both feelings of regret and disappointment could be experienced. Study 2 also demonstrated that the RDS can discriminate between regret and disappointment with results similar to those obtained by using a context-specific scale. Study 3 showed the advantages of the RDS over the traditional methods in gambling situations commonly used in decision making research, and provided evidence for the convergent validity of the RDS.

  19. Dynamic scaling in natural swarms

    Science.gov (United States)

    Cavagna, Andrea; Conti, Daniele; Creato, Chiara; Del Castello, Lorenzo; Giardina, Irene; Grigera, Tomas S.; Melillo, Stefania; Parisi, Leonardo; Viale, Massimiliano

    2017-09-01

    Collective behaviour in biological systems presents theoretical challenges beyond the borders of classical statistical physics. The lack of concepts such as scaling and renormalization is particularly problematic, as it forces us to negotiate details whose relevance is often hard to assess. In an attempt to improve this situation, we present here experimental evidence of the emergence of dynamic scaling laws in natural swarms of midges. We find that spatio-temporal correlation functions in different swarms can be rescaled by using a single characteristic time, which grows with the correlation length with a dynamical critical exponent z ~ 1, a value not found in any other standard statistical model. To check whether out-of-equilibrium effects may be responsible for this anomalous exponent, we run simulations of the simplest model of self-propelled particles and find z ~ 2, suggesting that natural swarms belong to a novel dynamic universality class. This conclusion is strengthened by experimental evidence of the presence of non-dissipative modes in the relaxation, indicating that previously overlooked inertial effects are needed to describe swarm dynamics. The absence of a purely dissipative regime suggests that natural swarms undergo a near-critical censorship of hydrodynamics.

  20. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  1. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  2. Goethite Bench-scale and Large-scale Preparation Tests

    Energy Technology Data Exchange (ETDEWEB)

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the

  3. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales, already observed in other works, depends on the crosswise location. Large-scale positive fluctuations correlate with stronger small-scale activity on the low-speed side of the mixing layer, and with reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, the modulation of the small scales by the large-scale gradients, rather than by the large-scale fluctuations themselves, has been additionally investigated.

  4. Secondary cycle water chemistry for 500 MWe pressurised heavy water reactor (PHWR) plant: a case study

    International Nuclear Information System (INIS)

    Bhandakkar, A.; Subbarao, A.; Agarwal, N.K.

    1995-01-01

    In the turbine and secondary cycle systems of the 500 MWe PHWR, the chemistry of steam and water is controlled to prevent corrosion in the steam generators (SGs), feedwater system and steam system; scale and deposit formation on heat transfer surfaces; and carry-over of solids by steam with deposition on steam turbine blades. The water chemistry of the secondary side of the SGs and of the turbine cycle is discussed. (author). 8 refs., 2 tabs., 1 fig

  5. Dimensional analysis, scaling and fractals

    International Nuclear Information System (INIS)

    Timm, L.C.; Reichardt, K.; Oliveira Santos Bacchi, O.

    2004-01-01

    Dimensional analysis refers to the study of the dimensions that characterize physical entities, like mass, force and energy. Classical mechanics is based on three fundamental entities, with dimensions MLT: the mass M, the length L and the time T. The combination of these entities gives rise to derived entities, like volume, speed and force, of dimensions L³, LT⁻¹ and MLT⁻², respectively. In other areas of physics, four other fundamental entities are defined, among them the temperature θ and the electrical current I. The parameters that characterize physical phenomena are related among themselves by laws, in general of quantitative nature, in which they appear as measures of the considered physical entities. The measure of an entity is the result of its comparison with another one, of the same type, called unit. Maps are also drawn to scale; for example, at a scale of 1:10,000, 1 cm² of paper can represent 10,000 m² in the field. Entities that differ in scale cannot be compared in a simple way. Fractal geometry, in contrast to the Euclidean geometry, admits fractional dimensions. The term fractal is defined in Mandelbrot (1982) as coming from the Latin fractus, derived from frangere, which signifies to break, to form irregular fragments. The term fractal is opposite to the term algebra (from the Arabic: jabara), which means to join, to put together the parts. For Mandelbrot, fractals are non-topologic objects, that is, objects which have as their dimension a real, non-integer number, which exceeds the topologic dimension. For the topologic objects, or Euclidean forms, the dimension is an integer (0 for the point, 1 for a line, 2 for a surface, and 3 for a volume). The fractal dimension of Mandelbrot is a measure of the degree of irregularity of the object under consideration. It is related to the speed by which the estimate of the measure of an object increases as the measurement scale decreases. An object normally taken as uni-dimensional, like a piece of a
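The fractal dimension described here can be estimated with the standard box-counting method: count the boxes of side ε needed to cover the object, and read the dimension off the slope of log N(ε) versus log(1/ε). Below is a minimal sketch (my own illustration, not from the article) applied to a straight line segment, whose dimension should come out near the topologic value 1.

```python
import numpy as np

def box_count(points, eps):
    # points: (n, 2) array; count the grid boxes of side eps that are occupied
    boxes = {tuple(p) for p in np.floor(points / eps).astype(int)}
    return len(boxes)

# sample the diagonal of the unit square (a one-dimensional object)
t = np.linspace(0.0, 1.0, 10000)
line = np.column_stack([t, t])

epsilons = [0.1, 0.05, 0.025, 0.0125]
counts = [box_count(line, e) for e in epsilons]

# dimension = slope of log N(eps) versus log(1/eps)
slope, _ = np.polyfit(np.log(1.0 / np.array(epsilons)), np.log(counts), 1)
print(f"estimated dimension of a line: {slope:.2f}")
```

For a genuinely fractal set such as a coastline, the same procedure yields a non-integer slope, which is exactly the sense in which the measure grows as the measurement scale decreases.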

  6. Scaling criteria for rock dynamic experiments

    Energy Technology Data Exchange (ETDEWEB)

    Crowley, Barbara K [Lawrence Radiation Laboratory, University of California, Livermore, CA (United States)

    1970-05-01

    A set of necessary conditions for performing scaled rock dynamics experiments is derived from the conservation equations of continuum mechanics. Performing scaled experiments in two different materials is virtually impossible because of the scaling restrictions imposed by two equations of state. However, performing dynamically scaled experiments in the same material is possible if time and distance use the same scaling factor and if the effects of gravity are insignificant. When gravity becomes significant, dynamic scaling is no longer possible. To illustrate these results, example calculations of megaton and kiloton experiments are considered. (author)
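The gravity restriction can be made concrete with a back-of-envelope check (my own illustration, not from the paper): when lengths and times both scale by the same factor λ, velocities are unchanged, so any dimensionless group containing g, such as gT/V, changes by λ, and similarity survives only while gravity is insignificant. The numbers below are invented.

```python
g = 9.8                           # m/s^2
x_full, t_full = 1000.0, 0.5      # hypothetical full-scale length (m) and time (s)
lam = 0.01                        # 1:100 scaled experiment

# same material, same scale factor for time and distance
x_model, t_model = lam * x_full, lam * t_full

# velocities v = x/t are invariant under this scaling
v_full, v_model = x_full / t_full, x_model / t_model

# the dimensionless gravity group g*t/v is NOT invariant: it changes by lam
grav_full = g * t_full / v_full
grav_model = g * t_model / v_model
print(v_full, v_model)            # identical velocities
print(grav_model / grav_full)     # shrinks by the scale factor lam
```

This is why small, fast experiments in the same material can be dynamically similar to large ones, but only up to the point where gravity-driven terms become comparable to the inertial ones.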

  7. Scaling laws for modeling nuclear reactor systems

    International Nuclear Information System (INIS)

    Nahavandi, A.N.; Castellana, F.S.; Moradkhanian, E.N.

    1979-01-01

    Scale models are used to predict the behavior of nuclear reactor systems during normal and abnormal operation as well as under accident conditions. Three types of scaling procedures are considered: time-reducing, time-preserving volumetric, and time-preserving idealized model/prototype. The necessary relations between the model and the full-scale unit are developed for each scaling type. Based on these relationships, it is shown that scaling procedures can lead to distortion in certain areas that are discussed. It is advised that, depending on the specific unit to be scaled, a suitable procedure be chosen to minimize model-prototype distortion

  8. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  9. Drug delivery across length scales.

    Science.gov (United States)

    Delcassian, Derfogail; Patel, Asha K; Cortinas, Abel B; Langer, Robert

    2018-02-20

    Over the last century, there has been a dramatic change in the nature of therapeutic, biologically active molecules available to treat disease. Therapies have evolved from extracted natural products towards rationally designed biomolecules, including small molecules, engineered proteins and nucleic acids. The use of potent drugs which target specific organs, cells or biochemical pathways, necessitates new tools which can enable controlled delivery and dosing of these therapeutics to their biological targets. Here, we review the miniaturisation of drug delivery systems from the macro to nano-scale, focussing on controlled dosing and controlled targeting as two key parameters in drug delivery device design. We describe how the miniaturisation of these devices enables the move from repeated, systemic dosing, to on-demand, targeted delivery of therapeutic drugs and highlight areas of focus for the future.

  10. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 mu m. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further development of sensitive biosensor assays.

  11. Particle Bed Reactor scaling relationships

    International Nuclear Information System (INIS)

    Slovik, G.; Araj, K.; Horn, F.L.; Ludewig, H.; Benenati, R.

    1987-01-01

    Scaling relationships for Particle Bed Reactors (PBRs) are discussed. The particular applications are short duration systems, i.e., for propulsion or burst power. Particle Bed Reactors can use a wide selection of different moderators and reflectors and can be designed for a wide range of powers and bed power densities. Additional design considerations include the effect of varying the number of fuel elements, the outlet Mach number in the hot gas channel, etc. All of these variables and options result in a wide range of reactor weights and performance. Extremely lightweight reactors (approximately 1 kg/MW) are possible with the appropriate choice of moderator/reflector and power density. Such systems are very attractive for propulsion systems where parasitic weight has to be minimized

  12. Scale-invariant extended inflation

    International Nuclear Information System (INIS)

    Holman, R.; Kolb, E.W.; Vadas, S.L.; Wang, Y.

    1991-01-01

    We propose a model of extended inflation which makes use of the nonlinear realization of scale invariance involving the dilaton coupled to an inflaton field whose potential admits a metastable ground state. The resulting theory resembles the Jordan-Brans-Dicke version of extended inflation. However, quantum effects, in the form of the conformal anomaly, generate a mass for the dilaton, thus allowing our model to evade the problems of the original version of extended inflation. We show that extended inflation can occur for a wide range of inflaton potentials with no fine-tuning of dimensionless parameters required. Furthermore, we also find that it is quite natural for the extended-inflation period to be followed by an epoch of slow-rollover inflation as the dilaton settles down to the minimum of its induced potential

  13. Size scaling of static friction.

    Science.gov (United States)

    Braun, O M; Manini, Nicola; Tosatti, Erio

    2013-02-22

    Sliding friction across a thin soft lubricant film typically occurs by stick slip, the lubricant fully solidifying at stick, yielding and flowing at slip. The static friction force per unit area preceding slip is known from molecular dynamics (MD) simulations to decrease with increasing contact area. That makes the large-size fate of stick slip unclear and unknown; its possible vanishing is important as it would herald smooth sliding with a dramatic drop of kinetic friction at large size. Here we formulate a scaling law of the static friction force, which for a soft lubricant is predicted to decrease as f_m + Δf/A^γ for increasing contact area A, with γ > 0. Our main finding is that the value of f_m, controlling the survival of stick slip at large size, can be evaluated by simulations of comparably small size. MD simulations of soft lubricant sliding are presented, which verify this theory.
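The quoted law f(A) = f_m + Δf/A^γ implies that the large-area limit f_m can be extrapolated from small-size data: with γ fixed, f is linear in A^(-γ), so a least-squares fit recovers f_m and Δf. The sketch below uses invented numbers (the paper fits MD data, not this synthetic set).

```python
import numpy as np

# synthetic friction data following the assumed scaling law
gamma = 0.5
f_m_true, delta_f = 0.2, 3.0
A = np.array([10.0, 40.0, 160.0, 640.0])     # contact areas (arbitrary units)
f = f_m_true + delta_f * A ** (-gamma)       # static friction per unit area

# linear least-squares fit of f against A**(-gamma): intercept = f_m
X = np.column_stack([np.ones_like(A), A ** (-gamma)])
(f_m_fit, delta_fit), *_ = np.linalg.lstsq(X, f, rcond=None)
print(f"extrapolated large-area limit f_m = {f_m_fit:.3f}")
```

A nonzero fitted f_m would indicate that stick slip survives at large size; f_m ≈ 0 would signal a crossover to smooth sliding.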

  14. Cognitive Reserve Scale and ageing

    Directory of Open Access Journals (Sweden)

    Irene León

    2016-01-01

    Full Text Available The construct of cognitive reserve attempts to explain why some individuals with brain impairment, and some people during normal ageing, can solve cognitive tasks better than expected. This study aimed to estimate cognitive reserve in a healthy sample of people aged 65 years and over, with special attention to its influence on cognitive performance. For this purpose, it used the Cognitive Reserve Scale (CRS) and a neuropsychological battery that included tests of attention and memory. The results revealed that women obtained higher total CRS raw scores than men. Moreover, the CRS predicted the learning curve, short-term and long-term memory, but not attentional and working memory performance. Thus, the CRS offers a new proxy of cognitive reserve based on cognitively stimulating activities performed by healthy elderly people. Following an active lifestyle throughout life was associated with better intellectual performance and positive effects on relevant aspects of quality of life.

  15. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U. S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad.

  16. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields results in sd-shell nuclei as good as those of 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus ⁴⁶Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to ¹³⁰Ce and ¹²⁸Ba using the same effective nucleon-nucleon interaction. (Auth.)

  17. Bacterial Communities: Interactions to Scale

    Directory of Open Access Journals (Sweden)

    Reed M. Stubbendieck

    2016-08-01

    Full Text Available In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities.
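One common way to encode the pairwise interactions discussed above is a generalized Lotka-Volterra model; this is my own illustration, not a model prescribed by the review, and every parameter below is invented. Each species i follows dx_i/dt = x_i (r_i + Σ_j a_ij x_j), where a_ij is the effect of species j on species i (negative entries mean competition).

```python
import numpy as np

r = np.array([1.0, 0.8, 0.6])                 # intrinsic growth rates
a = np.array([[-1.0, -0.3, -0.2],
              [-0.4, -1.0, -0.1],
              [-0.2, -0.3, -1.0]])            # pairwise interaction matrix
x = np.array([0.1, 0.1, 0.1])                 # initial abundances

# simple forward-Euler integration of dx/dt = x * (r + a @ x)
dt = 0.01
for _ in range(5000):
    x = x + dt * x * (r + a @ x)

print(np.round(x, 3))                         # steady-state abundances
```

Even this toy model shows how a handful of pairwise coefficients determines the community-level outcome, which is the scaling-up from local interchanges to community structure that the review emphasizes.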

  18. Scaling Theory of Polyelectrolyte Nanogels

    Science.gov (United States)

    Qu, Li-Jian

    2017-08-01

    The present paper develops the scaling theory of polyelectrolyte nanogels in dilute and semidilute solutions. The dependencies of the nanogel dimension on branching topology, charge fraction, subchain length, segment number and solution concentration are obtained. For a single polyelectrolyte nanogel in salt-free solution, the nanogel may be swelled by the Coulombic repulsion (the so-called polyelectrolyte regime) or by the osmotic counterion pressure (the so-called osmotic regime). Characteristics and boundaries between the different regimes of a single polyelectrolyte nanogel are summarized. In dilute solution, nanogels in the polyelectrolyte regime adopt an increasingly ordered distribution as concentration increases, while nanogels in the osmotic regime always remain randomly distributed. Different concentration dependencies of the size of a nanogel in the polyelectrolyte regime and in the osmotic regime are also explored. Supported by China Earthquake Administration under Grant No. 20150112 and National Natural Science Foundation of China under Grant No. 21504014

  19. Scaling Theory of Polyelectrolyte Nanogels

    International Nuclear Information System (INIS)

    Qu Li-Jian

    2017-01-01

    The present paper develops the scaling theory of polyelectrolyte nanogels in dilute and semidilute solutions. The dependencies of the nanogel dimension on branching topology, charge fraction, subchain length, segment number and solution concentration are obtained. For a single polyelectrolyte nanogel in salt-free solution, the nanogel may be swelled by the Coulombic repulsion (the so-called polyelectrolyte regime) or by the osmotic counterion pressure (the so-called osmotic regime). Characteristics and boundaries between the different regimes of a single polyelectrolyte nanogel are summarized. In dilute solution, nanogels in the polyelectrolyte regime adopt an increasingly ordered distribution as concentration increases, while nanogels in the osmotic regime always remain randomly distributed. Different concentration dependencies of the size of a nanogel in the polyelectrolyte regime and in the osmotic regime are also explored. (paper)

  20. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  1. Scaling the Baltic Sea environment

    DEFF Research Database (Denmark)

    Larsen, Henrik Gutzon

    2008-01-01

    The Baltic Sea environment has since the early 1970s passed through several phases of spatial objectification in which the ostensibly well-defined semi-enclosed sea has been framed and reframed as a geographical object for intergovernmental environmental politics. Based on a historical analysis of this development, this article suggests that environmental politics critically depend on the delineation of relatively bounded spaces that identify and situate particular environmental concerns as spatial objects for politics. These spaces are not simply determined by 'nature' or some environmental-scientific logic, but should rather be seen as temporal outcomes of scale framing processes, processes that are accentuated by contemporary conceptions of the environment (or nature) in terms of multi-scalar ecosystems. This has implications for how an environmental concern is perceived and politically addressed.

  2. Baryogenesis at the electroweak scale

    International Nuclear Information System (INIS)

    Dine, M.; Huet, P.; Singleton, R. Jr.

    1992-01-01

    We explore some issues involved in generating the baryon asymmetry at the electroweak scale. A simple two-dimensional model is analyzed which illustrates the role of the effective action in computing the asymmetry. We stress the fact that baryon production ceases at a very small value of the Higgs field; as a result, certain two-Higgs models which have been studied recently cannot produce sufficient asymmetry, while quite generally models with only doublets can barely produce the observed baryon density; models with gauge singlets are more promising. We also review limits on Higgs masses coming from the requirement that the baryon asymmetry not be wiped out after the phase transition. We note that there are a variety of uncertainties in these calculations, and that even in models with a single Higgs doublet one cannot rule out a Higgs mass below 55 GeV. (orig.)

  3. Small-scale classification schemes

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2004-01-01

    Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main effects remained largely implicit. The requirements classification contributed to constraining the requirements-engineering process by supporting the software engineers in maintaining some level of control over the process. This way, the requirements classification provided the software engineers...

  4. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
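At lowest order the bias expansion described above reduces to δ_g = b₁ δ_m, so the linear bias parameter can be read off as the ratio of the tracer-matter cross-correlation to the matter auto-correlation. The toy numeric illustration below is not from the review; the field values, noise level and bias value are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
b1 = 1.8                                   # assumed linear bias parameter
delta_m = rng.normal(0.0, 0.1, 100000)     # quasi-linear matter overdensities
noise = rng.normal(0.0, 0.05, delta_m.size)
delta_g = b1 * delta_m + noise             # biased tracer field + stochasticity

# estimator: b1 = <delta_g delta_m> / <delta_m^2>
b1_hat = np.mean(delta_g * delta_m) / np.mean(delta_m ** 2)
print(f"recovered linear bias: {b1_hat:.2f}")
```

On quasi-linear scales this ratio is scale-independent; the higher-order terms of the expansion (tidal fields, time derivatives) show up as scale-dependent corrections that this zeroth-order sketch ignores.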

  5. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as following. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  6. The Adaptive Multi-scale Simulation Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-09-01

    The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation meta-data, AMSI allows minimally intrusive adaptation of existing single-scale simulations for use in multi-scale simulations. Support for dynamic runtime operations, such as single- and multi-scale adaptive properties, is a key focus of AMSI. Particular effort has been devoted to the development of scale-sensitive load-balancing operations, which allow a single-scale simulation incorporated into a multi-scale simulation using AMSI to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.

  7. White Mango Scale, Aulacaspis tubercularis , Distribution and ...

    African Journals Online (AJOL)

    White Mango Scale, Aulacaspis tubercularis, Distribution and Severity Status in East and West Wollega Zones, ... Among the insect pests attacking the mango plant, white mango scale is the most devastating. ...

  8. The Development of Marital Maturity Scale

    Directory of Open Access Journals (Sweden)

    Muhammed YILDIZ

    2017-06-01

    Full Text Available In this study, validity, reliability and item analysis studies of the Marital Maturity Scale, prepared to test whether individuals are ready for marriage, were carried out. The scale was developed on 623 single adults. In the validity studies, exploratory and confirmatory factor analyses and criterion-related validity studies were performed. Factor analysis revealed that the scale has four dimensions; the four factors together account for 60.91% of the total variance. The factor loadings of the items in the scale range from 0.42 to 0.86. The Inonu Marriage Attitude Scale was used in the criterion-related validity studies; the correlation between the two scales, r=0.72 (p=0.000), was significant. The subscales were significantly correlated with the total scale. The Cronbach alpha values were 0.85 for the first dimension, 0.68 for the second, 0.80 for the third, 0.91 for the fourth, and 0.90 for the total scale. Test-retest results, r=0.70 (p=0.000), were significant. Item analysis revealed that the individuals in the lower 27% and upper 27% groups differed significantly on all items (p=0.000). The item-total correlation values of the items in the scale were between 0.40 and 0.63. As a result of these assessments, it was concluded that the Marital Maturity Scale is a reliable and valid instrument to measure the marital maturity of single adults.
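    The reliability figures quoted above are Cronbach's alpha values, computed from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch (the score matrix below is made-up illustrative data, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative data only: 6 respondents answering 4 Likert-type items.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # strongly consistent items push alpha toward 1
```

    With items this internally consistent the statistic lands above the conventional 0.7 acceptability threshold, mirroring the per-dimension values reported in the abstract.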

  9. Abusive Supervision Scale Development in Indonesia

    OpenAIRE

    Wulani, Fenika; Purwanto, Bernadinus M; Handoko, Hani

    2014-01-01

    The purpose of this study was to develop a scale of abusive supervision in Indonesia. The study was conducted with a different context and scale development method from Tepper’s (2000) abusive supervision scale. The abusive supervision scale from Tepper (2000) was developed in the U.S., which has a cultural orientation of low power distance. The current study was conducted in Indonesia, which has a high power distance. This study used interview procedures to obtain information about superviso...

  10. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  11. One-fifth-scale and full-scale fuel element rocking tests

    International Nuclear Information System (INIS)

    Nau, P.V.; Olsen, B.E.

    1978-06-01

    Using 1/5-scale and full-scale (prototype H451) fuel elements, one, two, or three stacked elements on a clamped base element were rocked from an initial release position. Relative displacement, rock-down loads, and dowel pin shear forces were measured. A scaled comparison between 1/5-scale and full-scale results was made to evaluate the model scaling laws, and an error analysis was performed to assess the accuracy and usefulness of the test data.

  12. Three scales of motions associated with tornadoes

    International Nuclear Information System (INIS)

    Forbes, G.S.

    1978-03-01

    This dissertation explores three scales of motion commonly associated with tornadoes, and the interaction of these scales: the tornado cyclone, the tornado, and the suction vortex. The goal of the research is to specify in detail the character and interaction of these scales of motion to explain tornadic phenomena

  13. Length scale for configurational entropy in microemulsions

    NARCIS (Netherlands)

    Reiss, H.; Kegel, W.K.; Groenewold, J.

    1996-01-01

    In this paper we study the length scale that must be used in evaluating the mixing entropy in a microemulsion. The central idea involves the choice of a length scale in configuration space that is consistent with the physical definition of entropy in phase space. We show that this scale may be

  14. Continued validation of the Multidimensional Perfectionism Scale.

    Science.gov (United States)

    Clavin, S L; Clavin, R H; Gayton, W F; Broida, J

    1996-06-01

    Scores on the Multidimensional Perfectionism Scale have been correlated with measures of obsessive-compulsive tendencies for women, so the validity of scores on this scale for 41 men was examined. Scores on the Perfectionism Scale were significantly correlated (.47-.03) with scores on the Maudsley Obsessive-Compulsive Inventory.

  15. Toward seamless hydrologic predictions across spatial scales

    NARCIS (Netherlands)

    Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Rakovec, Oldrich; Zink, Matthias; Wanders, Niko; Eisner, Stephanie; Müller Schmied, Hannes; Sutanudjaja, Edwin; Warrach-Sagi, Kirsten; Attinger, Sabine

    2017-01-01

    Land surface and hydrologic models (LSMs/HMs) are used at diverse spatial resolutions ranging from catchment-scale (1-10 km) to global-scale (over 50 km) applications. Applying the same model structure at different spatial scales requires that the model estimates similar fluxes independent of the

  16. Scaling analysis in bepu licensing of LWR

    Energy Technology Data Exchange (ETDEWEB)

    D'auria, Francesco; Lanfredini, Marco; Muellner, Nikolaus [University of Pisa, Pisa (Italy)]

    2012-08-15

    'Scaling' plays an important role for safety analyses in the licensing of water cooled nuclear power reactors. Accident analysis, a subset of safety analysis, is mostly based on nuclear reactor system thermal hydraulics, and therefore on an adequate experimental database and, in recent licensing applications, on best estimate computer code calculations. In the field of nuclear reactor technology, only a small set of the needed experiments can be executed at a nuclear power plant; the major part of experiments, whether because of economics or because of safety concerns, has to be executed at reduced-scale facilities. How to address the scaling issue has been the subject of numerous investigations in the past few decades (much of the work was performed in the 1980s and 1990s), and is still the focus of many scientific studies. The present paper proposes a 'roadmap' to scaling. Key elements are the 'scaling pyramid', related 'scaling bridges', and a logical path across scaling achievements (which constitute the 'scaling puzzle'). The objective is to address the scaling issue when demonstrating the applicability of the system codes, the 'key to scaling', in the licensing process of a nuclear power plant. The proposed 'roadmap to scaling' aims at solving the 'scaling puzzle' by introducing a unified approach to the problem.

  17. Scaling analysis in bepu licensing of LWR

    International Nuclear Information System (INIS)

    D'auria, Francesco; Lanfredini, Marco; Muellner, Nikolaus

    2012-01-01

    'Scaling' plays an important role for safety analyses in the licensing of water cooled nuclear power reactors. Accident analysis, a subset of safety analysis, is mostly based on nuclear reactor system thermal hydraulics, and therefore on an adequate experimental database and, in recent licensing applications, on best estimate computer code calculations. In the field of nuclear reactor technology, only a small set of the needed experiments can be executed at a nuclear power plant; the major part of experiments, whether because of economics or because of safety concerns, has to be executed at reduced-scale facilities. How to address the scaling issue has been the subject of numerous investigations in the past few decades (much of the work was performed in the 1980s and 1990s), and is still the focus of many scientific studies. The present paper proposes a 'roadmap' to scaling. Key elements are the 'scaling pyramid', related 'scaling bridges', and a logical path across scaling achievements (which constitute the 'scaling puzzle'). The objective is to address the scaling issue when demonstrating the applicability of the system codes, the 'key to scaling', in the licensing process of a nuclear power plant. The proposed 'roadmap to scaling' aims at solving the 'scaling puzzle' by introducing a unified approach to the problem.

  18. Why Online Education Will Attain Full Scale

    Science.gov (United States)

    Sener, John

    2010-01-01

    Online higher education has attained scale and is poised to take the next step in its growth. Although significant obstacles to a full scale adoption of online education remain, we will see full scale adoption of online higher education within the next five to ten years. Practically all higher education students will experience online education in…

  19. Mineral scale management. Part II, Fundamental chemistry

    Science.gov (United States)

    Alan W. Rudie; Peter W. Hart

    2006-01-01

    The mineral scale that deposits in digesters and bleach plants is formed by a chemical precipitation process. As such, it is accurately modeled using the solubility product equilibrium constant. Although solubility product identifies the primary conditions that must be met for a scale problem to exist, the acid-base equilibria of the scaling anions often control where...

  20. Mechanics over micro and nano scales

    CERN Document Server

    Chakraborty, Suman

    2011-01-01

    Discusses the fundamentals of mechanics over micro and nano scales at a level accessible to multi-disciplinary researchers, with a balance of mathematical details and physical principles. Covers life sciences and chemistry for use in emerging applications related to mechanics over small scales. Demonstrates the explicit interconnection between various scale issues and the mechanics of miniaturized systems.

  1. 21 CFR 880.2720 - Patient scale.

    Science.gov (United States)

    2010-04-01

    ... Patient scale. (a) Identification. A patient scale is a device intended for medical purposes that is used... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Patient scale. 880.2720 Section 880.2720 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES...

  2. 76 FR 50881 - Required Scale Tests

    Science.gov (United States)

    2011-08-17

    ... RIN 0580-AB10 Required Scale Tests AGENCY: Grain Inspection, Packers and Stockyards Administration... required scale tests. Those documents defined ``limited seasonal basis'' incorrectly. This document... 20, 2011 (76 FR 3485) and on April 4, 2011 (76 FR 18348), concerning required scale tests. Those...

  3. 76 FR 3485 - Required Scale Tests

    Science.gov (United States)

    2011-01-20

    ...-AB10 Required Scale Tests AGENCY: Grain Inspection, Packers and Stockyards Administration, USDA. ACTION... their scales tested at least twice each calendar year at intervals of approximately 6 months. This final rule requires that regulated entities complete the first of the two scale tests between January 1 and...

  4. 76 FR 18348 - Required Scale Tests

    Science.gov (United States)

    2011-04-04

    ... RIN 0580-AB10 Required Scale Tests AGENCY: Grain Inspection, Packers and Stockyards Administration... published a document in the Federal Register on January 20, 2011 (76 FR 3485), defining required scale tests... the last sentence of paragraph (a) to read as follows: Sec. 201.72 Scales; testing of. (a...

  5. Scale dependent inference in landscape genetics

    Science.gov (United States)

    Samuel A. Cushman; Erin L. Landguth

    2010-01-01

    Ecological relationships between patterns and processes are highly scale dependent. This paper reports the first formal exploration of how changing scale of research away from the scale of the processes governing gene flow affects the results of landscape genetic analysis. We used an individual-based, spatially explicit simulation model to generate patterns of genetic...

  6. COVERS Neonatal Pain Scale: Development and Validation

    Directory of Open Access Journals (Sweden)

    Ivan L. Hand

    2010-01-01

    Full Text Available Newborns and infants are often exposed to painful procedures during hospitalization. Several different scales have been validated to assess pain in specific populations of pediatric patients, but no single scale can easily and accurately assess pain in all newborns and infants regardless of gestational age and disease state. A new pain scale was developed, the COVERS scale, which incorporates 6 physiological and behavioral measures for scoring. Newborns admitted to the Neonatal Intensive Care Unit or Well Baby Nursery were evaluated for pain/discomfort during two procedures, a heel prick and a diaper change. Pain was assessed using indicators from three previously established scales (CRIES, the Premature Infant Pain Profile, and the Neonatal Infant Pain Scale), as well as the COVERS Scale, depending upon gestational age. Premature infant testing resulted in similar pain assessments using the COVERS and PIPP scales, with an r=0.84. For the full-term infants, the COVERS scale and NIPS scale resulted in similar pain assessments, with an r=0.95. The COVERS scale is a valid pain scale that can be used in the clinical setting to assess pain in newborns and infants and is universally applicable to all neonates, regardless of their age or physiological state.

  7. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion, of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  8. Computational applications of DNA physical scales

    DEFF Research Database (Denmark)

    Baldi, Pierre; Chauvin, Yves; Brunak, Søren

    1998-01-01

    The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propeller twist, and trinucleotide scales such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example we construct a strand invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models.
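    The construction described, mapping a sequence onto a dinucleotide scale and symmetrizing over strands, can be sketched as follows. The numeric scale values here are illustrative placeholders, not the published base-stacking energies:

```python
# Sketch of a dinucleotide physical scale turned into a positional profile.
# The values below are illustrative placeholders, NOT measured stacking
# energies; a real analysis would load the published scale.
STACKING = {
    "AA": -5.4, "AC": -7.2, "AG": -6.8, "AT": -5.9,
    "CA": -7.0, "CC": -8.3, "CG": -9.6, "CT": -6.8,
    "GA": -7.5, "GC": -10.0, "GG": -8.3, "GT": -7.2,
    "TA": -3.8, "TC": -7.5, "TG": -7.0, "TT": -5.4,
}

def profile(seq: str) -> list[float]:
    """Map a DNA string to its per-position dinucleotide-scale profile."""
    seq = seq.upper()
    return [STACKING[seq[i:i + 2]] for i in range(len(seq) - 1)]

def reverse_complement(seq: str) -> str:
    return seq.upper().translate(str.maketrans("ACGT", "TGCA"))[::-1]

def strand_invariant_profile(seq: str) -> list[float]:
    """Average a sequence's profile with the (re-aligned) profile of its
    reverse complement, giving the same representation for either strand."""
    fwd = profile(seq)
    rev = profile(reverse_complement(seq))[::-1]
    return [(f + r) / 2 for f, r in zip(fwd, rev)]

print(strand_invariant_profile("GATTACA"))
```

    The resulting numeric profile, rather than the raw letters, is the kind of compact representation that can then feed a hidden Markov model or other pattern-discovery method.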

  9. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren

    1998-01-01

    Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models.

  10. Scaling solutions for dilaton quantum gravity

    Energy Technology Data Exchange (ETDEWEB)

    Henz, T.; Pawlowski, J.M., E-mail: j.pawlowski@thphys.uni-heidelberg.de; Wetterich, C.

    2017-06-10

    Scaling solutions for the effective action in dilaton quantum gravity are investigated within the functional renormalization group approach. We find numerical solutions that connect ultraviolet and infrared fixed points as the ratio between scalar field and renormalization scale k is varied. In the Einstein frame the quantum effective action corresponding to the scaling solutions becomes independent of k. The field equations derived from this effective action can be used directly for cosmology. Scale symmetry is spontaneously broken by a non-vanishing cosmological value of the scalar field. For the cosmology corresponding to our scaling solutions, inflation arises naturally. The effective cosmological constant becomes dynamical and vanishes asymptotically as time goes to infinity.

  11. Scaling laws for coastal overwash morphology

    Science.gov (United States)

    Lazarus, Eli D.

    2016-12-01

    Overwash is a physical process of coastal sediment transport driven by storm events and is essential to landscape resilience in low-lying barrier environments. This work establishes a comprehensive set of scaling laws for overwash morphology: unifying quantitative descriptions with which to compare overwash features by their morphological attributes across case examples. Such scaling laws also help relate overwash features to other morphodynamic phenomena. Here morphometric data from a physical experiment are compared with data from natural examples of overwash features. The resulting scaling relationships indicate scale invariance spanning several orders of magnitude. Furthermore, these new relationships for overwash morphology align with classic scaling laws for fluvial drainages and alluvial fans.

  12. SCALE criticality safety verification and validation package

    International Nuclear Information System (INIS)

    Bowman, S.M.; Emmett, M.B.; Jordan, W.C.

    1998-01-01

    Verification and validation (V and V) are essential elements of software quality assurance (QA) for computer codes that are used for performing scientific calculations. V and V provides a means to ensure the reliability and accuracy of such software. As part of the SCALE QA and V and V plans, a general V and V package for the SCALE criticality safety codes has been assembled, tested and documented. The SCALE criticality safety V and V package is being made available to SCALE users through the Radiation Safety Information Computational Center (RSICC) to assist them in performing adequate V and V for their SCALE applications

  13. Fluctuation scaling, Taylor's law, and crime.

    Directory of Open Access Journals (Sweden)

    Quentin S Hanley

    Full Text Available Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary, indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering, evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers, is unique in reporting on all processes contributing to the observed variance, and is either robust to or exhibits signs of many types of data manipulation.

  14. Fluctuation scaling, Taylor's law, and crime.

    Science.gov (United States)

    Hanley, Quentin S; Khatun, Suniya; Yosef, Amal; Dyer, Rachel-May

    2014-01-01

    Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers and is unique in reporting on all processes contributing to the observed variance and is either robust to or exhibits signs of many types of data manipulation.
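    Taylor's law, as used in the two records above, posits a power-law relation between the variance and the mean of counts, variance = a * mean**alpha, so the exponent alpha is the slope of a log-log regression (alpha = 1 for Poisson counts, larger values indicating clustering). A minimal sketch on synthetic count data; the gamma-Poisson generator and all parameter values are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "monthly report counts": 40 units (think policing neighbourhoods)
# observed for 48 months, with rates drawn so that the variance grows as a
# power of the mean (a gamma-Poisson, i.e. negative-binomial-style, generator).
means, variances = [], []
for rate in np.geomspace(1.0, 200.0, 40):
    counts = rng.poisson(rng.gamma(shape=5.0, scale=rate / 5.0, size=48))
    means.append(counts.mean())
    variances.append(counts.var(ddof=1))

# Taylor's law: variance = a * mean**alpha. Taking logs makes it linear,
# so alpha is the slope of a log-log least-squares fit.
alpha, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"estimated Taylor exponent alpha = {alpha:.2f}")
```

    For this overdispersed generator the fitted exponent lands between the Poisson value of 1 and the strongly clustered value of 2, the same range spanned by the crime categories in the abstract.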

  15. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.

  16. Invariant relationships deriving from classical scaling transformations

    International Nuclear Information System (INIS)

    Bludman, Sidney; Kennedy, Dallas C.

    2011-01-01

    Because scaling symmetries of the Euler-Lagrange equations are generally not variational symmetries of the action, they do not lead to conservation laws. Instead, an extension of Noether's theorem reduces the equations of motion to evolutionary laws that prove useful, even if the transformations are not symmetries of the equations of motion. In the case of scaling, symmetry leads to a scaling evolutionary law, a first-order equation in terms of scale invariants, linearly relating kinematic and dynamic degrees of freedom. This scaling evolutionary law appears in dynamical and in static systems. Applied to dynamical central-force systems, the scaling evolutionary equation leads to generalized virial laws, which linearly connect the kinetic and potential energies. Applied to barotropic hydrostatic spheres, the scaling evolutionary equation linearly connects the gravitational and internal energy densities. This implies well-known properties of polytropes, describing degenerate stars and chemically homogeneous nondegenerate stellar cores.

  17. The scaling issue: scientific opportunities

    Science.gov (United States)

    Orbach, Raymond L.

    2009-07-01

    A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.

  18. The scaling issue: scientific opportunities

    International Nuclear Information System (INIS)

    Orbach, Raymond L

    2009-01-01

    A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.

  19. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World famous architects such as Herzog & de Meuron, Zaha Hadid, Richard Meier and David Chipperfield challenge the exposure of concrete in their architecture. At home, this trend can be seen in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and in the black curved smooth concrete surfaces of Zaha Hadid's Ordrupgård. Furthermore, one can point to initiatives such as "Synlig beton" (visible concrete), presented on the website www.synligbeton.dk, and spæncom's aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project "Lasting large scale glazed concrete formwork," which I am working on at DTU's department of Architectural Engineering, will be able to complement these. It is a project where I...

  20. Scaling Agile Infrastructure to People

    CERN Document Server

    Jones, B; Traylen, S; Arias, N Barrientos

    2015-01-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice, the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow ...

  1. Scaling Agile Infrastructure to People

    Science.gov (United States)

    Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.

    2015-12-01

    When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice, the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.

  2. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  3. Self-scaling tumor growth

    DEFF Research Database (Denmark)

    Schmiegel, Jürgen

    We study the statistical properties of the star-shaped approximation of in vitro tumor profiles. The emphasis is on the two-point correlation structure of the radii of the tumor as a function of time and angle. In particular, we show that spatial two-point correlators follow a cosine law. Furthermore, we observe self-scaling behaviour of two-point correlators of different orders, i.e. correlators of a given order are a power law of the correlators of some other order. This power-law dependence is similar to what has been observed for the statistics of the energy-dissipation in a turbulent flow. Based on this similarity, we provide a Lévy based model that captures the correlation structure of the radii of the star-shaped tumor profiles.

  4. Universal scaling in sports ranking

    International Nuclear Information System (INIS)

    Deng Weibing; Li Wei; Cai Xu; Bulou, Alain; Wang Qiuping A

    2012-01-01

    Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on and so forth. Herewith, we study a specific kind—sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters. (paper)
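A minimal sketch of the toy model described in this abstract: pairs of ranked players meet, and the better-ranked player wins with a sigmoid probability of the rank difference. The sigmoid rate constant `k`, the match count, and the flat one-point-per-win scoring rule are illustrative assumptions, not the paper's exact parameters.

```python
import math
import random

def win_prob(rank_hi, rank_lo, k=0.01):
    """Probability that the better-ranked player wins, as a sigmoid of
    the rank difference (functional form and k are assumptions)."""
    diff = rank_lo - rank_hi            # positive: rank 1 is best
    return 1.0 / (1.0 + math.exp(-k * diff))

def simulate(n_players=1000, n_matches=200_000, seed=1):
    random.seed(seed)
    scores = [0] * n_players            # index = rank - 1
    for _ in range(n_matches):
        a, b = random.sample(range(n_players), 2)
        hi, lo = min(a, b), max(a, b)
        winner = hi if random.random() < win_prob(hi + 1, lo + 1) else lo
        scores[winner] += 1             # flat one-point scoring (assumption)
    return scores

scores = simulate()
# Better-ranked players should accumulate more points on average.
top = sum(scores[:100]) / 100
bottom = sum(scores[-100:]) / 100
print(top > bottom)
```

With any reasonable `k > 0`, the accumulated scores become strongly skewed toward the top ranks, which is the qualitative behaviour the empirical score distributions show.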

  5. Scales and scaling in turbulent ocean sciences; physics-biology coupling

    Science.gov (United States)

    Schmitt, Francois

    2015-04-01

    Geophysical fields possess huge fluctuations over many spatial and temporal scales. In the ocean, this property at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm), and scalar fields down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scale (the Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we will discuss the scale problem in turbulence modeling in the ocean, and the relation of Kolmogorov's and Batchelor's scales of turbulence in the ocean to the size of marine animals. We will also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
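The Kolmogorov and Batchelor scales mentioned above follow from simple dimensional formulas; a sketch, assuming typical seawater values for the kinematic viscosity, dissipation rate and scalar diffusivity (the numbers below are illustrative, not from the abstract):

```python
def kolmogorov_scale(nu, eps):
    """Smallest velocity scale eta = (nu^3 / eps)^(1/4), in metres."""
    return (nu**3 / eps) ** 0.25

def batchelor_scale(nu, eps, D):
    """Smallest scalar scale: eta reduced by sqrt(Sc), Sc = nu / D."""
    return kolmogorov_scale(nu, eps) * (D / nu) ** 0.5

nu = 1e-6    # kinematic viscosity of seawater, m^2/s (typical value)
eps = 1e-6   # turbulent dissipation rate, W/kg (illustrative)
D = 1e-9     # molecular diffusivity of a scalar, m^2/s (illustrative)

print(kolmogorov_scale(nu, eps))     # about 1e-3 m, i.e. the mm scale
print(batchelor_scale(nu, eps, D))   # roughly 30x smaller
```

For these values the Kolmogorov scale comes out at about a millimetre, consistent with the abstract, and the Batchelor scale is smaller by the square root of the Schmidt number.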

  6. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    From physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has additionally been investigated.

  7. Thermodynamic scaling behavior in genechips

    Directory of Open Access Journals (Sweden)

    Van Hummelen Paul

    2009-01-01

    Full Text Available Abstract Background: Affymetrix Genechips are characterized by probe pairs, a perfect match (PM) and a mismatch (MM) probe differing by a single nucleotide. Most of the data preprocessing algorithms neglect MM signals, as it was shown that MMs cannot be used as estimators of the non-specific hybridization as originally proposed by Affymetrix. The aim of this paper is to study in detail, on a large number of experiments, the behavior of the average PM/MM ratio. This is taken as an indicator of the quality of the hybridization and, when compared between different chip series, of the quality of the chip design. Results: About 250 different GeneChip hybridizations performed at the VIB Microarray Facility for Homo sapiens, Drosophila melanogaster, and Arabidopsis thaliana were analyzed. The investigation of such a large set of data from the same source minimizes systematic experimental variations that may arise from differences in protocols or from different laboratories. The PM/MM ratios are derived theoretically from thermodynamic laws and a link is made with the sequences of the PM and MM probes, more specifically with their central nucleotide triplets. Conclusion: The PM/MM ratios subdivided according to the different central nucleotide triplets follow qualitatively those deduced from the hybridization free energies in solution. It is shown also that the PM and MM histograms are related by a simple scale transformation, in agreement with what is to be expected from hybridization thermodynamics. Different quantitative behavior is observed for the different chip organisms analyzed, suggesting that some organism chips have superior probe design compared to others.

  8. Spatial scale separation in regional climate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Feser, F.

    2005-07-01

    In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea is that global and regional climate models perform best at different spatial scales, so the regional model should not alter the global model's results at large scales. The concept of large-scale nudging designed for this purpose controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the different simulations compared to the standard-approach ensemble, which occasionally shows large differences between individual realisations. For climate hindcasts this method leads to results which are on average closer to observed states than the standard approach. The analysis of the regional climate model simulation can also be improved by separating the results into different spatial scales. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than analysing unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter
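As a minimal illustration of spatial scale separation, the sketch below splits a 2-D field into large- and small-scale components with a sharp spectral low-pass filter. The cutoff wavenumber is an arbitrary assumption; the thesis itself develops carefully designed discrete filters rather than this idealized mask.

```python
import numpy as np

def scale_separate(field, cutoff):
    """Split a 2-D field into large- and small-scale parts using a
    sharp spectral low-pass filter (cutoff in cycles per grid point)."""
    F = np.fft.fft2(field)
    ky = np.fft.fftfreq(field.shape[0])[:, None]
    kx = np.fft.fftfreq(field.shape[1])[None, :]
    k = np.sqrt(kx**2 + ky**2)
    mask = k <= cutoff                    # keep only long wavelengths
    large = np.fft.ifft2(F * mask).real
    small = field - large                 # residual = small scales
    return large, small

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64))
large, small = scale_separate(field, cutoff=0.1)
# By construction the two components reconstruct the original field.
print(np.allclose(large + small, field))
```

The same decomposition lets 'added value' be sought on the small-scale component only, which is the role the thesis assigns to its digital filters.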

  9. Scaling: From quanta to nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Zuber, Novak, E-mail: rohatgi@bnl.go [703 New Mark Esplanade, Rockville, MD 20850 (United States)

    2010-08-15

    This paper has three objectives. The first objective is to show how the Einstein-de Broglie equation (EdB) can be extended to model and scale, via fractional scaling, both conservative and dissipative processes ranging in scale from quanta to nuclear reactors. The paper also discusses how and why a single equation and associated fractional scaling method generate for each process of change the corresponding scaling criterion. The versatility and capability of fractional scaling are demonstrated by applying it to: (a) particle dynamics, (b) conservative (Bernoulli) and dissipative (hydraulic jump) flows, (c) viscous and turbulent flows through rough and smooth pipes, and (d) momentum diffusion in a semi-infinite medium. The capability of fractional scaling to scale a process over a vast range of temporal and spatial scales is demonstrated by applying it to fluctuating processes. The application shows that the modeling of fluctuations in fluid mechanics is analogous to that in relativistic quantum field theory. Thus, Kolmogorov dissipation frequency and length are the analogs of the characteristic time and length of quantum fluctuations. The paper briefly discusses the applicability of the fractional scaling approach (FSA) to nanotechnology and biology. It also notes the analogy between FSA and the approach used to scale polymers. These applications demonstrate the power of scaling as well as the validity of Pierre-Gilles de Gennes' ideas concerning scaling, analogies and simplicity. They also demonstrate the usefulness and efficiency of his approach to solving scientific problems. The second objective is to note and discuss the benefits of applying FSA to NPP technology. The third objective is to present a state of the art assessment of thermal-hydraulics (T/H) capabilities and needs relevant to NPP.

  10. Scaling: From quanta to nuclear reactors

    International Nuclear Information System (INIS)

    Zuber, Novak

    2010-01-01

    This paper has three objectives. The first objective is to show how the Einstein-de Broglie equation (EdB) can be extended to model and scale, via fractional scaling, both conservative and dissipative processes ranging in scale from quanta to nuclear reactors. The paper also discusses how and why a single equation and associated fractional scaling method generate for each process of change the corresponding scaling criterion. The versatility and capability of fractional scaling are demonstrated by applying it to: (a) particle dynamics, (b) conservative (Bernoulli) and dissipative (hydraulic jump) flows, (c) viscous and turbulent flows through rough and smooth pipes, and (d) momentum diffusion in a semi-infinite medium. The capability of fractional scaling to scale a process over a vast range of temporal and spatial scales is demonstrated by applying it to fluctuating processes. The application shows that the modeling of fluctuations in fluid mechanics is analogous to that in relativistic quantum field theory. Thus, Kolmogorov dissipation frequency and length are the analogs of the characteristic time and length of quantum fluctuations. The paper briefly discusses the applicability of the fractional scaling approach (FSA) to nanotechnology and biology. It also notes the analogy between FSA and the approach used to scale polymers. These applications demonstrate the power of scaling as well as the validity of Pierre-Gilles de Gennes' ideas concerning scaling, analogies and simplicity. They also demonstrate the usefulness and efficiency of his approach to solving scientific problems. The second objective is to note and discuss the benefits of applying FSA to NPP technology. The third objective is to present a state of the art assessment of thermal-hydraulics (T/H) capabilities and needs relevant to NPP.

  11. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    Science.gov (United States)

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production implies having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and their lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed.

  12. Scaling Consumers' Purchase Involvement: A New Approach

    Directory of Open Access Journals (Sweden)

    Jörg Kraigher-Krainer

    2012-06-01

    Full Text Available A two-dimensional scale, called the ECID Scale, is presented in this paper. The scale is based on a comprehensive model and captures the two antecedent factors of purchase-related involvement, namely whether motivation is intrinsic or extrinsic and whether risk is perceived as low or high. The procedure of scale development and item selection is described. The scale turns out to perform well in terms of validity, reliability, and objectivity despite the use of a small set of items – four each – allowing for simultaneous measurements of up to ten purchases per respondent. The procedure for administering the scale is described so that it can now easily be applied by both scholars and practitioners. Finally, managerial implications of data received from its application, which provide insights into possible strategic marketing conclusions, are discussed.

  13. Scaling analysis of meteorite shower mass distributions

    DEFF Research Database (Denmark)

    Oddershede, Lene; Meibom, A.; Bohr, Jakob

    1998-01-01

    Meteorite showers are the remains of extraterrestrial objects which are captured by the gravitational field of the Earth. We have analyzed the mass distribution of fragments from 16 meteorite showers for scaling. The distributions exhibit distinct scaling behavior over several orders of magnitude; the observed scaling exponents vary from shower to shower. Half of the analyzed showers show a single scaling region while the other half show multiple scaling regimes. Such an analysis can provide knowledge about the fragmentation process and about the original meteoroid. We also suggest comparing the observed scaling exponents to exponents observed in laboratory experiments and discuss the possibility that one can derive insight into the original shapes of the meteoroids.
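A common, simple way to extract a scaling exponent from a fragment mass distribution is to regress log(rank) against log(mass); the sketch below applies it to synthetic Pareto-distributed fragments. The estimator and the synthetic exponent are illustrative assumptions; the paper's exact fitting procedure may differ.

```python
import numpy as np

def scaling_exponent(masses):
    """Estimate a power-law scaling exponent from fragment masses by
    fitting log(rank) vs. log(mass) over the rank-ordered sample."""
    m = np.sort(np.asarray(masses))[::-1]     # largest fragment first
    rank = np.arange(1, len(m) + 1)
    slope, _ = np.polyfit(np.log(m), np.log(rank), 1)
    return -slope

# Synthetic fragments drawn from a classical Pareto law, exponent 1.5.
rng = np.random.default_rng(2)
masses = rng.pareto(1.5, size=5000) + 1.0
alpha = scaling_exponent(masses)
print(round(alpha, 2))   # close to the true exponent 1.5
```

On real shower data, a break in the fitted line would signal the multiple scaling regimes reported for half of the analyzed showers.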

  14. Validation of the Early Functional Abilities scale

    DEFF Research Database (Denmark)

    Poulsen, Ingrid; Kreiner, Svend; Engberg, Aase W

    2018-01-01

    OBJECTIVE: The Early Functional Abilities scale assesses the restoration of brain function after brain injury, based on 4 dimensions. The primary objective of this study was to evaluate the validity, objectivity, reliability and measurement precision of the Early Functional Abilities scale by Rasch model item analysis. A secondary objective was to examine the relationship between the Early Functional Abilities scale and the Functional Independence Measurement™, in order to establish the criterion validity of the Early Functional Abilities scale and to compare the sensitivity of measurements using... (..., facio-oral, sensorimotor and communicative/cognitive functions). Removal of one item from the sensorimotor scale confirmed unidimensionality for each of the 4 subscales, but not for the entire scale. The Early Functional Abilities subscales are sensitive to differences between patients in ranges in which...

  15. A scale distortion theory of anchoring.

    Science.gov (United States)

    Frederick, Shane W; Mochon, Daniel

    2012-02-01

    We propose that anchoring is often best interpreted as a scaling effect--that the anchor changes how the response scale is used, not how the focal stimulus is perceived. Of importance, we maintain that this holds true even for so-called objective scales (e.g., pounds, calories, meters, etc.). In support of this theory of scale distortion, we show that prior exposure to a numeric standard changes respondents' use of that specific response scale but does not generalize to conceptually affiliated judgments rendered on similar scales. Our findings highlight the necessity of distinguishing response language effects from representational effects in places where the need for that distinction has often been assumed away.

  16. Scale and the acceptability of nuclear energy

    International Nuclear Information System (INIS)

    Wilbanks, T.J.

    1984-01-01

    A rather speculative exploration is presented of scale as it may affect the acceptability of nuclear energy. In our utilization of this energy option, how does large vs. small relate to attitudes toward it, and what can we learn from this about technology choices in the United States more generally? In order to address such a question, several stepping-stones are needed. First, scale is defined for the purposes of the paper. Second, recent experience with nuclear energy is reviewed: trends in the scale of use, the current status of nuclear energy as an option, and the social context for its acceptance problems. Third, conventional notions about the importance of scale in electricity generation are summarized. With these preliminaries out of the way, the paper then discusses apparent relationships between scale and the acceptance of nuclear energy and suggests some policy implications of these preliminary findings. Finally, some comments are offered about general relationships between scale and technology choice.

  17. Measuring Tourism motivation: Do Scales matter?

    OpenAIRE

    Huang, Songshan (Sam)

    2009-01-01

    Measuring tourist motivation has always been a challenging task for tourism researchers. This paper aimed to increase the understanding of tourist motivation measurement by comparing two frequently adopted motivation measurement approaches: self-perception (SP) and importance-rating (IR) approaches. Results indicated that both SP and IR scales were highly reliable in terms of internal consistency. However, respondents tended to rate more positively in the SP scale than in the IR scale. Factor...

  18. Further validation of the Indecisiveness Scale.

    Science.gov (United States)

    Gayton, W F; Clavin, R H; Clavin, S L; Broida, J

    1994-12-01

    Scores on the Indecisiveness Scale have been shown to be correlated with scores on measures of obsessive-compulsive tendencies and perfectionism for women. This study examined the validity of the Indecisiveness Scale with 41 men whose mean age was 21.1 yr. Indecisiveness scores were significantly correlated with scores on measures of obsessive-compulsive tendencies and perfectionism. Also, undeclared majors had a significantly higher mean on the Indecisiveness Scale than did declared majors.

  19. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  20. Resource Complementarity and IT Economies of Scale

    DEFF Research Database (Denmark)

    Woudstra, Ulco; Berghout, Egon; Tan, Chee-Wee

    2017-01-01

    In this study, we explore economies of scale for IT infrastructure and application services. An in-depth appreciation of economies of scale is imperative for an adequate understanding of the impact of IT investments. Our findings indicate that even low IT spending organizations can make a difference by devoting at least 60% of their total IT budget to IT infrastructure in order to foster economies of scale and extract strategic benefits.

  1. On BLM scale fixing in exclusive processes

    International Nuclear Information System (INIS)

    Anikin, I.V.; Pire, B.; Szymanowski, L.; Teryaev, O.V.; Wallon, S.

    2005-01-01

    We discuss the BLM scale fixing procedure in exclusive electroproduction processes in the Bjorken regime with rather large x_B. We show that in the case of vector meson production dominated in this case by quark exchange the usual way to apply the BLM method fails due to singularities present in the equations fixing the BLM scale. We argue that the BLM scale should be extracted from the squared amplitudes which are directly related to observables. (orig.)

  2. On BLM scale fixing in exclusive processes

    Energy Technology Data Exchange (ETDEWEB)

    Anikin, I.V. [JINR, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Universite Paris-Sud, LPT, Orsay (France); Pire, B. [Ecole Polytechnique, CPHT, Palaiseau (France); Szymanowski, L. [Soltan Institute for Nuclear Studies, Warsaw (Poland); Univ. de Liege, Inst. de Physique, Liege (Belgium); Teryaev, O.V. [JINR, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Wallon, S. [Universite Paris-Sud, LPT, Orsay (France)

    2005-07-01

    We discuss the BLM scale fixing procedure in exclusive electroproduction processes in the Bjorken regime with rather large x_B. We show that in the case of vector meson production dominated in this case by quark exchange the usual way to apply the BLM method fails due to singularities present in the equations fixing the BLM scale. We argue that the BLM scale should be extracted from the squared amplitudes which are directly related to observables. (orig.)

  3. Void probability scaling in hadron nucleus interactions

    International Nuclear Information System (INIS)

    Ghosh, Dipak; Deb, Argha; Bhattacharyya, Swarnapratim; Ghosh, Jayita; Bandyopadhyay, Prabhat; Das, Rupa; Mukherjee, Sima

    2002-01-01

    Heygi, while investigating the rapidity gap probability (which measures the chance of finding no particle in the pseudo-rapidity interval Δη), found that scaling behavior in the rapidity gap probability corresponds closely to the scaling of the void probability in galaxy correlation studies. The main aim of this paper is to study the scaling behavior of the rapidity gap probability

  4. On the frequency scalings of RF guns

    International Nuclear Information System (INIS)

    Lin, L.C.; Chen, S.C.; Wurtele, J.S.

    1995-01-01

    A frequency scaling law for RF guns is derived from the normalized Vlasov-Maxwell equations. It shows that higher frequency RF guns can generate higher brightness beams under the assumption that the accelerating gradient and all beam and structure parameters are scaled with the RF frequency. Numerical simulation results using MAGIC confirm the scaling law. A discussion of the range of applicability of the law is presented. copyright 1995 American Institute of Physics

  5. Scale-up of precipitation processes

    OpenAIRE

    Zauner, R.

    1999-01-01

    This thesis concerns the scale-up of precipitation processes aimed at predicting product particle characteristics. Although precipitation is widely used in the chemical and pharmaceutical industry, successful scale-up is difficult due to the absence of a validated methodology. It is found that none of the conventional scale-up criteria reported in the literature (equal power input per unit mass, equal tip speed, equal stirring rate) is capable of predicting the experimentally o...

  6. Using Imagers for Scaling Ecological Observations

    OpenAIRE

    Graham, Eric; Hicks, John; Riordan, Erin; Wang, Eric; Yuen, Eric

    2009-01-01

    Stationary and mobile ground-based cameras can be used to scale ecological observations, relating pixel information in images to in situ measurements. Currently there are four CENS projects that involve using cameras for scaling ecological observations: 1. Scaling from one individual to the landscape. Pan-Tilt-Zoom cameras can be zoomed in on a tight focus on individual plants and parts of individuals and then zoomed out to get a landscape view, composed of the same and similar species. 2...

  7. Scale up, then power down

    International Nuclear Information System (INIS)

    Pichon, Max

    2011-01-01

    Full text: The University of Queensland has switched on what it says is Australia's largest solar photovoltaic installation, a 1.2MW system that spans 11 rooftops at the St Lucia campus. The UQ Solar Array, which effectively coats four buildings with more than 5,000 polycrystalline silicon solar panels, will generate about 1,850MWh a year. “During the day, the system will provide up to six per cent of the university's power requirements, reducing greenhouse gas emissions by approximately 1,650 tonnes of CO2-e per annum,” said Rodger Whitby, the GM of generation for renewables company Ingenero. It also underpins a number of cutting-edge research projects in diverse fields, according to Professor Paul Meredith, who oversaw the design and installation of the solar array. “A major objective of our array research program is to provide a clearer understanding of how to integrate megawatt-scale renewable energy sources into an urban grid,” said Professor Meredith, of the School of Mathematics and Physics and the Global Change Institute. “Mid-size, commercial-scale renewable power generating systems like UQ's will become increasingly common in urban and remote areas. Addressing the engineering issues around how these systems can feed into and integrate with the grid is essential so that people can really understand and calculate their value as we transition to lower-emission forms of energy.” Electricity retailer Energex contributed $90,000 to the research project through state-of-the-art equipment to allow high-quality monitoring and analysis of the power feed. Another key research project addresses one of the most common criticisms of solar power: that it cannot replace baseload grid power. Through a partnership with Brisbane electricity storage technology company RedFlow, a 200kW battery bank will be connected to a 339kW section of the solar array. “The RedFlow system uses next-generation zinc bromine batteries,” Professor Meredith said.

  8. Functional nanometer-scale structures

    Science.gov (United States)

    Chan, Tsz On Mario

    Nanometer-scale structures have properties that are fundamentally different from their bulk counterparts. Much research effort has been devoted in the past decades to explore new fabrication techniques, model the physical properties of these structures, and construct functional devices. The ability to manipulate and control the structure of matter at the nanoscale has made many new classes of materials available for the study of fundamental physical processes and potential applications. The interplay between fabrication techniques and physical understanding of the nanostructures and processes has revolutionized the physical and material sciences, providing far superior properties in materials for novel applications that benefit society. This thesis consists of two major aspects of my graduate research in nano-scale materials. In the first part (Chapters 3-6), a comprehensive study on the nanostructures based on electrospinning and thermal treatment is presented. Electrospinning is a well-established method for producing high-aspect-ratio fibrous structures, with fiber diameters ranging from 1 nm to 1 μm. A polymeric solution is typically used as a precursor in electrospinning. In our study, the functionality of the nanostructure relies on both the nanostructure and material constituents. Metallic-ion-containing precursors were added to the polymeric precursor following a sol-gel process to prepare the solution suitable for electrospinning. A typical electrospinning process produces as-spun fibers containing both polymer and metallic salt precursors. Subsequent thermal treatments of the as-spun fibers were carried out in various conditions to produce desired structures. In most cases, polymer in the solution and the as-spun fibers acted as a backbone for the structure formation during the subsequent heat treatment, and were thermally removed in the final stage. Polymers were also designed to react with the metallic ion precursors during heat treatment in some

  9. Socially responsible marketing decisions - scale development

    Directory of Open Access Journals (Sweden)

    Dina Lončarić

    2009-07-01

    The purpose of this research is to develop a measurement scale for evaluating the level at which the concept of social responsibility is implemented in marketing decision making, in accordance with the quality-of-life marketing paradigm. A new scale of "socially responsible marketing decisions" has been formed and its content validity, reliability and dimensionality analyzed. The scale has been tested on a sample of the most successful Croatian firms. The research results lead us to conclude that the scale has satisfactory psychometric characteristics, but that it needs to be improved by generating new items and by testing it on a greater number of samples.

  10. International scaling of nuclear and radiological events

    International Nuclear Information System (INIS)

    Wang Yuhui; Wang Haidan

    2014-01-01

    Scales are inherent forms of measurement used in daily life, just like the Celsius or Fahrenheit scales for temperature and the Richter scale for earthquakes. Jointly developed by the IAEA and OECD/NEA in 1990, the International Nuclear and Radiological Event Scale (INES) helps nuclear and radiation safety authorities and the nuclear industry worldwide to rate nuclear and radiological events and to communicate their safety significance to the general public, the media and the technical community. INES was initially used to classify events at nuclear power plants only. It was subsequently extended to rate events associated with the transport, storage and use of radioactive material and radiation sources, from those occurring at nuclear facilities to those associated with industrial use. Since its inception, it has been adopted in 69 countries. Events are classified on the scale at seven levels: Levels 1-3 are called 'incidents' and Levels 4-7 'accidents'. The scale is designed so that the severity of an event is about ten times greater for each increase in level on the scale. Events without safety significance are called 'deviations' and are classified Below Scale/Level 0. INES classifies nuclear and radiological accidents and incidents by considering three areas of impact: People and the Environment; Radiological Barriers and Control; Defence-in-Depth. To date, two nuclear accidents have been rated at the highest level of the scale: Chernobyl and Fukushima. (authors)
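    The factor-of-ten-per-level design mentioned above can be stated as a one-line calculation. A hypothetical helper (not part of any INES document) illustrating the implied severity ratio between two levels:

```python
def severity_ratio(level_a, level_b):
    """Approximate severity ratio implied by the INES design,
    where each step up the scale corresponds to roughly a
    tenfold increase in severity."""
    return 10 ** (level_a - level_b)

# A Level 7 accident is roughly a thousand times more severe
# than a Level 4 accident under this logarithmic design.
print(severity_ratio(7, 4))
```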

  11. Self-adapted sliding scale spectroscopy ADC

    International Nuclear Information System (INIS)

    Xu Qichun; Wang Jingjin

    1992-01-01

    The traditional sliding scale technique causes a disabled range equal to the sliding length, which reduces the analysis range of an MCA. A method for reducing an ADC's DNL, called the self-adapted sliding scale method, has been designed and tested. With this method, the disabled range caused by the traditional sliding scale method is eliminated by a random trial scale, and no additional amplitude discriminator with a swinging threshold is needed. A special trial-and-correct logic is presented. The measured DNL of the spectroscopy ADC described here is less than 0.5%.
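    The classical sliding scale that this method refines can be demonstrated with a toy simulation. The sketch below (all parameters invented for illustration, not taken from the paper's ADC) builds an 8-bit quantizer with one deliberately narrow code and shows that adding a random analog offset before conversion and subtracting it digitally smears the DNL error across many codes:

```python
import random

N_CODES = 256  # toy 8-bit ADC; input measured in LSB units

def adc_convert(v):
    """Toy ADC: every code is an ideal 1 LSB wide, except code 100,
    which is only 0.5 LSB wide (a deliberate DNL defect)."""
    code = int(v)
    if code == 100 and v - 100.0 >= 0.5:
        code = 101  # the upper half of code 100 converts as 101
    return min(max(code, 0), N_CODES - 1)

def sliding_convert(v, rng, slide=16):
    """Classical sliding scale: add a random 0..slide-1 LSB offset
    in the analog domain, subtract it from the digital result."""
    r = rng.randrange(slide)
    return adc_convert(v + r) - r

def code_histogram(convert, lo=50.0, hi=150.0, samples=200000):
    """Hits per output code for a fine input ramp; an ideal ADC
    gives every code inside the ramp the same count."""
    counts = {}
    for i in range(samples):
        c = convert(lo + (hi - lo) * i / samples)
        counts[c] = counts.get(c, 0) + 1
    return counts

rng = random.Random(0)
plain = code_histogram(adc_convert)
slid = code_histogram(lambda v: sliding_convert(v, rng))

ideal = 200000 / 100.0  # expected hits per code

def dnl(hist, code):
    return hist.get(code, 0) / ideal - 1.0

print("DNL at code 100, plain ADC:   %+.2f LSB" % dnl(plain, 100))
print("worst |DNL| 60..140, sliding:  %.3f LSB"
      % max(abs(dnl(slid, c)) for c in range(60, 141)))
```

    The self-adapted variant described in the abstract goes further by using a random trial scale, avoiding the disabled range at the top of the conversion range that the fixed offset above would cause near full scale.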

  12. Selecting numerical scales for pairwise comparisons

    International Nuclear Information System (INIS)

    Elliott, Michael A.

    2010-01-01

    It is often desirable in decision analysis problems to elicit from an individual the rankings of a population of attributes according to the individual's preference and to understand the degree to which each attribute is preferred to the others. A common method for obtaining this information involves the use of pairwise comparisons, which allows an analyst to convert subjective expressions of preference between two attributes into numerical values indicating preferences across the entire population of attributes. Key to the use of pairwise comparisons is the underlying numerical scale that is used to convert subjective linguistic expressions of preference into numerical values. This scale represents the psychological manner in which individuals perceive increments of preference among abstract attributes and it has important implications for the distribution and consistency of an individual's preferences. Three popular scale types are examined: traditional integer scales, balanced scales, and power scales. Results of a study of 64 individuals responding to a hypothetical decision problem show that none of these scales can accurately capture the preferences of all individuals. A study of three individuals working on an actual engineering decision problem involving the design of a decay heat removal system for a nuclear fission reactor shows that the choice of scale can affect the preferred decision. It is concluded that applications of pairwise comparisons would benefit from permitting participants to choose the scale that best models their own particular way of thinking about the relative preference of attributes.
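    The three scale families can be sketched as mappings from a nine-grade verbal preference to a number, with weights then derived from the pairwise comparison matrix. The code below is a minimal illustration using geometric-mean prioritization (a common choice; the paper's exact elicitation and prioritization procedures are not reproduced here, and the example grades are invented):

```python
import math

# Three common numerical scales for a nine-grade verbal preference
# (grade 1 = indifference, grade 9 = extreme preference).
def integer_scale(g):       # traditional 1..9 integer scale
    return float(g)

def balanced_scale(g):      # w/(1-w), with w stepping 0.5..0.9
    w = 0.5 + 0.05 * (g - 1)
    return w / (1.0 - w)

def power_scale(g):         # geometric steps of sqrt(2)
    return math.sqrt(2.0) ** (g - 1)

def priority_weights(grades, scale):
    """grades[i][j]: verbal grade by which attribute i is preferred
    to j (negative means j is preferred to i, 1 means indifferent).
    Returns normalized weights via geometric-mean prioritization."""
    n = len(grades)
    a = [[scale(g) if g >= 1 else 1.0 / scale(-g) for g in row]
         for row in grades]
    gm = [math.prod(row) ** (1.0 / n) for row in a]
    total = sum(gm)
    return [w / total for w in gm]

# Invented example: attribute 0 moderately preferred to 1,
# strongly preferred to 2; attribute 1 moderately preferred to 2.
grades = [[ 1,  3,  5],
          [-3,  1,  3],
          [-5, -3,  1]]
for scale in (integer_scale, balanced_scale, power_scale):
    print(scale.__name__,
          [round(w, 3) for w in priority_weights(grades, scale)])
```

    Even for a consistent set of grades, the three scales spread the resulting weights quite differently, which is the mechanism behind the paper's finding that the choice of scale can change the preferred decision.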

  13. New SCALE graphical interface for criticality safety

    International Nuclear Information System (INIS)

    Bowman, Stephen M.; Horwedel, James E.

    2003-01-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. SCALE includes the well-known KENO V.a and KENO-VI three-dimensional (3-D) Monte Carlo criticality computer codes. One of the current development efforts aimed at making SCALE easier to use is the SCALE Graphically Enhanced Editing Wizard (GeeWiz). GeeWiz is compatible with SCALE 5 and runs on Windows personal computers. GeeWiz provides input menus and context-sensitive help to guide users through the setup of their input. It includes a direct link to KENO3D to allow the user to view the components of their geometry model as it is constructed. Once the input is complete, the user can click a button to run SCALE and another button to view the output. KENO3D has also been upgraded for compatibility with SCALE 5 and interfaces directly with GeeWiz. GeeWiz and KENO3D for SCALE 5 are planned for release in late 2003. The presentation of this paper is designed as a live demonstration of GeeWiz and KENO3D for SCALE 5. (author)

  14. Ergodicity breakdown and scaling from single sequences

    Energy Technology Data Exchange (ETDEWEB)

    Kalashyan, Armen K. [Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, TX 76203-1427 (United States); Buiatti, Marco [Laboratoire de Neurophysique et Physiologie, CNRS UMR 8119 Universite Rene Descartes - Paris 5 45, rue des Saints Peres, 75270 Paris Cedex 06 (France); Cognitive Neuroimaging Unit - INSERM U562, Service Hospitalier Frederic Joliot, CEA/DRM/DSV, 4 Place du general Leclerc, 91401 Orsay Cedex (France); Grigolini, Paolo [Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, TX 76203-1427 (United States); Dipartimento di Fisica ' E.Fermi' - Universita di Pisa and INFM, Largo Pontecorvo 3, 56127 Pisa (Italy); Istituto dei Processi Chimico, Fisici del CNR Area della Ricerca di Pisa, Via G. Moruzzi 1, 56124 Pisa (Italy)], E-mail: grigo@df.unipi.it

    2009-01-30

    In the ergodic regime, several methods efficiently estimate the temporal scaling of time series characterized by long-range power-law correlations by converting them into diffusion processes. However, in the condition of ergodicity breakdown, the same methods give ambiguous results. We show that in such regime, two different scaling behaviors emerge depending on the age of the windows used for the estimation. We explain the ambiguity of the estimation methods by the different influence of the two scaling behaviors on each method. Our results suggest that aging drastically alters the scaling properties of non-ergodic processes.

  15. Ergodicity breakdown and scaling from single sequences

    International Nuclear Information System (INIS)

    Kalashyan, Armen K.; Buiatti, Marco; Grigolini, Paolo

    2009-01-01

    In the ergodic regime, several methods efficiently estimate the temporal scaling of time series characterized by long-range power-law correlations by converting them into diffusion processes. However, in the condition of ergodicity breakdown, the same methods give ambiguous results. We show that in such regime, two different scaling behaviors emerge depending on the age of the windows used for the estimation. We explain the ambiguity of the estimation methods by the different influence of the two scaling behaviors on each method. Our results suggest that aging drastically alters the scaling properties of non-ergodic processes.

  16. Scale Mismatches in Management of Urban Landscapes

    Directory of Open Access Journals (Sweden)

    Sara T. Borgström

    2006-12-01

    Urban landscapes constitute the future environment for most of the world's human population. An increased understanding of the urbanization process and of the effects of urbanization at multiple scales is, therefore, key to ensuring human well-being. In many conventional natural resource management regimes, incomplete knowledge of ecosystem dynamics and institutional constraints often leads to institutional management frameworks that do not match the scale of ecological patterns and processes. In this paper, we argue that scale mismatches are particularly pronounced in urban landscapes. Urban green spaces provide numerous important ecosystem services to urban citizens, and the management of these urban green spaces, including recognition of scales, is crucial to the well-being of the citizens. From a qualitative study of the current management practices in five urban green spaces within the Greater Stockholm Metropolitan Area, Sweden, we found that (1) several spatial, temporal, and functional scales are recognized, but the cross-scale interactions are often neglected, and (2) spatial and temporal meso-scales are seldom given priority. One potential effect of the neglect of ecological cross-scale interactions in these highly fragmented landscapes is a gradual reduction in the capacity of the ecosystems to provide ecosystem services. Two important strategies for overcoming urban scale mismatches are suggested: (1) development of an integrative view of the whole urban social-ecological landscape, and (2) creation of adaptive governance systems to support practical management.

  17. Inflation in a Scale Invariant Universe

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Pedro G. [Oxford U.; Hill, Christopher T. [Fermilab; Noller, Johannes [Zurich U.; Ross, Graham G. [Oxford U., Theor. Phys.

    2018-02-16

    A scale-invariant universe can have a period of accelerated expansion at early times: inflation. We use a frame-invariant approach to calculate inflationary observables in a scale invariant theory of gravity involving two scalar fields - the spectral indices, the tensor to scalar ratio, the level of isocurvature modes and non-Gaussianity. We show that scale symmetry leads to an exact cancellation of isocurvature modes and that, in the scale-symmetry broken phase, this theory is well described by a single scalar field theory. We find the predictions of this theory strongly compatible with current observations.

  18. Water scaling in the North Sea oil and gas fields and scale prediction: An overview

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, M

    1997-12-31

    Water scaling is a common and major production chemistry problem in the North Sea oil and gas fields, and scale prediction has been an important means of assessing the potential and extent of scale deposition. This paper presents an overview of sulphate and carbonate scaling problems in the North Sea and a review of several widely used, commercially available scale prediction software packages. The water chemistries and scale types and severities are discussed in relation to the geographical distribution of the fields in the North Sea. The theories behind scale prediction are then briefly described. Five scale or geochemical models are presented, and the various definitions of the saturation index are compared and correlated. Views are then expressed on how to predict scale precipitation under extreme conditions such as those encountered in HPHT reservoirs. 15 refs., 7 figs., 9 tabs.
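    The saturation-index definitions that such prediction software compares all reduce to variants of the same logarithm. A minimal sketch for barite (activity coefficients, temperature and pressure corrections ignored; the brine concentrations are invented, not field data):

```python
import math

def saturation_index(iap, ksp):
    """SI = log10(IAP / Ksp).  SI > 0: supersaturated, scale can
    precipitate; SI = 0: equilibrium; SI < 0: undersaturated."""
    return math.log10(iap / ksp)

# Barite (BaSO4) with illustrative concentrations in mol/L and a
# textbook-order solubility product at 25 degC. Real predictions
# must correct for ionic strength (activity coefficients), T and P.
ba, so4 = 1.0e-4, 1.0e-3
ksp_barite = 1.08e-10
si = saturation_index(ba * so4, ksp_barite)
print("SI = %.2f" % si)  # positive: sulphate scaling expected
```

    The competing definitions reviewed in the paper differ mainly in whether they report this logarithm directly, a ratio, or an excess mass of precipitate, which is why the overview takes care to correlate them.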

  19. Dispersion and Cluster Scales in the Ocean

    Science.gov (United States)

    Kirwan, A. D., Jr.; Chang, H.; Huntley, H.; Carlson, D. F.; Mensa, J. A.; Poje, A. C.; Fox-Kemper, B.

    2017-12-01

    Spatial scales of ocean flows range from centimeters to thousands of kilometers. Because of their large Reynolds numbers these flows are considered turbulent. However, because of rotation and stratification constraints they do not conform to classical turbulence scaling theory. Mesoscale and large-scale motions are well described by geostrophic or "2D turbulence" theory; however, extending this theory to submesoscales has proved problematic. One obvious reason is the difficulty of obtaining reliable data over many orders of magnitude of spatial scales in an ocean environment. The goal of this presentation is to provide a preliminary synopsis of two recent experiments that overcame these obstacles. The first experiment, the Grand LAgrangian Deployment (GLAD), was conducted during July 2012 in the eastern half of the Gulf of Mexico. Here approximately 300 GPS-tracked drifters were deployed with the primary goal of determining whether the relative dispersion of an initially densely clustered array was driven by processes acting at local pair separation scales or by straining imposed by mesoscale motions. The second experiment was a component of the LAgrangian Submesoscale Experiment (LASER), conducted during the winter of 2016. Here thousands of bamboo plates were tracked optically from an aerostat. Together these two deployments provided an unprecedented data set on dispersion and clustering processes at scales from 1 to 10^6 meters. Calculations of statistics such as two-point separations, structure functions, and scale-dependent relative diffusivities showed an inverse energy cascade, as expected, at scales above 10 km, and a forward energy cascade at scales below 10 km with a possible energy input at Langmuir circulation scales. We also find evidence from structure function calculations for surface flow convergence at scales less than 10 km that accounts for material clustering at the ocean surface.
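    The two-point statistic at the heart of such analyses, relative dispersion, is simple to compute once drifter tracks are gridded in time. A hedged sketch with synthetic tracks (the GLAD/LASER processing chains involve far more, e.g. quality control and great-circle distances):

```python
def relative_dispersion(tracks, t):
    """Mean squared pair separation <D^2(t)> over all drifter pairs.
    tracks[k][t] = (x, y) position of drifter k at time index t,
    in consistent length units on a local tangent plane."""
    n = len(tracks)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = tracks[i][t][0] - tracks[j][t][0]
            dy = tracks[i][t][1] - tracks[j][t][1]
            total += dx * dx + dy * dy
            pairs += 1
    return total / pairs

# Synthetic example: three drifters whose pair separations grow
# linearly in time, so <D^2(t)> grows quadratically.
tracks = [[(0.0, 0.0) for t in range(10)],
          [(1.0 * t, 0.0) for t in range(10)],
          [(0.0, 2.0 * t) for t in range(10)]]
print([relative_dispersion(tracks, t) for t in (0, 1, 3)])
```

    Whether this quantity grows according to the local pair separation (Richardson-like) or the mesoscale strain (exponential) is exactly the question the GLAD deployment was designed to answer.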

  20. On the random cascading model study of anomalous scaling in multiparticle production with continuously diminishing scale

    International Nuclear Information System (INIS)

    Liu Lianshou; Zhang Yang; Wu Yuanfang

    1996-01-01

    The anomalous scaling of factorial moments with a continuously diminishing scale is studied using a random cascading model. It is shown that the models currently in use have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which gives good scaling properties also for a continuously varying scale. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)

  1. A satellite simulator for TRMM PR applied to climate model simulations

    Science.gov (United States)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

    Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR in order to evaluate simulations with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) performed within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with sub-grid scale convective precipitation variability deduced from TRMM PR observations is carried out.
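    A "simple parameterization method" for sub-grid convective precipitation can be as plain as assigning the grid-box mean rate to a random fraction of sub-columns. The sketch below is a hypothetical illustration (the column count and convective fraction are invented, not the MiKlip/COSP settings):

```python
import random

def distribute_convective(mean_precip, n_subcols, conv_fraction, rng):
    """Spread a grid-box-mean convective precipitation rate over a
    random subset of sub-columns covering roughly `conv_fraction` of
    the box, conserving the grid-box mean exactly."""
    n_wet = max(1, round(conv_fraction * n_subcols))
    rate = mean_precip * n_subcols / n_wet  # in-cloud rate
    cols = [0.0] * n_subcols
    for k in rng.sample(range(n_subcols), n_wet):
        cols[k] = rate
    return cols

rng = random.Random(42)
cols = distribute_convective(mean_precip=0.5, n_subcols=20,
                             conv_fraction=0.1, rng=rng)
print(sum(c > 0 for c in cols), sum(cols) / len(cols))
```

    Because simulated radar reflectivity is strongly nonlinear in the local rain rate, the choice of `conv_fraction` changes the reflectivity histogram even though the grid-box mean is unchanged, which is the sensitivity the abstract reports.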

  2. Price Discrimination, Economies of Scale, and Profits.

    Science.gov (United States)

    Park, Donghyun

    2000-01-01

    Demonstrates that it is possible for economies of scale to induce a price-discriminating monopolist to sell in an unprofitable market where the average cost always exceeds the price. States that higher profits in the profitable market caused by economies of scale may exceed losses incurred in the unprofitable market. (CMK)

  3. Geometrical scaling, Furry branching and minijets

    International Nuclear Information System (INIS)

    Hwa, R.C.

    1988-01-01

    Scaling properties and their violations in hadronic collisions are discussed in the framework of the geometrical branching model. Geometrical scaling supplemented by Furry branching characterizes the soft component, while the production of jets specifies the hard component. Many features of multiparticle production processes are well described by this model. 21 refs

  4. The Resiliency Scale for Young Adults

    Science.gov (United States)

    Prince-Embury, Sandra; Saklofske, Donald H.; Nordstokke, David W.

    2017-01-01

    The Resiliency Scale for Young Adults (RSYA) is presented as an upward extension of the Resiliency Scales for Children and Adolescents (RSCA). The RSYA is based on the "three-factor model of personal resiliency" including "mastery," "relatedness," and "emotional reactivity." Several stages of scale…

  5. Strontium Removal: Full-Scale Ohio Demonstrations

    Science.gov (United States)

    The objectives of this presentation are to present a brief overview of past bench-scale research to evaluate the impact lime softening on strontium removal from drinking water and present full-scale drinking water treatment studies to impact of lime softening and ion exchange sof...

  6. Evaluation of a constipation risk assessment scale.

    Science.gov (United States)

    Zernike, W; Henderson, A

    1999-06-01

    This project was undertaken in order to evaluate the utility of a constipation risk assessment scale and the accompanying bowel management protocol. The risk assessment scale was primarily introduced to teach and guide staff in managing constipation when caring for patients. The intention of the project was to reduce the incidence of constipation in patients during their admission to hospital.

  7. Scaling solutions for dilaton quantum gravity

    Directory of Open Access Journals (Sweden)

    T. Henz

    2017-06-01

    The field equations derived from this effective action can be used directly for cosmology. Scale symmetry is spontaneously broken by a non-vanishing cosmological value of the scalar field. For the cosmology corresponding to our scaling solutions, inflation arises naturally. The effective cosmological constant becomes dynamical and vanishes asymptotically as time goes to infinity.

  8. Scale invariant Volkov–Akulov supergravity

    Directory of Open Access Journals (Sweden)

    S. Ferrara

    2015-10-01

    A scale-invariant goldstino theory coupled to supergravity is obtained as a standard supergravity dual of a rigidly scale-invariant higher-curvature supergravity with a nilpotent chiral scalar curvature. The bosonic part of this theory describes a massless scalaron and a massive axion in a de Sitter Universe.

  9. Crown ratio influences allometric scaling in trees

    Science.gov (United States)

    Annikki Makela; Harry T. Valentine

    2006-01-01

    Allometric theories suggest that the size and shape of organisms follow universal rules, with a tendency toward quarter-power scaling. In woody plants, however, structure is influenced by branch death and shedding, which leads to decreasing crown ratios, accumulation of heartwood, and stem and branch tapering. This paper examines the impacts on allometric scaling of...

  10. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  11. Scaling and critical behaviour in nuclear fragmentation

    International Nuclear Information System (INIS)

    Campi, X.

    1990-09-01

    These notes review recent results on nuclear fragmentation. An analysis of experimental data from exclusive experiments is made in the framework of modern theories of fragmentation of finite size objects. We discuss the existence of a critical regime of fragmentation and the relevance of scaling and finite size scaling.

  12. SCALE Code System 6.2.2

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jessee, Matthew Anderson [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-05-01

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  13. Statistics for Locally Scaled Point Patterns

    DEFF Research Database (Denmark)

    Prokesová, Michaela; Hahn, Ute; Vedel Jensen, Eva B.

    2006-01-01

    scale factor. The main emphasis of the present paper is on analysis of such models. Statistical methods are developed for estimation of scaling function and template parameters as well as for model validation. The proposed methods are assessed by simulation and used in the analysis of a vegetation...

  14. Scale-sensitive governance of the environment

    NARCIS (Netherlands)

    Padt, F.; Opdam, P.F.M.; Polman, N.B.P.; Termeer, C.J.A.M.

    2014-01-01

    Sensitivity to scales is one of the key challenges in environmental governance. Climate change, food production, energy supply, and natural resource management are examples of environmental challenges that stretch across scales and require action at multiple levels. Governance systems are typically

  15. Designing the Nuclear Energy Attitude Scale.

    Science.gov (United States)

    Calhoun, Lawrence; And Others

    1988-01-01

    Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)

  16. Moral regulation: historical geography and scale

    OpenAIRE

    Legg, Stephen; Brown, Michael

    2013-01-01

    This paper introduces a special issue on the historical geography of moral regulation and scale. The paper examines the rich and varied work of geographers on moral geographies before looking at wider work on moral regulation influenced by Michel Foucault. Highlighting the significance of the neglected dimension of scale, the paper introduces the themes examined in the subsequent papers.

  17. Scaling with known uncertainty: a synthesis

    Science.gov (United States)

    Jianguo Wu; Harbin Li; K. Bruce Jones; Orie L. Loucks

    2006-01-01

    Scale is a fundamental concept in ecology and all sciences (Levin 1992, Wu and Loucks 1995, Barenblatt 1996), which has received increasing attention in recent years. The previous chapters have demonstrated an immense diversity of scaling issues present in different areas of ecology, covering species distribution, population dynamics, ecosystem processes, and...

  18. Speculation about near-wall turbulence scales

    International Nuclear Information System (INIS)

    Yurchenko, N F

    2008-01-01

    A strategy to control near-wall turbulence modifying scales of fluid motion is developed. The boundary-layer flow is shown to respond selectively to the scale of streamwise vortices initiated, e.g. with the spanwise regular temperature distribution over a model surface. It is used to generate sustainable streamwise vortices and thus to optimize integral flow characteristics.

  19. Reliability of Multi-Category Rating Scales

    Science.gov (United States)

    Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.

    2013-01-01

    The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…

  20. Getting to Scale: Evidence, Professionalism, and Community

    Science.gov (United States)

    Slavin, Robert E.

    2016-01-01

    Evidence-based reform, in which proven programs are scaled up to reach many students, is playing an increasing role in American education. This article summarizes articles in this issue to explain how Reading Recovery has managed to sustain itself and go to scale over more than 30 years. It argues that Reading Recovery has succeeded due to a focus…