Bukovinszky, T.; Gols, R.; Agrawal, A.A.; Roge, C.; Bezemer, T.M.; Biere, A.; Harvey, J.A.
2014-01-01
Following its introduction into Europe (EU), the common milkweed (Asclepias syriaca) has been free of most specialist herbivores that are present in its native North American (NA) range, except for the oleander aphid Aphis nerii. We compared EU and NA populations of A. nerii on EU and NA milkweed
Dombrovsky, Aviv; Luria, Neta
2013-04-01
In a survey conducted during 2011, a local strain of Aphid lethal paralysis virus (ALPV) was identified and isolated from a wild population of Aphis nerii aphids living on Nerium oleander plants in northern Israel. The new strain was tentatively named ALPV-An. RNA extracted from the viral particles allowed the amplification and determination of the complete genome sequence. The virus genome comprises 9835 nucleotides. In a BLAST search analysis, the ALPV-An sequence showed 89 % nucleotide sequence identity with the whole genome of a South African ALPV, and 96 and 94 % amino acid sequence identity with the ORF1 and ORF2 of that strain, respectively. In preliminary experiments, spray-applied, purified ALPV virions were highly pathogenic to the green peach aphid Myzus persicae; 95 % mortality was recorded 4 days post-infection. These preliminary results demonstrate the potential of ALPV for use as a biological control agent against some aphids. Surprisingly, no visible ALPV pathogenic effects, such as morphological changes or paralysis, were observed in the A. nerii aphids infected with ALPV-An. The absence of clear ALPV symptoms in A. nerii led to the formulation of two hypotheses, which were partially examined in this study. The first hypothesis suggests that A. nerii is resistant to or tolerant of ALPV, while the second proposes that ALPV-An may be a mild strain of ALPV. Currently, our results favor the first hypothesis, since ALPV-An is cryptic in A. nerii aphids and can be lethal to M. persicae aphids.
Manfred Hartbauer
2010-04-01
The prevalent way aphids accomplish colony defense against natural enemies is a mutualistic relationship with ants, the presence of a specialised soldier caste typical of eusocial aphids, or both. Although aphid species lacking these defense lines also live in groups, communal defense against natural predators has not previously been observed in them. Individuals of Aphis nerii (oleander aphid) and Uroleucon hypochoeridis, an aphid species feeding on Hypochoeris radicata (hairy cat's ear), show a behavioral response to visual stimulation in the form of spinning or twitching, often accompanied by coordinated kicks executed with the hind legs. Interestingly, this behaviour is highly synchronized among members of a colony, and repetitive visual stimulation caused strong habituation. Observations of natural aphid colonies revealed that a collective twitching and kicking response (CTKR) was frequently evoked during oviposition attempts of the parasitoid wasp Aphidius colemani and during attacks of aphidophagous larvae. CTKR effectively interrupted oviposition attempts of this parasitoid wasp and even repelled it from colonies after consecutive CTKRs. In contrast, solitary feeding A. nerii individuals were not able to successfully repel this parasitoid wasp. In addition, CTKR was also evoked by gentle substrate vibrations. Laser vibrometry of the substrate revealed twitching-associated vibrations that form a train of sharp acceleration peaks in the course of a CTKR. This suggests that visual signals in combination with twitching-related substrate vibrations may play an important role in synchronising defense among members of a colony. In both aphid species, collective defense in encounters with different natural enemies was executed in a stereotypical way and was similar to CTKR evoked through visual stimulation. This cooperative defense behavior provides an example of a surprising sociality that can be found
Mohammad Rouhani
2012-12-01
The oleander aphid, Aphis nerii Boyer de Fonscolombe, is one of the common pests of ornamental plants in the families Apocynaceae and Asclepiadaceae. It is distributed throughout the world and is responsible for the mortality of a large number of oleander (Nerium oleander L.) shrubs each year. In this research, the insecticidal activity of Ag nanoparticles against A. nerii was investigated. Nanoparticles of Ag and Ag-Zn were synthesized through a solvothermal method; insecticidal solutions of different concentrations were prepared from them and tested on A. nerii. For comparison purposes, imidacloprid was also used as a conventional insecticide. In the experiments, the LC50 values for imidacloprid, Ag, and Ag-Zn nanoparticles were calculated to be 0.13 μL mL-1, 424.67 mg mL-1, and 539.46 mg mL-1, respectively. The results showed that Ag nanoparticles can be used as a valuable tool in pest management programs for A. nerii. Additionally, the study showed that imidacloprid at 1 μL mL-1 and nanoparticles at 700 mg mL-1 had the highest insect mortality effect.
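The LC50 values above come from fitting a dose-response curve to bioassay mortality data. As an illustrative sketch (not the authors' actual analysis), a log-dose logistic model makes the relationship explicit; the parameters a and b below are invented for illustration, not fitted to the study's data:

```python
import math

def mortality(conc, a, b):
    """Logistic dose-response: predicted mortality fraction at concentration conc.
    a (intercept) and b (slope) act on log10 concentration."""
    return 1.0 / (1.0 + math.exp(-(a + b * math.log10(conc))))

def lc50(a, b):
    """Concentration giving 50% mortality: solve a + b*log10(c) = 0 for c."""
    return 10.0 ** (-a / b)

# Hypothetical fitted parameters for an Ag-nanoparticle bioassay (illustrative only)
a, b = -6.8, 2.59
c50 = lc50(a, b)
print(round(c50, 1))                    # LC50 under these assumed parameters
print(round(mortality(c50, a, b), 2))   # -> 0.5 by construction
```

In practice the parameters would be estimated by probit or logistic regression on the observed (concentration, mortality) pairs before reading off the LC50.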
Harrison, John Scott; Mondor, Edward B.
2011-01-01
Background The importance of genetic diversity in successful biological invasions is unclear. In animals, but not necessarily plants, increased genetic diversity is generally associated with successful colonization and establishment of novel habitats. The Oleander aphid, Aphis nerii, though native to the Mediterranean region, is an invasive pest species throughout much of the world. Feeding primarily on Oleander (Nerium oleander) and Milkweed (Asclepias spp.) under natural conditions, these plants are unlikely to support aphid populations year round in the southern US. The objective of this study was to describe the genetic variation within and among US populations of A. nerii, during extinction/recolonization events, to better understand the population ecology of this invasive species. Methodology/Principal Findings We used five microsatellite markers to assess genetic diversity over a two-year period within and among three aphid populations separated by small (100 km) and large (3,700 km) geographic distances on two host plant species. Here we provide evidence for A. nerii “superclones”. Genotypic variation was absent in all populations (i.e., each population consisted of a single multilocus genotype (MLG) or “clone”) and the genetic composition of only one population completely changed across years. There was no evidence of sexual reproduction or host races on different plant species. Conclusions/Significance Aphis nerii is a well-established invasive species despite having extremely low genetic diversity. As this aphid appears to be obligately asexual, it may share more similarities with clonally reproducing invasive plants than with other animals. Patterns of temporal and geographic genetic variation, viewed in the context of its population dynamics, have important implications for the management of invasive pests and the evolutionary biology of asexual species. PMID:21408073
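The "superclone" finding rests on counting distinct multilocus genotypes (MLGs) across individuals. A minimal sketch of that bookkeeping, using invented allele calls rather than the study's actual marker data:

```python
# Each individual is a list of (allele1, allele2) pairs, one per microsatellite
# locus; five loci here, mirroring the five markers used in the study.
# All data below are invented for illustration.

def count_mlgs(population):
    """Number of distinct multilocus genotypes in a population."""
    # Sort alleles within each locus so (180, 184) == (184, 180)
    normalized = {tuple(tuple(sorted(locus)) for locus in indiv)
                  for indiv in population}
    return len(normalized)

pop = [
    [(180, 184), (95, 95), (210, 214), (150, 152), (300, 300)],
    [(184, 180), (95, 95), (214, 210), (152, 150), (300, 300)],  # same clone
    [(180, 184), (95, 95), (210, 214), (150, 152), (300, 300)],
]
print(count_mlgs(pop))  # -> 1: a single "superclone"
```

A population in which this count is 1, as reported in the abstract, consists of a single clone.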
Howard, Matt C.
2018-01-01
Scale pretests analyze the suitability of individual scale items for further analysis, whether by judging their face validity, identifying wording concerns, or assessing other aspects. The current article reviews scale pretests, separated into qualitative and quantitative methods, in order to identify the differences, similarities, and even existence of the…
Falk, C.; And Others
The development of the Maslowian Scale, a method of revealing a picture of one's needs and concerns based on Abraham Maslow's levels of self-actualization, is described. This paper also explains how the scale is supported by the theories of L. Kohlberg, C. Rogers, and T. Rusk. After a literature search, a list of statements was generated…
Framing scales and scaling frames
van Lieshout, M.; Dewulf, A.; Aarts, N.; Termeer, K.
2009-01-01
Policy problems are not just out there. Actors highlight different aspects of a situation as problematic and situate the problem on different scales. In this study we will analyse the way actors apply scales in their talk (or texts) to frame the complex decision-making process of the establishment
Ronald L Breiger
2015-11-01
While “scaling up” is a lively topic in network science and Big Data analysis today, my purpose in this essay is to articulate an alternative problem, that of “scaling down,” which I believe will also require increased attention in coming years. “Scaling down” is the problem of how macro-level features of Big Data affect, shape, and evoke lower-level features and processes. I identify four aspects of this problem: the extent to which findings from studies of Facebook and other Big-Data platforms apply to human behavior at the scale of church suppers and department politics where we spend much of our lives; the extent to which the mathematics of scaling might be consistent with behavioral principles, moving beyond a “universal” theory of networks to the study of variation within and between networks; and how a large social field, including its history and culture, shapes the typical representations, interactions, and strategies at local levels in a text or social network.
Golokhvastov, A.I.; )
2001-01-01
A correct version of the KNO scaling of multiplicity distributions is discussed in detail. Some assertions on KNO-scaling violation based on the misinterpretation of experimental data behavior are analyzed. An accurate comparison with experiment is presented for the distributions of negative particles in e+e- annihilation at √s = 3-161 GeV, in inelastic pp interactions at √s = 2.4-62 GeV, and in nucleus-nucleus interactions at p_lab = 4.5-520 GeV/c per nucleon. The p̄p data at √s = 546 GeV are also considered.
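KNO scaling asserts that multiplicity distributions measured at different energies collapse onto one universal curve ψ(z) = ⟨n⟩P(n) with z = n/⟨n⟩. A minimal sketch of the rescaling transform (the toy distribution below is invented, not experimental data):

```python
def kno_rescale(pn):
    """Map a multiplicity distribution P(n), given as a list indexed by n,
    to KNO variables: returns (z, psi) pairs with z = n/<n>, psi = <n>*P(n).
    KNO scaling holds if these points fall on one universal curve
    across collision energies."""
    mean_n = sum(n * p for n, p in enumerate(pn))
    return [(n / mean_n, mean_n * p) for n, p in enumerate(pn) if p > 0]

# Toy multiplicity distribution (illustrative only)
pn = [0.05, 0.15, 0.30, 0.25, 0.15, 0.10]
points = kno_rescale(pn)
print(points[0])  # first point: z = 0, psi = <n> * P(0)
```

Testing the scaling then amounts to overlaying such rescaled points from data taken at several energies and checking whether they trace a single curve.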
Wilson, K M; Huff, J L
2001-05-01
The influence on social behavior of beliefs in Satan and the nature of evil has received little empirical study. Elaine Pagels (1995) in her book, The Origin of Satan, argued that Christians' intolerance toward others is due to their belief in an active Satan. In this study, more than 200 college undergraduates completed the Manitoba Prejudice Scale and the Attitudes Toward Homosexuals Scale (B. Altemeyer, 1988), as well as the Belief in an Active Satan Scale, developed by the authors. The Belief in an Active Satan Scale demonstrated good internal consistency and temporal stability. Correlational analyses revealed that for the female participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men and intolerance toward ethnic minorities. For the male participants, belief in an active Satan was directly related to intolerance toward lesbians and gay men but was not significantly related to intolerance toward ethnic minorities. Results of this research showed that it is possible to meaningfully measure belief in an active Satan and that such beliefs may encourage intolerance toward others.
Friar, J.L.
1998-01-01
Nuclear scales are discussed from the nuclear physics viewpoint. The conventional nuclear potential is characterized as a black box that interpolates nucleon-nucleon (NN) data, while being constrained by the best possible theoretical input. The latter consists of the longer-range parts of the NN force (e.g., OPEP, TPEP, the π-γ force), which can be calculated using chiral perturbation theory and gauged using modern phase-shift analyses. The shorter-range parts of the force are effectively parameterized by moments of the interaction that are independent of the details of the force model, in analogy to chiral perturbation theory. Results of GFMC calculations in light nuclei are interpreted in terms of fundamental scales, which are in good agreement with expectations from chiral effective field theories. Problems with spin-orbit-type observables are noted
Christopher H. Childers
2016-03-01
This manuscript demonstrates the molecular scale cure rate dependence of di-functional epoxide based thermoset polymers cured with amines. A series of cure heating ramp rates were used to determine the influence of ramp rate on the glass transition temperature (Tg), sub-Tg transitions, and the average free volume hole size in these systems. The networks were comprised of 3,3′-diaminodiphenyl sulfone (33DDS) and diglycidyl ether of bisphenol F (DGEBF) and were cured at ramp rates ranging from 0.5 to 20 °C/min. Differential scanning calorimetry (DSC) and NIR spectroscopy were used to explore the cure ramp rate dependence of the polymer network growth, whereas broadband dielectric spectroscopy (BDS) and free volume hole size measurements were used to interrogate the networks' molecular level structural variations upon curing at variable heating ramp rates. It was found that although the Tg of the polymer matrices was similar, the NIR and DSC measurements revealed a strong correlation for how these networks grow in relation to the cure heating ramp rate. The free volume analysis and BDS results for the cured samples suggest differences in the molecular architecture of the matrix polymers due to cure heating rate dependence.
Scaling of Metabolic Scaling within Physical Limits
Douglas S. Glazier
2014-10-01
Both the slope and elevation of scaling relationships between log metabolic rate and log body size vary taxonomically and in relation to physiological or developmental state, ecological lifestyle and environmental conditions. Here I discuss how the recently proposed metabolic-level boundaries hypothesis (MLBH) provides a useful conceptual framework for explaining and predicting much, but not all, of this variation. This hypothesis is based on three major assumptions: (1) various processes related to body volume and surface area exert state-dependent effects on the scaling slope for metabolic rate in relation to body mass; (2) the elevation and slope of metabolic scaling relationships are linked; and (3) both intrinsic (anatomical, biochemical and physiological) and extrinsic (ecological) factors can affect metabolic scaling. According to the MLBH, the diversity of metabolic scaling relationships occurs within physical boundary limits related to body volume and surface area. Within these limits, specific metabolic scaling slopes can be predicted from the metabolic level (or scaling elevation) of a species or group of species. In essence, metabolic scaling itself scales with metabolic level, which is in turn contingent on various intrinsic and extrinsic conditions operating in physiological or evolutionary time. The MLBH represents a “meta-mechanism” or collection of multiple, specific mechanisms that have contingent, state-dependent effects. As such, the MLBH is Darwinian in approach (the theory of natural selection is also meta-mechanistic), in contrast to currently influential metabolic scaling theory that is Newtonian in approach (i.e., based on unitary deterministic laws). Furthermore, the MLBH can be viewed as part of a more general theory that includes other mechanisms that may also affect metabolic scaling.
Flux scaling, ultimate regime: with the Nusselt number and the mixing-length scales, Nusselt number and Reynolds number (w'd/ν) scalings are obtained; this scaling is expected to occur at extremely high Ra in Rayleigh-Bénard convection (the ultimate regime).
Svendsen, Morten Bo Søndergaard; Christensen, Emil Aputsiaq Flindt; Steffensen, John Fleng
2017-01-01
Conventionally, dynamic energy budget (DEB) models operate with animals that have maintenance rates scaling with their body volume, and assimilation rates scaling with body surface area. However, when applying such criteria for the individual in a population level model, the emergent behaviour...
Atlantic Salmon Scale Measurements
National Oceanic and Atmospheric Administration, Department of Commerce — Scales are collected annually from smolt trapping operations in Maine as wellas other sampling opportunities (e.g. marine surveys, fishery sampling etc.). Scale...
Padt, F.J.G.; Arts, B.J.M.
2014-01-01
This chapter brings some clarity to the scale debate. It bridges a variety of approaches, definitions and jargon used in various disciplines in order to provide common ground for a concept of scale as a basis for scale-sensitive governance of the environment. The chapter introduces the concept of
Optimal renormalization scales and commensurate scale relations
Brodsky, S.J.; Lu, H.J.
1996-01-01
Commensurate scale relations relate observables to observables and thus are independent of theoretical conventions, such as the choice of intermediate renormalization scheme. The physical quantities are related at commensurate scales which satisfy a transitivity rule which ensures that predictions are independent of the choice of an intermediate renormalization scheme. QCD can thus be tested in a new and precise way by checking that the observables track both in their relative normalization and in their commensurate scale dependence. For example, the radiative corrections to the Bjorken sum rule at a given momentum transfer Q can be predicted from measurements of the e+e- annihilation cross section at a corresponding commensurate energy scale √s ∝ Q, thus generalizing Crewther's relation to non-conformal QCD. The coefficients that appear in this perturbative expansion take the form of a simple geometric series and thus have no renormalon divergent behavior. The authors also discuss scale-fixed relations between the threshold corrections to the heavy quark production cross section in e+e- annihilation and the heavy quark coupling α_V, which is measurable in lattice gauge theory.
B Bello; M Junker
2006-01-01
Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so installation of large-scale hydrogen production plants will be needed. In this context, the development of low-cost, large-scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers for producing hydrogen in the future. The different electrolysis technologies were compared, and a state of the art of currently available electrolysis modules was compiled. A review of the large-scale electrolysis plants that have been installed around the world was also carried out, and the main projects related to large-scale electrolysis were listed. The economics of large-scale electrolysers were discussed, and the influence of energy prices on the cost of hydrogen production by large-scale electrolysis was evaluated. (authors)
Scaling of differential equations
Langtangen, Hans Petter
2016-01-01
The book serves both as a reference for various scaled models with corresponding dimensionless numbers, and as a resource for learning the art of scaling. A special feature of the book is the emphasis on how to create software for scaled models, based on existing software for unscaled models. Scaling (or non-dimensionalization) is a mathematical technique that greatly simplifies the setting of input parameters in numerical simulations. Moreover, scaling enhances the understanding of how different physical processes interact in a differential equation model. Compared to the existing literature, where the topic of scaling is frequently encountered, but very often in only a brief and shallow setting, the present book gives much more thorough explanations of how to reason about finding the right scales. This process is highly problem dependent, and therefore the book features a lot of worked examples, from very simple ODEs to systems of PDEs, especially from fluid mechanics. The text is easily accessible and exam...
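The core idea described above (scaling collapses a model's parameters into a few dimensionless groups) can be sketched with logistic growth, where u = N/K and τ = rt reduce dN/dt = rN(1 - N/K) to the parameter-free du/dτ = u(1 - u). The parameter values below are arbitrary, chosen only to demonstrate the equivalence:

```python
def euler(f, y0, t_end, steps):
    """Forward-Euler integration of dy/dt = f(y) on [0, t_end]."""
    dt, y = t_end / steps, y0
    for _ in range(steps):
        y += dt * f(y)
    return y

# Unscaled logistic growth: dN/dt = r*N*(1 - N/K)
r, K, N0 = 0.8, 1000.0, 50.0

# Scaled form: with u = N/K and tau = r*t, du/dtau = u*(1 - u).
# All parameters collapse away -- the point of non-dimensionalization.
t_end, steps = 5.0, 100_000
N = euler(lambda N: r * N * (1 - N / K), N0, t_end, steps)
u = euler(lambda u: u * (1 - u), N0 / K, r * t_end, steps)

print(abs(N - K * u) < 1e-6 * K)  # -> True: scaled run recovers the original
```

One scaled simulation thus stands in for every (r, K, N0) combination with the same N0/K, which is exactly the input-parameter simplification the book describes.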
Small scale models equal large scale savings
Lee, R.; Segroves, R.
1994-01-01
A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)
Scale and scaling in agronomy and environmental sciences
Scale is of paramount importance in environmental studies, engineering, and design. The unique course covers the following topics: scale and scaling, methods and theories, scaling in soils and other porous media, scaling in plants and crops; scaling in landscapes and watersheds, and scaling in agro...
Semerci, Nuriye
2000-01-01
The main purpose of this study is to develop a scale for critical thinking. The Scale of Critical Thinking was administered to 200 students. The scale has 55 items in total, four of which are negative and 51 of which are positive. The KMO (Kaiser-Meyer-Olkin) value is 0.75, the Bartlett test value is 7145.41, and the Cronbach Alpha value is 0.90.
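The Cronbach alpha reported above is an internal-consistency statistic computed from item variances and the variance of total scores. A minimal sketch with invented responses (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical responses: 3 items, 4 respondents (invented data)
items = [[4, 3, 5, 2], [4, 3, 5, 2], [4, 3, 5, 2]]  # perfectly consistent
print(round(cronbach_alpha(items), 6))  # -> 1.0
```

Identical item columns give the maximum alpha of 1.0; real scales with 55 partially correlated items would land below that, e.g. the 0.90 reported here.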
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Flynn, Tim
This 25-item scale for rating prekindergarten children concerns personal and cognitive skills. Directions for using the scale are provided. Personal skills include personal hygiene, communication skills, eating habits, relationships with the teacher, peer relations, and personal behavior. Cognitive skills rated are verbal skills, object…
Jung, Lee Ann
2018-01-01
What is Goal Attainment Scaling? In this article, Lee Ann Jung defines it as a way to measure a student's progress toward an individualized goal. Instead of measuring a skill at a set time (for instance, on a test or other assignment), Goal Attainment Scaling tracks the steps a student takes over the course of a year in a targeted skill. Together,…
Magnetron injection gun scaling
Lawson, W.
1988-01-01
Existing analytic design equations for magnetron injection guns (MIG's) are approximated to obtain a set of scaling laws. The constraints are chosen to examine the maximum peak power capabilities of MIG's. The scaling laws are compared with exact solutions of the design equations and are supported by MIG simulations
Image scaling curve generation
2012-01-01
The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then
There is a need to develop scale explicit understanding of erosion to overcome existing conceptual and methodological flaws in our modelling methods currently applied to understand the process of erosion, transport and deposition at the catchment scale. These models need to be based on a sound under...
DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.
2010-01-01
The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement
Banavar, Jayanth
2009-03-01
The unity of life is expressed not only in the universal basis of inheritance and energetics at the molecular level, but also in the pervasive scaling of traits with body size at the whole-organism level. More than 75 years ago, Kleiber and Brody and Proctor independently showed that the metabolic rates, B, of mammals and birds scale as the three-quarter power of their mass, M. Subsequent studies showed that most biological rates and times scale as M^-1/4 and M^1/4 respectively, and that these so-called quarter-power scaling relations hold for a variety of organisms, from unicellular prokaryotes and eukaryotes to trees and mammals. The wide applicability of Kleiber's law, across the 22 orders of magnitude of body mass from minute bacteria to giant whales and sequoias, raises the hope that there is some simple general explanation that underlies the incredible diversity of form and function. We will present a general theoretical framework for understanding the relationship between metabolic rate, B, and body mass, M. We show how the pervasive quarter-power biological scaling relations arise naturally from optimal directed resource supply systems. This framework robustly predicts that: 1) whole organism power and resource supply rate, B, scale as M^3/4; 2) most other rates, such as heart rate and maximal population growth rate, scale as M^-1/4; 3) most biological times, such as blood circulation time and lifespan, scale as M^1/4; and 4) the average velocity of flow through the network, v, such as the speed of blood and oxygen delivery, scales as M^1/12. Our framework is valid even when there is no underlying network. Our theory is applicable to unicellular organisms as well as to large animals and plants. This work was carried out in collaboration with Amos Maritan along with Jim Brown, John Damuth, Melanie Moses, Andrea Rinaldo, and Geoff West.
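The quarter-power relations described above are easy to sketch numerically. The normalization constant b0 below is a rough placeholder, not a measured value; the exponents are the ones the abstract states:

```python
def kleiber(mass_kg, b0=3.4):
    """Metabolic rate B = b0 * M^(3/4) (Kleiber's law).
    b0 is a taxon-dependent normalization; 3.4 is an assumed
    placeholder here, not a measured constant."""
    return b0 * mass_kg ** 0.75

# A 10,000-fold increase in mass gives a 1,000-fold increase in
# metabolic rate, since (10^4)^(3/4) = 10^3 ...
ratio = kleiber(10_000.0) / kleiber(1.0)
print(round(ratio))  # -> 1000

# ... and a 10-fold DECREASE in mass-specific rate, since B/M ~ M^-1/4:
specific = (kleiber(10_000.0) / 10_000.0) / (kleiber(1.0) / 1.0)
print(round(specific, 4))  # -> 0.1
```

The same arithmetic applied with exponent -1/4 reproduces the predicted slowing of heart rate and lengthening of lifespan with body mass.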
Yupapin, Preecha
2013-01-01
The behavior of light in small scale optics or nano/micro optical devices has shown promising results, which can be used for basic and applied research, especially in nanoelectronics. Small Scale Optics presents the use of optical nonlinear behaviors for spins, antennae, and whispering gallery modes within micro/nano devices and circuits, which can be used in many applications. This book proposes a new design for a small scale optical device: a microring resonator device. Most chapters are based on the proposed device, which uses a configuration known as a PANDA ring resonator. Analytical and nu
Nottale, Laurent
2003-01-01
The principle of relativity, when applied to scale transformations, leads to a suggested generalization of the fundamental dilation laws. These new special scale-relativistic resolution transformations involve log-Lorentz factors and lead to the occurrence of a minimal and a maximal length-scale in nature, which are invariant under dilations. The minimal length-scale, which replaces zero from the viewpoint of its physical properties, is identified with the Planck length l_P, and the maximal scale, which replaces infinity, is identified with the cosmic scale L = Λ^-1/2, where Λ is the cosmological constant. The new interpretation of the Planck scale has several implications for the structure and history of the early Universe: we consider the questions of the origin, of the status of physical laws at very early times, of the horizon/causality problem, and of fluctuations at the recombination epoch. The new interpretation of the cosmic scale has consequences for our knowledge of the present universe, concerning in particular Mach's principle, the large-number coincidence, the problem of the vacuum energy density, and the nature and value of the cosmological constant. The value (theoretically predicted ten years ago) of the scaled cosmological constant Ω_Λ = 0.75 ± 0.15 is now supported by several different experiments (Hubble diagram of supernovae, Boomerang measurements, gravitational lensing by clusters of galaxies). The scale-relativity framework also allows one to suggest a solution to the missing mass problem, and to make theoretical predictions of fundamental energy scales, thanks to the interpretation of new structures in scale space: fractal/classical transitions as Compton lengths, mass-coupling relations, and the critical value 4π^2 of inverse couplings. Among them, we find a structure at 3.27 ± 0.26 × 10^20 eV, which agrees closely with the observed highest-energy cosmic rays at 3.2 ± 0.9 × 10^20 eV, and another at 5.3 × 10^-3 eV, which corresponds to the
Hegyi, S.
1998-01-01
A generalization of the Koba-Nielsen-Olesen (KNO) scaling law of the multiplicity distributions P(n) is presented. It consists of a change in the normalization point of P(n), compensated by a suitable change in the renormalized parameters, and a rescaling. Iterative repetition of the transformation yields the sequence of higher-order moment distributions of P(n). Each member of this sequence may exhibit data-collapsing behavior in case of violation of the original KNO scaling hypothesis. It is shown that the iterative procedure can be viewed as varying the collision energy, i.e. the moment distributions of P(n) can represent the pattern of pre-asymptotic KNO scaling violation. The fixed points of the iteration are determined and a consistency test based on Feynman scaling is given. (author)
Lysenko, W.P.
1986-01-01
Accelerator scaling laws, how they can be generated, and how they are used are discussed. A scaling law is a relation between machine parameters and beam parameters. An alternative point of view is that a scaling law is an imposed relation between the equations of motion and the initial conditions. The relation between the parameters is obtained by requiring the beam to be matched. (A beam is said to be matched if the phase-space distribution function is a function of single-particle invariants of the motion.) Because of this restriction, the number of independent parameters describing the system is reduced. Using simple models for bunched- and unbunched-beam situations, scaling laws are shown to determine the general behavior of beams in accelerators. Such knowledge is useful in design studies for new machines such as high-brightness linacs. The simple model presented shows much of the same behavior as a more detailed RFQ model.
Kaneko, Makoto; Shirai, Tatsuya; Tsuji, Toshio
2000-01-01
This paper discusses the scale-dependent grasp. Suppose that a human approaches an object initially placed on a table and finally achieves an enveloping grasp. Under such initial and final conditions, he (or she) unconsciously changes the grasp strategy according to the size of the object, even though the objects have similar geometry. We call this grasp planning the scale-dependent grasp. We find that grasp patterns also change according to the surface friction and the geometry of the cross section in additi...
Fast ignition breakeven scaling
Slutz, Stephen A.; Vesey, Roger Alan
2005-01-01
A series of numerical simulations have been performed to determine scaling laws for fast ignition breakeven of a hot spot formed by energetic particles created by a short-pulse laser. Hot-spot breakeven is defined as the point at which the fusion yield equals the total energy deposited in the hot spot through both the initial compression and the subsequent heating. In these simulations, only a small portion of a previously compressed mass of deuterium-tritium fuel is heated on a short time scale, i.e., the hot spot is tamped by the cold dense fuel which surrounds it. The hot-spot tamping reduces the minimum energy required to obtain breakeven as compared to the situation where the entire fuel mass is heated, as was assumed in a previous study [S. A. Slutz, R. A. Vesey, I. Shoemaker, T. A. Mehlhorn, and K. Cochrane, Phys. Plasmas 7, 3483 (2004)]. The minimum energy required to obtain hot-spot breakeven is given approximately by the scaling law E_T = 7.5(ρ/100)^-1.87 kJ for tamped hot spots, as compared to the previously reported scaling E_UT = 15.3(ρ/100)^-1.5 kJ for untamped hot spots. The size of the compressed fuel mass and the focusability of the particles generated by the short-pulse laser determine which scaling law to use for an experiment designed to achieve hot-spot breakeven.
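The two breakeven scaling laws quoted above can be coded directly; a minimal sketch, assuming ρ is the compressed fuel density in g/cm³ as in the cited study:

```python
# Hot-spot breakeven scaling laws from the abstract:
#   tamped:   E_T  = 7.5  * (rho/100)^-1.87 kJ
#   untamped: E_UT = 15.3 * (rho/100)^-1.5  kJ

def breakeven_energy_kj(rho, tamped=True):
    """Approximate minimum hot-spot energy (kJ) for breakeven at density rho."""
    if tamped:
        return 7.5 * (rho / 100.0) ** -1.87
    return 15.3 * (rho / 100.0) ** -1.5

# At rho = 100 g/cm^3 the laws reduce to their prefactors: 7.5 and 15.3 kJ.
e_tamped = breakeven_energy_kj(100.0)
e_untamped = breakeven_energy_kj(100.0, tamped=False)
```

The steeper tamped exponent means the advantage of tamping grows with compression: at ρ = 300 g/cm³ the tamped requirement falls to roughly a third of the untamped one.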
Dvali, Gia; Kolanovic, Marko; Nitti, Francesco; Gabadadze, Gregory
2002-01-01
We propose a framework in which the quantum gravity scale can be as low as 10^-3 eV. The key assumption is that the standard model ultraviolet cutoff is much higher than the quantum gravity scale. This ensures that we observe conventional weak gravity. We construct an explicit brane-world model in which the brane-localized standard model is coupled to strong 5D gravity of infinite-volume flat extra space. Because of the high ultraviolet scale, the standard model fields generate a large graviton kinetic term on the brane. This kinetic term 'shields' the standard model from the strong bulk gravity. As a result, an observer on the brane sees weak 4D gravity up to astronomically large distances beyond which gravity becomes five dimensional. Modeling quantum gravity above its scale by the closed string spectrum we show that the shielding phenomenon protects the standard model from an apparent phenomenological catastrophe due to the exponentially large number of light string states. The collider experiments, astrophysics, cosmology and gravity measurements independently point to the same lower bound on the quantum gravity scale, 10^-3 eV. For this value the model has experimental signatures both for colliders and for submillimeter gravity measurements. Black holes reveal certain interesting properties in this framework
Universities scale like cities.
van Raan, Anthony F J
2013-01-01
Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit a nonlinear, particularly a power-law, scaling with population size. These nonlinear effects are common to all cities, with similar power-law exponents. These findings mean that the larger the city, the more disproportionately it is a place of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power-law scaling. In this paper we demonstrate that universities show a similar behavior to cities in the distribution of the 'gross university income' in terms of total number of citations over 'size' in terms of total number of publications. Moreover, the power-law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities, the top 100 European universities, we are also able to distinguish between properties of universities with as well as without selection of one specific local property, the quality of a university in terms of its average field-normalized citation impact. It also reveals an interesting observation concerning the working of a crucial property in networked systems, preferential attachment.
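The power-law scaling invoked above is typically estimated by ordinary least squares in log-log space. A minimal sketch with synthetic data; the exponent 1.2 and prefactor 2.0 are invented for illustration, not values from the study:

```python
# Fit Y = a * X**b by least squares on log(Y) = log(a) + b*log(X).
import math

def fit_power_law(xs, ys):
    """Return (a, b) of Y = a * X**b via least squares in log-log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic "universities": citations grow superlinearly with publications.
pubs = [100, 500, 1000, 5000, 10000]
cites = [2.0 * p ** 1.2 for p in pubs]
a, b = fit_power_law(pubs, cites)  # exact data, so the fit recovers a and b
```

An exponent b > 1 is the superlinear ("disproportionate") regime the abstract describes; deviations of individual universities from the fitted line are the residuals attributed to local properties.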
Child Development Program Evaluation Scale.
Fiene, Richard J.
The Child Development Program Evaluation Scale (CDPES) is actually two scales in one, a licensing scale and a quality scale. Licensing predictor items have been found to predict overall compliance of child day care centers with state regulations in four states. Quality scale items have been found to predict the overall quality of child day care…
2014-01-01
This document presents the International Nuclear Event Scale (INES), which has been created to classify nuclear and radiological events in terms of severity. This scale comprises eight levels, from 0 to 7, ranging from a slight but noticeable shift with respect to nominal operation to a major accident. Criteria used for incident and accident classification are indicated; they are based on consequences outside the site, consequences within the site, and degradation of in-depth defence. The benefits and weaknesses of this scale are briefly indicated. The major concerned actors are the IAEA, the NEA and the ASN. Some key figures are given (number of declared events and incidents), and a ranking of the main nuclear events is proposed with a brief description of each event: Chernobyl, Fukushima, Kyshtym, Three Mile Island, Sellafield, Tokaimura, Saint-Laurent-des-Eaux. Countries which have adopted INES are indicated, as well as the number of incident reports in France to the ASN
Wavelets, vibrations and scalings
Meyer, Yves
1997-01-01
Physicists and mathematicians are intensely studying fractal sets and fractal curves. Mandelbrot advocated modeling of real-life signals by fractal or multifractal functions. One example is fractional Brownian motion, where large-scale behavior is related to a corresponding infrared divergence. Self-similarities and scaling laws play a key role in this new area. There is a widely accepted belief that wavelet analysis should provide the best available tool to unveil such scaling laws. And orthonormal wavelet bases are the only existing bases which are structurally invariant through dyadic dilations. This book discusses the relevance of wavelet analysis to problems in which self-similarities are important. Among the conclusions drawn are the following: 1) A weak form of self-similarity can be given a simple characterization through size estimates on wavelet coefficients, and 2) Wavelet bases can be tuned in order to provide a sharper characterization of this self-similarity. A pioneer of the wavelet "saga", Meye...
Ellis, John; Nanopoulos, Dimitri V.; Olive, Keith A.
2016-01-01
Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation naturally yields predictions similar to those of the Starobinsky model based on $R + R^2$ gravity, with a tilted spectrum of scalar perturbations, $n_s \sim 0.96$, and small values of the tensor-to-scalar perturbation ratio, $r < 0.1$, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.
Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.
2016-05-01
Supersymmetry is the most natural framework for physics above the TeV scale, and the corresponding framework for early-Universe cosmology, including inflation, is supergravity. No-scale supergravity emerges from generic string compactifications and yields a non-negative potential, and is therefore a plausible framework for constructing models of inflation. No-scale inflation naturally yields predictions similar to those of the Starobinsky model based on R + R^2 gravity, with a tilted spectrum of scalar perturbations, n_s ∼ 0.96, and small values of the tensor-to-scalar perturbation ratio, r < 0.1, as favoured by Planck and other data on the cosmic microwave background (CMB). Detailed measurements of the CMB may provide insights into the embedding of inflation within string theory as well as its links to collider physics.
Inverse scale space decomposition
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex, even, and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
Rittenberg, V.
1983-01-01
Fisher's finite-size scaling describes the crossover from the singular behaviour of thermodynamic quantities at the critical point to the analytic behaviour of the finite system. Recent extensions of the method, the transfer matrix technique and the Hamiltonian formalism, are discussed in this paper. The method is presented, with equations deriving the scaling function, critical temperature, and exponent ν. As an application of the method, a 3-state Hamiltonian with Z_3 global symmetry is studied. Diagonalization of the Hamiltonian for finite chains allows one to estimate the critical exponents, and also to discover new phase transitions at lower temperatures. The critical points λ and indices ν estimated by finite-size scaling are given
Spatial ecology across scales.
Hastings, Alan; Petrovskii, Sergei; Morozov, Andrew
2011-04-23
The international conference 'Models in population dynamics and ecology 2010: animal movement, dispersal and spatial ecology' took place at the University of Leicester, UK, on 1-3 September 2010, focusing on mathematical approaches to spatial population dynamics and emphasizing cross-scale issues. Exciting new developments in scaling up from individual-level movement to descriptions of this movement at the macroscopic level highlighted the importance of mechanistic approaches, with different descriptions at the microscopic level leading to different ecological outcomes. At higher levels of organization, different macroscopic descriptions of movement also led to different properties at the ecosystem and larger scales. New developments, from Lévy flight descriptions to the incorporation of new methods from physics and elsewhere, are revitalizing research in spatial ecology, which will both increase understanding of fundamental ecological processes and lead to tools for better management.
West, G.B.
1988-01-01
Although much of the intuition for interpreting the high-energy data as scattering from structureless constituents came from nuclear physics (and to a lesser extent atomic physics), virtually no data existed for nuclear targets in the non-relativistic regime until relatively recently. It is therefore not so surprising that, in spite of the fact that the basic nuclear physics has been well understood for a very long time, the corresponding non-relativistic scaling law was not written down until after the relativistic one, relevant to particle physics, had been explored. Of course, to the extent that these scaling laws simply reflect quasi-elastic scattering of the probe from the constituents, they contain little new physics once the nature of the constituents is known and understood. On the other hand, deviations from scaling represent corrections to the impulse approximation and can reflect important dynamical and coherent features of the system. Furthermore, as will be discussed in detail here, the scaling curve itself represents the single-particle momentum distribution of constituents inside the target. It is therefore prudent to plot the data in terms of a suitable scaling variable, since this immediately focuses attention on the dominant physics. Extraneous physics, such as Rutherford scattering in the case of electrons, or magnetic scattering in the case of thermal neutrons, is factored out, and the use of a scaling variable (such as y) automatically takes into account the fact that the target is a bound state of well-defined constituents. In this talk I shall concentrate almost entirely on non-relativistic systems. Although the formalism applies equally well to both electron scattering from nuclei and thermal neutron scattering from liquids, I shall, because of my background, usually be thinking of the former. On the other hand, I shall completely ignore spin considerations so, ironically, the results actually apply more to the latter case!
Elders Health Empowerment Scale
2014-01-01
Introduction: Empowerment refers to patient skills that allow patients to become primary decision-makers in control of the daily self-management of health problems. As important as the concept is, particularly for elders with chronic diseases, few available instruments have been validated for use with Spanish-speaking people. Objective: To translate and adapt the Health Empowerment Scale (HES) for a Spanish-speaking older-adult sample and perform its psychometric validation. Methods: The HES was adapted from the Diabetes Empowerment Scale-Short Form. Where "diabetes" was mentioned in the original tool, it was replaced with "health" terms to cover all kinds of conditions that could affect health empowerment. Statistical and psychometric analyses were conducted on 648 urban-dwelling seniors. Results: The HES had acceptable internal consistency, with a Cronbach's α of 0.89. Convergent validity was supported by significant Pearson correlations between HES total and item scores and the General Self-Efficacy Scale (r = 0.77), the Swedish Rheumatic Disease Empowerment Scale (r = 0.69), and the Making Decisions Empowerment Scale (r = 0.70). Construct validity was evaluated using item analysis, the split-half test, and corrected item-total correlation coefficients, with good internal consistency (α > 0.8). Content validity was supported by Scale and Item Content Validity Indices of 0.98 and 1.0, respectively. Conclusions: The HES had acceptable face validity and reliability coefficients, which, added to its ease of administration and users' unbiased comprehension, could make it a suitable tool for evaluating elders' outpatient empowerment-based medical education programs. PMID:25767307
Pfirsch, D.; Duechs, D.F.
1985-01-01
A number of statistical implications of empirical scaling laws in the form of power products obtained by linear regression are analysed. The sensitivity of the error to a change of exponents is described by a sensitivity factor, and the uncertainty of predictions by a ''range of predictions factor''. Inner relations in the statistical material are discussed, as well as the consequences of discarding variables. A recipe is given for the computations to be done. The whole is exemplified by considering scaling laws for the electron energy confinement time of ohmically heated tokamak plasmas. (author)
Tokamak confinement scaling laws
Connor, J.
1998-01-01
The scaling of energy confinement with engineering parameters, such as plasma current and major radius, is important for establishing the size of an ignited fusion device. Tokamaks exhibit a variety of modes of operation with different confinement properties. At present there is no adequate first principles theory to predict tokamak energy confinement and the empirical scaling method is the preferred approach to designing next step tokamaks. This paper reviews a number of robust theoretical concepts, such as dimensional analysis and stability boundaries, which provide a framework for characterising and understanding tokamak confinement and, therefore, generate more confidence in using empirical laws for extrapolation to future devices. (author)
Nielsen, Kim L.; Niordson, Christian F.; Hutchinson, John W.
2016-01-01
The rolling process is widely used in the metal forming industry and has been so for many years. However, the process has attracted renewed interest as it recently has been adapted to very small scales, where conventional plasticity theory cannot accurately predict the material response. It is well... Metals are known to be stronger when large strain gradients appear over a few microns; hence, the forces involved in the rolling process are expected to increase relatively at these smaller scales. In the present numerical analysis, a steady-state modeling technique that enables convergence without...
Christensen, Jannie Kristine Bang; Nielsen, Jeppe Agger; Gustafsson, Jeppe
through negotiating, mobilizing coalitions, and legitimacy building. To illustrate and further develop this conceptualization, we build on insights from a longitudinal case study (2008-2014) and provide a rich empirical account of how a Danish telemedicine pilot was transformed into a large-scale telemedicine project through simultaneous translation and theorization efforts in a cross-sectorial, politicized social context. Although we focus on upscaling as a bottom-up process (from pilot to large scale), we argue that translation and theorization, and associated political behavior, occur in a broader...
Petrie, L.M.
1984-01-01
The SCALE driver was designed to allow implementation of a modular code system consisting of control modules, which determine the calculation path, and functional modules, which perform the basic calculations. The user can either select a control module and have that module determine the execution path, or the user can select functional modules directly by input
Furmanski, W.
1981-08-01
The effects of scaling violation in QCD are discussed in the perturbative scheme, based on the factorization of mass singularities in the light-like gauge. Some recent applications including the next-to-leading corrections are presented (large p_T scattering, numerical analysis of the leptoproduction data). A proposal is made for extending the method to the higher-twist sector. (author)
Braendas, E.
1986-01-01
The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented
Amy Doolittle
2013-07-01
This study reports on the development and validation of the Civic Engagement Scale (CES). This scale was developed to be easily administered and useful to educators who seek to measure the attitudes and behaviors affected by a service-learning experience. The instrument was administered as a validation study in a purposive sample of social work and education majors at three universities (N = 513), with a return of 354 (69%). After the reliability and validity analysis was completed, the Attitude subscale was left with eight items and a Cronbach's alpha of .91. The Behavior subscale was left with six items and a Cronbach's alpha of .85. Principal component analysis indicated a two-dimensional scale with high loadings on both factors (mean factor loading for the attitude factor = .79, and mean factor loading for the behavior factor = .77). These results indicate that the CES is strong enough to recommend its use in educational settings. Preliminary use has demonstrated that this scale will be useful to researchers seeking to better understand the relationship of attitudes and behaviors with civic engagement in the service-learning setting. The primary limitations of this research are that the sample was limited to social work and education majors who were primarily White (n = 312, 88.1%) and female (n = 294, 83.1%). Therefore, further research would be needed to generalize this research to other populations.
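Cronbach's alpha, the reliability statistic reported for both subscales above, is straightforward to compute from raw item scores. A minimal sketch with invented toy responses (not the study's data): alpha = k/(k-1) * (1 - Σ item variances / variance of total scores).

```python
# Cronbach's alpha from per-item response lists (population variances).

def cronbach_alpha(items):
    """items: one list of scores per item, all of equal length (respondents)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three items, five respondents; highly consistent answers give alpha near 1.
items = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 5, 3, 4, 1]]
alpha = cronbach_alpha(items)  # about 0.93 for these toy data
```

Values around .85 to .91, as reported for the CES subscales, are conventionally read as good internal consistency.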
Difficulty scaling through incongruity
Lankveld, van G.; Spronck, P.; Rauterberg, G.W.M.; Mateas, M.; Darken, C.
2008-01-01
In this paper we discuss our work on using the incongruity measure from psychological literature to scale the difficulty level of a game online to the capabilities of the human player. Our approach has been implemented in a small game called Glove.
Symbolic Multidimensional Scaling
P.J.F. Groenen (Patrick); Y. Terada
2015-01-01
Multidimensional scaling (MDS) is a technique that visualizes dissimilarities between pairs of objects as distances between points in a low-dimensional space. In symbolic MDS, a dissimilarity is not just a value but can represent an interval or even a histogram. Here,
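For context, classical (numeric) MDS, which symbolic MDS generalizes, can be sketched in a few lines: double-center the squared dissimilarities and embed with the top eigenvectors. This is the standard Torgerson procedure, not the symbolic extension the abstract describes.

```python
# Classical (metric) MDS via eigendecomposition of the double-centered
# squared-dissimilarity matrix.
import numpy as np

def classical_mds(d, dim=2):
    """d: (n, n) symmetric dissimilarity matrix; returns (n, dim) coordinates."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    b = -0.5 * j @ (d ** 2) @ j               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)            # eigh returns ascending order
    order = np.argsort(vals)[::-1][:dim]      # keep the largest eigenvalues
    scale = np.sqrt(np.clip(vals[order], 0, None))
    return vecs[:, order] * scale

# Distances between three points on a line at positions 0, 1, 3.
d = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
x = classical_mds(d, dim=1)
# The recovered 1-D coordinates reproduce the original pairwise distances
# up to translation and sign.
```

Since the input here is exactly Euclidean, the embedding is exact; for real dissimilarity data the truncated eigendecomposition gives the best low-rank approximation.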
Cardinal scales for health evaluation
Harvey, Charles; Østerdal, Lars Peter Raahave
2010-01-01
Policy studies often evaluate health for an individual or for a population by using measurement scales that are ordinal scales or expected-utility scales. This paper develops scales of a different type, commonly called cardinal scales, that measure changes in health. Also, we argue that cardinal scales provide a meaningful and useful means of evaluating health policies. Thus, we develop a means of using the perspective of early neoclassical welfare economics as an alternative to ordinalist and expected-utility perspectives.
Jessee, Matthew Anderson [ORNL
2016-04-01
The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include:
• ENDF/B-VII.1 nuclear data libraries, CE and MG, with enhanced group structures,
• neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
• covariance data for fission product yields and decay constants,
• stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
• parallel calculations with KENO,
• problem-dependent temperature corrections for CE calculations,
• CE shielding and criticality accident alarm system analysis with MAVRIC,
• CE
Gonzalez, Brett Christopher
) caves, and the interstitium, recovering six monophyletic clades within Aphroditiformia: Acoetidae, Aphroditidae, Eulepethidae, Iphionidae, Polynoidae, and Sigalionidae (inclusive of the former 'Pisionidae' and 'Pholoidae'), respectively. Tracing of morphological character evolution showed a high degree of adaptability and convergent evolution between relatively closely related scale worms. While some morphological and behavioral modifications in cave polynoids reflected troglomorphism, other modifications like eye loss were found to stem from a common ancestor inhabiting the deep sea, further corroborating the deep-sea ancestry of scale worm cave fauna. In conclusion, while morphological characterization across Aphroditiformia appears deceptively easy due to the presence of elytra, convergent evolution during multiple early radiations across wide-ranging habitats has confounded our ability to reconstruct...
Kuehn, Christian
2015-01-01
This book provides an introduction to dynamical systems with multiple time scales. The approach it takes is to provide an overview of key areas, particularly topics that are less available in the introductory form. The broad range of topics included makes it accessible for students and researchers new to the field to gain a quick and thorough overview. The first of its kind, this book merges a wide variety of different mathematical techniques into a more unified framework. The book is highly illustrated with many examples and exercises and an extensive bibliography. The target audience of this book are senior undergraduates, graduate students as well as researchers interested in using the multiple time scale dynamics theory in nonlinear science, either from a theoretical or a mathematical modeling perspective.
Hirano, Kemmei; Murao, Yoshio
1980-01-01
The large-scale reflood test with a view to ensuring the safety of light water reactors was started in fiscal 1976 based on the special account act for power source development promotion measures by the entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April, 1980. Thereupon, the large-scale reflood test is now included in this program. It consists of two tests using a cylindrical core testing apparatus for examining the overall system effect and a plate core testing apparatus for testing individual effects. Each apparatus is composed of the mock-ups of pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)
Hen, J.
1991-11-26
This patent describes a composition for the removal of sulfate scale from surfaces. It comprises: an aqueous solution of about 0.1 to 1.0 molar concentration of an aminopolycarboxylic acid (APCA) containing 1 to 4 amino groups, or a salt thereof, and about 0.1 to 1.0 molar concentration of a second component which is diethylenetriaminepenta(methylenephosphonic acid) (DTPMP) or a salt thereof, or aminotri(methylenephosphonic acid) (ATMP) or a salt thereof, as an internal phase enveloped by a hydrocarbon membrane phase which is itself emulsified in an external aqueous phase, the hydrocarbon membrane phase containing a complexing agent weaker for the cations of the sulfate scale than the APCA and DTPMP or ATMP, any complexing agent for the cations in the external aqueous phase being weaker than that in the hydrocarbon membrane phase.
Density scaling for multiplets
Nagy, A
2011-01-01
Generalized Kohn-Sham equations are presented for lowest-lying multiplets. The way of treating non-integer particle numbers is coupled with an earlier method of the author. The fundamental quantity of the theory is the subspace density. The Kohn-Sham equations are similar to the conventional Kohn-Sham equations. The difference is that the subspace density is used instead of the density and the Kohn-Sham potential is different for different subspaces. The exchange-correlation functional is studied using density scaling. It is shown that there exists a value of the scaling factor ζ for which the correlation energy disappears. Generalized OPM and Krieger-Li-Iafrate (KLI) methods incorporating correlation are presented. The ζKLI method, being as simple as the original KLI method, is proposed for multiplets.
Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.
1989-01-01
Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from testing the material's resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs
2016-03-16
the structural funds for regional development and cohesion. Until recently, several systems of territorial units have coexisted in European… for European MAs versus population. See text and figures 1–7, and electronic supplementary material, figures S1–S8, for additional details and electronic… scale as expected, although with wide confidence intervals (table 1). The urbanized area of Spanish cities appears superlinear, contrary to theory
Li, Xian; Dong, Xin Luna; Lyons, Kenneth B.; Meng, Weiyi; Srivastava, Divesh
2015-01-01
Recent research shows that copying is prevalent for Deep-Web data and considering copying can significantly improve truth finding from conflicting values. However, existing copy detection techniques do not scale for large sizes and numbers of data sources, so truth finding can be slowed down by one to two orders of magnitude compared with the corresponding techniques that do not consider copying. In this paper, we study how to improve the scalability of copy detection on structured data. Ou...
Giddings, Steven B.
2009-01-01
I outline motivations for believing that important quantum gravity effects lie beyond the Planck scale at both higher energies and longer distances and times. These motivations arise in part from the study of ultra-high energy scattering, and also from considerations in cosmology. I briefly summarize some inferences about such ultra-planckian physics, and clues we might pursue towards the principles of a more fundamental theory addressing the known puzzles and paradoxes of quantum gravity.
Accurate scaling on multiplicity
Golokhvastov, A.I.
1989-01-01
The commonly used formula of KNO scaling, P_n = Ψ(n/⟨n⟩)/⟨n⟩, for discrete distributions (multiplicity distributions) is shown to contradict mathematically the normalization condition Σ_n P_n = 1. The effect is essential even at ISR energies. A consistent generalization of the concept of similarity for multiplicity distributions is obtained. The multiplicity distributions of negative particles in pp and also e⁺e⁻ inelastic interactions are similar over the whole studied energy range. Collider data are discussed. 14 refs.; 8 figs
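The normalization defect noted in this abstract is easy to reproduce numerically. As an illustration (the scaling function below is an assumed toy choice, normalized so that both ∫Ψ(z)dz and ∫zΨ(z)dz equal 1; it is not the function analyzed in the paper), the discrete sum of P_n = Ψ(n/⟨n⟩)/⟨n⟩ deviates from 1, and the deviation grows as ⟨n⟩ shrinks:

```python
import math

def psi(z):
    # Illustrative KNO scaling function, chosen so that the integrals of
    # psi(z) and z*psi(z) over [0, inf) are both exactly 1.
    return 4.0 * z * math.exp(-2.0 * z)

def total_probability(mean_n, n_max=2000):
    # Sum of P_n = psi(n/<n>)/<n> over discrete multiplicities n.
    # For a true probability distribution this would equal 1 exactly.
    return sum(psi(n / mean_n) / mean_n for n in range(n_max))

# The deviation from unity grows as <n> decreases:
for mean_n in (5, 20, 100):
    print(mean_n, abs(total_probability(mean_n) - 1.0))
```

For this Ψ the deviation behaves roughly like 1/(3⟨n⟩²), so at mean multiplicities of order 5 it reaches the percent level, in line with the abstract's claim that the effect is not negligible at ISR energies.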
Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines
Fingersh, Lee J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Loth, Eric [University of Virginia; Kaminski, Meghan [University of Virginia; Qin, Chao [University of Virginia; Griffith, D. Todd [Sandia National Laboratories
2017-06-09
A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows sub-scale turbines to capture the key blade dynamics and aeroelastic deflections of the full-scale design. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust, and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling, with blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts, including dynamic behaviors and aeroelastic deflections (including flutter), at an extremely small fraction of the full-scale cost.
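The factor-of-two wind speed reduction quoted above follows from gravity-driven (Froude-type) similarity, under which velocity scales as the square root of length. A minimal sketch, assuming pure Froude scaling (the actual SNL100-03/CART3 constraint set also adjusts blade mass and is more elaborate):

```python
import math

def gravo_aeroelastic_factors(length_scale):
    """Froude-type similarity: velocity and time both scale as sqrt(length),
    so the dimensionless gravity loading (Froude number) is preserved."""
    return {
        "length": length_scale,
        "velocity": math.sqrt(length_scale),
        "time": math.sqrt(length_scale),
    }

# 25% geometric scaling halves the required test wind speeds:
factors = gravo_aeroelastic_factors(0.25)
print(factors["velocity"])  # 0.5
```

sqrt(0.25) = 0.5, which matches the abstract's statement that the scaled wind speeds are reduced by a factor of two at 25% geometric scale.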
The ''invisible'' radioactive scale
Bjoernstad, T.; Ramsoey, T.
1999-04-01
Production and up-concentration of naturally occurring radioactive material (NORM) in the petroleum industry has attracted steadily increasing attention during the last 15 years. Most production engineers today associate this radioactivity with precipitates (scales) and sludges in production tubing, pumps, valves, separators, settling tanks etc., wherever water is being transported or treated. ²²⁶Ra and ²²⁸Ra are the best known radioactive constituents in scale. Surprisingly little known is the radioactive contamination by ²¹⁰Pb and its progeny ²¹⁰Bi and ²¹⁰Po. These are found in combination with ²²⁶Ra in ordinary scale, often in a layer of non-radioactive metallic lead in water transportation systems, but also in pure gas and condensate handling systems ''unsupported'' by ²²⁶Ra, due to transportation and decay of the noble gas ²²²Rn in NG/LNG. This latter contamination may be rather thin, in some cases virtually invisible. When, in addition, the radiation energies are too low to be detectable on the equipment's outer surface, its existence has been a secret to most people in the industry. The report discusses transportation and deposition mechanisms and detection methods, and provides some examples of measured results from the North Sea on equipment sent for maintenance. It is concluded that a regular measurement program for this type of contamination should be mandatory during all dismantling of transportation and fluid-handling equipment for fluids and gases, offshore and onshore
Offner, Avshalom; Ramon, Guy Z.
2016-11-01
Thermoacoustic phenomena, the conversion of heat to acoustic oscillations, may be harnessed for the construction of reliable, practically maintenance-free engines and heat pumps. Specifically, miniaturization of thermoacoustic devices holds great promise for cooling of micro-electronic components. However, as device size is pushed down to the micrometer scale, non-negligible slip effects are expected at the solid-fluid interface. Accordingly, new theoretical models for thermoacoustic engines and heat pumps were derived, accounting for a slip boundary condition. These models are essential for the design of micro-scale thermoacoustic devices that will operate at ultrasonic frequencies. Stability curves for engines, representing the onset of self-sustained oscillations, were calculated with both no-slip and slip boundary conditions, revealing improved engine performance with slip in the resonance frequency range applicable to micro-scale devices. Curves of the maximum achievable temperature difference for thermoacoustic heat pumps were also calculated, revealing the negative effect of slip on the ability to pump heat up a temperature gradient. The authors acknowledge the support from the Nancy and Stephen Grand Technion Energy Program (GTEP).
Ruth, Mark
2017-07-12
'H2@Scale' is a concept based on the opportunity for hydrogen to act as an intermediate between energy sources and uses. Hydrogen has the potential to be used like today's primary intermediate, electricity, because it too is fungible. This presentation summarizes the H2@Scale analysis efforts performed during the first third of 2017. Results on technical-potential uses and supply options are summarized and show that the technical potential demand for hydrogen is 60 million metric tons per year and that the U.S. has sufficient domestic resources to meet that demand. A high-level infrastructure analysis is also presented, showing an 85% increase in energy on the grid if all hydrogen is produced from grid electricity. However, a preliminary spatial assessment shows that supply is sufficient in most counties across the U.S. The presentation also outlines plans for analysis of the economic potential of the H2@Scale concept. Those plans involve developing supply and demand curves for potential hydrogen generation options, compared against other options for use of that hydrogen.
Khayyat, Zuhair
2017-07-31
Data cleansing approaches have usually focused on detecting and fixing errors with little attention to big data scaling. This presents a serious impediment since identifying and repairing dirty data often involves processing huge input datasets, handling sophisticated error discovery approaches and managing huge arbitrary errors. With large datasets, error detection becomes overly expensive and complicated especially when considering user-defined functions. Furthermore, a distinctive algorithm is desired to optimize inequality joins in sophisticated error discovery rather than naïvely parallelizing them. Also, when repairing large errors, their skewed distribution may obstruct effective error repairs. In this dissertation, I present solutions to overcome the above three problems in scaling data cleansing. First, I present BigDansing as a general system to tackle efficiency, scalability, and ease-of-use issues in data cleansing for Big Data. It automatically parallelizes the user's code on top of general-purpose distributed platforms. Its programming interface allows users to express data quality rules independently from the requirements of parallel and distributed environments. Without sacrificing their quality, BigDansing also enables parallel execution of serial repair algorithms by exploiting the graph representation of discovered errors. The experimental results show that BigDansing outperforms existing baselines up to more than two orders of magnitude. Although BigDansing scales cleansing jobs, it still lacks the ability to handle sophisticated error discovery requiring inequality joins. Therefore, I developed IEJoin as an algorithm for fast inequality joins. It is based on sorted arrays and space efficient bit-arrays to reduce the problem's search space. By comparing IEJoin against well-known optimizations, I show that it is more scalable, and several orders of magnitude faster. BigDansing depends on vertex-centric graph systems, i.e., Pregel
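For reference, the inequality-join problem that IEJoin accelerates can be stated with a naive quadratic implementation (the relations and predicates below are illustrative, not taken from the dissertation; IEJoin replaces the nested loop with sorted arrays and bit-arrays to prune the search space):

```python
def naive_inequality_join(r, s):
    """Return index pairs (i, j) with r[i].salary < s[j].salary and
    r[i].age > s[j].age: a two-predicate inequality join.
    Quadratic in input size, which is what IEJoin is designed to avoid."""
    result = []
    for i, (sal_r, age_r) in enumerate(r):
        for j, (sal_s, age_s) in enumerate(s):
            if sal_r < sal_s and age_r > age_s:
                result.append((i, j))
    return result

r = [(100, 40), (150, 30)]   # (salary, age) tuples
s = [(120, 35), (200, 25)]
print(naive_inequality_join(r, s))  # [(0, 0), (0, 1), (1, 1)]
```

Because both predicates are inequalities, no hash- or sort-merge equijoin applies directly, which is why a dedicated algorithm is needed at scale.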
Magnetic Scaling in Superconductors
Lawrie, I.D.
1997-01-01
The Ginzburg-Landau-Wilson superconductor in a magnetic field B is considered in the approximation that magnetic-field fluctuations are neglected. A formulation of perturbation theory is presented in which multiloop calculations fully retaining all Landau levels are tractable. A 2-loop calculation shows that, near the zero-field critical point, the singular part of the free energy scales as F_sing ∼ |t|^(2−α) F(B|t|^(−2ν)), where ν is the coherence-length exponent, a result which has hitherto been assumed on purely dimensional grounds. © 1997 The American Physical Society
Holt, Bradley
2011-01-01
This practical guide offers a short course on scaling CouchDB to meet the capacity needs of your distributed application. Through a series of scenario-based examples, this book lets you explore several methods for creating a system that can accommodate growth and meet expected demand. In the process, you learn about several tools that can help you with replication, load balancing, clustering, and load testing and monitoring. Apply performance tips for tuning your database; replicate data using Futon and CouchDB's RESTful interface; distribute CouchDB's workload through load balancing; learn option…
J. Ambjørn
1995-07-01
Full Text Available The 2-point function is the natural object in quantum gravity for extracting critical behavior: The exponential falloff of the 2-point function with geodesic distance determines the fractal dimension dH of space-time. The integral of the 2-point function determines the entropy exponent γ, i.e. the fractal structure related to baby universes, while the short distance behavior of the 2-point function connects γ and dH by a quantum gravity version of Fisher's scaling relation. We verify this behavior in the case of 2d gravity by explicit calculation.
Hanks, T.C.; Kanamori, H.
1979-05-10
The nearly coincident forms of the relations between seismic moment M₀ and the magnitudes M_L, M_s, and M_w imply a moment magnitude scale M = (2/3) log M₀ − 10.7, which is uniformly valid for 3 ≲ M_L ≲ 7, 5 ≲ M_s ≲ 7.5, and M_w ≳ 7.5.
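The relation is simple to apply in code. A small sketch, with the seismic moment M₀ in dyne-cm as in Hanks and Kanamori (the example moment is an illustrative round number, not a value from the paper):

```python
import math

def moment_magnitude(m0_dyne_cm):
    """Hanks-Kanamori moment magnitude: M = (2/3) * log10(M0) - 10.7,
    with the seismic moment M0 given in dyne-cm."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

# A seismic moment of 2e30 dyne-cm maps to magnitude ~9.5:
print(round(moment_magnitude(2e30), 1))  # 9.5
```

Note the logarithmic compression: each factor of ~31.6 (10^1.5) in moment raises the magnitude by one unit.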
Jamil A
2013-05-01
Full Text Available A five-year-old boy presented with a six-week history of scales, flaking and crusting of the scalp. He had mild pruritus but no pain. He did not have a history of atopy and there were no pets at home. Examination of the scalp showed thick, yellowish dry crusts on the vertex and parietal areas, and the hair was adhered to the scalp in clumps. There was non-scarring alopecia and mild erythema (Figures 1 & 2). There was no cervical or occipital lymphadenopathy. The patient's nails and skin in other parts of the body were normal.
Harrison, L.
1991-01-01
Small-scale wind energy conversion has found it even more difficult than grid-connected wind power to realise its huge potential market. One of the main reasons for this is that its technical development is carried out in isolated parts of the world with little opportunity for technology transfer: small-scale wind energy converters (SWECS) are not born of one technology, but have evolved for different purposes; as a result, the SWECS community has no powerful lobbying force speaking with one voice to promote the technology. There are three distinct areas of application for SWECS: water pumping for domestic and livestock water supplies, irrigation, drainage etc., where no other mechanical means of power is available or viable; battery charging for lighting, TV, radio, and telecommunications in areas far from a grid or road system; and wind-diesel systems, mainly for use on islands where supply of diesel oil is possible, but costly. An attempt is being made to found an association to support the widespread implementation of SWECS and to promote their use. It is intended for Wind Energy for Rural Areas to have a permanent secretariat, based in Holland. (AB)
The Unintentional Procrastination Scale.
Fernie, Bruce A; Bharucha, Zinnia; Nikčević, Ana V; Spada, Marcantonio M
2017-01-01
Procrastination refers to the delay or postponement of a task or decision and is often conceptualised as a failure of self-regulation. Recent research has suggested that procrastination could be delineated into two domains: intentional and unintentional. In this two-study paper, we aimed to develop a measure of unintentional procrastination (named the Unintentional Procrastination Scale or the 'UPS') and test whether this would be a stronger marker of psychopathology than intentional and general procrastination. In Study 1, a community sample of 139 participants completed a questionnaire that consisted of several items pertaining to unintentional procrastination that had been derived from theory, previous research, and clinical experience. Responses were subjected to a principal components analysis and assessment of internal consistency. In Study 2, a community sample of 155 participants completed the newly developed scale, along with measures of general and intentional procrastination, metacognitions about procrastination, and negative affect. Data from the UPS were subjected to confirmatory factor analysis and revised accordingly. The UPS was then validated using correlation and regression analyses. The six-item UPS possesses construct and divergent validity and good internal consistency. The UPS appears to be a stronger marker of psychopathology than the pre-existing measures of procrastination used in this study. Results from the regression models suggest that both negative affect and metacognitions about procrastination differentiate between general, intentional, and unintentional procrastination. The UPS is brief, has good psychometric properties, and has strong associations with negative affect, suggesting it has value as a research and clinical tool.
Chodorow, Kristina
2011-01-01
Create a MongoDB cluster that will grow to meet the needs of your application. With this short and concise book, you'll get guidelines for setting up and using clusters to store a large volume of data, and learn how to access the data efficiently. In the process, you'll understand how to make your application work with a distributed database system. Scaling MongoDB will help you: set up a MongoDB cluster through sharding; work with a cluster to query and update data; operate, monitor, and back up your cluster; plan your application to deal with outages. By following the advice in this book, you'll…
Heller, Alfred
2001-01-01
The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool, and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out… model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors…). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSDHP technology is put into perspective with respect to alternatives, and a short discussion of the barriers and breakthrough of the technology is given.
Kadish, David
2017-01-01
This paper explores thematic parallels between artistic and agricultural practices in the postwar period to establish a link to media art and cultural practices that are currently emerging in urban agriculture. Industrial agriculture has roots in the post-WWII abundance of mechanical and chemical equipment and research. These systems are highly mechanically efficient. With minimal physical labour, they extract ever staggering crop yields from ever poorer soils in shifting climatic conditions. However, the fact of mechanical efficiency is used to mask a set of problems with industrial-scale agricultural systems that range from spreading pests and diseases to poor global distribution of concentrated regional food wealth. That the conversion of vegetatively diverse farmland into monochromatic fields was popularized at the same time as the arrival of colour field paintings like Barnett Newman…
Small Business Administration — SBA’s new ScaleUp America Initiative is designed to help small firms with high potential “scale up” and grow their businesses so that they will provide more jobs and...
MULTIPLE SCALES FOR SUSTAINABLE RESULTS
This session will highlight recent research that incorporates the use of multiple scales and innovative environmental accounting to better inform decisions that affect sustainability, resilience, and vulnerability at all scales. Effective decision-making involves assessment at mu...
Absolute flux scale for radioastronomy
Ivanov, V.P.; Stankevich, K.S.
1986-01-01
The authors propose and provide support for a new absolute flux scale for radio astronomy which is not encumbered with the inadequacies of the previous scales. In constructing it, the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales, and the authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the ''artificial moon'' method, and they are practically coincident in the range from 0.96 to 6 GHz. Outside this range, above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized
Northeast Snowfall Impact Scale (NESIS)
National Oceanic and Atmospheric Administration, Department of Commerce — While the Fujita and Saffir-Simpson Scales characterize tornadoes and hurricanes respectively, there is no widely used scale to classify snowstorms. The Northeast...
Sommer, Rainer
2014-02-01
The principles of scale setting in lattice QCD as well as the advantages and disadvantages of various commonly used scales are discussed. After listing criteria for good scales, I concentrate on the main presently used ones with an emphasis on scales derived from the Yang-Mills gradient flow. For these I discuss discretisation errors, statistical precision and mass effects. A short review on numerical results also brings me to an unpleasant disagreement which remains to be explained.
Žardin, Norbert
2017-01-01
NoSQL database scaling is a decision, where system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage might increase or decrease, such changes might have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affect database resource usage and performance. As a results, calculations are acquired, using which database scaling types and differe...
Sommer, Rainer [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC
2014-02-15
The principles of scale setting in lattice QCD as well as the advantages and disadvantages of various commonly used scales are discussed. After listing criteria for good scales, I concentrate on the main presently used ones with an emphasis on scales derived from the Yang-Mills gradient flow. For these I discuss discretisation errors, statistical precision and mass effects. A short review on numerical results also brings me to an unpleasant disagreement which remains to be explained.
Scale issues in tourism development
Sinji Yang; Lori Pennington-Gray; Donald F. Holecek
1998-01-01
Proponents of Alternative Tourism overwhelmingly believe that alternative forms of tourism development need to be small in scale. Inasmuch as tourists' demand has great power to shape the market, the issues surrounding the tourism development scale deserve further consideration. This paper discusses the implications and effects of the tourism development scale on...
Scaling up: Assessing social impacts at the macro-scale
Schirmer, Jacki
2011-01-01
Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but is challenging as multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south east Australia is described.
Bazant, Z.P. [Northwestern Univ., Evanston, IL (United States); Chen, Er-Ping [Sandia National Lab., Albuquerque, NM (United States)
1997-01-01
This article attempts to review the progress achieved in the understanding of scaling and size effect in the failure of structures. Particular emphasis is placed on quasibrittle materials for which the size effect is complicated. Attention is focused on three main types of size effects, namely the statistical size effect due to randomness of strength, the energy release size effect, and the possible size effect due to fractality of fracture or microcracks. Definitive conclusions on the applicability of these theories are drawn. Subsequently, the article discusses the application of the known size effect law for the measurement of material fracture properties, and the modeling of the size effect by the cohesive crack model, nonlocal finite element models and discrete element models. Extensions to compression failure and to the rate-dependent material behavior are also outlined. The damage constitutive law needed for describing a microcracked material in the fracture process zone is discussed. Various applications to quasibrittle materials, including concrete, sea ice, fiber composites, rocks and ceramics are presented.
SPACE BASED INTERCEPTOR SCALING
G. CANAVAN
2001-02-01
Space Based Interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost-phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3, at the cost of heavier and more expensive Kinetic Kill Vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them another 20%. Interceptor ranges fall rapidly with theater missile range. Constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay is appropriate. The SBI KKV technology would appear to be common for space- and surface-based boost-phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short-range theater missiles, current technology appears adequate for pressing rogue ICBM, accidental, and unauthorized launches.
Large scale tracking algorithms
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
1998-12-01
In the US, the October 1998 murder of a physician who performed abortions was an outward manifestation of the insidious battle against legal abortion being waged by radical Christian social conservatives seeking to transform the US democracy into a theocracy. This movement has been documented in a publication entitled "Tipping the Scales: The Christian Right's Legal Crusade Against Choice," produced as a result of a 4-year investigation conducted by The Center for Reproductive Law and Policy. This publication describes how these fundamentalists have used sophisticated legal, lobbying, and communication strategies to further their goals of challenging the separation of church and state, opposing family planning and sexuality education that is not based solely on abstinence, promoting school prayer, and restricting homosexual rights. The movement has resulted in the introduction of more than 300 anti-abortion bills in states, 50 of which have passed in 23 states. Most Christian fundamentalist groups provide free legal representation to abortion clinic terrorists, and some groups solicit women to bring specious malpractice claims against providers. Sophisticated legal tactics are used by these groups to remove the taint of extremism and mask the danger posed to US constitutional principles by "a well-financed and zealous brand of radical lawyers and their supporters."
Li, Jie; Stroebe, Margaret; Chan, Cecilia L W; Chow, Amy Y M
2017-06-01
The rationale, development, and validation of the Bereavement Guilt Scale (BGS) are described in this article. The BGS was based on a theoretically developed, multidimensional conceptualization of guilt. Part 1 describes the generation of the item pool, derived from in-depth interviews and a review of the scientific literature. Part 2 details statistical analyses for further item selection (Sample 1, N = 273). Part 3 covers the psychometric properties of the emergent BGS (Sample 2, N = 600, and Sample 3, N = 479). Confirmatory factor analysis indicated that a five-factor model fit the data best. Correlations of BGS scores with depression, anxiety, self-esteem, self-forgiveness, and mode of death were consistent with theoretical predictions, supporting the construct validity of the measure. Internal consistency and test-retest reliability were also supported. Thus, initial testing suggests that the BGS is a valid tool for assessing multiple components of bereavement guilt. Further psychometric testing across cultures is recommended.
A multi scale model for small scale plasticity
Zbib, Hussein M.
2002-01-01
A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales ranging from the nano-microscale to the mesoscale is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. Particularly, a multi-scale model is developed merging two scales, the nano-microscale where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length-scale, and the continuum scale where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band
Minimum Efficient Scale (MES) and preferred scale of container terminals
Kaselimi, Evangelia N.; Notteboom, Theo E.; Pallis, Athanasios A.; Farrell, Sheila
2011-01-01
The decision on the scale of a port terminal affects the terminal's managerial, operational and competitive position in all the phases of its life. It also affects competition structures in the port in which the terminal is operating, and has a potential impact on other terminals. Port authorities and terminal operators need to know the scale of the terminal when engaging in concession agreements. In economic theory the scale of a plant/firm is typically defined in relation to the Mi...
Charles Blattberg
2008-02-01
This paper criticises four major approaches to criminal law – consequentialism, retributivism, abolitionism, and "mixed" pluralism – each of which, in its own fashion, affirms the celebrated emblem of the "scales of justice." The argument is that there is a better way of dealing with the tensions that often arise between the various legal purposes than by merely balancing them against each other. It consists, essentially, of striving to genuinely reconcile those purposes, a goal which is shown to require taking a new, "patriotic" approach to law.
Industrial scale gene synthesis.
Notka, Frank; Liss, Michael; Wagner, Ralf
2011-01-01
The most recent developments in the area of deep DNA sequencing and downstream quantitative and functional analysis are rapidly adding a new dimension to understanding biochemical pathways and metabolic interdependencies. These increasing insights pave the way to designing new strategies that address public needs, including environmental applications and therapeutic inventions, or novel cell factories for sustainable and reconcilable energy or chemicals sources. Adding yet another level is building upon nonnaturally occurring networks and pathways. Recent developments in synthetic biology have created economic and reliable options for designing and synthesizing genes, operons, and eventually complete genomes. Meanwhile, high-throughput design and synthesis of extremely comprehensive DNA sequences have evolved into an enabling technology already indispensable in various life science sectors today. Here, we describe the industrial perspective of modern gene synthesis and its relationship with synthetic biology. Gene synthesis contributed significantly to the emergence of synthetic biology by not only providing the genetic material in high quality and quantity but also enabling its assembly, according to engineering design principles, in a standardized format. Synthetic biology on the other hand, added the need for assembling complex circuits and large complexes, thus fostering the development of appropriate methods and expanding the scope of applications. Synthetic biology has also stimulated interdisciplinary collaboration as well as integration of the broader public by addressing socioeconomic, philosophical, ethical, political, and legal opportunities and concerns. The demand-driven technological achievements of gene synthesis and the implemented processes are exemplified by an industrial setting of large-scale gene synthesis, describing production from order to delivery.
Scaling Effects on Materials Tribology: From Macro to Micro Scale.
Stoyanov, Pantcho; Chromik, Richard R
2017-05-18
The tribological study of materials inherently involves the interaction of surface asperities at the micro to nanoscopic length scales. This is the case for large scale engineering applications with sliding contacts, where the real area of contact is made up of small contacting asperities that make up only a fraction of the apparent area of contact. This is why researchers have sought to create idealized experiments of single asperity contacts in the field of nanotribology. At the same time, small scale engineering structures known as micro- and nano-electromechanical systems (MEMS and NEMS) have been developed, where the apparent area of contact approaches the length scale of the asperities, meaning the real area of contact for these devices may be only a few asperities. This is essentially the field of microtribology, where the contact size and/or forces involved have pushed the nature of the interaction between two surfaces towards the regime where the scale of the interaction approaches that of the natural length scale of the features on the surface. This paper provides a review of microtribology with the purpose of understanding how tribological processes differ at the smaller length scales compared to macrotribology. Studies of the interfacial phenomena at the macroscopic length scales (e.g., using in situ tribometry) will be discussed and correlated with new findings and methodologies at the micro-length scale.
Dynamic critical behaviour and scaling
Oezoguz, B.E.
2001-01-01
Traditionally, scaling is a property of dynamical systems at thermal equilibrium. In second-order phase transitions, scaling behaviour is due to the infinite correlation length around the critical point. In first-order phase transitions, however, the correlation length remains finite and a different type of scaling can be observed: all singularities are governed by the volume of the system. Recently, a different type of scaling, namely dynamic scaling, has attracted attention in second-order phase transitions. In dynamic scaling, when a system prepared at high temperature is quenched to the critical temperature, it exhibits scaling behaviour. Dynamic scaling has been applied to various spin systems and the validity of the arguments has been shown. In this thesis project, dynamic scaling is first applied to a 4-dimensional Ising spin system, which exhibits a second-order phase transition with mean-field critical indices. Secondly, it is shown that although the dynamics are quite different, first-order phase transitions also exhibit a different type of dynamic scaling
Plague and Climate: Scales Matter
Ben Ari, Tamara; Neerinckx, Simon; Gage, Kenneth L.; Kreppel, Katharina; Laudisoit, Anne; Leirs, Herwig; Stenseth, Nils Chr.
2011-01-01
Plague is enzootic in wildlife populations of small mammals in central and eastern Asia, Africa, South and North America, and has been recognized recently as a reemerging threat to humans. Its causative agent Yersinia pestis relies on wild rodent hosts and flea vectors for its maintenance in nature. Climate influences all three components (i.e., bacteria, vectors, and hosts) of the plague system and is a likely factor to explain some of plague's variability from small and regional to large scales. Here, we review effects of climate variables on plague hosts and vectors from individual or population scales to studies on the whole plague system at a large scale. Upscaled versions of small-scale processes are often invoked to explain plague variability in time and space at larger scales, presumably because similar scale-independent mechanisms underlie these relationships. This linearity assumption is discussed in the light of recent research that suggests some of its limitations. PMID:21949648
Pivovar, Bryan
2017-03-31
Final report from the H2@Scale Workshop held November 16-17, 2016, at the National Renewable Energy Laboratory in Golden, Colorado. The U.S. Department of Energy's National Renewable Energy Laboratory hosted a technology workshop to identify the current barriers and research needs of the H2@Scale concept. H2@Scale is a concept regarding the potential for wide-scale impact of hydrogen produced from diverse domestic resources to enhance U.S. energy security and enable growth of innovative technologies and domestic industries. Feedback received from a diverse set of stakeholders at the workshop will guide the development of an H2@Scale roadmap for research, development, and early stage demonstration activities that can enable hydrogen as an energy carrier at a national scale.
Scaling structure loads for SMA
Lee, Dong Won; Song, Jeong Guk; Jeon, Sang Ho; Lim, Hak Kyu; Lee, Kwang Nam [KEPCO ENC, Yongin (Korea, Republic of)
2012-10-15
When a Seismic Margin Analysis (SMA) is conducted, generating new structural loads for the Seismic Margin Earthquake (SME) is time-consuming work. For convenience, EPRI NP 6041 suggests scaling the structure loads. The report recommends this for fixed-base (rock foundation) structures designed using either constant modal damping or modal damping ratios developed for a single material damping. For these cases, the SME loads can easily and accurately be calculated by scaling the spectral accelerations of the individual modes for the new SME response spectra. EPRI NP 6041 provides two simple methodologies for scaling structure seismic loads: the dominant-frequency scaling methodology and the mode-by-mode scaling methodology. Scaling the existing analysis to develop SME loads is much easier and more efficient than performing a new analysis. This paper compares the results of the two methodologies.
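A minimal sketch of the mode-by-mode idea described here (the function name and all numbers are hypothetical; EPRI NP 6041 defines the actual procedure): each modal load is multiplied by the ratio of SME to design spectral acceleration at that mode's frequency, and the scaled modal responses are recombined, e.g. by SRSS.

```python
import math

def scale_modal_loads(modal_loads, sa_design, sa_sme):
    """Scale each modal load by Sa_SME/Sa_design at its mode's frequency,
    then recombine the scaled modal responses by SRSS.
    Inputs are lists indexed by mode (hypothetical example values below)."""
    scaled = [load * (sme / des)
              for load, des, sme in zip(modal_loads, sa_design, sa_sme)]
    srss = math.sqrt(sum(r * r for r in scaled))
    return scaled, srss

loads = [100.0, 40.0, 10.0]   # design-basis modal responses (hypothetical)
sa_des = [0.50, 0.80, 1.00]   # design spectrum at the modal frequencies (g)
sa_sme = [0.75, 1.00, 1.10]   # SME spectrum at the same frequencies (g)
scaled, total = scale_modal_loads(loads, sa_des, sa_sme)
```

The dominant-frequency variant would instead apply a single scale factor, taken at the structure's dominant mode, to all loads; the mode-by-mode version above is the more accurate of the two.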
International Symposia on Scale Modeling
Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori
2015-01-01
This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical process in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: Enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale Offers engineers and designers a new point of view, liberating creative and inno...
Contact kinematics of biomimetic scales
Ghosh, Ranajay; Ebrahimi, Hamid; Vaziri, Ashkan, E-mail: vaziri@coe.neu.edu [Department of Mechanical and Industrial Engineering, Northeastern University, Boston, Massachusetts 02115 (United States)
2014-12-08
Dermal scales, prevalent across biological groups, considerably boost survival by providing multifunctional advantages. Here, we investigate the nonlinear mechanical effects of biomimetic scale-like attachments on the behavior of an elastic substrate brought about by the contact interaction of scales in pure bending, using qualitative experiments, analytical models, and detailed finite element (FE) analysis. Our results reveal the existence of three distinct kinematic phases of operation spanning linear, nonlinear, and rigid behavior driven by kinematic interactions of scales. The response of the modified elastic beam strongly depends on the size and spatial overlap of rigid scales. The nonlinearity is perceptible even in the relatively small-strain regime and without invoking material-level complexities of either the scales or the substrate.
Rutqvist, J.
2004-01-01
This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes on seepage and support in the model reports ''Seepage Model for PA Including Drift Collapse'' and ''Abstraction of Drift Seepage'', and to support arguments for exclusion of features, events, and processes (FEPs) in the analysis reports ''Features, Events, and Processes in Unsaturated Zone Flow and Transport'' and ''Features, Events, and Processes: Disruptive Events''. The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the THM coupled processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because a sufficient amount of water must be available within a
Scale symmetry and virial theorem
Westenholz, C. von
1978-01-01
Scale symmetry (or dilatation invariance) is discussed in terms of Noether's theorem, expressed via a symmetry-group action on a phase space endowed with a symplectic structure. The conventional conceptual approach, expressing invariance of some Hamiltonian under scale transformations, is re-expressed in alternate form by infinitesimal automorphisms of the given symplectic structure. That is, the vector field representing scale transformations leaves the symplectic structure invariant. In this model, the conserved quantity or constant of motion related to scale symmetry is the virial. It is shown that the conventional virial theorem can be derived within this framework
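As a textbook-level sketch (not the symplectic derivation of the paper itself), the link between the dilatation generator and the virial can be seen as follows, for a particle system with kinetic energy $T$ and forces $\mathbf{F}_i$:

```latex
% The dilatation generator is the virial:
D = \sum_i \mathbf{q}_i \cdot \mathbf{p}_i, \qquad
\frac{dD}{dt}
  = \sum_i \left(\dot{\mathbf{q}}_i \cdot \mathbf{p}_i
               + \mathbf{q}_i \cdot \dot{\mathbf{p}}_i\right)
  = 2T + \sum_i \mathbf{q}_i \cdot \mathbf{F}_i .
% For bounded motion the time average of dD/dt vanishes, so
2\langle T \rangle
  = -\Big\langle \sum_i \mathbf{q}_i \cdot \mathbf{F}_i \Big\rangle ,
% and for a potential V homogeneous of degree n (q . grad V = nV):
2\langle T \rangle = n \langle V \rangle .
```

The quantity $D$ generating the scale transformation is exactly the conserved quantity the abstract identifies, and averaging its evolution recovers the conventional virial theorem.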
1999-01-01
The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement (''the Task'') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)
Natural Scales in Geographical Patterns
Menezes, Telmo; Roth, Camille
2017-04-01
Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.
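A parameter-free discontinuity detector of the kind described can be sketched as follows (an assumed form; the authors' actual criterion may differ): flag steps in the community-count-versus-percentile curve that are large relative to the typical step size.

```python
def detect_discontinuities(values):
    """Return indices i where the step |values[i+1]-values[i]| exceeds the
    mean absolute step by more than two standard deviations. No tunable
    threshold: the cutoff is derived from the step sequence itself."""
    steps = [abs(b - a) for a, b in zip(values, values[1:])]
    mean = sum(steps) / len(steps)
    var = sum((s - mean) ** 2 for s in steps) / len(steps)
    thresh = mean + 2 * var ** 0.5
    return [i for i, s in enumerate(steps) if s > thresh]

# Hypothetical curve: number of detected communities at each distance
# percentile, with one clear phase transition (the 48 -> 12 drop).
counts = [50, 49, 48, 48, 12, 11, 11, 10, 10, 10]
print(detect_discontinuities(counts))  # -> [3]
```

Each flagged index marks a phase boundary in the partition space; the small number of such boundaries per region is what the paper calls the natural scales.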
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...
Collider Scaling and Cost Estimation
Palmer, R.B.
1986-01-01
This paper deals with collider cost and scaling. The main points of the discussion are: 1) scaling laws and cost estimation: accelerating gradient requirements, total stored RF energy considerations, peak power considerations, average power consumption; 2) cost optimization; 3) bremsstrahlung considerations; 4) focusing optics: conventional, laser focusing, or super disruption. 13 refs
Voice, Schooling, Inequality, and Scale
Collins, James
2013-01-01
The rich studies in this collection show that the investigation of voice requires analysis of "recognition" across layered spatial-temporal and sociolinguistic scales. I argue that the concepts of voice, recognition, and scale provide insight into contemporary educational inequality and that their study benefits, in turn, from paying attention to…
Spiritual Competency Scale: Further Analysis
Dailey, Stephanie F.; Robertson, Linda A.; Gill, Carman S.
2015-01-01
This article describes a follow-up analysis of the Spiritual Competency Scale, which initially validated ASERVIC's (Association for Spiritual, Ethical and Religious Values in Counseling) spiritual competencies. The study examined whether the factor structure of the Spiritual Competency Scale would be supported by participants (i.e., ASERVIC…
Scaling as an Organizational Method
Papazu, Irina Maria Clara Hansen; Nelund, Mette
2018-01-01
Organization studies have shown limited interest in the part that scaling plays in organizational responses to climate change and sustainability. Moreover, while scales are viewed as central to the diagnosis of the organizational challenges posed by climate change and sustainability, the role … turn something as immense as the climate into a small and manageable problem, thus making abstract concepts part of concrete, organizational practice.
The Assertiveness Scale for Children.
Peeler, Elizabeth; Rimmer, Susan M.
1981-01-01
Described an assertiveness scale for children developed to assess four dimensions of assertiveness across three categories of interpersonal situations. The scale was administered to elementary and middle school children (N=609) and readministered to students (N=164) to assess test-retest reliability. Test-retest reliability was low while internal…
Gradstein, F.M.; Ogg, J.G.; Hilgen, F.J.
2012-01-01
This report summarizes the international divisions and ages in the Geologic Time Scale, published in 2012 (GTS2012). Since 2004, when GTS2004 was detailed, major developments have taken place that directly bear on and have considerable impact on the intricate science of geologic time scaling. Precam
Yaman, Erkan
2012-01-01
The aim of this research was to develop the Mobbing Impacts Scale and to examine its validity and reliability analyses. The sample of study consisted of 509 teachers from Sakarya. In this study construct validity, internal consistency, test-retest reliabilities and item analysis of the scale were examined. As a result of factor analysis for…
Znamenny scale – fait accompli?
Alexei Yaropolov
2010-12-01
The author addresses one of the most sensitive topics of Znamenny chant: its scale. He tries to rebuild the "burned bridges" between the ideographic and staff notation of the chant as he redefines and substantially generalizes the concept of scale as such. The possibility of artificially constructing ad hoc many scales that sometimes sound very similar to the scale suggested by the modern staff notation is a serious argument for regarding the "staff-notation based" decipherings of the 17th century (dvoeznamenniki) as a pure game of chance. The constructed scales presented in the paper differ from the "keyboard diatonica" and from one another, and never submit to unified theorizing (a unified nomenclature of degrees, etc.). The author critically comments on the practice of uncontrolled use of trivial pitch symbols for deciphering, which ipso facto makes the probabilistic steps of unknown scales look like ill-founded deviations from the diatonic scale steps currently in use in common musical education. This practice hinders the chance to acknowledge the right of a remote musical culture to rest on foundations that can be formulated both positively and explicitly, all the more so as the usage of paleographic signs looks rather consistent. Resemblances and differences between musical cultures may then be treated more liberally, since no scale is seen as a norm. The author draws on the writings of the Russian musicologist and organologist Felix Raudonikas.
Gkoulalas-Divanis, Aris
2014-01-01
Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field
Giraud, E.
A sample of dwarf and spiral galaxies with extended rotation curves is analysed, assuming that the fraction of dark matter is small. The objective of the paper is to prepare a framework for a theory, based on fundamental principles, that would give fits of the same quality as the phenomenology of dark halos. The following results are obtained: 1) The geodesics of massive systems with low density (Class I galaxies) can be described by the metric $ds^2 = b^{-1}(r)\,dr^2 - b(r)\,dt^2 + r^2\,d\Omega^2$, where $b(r) = 1 - \frac{2}{c^2}\left(\frac{GM}{r} + \Gamma_f M^{1/2}\right)$. In this expression $\Gamma_f$ is a new fundamental constant which has been deduced from rotation curves of galaxies with circular velocity $V_c^2 \geq \frac{2GM}{r}$ for all $r$. 2) The above metric is deduced from the conformally invariant metric $ds^2 = B^{-1}(r)\,dr^2 - B(r)\,dt^2 + r^2\,d\Omega^2$, where $B(r) = 1 - \frac{2}{c^2}\left(\frac{GM}{r} + \Gamma_f M^{1/2} + \frac{1}{3}\frac{\Gamma_f^2}{G}r\right)$, through a linear transform, $u$, of the linear special group $SL(2,\mathbb{R})$. 3) The term $\frac{2}{c^2}\Gamma_f M^{1/2}$ accounts for the difference between the observed rotation velocity and the Newtonian velocity. The term $\frac{2}{3c^2}\frac{\Gamma_f^2}{G}r$ is interpreted as a scale invariance between systems of different masses and sizes. 4) The metric $B$ is a vacuum solution around a mass $M$ deduced from the least-action principle applied to the unique action $I_a = -2a\int (-g)^{1/2}\left[R_{\mu\kappa}R^{\mu\kappa} - \tfrac{1}{3}(R^{\alpha}{}_{\alpha})^2\right]d^4x$ built with the conformal Weyl tensor. 5) For galaxies such that there is a radius, $r_0$, at which $\frac{GM}{r_0} = \Gamma_f M^{1/2}$ (Class II), the term $\Gamma_f M^{1/2}$ might be confined by the Newtonian potential, yielding stationary solutions. 6) The analysed rotation curves of Class II galaxies are indeed well described by metrics of the form $b(r) = 1 - \frac{2}{c^2}\left(\frac{GM}{r} + (n+1)\Gamma_0 M^{1/2}\right)$, where $n$ is an integer and $\Gamma_0 = \frac{1}{\sqrt{3}}\Gamma_f$. 7) The effective potential is determined and
Dynamic inequalities on time scales
Agarwal, Ravi; Saker, Samir
2014-01-01
This is a monograph devoted to recent research and results on dynamic inequalities on time scales. The study of dynamic inequalities on time scales has been covered extensively in the literature in recent years and has now become a major sub-field in pure and applied mathematics. In particular, this book will cover recent results on integral inequalities, including Young's inequality, Jensen's inequality, Hölder's inequality, Minkowski's inequality, Steffensen's inequality, the Hermite-Hadamard inequality and Čebyšev's inequality. Opial type inequalities on time scales and their extensions with weighted functions, Lyapunov type inequalities, Halanay type inequalities for dynamic equations on time scales, and Wirtinger type inequalities on time scales and their extensions will also be discussed here in detail.
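As one representative example of the inequalities listed above, the time-scales version of Hölder's inequality (stated here from standard treatments of dynamic equations on time scales, not quoted from this monograph) reads:

```latex
% Hölder's inequality on a time scale T, for a, b in T and
% exponents p, q > 1 with 1/p + 1/q = 1:
\int_a^b \lvert f(t)\, g(t) \rvert \,\Delta t
\;\le\;
\left( \int_a^b \lvert f(t) \rvert^p \,\Delta t \right)^{1/p}
\left( \int_a^b \lvert g(t) \rvert^q \,\Delta t \right)^{1/q}
```

For $\mathbb{T}=\mathbb{R}$ the $\Delta$-integral is the ordinary integral and the classical inequality is recovered; for $\mathbb{T}=\mathbb{Z}$ it reduces to the discrete Hölder inequality, which is the unifying appeal of the time-scales calculus.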
Entanglement scaling in lattice systems
Audenaert, K M R [Institute for Mathematical Sciences, Imperial College London, 53 Prince's Gate, Exhibition Road, London SW7 2PG (United Kingdom); Cramer, M [QOLS, Blackett Laboratory, Imperial College London, Prince Consort Road, London SW7 2BW (United Kingdom); Eisert, J [Institute for Mathematical Sciences, Imperial College London, 53 Prince's Gate, Exhibition Road, London SW7 2PG (United Kingdom); Plenio, M B [Institute for Mathematical Sciences, Imperial College London, 53 Prince's Gate, Exhibition Road, London SW7 2PG (United Kingdom)
2007-05-15
We review some recent rigorous results on scaling laws of entanglement properties in quantum many body systems. More specifically, we study the entanglement of a region with its surrounding and determine its scaling behaviour with its size for systems in the ground and thermal states of bosonic and fermionic lattice systems. A theorem connecting entanglement between a region and the rest of the lattice with the surface area of the boundary between the two regions is presented for non-critical systems in arbitrary spatial dimensions. The entanglement scaling in the field limit exhibits a peculiar difference between fermionic and bosonic systems. In one-spatial dimension a logarithmic divergence is recovered for both bosonic and fermionic systems. In two spatial dimensions in the setting of half-spaces however we observe strict area scaling for bosonic systems and a multiplicative logarithmic correction to such an area scaling in fermionic systems. Similar questions may be posed and answered in classical systems.
Spinrath, Martin
2014-01-01
We present a series of recent works related to group theoretical factors from GUT symmetry breaking which lead to predictions for the ratios of quark and lepton Yukawa couplings at the unification scale. New predictions for the GUT-scale ratios $y_\mu/y_s$, $y_\tau/y_b$ and $y_t/y_b$ in particular are shown and compared to experimental data. For this comparison it is important to include possibly large supersymmetric threshold corrections. For this reason the structure of the fermion masses at the GUT scale depends on TeV-scale physics and makes GUT-scale physics testable at the LHC. We also discuss how these new predictions might lead to predictions for mixing angles, by discussing the example of the recently measured last missing leptonic mixing angle $\theta_{13}$, making this new class of GUT models also testable in neutrino experiments
Convergent Validity of Four Innovativeness Scales.
Goldsmith, Ronald E.
1986-01-01
Four scales of innovativeness were administered to two samples of undergraduate students: the Open Processing Scale, Innovativeness Scale, innovation subscale of the Jackson Personality Inventory, and Kirton Adaption-Innovation Inventory. Intercorrelations indicated the scales generally exhibited convergent validity. (GDC)
Cardiac Depression Scale: Mokken scaling in heart failure patients
Ski Chantal F
2012-11-01
Abstract Background There is a high prevalence of depression in patients with heart failure (HF) that is associated with worsening prognosis. A reliable and valid instrument to measure depression in this population is therefore essential. We validated the Cardiac Depression Scale (CDS) in heart failure patients using a model of ordinal unidimensional measurement known as Mokken scaling. Findings We administered the CDS in face-to-face interviews to 603 patients with HF. Data were analysed using Mokken scale analysis. Items of the CDS formed a statistically significant unidimensional Mokken scale of low strength (H0.8. Conclusions The CDS has a hierarchy of items which can be interpreted in terms of the increasingly serious effects of depression occurring as a result of HF. Identifying an appropriate instrument to measure depression in patients with HF allows for early identification and better medical management.
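The scale strength H referred to in the findings is Loevinger's scalability coefficient. As a toy illustration of its definition (real analyses use dedicated software such as the R `mokken` package, and this sketch assumes dichotomous items, whereas CDS items are polytomous):

```python
def loevinger_h(data):
    """Loevinger's H for dichotomous items.
    data: list of 0/1 response vectors, one per respondent.
    H = 1 - (observed Guttman errors) / (errors expected under
    marginal independence); H = 1 for a perfect Guttman scale."""
    n, k = len(data), len(data[0])
    p = [sum(row[i] for row in data) / n for i in range(k)]
    # Order items from most to least popular ("easiest" first).
    order = sorted(range(k), key=lambda i: -p[i])
    observed = expected = 0.0
    for a in range(k):
        for b in range(a + 1, k):
            easy, hard = order[a], order[b]
            # Guttman error: failing the easy item yet passing the hard one.
            observed += sum(1 for row in data
                            if row[easy] == 0 and row[hard] == 1)
            expected += n * (1 - p[easy]) * p[hard]
    return 1.0 - observed / expected

# A perfect Guttman pattern contains no errors, so H = 1.
perfect = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(loevinger_h(perfect))
```

Lower H values indicate weaker item hierarchies, which is why a Mokken scale of low strength can still be unidimensional, as reported above.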
Scale-by-scale contributions to Lagrangian particle acceleration
Lalescu, Cristian C.; Wilczek, Michael
2017-11-01
Fluctuations on a wide range of scales in both space and time are characteristic of turbulence. Lagrangian particles, advected by the flow, probe these fluctuations along their trajectories. In an effort to isolate the influence of the different scales on Lagrangian statistics, we employ direct numerical simulations (DNS) combined with a filtering approach. Specifically, we study the acceleration statistics of tracers advected in filtered fields to characterize the smallest temporal scales of the flow. Emphasis is put on the acceleration variance as a function of filter scale, along with the scaling properties of the relevant terms of the Navier-Stokes equations. We furthermore discuss scaling ranges for higher-order moments of the tracer acceleration, as well as the influence of the choice of filter on the results. Starting from the Lagrangian tracer acceleration as the short time limit of the Lagrangian velocity increment, we also quantify the influence of filtering on Lagrangian intermittency. Our work complements existing experimental results on intermittency and accelerations of finite-sized, neutrally-buoyant particles: for the passive tracers used in our DNS, feedback effects are neglected such that the spatial averaging effect is cleanly isolated.
On inertial range scaling laws
Bowman, J.C.
1994-12-01
Inertial-range scaling laws for two- and three-dimensional turbulence are re-examined within a unified framework. A new correction to Kolmogorov's k^(-5/3) scaling is derived for the energy inertial range. A related modification is found to Kraichnan's logarithmically corrected two-dimensional enstrophy cascade law that removes its unexpected divergence at the injection wavenumber. The significance of these corrections is illustrated with steady-state energy spectra from recent high-resolution closure computations. The results also underscore the asymptotic nature of inertial-range scaling laws. Implications for conventional numerical simulations are discussed.
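The classical baseline that these corrections refine can be sketched numerically. The snippet below, a minimal illustration rather than anything from the paper, builds a synthetic Kolmogorov spectrum E(k) = C_K ε^(2/3) k^(-5/3) and recovers the -5/3 exponent by a log-log fit; the value of the Kolmogorov constant C_K is an assumed textbook figure.

```python
import numpy as np

# Synthetic inertial-range spectrum E(k) = C_K * eps^(2/3) * k^(-5/3).
# C_K ~ 1.6 is a standard literature value, not taken from this abstract.
C_K, eps = 1.6, 1.0
k = np.logspace(0, 3, 200)
E = C_K * eps ** (2 / 3) * k ** (-5 / 3)

# Recover the scaling exponent from a log-log least-squares fit.
slope, _ = np.polyfit(np.log(k), np.log(E), 1)
print(round(slope, 3))  # -1.667
```

On real spectra (e.g. from closure computations) the fitted slope would depart from -5/3 by exactly the kind of corrections the abstract derives.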
Geometric scaling as traveling waves
Munier, S.; Peschanski, R.
2003-01-01
We show the relevance of the nonlinear Fisher and Kolmogorov-Petrovsky-Piscounov (KPP) equation to the problem of high energy evolution of the QCD amplitudes. We explain how the traveling wave solutions of this equation are related to geometric scaling, a phenomenon observed in deep-inelastic scattering experiments. Geometric scaling is for the first time shown to result from an exact solution of nonlinear QCD evolution equations. Using general results on the KPP equation, we compute the velocity of the wave front, which gives the full high energy dependence of the saturation scale.
Straight scaling FFAG beam line
Lagrange, J.-B.; Planche, T.; Yamakawa, E.; Uesugi, T.; Ishi, Y.; Kuriyama, Y.; Qin, B.; Okabe, K.; Mori, Y.
2012-01-01
Fixed field alternating gradient (FFAG) accelerators have recently been the subject of a strong revival. They are usually designed in a circular shape; however, it would be an asset to guide particles with no overall bend in this type of accelerator. An analytical development of a straight FFAG cell that maintains zero chromaticity is presented here. A magnetic field law is thus obtained, called the “straight scaling law”, and an experiment has been conducted to confirm this zero-chromatic law. A straight scaling FFAG prototype has been designed and manufactured, and the horizontal phase advances at two different energies are measured. Results are analyzed to clarify the straight scaling law.
Compositeness and the Fermi scale
Peccei, R.D.
1984-01-01
The positive attitude adopted up to now, due to the non-observation of effects of substructure, is that the compositeness scale Λ must be large: Λ ≳ 1 TeV. Such a large value of Λ gives rise to two theoretical problems which I examine here, namely: 1) What dynamics yields light composite quarks and leptons (m_f ≪ Λ)? and 2) What relation does the compositeness scale Λ have with the Fermi scale Λ_F = (√2 G_F)^(-1/2) ≈ 250 GeV? (orig./HSI)
Japanese large-scale interferometers
Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K
2002-01-01
The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.
Hidden scale invariance of metals
Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.
2015-01-01
Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general “hidden” scale invariance of metals, making the condensed part of the thermodynamic phase diagram effectively one dimensional with respect to structure and dynamics. DFT-computed density scaling exponents, related to the Grüneisen parameter, are in good agreement with experimental values for the 16 elements where reliable data were available. Hidden scale invariance is demonstrated in detail for magnesium by showing invariance of structure and dynamics. Computed melting curves of period three metals follow curves with invariance (isomorphs). The experimental structure factor of magnesium is predicted by assuming scale invariant...
Arzano, Michele; Gubitosi, Giulia; Magueijo, João
2018-06-01
We explore the possibility that well known properties of the parity operator, such as its idempotency and unitarity, might break down at the Planck scale. Parity might then do more than just swap right and left polarized states and reverse the sign of spatial momentum k: it might generate superpositions of right and left handed states, as well as mix momenta of different magnitudes. We lay down the general formalism, but also consider the concrete case of the Planck scale kinematics governed by κ-Poincaré symmetries, where some of the general features highlighted appear explicitly. We explore some of the observational implications for cosmological fluctuations. Different power spectra for right handed and left handed tensor modes might actually be a manifestation of deformed parity symmetry at the Planck scale. Moreover, scale-invariance and parity symmetry appear deeply interconnected.
Scaling of graphene integrated circuits.
Bianchi, Massimiliano; Guerriero, Erica; Fiocco, Marco; Alberti, Ruggero; Polloni, Laura; Behnam, Ashkan; Carrion, Enrique A; Pop, Eric; Sordan, Roman
2015-05-07
The influence of transistor size reduction (scaling) on the speed of realistic multi-stage integrated circuits (ICs) represents the main performance metric of a given transistor technology. Despite extensive interest in graphene electronics, scaling efforts have so far focused on individual transistors rather than multi-stage ICs. Here we study the scaling of graphene ICs based on transistors from 3.3 to 0.5 μm gate lengths and with different channel widths, access lengths, and lead thicknesses. The shortest gate delay of 31 ps per stage was obtained in sub-micron graphene ring oscillators (ROs) oscillating at 4.3 GHz, which is the highest oscillation frequency obtained in any strictly low-dimensional material to date. We also derived the fundamental Johnson limit, showing that scaled graphene ICs could be used at high frequencies in applications with small voltage swing.
Large scale structure and baryogenesis
Kirilova, D.P.; Chizhov, M.V.
2001-08-01
We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h^-1 Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale 120 h^-1 Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)
Morocco - Small-Scale Fisheries
Millennium Challenge Corporation — The final performance evaluation roadmap for the Small-Scale Fisheries Project (PPA-MCC) is developed using a grid constructed around indicators relating to Project...
Minimum scaling laws in tokamaks
Zhang, Y.Z.; Mahajan, S.M.
1986-10-01
Scaling laws governing anomalous electron transport in tokamaks with ohmic and/or auxiliary heating are derived using renormalized Vlasov-Ampere equations for low frequency electromagnetic microturbulence. It is also shown that for pure auxiliary heating (or when auxiliary heating power far exceeds the ohmic power), the energy confinement time scales as τ_E ∼ P_inj^(-1/3), where P_inj is the injected power.
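As a quick illustration of the quoted confinement degradation, the sketch below evaluates τ_E ∼ P_inj^(-1/3); the reference point (1 s at 1 MW) is an arbitrary assumption, so only the ratio between two power levels is meaningful.

```python
# Toy illustration of the auxiliary-heating confinement scaling from the
# abstract, tau_E ~ P_inj^(-1/3). The reference values tau0 and P0 are
# arbitrary assumptions, not taken from the paper.
def tau_E(P_inj_MW, tau0=1.0, P0=1.0):
    """Energy confinement time under the P^(-1/3) degradation law."""
    return tau0 * (P_inj_MW / P0) ** (-1.0 / 3.0)

# Doubling the injected power degrades confinement by 2^(-1/3) ~ 0.794.
print(round(tau_E(2.0) / tau_E(1.0), 3))  # 0.794
```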
Scale issues in remote sensing
Weng, Qihao
2014-01-01
This book provides up-to-date developments, methods, and techniques in the field of GIS and remote sensing and features articles from internationally renowned authorities on three interrelated perspectives of scaling issues: scale in land surface properties, land surface patterns, and land surface processes. The book is ideal as a professional reference for practicing geographic information scientists and remote sensing engineers as well as a supplemental reading for graduate level students.
Langdal, Bjoern Inge; Eggen, Arnt Ove
2003-01-01
The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.
Normalization of emotion control scale
Hojatoolah Tahmasebian
2014-09-01
Full Text Available Background: Emotion control skill teaches individuals how to identify their emotions and how to express and control them in various situations. The aim of this study was to normalize and measure the internal and external validity and reliability of an emotion control test. Methods: This standardization study was carried out on a statistical population including all pupils, students, teachers, nurses and university professors in Kermanshah in 2012, using Williams' emotion control scale. The subjects included 1,500 people (810 females and 690 males) who were selected by stratified random sampling. Williams' (1997) emotion control scale was used to collect the required data. The Emotional Control Scale is a tool for measuring the degree of control people have over their emotions. This scale has four subscales, including anger, depressed mood, anxiety and positive affect. The collected data were analyzed by SPSS software using correlation and Cronbach's alpha tests. Results: The internal consistency of the questionnaire, reported by Cronbach's alpha, was acceptable for the emotional control scale, and the correlations between the subscales of the test and between the items of the questionnaire were significant at the 0.01 confidence level. Conclusion: The validity of the emotion control scale among pupils, students, teachers and nurses in Iran has an acceptable range, and the test items were correlated with each other, thereby making them appropriate for measuring emotion control.
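Cronbach's alpha, the consistency measure used above, is straightforward to compute. The sketch below is a generic implementation, not the SPSS procedure used in the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly correlated items -> alpha = 1.
data = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(cronbach_alpha(data))  # 1.0
```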
The Menopause Rating Scale (MRS) scale: A methodological review
Strelow Frank
2004-09-01
Full Text Available Abstract Background This paper compiles data from different sources to get a first comprehensive picture of the psychometric and other methodological characteristics of the Menopause Rating Scale (MRS). The scale was designed and standardized as a self-administered scale (a) to assess symptoms/complaints of aging women under different conditions, (b) to evaluate the severity of symptoms over time, and (c) to measure changes pre- and post-menopause replacement therapy. The scale became widely used (it is available in 10 languages). Method A large multinational survey (9 countries in 4 continents) from 2001/2002 is the basis for in-depth analyses of the reliability and validity of the MRS. Additional small convenience samples were used to get first impressions about test-retest reliability. The data were centrally analyzed. Data from a postmarketing HRT study were used to estimate discriminative validity. Results Reliability measures (consistency and test-retest stability) were found to be good across countries, although the sample size for test-retest reliability was small. Validity: The internal structure of the MRS was astonishingly similar across countries, supporting the conclusion that the scale really measures the same phenomenon in symptomatic women. The sub-score and total score correlations were high (0.7–0.9) but lower among the sub-scales (0.5–0.7); this suggests that the subscales are not fully independent. Norm values from different populations were presented, showing that a direct comparison between Europe and North America is possible, but caution is recommended with comparisons of data from Latin America and Indonesia. This will not, however, affect intra-individual comparisons within clinical trials. The comparison with the Kupperman Index showed sufficiently good correlations, illustrating adept criterion-oriented validity. The same is true for the comparison with the generic quality-of-life scale SF-36, where there was also a sufficiently close association
Mokken scaling of the Myocardial Infarction Dimensional Assessment Scale (MIDAS).
Thompson, David R; Watson, Roger
2011-02-01
The purpose of this study was to examine the hierarchical and cumulative nature of the 35 items of the Myocardial Infarction Dimensional Assessment Scale (MIDAS), a disease-specific health-related quality of life measure. Data from 668 participants who completed the MIDAS were analysed using the Mokken Scaling Procedure, which is a computer program that searches polychotomous data for hierarchical and cumulative scales on the basis of a range of diagnostic criteria. Fourteen MIDAS items were retained in a Mokken scale and these items included physical activity, insecurity, emotional reaction and dependency items but excluded items related to diet, medication or side-effects. Item difficulty, in item response theory terms, ran from physical activity items (low difficulty) to insecurity, suggesting that the most severe quality of life effect of myocardial infarction is loneliness and isolation. Items from the MIDAS form a strong and reliable Mokken scale, which provides new insight into the relationship between items in the MIDAS and the measurement of quality of life after myocardial infarction. © 2010 Blackwell Publishing Ltd.
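Loevinger's H, the scalability coefficient underlying Mokken scaling, can be sketched for the dichotomous case as H = 1 minus the ratio of observed to expected Guttman errors. This is a simplified illustration under assumed binary scoring; the Mokken Scaling Procedure used in the study handles polytomous items and performs automated item selection, which this sketch does not.

```python
import numpy as np

def mokken_H(X):
    """Loevinger's H for a dichotomous (n_persons, n_items) matrix:
    H = 1 - (observed Guttman errors) / (errors expected under independence).
    Assumes at least one item pair with unequal popularities."""
    X = np.asarray(X)
    n, k = X.shape
    p = X.mean(axis=0)                      # item popularities
    obs = exp = 0.0
    for i in range(k):
        for j in range(k):
            if p[i] > p[j]:                 # item i is easier than item j
                # Guttman error: fail the easier item but pass the harder one
                obs += np.sum((X[:, i] == 0) & (X[:, j] == 1))
                exp += n * (1 - p[i]) * p[j]
    return 1 - obs / exp

# A perfect Guttman scale (no errors) gives H = 1.
X = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 0, 0]])
print(mokken_H(X))  # 1.0
```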
The Torino Impact Hazard Scale
Binzel, Richard P.
2000-04-01
Newly discovered asteroids and comets have inherent uncertainties in their orbit determinations owing to the natural limits of positional measurement precision and the finite lengths of orbital arcs over which determinations are made. For some objects making predictable future close approaches to the Earth, orbital uncertainties may be such that a collision with the Earth cannot be ruled out. Careful and responsible communication between astronomers and the public is required for reporting these predictions, and a 0-10 point hazard scale, reported inseparably with the date of close encounter, is recommended as a simple and efficient tool for this purpose. The goal of this scale, endorsed as the Torino Impact Hazard Scale, is to place into context the level of public concern that is warranted for any close encounter event within the next century. Concomitant reporting of the close encounter date further conveys the sense of urgency that is warranted. The Torino Scale value for a close approach event is based upon both collision probability and the estimated kinetic energy (collision consequence), where the scale value can change as probability and energy estimates are refined by further data. On the scale, Category 1 corresponds to collision probabilities that are comparable to the current annual chance for any given size impactor. Categories 8-10 correspond to certain (probability >99%) collisions having increasingly dire consequences. While close approaches falling in Category 0 may be no cause for noteworthy public concern, there remains a professional responsibility to further refine orbital parameters for such objects, and a figure of merit is suggested for evaluating such objects. Because impact predictions represent a multi-dimensional problem, there is no unique or perfect translation into a one-dimensional system such as the Torino Scale. These limitations are discussed.
The Internet Gaming Disorder Scale.
Lemmens, Jeroen S; Valkenburg, Patti M; Gentile, Douglas A
2015-06-01
Recently, the American Psychiatric Association included Internet gaming disorder (IGD) in the appendix of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). The main aim of the current study was to test the reliability and validity of 4 survey instruments to measure IGD on the basis of the 9 criteria from the DSM-5: a long (27-item) and short (9-item) polytomous scale and a long (27-item) and short (9-item) dichotomous scale. The psychometric properties of these scales were tested among a representative sample of 2,444 Dutch adolescents and adults, ages 13-40 years. Confirmatory factor analyses demonstrated that the structural validity (i.e., the dimensional structure) of all scales was satisfactory. Both types of assessment (polytomous and dichotomous) were also reliable (i.e., internally consistent) and showed good criterion-related validity, as indicated by positive correlations with time spent playing games, loneliness, and aggression and negative correlations with self-esteem, prosocial behavior, and life satisfaction. The dichotomous 9-item IGD scale showed solid psychometric properties and was the most practical scale for diagnostic purposes. Latent class analysis of this dichotomous scale indicated that 3 groups could be discerned: normal gamers, risky gamers, and disordered gamers. On the basis of the number of people in this last group, the prevalence of IGD among 13- through 40-year-olds in the Netherlands is approximately 4%. If the DSM-5 threshold for diagnosis (experiencing 5 or more criteria) is applied, the prevalence of disordered gamers is more than 5%.
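The dichotomous scoring rule described above reduces to counting endorsed criteria against the DSM-5 threshold of 5. The sketch below assumes simple boolean responses; the function name and labels are illustrative, not the instrument's wording.

```python
# Toy scoring for the dichotomous 9-item IGD scale described in the abstract:
# endorsing 5 or more of the 9 DSM-5 criteria meets the proposed threshold.
DSM5_THRESHOLD = 5

def igd_classification(endorsed):
    """endorsed: list of 9 booleans, one per DSM-5 criterion."""
    assert len(endorsed) == 9, "the short dichotomous scale has 9 items"
    return "disordered" if sum(endorsed) >= DSM5_THRESHOLD else "not disordered"

print(igd_classification([True] * 5 + [False] * 4))  # disordered
print(igd_classification([True] * 4 + [False] * 5))  # not disordered
```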
Dolan, T.J.
1993-01-01
In the next 50 yr, the world will need to develop hundreds of gigawatts of non-fossil-fuel energy sources for production of electricity and fuels. Nuclear fusion can probably provide much of the required energy economically, if large single-unit power plants are acceptable. Large power plants are more common than most people realize: There are already many multiple-unit power plants producing 2 to 5 GW(electric) at a single site. The cost of electricity (COE) from fusion energy is predicted to scale as COE ∼ COE_0 (P/P_0)^(-n), where P is the electrical power, the subscript zero denotes reference values, and the exponent n ∼ 0.36 to 0.7 in various designs. The validity ranges of these scalings are limited and need to be extended by future work. The fusion power economy of scale derives from four interrelated effects: improved operations and maintenance costs; scaling of equipment unit costs; a geometric effect that increases the mass power density; and reduction of the recirculating power fraction. Increased plasma size also relaxes the required confinement parameters: For the same neutron wall loading, larger tokamaks can use lower magnetic fields. Fossil-fuel power plants have a weaker economy of scale than fusion because the fuel costs constitute much of their COE. Solar and wind power plants consist of many small units, so they have little economy of scale. Fission power plants have a strong economy of scale but are unable to exploit it because the maximum unit size is limited by safety concerns. Large, steady-state fusion reactors generating 3 to 6 GW(electric) may be able to produce electricity for 4 to 5 cents/kW·h, which would be competitive with other future energy sources. 38 refs., 6 figs., 6 tabs
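The quoted economy-of-scale relation can be made concrete with a small sketch; the reference cost and the exponent chosen below are illustrative values within the 0.36 to 0.7 range the abstract cites, not results of any specific design study.

```python
# Sketch of the abstract's economy-of-scale relation COE ~ COE0 * (P/P0)^(-n).
# coe0 = 5 cents/kWh at P0 = 1 GW(electric) and n = 0.5 are assumed values.
def coe(P_GWe, coe0=5.0, P0=1.0, n=0.5):
    """Cost of electricity (cents/kWh) at electrical output P_GWe."""
    return coe0 * (P_GWe / P0) ** (-n)

# Quadrupling plant size halves the COE when n = 0.5.
print(round(coe(4.0) / coe(1.0), 2))  # 0.5
```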
Copper atomic-scale transistors.
Xie, Fangqing; Kavalenka, Maryna N; Röger, Moritz; Albrecht, Daniel; Hölscher, Hendrik; Leuthold, Jürgen; Schimmel, Thomas
2017-01-01
We investigated copper as a working material for metallic atomic-scale transistors and confirmed that copper atomic-scale transistors can be fabricated and operated electrochemically in a copper electrolyte (CuSO4 + H2SO4) in bi-distilled water under ambient conditions with three microelectrodes (source, drain and gate). The electrochemical switching-on potential of the atomic-scale transistor is below 350 mV, and the switching-off potential is between 0 and -170 mV. The switching-on current is above 1 μA, which is compatible with semiconductor transistor devices. Both the sign and the amplitude of the voltage applied across the source and drain electrodes (U_bias) influence the switching rate of the transistor and the copper deposition on the electrodes, and correspondingly shift the electrochemical operation potential. The copper atomic-scale transistors can be switched using a function generator without a computer-controlled feedback switching mechanism. The copper atomic-scale transistors, with only one or two atoms at the narrowest constriction, were switched between 0 and 1 G_0 (G_0 = 2e^2/h, with e being the electron charge and h being Planck's constant) or 2 G_0 by the function generator. The switching rate can reach up to 10 Hz. The copper atomic-scale transistor demonstrates volatile/non-volatile dual functionalities. Such an optimal merging of logic with memory may open a perspective for processor-in-memory and logic-in-memory architectures, using copper as an alternative working material besides silver for fully metallic atomic-scale transistors.
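The conductance quantum that sets the switching levels follows directly from fundamental constants, G_0 = 2e^2/h. A quick check using the exact 2019 SI values:

```python
# The abstract's conductance quantum G0 = 2e^2/h, evaluated from the exact
# SI values of the elementary charge and Planck's constant.
e = 1.602176634e-19   # C (exact, 2019 SI)
h = 6.62607015e-34    # J*s (exact, 2019 SI)

G0 = 2 * e ** 2 / h             # siemens
print(round(G0 * 1e6, 3))   # 77.481 (microsiemens)
print(round(1e-3 / G0, 2))  # 12.91  (kilo-ohms per conductance quantum)
```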
Resiliency Scale (RS): Scale Development, Reliability and Validity Study
GÜRGAN, Uğur
2003-01-01
The purpose of this study was to develop a new Resiliency Scale (RS) for Turkish samples. Various items from major resiliency scales, most of them with some partial change, were collected, and a pool of 228 items covering almost all possible resilience areas was obtained. This item pool was administered to a college sample of 419. The analysis yielded a 50-item RS, which was administered to a new college sample of 112 participants. This second sample also received the Rosenba...
Scale Construction: Motivation and Relationship Scale in Education
Yunus Emre Demir
2016-01-01
Full Text Available The aim of this study is to analyze the validity and reliability of the Turkish version of the Motivation and Relationship Scale (MRS; Raufelder, Drury, Jagenow, Hoferichter & Bukowski, 2013). Participants were 526 secondary school students. The results of confirmatory factor analysis indicated that the 21 items loaded on three factors and that the three-dimensional model fit well (χ² = 640.04, df = 185, RMSEA = .068, NNFI = .90, CFI = .91, IFI = .91, SRMR = .079, GFI = .90, AGFI = .87). Overall, the findings demonstrate that the adapted MRS is a valid instrument for measuring secondary school children's motivation in Turkey.
Scaling laws of Rydberg excitons
Heckötter, J.; Freitag, M.; Fröhlich, D.; Aßmann, M.; Bayer, M.; Semina, M. A.; Glazov, M. M.
2017-09-01
Rydberg atoms have attracted considerable interest due to their huge interaction among each other and with external fields. They demonstrate characteristic scaling laws in dependence on the principal quantum number n for features such as the magnetic field for level crossing or the electric field of dissociation. Recently, the observation of excitons in highly excited states has allowed studying Rydberg physics in cuprous oxide crystals. Fundamentally different insights may be expected for Rydberg excitons, as the crystal environment and associated symmetry reduction compared to vacuum give not only optical access to many more states within an exciton multiplet but also extend the Hamiltonian for describing the exciton beyond the hydrogen model. Here we study experimentally and theoretically the scaling of several parameters of Rydberg excitons with n, for some of which we indeed find laws different from those of atoms. For others we find identical scaling laws with n, even though their origin may be distinctly different from the atomic case. At zero field the energy splitting of a particular multiplet n scales as n^(-3) due to crystal-specific terms in the Hamiltonian, e.g., from the valence band structure. From absorption spectra in magnetic field we find for the first crossing of levels with adjacent principal quantum numbers a B_r ∝ n^(-4) dependence of the resonance field strength B_r, due to the dominant paramagnetic term, unlike for atoms, for which the diamagnetic contribution is decisive, resulting in a B_r ∝ n^(-6) dependence. By contrast, the resonance electric field strength shows a scaling as E_r ∝ n^(-5), as for Rydberg atoms. Also, similar to atoms with the exception of hydrogen, we observe anticrossings between states belonging to multiplets with different principal quantum numbers at these resonances. The energy splittings at the avoided crossings scale roughly as n^(-4), again due to crystal-specific features in the exciton Hamiltonian. The data also allow us to
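The reported power laws are easy to tabulate. The sketch below encodes only the exponents from the abstract (multiplet splitting ∝ n^-3, resonance magnetic field B_r ∝ n^-4, resonance electric field E_r ∝ n^-5) with unit prefactors, so only ratios between different n are meaningful; absolute values are not physical.

```python
# Scalings with principal quantum number n reported in the abstract for
# Rydberg excitons in cuprous oxide. Prefactors are set to 1 (assumption):
# only relative scaling between two n values carries information here.
def splitting(n):  # zero-field multiplet splitting ~ n^-3
    return n ** -3.0

def B_r(n):        # first level-crossing magnetic field ~ n^-4 (vs n^-6 for atoms)
    return n ** -4.0

def E_r(n):        # resonance electric field ~ n^-5, as for Rydberg atoms
    return n ** -5.0

# Going from n = 10 to n = 20 reduces B_r by a factor 2^4 = 16.
print(round(B_r(10) / B_r(20)))  # 16
```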
Divertor scaling laws for tokamaks
Catto, P.J.; Krasheninnikov, S.I.; Connor, J.W.
1997-01-01
The breakdown of two-body scaling laws is illustrated by using the two-dimensional plasma code UEDGE coupled to an advanced Navier-Stokes neutrals transport package to model attached and detached regimes in a simplified geometry. Two-body similarity scalings are used as benchmarks for runs retaining non-two-body modifications due to the effects of (i) multi-step processes altering ionization and radiation via the excited states of atomic hydrogen and (ii) three-body recombination. Preliminary investigations indicate that two-body scaling interpretations of experimental data fail due to (i) multi-step processes when a significant region of the plasma exceeds a plasma density of 10^19 m^-3, or (ii) three-body recombination when there is a significant region in which the temperature is ≤1 eV while the plasma density is ≥10^20 m^-3. These studies demonstrate that two-body scaling arguments are often inappropriate in the divertor, and the first results for alternate scalings are presented. (orig.)
Scales of Natural Flood Management
Nicholson, Alex; Quinn, Paul; Owen, Gareth; Hetherington, David; Piedra Lara, Miguel; O'Donnell, Greg
2016-04-01
The scientific field of Natural Flood Management (NFM) is receiving much attention and is now widely seen as a valid solution to sustainably manage flood risk whilst offering significant multiple benefits. However, few examples exist looking at NFM on a large scale (>10 km²). Well-implemented NFM has the effect of restoring more natural catchment hydrological and sedimentological processes, which in turn can have significant flood risk and WFD benefits for catchment waterbodies. These catchment-scale improvements in turn allow more 'natural' processes to be returned to rivers and streams, creating a more resilient system. Although certain NFM interventions may appear distant and disconnected from main-stem waterbodies, they will undoubtedly be contributing to WFD at the catchment waterbody scale. This paper offers examples of NFM and explains how they can be maximised through practical design across many scales (from individual features up to the whole catchment). We present new tools to assist in the selection of measures and their location, first to appreciate the flooding benefit at the local catchment scale, and then a Flood Impact Model that can best reflect the impacts of local changes further downstream. The tools will be discussed in the context of our most recent experiences on NFM projects, including river catchments in the north east of England and in Scotland. This work has encouraged a more integrated approach to flood management planning that can use both traditional and novel NFM strategies in an effective and convincing way.
Visions of Atomic Scale Tomography
Kelly, T.F.; Miller, Michael K.; Rajan, Krishna; Ringer, S.P.
2012-01-01
A microscope, by definition, provides structural and analytical information about objects that are too small to see with the unaided eye. From the very first microscope, efforts to improve its capabilities and push them to ever-finer length scales have been pursued. In this context, it would seem that the concept of an ultimate microscope would have received much attention by now; but has it really ever been defined? Human knowledge extends to structures on a scale much finer than atoms, so it might seem that a proton-scale microscope or a quark-scale microscope would be the ultimate. However, we argue that an atomic-scale microscope is the ultimate for the following reason: the smallest building block for either synthetic structures or natural structures is the atom. Indeed, humans and nature both engineer structures with atoms, not quarks. So far as we know, all building blocks (atoms) of a given type are identical; it is the assembly of the building blocks that makes a useful structure. Thus, would a microscope that determines the position and identity of every atom in a structure with high precision and for large volumes be the ultimate microscope? We argue, yes. In this article, we consider how it could be built, and we ponder the answer to the equally important follow-on questions: who would care if it is built, and what could be achieved with it?
Pair plasma relaxation time scales.
Aksenov, A G; Ruffini, R; Vereshchagin, G V
2010-04-01
By numerically solving the relativistic Boltzmann equations, we compute the time scale for relaxation to thermal equilibrium for an optically thick electron-positron plasma with baryon loading. We focus on the time scales of electromagnetic interactions. The collisional integrals are obtained directly from the corresponding QED matrix elements. Thermalization time scales are computed for a wide range of values of both the total-energy density (over 10 orders of magnitude) and of the baryonic loading parameter (over 6 orders of magnitude). This also allows us to study such interesting limiting cases as the almost purely electron-positron plasma or electron-proton plasma as well as intermediate cases. These results appear to be important both for laboratory experiments aimed at generating optically thick pair plasmas as well as for astrophysical models in which electron-positron pair plasmas play a relevant role.
Scaling and prescaling in quarkonium
Warner, R.C.; Joshi, G.C.
1979-01-01
Recent experiments in the upsilon region indicate the quark-mass dependence of quark-antiquark bound-state properties. Classes of quark-antiquark potentials exhibiting scaling of energy-level spacing with quark mass are presented, and the importance of the mass dependence of bound-state properties in investigating the nature of the potential is emphasised. The scaling potentials considered are V = V(m^(1/2) r), which exhibits constant level spacing, and V = b m^α r^β and its generalizations, for which the scaling of energy levels is controlled by the exponents α and β. The class of potentials yielding constant level spacing is shown to be consistent with the interpretation of the state recently observed at 9.46 GeV in e+e- annihilations as a bound state of a new quark and antiquark with charge e_q = 1/3
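The mass scaling quoted above can be made explicit with a short derivation (our notation and sketch, not taken from the paper itself): rescaling the Schrödinger equation for a power-law potential fixes how the bound-state energy scale depends on the quark mass.

```latex
% Sketch: for H = p^2/m + V(r) with V(r) = b\,m^{\alpha} r^{\beta},
% substitute r = \lambda r' and match the kinetic and potential scales:
\frac{1}{m\lambda^{2}} = b\,m^{\alpha}\lambda^{\beta}
\quad\Longrightarrow\quad
\lambda = \bigl(b\,m^{1+\alpha}\bigr)^{-1/(2+\beta)} .
% The common energy scale of the spectrum is then
E_n \sim \frac{1}{m\lambda^{2}}
     = b^{2/(2+\beta)}\, m^{(2\alpha-\beta)/(2+\beta)}\, f(n) ,
% so level spacings are independent of the quark mass exactly when
% 2\alpha = \beta, which for pure powers includes V = V(m^{1/2} r).
```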
The Satisfaction With Life Scale.
Diener, E; Emmons, R A; Larsen, R J; Griffin, S
1985-02-01
This article reports the development and validation of a scale to measure global life satisfaction, the Satisfaction With Life Scale (SWLS). Among the various components of subjective well-being, the SWLS is narrowly focused to assess global life satisfaction and does not tap related constructs such as positive affect or loneliness. The SWLS is shown to have favorable psychometric properties, including high internal consistency and high temporal reliability. Scores on the SWLS correlate moderately to highly with other measures of subjective well-being, and correlate predictably with specific personality characteristics. It is noted that the SWLS is suited for use with different age groups, and other potential uses of the scale are discussed.
Bench-scale/field-scale interpretations: Session overview
Cunningham, A.B.; Peyton, B.M.
1995-04-01
In situ bioremediation involves complex interactions between biological, chemical, and physical processes and requires integration of phenomena operating at scales ranging from that of a microbial cell (10^-6 m) to that of a remediation site (10 to 1000 m). Laboratory investigations of biodegradation are usually performed at a relatively small scale, governed by convenience, cost, and expedience. However, extending the results from a laboratory-scale experimental system to the design and operation of a field-scale system introduces (1) additional mass-transport mechanisms and limitations; (2) the presence of multiple phases, contaminants, and competing microorganisms; (3) spatial geologic heterogeneities; and (4) subsurface environmental factors that may inhibit bacterial growth, such as temperature, pH, nutrient, or redox conditions. Field bioremediation rates may be limited by the availability of one of the necessary constituents for biotransformation: substrate, contaminant, electron acceptor, nutrients, or microorganisms capable of degrading the target compound. The factor that limits the rate of bioremediation may not be the same in the laboratory as it is in the field, thereby leading to the development of unsuccessful remediation strategies
Politzer, H.D.
1977-01-01
The logic of the xi-scaling analysis of inclusive lepton-hadron scattering is reviewed with emphasis on clarifying what is assumed and what is predicted. The physics content of several recent papers that purport to criticize this analysis in fact confirms its validity and utility. For clarity, concentration is placed on the orthodox operator-product analysis of electroproduction, local duality, and precocious scaling. Other physics discussed includes the successes of QCD in the rate of charm production in muon inelastic scattering and in the energy-momentum sum rule.
Shadowing in the scaling region
Shaw, G.
1989-01-01
The approximate scaling behaviour of the shadowing effect at small x, recently observed in deep inelastic muon scattering experiments on nuclei, is shown to arise naturally at very large ν in those hadron-dominance models, formulated many years ago, which are dual to the parton model. At smaller ν, small scale-breaking effects are expected, which should die away as ν increases. The predicted shadowing decreases rapidly with increasing x, at a rate which is only weakly dependent on the atomic number A for reasonably large A. (orig.)
Learning From the Furniture Scale
Hvejsel, Marie Frier; Kirkegaard, Poul Henning
2018-01-01
Given its proximity to the human body, the furniture scale holds a particular potential in grasping the fundamental aesthetic potential of architecture to address its inhabitants by means of spatial 'gestures'. Likewise, it holds a technical germ in realizing this potential given its immediate tangibility, allowing experimentation with the 'principles' of architectural construction. In the present paper we explore this dual tectonic potential of the furniture scale as an epistemological foundation in architectural education. In this matter, we discuss the conduct of a master-level course where we...
Pelamis WEC - intermediate scale demonstration
Yemm, R.
2003-07-01
This report describes the successful building and commissioning of an intermediate 1/7th scale model of the Pelamis Wave Energy Converter (WEC) and its testing in the wave climate of the Firth of Forth. Details are given of the design of the semi-submerged articulated structure of cylindrical elements linked by hinged joints. The specific programme objectives and conclusions, development issues addressed, and key remaining risks are discussed along with development milestones to be passed before the Pelamis WEC is ready for full-scale prototype testing.
Organizational Scale and School Success.
Guthrie, James W.
1979-01-01
The relationship between the organizational scale of schooling (school and school district size) and school success is examined. The history of the movement toward larger school units, the evidence of the effects of that movement, and possible research strategies for further investigation of the issue are discussed. (JKS)
Scaling up of renewable chemicals.
Sanford, Karl; Chotani, Gopal; Danielson, Nathan; Zahn, James A
2016-04-01
The transition of promising technologies for production of renewable chemicals from a laboratory scale to commercial scale is often difficult and expensive. As a result, the timeframe estimated for commercialization is typically underestimated, resulting in much slower penetration of these promising new methods and products into the chemical industries. The theme of 'sugar is the next oil' connects biological, chemical, and thermochemical conversions of renewable feedstocks to products that are drop-in replacements for petroleum-derived chemicals or are new-to-market chemicals/materials. The latter typically offer a functionality advantage and can command higher prices that result in less severe scale-up challenges. However, for drop-in replacements, price is of paramount importance and competitive capital and operating expenditures are a prerequisite for success. Hence, scale-up of relevant technologies must be interfaced with effective and efficient management of both cell and steel factories. Details involved in all aspects of manufacturing, such as utilities, sterility, product recovery and purification, regulatory requirements, and emissions, must be managed successfully. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sheridan, B.; Cumpson, P.; Bailey, M.
2006-01-01
Progress in nanotechnology relies on ever more accurate measurements of quantities such as distance, force and current. Industry has long depended on accurate measurement. In the 19th century, for example, the performance of steam engines was seriously limited by inaccurately made components, a situation that was transformed by Henry Maudslay's screw micrometer calliper. And early in the 20th century, the development of telegraphy relied on improved standards of electrical resistance. Before this, each country had its own standards and cross-border communication was difficult. The same is true today of nanotechnology if it is to be fully exploited by industry. Principles of measurement that work well at the macroscopic level often become completely unworkable at the nanometre scale - about 100 nm and below. Imaging, for example, is not possible on this scale using optical microscopes, and it is virtually impossible to weigh a nanometre-scale object with any accuracy. In addition to needing more accurate measurements, nanotechnology also often requires a greater variety of measurements than conventional technology. For example, standard techniques used to make microchips generally need accurate length measurements, but the manufacture of electronics at the molecular scale requires magnetic, electrical, mechanical and chemical measurements as well. (U.K.)
Inviscid criterion for decomposing scales
Zhao, Dongxiao; Aluie, Hussein
2018-05-01
The proper scale decomposition in flows with significant density variations is not as straightforward as in incompressible flows, with many possible ways to define a "length scale." A choice can be made according to the so-called inviscid criterion [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009]. It is a kinematic requirement that a scale decomposition yield negligible viscous effects at large enough length scales. It has been proved [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009] recently that a Favre decomposition satisfies the inviscid criterion, which is necessary to unravel inertial-range dynamics and the cascade. Here we present numerical demonstrations of those results. We also show that two other commonly used decompositions can violate the inviscid criterion and, therefore, are not suitable to study inertial-range dynamics in variable-density and compressible turbulence. Our results have practical modeling implications in showing that viscous terms in Large Eddy Simulations do not need to be modeled and can be neglected.
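As a hedged illustration of the Favre decomposition mentioned above (a generic sketch, not the authors' code; the box filter and the synthetic random fields are our own choices), the density-weighted filtered velocity ũ = bar(ρu)/bar(ρ) generally differs from the plain unweighted filtered velocity whenever density varies:

```python
import numpy as np

def box_filter(f, w):
    """Moving-average (box) filter of width w with periodic boundaries."""
    k = np.ones(w) / w
    n = len(f)
    # Tile three copies and keep the middle one to emulate periodic wrap.
    return np.convolve(np.tile(f, 3), k, mode="same")[n:2 * n]

# Synthetic 1D fields standing in for a variable-density flow snapshot.
rng = np.random.default_rng(2)
rho = 1.0 + 0.5 * rng.random(256)       # density, strictly positive
u = rng.standard_normal(256)            # velocity

w = 16
u_favre = box_filter(rho * u, w) / box_filter(rho, w)  # Favre: bar(rho*u)/bar(rho)
u_plain = box_filter(u, w)                             # unweighted filter

# With variable density the two filtered velocities generally differ.
print(np.allclose(u_favre, u_plain))  # False
```

The difference between the two filtered fields is exactly what makes the choice of decomposition matter for inertial-range analysis.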
Geometric scaling in exclusive processes
Munier, S.; Wallon, S.
2003-01-01
We show that according to the present understanding of the energy evolution of the observables measured in deep-inelastic scattering, the photon-proton scattering amplitude has to exhibit geometric scaling at each impact parameter. We suggest a way to test this experimentally at HERA. A qualitative analysis based on published data is presented and discussed. (orig.)
An Assertiveness Scale for Adolescents.
Lee, Dong Yul; And Others
1985-01-01
Developed a 33-item, situation-specific instrument that measures assertiveness of adolescents. Based on data from 682 elementary and secondary school students, adequate reliability and validity of the Assertiveness Scale for Adolescents (ASA) were obtained when tested against several variables about which predictions could be made. (BH)
A Feminist Family Therapy Scale.
Black, Leora; Piercy, Fred P.
1991-01-01
Reports on development and psychometric properties of Feminist Family Therapy Scale (FFTS), a 17-item instrument intended to reflect degree to which family therapists conceptualize process of family therapy from feminist-informed perspective. Found that the instrument discriminated between self-identified feminists and nonfeminists, women and men,…
National Image Interpretability Rating Scales
2003-01-01
Interactive Media Element. This presentation demonstrates the NIIRS scale and resolution numbers and presents a problem statement to help the student gain an intuitive understanding of the numbers. Last modified: 5/18/2009. ME3XXX Military Applications of Unmanned Air Vehicles/Remotely Operated Aircraft (UAV/ROA)
Animal coloration: sexy spider scales.
Taylor, Lisa A; McGraw, Kevin J
2007-08-07
Many male jumping spiders display vibrant colors that are used in visual communication. A recent microscopic study on a jumping spider from Singapore shows that three-layered 'scale sandwiches' of chitin and air are responsible for producing their brilliant iridescent body coloration.
Salzburger State Reactance Scale (SSR Scale): Validation of a Scale Measuring State Reactance.
Sittenthaler, Sandra; Traut-Mattausch, Eva; Steindl, Christina; Jonas, Eva
This paper describes the construction and empirical evaluation of an instrument for measuring state reactance, the Salzburger State Reactance (SSR) Scale. The results of a confirmatory factor analysis supported a hypothesized three-factor structure: experience of reactance, aggressive behavioral intentions, and negative attitudes. Correlations with divergent and convergent measures support the validity of this structure. The SSR Subscales were strongly related to the other state reactance measures. Moreover, the SSR Subscales showed modest positive correlations with trait measures of reactance. The SSR Subscales correlated only slightly or not at all with neighboring constructs (e.g., autonomy, experience of control). The only exception was fairness scales, which showed moderate correlations with the SSR Subscales. Furthermore, a retest analysis confirmed the temporal stability of the scale. Suggestions for further validation of this questionnaire are discussed.
Scale modelling in LMFBR safety
Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.
1979-01-01
This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratio of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of a material for the model is one with the same density and stress-strain relationship as the prototype material at the operating temperature. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain-rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used to assure that heat transfer between high-temperature loading sources and the coolant simulant, and non-equilibrium effects in two-phase sources, are accounted for. For the geometry and flow conditions of interest, the
Scale dependence of deuteron electrodisintegration
More, S. N.; Bogner, S. K.; Furnstahl, R. J.
2017-11-01
Background: Isolating nuclear structure properties from knock-out reactions in a process-independent manner requires a controlled factorization, which is always to some degree scale and scheme dependent. Understanding this dependence is important for robust extractions from experiment, to correctly use the structure information in other processes, and to understand the impact of approximations for both. Purpose: We seek insight into scale dependence by exploring a model calculation of deuteron electrodisintegration, which provides a simple and clean theoretical laboratory. Methods: By considering various kinematic regions of the longitudinal structure function, we can examine how the components—the initial deuteron wave function, the current operator, and the final-state interactions (FSIs)—combine at different scales. We use the similarity renormalization group to evolve each component. Results: When evolved to different resolutions, the ingredients are all modified, but how they combine depends strongly on the kinematic region. In some regions, for example, the FSIs are largely unaffected by evolution, while elsewhere FSIs are greatly reduced. For certain kinematics, the impulse approximation at a high renormalization group resolution gives an intuitive picture in terms of a one-body current breaking up a short-range correlated neutron-proton pair, although FSIs distort this simple picture. With evolution to low resolution, however, the cross section is unchanged but a very different and arguably simpler intuitive picture emerges, with the evolved current efficiently represented at low momentum through derivative expansions or low-rank singular value decompositions. Conclusions: The underlying physics of deuteron electrodisintegration is scale dependent and not just kinematics dependent. As a result, intuition about physics such as the role of short-range correlations or D-state mixing in particular kinematic regimes can be strongly scale dependent.
Scaling Irrational Beliefs in the General Attitude and Belief Scale
Lindsay R. Owings
2013-04-01
Full Text Available Accurate measurement of key constructs is essential to the continued development of Rational-Emotive Behavior Therapy (REBT). The General Attitude and Belief Scale (GABS), a contemporary inventory of rational and irrational beliefs based on current REBT theory, is one of the most valid and widely used instruments available, and recent research has continued to improve its psychometric standing. In this study of 544 students, item response theory (IRT) methods were used (a) to identify the most informative item in each irrational subscale of the GABS, (b) to determine the level of irrationality represented by each of those items, and (c) to suggest a condensed form of the GABS for further study with clinical populations. Administering only the most psychometrically informative items to clients could result in economies of time and effort. Further research based on the scaling of items could clarify the specific patterns of irrational beliefs associated with particular clinical syndromes.
Universal Scaling Relations in Scale-Free Structure Formation
Guszejnov, Dávid; Hopkins, Philip F.; Grudić, Michael Y.
2018-04-01
A large number of astronomical phenomena exhibit remarkably similar scaling relations. The most well-known of these is the mass distribution dN/dM ∝ M^-2, which (to first order) describes stars, protostellar cores, clumps, giant molecular clouds, star clusters and even dark matter halos. In this paper we propose that this ubiquity is not a coincidence and that it is the generic result of scale-free structure formation where the different scales are uncorrelated. We show that all such systems produce a mass function proportional to M^-2 and a column density distribution with a power-law tail of dA/d lnΣ ∝ Σ^-1. In the case where structure formation is controlled by gravity, the two-point correlation becomes ξ_2D ∝ R^-1. Furthermore, structures formed by such processes (e.g. young star clusters, DM halos) tend to a ρ ∝ R^-3 density profile. We compare these predictions with observations, analytical fragmentation cascade models, semi-analytical models of gravito-turbulent fragmentation and detailed "full physics" hydrodynamical simulations. We find that these power laws are good first-order descriptions in all cases.
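As an illustrative numerical check (our own sketch, not from the paper), a sample drawn from dN/dM ∝ M^-2 by inverse-transform sampling recovers a log-log histogram slope near -2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse-transform sampling of dN/dM ∝ M^-2 on [m_min, m_max]:
# the CDF of a pure M^-2 law inverts in closed form.
m_min, m_max = 1.0, 1e4
u = rng.random(200_000)
masses = m_min * m_max / (m_max - u * (m_max - m_min))

# Estimate dN/dM from a log-spaced histogram and fit the power-law slope.
bins = np.logspace(np.log10(m_min), np.log10(m_max), 30)
counts, edges = np.histogram(masses, bins=bins)
widths = np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
dndm = counts / widths
mask = counts > 0
slope = np.polyfit(np.log(centers[mask]), np.log(dndm[mask]), 1)[0]
print(f"fitted slope: {slope:.2f}")  # close to -2 for dN/dM ∝ M^-2
```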
Frequency scaling for angle gathers
Zuberi, M. A H; Alkhalifah, Tariq Ali
2014-01-01
Angle gathers provide an extra dimension for analyzing velocity after migration. Space-shift and time-shift imaging conditions are two methods used to obtain angle gathers, but both are reasonably expensive. By scaling the time-lag axis of the time-shifted images, the computational cost of the time-shift imaging condition can be considerably reduced. In imaging, and even more so in full waveform inversion, frequency-domain Helmholtz solvers are used more often to solve for the wavefields than conventional time-domain extrapolators. In such cases, we do not need to extend the image; instead we scale the frequency axis of the frequency-domain image to obtain the angle gathers more efficiently. Applications to synthetic data demonstrate these features.
Scaling in public transport networks
C. von Ferber
2005-01-01
We analyse the statistical properties of public transport networks. These networks are defined by a set of public transport routes (bus lines) and the stations serviced by these. For larger networks these appear to possess a scale-free structure, as demonstrated e.g. by the Zipf-law distribution of the number of routes servicing a given station, or by the distribution of the number of stations which can be visited from a chosen one without changing the means of transport. Moreover, a rather particular feature of the public transport network is that many routes service common subsets of stations. We discuss the possibility of new scaling laws that govern intrinsic properties of such subsets.
Holographic models with anisotropic scaling
Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.
2013-12-01
We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.
Scaling laws for specialized hohlraums
Rosen, M.D.
1993-01-01
The author presents scaling laws for the behavior of hohlraums that are somewhat more complex than a simple sphere or cylinder. In particular, the author considers hohlraums in what has become known as a "primary"-"secondary" configuration, namely geometries in which the laser is absorbed in a primary region of a hohlraum, and only radiation energy is transported to a secondary part of the hohlraum that is shielded from seeing the laser light directly. Such hohlraums have been in use of late for LTE opacity experiments on a sample in the secondary, and in recently proposed "shimmed" hohlraums that use gold disks on axis to block a capsule's view of the cold laser entrance hole. The temperature/drive of the secondary, derived herein, scales somewhat differently than the drive in simple hohlraums
Applied multidimensional scaling and unfolding
Borg, Ingwer; Mair, Patrick
2018-01-01
This book introduces multidimensional scaling (MDS) and unfolding as data analysis techniques for applied researchers. MDS is used for the analysis of proximity data on a set of objects, representing the data as distances between points in a geometric space (usually of two dimensions). Unfolding is a related method that maps preference data (typically evaluative ratings of different persons on a set of objects) as distances between two sets of points (representing the persons and the objects, resp.). This second edition has been completely revised to reflect new developments and the coverage of unfolding has also been substantially expanded. Intended for applied researchers whose main interests are in using these methods as tools for building substantive theories, it discusses numerous applications (classical and recent), highlights practical issues (such as evaluating model fit), presents ways to enforce theoretical expectations for the scaling solutions, and addresses the typical mistakes that MDS/unfolding...
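As a minimal sketch of the classical (Torgerson) variant of MDS (the book covers far more general stress-based methods; the helper name below is our own), a Euclidean distance matrix is embedded exactly via eigendecomposition of the doubly centered squared-distance matrix:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed a symmetric distance matrix d
    into k dimensions via the top eigenvectors of the doubly centered
    squared-distance (Gram) matrix."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # Gram matrix
    w, v = np.linalg.eigh(b)                     # ascending eigenvalues
    idx = np.argsort(w)[::-1][:k]                # pick the top k
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Four corners of a unit square: their pairwise Euclidean distances
# are recovered exactly (up to rotation/reflection of the embedding).
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
x = classical_mds(d, k=2)
d_hat = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
print(np.allclose(d, d_hat))  # True
```

Classical MDS is exact only for genuinely Euclidean proximities; for general dissimilarities, iterative stress-minimizing methods of the kind the book describes are used instead.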
Anderson, J.T.
1994-01-01
Without the spin interactions the hadron masses within a multiplet are degenerate. The light-quark hadron degenerate-multiplet mass spectrum is extended from the 3-quark ground-state multiplets at J^P = 0^-, 1/2^+, 1^- to include the excited states which follow the spinorial decomposition of SU(2)×SU(2). The mass scales for the 4-, 5-, 6-, ... quark hadrons are obtained from the degenerate multiplet mass m_0/M = n^2/α with n = 4, 5, 6, ... The 4-, 5-, 6-, ... quark hadron degenerate-multiplet masses follow by splitting of the heavy-quark mass scales according to the spinorial decomposition of SU(2)×SU(2). (orig.)
Impedance Scaling and Impedance Control
Chou, W.; Griffin, J.
1997-06-01
When a machine becomes really large, such as the Very Large Hadron Collider (VLHC), of which the circumference could reach the order of megameters, beam instability could be an essential bottleneck. This paper studies the scaling of the instability threshold vs. machine size when the coupling impedance scales in a "normal" way. It is shown that the beam would be intrinsically unstable for the VLHC. As a possible solution to this problem, it is proposed to introduce local impedance inserts for controlling the machine impedance. In the longitudinal plane, this could be done by using a heavily detuned rf cavity (e.g., a biconical structure), which could provide large imaginary impedance with the right sign (i.e., inductive or capacitive) while keeping the real part small. In the transverse direction, a carefully designed variation of the cross section of a beam pipe could generate negative impedance that would partially compensate the transverse impedance in one plane.
THE MODERN RACISM SCALE: PSYCHOMETRIC
MANUEL CÁRDENAS
2007-08-01
An adaptation of McConahay, Hardee and Batts' (1981) modern racism scale is presented for the Chilean population and its psychometric properties (reliability and validity) are studied, along with its relationship with other relevant psychosocial variables in studies on prejudice and ethnic discrimination (authoritarianism, religiousness, political position, etc.), as well as with other forms of prejudice (gender stereotypes and homophobia). The sample consisted of 120 participants, students of psychology, resident in the city of Antofagasta (a geographical zone with a high number of Latin American immigrants). Our findings show that the scale seems to be a reliable instrument to measure prejudice towards Bolivian immigrants in our social environment. Likewise, important differences are detected between subjects with high and low scores on the psychosocial variables used.
Global scale groundwater flow model
Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc
2013-04-01
As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global-scale hydrological models does not include a groundwater flow component that is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minutes resolution. Aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moorsdorff, in press). We force the groundwater model with the output from the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated calculated groundwater heads and depths against available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information, and to estimate water table depths within acceptable accuracy in many parts of the world.
Cosmological origin of mass scales
Terazawa, H.
1981-01-01
We discuss the possibility that spontaneous breakdown of conformal invariance due to the very existence of our universe originates all the mass (or length) scales ranging from the Planck mass (approx. 10^19 GeV) to the Hubble constant (approx. 10^-42 GeV) and suggest that the photon may have a curvature-dependent mass which is as small as 10^-42 GeV. We also present a possible clue to Dirac's large number hypothesis. (orig.)
Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics
1998-12-31
In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)
Source Code Analysis Laboratory (SCALe)
2012-04-01
products (including services) and processes. The agency has also published ISO/IEC 17025:2005, General Requirements for the Competence of Testing... SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 also operate in accordance with ISO 9001. • NIST National... assessed by the accreditation body against all of the requirements of ISO/IEC 17025:2005, General requirements for the competence of testing and
Scaling Exponents in Financial Markets
Kim, Kyungsik; Kim, Cheol-Hyun; Kim, Soo Yong
2007-03-01
We study the dynamical behavior of four exchange rates in foreign exchange markets. A detrended fluctuation analysis (DFA) is applied to detect the long-range correlation embedded in the non-stationary time series. For our case, it is found that there exists a persistent long-range correlation in volatilities, which implies a deviation from the efficient market hypothesis. In particular, a crossover is shown to exist in the scaling behaviors of the volatilities.
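A bare-bones DFA sketch (our own illustration, not the authors' implementation) shows the scaling exponent α ≈ 0.5 expected for uncorrelated noise; persistent series, as reported for the volatilities, give α > 0.5:

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: fluctuation F(n) per window size n.
    The slope of log F(n) vs log n is the DFA exponent alpha."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    f = []
    for n in scales:
        m = len(y) // n
        segments = y[: m * n].reshape(m, n)    # non-overlapping windows
        t = np.arange(n)
        resid = []
        for s in segments:
            coeffs = np.polyfit(t, s, 1)       # linear trend in the window
            resid.append(s - np.polyval(coeffs, t))
        f.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.array(f)

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(white, scales)), 1)[0]
print(f"DFA exponent: {alpha:.2f}")  # near 0.5 for white noise
```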
Recent developments in complex scaling
Rescigno, T.N.
1980-01-01
Some recent developments in the use of complex basis function techniques to study resonance, as well as certain types of non-resonant scattering phenomena, are discussed. Complex scaling techniques and other closely related methods have continued to attract the attention of computational physicists and chemists and have now reached a point of development where meaningful calculations on many-electron atoms and molecules are beginning to appear feasible.
Turbulence Intensity Scaling: A Fugue
Basse, Nils T.
2018-01-01
We study streamwise turbulence intensity definitions using smooth- and rough-wall pipe flow measurements made in the Princeton Superpipe. Scaling of turbulence intensity with the bulk (and friction) Reynolds number is provided for the definitions. The turbulence intensity is proportional to the square root of the friction factor with the same proportionality constant for smooth- and rough-wall pipe flow. Turbulence intensity definitions providing the best description of the measurements are i...
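The proportionality between turbulence intensity and the square root of the friction factor can be illustrated numerically. The sketch below uses the explicit Haaland approximation for the Darcy friction factor; the constant C is an illustrative assumption, not the proportionality constant fitted to the Superpipe data.

```python
import math

def haaland_friction_factor(re, rel_roughness=0.0):
    """Darcy friction factor from the explicit Haaland approximation
    (turbulent pipe flow; rel_roughness = epsilon/D)."""
    return (-1.8 * math.log10((rel_roughness / 3.7) ** 1.11 + 6.9 / re)) ** -2

C = 0.3  # illustrative proportionality constant (assumption)

def turbulence_intensity(re, rel_roughness=0.0):
    """TI ~ C * sqrt(f): decreases with Reynolds number for smooth
    pipes and increases with wall roughness at fixed Re."""
    return C * math.sqrt(haaland_friction_factor(re, rel_roughness))
```

With a single constant C, the same relation covers smooth- and rough-wall flow, which is the qualitative point of the abstract.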
Transition physics and scaling overview
Carlstrom, T.N.
1996-01-01
This paper presents an overview of recent experimental progress towards understanding H-mode transition physics and scaling. Terminology and techniques for studying H-mode are reviewed and discussed. The model of shear E x B flow stabilization of edge fluctuations at the L-H transition is gaining wide acceptance and is further supported by observations of edge rotation on a number of new devices. Observations of poloidal asymmetries of edge fluctuations and dephasing of density and potential fluctuations after the transition pose interesting challenges for understanding H-mode physics. Dedicated scans to determine the scaling of the power threshold have now been performed on many machines. A clear B_t dependence is universally observed, but the dependence on the line-averaged density is complicated. Other dependencies are also reported. Studies of the effect of neutrals and error fields on the power threshold are under investigation. The ITER threshold database has matured and offers guidance to the power threshold scaling issues relevant to next-step devices. (author)
A laboratory scale fundamental time?
Mendes, R.V.
2012-01-01
The existence of a fundamental time (or fundamental length) has been conjectured in many contexts. However, the ''stability of physical theories principle'' seems to be the one that provides, through the tools of algebraic deformation theory, an unambiguous derivation of the stable structures that Nature might have chosen for its algebraic framework. It is well known that c and ℎ are the deformation parameters that stabilize the Galilean and the Poisson algebra. When the stability principle is applied to the Poincaré-Heisenberg algebra, two deformation parameters emerge, which define two time (or length) scales. In addition, for each of them there is a choice of plus or minus sign in the relevant commutators. One of the deformation length scales, related to the non-commutativity of momenta, is probably related to the Planck length scale, but the other might be much larger and already detectable in laboratory experiments. In this paper, this is used as a working hypothesis to look for physical effects that might settle this question. Phase-space modifications, resonances, interference, electron spin resonance and non-commutative QED are considered. (orig.)
Large scale cluster computing workshop
Dane Skow; Alan Silverman
2002-01-01
Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, to be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community
Dimensional scaling for quasistationary states
Kais, S.; Herschbach, D.R.
1993-01-01
Complex energy eigenvalues which specify the location and width of quasibound or resonant states are computed to good approximation by a simple dimensional scaling method. As applied to bound states, the method involves minimizing an effective potential function in appropriately scaled coordinates to obtain exact energies in the D→∞ limit, then computing approximate results for D=3 by a perturbation expansion in 1/D about this limit. For resonant states, the same procedure is used, with the radial coordinate now allowed to be complex. Five examples are treated: the repulsive exponential potential (e^-r); a squelched harmonic oscillator (r^2 e^-r); the inverted Kratzer potential (r^-1 repulsion plus r^-2 attraction); the Lennard-Jones potential (r^-12 repulsion, r^-6 attraction); and quasibound states for the rotational spectrum of the hydrogen molecule (X ^1Σ_g^+, v=0, J=0 to 50). Comparisons with numerical integrations and other methods show that the much simpler dimensional scaling method, carried to second order (terms in 1/D^2), yields good results over an extremely wide range of the ratio of level widths to spacings. Other methods have not yet evaluated the very broad H_2 rotational resonances reported here (J>39), which lie far above the centrifugal barrier
Transition physics and scaling overview
Carlstrom, T.N.
1995-12-01
This paper presents an overview of recent experimental progress towards understanding H-mode transition physics and scaling. Terminology and techniques for studying H-mode are reviewed and discussed. The model of shear E x B flow stabilization of edge fluctuations at the L-H transition is gaining wide acceptance and is further supported by observations of edge rotation on a number of new devices. Observations of poloidal asymmetries of edge fluctuations and dephasing of density and potential fluctuations after the transition pose interesting challenges for understanding H-mode physics. Dedicated scans to determine the scaling of the power threshold have now been performed on many machines. A clear B_t dependence is universally observed, but the dependence on the line-averaged density is complicated. Other dependencies are also reported. Studies of the effect of neutrals and error fields on the power threshold are under investigation. The ITER threshold database has matured and offers guidance to the power threshold scaling issues relevant to next-step devices
The Principle of Social Scaling
Paulo L. dos Santos
2017-01-01
This paper identifies a general class of economic processes capable of generating the first-moment constraints implicit in the observed cross-sectional distributions of a number of economic variables: processes of social scaling. Across a variety of settings, the outcomes of economic competition reflect the normalization of individual values of certain economic quantities by average or social measures of themselves. The resulting socioreferential processes establish systematic interdependences among individual values of important economic variables, which under certain conditions take the form of emergent first-moment constraints on their distributions. The paper postulates a principle describing this systemic regulation of socially scaled variables and illustrates its empirical purchase by showing how capital- and labor-market competition can give rise to patterns of social scaling that help account for the observed distributions of Tobin’s q and wage income. The paper’s discussion embodies a distinctive approach to understanding and investigating empirically the relationship between individual agency and structural determinations in complex economic systems and motivates the development of observational foundations for aggregative, macrolevel economic analysis.
Large scale cross hole testing
Ball, J.K.; Black, J.H.; Doe, T.
1991-05-01
As part of the Site Characterisation and Validation programme, the results of the large scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to be in a narrow range of two to three orders of magnitude. Test design did not allow fracture zones to be tested individually. This could be improved by testing the high hydraulic conductivity regions specifically. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could either be because there is no hydraulic connection, or because there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)
Development of emotional stability scale
M Chaturvedi
2010-01-01
Background: Emotional stability remains the central theme in personality studies. The concept of stable emotional behavior at any level is that which reflects the fruits of normal emotional development. The study aims at the development of an emotional stability scale. Materials and Methods: Based on the available literature, the components of emotional stability were identified and 250 items were developed, covering each component. Two-stage elimination of items was carried out, i.e. through judges' opinions and item analysis. Results: Fifty items with the highest 't' values, covering 5 dimensions of emotional stability, viz. pessimism vs. optimism, anxiety vs. calm, aggression vs. tolerance, dependence vs. autonomy, and apathy vs. empathy, were retained in the final scale. Reliability as checked by Cronbach's alpha was .81 and by the split-half method it was .79. Content validity and construct validity were checked. Norms are given in the form of cumulative percentages. Conclusion: Based on psychometric principles, a 50-item, self-administered 5-point Likert-type rating scale was developed for the measurement of emotional stability.
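Reliability coefficients of the kind reported here (Cronbach's alpha and split-half) can be computed directly from an item-score matrix. A minimal sketch on simulated respondent data; the simulated scores are an assumption for illustration, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def split_half(items):
    """Odd-even split-half reliability with Spearman-Brown correction."""
    items = np.asarray(items, dtype=float)
    r = np.corrcoef(items[:, ::2].sum(axis=1),
                    items[:, 1::2].sum(axis=1))[0, 1]
    return 2 * r / (1 + r)

# Simulated data: 200 respondents, 10 items driven by one latent trait.
rng = np.random.default_rng(42)
trait = rng.standard_normal(200)
items = trait[:, None] + 0.8 * rng.standard_normal((200, 10))
alpha = cronbach_alpha(items)
rel_sh = split_half(items)
```

Both statistics rise toward 1 as items share more variance with the latent trait, which is why the two reported coefficients (.81 and .79) are close to one another.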
Interest in Aesthetic Rhinoplasty Scale.
Naraghi, Mohsen; Atari, Mohammad
2017-04-01
Interest in cosmetic surgery is increasing, with rhinoplasty being one of the most popular surgical procedures. It is essential that surgeons identify patients with existing psychological conditions before any procedure. This study aimed to develop and validate the Interest in Aesthetic Rhinoplasty Scale (IARS). Four studies were conducted to develop the IARS and to evaluate different indices of validity (face, content, construct, criterion, and concurrent validities) and reliability (internal consistency, split-half coefficient, and temporal stability) of the scale. The four study samples included a total of 463 participants. Statistical analysis revealed satisfactory psychometric properties in all samples. Scores on the IARS were negatively correlated with self-esteem scores (r = -0.296) and positively correlated with social dysfunction (r = 0.268; p < 0.01) and depression (r = 0.308; p < 0.01). The internal and test-retest coefficients of consistency were found to be high (α = 0.93; intraclass coefficient = 0.94). Rhinoplasty patients were found to have significantly higher IARS scores than nonpatients (p < 0.001). Findings of the present studies provided evidence for face, content, construct, criterion, and concurrent validities and internal and test-retest reliability of the IARS. This evidence supports the use of the scale in clinical and research settings.
Temporal scaling in information propagation
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Although the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability that a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing by 9.7% in incremental customers.
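A power-law decay of the propagation probability with interaction latency suggests estimating the decay exponent by maximum likelihood. A sketch on synthetic latencies; the exponent value, lower cutoff and sample size are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 1.5  # assumed true decay exponent: p(t) ~ t**-gamma for t >= 1

# Inverse-CDF sampling from the normalized density (gamma - 1) * t**-gamma.
u = rng.random(100_000)
t = (1.0 - u) ** (-1.0 / (gamma - 1.0))

# Hill / maximum-likelihood estimator of the power-law exponent.
gamma_hat = 1.0 + len(t) / np.log(t).sum()
```

The MLE form avoids the binning bias of fitting a straight line to a log-log histogram, which matters when estimating exponents from heavy-tailed latency data.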
Scale-invariant gravity: geometrodynamics
Anderson, Edward; Barbour, Julian; Foster, Brendan; Murchadha, Niall O
2003-01-01
We present a scale-invariant theory, conformal gravity, which closely resembles the geometrodynamical formulation of general relativity (GR). While previous attempts to create scale-invariant theories of gravity have been based on Weyl's idea of a compensating field, our direct approach dispenses with this and is built by extension of the method of best matching w.r.t. scaling developed in the parallel particle dynamics paper by one of the authors. In spatially compact GR, there is an infinity of degrees of freedom that describe the shape of 3-space which interact with a single volume degree of freedom. In conformal gravity, the shape degrees of freedom remain, but the volume is no longer a dynamical variable. Further theories and formulations related to GR and conformal gravity are presented. Conformal gravity is successfully coupled to scalars and the gauge fields of nature. It should describe the solar system observations as well as GR does, but its cosmology and quantization will be completely different
Statistical and Judgmental Criteria for Scale Purification
Wieland, Andreas; Durach, Christian F.; Kembro, Joakim
2017-01-01
of scale purification, to critically analyze the current state of scale purification in supply chain management (SCM) research and to provide suggestions for advancing the scale-purification process. Design/methodology/approach A framework for making scale-purification decisions is developed and used...
Scaling properties of foreign exchange volatility
Gençay, R.; Selçuk, F.; Whitcher, B.
2001-01-01
In this paper, we investigate the scaling properties of foreign exchange volatility. Our methodology is based on a wavelet multi-scaling approach which decomposes the variance of a time series and the covariance between two time series on a scale by scale basis through the application of a discrete
Evaluating the impact of farm scale innovation at catchment scale
van Breda, Phelia; De Clercq, Willem; Vlok, Pieter; Querner, Erik
2014-05-01
Hydrological modelling lends itself to other disciplines very well, normally as a process based system that acts as a catalogue of events taking place. These hydrological models are spatial-temporal in their design and are generally well suited for what-if situations in other disciplines. Scaling should therefore be a function of the purpose of the modelling. Process is always linked with scale or support, but the temporal resolution can affect the results if the spatial scale is not suitable. The use of hydrological response units tends to lump area around physical features but disregards farm boundaries. Farm boundaries are often the more crucial uppermost resolution needed to gain more value from hydrological modelling. In the Letaba Catchment of South Africa, we find a wide variety of land uses, different models of ownership, and farming systems ranging from large commercial farms to small subsistence farming. All of these have the same basic right to water, but water distribution in the catchment is somewhat of a problem. Since water quantity is also a problem, the water supply systems need to take into account that valuable production areas not be left without water. Clearly hydrological modelling should therefore be sensitive to specific land use. As a measure of productivity, a system of small farmer production evaluation was designed. This activity presents a dynamic system outside hydrological modelling that is generally not being considered inside hydrological modelling but depends on hydrological modelling. For sustainable development, a number of important concepts needed to be aligned with activities in this region, and the regulatory actions also need to be adhered to. This study aimed at aligning the activities in a region to the vision and objectives of the regulatory authorities. South Africa's system of socio-economic development planning is complex and mostly ineffective. There are many regulatory authorities involved, often with unclear
Development of a Facebook Addiction Scale.
Andreassen, Cecilie Schou; Torsheim, Torbjørn; Brunborg, Geir Scott; Pallesen, Ståle
2012-04-01
The Bergen Facebook Addiction Scale (BFAS), initially a pool of 18 items, three reflecting each of the six core elements of addiction (salience, mood modification, tolerance, withdrawal, conflict, and relapse), was constructed and administered to 423 students together with several other standardized self-report scales (Addictive Tendencies Scale, Online Sociability Scale, Facebook Attitude Scale, NEO-FFI, BIS/BAS scales, and Sleep questions). The item within each of the six addiction elements with the highest corrected item-total correlation was retained in the final scale. The factor structure of the scale was good (RMSEA = .046, CFI = .99) and coefficient alpha was .83. The 3-week test-retest reliability coefficient was .82. The scores converged with scores for other scales of Facebook activity. They were also positively related to Neuroticism and Extraversion, and negatively related to Conscientiousness. High scores on the new scale were associated with delayed bedtimes and rising times.
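The corrected item-total correlation used for item retention correlates each item with the total of the remaining items. A minimal sketch; the simulated item scores, driven by one latent trait with increasing item noise, are an illustrative assumption.

```python
import numpy as np

def corrected_item_total(items):
    """Correlation of each item with the total score of the remaining
    items (the 'corrected' item-total correlation)."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Simulated data: later items carry progressively more noise, so their
# corrected item-total correlations should come out lower.
rng = np.random.default_rng(7)
trait = rng.standard_normal(500)
noise = rng.standard_normal((500, 6))
items = trait[:, None] + (0.5 + 0.4 * np.arange(6)) * noise
r = corrected_item_total(items)
```

Excluding the item from its own total avoids the inflation that arises when an item is correlated with a sum that contains itself, which is why the "corrected" form is preferred for item selection.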
The Origin of Scales and Scaling Laws in Star Formation
Guszejnov, David; Hopkins, Philip; Grudich, Michael
2018-01-01
Star formation is one of the key processes of cosmic evolution, as it influences phenomena from the formation of galaxies to the formation of planets and the development of life. Unfortunately, there is no comprehensive theory of star formation, despite intense effort on both the theoretical and observational sides, due to the large amount of complicated, non-linear physics involved (e.g. MHD, gravity, radiation). A possible approach is to formulate simple, easily testable models that allow us to draw a clear connection between phenomena and physical processes. In the first part of the talk I will focus on the origin of the IMF peak, the characteristic scale of stars. There is debate in the literature about whether the initial conditions of isothermal turbulence could set the IMF peak. Using detailed numerical simulations, I will demonstrate that this is not the case: the initial conditions are "forgotten" through the fragmentation cascade. Additional physics (e.g. feedback) is required to set the IMF peak. In the second part I will use simulated galaxies from the Feedback in Realistic Environments (FIRE) project to show that most star formation theories are unable to reproduce the near-universal IMF peak of the Milky Way. Finally, I will present analytic arguments (supported by simulations) that a large number of observables (e.g. IMF slope) are the consequences of scale-free structure formation and are (to first order) unsuitable for differentiating between star formation theories.
A scale invariance criterion for LES parametrizations
Urs Schaefer-Rolffs
2015-01-01
Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant. The criterion also proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance, the energy cascade is due to horizontal advection, and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.
Proposing a tornado watch scale
Mason, Jonathan Brock
This thesis provides an overview of the language used in tornado safety recommendations from various sources, develops a rubric for scaled tornado safety recommendations, and describes the subsequent development and testing of a tornado watch scale. The rubric is used to evaluate tornado refuge/shelter adequacy responses of Tuscaloosa residents gathered following the April 27, 2011 Tuscaloosa, Alabama EF4 tornado. There was a significant difference in the counts of refuge adequacy for Tuscaloosa residents when holding the locations during the April 27th tornado constant and comparing adequacy ratings for weak (EF0-EF1), strong (EF2-EF3) and violent (EF4-EF5) tornadoes. There was also a significant difference when comparing future tornado refuge plans of those same participants to the adequacy ratings for weak, strong and violent tornadoes. The tornado refuge rubric is then revised into a six-class, hierarchical Tornado Watch Scale (TWS), from Level 0 to Level 5, based on the likelihood of high-impact or low-impact severe weather events containing weak, strong or violent tornadoes. These levels represent maximum expected tornado intensity and include tornado safety recommendations from the tornado refuge rubric. Audio recordings similar to those used in current National Oceanic and Atmospheric Administration (NOAA) weather radio communications were developed to correspond to three levels of the TWS, a current Storm Prediction Center (SPC) tornado watch, and a particularly dangerous situation (PDS) tornado watch. These were then used in interviews of Alabama residents to determine how changes to the information contained in the watch statements would affect each participant's tornado safety actions and perception of event danger. Results from interview participants (n=38) indicate a strong preference (97.37%) for the TWS when compared to current tornado watch and PDS tornado watch statements. Results also show the TWS elicits more adequate safety decisions from participants
Leege, P.F.A. de
1991-05-01
NSLINK is a set of computer codes that couples the NJOY cross-section generation code to the SCALE-3 code system (using the AMPX-2 master library format) while retaining the Nordheim resolved-resonance treatment option. The following codes are included in NSLINK: XLACSR, a stripped-down version of the XLACS-2 code; MILER, which converts NJOY output (GENDF format) to AMPX-2 master format; UNITABR, a revised version of the UNITAB code; and BONAMI, for which certain subroutines that replace some subroutines in the BONAMI code are included in the package, in order to take into account the combination of Bondarenko and Nordheim resonance treatments. (author). 6 refs., 1 fig
Sand-Jensen, K.
2006-01-01
Continuous water flow is a unique feature of streams and distinguishes them from all other ecosystems. The main flow is always downstream, but it varies in time and space and can be difficult to measure and describe. The interest of hydrologists, geologists, biologists and farmers in water flow, and its physical impact, depends on whether the main focus is on the entire stream system, the adjacent fields, the individual reaches or the habitats of different species. It is important to learn how to manage flow at all scales, in order to understand the ecology of streams and the biology...
Accentuation-suppression and scaling
Sørensen, Thomas Alrik; Bundesen, Claus
2012-01-01
The limitations of the visual short-term memory (VSTM) system have become an increasingly popular field of study. One line of inquiry has focused on the way attention selects objects for encoding into VSTM. Using the framework of the Theory of Visual Attention (TVA; Bundesen, 1990 Psychological... a scaling mechanism modulating the decision bias of the observer and also through an accentuation-suppression mechanism that modulates the degree of subjective relevance of objects, contracting attention around fewer, highly relevant objects while suppressing less relevant objects. These mechanisms may...
The Scales of Gravitational Lensing
Francesco De Paolis
2016-03-01
A century after the formulation of the general theory of relativity, the phenomenon of gravitational lensing is still an extremely powerful method of investigation in astrophysics and cosmology. Indeed, it is adopted to study the distribution of the stellar component in the Milky Way, to study dark matter and dark energy on very large scales, and even to discover exoplanets. Moreover, thanks to technological developments, it will allow the measurement of the physical parameters (mass, angular momentum and electric charge) of supermassive black holes in the centers of our own and nearby galaxies.
Boduch, Adam
2015-01-01
Have you ever come up against an application that felt like it was built on sand? Maybe you've been tasked with creating an application that needs to last longer than a year before a complete re-write? If so, JavaScript at Scale is your missing documentation for maintaining scalable architectures. There's no prerequisite framework knowledge required for this book, however, most concepts presented throughout are adaptations of components found in frameworks such as Backbone, AngularJS, or Ember. All code examples are presented using ECMAScript 6 syntax, to make sure your applications are ready
Scaling law in laboratory astrophysics
Xia Jiangfan; Zhang Jie
2001-01-01
The use of state-of-the-art lasers makes it possible to produce, in the laboratory, the extreme conditions similar to those in astrophysical processes. The introduction of astrophysics-relevant ideas in laser-plasma interaction experiments is propitious to the understanding of astrophysical phenomena. However, the great difference between laser-produced plasma and astrophysical objects makes it awkward to model the latter by laser-plasma experiments. The author presents the physical reasons for modeling astrophysical plasmas by laser plasmas, connecting these two kinds of plasmas by scaling laws. This allows the creation of experimental test beds where observation and models can be quantitatively compared with laboratory data
Outer scale of atmospheric turbulence
Lukin, Vladimir P.
2005-10-01
In the early 1970s, scientists in Italy (A. Consortini, M. Bertolotti, L. Ronchi), the USA (R. Buser, Ochs, S. Clifford) and the USSR (V. Pokasov, V. Lukin) almost simultaneously discovered the phenomenon of deviation from the power law and the effect of saturation for the structure phase function. Over a period of 35 years we have successively investigated the effect of the low-frequency spectral range of atmospheric turbulence on optical characteristics. The influence of turbulence models, as well as of the outer scale of turbulence, on the characteristics of telescopes and systems of laser beam formation has also been determined.
Scaling of interfacial jump conditions
Quezada G, S.; Vazquez R, A.; Espinosa P, G.
2015-09-01
To model the behavior of a nuclear reactor accurately, it is necessary to have balance models that take into account the different phenomena occurring in the reactor. These balances have to be coupled together through boundary conditions. The boundary conditions have been studied, and different treatments have been given to the interface. This paper gives a brief description of some of the interfacial jump conditions that have been proposed in recent years. The scaling of an interfacial jump condition is also proposed, for coupling the different materials that are in contact within a nuclear reactor. (Author)
Drift-Scale Radionuclide Transport
Houseworth, J.
2004-01-01
The purpose of this model report is to document the drift scale radionuclide transport model, taking into account the effects of emplacement drifts on flow and transport in the vicinity of the drift, which are not captured in the mountain-scale unsaturated zone (UZ) flow and transport models ''UZ Flow Models and Submodels'' (BSC 2004 [DIRS 169861]), ''Radionuclide Transport Models Under Ambient Conditions'' (BSC 2004 [DIRS 164500]), and ''Particle Tracking Model and Abstraction of Transport Process'' (BSC 2004 [DIRS 170041]). The drift scale radionuclide transport model is intended to be used as an alternative model for comparison with the engineered barrier system (EBS) radionuclide transport model ''EBS Radionuclide Transport Abstraction'' (BSC 2004 [DIRS 169868]). For that purpose, two alternative models have been developed for drift-scale radionuclide transport. One of the alternative models is a dual continuum flow and transport model called the drift shadow model. The effects of variations in the flow field and fracture-matrix interaction in the vicinity of a waste emplacement drift are investigated through sensitivity studies using the drift shadow model (Houseworth et al. 2003 [DIRS 164394]). In this model, the flow is significantly perturbed (reduced) beneath the waste emplacement drifts. However, comparisons of transport in this perturbed flow field with transport in an unperturbed flow field show similar results if the transport is initiated in the rock matrix. This has led to a second alternative model, called the fracture-matrix partitioning model, that focuses on the partitioning of radionuclide transport between the fractures and matrix upon exiting the waste emplacement drift. The fracture-matrix partitioning model computes the partitioning, between fractures and matrix, of diffusive radionuclide transport from the invert (for drifts without seepage) into the rock water. The invert is the structure constructed in a drift to provide the floor of the
Water content estimated from point scale to plot scale
Akyurek, Z.; Binley, A. M.; Demir, G.; Abgarmi, B.
2017-12-01
Soil moisture controls the partitioning of rainfall into infiltration and runoff. Here we investigate measurements of soil moisture using a range of techniques spanning different spatial scales. In order to understand soil water content in a test basin, 512 km2 in area, in the south of Turkey, a Cosmic Ray CRS200B soil moisture probe was installed at an elevation of 1459 m, and an ML3 ThetaProbe (CS 616) soil moisture sensor was installed at 5 cm depth to obtain continuous soil moisture. Neutron count measurements were corrected for changes in atmospheric pressure, atmospheric water vapour and the intensity of the incoming neutron flux. The calibration of the volumetric soil moisture was performed from the laboratory analysis: the bulk density varies between 1.719 g/cm3 and 1.390 g/cm3, and the dominant soil textures are silty clay loam and silt loam. The water content reflectometer was calibrated for soil-specific conditions, and soil moisture estimates were also corrected with respect to soil temperature. In order to characterize the subsurface, soil electrical resistivity tomography (ERT) was used. Wenner and Schlumberger array geometries were used, with electrode spacing varied from 1 m to 5 m along 40 m and 200 m profiles. From the inversions of the ERT data it is apparent that within 50 m of the CRS200B the soil is moderately resistive to a depth of 2 m and more conductive at greater depths. At greater distances from the CRS200B, the ERT results indicate more resistive soils. In addition to the ERT surveys, ground penetrating radar (GPR) surveys using a common mid-point configuration were carried out with 200 MHz antennas. The volumetric soil moisture obtained from GPR appears to overestimate that based on TDR observations. The values obtained from the CS616 (at the point scale) and the CRS200B (at the mesoscale) are compared with the values obtained at the plot scale. For the field study dates (20-22.06.2017) the volumetric moisture contents obtained from the CS616 were 25.14%, 25.22% and 25
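The neutron-count corrections mentioned in this abstract follow standard multiplicative forms from the cosmic-ray neutron sensing literature. The sketch below uses typical published coefficient values (beta, alpha) as illustrative assumptions only; they are not the coefficients used in this particular study.

```python
# Standard cosmic-ray neutron count corrections (typical literature
# forms; coefficient values here are common published defaults and are
# NOT taken from the study described above).
import math

def correct_counts(N_raw, P, P_ref, h, h_ref, I, I_ref,
                   beta=0.0076, alpha=0.0054):
    """Correct a raw neutron count rate for barometric pressure,
    atmospheric water vapour, and incoming neutron intensity.

    P, P_ref : station and reference pressure (hPa)
    h, h_ref : absolute humidity at site and reference (g/m^3)
    I, I_ref : incoming neutron flux and its reference value
    beta     : barometric coefficient (1/hPa), ~0.0076 typical
    alpha    : humidity coefficient (m^3/g), ~0.0054 typical
    """
    f_p = math.exp(beta * (P - P_ref))  # pressure correction
    f_w = 1.0 + alpha * (h - h_ref)     # water-vapour correction
    f_i = I_ref / I                     # incoming-flux correction
    return N_raw * f_p * f_w * f_i

# Example: higher-than-reference pressure and humid air both suppress
# counts, so the corrected rate exceeds the raw rate.
N = correct_counts(N_raw=1500, P=1013.0, P_ref=1008.0,
                   h=12.0, h_ref=0.0, I=150.0, I_ref=156.0)
print(round(N, 1))
```

The corrected rate, not the raw count, is what gets converted to volumetric soil moisture via the site calibration.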
Beichner, Robert
2015-03-01
The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large-enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students to learn and succeed not only in acquiring content, but also in practicing important 21st-century skills like problem solving, communication, and teamwork. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large-enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicate highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).
The Regret/Disappointment Scale
Francesco Marcatto
2008-01-01
The present article investigates the effectiveness of methods traditionally used to distinguish between the emotions of regret and disappointment and presents a new method, the Regret and Disappointment Scale (RDS), for assessing the two emotions in decision-making research. The validity of the RDS was tested in three studies. Study 1 used two scenarios, one prototypical of regret and the other of disappointment, to test and compare traditional methods (``How much regret do you feel'' and ``How much disappointment do you feel'') with the RDS. Results showed that only the RDS clearly differentiated between the constructs of regret and disappointment. Study 2 confirmed the validity of the RDS in a real-life scenario, in which both feelings of regret and disappointment could be experienced. Study 2 also demonstrated that the RDS can discriminate between regret and disappointment with results similar to those obtained by using a context-specific scale. Study 3 showed the advantages of the RDS over the traditional methods in gambling situations commonly used in decision-making research, and provided evidence for the convergent validity of the RDS.
Dynamic scaling in natural swarms
Cavagna, Andrea; Conti, Daniele; Creato, Chiara; Del Castello, Lorenzo; Giardina, Irene; Grigera, Tomas S.; Melillo, Stefania; Parisi, Leonardo; Viale, Massimiliano
2017-09-01
Collective behaviour in biological systems presents theoretical challenges beyond the borders of classical statistical physics. The lack of concepts such as scaling and renormalization is particularly problematic, as it forces us to negotiate details whose relevance is often hard to assess. In an attempt to improve this situation, we present here experimental evidence of the emergence of dynamic scaling laws in natural swarms of midges. We find that spatio-temporal correlation functions in different swarms can be rescaled by using a single characteristic time, which grows with the correlation length with a dynamical critical exponent z ~ 1, a value not found in any other standard statistical model. To check whether out-of-equilibrium effects may be responsible for this anomalous exponent, we run simulations of the simplest model of self-propelled particles and find z ~ 2, suggesting that natural swarms belong to a novel dynamic universality class. This conclusion is strengthened by experimental evidence of the presence of non-dissipative modes in the relaxation, indicating that previously overlooked inertial effects are needed to describe swarm dynamics. The absence of a purely dissipative regime suggests that natural swarms undergo a near-critical censorship of hydrodynamics.
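The dynamical critical exponent z discussed in this abstract relates the characteristic relaxation time to the correlation length via tau ~ xi^z, so z can be read off as the slope of log(tau) versus log(xi). The sketch below recovers z from such a regression; the (xi, tau) pairs are synthetic and purely illustrative, not swarm data.

```python
# Estimate the dynamical critical exponent z from tau ~ xi**z by
# regressing log(tau) on log(xi). Data are synthetic, generated with z = 1.
import numpy as np

xi = np.array([0.1, 0.2, 0.4, 0.8, 1.6])  # correlation lengths (arb. units)
tau = 0.5 * xi**1.0                       # characteristic times, z = 1
z_est, log_prefactor = np.polyfit(np.log(xi), np.log(tau), 1)
print(round(z_est, 3))  # slope of the log-log fit recovers z
```

In practice the characteristic times come from the decay of the spatio-temporal correlation functions of each swarm, and the scatter of the points around the fitted line tests the scaling hypothesis itself.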
Creating Large Scale Database Servers
Becla, Jacek
2001-01-01
The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.
Goethite Bench-scale and Large-scale Preparation Tests
Josephson, Gary B.; Westsik, Joseph H.
2011-10-23
The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate (⁹⁹TcO₄⁻) can be reduced and captured into a solid solution of α-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for ⁹⁹Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO₄⁻) to Tc(IV) by reaction with the
Scale Interaction in a Mixing Layer: The Role of the Large-Scale Gradients
Fiscaletti, Daniele
2015-08-23
The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales, already observed in other works, depends on the crosswise location: large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and with a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, in addition to the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has been investigated.
Dimensional analysis, scaling and fractals
Timm, L.C.; Reichardt, K.; Oliveira Santos Bacchi, O.
2004-01-01
Dimensional analysis refers to the study of the dimensions that characterize physical entities, like mass, force and energy. Classical mechanics is based on three fundamental entities, with dimensions MLT, the mass M, the length L and the time T. The combination of these entities gives rise to derived entities, like volume, speed and force, of dimensions L³, LT⁻¹, MLT⁻², respectively. In other areas of physics, four other fundamental entities are defined, among them the temperature θ and the electrical current I. The parameters that characterize physical phenomena are related among themselves by laws, in general of quantitative nature, in which they appear as measures of the considered physical entities. The measure of an entity is the result of its comparison with another one, of the same type, called unit. Maps are also drawn in scale, for example, in a scale of 1:10,000, 1 cm² of paper can represent 10,000 m² in the field. Entities that differ in scale cannot be compared in a simple way. Fractal geometry, in contrast to the Euclidean geometry, admits fractional dimensions. The term fractal is defined in Mandelbrot (1982) as coming from the Latin fractus, derived from frangere which signifies to break, to form irregular fragments. The term fractal is opposite to the term algebra (from the Arabic: jabara) which means to join, to put together the parts. For Mandelbrot, fractals are non topologic objects, that is, objects which have as their dimension a real, non integer number, which exceeds the topologic dimension. For the topologic objects, or Euclidean forms, the dimension is an integer (0 for the point, 1 for a line, 2 for a surface, and 3 for a volume). The fractal dimension of Mandelbrot is a measure of the degree of irregularity of the object under consideration. It is related to the speed by which the estimate of the measure of an object increases as the measurement scale decreases. An object normally taken as uni-dimensional, like a piece of a
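The bookkeeping of base and derived dimensions described in this abstract is easy to mechanize. The sketch below (illustrative only, not from the paper) represents each physical quantity by its tuple of integer exponents over the base entities (M, L, T) and confirms, for example, that force has dimensions MLT⁻².

```python
# Minimal dimensional-analysis sketch: each physical quantity carries a
# tuple of integer exponents over the base entities (M, L, T).
def mul(a, b):
    # Multiplying two quantities adds their dimension exponents.
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    # Dividing two quantities subtracts their dimension exponents.
    return tuple(x - y for x, y in zip(a, b))

MASS, LENGTH, TIME = (1, 0, 0), (0, 1, 0), (0, 0, 1)

volume       = mul(mul(LENGTH, LENGTH), LENGTH)  # L^3
speed        = div(LENGTH, TIME)                 # L T^-1
acceleration = div(speed, TIME)                  # L T^-2
force        = mul(MASS, acceleration)           # M L T^-2

print(volume)  # (0, 3, 0)
print(speed)   # (0, 1, -1)
print(force)   # (1, 1, -2)
```

Fractal dimensions would require rational rather than integer exponents, but the additive bookkeeping is the same.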
Scaling criteria for rock dynamic experiments
Crowley, Barbara K [Lawrence Radiation Laboratory, University of California, Livermore, CA (United States)
1970-05-01
A set of necessary conditions for performing scaled rock dynamics experiments is derived from the conservation equations of continuum mechanics. Performing scaled experiments in two different materials is virtually impossible because of the scaling restrictions imposed by two equations of state. However, performing dynamically scaled experiments in the same material is possible if time and distance use the same scaling factor and if the effects of gravity are insignificant. When gravity becomes significant, dynamic scaling is no longer possible. To illustrate these results, example calculations of megaton and kiloton experiments are considered. (author)
Scaling laws for modeling nuclear reactor systems
Nahavandi, A.N.; Castellana, F.S.; Moradkhanian, E.N.
1979-01-01
Scale models are used to predict the behavior of nuclear reactor systems during normal and abnormal operation as well as under accident conditions. Three types of scaling procedures are considered: time-reducing, time-preserving volumetric, and time-preserving idealized model/prototype. The necessary relations between the model and the full-scale unit are developed for each scaling type. Based on these relationships, it is shown that scaling procedures can lead to distortion in certain areas, which are discussed. It is advised that, depending on the specific unit to be scaled, a suitable procedure be chosen to minimize model-prototype distortion.
Enabling department-scale supercomputing
Greenberg, D.S.; Hart, W.E.; Phillips, C.A.
1997-11-01
The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.
Drug delivery across length scales.
Delcassian, Derfogail; Patel, Asha K; Cortinas, Abel B; Langer, Robert
2018-02-20
Over the last century, there has been a dramatic change in the nature of therapeutic, biologically active molecules available to treat disease. Therapies have evolved from extracted natural products towards rationally designed biomolecules, including small molecules, engineered proteins and nucleic acids. The use of potent drugs which target specific organs, cells or biochemical pathways, necessitates new tools which can enable controlled delivery and dosing of these therapeutics to their biological targets. Here, we review the miniaturisation of drug delivery systems from the macro to nano-scale, focussing on controlled dosing and controlled targeting as two key parameters in drug delivery device design. We describe how the miniaturisation of these devices enables the move from repeated, systemic dosing, to on-demand, targeted delivery of therapeutic drugs and highlight areas of focus for the future.
Large scale biomimetic membrane arrays
Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg
2009-01-01
To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 μm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further development of sensitive biosensor assays.
Particle Bed Reactor scaling relationships
Slovik, G.; Araj, K.; Horn, F.L.; Ludewig, H.; Benenati, R.
1987-01-01
Scaling relationships for Particle Bed Reactors (PBRs) are discussed. The particular applications are short duration systems, i.e., for propulsion or burst power. Particle Bed Reactors can use a wide selection of different moderators and reflectors and can be designed for a wide range of powers and bed power densities. Additional design considerations include the effect of varying the number of fuel elements, the outlet Mach number in the hot gas channel, etc. All of these variables and options result in a wide range of reactor weights and performance. Extremely light weight reactors (approximately 1 kg/MW) are possible with the appropriate choice of moderator/reflector and power density. Such systems are very attractive for propulsion systems where parasitic weight has to be minimized.
Scale-invariant extended inflation
Holman, R.; Kolb, E.W.; Vadas, S.L.; Wang, Y.
1991-01-01
We propose a model of extended inflation which makes use of the nonlinear realization of scale invariance involving the dilaton coupled to an inflaton field whose potential admits a metastable ground state. The resulting theory resembles the Jordan-Brans-Dicke version of extended inflation. However, quantum effects, in the form of the conformal anomaly, generate a mass for the dilaton, thus allowing our model to evade the problems of the original version of extended inflation. We show that extended inflation can occur for a wide range of inflaton potentials with no fine-tuning of dimensionless parameters required. Furthermore, we also find that it is quite natural for the extended-inflation period to be followed by an epoch of slow-rollover inflation as the dilaton settles down to the minimum of its induced potential
Size scaling of static friction.
Braun, O M; Manini, Nicola; Tosatti, Erio
2013-02-22
Sliding friction across a thin soft lubricant film typically occurs by stick slip, the lubricant fully solidifying at stick, yielding and flowing at slip. The static friction force per unit area preceding slip is known from molecular dynamics (MD) simulations to decrease with increasing contact area. That makes the large-size fate of stick slip unclear and unknown; its possible vanishing is important as it would herald smooth sliding with a dramatic drop of kinetic friction at large size. Here we formulate a scaling law of the static friction force, which for a soft lubricant is predicted to decrease as f_m + Δf/A^γ for increasing contact area A, with γ > 0. Our main finding is that the value of f_m, controlling the survival of stick slip at large size, can be evaluated by simulations of comparably small size. MD simulations of soft lubricant sliding are presented, which verify this theory.
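The practical point of the scaling law f_s(A) = f_m + Δf/A^γ is that the large-area asymptote f_m can be extrapolated from small-system data. If the exponent γ is taken as known, the law is linear in x = A^(-γ), so f_m is simply the intercept of a straight-line fit. The sketch below demonstrates this on synthetic data; every numerical value is an illustrative assumption, not a result from the paper.

```python
# Extrapolate the large-area static friction limit f_m from small-area
# data, assuming the exponent gamma is known: f_s = f_m + df * A**(-gamma)
# is linear in x = A**(-gamma), so f_m is the fit intercept.
import numpy as np

gamma = 0.5
rng = np.random.default_rng(0)
A = np.logspace(1, 5, 25)                        # contact areas (arb. units)
f = 0.10 + 2.0 * A**(-gamma)                     # synthetic data, f_m = 0.10
f_obs = f + 0.005 * rng.standard_normal(A.size)  # small measurement noise

df_est, f_m_est = np.polyfit(A**(-gamma), f_obs, 1)
print(round(f_m_est, 2))  # intercept extrapolates the large-area limit f_m
```

If γ is not known a priori, the same idea works with a three-parameter nonlinear fit, at the cost of needing a wider range of areas to constrain the exponent.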
Cognitive Reserve Scale and ageing
Irene León
2016-01-01
The construct of cognitive reserve attempts to explain why some individuals with brain impairment, and some people during normal ageing, can solve cognitive tasks better than expected. This study aimed to estimate cognitive reserve in a healthy sample of people aged 65 years and over, with special attention to its influence on cognitive performance. For this purpose, it used the Cognitive Reserve Scale (CRS) and a neuropsychological battery that included tests of attention and memory. The results revealed that women obtained higher total CRS raw scores than men. Moreover, the CRS predicted the learning curve and short-term and long-term memory, but not attentional and working memory performance. Thus, the CRS offers a new proxy of cognitive reserve based on cognitively stimulating activities performed by healthy elderly people. Following an active lifestyle throughout life was associated with better intellectual performance and positive effects on relevant aspects of quality of life.
Conference on Large Scale Optimization
Hearn, D; Pardalos, P
1994-01-01
On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real-world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad.
Large scale nuclear structure studies
Faessler, A.
1985-01-01
Results of large scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields results in sd-shell nuclei as good as those of the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus ⁴⁶Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to ¹³⁰Ce and ¹²⁸Ba using the same effective nucleon-nucleon interaction. (Auth.)
Bacterial Communities: Interactions to Scale
Reed M. Stubbendieck
2016-08-01
In the environment, bacteria live in complex multispecies communities. These communities span in scale from small, multicellular aggregates to billions or trillions of cells within the gastrointestinal tract of animals. The dynamics of bacterial communities are determined by pairwise interactions that occur between different species in the community. Though interactions occur between a few cells at a time, the outcomes of these interchanges have ramifications that ripple through many orders of magnitude, and ultimately affect the macroscopic world including the health of host organisms. In this review we cover how bacterial competition influences the structures of bacterial communities. We also emphasize methods and insights garnered from culture-dependent pairwise interaction studies, metagenomic analyses, and modeling experiments. Finally, we argue that the integration of multiple approaches will be instrumental to future understanding of the underlying dynamics of bacterial communities.
Scaling Theory of Polyelectrolyte Nanogels
Qu, Li-Jian
2017-08-01
The present paper develops the scaling theory of polyelectrolyte nanogels in dilute and semidilute solutions. The dependencies of the nanogel dimension on branching topology, charge fraction, subchain length, segment number, and solution concentration are obtained. For a single polyelectrolyte nanogel in salt-free solution, the nanogel may be swelled by the Coulombic repulsion (the so-called polyelectrolyte regime) or by the osmotic counterion pressure (the so-called osmotic regime). Characteristics and boundaries between the different regimes of a single polyelectrolyte nanogel are summarized. In dilute solution, nanogels in the polyelectrolyte regime will distribute orderly with increasing concentration, while nanogels in the osmotic regime will always distribute randomly. Different concentration dependencies of the size of a nanogel in the polyelectrolyte regime and in the osmotic regime are also explored. Supported by China Earthquake Administration under Grant No. 20150112 and National Natural Science Foundation of China under Grant No. 21504014.
Petts, G.
1994-01-01
Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)
Scaling the Baltic Sea environment
Larsen, Henrik Gutzon
2008-01-01
The Baltic Sea environment has since the early 1970s passed through several phases of spatial objectification in which the ostensibly well-defined semi-enclosed sea has been framed and reframed as a geographical object for intergovernmental environmental politics. Based on a historical analysis of this development, this article suggests that environmental politics critically depend on the delineation of relatively bounded spaces that identify and situate particular environmental concerns as spatial objects for politics. These spaces are not simply determined by 'nature' or some environmental-scientific logic, but should rather be seen as temporal outcomes of scale framing processes, processes that are accentuated by contemporary conceptions of the environment (or nature) in terms of multi-scalar ecosystems. This has implications for how an environmental concern is perceived and politically addressed.
Baryogenesis at the electroweak scale
Dine, M.; Huet, P.; Singleton, R. Jr.
1992-01-01
We explore some issues involved in generating the baryon asymmetry at the electroweak scale. A simple two-dimensional model is analyzed which illustrates the role of the effective action in computing the asymmetry. We stress the fact that baryon production ceases at a very small value of the Higgs field; as a result, certain two-Higgs models which have been studied recently cannot produce sufficient asymmetry, while quite generally models with only doublets can barely produce the observed baryon density; models with gauge singlets are more promising. We also review limits on Higgs masses coming from the requirement that the baryon asymmetry not be wiped out after the phase transition. We note that there are a variety of uncertainties in these calculations, and that even in models with a single Higgs doublet one cannot rule out a Higgs mass below 55 GeV. (orig.)
Small-scale classification schemes
Hertzum, Morten
2004-01-01
Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main effects remained largely implicit. The requirements classification contributed to constraining the requirements-engineering process by supporting the software engineers in maintaining some level of control over the process. This way, the requirements classification provided the software engineers...
Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian
2018-02-01
This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian
2018-01-01
Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as following. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
The Adaptive Multi-scale Simulation Infrastructure
Tobin, William R. [Rensselaer Polytechnic Inst., Troy, NY (United States)
2015-09-01
The Adaptive Multi-scale Simulation Infrastructure (AMSI) is a set of libraries and tools developed to support the development, implementation, and execution of general multimodel simulations. Using a minimal set of simulation meta-data, AMSI allows existing single-scale simulations to be adapted for use in multi-scale simulations with minimally intrusive changes. Support for dynamic runtime operations such as single- and multi-scale adaptive properties is a key focus of AMSI. Particular attention has been devoted to the development of scale-sensitive load-balancing operations, which allow single-scale simulations incorporated into an AMSI multi-scale simulation to use standard load-balancing operations without affecting the integrity of the overall multi-scale simulation.
White Mango Scale, Aulacaspis tubercularis, Distribution and ...
White Mango Scale, Aulacaspis tubercularis, Distribution and Severity Status in East and West Wollega Zones, ... Among the insect pests attacking the mango plant, white mango scale is the most devastating insect pest. ...
The Development of Marital Maturity Scale
Muhammed YILDIZ
2017-06-01
In this study, validity, reliability and item analysis studies of the Marital Maturity Scale, prepared to test whether individuals are ready for marriage, were carried out. The scale was developed on 623 single adults. In the validity studies, exploratory and confirmatory factor analyses and criterion-related validity studies were performed. Factor analysis revealed that the scale had four dimensions, which together account for 60.91% of the total variance. The factor loadings of the items in the scale range from 0.42 to 0.86. The Inonu Marriage Attitude Scale was used in the criterion-related validity studies; the correlation between the two scales, r=0.72 (p=0.000), was significant. The subscales were significantly correlated with the total scale. The Cronbach alpha values of the four dimensions were 0.85, 0.68, 0.80 and 0.91 respectively, and the Cronbach alpha value of the total scale was 0.90. Test-retest results (r=0.70, p=0.000) were significant. In the item analysis studies, the lower 27% and upper 27% groups differed significantly on all items (p=0.000). The item-total correlations of the items in the scale were between 0.40 and 0.63. As a result of the assessments, it was concluded that the Marital Maturity Scale is a reliable and valid instrument to measure the marital maturity of single adults.
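The Cronbach alpha reliabilities reported above can be computed from an item-score matrix as k/(k-1) · (1 - Σ item variances / variance of total scores). A minimal sketch (the respondent data below are hypothetical toy values, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: 6 respondents answering 4 Likert-type items (hypothetical).
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [4, 4, 5, 5],
])
alpha = cronbach_alpha(scores)
```

Because the toy items move together across respondents, the resulting alpha is high, in the same range as the subscale values reported above.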
Abusive Supervision Scale Development in Indonesia
Wulani, Fenika; Purwanto, Bernadinus M; Handoko, Hani
2014-01-01
The purpose of this study was to develop a scale of abusive supervision in Indonesia. The study was conducted with a different context and scale development method from Tepper’s (2000) abusive supervision scale. The abusive supervision scale from Tepper (2000) was developed in the U.S., which has a cultural orientation of low power distance. The current study was conducted in Indonesia, which has a high power distance. This study used interview procedures to obtain information about superviso...
Modeling and simulation with operator scaling
Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan
2010-01-01
Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...
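The paper develops a series representation for simulating general operator stable Lévy processes; a much simpler special case, where the scaling exponent is diagonal and each coordinate is an independent symmetric stable Lévy motion, can be sketched as follows (function and parameter names are my own, and this is not the paper's series method):

```python
import numpy as np

def sym_stable(alpha: float, size: int, rng) -> np.ndarray:
    """Symmetric alpha-stable variates via the Chambers-Mallows-Stuck method."""
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
            * (np.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

def operator_stable_levy(alphas, n_steps, dt=1.0, seed=0):
    """Each coordinate i is an independent alpha_i-stable Levy motion, so the
    vector process scales anisotropically: coordinate i scales like t**(1/alpha_i)."""
    rng = np.random.default_rng(seed)
    increments = np.stack(
        [dt ** (1 / a) * sym_stable(a, n_steps, rng) for a in alphas])
    return np.cumsum(increments, axis=1)   # shape (n_coords, n_steps)

# Two coordinates with different stability indices => different scale factors.
paths = operator_stable_levy(alphas=[1.9, 1.2], n_steps=1000)
```

The heavier-tailed coordinate (alpha = 1.2) exhibits visibly larger jumps than the near-Gaussian one (alpha = 1.9), which is the anisotropy that operator scaling captures.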
One-fifth-scale and full-scale fuel element rocking tests
Nau, P.V.; Olsen, B.E.
1978-06-01
Using 1/5-scale and 1/1-scale (prototype H451) fuel elements, one, two, or three stacked elements on a clamped base element were rocked from an initial release position. Relative displacement, rock-down loads, and dowel pin shear forces were measured. A scaled comparison between 1/5-scale and 1/1-scale results was made to evaluate the model scaling laws, and an error analysis was performed to assess the accuracy and usefulness of the test data.
Three scales of motions associated with tornadoes
Forbes, G.S.
1978-03-01
This dissertation explores three scales of motion commonly associated with tornadoes, and the interaction of these scales: the tornado cyclone, the tornado, and the suction vortex. The goal of the research is to specify in detail the character and interaction of these scales of motion to explain tornadic phenomena
Length scale for configurational entropy in microemulsions
Reiss, H.; Kegel, W.K.; Groenewold, J.
1996-01-01
In this paper we study the length scale that must be used in evaluating the mixing entropy in a microemulsion. The central idea involves the choice of a length scale in configuration space that is consistent with the physical definition of entropy in phase space. We show that this scale may be
Continued validation of the Multidimensional Perfectionism Scale.
Clavin, S L; Clavin, R H; Gayton, W F; Broida, J
1996-06-01
Scores on the Multidimensional Perfectionism Scale have been correlated with measures of obsessive-compulsive tendencies for women, so the validity of scores on this scale for 41 men was examined. Scores on the Perfectionism Scale were significantly correlated (.47-.03) with scores on the Maudsley Obsessive-Compulsive Inventory.
Toward seamless hydrologic predictions across spatial scales
Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Rakovec, Oldrich; Zink, Matthias; Wanders, Niko; Eisner, Stephanie; Müller Schmied, Hannes; Sutanudjaja, Edwin; Warrach-Sagi, Kirsten; Attinger, Sabine
2017-01-01
Land surface and hydrologic models (LSMs/HMs) are used at diverse spatial resolutions ranging from catchment-scale (1-10 km) to global-scale (over 50 km) applications. Applying the same model structure at different spatial scales requires that the model estimates similar fluxes independent of the
Scaling analysis in bepu licensing of LWR
D' auria, Francesco; Lanfredini, Marco; Muellner, Nikolaus [University of Pisa, Pisa (Italy)
2012-08-15
'Scaling' plays an important role in safety analyses for the licensing of water-cooled nuclear power reactors. Accident analyses, a subset of safety analyses, are mostly based on nuclear reactor system thermal hydraulics, and therefore on an adequate experimental database and, in recent licensing applications, on best-estimate computer code calculations. In the field of nuclear reactor technology, only a small set of the needed experiments can be executed at a nuclear power plant; the major part of experiments, either for economic reasons or because of safety concerns, has to be executed at reduced-scale facilities. How to address the scaling issue has been the subject of numerous investigations in the past few decades (much of the work was performed in the 1980s and 1990s), and it is still the focus of many scientific studies. The present paper proposes a 'roadmap' to scaling. Key elements are the 'scaling pyramid', related 'scaling bridges' and a logical path across scaling achievements (which constitute the 'scaling puzzle'). The objective is to address the scaling issue when demonstrating the applicability of the system codes, the 'key to scaling', in the licensing process of a nuclear power plant. The proposed 'roadmap to scaling' aims at solving the 'scaling puzzle' by introducing a unified approach to the problem.
Scaling analysis in bepu licensing of LWR
D'auria, Francesco; Lanfredini, Marco; Muellner, Nikolaus
2012-01-01
'Scaling' plays an important role in safety analyses for the licensing of water-cooled nuclear power reactors. Accident analyses, a subset of safety analyses, are mostly based on nuclear reactor system thermal hydraulics, and therefore on an adequate experimental database and, in recent licensing applications, on best-estimate computer code calculations. In the field of nuclear reactor technology, only a small set of the needed experiments can be executed at a nuclear power plant; the major part of experiments, either for economic reasons or because of safety concerns, has to be executed at reduced-scale facilities. How to address the scaling issue has been the subject of numerous investigations in the past few decades (much of the work was performed in the 1980s and 1990s), and it is still the focus of many scientific studies. The present paper proposes a 'roadmap' to scaling. Key elements are the 'scaling pyramid', related 'scaling bridges' and a logical path across scaling achievements (which constitute the 'scaling puzzle'). The objective is to address the scaling issue when demonstrating the applicability of the system codes, the 'key to scaling', in the licensing process of a nuclear power plant. The proposed 'roadmap to scaling' aims at solving the 'scaling puzzle' by introducing a unified approach to the problem.
Why Online Education Will Attain Full Scale
Sener, John
2010-01-01
Online higher education has attained scale and is poised to take the next step in its growth. Although significant obstacles to a full scale adoption of online education remain, we will see full scale adoption of online higher education within the next five to ten years. Practically all higher education students will experience online education in…
Mineral scale management. Part II, Fundamental chemistry
Alan W. Rudie; Peter W. Hart
2006-01-01
The mineral scale that deposits in digesters and bleach plants is formed by a chemical precipitation process. As such, it is accurately modeled using the solubility product equilibrium constant. Although the solubility product identifies the primary conditions that must be met for a scale problem to exist, the acid-base equilibria of the scaling anions often control where...
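In its simplest form, the solubility-product model described above reduces to comparing the ion product Q of the scaling ions against Ksp: precipitation is favored when Q exceeds Ksp. A minimal sketch for a 1:1 salt (the Ksp value below is an illustrative placeholder, not a vetted handbook number):

```python
# Predict whether a sparingly soluble 1:1 salt will precipitate by comparing
# the ion product Q with the solubility product Ksp. The constant below is an
# assumed illustrative value for calcium oxalate, a common digester scale;
# consult a handbook for authoritative data.
KSP_CA_OXALATE = 2.3e-9   # (mol/L)^2, illustrative placeholder

def will_scale_form(ca_molar: float, oxalate_molar: float) -> bool:
    q = ca_molar * oxalate_molar   # ion product for a 1:1 salt
    return q > KSP_CA_OXALATE

print(will_scale_form(1e-3, 1e-3))   # Q = 1e-6 > Ksp -> True
print(will_scale_form(1e-5, 1e-6))   # Q = 1e-11 < Ksp -> False
```

The abstract's point about acid-base equilibria would enter here as a pH-dependent correction to the free anion concentration before forming Q.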
Mechanics over micro and nano scales
Chakraborty, Suman
2011-01-01
Discusses the fundamentals of mechanics over micro and nano scales at a level accessible to multi-disciplinary researchers, with a balance of mathematical detail and physical principles. Covers life sciences and chemistry for use in emerging applications related to mechanics over small scales. Demonstrates the explicit interconnection between various scale issues and the mechanics of miniaturized systems.
21 CFR 880.2720 - Patient scale.
2010-04-01
Section 880.2720 Patient scale (21 CFR, Food and Drugs; Food and Drug Administration, Department of Health and Human Services; Medical Devices). (a) Identification. A patient scale is a device intended for medical purposes that is used...
76 FR 50881 - Required Scale Tests
2011-08-17
... RIN 0580-AB10 Required Scale Tests AGENCY: Grain Inspection, Packers and Stockyards Administration... required scale tests. Those documents defined ``limited seasonal basis'' incorrectly. This document... 20, 2011 (76 FR 3485) and on April 4, 2011 (76 FR 18348), concerning required scale tests. Those...
76 FR 3485 - Required Scale Tests
2011-01-20
...-AB10 Required Scale Tests AGENCY: Grain Inspection, Packers and Stockyards Administration, USDA. ACTION... their scales tested at least twice each calendar year at intervals of approximately 6 months. This final rule requires that regulated entities complete the first of the two scale tests between January 1 and...
76 FR 18348 - Required Scale Tests
2011-04-04
... RIN 0580-AB10 Required Scale Tests AGENCY: Grain Inspection, Packers and Stockyards Administration... published a document in the Federal Register on January 20, 2011 (76 FR 3485), defining required scale tests... the last sentence of paragraph (a) to read as follows: Sec. 201.72 Scales; testing of. (a...
Scale dependent inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Ecological relationships between patterns and processes are highly scale dependent. This paper reports the first formal exploration of how changing scale of research away from the scale of the processes governing gene flow affects the results of landscape genetic analysis. We used an individual-based, spatially explicit simulation model to generate patterns of genetic...
COVERS Neonatal Pain Scale: Development and Validation
Ivan L. Hand
2010-01-01
Newborns and infants are often exposed to painful procedures during hospitalization. Several different scales have been validated to assess pain in specific populations of pediatric patients, but no single scale can easily and accurately assess pain in all newborns and infants regardless of gestational age and disease state. A new pain scale was developed, the COVERS scale, which incorporates 6 physiological and behavioral measures for scoring. Newborns admitted to the Neonatal Intensive Care Unit or Well Baby Nursery were evaluated for pain/discomfort during two procedures, a heel prick and a diaper change. Pain was assessed using indicators from three previously established scales (CRIES, the Premature Infant Pain Profile, and the Neonatal Infant Pain Scale), as well as the COVERS Scale, depending upon gestational age. Premature infant testing resulted in similar pain assessments using the COVERS and PIPP scales, with r=0.84. For the full-term infants, the COVERS scale and NIPS scale resulted in similar pain assessments, with r=0.95. The COVERS scale is a valid pain scale that can be used in the clinical setting to assess pain in newborns and infants and is universally applicable to all neonates, regardless of their age or physiological state.
Desiront, A.
2003-01-01
For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both
Computational applications of DNA physical scales
Baldi, Pierre; Chauvin, Yves; Brunak, Søren
1998-01-01
The authors study from a computational standpoint several different physical scales associated with structural features of DNA sequences, including dinucleotide scales such as base stacking energy and propeller twist, and trinucleotide scales such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example we construct a strand invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combinations with hidden Markov models.
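Applying a dinucleotide scale to a sequence amounts to mapping each adjacent base pair to its scale value and smoothing over a window. A minimal sketch (the scale values below are made-up placeholders, not published base-stacking energies; substitute real tabulated values to reproduce this kind of analysis):

```python
# Sliding-window profile of a dinucleotide scale along a DNA sequence.
# SCALE holds hypothetical placeholder values for all 16 dinucleotides.
SCALE = {"AA": -1.0, "AT": -0.9, "TA": -0.6, "CG": -2.0, "GC": -2.3,
         "GG": -1.8, "CC": -1.8, "AC": -1.3, "CA": -1.1, "AG": -1.5,
         "GA": -1.4, "TT": -1.0, "TG": -1.1, "GT": -1.3, "CT": -1.5,
         "TC": -1.4}

def profile(seq: str, window: int = 5) -> list:
    """Average scale value of the dinucleotides in each window along seq."""
    values = [SCALE[seq[i:i + 2]] for i in range(len(seq) - 1)]
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

p = profile("ACGTACGTGGCC", window=4)
```

A strand-invariant representation, as mentioned in the abstract, can be obtained by averaging each dinucleotide's value with that of its reverse complement, so the profile is identical for either strand.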
Computational applications of DNA structural scales
Baldi, P.; Chauvin, Y.; Brunak, Søren
1998-01-01
Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models.
Scaling solutions for dilaton quantum gravity
Henz, T.; Pawlowski, J.M., E-mail: j.pawlowski@thphys.uni-heidelberg.de; Wetterich, C.
2017-06-10
Scaling solutions for the effective action in dilaton quantum gravity are investigated within the functional renormalization group approach. We find numerical solutions that connect ultraviolet and infrared fixed points as the ratio between scalar field and renormalization scale k is varied. In the Einstein frame the quantum effective action corresponding to the scaling solutions becomes independent of k. The field equations derived from this effective action can be used directly for cosmology. Scale symmetry is spontaneously broken by a non-vanishing cosmological value of the scalar field. For the cosmology corresponding to our scaling solutions, inflation arises naturally. The effective cosmological constant becomes dynamical and vanishes asymptotically as time goes to infinity.
Scaling laws for coastal overwash morphology
Lazarus, Eli D.
2016-12-01
Overwash is a physical process of coastal sediment transport driven by storm events and is essential to landscape resilience in low-lying barrier environments. This work establishes a comprehensive set of scaling laws for overwash morphology: unifying quantitative descriptions with which to compare overwash features by their morphological attributes across case examples. Such scaling laws also help relate overwash features to other morphodynamic phenomena. Here morphometric data from a physical experiment are compared with data from natural examples of overwash features. The resulting scaling relationships indicate scale invariance spanning several orders of magnitude. Furthermore, these new relationships for overwash morphology align with classic scaling laws for fluvial drainages and alluvial fans.
SCALE criticality safety verification and validation package
Bowman, S.M.; Emmett, M.B.; Jordan, W.C.
1998-01-01
Verification and validation (V and V) are essential elements of software quality assurance (QA) for computer codes that are used for performing scientific calculations. V and V provides a means to ensure the reliability and accuracy of such software. As part of the SCALE QA and V and V plans, a general V and V package for the SCALE criticality safety codes has been assembled, tested and documented. The SCALE criticality safety V and V package is being made available to SCALE users through the Radiation Safety Information Computational Center (RSICC) to assist them in performing adequate V and V for their SCALE applications
Fluctuation scaling, Taylor's law, and crime.
Quentin S Hanley
Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary, indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering, evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers and is unique in reporting on all processes contributing to the observed variance and is either robust to or exhibits signs of many types of data manipulation.
Fluctuation scaling, Taylor's law, and crime.
Hanley, Quentin S; Khatun, Suniya; Yosef, Amal; Dyer, Rachel-May
2014-01-01
Fluctuation scaling relationships have been observed in a wide range of processes ranging from internet router traffic to measles cases. Taylor's law is one such scaling relationship and has been widely applied in ecology to understand communities including trees, birds, human populations, and insects. We show that monthly crime reports in the UK show complex fluctuation scaling which can be approximated by Taylor's law relationships corresponding to local policing neighborhoods and larger regional and countrywide scales. Regression models applied to local scale data from Derbyshire and Nottinghamshire found that different categories of crime exhibited different scaling exponents with no significant difference between the two regions. On this scale, violence reports were close to a Poisson distribution (α = 1.057 ± 0.026) while burglary exhibited a greater exponent (α = 1.292 ± 0.029) indicative of temporal clustering. These two regions exhibited significantly different pre-exponential factors for the categories of anti-social behavior and burglary indicating that local variations in crime reports can be assessed using fluctuation scaling methods. At regional and countrywide scales, all categories exhibited scaling behavior indicative of temporal clustering evidenced by Taylor's law exponents from 1.43 ± 0.12 (Drugs) to 2.094 ± 0.081 (Other Crimes). Investigating crime behavior via fluctuation scaling gives insight beyond that of raw numbers and is unique in reporting on all processes contributing to the observed variance and is either robust to or exhibits signs of many types of data manipulation.
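Taylor's law states that the variance of counts grows as a power of their mean, variance = a · mean^α; the exponent α is commonly estimated by linear regression in log-log space. A minimal sketch with synthetic Poisson counts standing in for per-neighborhood crime reports (for a Poisson process the fitted α should be near 1, matching the violence category above):

```python
import numpy as np

# Fit Taylor's law, variance = a * mean**alpha, by least squares in log-log
# space. Synthetic Poisson counts play the role of monthly report counts.
rng = np.random.default_rng(1)
means, variances = [], []
for lam in [2, 5, 10, 20, 50, 100]:     # one "area" per intensity level
    counts = rng.poisson(lam, 500)
    means.append(counts.mean())
    variances.append(counts.var(ddof=1))

# polyfit returns [slope, intercept]; the slope is the Taylor exponent alpha.
alpha, log_a = np.polyfit(np.log(means), np.log(variances), 1)
```

Temporally clustered counts (e.g. negative-binomial draws in place of the Poisson samples) would push the fitted α above 1, as seen for burglary and the regional-scale categories.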
Modelling of rate effects at multiple scales
Pedersen, R.R.; Simone, A.; Sluys, L. J.
2008-01-01
At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from the micro-scale.
Invariant relationships deriving from classical scaling transformations
Bludman, Sidney; Kennedy, Dallas C.
2011-01-01
Because scaling symmetries of the Euler-Lagrange equations are generally not variational symmetries of the action, they do not lead to conservation laws. Instead, an extension of Noether's theorem reduces the equations of motion to evolutionary laws that prove useful, even if the transformations are not symmetries of the equations of motion. In the case of scaling, symmetry leads to a scaling evolutionary law, a first-order equation in terms of scale invariants, linearly relating kinematic and dynamic degrees of freedom. This scaling evolutionary law appears in dynamical and in static systems. Applied to dynamical central-force systems, the scaling evolutionary equation leads to generalized virial laws, which linearly connect the kinetic and potential energies. Applied to barotropic hydrostatic spheres, the scaling evolutionary equation linearly connects the gravitational and internal energy densities. This implies well-known properties of polytropes, describing degenerate stars and chemically homogeneous nondegenerate stellar cores.
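A familiar special case of the generalized virial laws mentioned here: for a potential homogeneous of degree n, the scaling r → λr linearly connects the time-averaged kinetic and potential energies (a textbook illustration, not the paper's general scaling evolutionary law):

```latex
V(\lambda \mathbf{r}) = \lambda^{n}\, V(\mathbf{r})
\quad\Longrightarrow\quad
2\,\langle T \rangle \;=\; n\,\langle V \rangle\,.
```

For example, $n=-1$ (Newtonian gravity) gives $2\langle T\rangle = -\langle V\rangle$, and $n=2$ (harmonic forces) gives $\langle T\rangle = \langle V\rangle$.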
The scaling issue: scientific opportunities
Orbach, Raymond L.
2009-07-01
A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.
The scaling issue: scientific opportunities
Orbach, Raymond L
2009-01-01
A brief history of the Leadership Computing Facility (LCF) initiative is presented, along with the importance of SciDAC to the initiative. The initiative led to the initiation of the Innovative and Novel Computational Impact on Theory and Experiment program (INCITE), open to all researchers in the US and abroad, and based solely on scientific merit through peer review, awarding sizeable allocations (typically millions of processor-hours per project). The development of the nation's LCFs has enabled available INCITE processor-hours to double roughly every eight months since its inception in 2004. The 'top ten' LCF accomplishments in 2009 illustrate the breadth of the scientific program, while the 75 million processor hours allocated to American business since 2006 highlight INCITE contributions to US competitiveness. The extrapolation of INCITE processor hours into the future brings new possibilities for many 'classic' scaling problems. Complex systems and atomic displacements to cracks are but two examples. However, even with increasing computational speeds, the development of theory, numerical representations, algorithms, and efficient implementation are required for substantial success, exhibiting the crucial role that SciDAC will play.
Large Scale Glazed Concrete Panels
Bache, Anja Margrethe
2010-01-01
Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World famous architects such as Herzog De Meuron, Zaha Hadid, Richard Meyer and David Chippenfield challenge the exposure of concrete in their architecture. At home, this trend can be seen in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and Zaha Hadid's Ordrupgård's black curved smooth concrete surfaces. Furthermore, one can point to initiatives such as "Synlig beton" (visible concrete), which can be seen on the website www.synligbeton.dk, and spæncom's aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project "Lasting large scale glazed concrete formwork," which I am working on at DTU, Department of Architectural Engineering, will be able to complement these. It is a project where I...
Scaling Agile Infrastructure to People
Jones, B; Traylen, S; Arias, N Barrientos
2015-01-01
When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice; the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow ...
Scaling Agile Infrastructure to People
Jones, B.; McCance, G.; Traylen, S.; Barrientos Arias, N.
2015-12-01
When CERN migrated its infrastructure away from homegrown fabric management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site Cloud Computing model meant that the tool chains that were growing around this ecosystem would be a good choice, the challenge was to leverage them. The use of open-source tools brings challenges other than merely how to deploy them. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper will examine what challenges there were in adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in scale of the plant required changes to how Change Management was organized and managed. Continuous Integration techniques are used in order to manage the rate of change across multiple groups, and the tools and workflow for this will be examined.
Steinhaus Thomas
2007-01-01
A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.
Schmiegel, Jürgen
We study the statistical properties of the star-shaped approximation of in vitro tumor profiles. The emphasis is on the two-point correlation structure of the radii of the tumor as a function of time and angle. In particular, we show that spatial two-point correlators follow a cosine law. Furthermore, we observe self-scaling behaviour of two-point correlators of different orders, i.e. correlators of a given order are a power law of the correlators of some other order. This power-law dependence is similar to what has been observed for the statistics of the energy dissipation in a turbulent flow. Based on this similarity, we provide a Lévy-based model that captures the correlation structure of the radii of the star-shaped tumor profiles.
Universal scaling in sports ranking
Deng Weibing; Li Wei; Cai Xu; Bulou, Alain; Wang Qiuping A
2012-01-01
Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on and so forth. Herewith, we study a specific kind—sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples which span 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. In order to understand the origin of this universal scaling we focus on the tennis ranking systems. By checking the data we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent is proportional to the rank difference between the pair. Such a dependence can be well fitted to a sigmoidal function. By using this feature, we propose a simple toy model which can simulate the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to the modifications of some parameters. (paper)
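The toy model described in this abstract can be sketched in a few lines: in each match, the probability that the higher-ranked player wins is a sigmoid of the rank difference, and winners accrue points. The player count, match count and sigmoid steepness below are illustrative assumptions, not the paper's parameters.

```python
import math
import random

def simulate(n_players=100, n_matches=50000, alpha=0.05, seed=1):
    """Toy ranking model: each match, the win probability of the
    higher-ranked player is a sigmoid of the rank gap; the winner
    accrues one point (a sketch, not the paper's exact code)."""
    random.seed(seed)
    scores = [0] * n_players          # index = rank (0 = best)
    for _ in range(n_matches):
        a, b = random.sample(range(n_players), 2)
        hi, lo = (a, b) if a < b else (b, a)   # smaller index = higher rank
        # Sigmoidal dependence on the rank difference (lo - hi > 0)
        p_hi = 1.0 / (1.0 + math.exp(-alpha * (lo - hi)))
        winner = hi if random.random() < p_hi else lo
        scores[winner] += 1
    return scores

scores = simulate()
top_avg = sum(scores[:10]) / 10       # average points of the 10 best ranks
bottom_avg = sum(scores[-10:]) / 10   # average points of the 10 worst ranks
```

With these illustrative parameters the higher-ranked players accumulate substantially more points, reproducing the qualitative upset pattern the authors fit from the tennis data.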
Scales and scaling in turbulent ocean sciences; physics-biology coupling
Schmitt, Francois
2015-04-01
Geophysical fields possess huge fluctuations over many spatial and temporal scales. In the ocean, such variability at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm) and scalar fields down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scale (the Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we will discuss the scale problem in turbulence modeling in the ocean, and the relation of Kolmogorov's and Batchelor's scales of turbulence in the ocean with the size of marine animals. We will also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
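The factor of 10^5 quoted above follows directly from the Kolmogorov length eta = (nu^3/epsilon)^(1/4). A back-of-envelope check, using illustrative upper-ocean values for the viscosity and dissipation rate (our assumptions, not values from the abstract):

```python
# Kolmogorov length eta = (nu^3 / epsilon)**0.25, with typical
# upper-ocean magnitudes (illustrative assumptions):
nu = 1e-6       # kinematic viscosity of seawater, m^2/s
epsilon = 1e-6  # turbulent kinetic energy dissipation rate, W/kg

eta = (nu**3 / epsilon) ** 0.25   # ~1e-3 m, i.e. about 1 mm

grid = 100.0    # typical dynamical-model grid size, m
ratio = grid / eta                # scale separation between grid and eta
```

With these magnitudes eta comes out at about a millimeter, so a 100 m model grid is indeed roughly 10^5 Kolmogorov lengths, which is the separation the abstract emphasizes for copepod-sized organisms.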
SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS
Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.
2015-01-01
from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.
Thermodynamic scaling behavior in genechips
Van Hummelen Paul
2009-01-01
Background: Affymetrix GeneChips are characterized by probe pairs, a perfect match (PM) and a mismatch (MM) probe differing by a single nucleotide. Most of the data preprocessing algorithms neglect MM signals, as it was shown that MMs cannot be used as estimators of the non-specific hybridization as originally proposed by Affymetrix. The aim of this paper is to study in detail, on a large number of experiments, the behavior of the average PM/MM ratio. This is taken as an indicator of the quality of the hybridization and, when compared between different chip series, of the quality of the chip design. Results: About 250 different GeneChip hybridizations performed at the VIB Microarray Facility for Homo sapiens, Drosophila melanogaster, and Arabidopsis thaliana were analyzed. The investigation of such a large set of data from the same source minimizes systematic experimental variations that may arise from differences in protocols or from different laboratories. The PM/MM ratios are derived theoretically from thermodynamic laws and a link is made with the sequences of the PM and MM probes, more specifically with their central nucleotide triplets. Conclusion: The PM/MM ratios, subdivided according to the different central nucleotide triplets, qualitatively follow those deduced from the hybridization free energies in solution. It is also shown that the PM and MM histograms are related by a simple scale transformation, in agreement with what is to be expected from hybridization thermodynamics. Different quantitative behavior is observed for the different chip organisms analyzed, suggesting that some organism chips have superior probe design compared to others.
Spatial scale separation in regional climate modelling
Feser, F.
2005-07-01
In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea is that global and regional climate models perform best at different spatial scales, so the regional model should not alter the global model's results at large scales. The concept of large-scale nudging, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the different simulations compared to the standard-approach ensemble, which occasionally shows large differences between the individual realisations. For climate hindcasts this method leads to results which are on average closer to observed states than the standard approach. The analysis of the regional climate model simulation can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than analysing unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter
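The two ideas in this abstract, filter-based scale separation and nudging only the large scales, can be illustrated with a minimal sketch. The running-mean filter and the relaxation weight below are simple stand-ins for the spectral filters and nudging coefficients actually used in the thesis; all names and parameters are ours.

```python
def low_pass(field, width=5):
    """Running-mean smoother: a crude 'large-scale' part of a 1-D field."""
    n = len(field)
    half = width // 2
    out = []
    for i in range(n):
        window = field[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(window) / len(window))
    return out

def nudge_large_scales(regional, driving, weight=0.1, width=5):
    """Relax only the large-scale part of the regional field toward the
    driving (global) model; the small-scale regional detail is untouched."""
    large = low_pass(regional, width)
    small = [r - l for r, l in zip(regional, large)]   # regional detail
    large_driving = low_pass(driving, width)
    nudged = [l + weight * (ld - l) for l, ld in zip(large, large_driving)]
    return [n + s for n, s in zip(nudged, small)]

regional = [float(i % 7) for i in range(50)]   # toy fields
driving = [float(i % 5) for i in range(50)]
updated = nudge_large_scales(regional, driving)
```

With `weight=0` the regional field is returned unchanged (large plus small parts reconstruct the original), which is the sanity check that the separation itself does not alter the solution.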
Scaling: From quanta to nuclear reactors
Zuber, Novak
2010-08-15
This paper has three objectives. The first objective is to show how the Einstein-de Broglie equation (EdB) can be extended to model and scale, via fractional scaling, both conservative and dissipative processes ranging in scale from quanta to nuclear reactors. The paper also discusses how and why a single equation and associated fractional scaling method generate for each process of change the corresponding scaling criterion. The versatility and capability of fractional scaling are demonstrated by applying it to: (a) particle dynamics, (b) conservative (Bernoulli) and dissipative (hydraulic jump) flows, (c) viscous and turbulent flows through rough and smooth pipes, and (d) momentum diffusion in a semi-infinite medium. The capability of fractional scaling to scale a process over a vast range of temporal and spatial scales is demonstrated by applying it to fluctuating processes. The application shows that the modeling of fluctuations in fluid mechanics is analogous to that in relativistic quantum field theory. Thus, Kolmogorov dissipation frequency and length are the analogs of the characteristic time and length of quantum fluctuations. The paper briefly discusses the applicability of the fractional scaling approach (FSA) to nanotechnology and biology. It also notes the analogy between FSA and the approach used to scale polymers. These applications demonstrate the power of scaling as well as the validity of Pierre-Gilles de Gennes' ideas concerning scaling, analogies and simplicity. They also demonstrate the usefulness and efficiency of his approach to solving scientific problems. The second objective is to note and discuss the benefits of applying FSA to NPP technology. The third objective is to present a state of the art assessment of thermal-hydraulics (T/H) capabilities and needs relevant to NPP.
Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy
2012-11-01
Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production implies having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed.
Scaling Consumers' Purchase Involvement: A New Approach
Jörg Kraigher-Krainer
2012-06-01
A two-dimensional scale, called the ECID Scale, is presented in this paper. The scale is based on a comprehensive model and captures the two antecedent factors of purchase-related involvement, namely whether motivation is intrinsic or extrinsic and whether risk is perceived as low or high. The procedure of scale development and item selection is described. The scale turns out to perform well in terms of validity, reliability, and objectivity despite the use of a small set of items (four each), allowing for simultaneous measurements of up to ten purchases per respondent. The procedure of administering the scale is described so that it can now easily be applied by both scholars and practitioners. Finally, managerial implications of data received from its application, which provide insights into possible strategic marketing conclusions, are discussed.
Scaling analysis of meteorite shower mass distributions
Oddershede, Lene; Meibom, A.; Bohr, Jakob
1998-01-01
Meteorite showers are the remains of extraterrestrial objects which are captured by the gravitational field of the Earth. We have analyzed the mass distributions of fragments from 16 meteorite showers for scaling. The distributions exhibit distinct scaling behavior over several orders of magnitude; the observed scaling exponents vary from shower to shower. Half of the analyzed showers show a single scaling region while the other half show multiple scaling regimes. Such an analysis can provide knowledge about the fragmentation process and about the original meteoroid. We also suggest comparing the observed scaling exponents to exponents observed in laboratory experiments, and discuss the possibility that one can derive insight into the original shapes of the meteoroids.
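One standard way to extract a scaling exponent from a set of fragment masses is a maximum-likelihood (Hill) fit to a power-law tail. The sketch below checks the estimator on synthetic Pareto-distributed masses; it is our illustration of the generic technique, not the authors' actual fitting procedure.

```python
import math
import random

def pareto_sample(n, alpha, m_min=1.0, seed=0):
    """Synthetic fragment masses from a power law with tail exponent alpha
    (inverse-CDF sampling of a Pareto distribution)."""
    rng = random.Random(seed)
    # 1 - U is in (0, 1], so the inverse power never divides by zero
    return [m_min * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

def hill_exponent(masses, m_min=1.0):
    """Maximum-likelihood (Hill) estimate of the scaling exponent:
    alpha_hat = n / sum(ln(m_i / m_min)) over masses above m_min."""
    logs = [math.log(m / m_min) for m in masses if m >= m_min]
    return len(logs) / sum(logs)

masses = pareto_sample(5000, alpha=1.5)
alpha_hat = hill_exponent(masses)   # should recover roughly 1.5
```

On real shower data one would also have to choose m_min and test for the multiple scaling regimes the abstract reports, e.g. by fitting separate exponents above and below a break mass.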
Validation of the Early Functional Abilities scale
Poulsen, Ingrid; Kreiner, Svend; Engberg, Aase W
2018-01-01
OBJECTIVE: The Early Functional Abilities scale assesses the restoration of brain function after brain injury, based on 4 dimensions. The primary objective of this study was to evaluate the validity, objectivity, reliability and measurement precision of the Early Functional Abilities scale by Rasch model item analysis. A secondary objective was to examine the relationship between the Early Functional Abilities scale and the Functional Independence Measurement™, in order to establish the criterion validity of the Early Functional Abilities scale and to compare the sensitivity of measurements using... facio-oral, sensorimotor and communicative/cognitive functions. Removal of one item from the sensorimotor scale confirmed unidimensionality for each of the 4 subscales, but not for the entire scale. The Early Functional Abilities subscales are sensitive to differences between patients in ranges in which...
A scale distortion theory of anchoring.
Frederick, Shane W; Mochon, Daniel
2012-02-01
We propose that anchoring is often best interpreted as a scaling effect: the anchor changes how the response scale is used, not how the focal stimulus is perceived. Importantly, we maintain that this holds true even for so-called objective scales (e.g., pounds, calories, meters, etc.). In support of this theory of scale distortion, we show that prior exposure to a numeric standard changes respondents' use of that specific response scale but does not generalize to conceptually affiliated judgments rendered on similar scales. Our findings highlight the necessity of distinguishing response-language effects from representational effects in places where the need for that distinction has often been assumed away.
Scale and the acceptability of nuclear energy
Wilbanks, T.J.
1984-01-01
A rather speculative exploration is presented of scale as it may affect the acceptability of nuclear energy. In our utilization of this energy option, how does large vs. small relate to attitudes toward it, and what can we learn from this about technology choices in the United States more generally? In order to address such a question, several stepping-stones are needed. First, scale is defined for the purposes of the paper. Second, recent experience with nuclear energy is reviewed: trends in the scale of use, the current status of nuclear energy as an option, and the social context for its acceptance problems. Third, conventional notions about the importance of scale in electricity generation are summarized. With these preliminaries out of the way, the paper then discusses apparent relationships between scale and the acceptance of nuclear energy and suggests some policy implications of these preliminary findings. Finally, some comments are offered about general relationships between scale and technology choice.
Measuring Tourism motivation: Do Scales matter?
Huang, Songshan (Sam)
2009-01-01
Measuring tourist motivation has always been a challenging task for tourism researchers. This paper aimed to increase the understanding of tourist motivation measurement by comparing two frequently adopted motivation measurement approaches: self-perception (SP) and importance-rating (IR) approaches. Results indicated that both SP and IR scales were highly reliable in terms of internal consistency. However, respondents tended to rate more positively in the SP scale than in the IR scale. Factor...
Further validation of the Indecisiveness Scale.
Gayton, W F; Clavin, R H; Clavin, S L; Broida, J
1994-12-01
Scores on the Indecisiveness Scale have been shown to be correlated with scores on measures of obsessive-compulsive tendencies and perfectionism for women. This study examined the validity of the Indecisiveness Scale with 41 men whose mean age was 21.1 yr. Indecisiveness scores were significantly correlated with scores on measures of obsessive-compulsive tendencies and perfectionism. Also, undeclared majors had a significantly higher mean on the Indecisiveness Scale than did declared majors.
Arler, Finn
2006-01-01
The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...
Resource Complementarity and IT Economies of Scale
Woudstra, Ulco; Berghout, Egon; Tan, Chee-Wee
2017-01-01
In this study, we explore economies of scale for IT infrastructure and application services. An in-depth appreciation of economies of scale is imperative for an adequate understanding of the impact of IT investments. Our findings indicate that even low-IT-spending organizations can make a difference by devoting at least 60% of their total IT budget to IT infrastructure, in order to foster economies of scale and extract strategic benefits.
On BLM scale fixing in exclusive processes
Anikin, I.V.; Pire, B.; Szymanowski, L.; Teryaev, O.V.; Wallon, S.
2005-01-01
We discuss the BLM scale fixing procedure in exclusive electroproduction processes in the Bjorken regime with rather large x_B. We show that in the case of vector meson production, dominated in this case by quark exchange, the usual way to apply the BLM method fails due to singularities present in the equations fixing the BLM scale. We argue that the BLM scale should be extracted from the squared amplitudes, which are directly related to observables.
Void probability scaling in hadron nucleus interactions
Ghosh, Dipak; Deb, Argha; Bhattacharyya, Swarnapratim; Ghosh, Jayita; Bandyopadhyay, Prabhat; Das, Rupa; Mukherjee, Sima
2002-01-01
Hegyi, while investigating the rapidity gap probability (which measures the chance of finding no particle in the pseudo-rapidity interval Δη), found that a scaling behavior in the rapidity gap probability has a close correspondence with the scaling of the void probability in galaxy correlation studies. The main aim of this paper is to study the scaling behavior of the rapidity gap probability.
On the frequency scalings of RF guns
Lin, L.C.; Chen, S.C.; Wurtele, J.S.
1995-01-01
A frequency scaling law for RF guns is derived from the normalized Vlasov-Maxwell equations. It shows that higher frequency RF guns can generate higher brightness beams under the assumption that the accelerating gradient and all beam and structure parameters are scaled with the RF frequency. Numerical simulation results using MAGIC confirm the scaling law. A discussion of the range of applicability of the law is presented. copyright 1995 American Institute of Physics
Scale-up of precipitation processes
Zauner, R.
1999-01-01
This thesis concerns the scale-up of precipitation processes aimed at predicting product particle characteristics. Although precipitation is widely used in the chemical and pharmaceutical industry, successful scale-up is difficult due to the absence of a validated methodology. It is found that none of the conventional scale-up criteria reported in the literature (equal power input per unit mass, equal tip speed, equal stirring rate) is capable of predicting the experimentally o...
Using Imagers for Scaling Ecological Observations
Graham, Eric; Hicks, John; Riordan, Erin; Wang, Eric; Yuen, Eric
2009-01-01
Stationary and mobile ground-based cameras can be used to scale ecological observations, relating pixel information in images to in situ measurements. Currently there are four CENS projects that involve using cameras for scaling ecological observations: 1. Scaling from one individual to the landscape. Pan-Tilt-Zoom cameras can be zoomed in on a tight focus on individual plants and parts of individuals and then zoomed out to get a landscape view, composed of the same and similar species. 2...
Pichon, Max
2011-01-01
The University of Queensland has switched on what it says is Australia's largest solar photovoltaic installation, a 1.2 MW system that spans 11 rooftops at the St Lucia campus. The UQ Solar Array, which effectively coats four buildings with more than 5,000 polycrystalline silicon solar panels, will generate about 1,850 MWh a year. "During the day, the system will provide up to six per cent of the university's power requirements, reducing greenhouse gas emissions by approximately 1,650 tonnes of CO2-e per annum," said Rodger Whitby, the GM of generation for renewables company Ingenero. It also underpins a number of cutting-edge research projects in diverse fields, according to Professor Paul Meredith, who oversaw the design and installation of the solar array. "A major objective of our array research program is to provide a clearer understanding of how to integrate megawatt-scale renewable energy sources into an urban grid," said Professor Meredith, of the School of Mathematics and Physics and the Global Change Institute. "Mid-size, commercial-scale renewable power generating systems like UQ's will become increasingly common in urban and remote areas. Addressing the engineering issues around how these systems can feed into and integrate with the grid is essential so that people can really understand and calculate their value as we transition to lower-emission forms of energy." Electricity retailer Energex contributed $90,000 to the research project through state-of-the-art equipment to allow high-quality monitoring and analysis of the power feed. Another key research project addresses one of the most common criticisms of solar power: that it cannot replace baseload grid power. Through a partnership with Brisbane electricity storage technology company RedFlow, a 200 kW battery bank will be connected to a 339 kW section of the solar array. "The RedFlow system uses next-generation zinc bromine batteries," Professor Meredith said.
Functional nanometer-scale structures
Chan, Tsz On Mario
Nanometer-scale structures have properties that are fundamentally different from their bulk counterparts. Much research effort has been devoted in the past decades to explore new fabrication techniques, model the physical properties of these structures, and construct functional devices. The ability to manipulate and control the structure of matter at the nanoscale has made many new classes of materials available for the study of fundamental physical processes and potential applications. The interplay between fabrication techniques and physical understanding of the nanostructures and processes has revolutionized the physical and material sciences, providing far superior properties in materials for novel applications that benefit society. This thesis consists of two major aspects of my graduate research in nano-scale materials. In the first part (Chapters 3-6), a comprehensive study on nanostructures based on electrospinning and thermal treatment is presented. Electrospinning is a well-established method for producing high-aspect-ratio fibrous structures, with fiber diameters ranging from 1 nm to 1 μm. A polymeric solution is typically used as a precursor in electrospinning. In our study, the functionality of the nanostructure relies on both the nanostructure and the material constituents. Metallic-ion-containing precursors were added to the polymeric precursor following a sol-gel process to prepare the solution suitable for electrospinning. A typical electrospinning process produces as-spun fibers containing both polymer and metallic salt precursors. Subsequent thermal treatments of the as-spun fibers were carried out in various conditions to produce desired structures. In most cases, polymer in the solution and the as-spun fibers acted as a backbone for the structure formation during the subsequent heat treatment, and was thermally removed in the final stage. Polymers were also designed to react with the metallic ion precursors during heat treatment in some
Socially responsible marketing decisions - scale development
Dina Lončarić
2009-07-01
The purpose of this research is to develop a measurement scale for evaluating the extent to which the concept of social responsibility is implemented in marketing decision making, in accordance with the quality-of-life marketing paradigm. A new "socially responsible marketing decisions" scale has been formed, and its content validity, reliability, and dimensionality have been analyzed. The scale has been tested on a sample of the most successful Croatian firms. The research results lead us to conclude that the scale has satisfactory psychometric characteristics, but that it should be improved by generating new items and by testing it on a greater number of samples.
International scaling of nuclear and radiological events
Wang Yuhui; Wang Haidan
2014-01-01
Scales are familiar forms of measurement used in daily life, such as the Celsius and Fahrenheit scales for temperature and the Richter scale for earthquakes. Jointly developed by the IAEA and OECD/NEA in 1990, the International Nuclear and Radiological Event Scale (INES) helps nuclear and radiation safety authorities and the nuclear industry worldwide to rate nuclear and radiological events and to communicate their safety significance to the general public, the media, and the technical community. INES was initially used to classify events at nuclear power plants only. It was subsequently extended to rate events associated with the transport, storage, and use of radioactive material and radiation sources, from those occurring at nuclear facilities to those associated with industrial use. Since its inception, it has been adopted in 69 countries. Events are classified on the scale at seven levels: Levels 1-3 are called 'incidents' and Levels 4-7 'accidents'. The scale is designed so that the severity of an event is about ten times greater for each increase in level. Events without safety significance are called 'deviations' and are classified Below Scale/Level 0. INES classifies nuclear and radiological accidents and incidents by considering three areas of impact: people and the environment; radiological barriers and control; and defence-in-depth. To date, two nuclear accidents have been rated at the highest level of the scale: Chernobyl and Fukushima Daiichi. (authors)
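Because the scale is roughly logarithmic, the stated factor of ten per level turns into a one-line severity comparison. A minimal sketch (illustrative only; the precise severity metric INES applies varies by event type):

```python
def severity_ratio(level_a: int, level_b: int) -> float:
    """Approximate relative severity of two INES levels, assuming the
    roughly tenfold increase in severity per level described above."""
    return 10.0 ** (level_a - level_b)

# A Level 7 accident compared with a Level 4 accident:
print(severity_ratio(7, 4))  # 1000.0
```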
Self-adapted sliding scale spectroscopy ADC
Xu Qichun; Wang Jingjin
1992-01-01
The traditional sliding scale technique causes a disabled range equal to the sliding length, which reduces the analysis range of an MCA. A method for reducing an ADC's differential nonlinearity (DNL), called the self-adapted sliding scale method, has been designed and tested. With this method, the disabled range caused by the traditional sliding scale method is eliminated by a random trial scale, and no additional amplitude discriminator with a swinging threshold is needed. A special trial-and-correct logic is presented. The measured DNL of the spectroscopy ADC described here is less than 0.5%.
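The underlying sliding scale idea can be illustrated with a toy simulation: a random offset is added before conversion (in the analog domain) and subtracted digitally afterwards, so a fixed input is spread across many bins and local bin-width errors average out. The sketch below is illustrative only, with an invented DNL defect; it implements the traditional method, not the paper's self-adapted trial-and-correct logic, so the disabled range at the top of the scale remains:

```python
import random

FULL_SCALE = 256  # toy 8-bit ADC, input expressed in LSB units

def nonideal_code(x: float) -> int:
    """Toy ADC transfer curve with one double-width bin at code 100
    (an invented DNL defect, purely for illustration)."""
    if x < 100:
        return int(x)
    if x < 102:                    # code 100 is two LSB wide
        return 100
    return min(int(x) - 1, FULL_SCALE - 1)

def sliding_scale_code(x: float, sweep: int = 64) -> int:
    """One conversion: add a random offset in the analog domain (assumed
    ideal here), convert, then subtract the same offset digitally."""
    r = random.randrange(sweep)
    return nonideal_code(x + r) - r

# Averaged over many conversions, the wide bin's error is smeared over
# the sweep range instead of being concentrated at code 100.
codes = [sliding_scale_code(100.5) for _ in range(1000)]
```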
Selecting numerical scales for pairwise comparisons
Elliott, Michael A.
2010-01-01
It is often desirable in decision analysis problems to elicit from an individual the rankings of a population of attributes according to the individual's preference and to understand the degree to which each attribute is preferred to the others. A common method for obtaining this information involves the use of pairwise comparisons, which allows an analyst to convert subjective expressions of preference between two attributes into numerical values indicating preferences across the entire population of attributes. Key to the use of pairwise comparisons is the underlying numerical scale that is used to convert subjective linguistic expressions of preference into numerical values. This scale represents the psychological manner in which individuals perceive increments of preference among abstract attributes, and it has important implications for the distribution and consistency of an individual's preferences. Three popular scale types are examined: the traditional integer scales, balanced scales, and power scales. Results of a study of 64 individuals responding to a hypothetical decision problem show that none of these scales can accurately capture the preferences of all individuals. A study of three individuals working on an actual engineering decision problem involving the design of a decay heat removal system for a nuclear fission reactor shows that the choice of scale can affect the preferred decision. It is concluded that applications of pairwise comparisons would benefit from permitting participants to choose the scale that best models their own particular way of thinking about the relative preference of attributes.
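The three scale families can be compared side by side. The sketch below uses commonly cited forms (the integer scale maps verbal intensities 1-9 to themselves; the balanced scale spaces the implied weights 0.5-0.9 evenly and converts them to ratios; the power scale uses a geometric progression); the exact definitions used in the paper may differ:

```python
import math

def integer_scale(x: int) -> float:
    """Traditional 1-9 integer scale: verbal intensity maps to itself."""
    return float(x)

def balanced_scale(x: int) -> float:
    """Weights 0.5 .. 0.9 spaced evenly over x = 1..9, mapped to ratios."""
    w = 0.45 + 0.05 * x
    return w / (1.0 - w)

def power_scale(x: int, c: float = math.sqrt(2)) -> float:
    """Geometric progression c**(x - 1)."""
    return c ** (x - 1)

# The same verbal judgment yields very different numbers per scale:
for x in (1, 5, 9):
    print(x, integer_scale(x), round(balanced_scale(x), 2), round(power_scale(x), 2))
```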
New SCALE graphical interface for criticality safety
Bowman, Stephen M.; Horwedel, James E.
2003-01-01
The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. SCALE includes the well-known KENO V.a and KENO-VI three-dimensional (3-D) Monte Carlo criticality computer codes. One of the current development efforts aimed at making SCALE easier to use is the SCALE Graphically Enhanced Editing Wizard (GeeWiz). GeeWiz is compatible with SCALE 5 and runs on Windows personal computers. GeeWiz provides input menus and context-sensitive help to guide users through the setup of their input. It includes a direct link to KENO3D to allow the user to view the components of their geometry model as it is constructed. Once the input is complete, the user can click a button to run SCALE and another button to view the output. KENO3D has also been upgraded for compatibility with SCALE 5 and interfaces directly with GeeWiz. GeeWiz and KENO3D for SCALE 5 are planned for release in late 2003. The presentation of this paper is designed as a live demonstration of GeeWiz and KENO3D for SCALE 5. (author)
Ergodicity breakdown and scaling from single sequences
Kalashyan, Armen K. [Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, TX 76203-1427 (United States); Buiatti, Marco [Laboratoire de Neurophysique et Physiologie, CNRS UMR 8119 Universite Rene Descartes - Paris 5 45, rue des Saints Peres, 75270 Paris Cedex 06 (France); Cognitive Neuroimaging Unit - INSERM U562, Service Hospitalier Frederic Joliot, CEA/DRM/DSV, 4 Place du general Leclerc, 91401 Orsay Cedex (France); Grigolini, Paolo [Center for Nonlinear Science, University of North Texas, P.O. Box 311427, Denton, TX 76203-1427 (United States); Dipartimento di Fisica ' E.Fermi' - Universita di Pisa and INFM, Largo Pontecorvo 3, 56127 Pisa (Italy); Istituto dei Processi Chimico, Fisici del CNR Area della Ricerca di Pisa, Via G. Moruzzi 1, 56124 Pisa (Italy)], E-mail: grigo@df.unipi.it
2009-01-30
In the ergodic regime, several methods efficiently estimate the temporal scaling of time series characterized by long-range power-law correlations by converting them into diffusion processes. However, under ergodicity breakdown, the same methods give ambiguous results. We show that in such a regime, two different scaling behaviors emerge, depending on the age of the windows used for the estimation. We explain the ambiguity of the estimation methods by the different influence of the two scaling behaviors on each method. Our results suggest that aging drastically alters the scaling properties of non-ergodic processes.
Scale Mismatches in Management of Urban Landscapes
Sara T. Borgström
2006-12-01
Urban landscapes constitute the future environment for most of the world's human population. An increased understanding of the urbanization process and of the effects of urbanization at multiple scales is, therefore, key to ensuring human well-being. In many conventional natural resource management regimes, incomplete knowledge of ecosystem dynamics and institutional constraints often leads to institutional management frameworks that do not match the scale of ecological patterns and processes. In this paper, we argue that scale mismatches are particularly pronounced in urban landscapes. Urban green spaces provide numerous important ecosystem services to urban citizens, and the management of these urban green spaces, including recognition of scales, is crucial to the well-being of the citizens. From a qualitative study of the current management practices in five urban green spaces within the Greater Stockholm Metropolitan Area, Sweden, we found that (1) several spatial, temporal, and functional scales are recognized, but the cross-scale interactions are often neglected, and (2) spatial and temporal meso-scales are seldom given priority. One potential effect of the neglect of ecological cross-scale interactions in these highly fragmented landscapes is a gradual reduction in the capacity of the ecosystems to provide ecosystem services. Two important strategies for overcoming urban scale mismatches are suggested: (1) development of an integrative view of the whole urban social-ecological landscape, and (2) creation of adaptive governance systems to support practical management.
Inflation in a Scale Invariant Universe
Ferreira, Pedro G. [Oxford U.; Hill, Christopher T. [Fermilab; Noller, Johannes [Zurich U.; Ross, Graham G. [Oxford U., Theor. Phys.
2018-02-16
A scale-invariant universe can have a period of accelerated expansion at early times: inflation. We use a frame-invariant approach to calculate inflationary observables in a scale invariant theory of gravity involving two scalar fields - the spectral indices, the tensor to scalar ratio, the level of isocurvature modes and non-Gaussianity. We show that scale symmetry leads to an exact cancellation of isocurvature modes and that, in the scale-symmetry broken phase, this theory is well described by a single scalar field theory. We find the predictions of this theory strongly compatible with current observations.
Water scaling in the North Sea oil and gas fields and scale prediction: An overview
Yuan, M
1997-12-31
Water scaling is a common and major production chemistry problem in the North Sea oil and gas fields, and scale prediction has been an important means of assessing the potential and extent of scale deposition. This paper presents an overview of sulphate and carbonate scaling problems in the North Sea and a review of several widely used and commercially available scale prediction software packages. The water chemistries and scale types and severities are discussed relative to the geographical distribution of the fields in the North Sea. The theories behind scale prediction are then briefly described. Five scale or geochemical models are presented, and various definitions of the saturation index are compared and correlated. Views are then expressed on how to predict scale precipitation under some extreme conditions, such as those encountered in HPHT reservoirs. 15 refs., 7 figs., 9 tabs.
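A common quantitative basis for such scale prediction models is the saturation index, SI = log10(IAP/Ksp), comparing the ion activity product of the brine to the mineral's solubility product. A minimal sketch with illustrative numbers for barium sulphate (the activities and Ksp below are assumed values, not figures from the paper):

```python
import math

def saturation_index(iap: float, ksp: float) -> float:
    """SI = log10(IAP / Ksp): positive means supersaturated (scale
    deposition likely), zero at equilibrium, negative undersaturated."""
    return math.log10(iap / ksp)

# Illustrative ion activities (mol/L) and a barite Ksp of order 1e-10:
iap = 2e-4 * 5e-4                      # a(Ba2+) * a(SO4^2-)
si = saturation_index(iap, 1e-10)
print(round(si, 2))  # 3.0
```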
Dispersion and Cluster Scales in the Ocean
Kirwan, A. D., Jr.; Chang, H.; Huntley, H.; Carlson, D. F.; Mensa, J. A.; Poje, A. C.; Fox-Kemper, B.
2017-12-01
Ocean flow space scales range from centimeters to thousands of kilometers. Because of their large Reynolds numbers, these flows are considered turbulent. However, because of rotation and stratification constraints, they do not conform to classical turbulence scaling theory. Mesoscale and large-scale motions are well described by geostrophic or "2D turbulence" theory; however, extending this theory to submesoscales has proved problematic. One obvious reason is the difficulty in obtaining reliable data over many orders of magnitude of spatial scales in an ocean environment. The goal of this presentation is to provide a preliminary synopsis of two recent experiments that overcame these obstacles. The first experiment, the Grand LAgrangian Deployment (GLAD), was conducted during July 2012 in the eastern half of the Gulf of Mexico. Here approximately 300 GPS-tracked drifters were deployed, with the primary goal of determining whether the relative dispersion of an initially densely clustered array was driven by processes acting at local pair separation scales or by straining imposed by mesoscale motions. The second experiment was a component of the LAgrangian Submesoscale Experiment (LASER), conducted during the winter of 2016. Here thousands of bamboo plates were tracked optically from an aerostat. Together these two deployments provided an unprecedented data set on dispersion and clustering processes at scales from 1 to 10^6 meters. Calculations of statistics such as two-point separations, structure functions, and scale-dependent relative diffusivities showed an inverse energy cascade, as expected, at scales above 10 km and a forward energy cascade at scales below 10 km, with a possible energy input at Langmuir circulation scales. We also find evidence from structure function calculations for surface flow convergence at scales less than 10 km, which accounts for material clustering at the ocean surface.
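The basic relative-dispersion statistic behind these analyses is the mean squared separation over all drifter pairs as a function of time. A minimal numpy sketch (the GLAD/LASER analyses go much further, with structure functions and scale-dependent diffusivities):

```python
import numpy as np

def relative_dispersion(tracks: np.ndarray) -> np.ndarray:
    """Mean squared pair separation at each time.
    tracks: (n_drifters, n_times, 2) array of positions in meters."""
    n = tracks.shape[0]
    d2 = np.zeros(tracks.shape[1])
    for i in range(n):
        for j in range(i + 1, n):
            sep = tracks[i] - tracks[j]
            d2 += np.sum(sep**2, axis=1)   # squared separation of this pair
    return d2 / (n * (n - 1) / 2)          # average over all pairs

# Two drifters separating at 1 m per time step:
tracks = np.array([[[0, 0], [0, 0], [0, 0]],
                   [[0, 0], [1, 0], [2, 0]]], dtype=float)
print(relative_dispersion(tracks))  # [0. 1. 4.]
```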
Liu Lianshou; Zhang Yang; Wu Yuanfang
1996-01-01
The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently in use have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which also gives good scaling properties for a continuously varying scale. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)
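The analysis rests on normalized factorial moments F_q(M) computed at successively finer bin sizes: intermittent (cascading) dynamics make F_q grow as a power of M, while purely statistical fluctuations leave F_q near 1. A sketch of the standard horizontally averaged estimator (an assumed textbook form; the paper's cascading models generate the events differently):

```python
import numpy as np

def factorial_moment(events, M: int, q: int) -> float:
    """Horizontally averaged normalized factorial moment F_q for M bins.
    events: iterable of 1-D arrays of particle positions in [0, 1)."""
    fall_sum, n_total, n_events = 0.0, 0.0, 0
    for ev in events:
        counts, _ = np.histogram(ev, bins=M, range=(0.0, 1.0))
        fall = counts.astype(float)
        for k in range(1, q):              # falling factorial n(n-1)...(n-q+1)
            fall = fall * (counts - k)
        fall_sum += fall.sum()
        n_total += counts.sum()
        n_events += 1
    n_bar = n_total / n_events             # mean total multiplicity <N>
    return M ** (q - 1) * (fall_sum / n_events) / n_bar**q

# Uniform (non-intermittent) events: F_2 stays close to 1 at any M.
rng = np.random.default_rng(0)
events = rng.random((2000, 50))
f2 = factorial_moment(events, 10, 2)
```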
Price Discrimination, Economies of Scale, and Profits.
Park, Donghyun
2000-01-01
Demonstrates that it is possible for economies of scale to induce a price-discriminating monopolist to sell in an unprofitable market where the average cost always exceeds the price. States that higher profits in the profitable market caused by economies of scale may exceed losses incurred in the unprofitable market. (CMK)
Geometrical scaling, furry branching and minijets
Hwa, R.C.
1988-01-01
Scaling properties and their violations in hadronic collisions are discussed in the framework of the geometrical branching model. Geometrical scaling supplemented by Furry branching characterizes the soft component, while the production of jets specifies the hard component. Many features of multiparticle production processes are well described by this model. 21 refs
The Resiliency Scale for Young Adults
Prince-Embury, Sandra; Saklofske, Donald H.; Nordstokke, David W.
2017-01-01
The Resiliency Scale for Young Adults (RSYA) is presented as an upward extension of the Resiliency Scales for Children and Adolescents (RSCA). The RSYA is based on the "three-factor model of personal resiliency" including "mastery," "relatedness," and "emotional reactivity." Several stages of scale…
Strontium Removal: Full-Scale Ohio Demonstrations
The objectives of this presentation are to present a brief overview of past bench-scale research evaluating the impact of lime softening on strontium removal from drinking water, and to present full-scale drinking water treatment studies on the impact of lime softening and ion exchange sof...
Evaluation of a constipation risk assessment scale.
Zernike, W; Henderson, A
1999-06-01
This project was undertaken in order to evaluate the utility of a constipation risk assessment scale and the accompanying bowel management protocol. The risk assessment scale was primarily introduced to teach and guide staff in managing constipation when caring for patients. The intention of the project was to reduce the incidence of constipation in patients during their admission to hospital.
Scaling solutions for dilaton quantum gravity
T. Henz
2017-06-01
The field equations derived from this effective action can be used directly for cosmology. Scale symmetry is spontaneously broken by a non-vanishing cosmological value of the scalar field. For the cosmology corresponding to our scaling solutions, inflation arises naturally. The effective cosmological constant becomes dynamical and vanishes asymptotically as time goes to infinity.
Scale invariant Volkov–Akulov supergravity
S. Ferrara
2015-10-01
A scale invariant goldstino theory coupled to supergravity is obtained as a standard supergravity dual of a rigidly scale-invariant higher-curvature supergravity with a nilpotent chiral scalar curvature. The bosonic part of this theory describes a massless scalaron and a massive axion in a de Sitter Universe.
Crown ratio influences allometric scaling in trees
Annikki Makela; Harry T. Valentine
2006-01-01
Allometric theories suggest that the size and shape of organisms follow universal rules, with a tendency toward quarter-power scaling. In woody plants, however, structure is influenced by branch death and shedding, which leads to decreasing crown ratios, accumulation of heartwood, and stem and branch tapering. This paper examines the impacts on allometric scaling of...
Large Scale Computations in Air Pollution Modelling
Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.
Scaling and critical behaviour in nuclear fragmentation
Campi, X.
1990-09-01
These notes review recent results on nuclear fragmentation. An analysis of experimental data from exclusive experiments is made in the framework of modern theories of fragmentation of finite size objects. We discuss the existence of a critical regime of fragmentation and the relevance of scaling and finite size scaling
Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jessee, Matthew Anderson [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-05-01
The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.
Statistics for Locally Scaled Point Patterns
Prokesová, Michaela; Hahn, Ute; Vedel Jensen, Eva B.
2006-01-01
scale factor. The main emphasis of the present paper is on analysis of such models. Statistical methods are developed for estimation of scaling function and template parameters as well as for model validation. The proposed methods are assessed by simulation and used in the analysis of a vegetation...
Scale-sensitive governance of the environment
Padt, F.; Opdam, P.F.M.; Polman, N.B.P.; Termeer, C.J.A.M.
2014-01-01
Sensitivity to scales is one of the key challenges in environmental governance. Climate change, food production, energy supply, and natural resource management are examples of environmental challenges that stretch across scales and require action at multiple levels. Governance systems are typically
Designing the Nuclear Energy Attitude Scale.
Calhoun, Lawrence; And Others
1988-01-01
Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)
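Reliability for a Likert-type scale of this kind is typically quantified with an internal-consistency statistic such as Cronbach's alpha; a minimal sketch (the article's specific validity and reliability tests are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    items: (n_respondents, n_items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total score
    return k / (k - 1) * (1.0 - item_var / total_var)

# Perfectly consistent responses across items give alpha = 1:
responses = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(cronbach_alpha(responses))  # 1.0
```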
Moral regulation: historical geography and scale
Legg, Stephen; Brown, Michael
2013-01-01
This paper introduces a special issue on the historical geography of moral regulation and scale. The paper examines the rich and varied work of geographers on moral geographies before looking at wider work on moral regulation influenced by Michel Foucault. Highlighting the significance of the neglected dimension of scale, the paper introduces the themes examined in the subsequent papers.
Scaling with known uncertainty: a synthesis
Jianguo Wu; Harbin Li; K. Bruce Jones; Orie L. Loucks
2006-01-01
Scale is a fundamental concept in ecology and all sciences (Levin 1992, Wu and Loucks 1995, Barenblatt 1996), and it has received increasing attention in recent years. The previous chapters have demonstrated an immense diversity of scaling issues present in different areas of ecology, covering species distribution, population dynamics, ecosystem processes, and...
Speculation about near-wall turbulence scales
Yurchenko, N F
2008-01-01
A strategy to control near-wall turbulence by modifying the scales of fluid motion is developed. The boundary-layer flow is shown to respond selectively to the scale of streamwise vortices initiated, e.g., with a spanwise-regular temperature distribution over a model surface. This response is used to generate sustainable streamwise vortices and thus to optimize integral flow characteristics.
Reliability of Multi-Category Rating Scales
Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.
2013-01-01
The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…
Getting to Scale: Evidence, Professionalism, and Community
Slavin, Robert E.
2016-01-01
Evidence-based reform, in which proven programs are scaled up to reach many students, is playing an increasing role in American education. This article summarizes articles in this issue to explain how Reading Recovery has managed to sustain itself and go to scale over more than 30 years. It argues that Reading Recovery has succeeded due to a focus…
Automating large-scale reactor systems
Kisner, R.A.
1985-01-01
This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig
Stochastic time scale for the Universe
Szydlowski, M.; Golda, Z.
1986-01-01
An intrinsic time scale is naturally defined within stochastic gradient dynamical systems. It should be interpreted as a ''relaxation time'' to a local potential minimum after the system has been randomly perturbed. It is shown that for a flat Friedman-like cosmological model this time scale is of order of the age of the Universe. 7 refs. (author)
[Development of an Atypical Response Scale].
Mendelsohn, Mark; Linden, James
The development of an objective diagnostic scale to measure atypical behavior is discussed. The Atypical Response Scale (ARS) is a structured projective test consisting of 17 items, each weighted 1, 2, or 3, that were tested for convergence and reliability. ARS may be individually or group administered in 10-15 minutes; hand scoring requires 90…
Some nonlinear dynamic inequalities on time scales
In 1988, Stefan Hilger [10] introduced the calculus on time scales which unifies continuous and discrete analysis. Since then many authors have expounded on various aspects of the theory of dynamic equations on time scales. Recently, there has been much research activity concerning the new theory. For example, we ...
Negative Life Events Scale for Students (NLESS)
Buri, John R.; Cromett, Cristina E.; Post, Maria C.; Landis, Anna Marie; Alliegro, Marissa C.
2015-01-01
Rationale is presented for the derivation of a new measure of stressful life events for use with students [Negative Life Events Scale for Students (NLESS)]. Ten stressful life events questionnaires were reviewed, and the more than 600 items mentioned in these scales were culled based on the following criteria: (a) only long-term and unpleasant…
Empirical scaling for present ohmic heated tokamaks
Daughney, C.
1975-06-01
Empirical scaling laws are given for the average electron temperature and electron energy confinement time as functions of plasma current, average electron density, effective ion charge, toroidal magnetic field, and major and minor plasma radius. The ohmic heating is classical, and the electron energy transport is anomalous. The present scaling indicates that ohmic heating becomes ineffective in larger experiments. (U.S.)
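Empirical scaling laws of this form are power laws in several variables and are conventionally fitted by linear least squares in log space. A minimal sketch on synthetic data (the variable names and exponents below are illustrative, not the paper's fitted values):

```python
import numpy as np

def fit_power_law(predictors: np.ndarray, y: np.ndarray):
    """Fit y = C * x1**a1 * x2**a2 * ... by least squares in log space.
    predictors: (n_obs, n_vars). Returns (C, array of exponents)."""
    X = np.column_stack([np.ones(len(y)), np.log(predictors)])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1:]

# Synthetic check: tau = 2 * n**0.5 * I**1.0 (illustrative exponents)
rng = np.random.default_rng(1)
n = rng.uniform(1.0, 10.0, 50)        # stand-in for density
current = rng.uniform(1.0, 10.0, 50)  # stand-in for plasma current
tau = 2.0 * n**0.5 * current
C, exps = fit_power_law(np.column_stack([n, current]), tau)
```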
Working out Migratory Attitudes Scale of Personality
S A Kuznetsova
2014-12-01
In this article, the first cycle of development of the migratory attitudes scale is presented. The results of a study of migratory attitudes among young people in Magadan show the adequacy of the theoretical hypotheses and the validity of the estimation procedure. Applying the scale has made it possible to obtain data on age-related features of migratory attitudes.
Test Review: Autism Spectrum Rating Scales
Simek, Amber N.; Wahlberg, Andrea C.
2011-01-01
This article reviews Autism Spectrum Rating Scales (ASRS) which are designed to measure behaviors in children between the ages of 2 and 18 that are associated with disorders on the autism spectrum as rated by parents/caregivers and/or teachers. The rating scales include items related to behaviors associated with Autism, Asperger's Disorder, and…
Broken Scale Invariance and Anomalous Dimensions
Wilson, K. G.
1970-05-01
Mack and Kastrup have proposed that broken scale invariance is a symmetry of strong interactions. There is evidence from the Thirring model and perturbation theory that the dimensions of fields defined by scale transformations will be changed by the interaction from their canonical values. We review these ideas and their consequences for strong interactions.
Large-scale perspective as a challenge
Plomp, M.G.A.
2012-01-01
1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that
Visuomotor Dissociation in Cerebral Scaling of Size
Potgieser, Adriaan R. E.; de Jong, Bauke M.
2016-01-01
Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in
The Dispositions for Culturally Responsive Pedagogy Scale
Whitaker, Manya C.; Valtierra, Kristina Marie
2018-01-01
Purpose: The purpose of this study is to develop and validate the dispositions for culturally responsive pedagogy scale (DCRPS). Design/methodology/approach: Scale development consisted of a six-step process including item development, expert review, exploratory factor analysis, factor interpretation, confirmatory factor analysis and convergent…
A Review of Reading Motivation Scales
Davis, Marcia H.; Tonks, Stephen M.; Hock, Michael; Wang, Wenhao; Rodriguez, Aldo
2018-01-01
Reading motivation is a critical contributor to reading achievement and has the potential to influence its development. Educators, researchers, and evaluators need to select the best reading motivation scales for their research and classroom. The goals of this review were to identify a set of reading motivation student self-report scales used in…
Some Problems of Industrial Scale-Up.
Jackson, A. T.
1985-01-01
Scientific ideas of the biological laboratory are turned into economic realities in industry only after several problems are solved. Economics of scale, agitation, heat transfer, sterilization of medium and air, product recovery, waste disposal, and future developments are discussed using aerobic respiration as the example in the scale-up…
Determining the scale in lattice QCD
Bornyakov, V.G. [Institute for High Energy Physics, Protvino (Russian Federation); Institute of Theoretical and Experimental Physics, Moscow (Russian Federation); Far Eastern Federal Univ., Vladivostok (Russian Federation). School of Biomedicine; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Hudspith, R. [York Univ., Toronto, ON (Canada). Dept. of Physics and Astronomy; and others
2015-12-15
We discuss scale setting in the context of 2+1 dynamical fermion simulations, where we approach the physical point in the quark mass plane keeping the average quark mass constant. We have simulations at four beta values, and after determining the paths and lattice spacings, we give an estimate of the phenomenological values of various Wilson flow scales.
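A Wilson flow scale such as t0 is conventionally defined by the condition t^2 <E(t)> = 0.3, and is extracted from simulated flow data by interpolation. A minimal sketch on synthetic data (not the paper's ensembles or its specific scale definitions):

```python
import numpy as np

def t0_from_flow(t: np.ndarray, E: np.ndarray, target: float = 0.3) -> float:
    """Flow time t0 where t**2 * <E(t)> crosses `target`, by linear
    interpolation; assumes t**2 * E is monotonically increasing."""
    f = t**2 * E
    i = int(np.searchsorted(f, target))
    return t[i - 1] + (target - f[i - 1]) * (t[i] - t[i - 1]) / (f[i] - f[i - 1])

# Synthetic flow where t**2 * E(t) = 0.1 * t, so t0 = 3 exactly:
t = np.linspace(1.0, 5.0, 5)
E = 0.1 / t
print(round(t0_from_flow(t, E), 6))  # 3.0
```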
Scale dependence of effective media properties
Tidwell, V.C.; VonDoemming, J.D.; Martinez, K.
1992-01-01
For problems where media properties are measured at one scale and applied at another, scaling laws or models must be used to define effective properties at the scale of interest. The accuracy of such models will play a critical role in predicting flow and transport through the Yucca Mountain Test Site, given the sensitivity of these calculations to the input property fields. Therefore, a research program has been established to gain a fundamental understanding of how properties scale, with the aim of developing and testing models that describe scaling behavior in a quantitative manner. Scaling of constitutive rock properties is investigated through physical experimentation involving the collection of suites of gas permeability data measured over a range of discrete scales. Also, various physical characteristics of property heterogeneity and the means by which the heterogeneity is measured and described are systematically investigated to evaluate their influence on scaling behavior. This paper summarizes the approach that is being taken toward this goal and presents the results of a scoping study that was conducted to evaluate the feasibility of the proposed research.
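For reference, any model of upscaled permeability must respect the classical analytic bounds: the harmonic mean (flow in series) and arithmetic mean (flow in parallel) bracket the effective value, with the geometric mean a common estimate for 2-D heterogeneity. A minimal sketch of these bounds (a general statistical fact, not the paper's specific model):

```python
import numpy as np

def effective_permeability_bounds(k: np.ndarray):
    """Harmonic (series flow, lower bound), geometric (common 2-D
    estimate), and arithmetic (parallel flow, upper bound) means of
    fine-scale permeability values."""
    k = np.asarray(k, dtype=float)
    harmonic = k.size / np.sum(1.0 / k)
    geometric = float(np.exp(np.mean(np.log(k))))
    arithmetic = float(k.mean())
    return harmonic, geometric, arithmetic

h, g, a = effective_permeability_bounds(np.array([1.0, 4.0]))
print(h, g, a)
```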
Scale invariant Volkov–Akulov supergravity
Ferrara, S., E-mail: sergio.ferrara@cern.ch [Th-Ph Department, CERN, CH-1211 Geneva 23 (Switzerland); INFN – Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Italy); Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States)]; Porrati, M., E-mail: mp9@nyu.edu [Th-Ph Department, CERN, CH-1211 Geneva 23 (Switzerland); CCPP, Department of Physics, NYU, 4 Washington Pl., New York, NY 10003 (United States)]; Sagnotti, A., E-mail: sagnotti@sns.it [Th-Ph Department, CERN, CH-1211 Geneva 23 (Switzerland); Scuola Normale Superiore and INFN, Piazza dei Cavalieri 7, 56126 Pisa (Italy)]
2015-10-07
A scale invariant goldstino theory coupled to supergravity is obtained as a standard supergravity dual of a rigidly scale-invariant higher-curvature supergravity with a nilpotent chiral scalar curvature. The bosonic part of this theory describes a massless scalaron and a massive axion in a de Sitter Universe.
Decentralized Large-Scale Power Balancing
Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2013-01-01
problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...
One-scale supersymmetric inflationary models
Bertolami, O.; Ross, G.G.
1986-01-01
The reheating phase is studied in a class of supergravity inflationary models involving a two-component hidden sector in which the scale of supersymmetry breaking and the scale generating inflation are related. It is shown that these models have an ''entropy crisis'' in which there is a large entropy release after nucleosynthesis, leading to unacceptably low nuclear abundances. (orig.)
Can stroke patients use visual analogue scales?
Price, C I; Curless, R H; Rodgers, H
1999-07-01
Visual analogue scales (VAS) have been used for the subjective measurement of mood, pain, and health status after stroke. In this study we investigated how stroke-related impairments could alter the ability of subjects to answer accurately. Consent was obtained from 96 subjects with a clinical stroke (mean age, 72.5 years; 50 men) and 48 control subjects without cerebrovascular disease (mean age, 71.5 years; 29 men). Patients with reduced conscious level or severe dysphasia were excluded. Subjects were asked to rate the tightness that they could feel on the (unaffected) upper arm after 3 low-pressure inflations with a standard sphygmomanometer cuff, which followed a predetermined sequence (20 mm Hg, 40 mm Hg, 0 mm Hg). Immediately after each change, they rated the perceived tightness on 5 scales presented in a random order: 4-point rating scale (none, mild, moderate, severe), 0 to 10 numerical rating scale, mechanical VAS, horizontal VAS, and vertical VAS. Standard tests recorded deficits in language, cognition, and visuospatial awareness. Inability to complete scales with the correct pattern was associated with any stroke (P<0.001). There was a significant association between success using scales and milder clinical stroke subtype (P<0.01). Within the stroke group, logistic regression analysis identified significant associations (P<0.05) between impairments (cognitive and visuospatial) and inability to complete individual scales correctly. Many patients after a stroke are unable to successfully complete self-report measurement scales, including VAS.
Developing a News Media Literacy Scale
Ashley, Seth; Maksl, Adam; Craft, Stephanie
2013-01-01
Using a framework previously applied to other areas of media literacy, this study developed and assessed a measurement scale focused specifically on critical news media literacy. Our scale appears to successfully measure news media literacy as we have conceptualized it based on previous research, demonstrated through assessments of content,…
Quantum implications of a scale invariant regularization
Ghilencea, D. M.
2018-04-01
We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of ϕ) by the associated Goldstone mode (dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (ϕ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 − 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).
Multiple time scale methods in tokamak magnetohydrodynamics
Jardin, S.C.
1984-01-01
Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure, magnetic fields and currents to remain in force-balance equilibrium as they evolve. The large-mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena without changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.
Adjustment of the Internal Tax Scale
2013-01-01
In application of Article R V 2.03 of the Staff Regulations, the internal tax scale has been adjusted with effect on 1 January 2012. The new scale may be consulted via the CERN Admin e-guide. The notification of internal annual tax certificate for the financial year 2012 takes into account this adjustment. HR Department (Tel. 73907)
Vacuum alignment and radiatively induced Fermi scale
Alanne Tommi
2017-01-01
We extend the discussion of vacuum misalignment by quantum corrections in models with a composite pseudo-Goldstone Higgs boson to renormalisable models with elementary scalars. As a concrete example, we propose a framework where the hierarchy between the unification and the Fermi scale emerges radiatively. This scenario provides an interesting link between the unification and Fermi scale physics.
Uniform Statistical Convergence on Time Scales
Yavuz Altin
2014-01-01
We will introduce the concepts of m- and (λ, m)-uniform density of a set and of m- and (λ, m)-uniform statistical convergence on an arbitrary time scale. Moreover, we will define the m-uniform Cauchy function on a time scale. Furthermore, some relations between these new notions are also obtained.
Multi-Scale Models for the Scale Interaction of Organized Tropical Convection
Yang, Qiu
Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.
The Chinese version of the Myocardial Infarction Dimensional Assessment Scale (MIDAS): Mokken scaling
Watson Roger
2012-01-01
Background: Hierarchical scales are very useful in clinical practice due to their ability to discriminate precisely between individuals, and the original English version of the Myocardial Infarction Dimensional Assessment Scale has been shown to contain a hierarchy of items. The purpose of this study was to analyse a Mandarin Chinese translation of the Myocardial Infarction Dimensional Assessment Scale for a hierarchy of items according to the criteria of Mokken scaling. Data from 180 Chinese participants who completed the Chinese translation of the Myocardial Infarction Dimensional Assessment Scale were analysed using the Mokken Scaling Procedure and the 'R' statistical programme, using the diagnostics available in these programmes. Correlation between the Mandarin Chinese items and a Chinese translation of the Short Form (36) Health Survey was also analysed. Findings: Fifteen items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale were retained in a strong and reliable Mokken scale; invariant item ordering was not evident, and the Mokken-scaled items of the Chinese Myocardial Infarction Dimensional Assessment Scale correlated with the Short Form (36) Health Survey. Conclusions: Items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale form a Mokken scale, and this offers further insight into how the items of the Myocardial Infarction Dimensional Assessment Scale relate to the measurement of health-related quality of life in people with a myocardial infarction.
Development and Standardization of the Gratitude Scale
Mohammad Anas
2016-12-01
The Gratitude Scale (GS) developed by the authors was administered to 456 adults to determine its psychometric characteristics, i.e. reliability and validity. Cronbach's alpha of the scale was found to be 0.91. Content validity of the scale was verified by experts, academicians, and professionals. For testing multicollinearity and singularity, the determinant of the R-matrix was estimated, and it was greater than 0.00001. Items having a factor loading greater than or equal to 0.40 were selected. In total, 26 items across five dimensions emerged through Exploratory Factor Analysis, explaining 58.14% of the variance, which provided evidence of the factorial/construct validity of the scale. The scale can be used for research and human resource development programs in schools/universities and organizations.
Phylogeny and palm diversity across scales
Eiserhardt, Wolf L.; Svenning, J.-C.; Baker, William J.
spatial scales. Among others, we ask the following questions: To what extent can niche conservatism explain large-scale distribution patterns? Which assembly mechanisms are responsible for palm community composition on different spatial scales? What is the role of phylogenetic history for spatial patterns...... clustering. On a continental scale in the New World, we inferred from phylogenetic turnover that palms diversified mainly within seven biogeographic regions. This pattern of in situ diversification is strongly driven by a combination of phylogenetic niche conservatism, environmental filtering and dispersal...... limitation. Niche conservatism with respect to temperature seasonality and extremes emerges as an important determinant of palm species and clade distributions and thus there is concern that palms might be considerably vulnerable to climate change. On a regional to local scale in the Western Amazon...
Organization and scaling in water supply networks
Cheng, Likwan; Karney, Bryan W.
2017-12-01
Public water supply is one of the society's most vital resources and most costly infrastructures. Traditional concepts of these networks capture their engineering identity as isolated, deterministic hydraulic units, but overlook their physics identity as related entities in a probabilistic, geographic ensemble, characterized by size organization and property scaling. Although discoveries of allometric scaling in natural supply networks (organisms and rivers) raised the prospect for similar findings in anthropogenic supplies, so far such a finding has not been reported in public water or related civic resource supplies. Examining an empirical ensemble of large number and wide size range, we show that water supply networks possess self-organized size abundance and theory-explained allometric scaling in spatial, infrastructural, and resource- and emission-flow properties. These discoveries establish scaling physics for water supply networks and may lead to novel applications in resource- and jurisdiction-scale water governance.
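The allometric scaling reported above can be checked on data by ordinary least squares in log-log space. The sketch below fits a power law y = a·x^b to a synthetic ensemble; the variable names and the exponent 0.8 are illustrative assumptions, not values from the article.

```python
import numpy as np

def fit_power_law(x, y):
    """Estimate (a, b) in y = a * x**b via least squares in log-log space."""
    logx, logy = np.log(x), np.log(y)
    b, loga = np.polyfit(logx, logy, 1)  # slope = exponent, intercept = log(a)
    return np.exp(loga), b

# Synthetic ensemble: a network property scaling with population, exponent 0.8
rng = np.random.default_rng(0)
pop = rng.uniform(1e3, 1e6, 200)
length = 2.0 * pop**0.8 * rng.lognormal(0.0, 0.05, 200)  # noisy power law

a, b = fit_power_law(pop, length)
print(round(b, 2))  # exponent recovered near 0.8
```

The log-log transform turns multiplicative scatter into additive noise, which is why the lognormal perturbation above leaves the exponent estimate unbiased.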
Scaling relations for eddy current phenomena
Dodd, C.V.; Deeds, W.E.
1975-11-01
Formulas are given for various electromagnetic quantities for coils in the presence of conductors, with the scaling parameters factored out so that small-scale model experiments can be related to large-scale apparatus. Particular emphasis is given to such quantities as eddy current heating, forces, power, and induced magnetic fields. For axially symmetric problems, closed-form integrals are available for the vector potential and all the other quantities obtainable from it. For unsymmetrical problems, a three-dimensional relaxation program can be used to obtain the vector potential and then the derivable quantities. Data on experimental measurements are given to verify the validity of the scaling laws for forces, inductances, and impedances. Indirectly these also support the validity of the scaling of the vector potential and all of the other quantities obtained from it
Abusive Supervision Scale Development in Indonesia
Fenika Wulani
2014-02-01
The purpose of this study was to develop a scale of abusive supervision in Indonesia, using a different context and scale development method from Tepper's (2000) abusive supervision scale. Tepper's (2000) scale was developed in the U.S., which has a cultural orientation of low power distance; the current study was conducted in Indonesia, which has high power distance. This study used interview procedures to obtain information about supervisors' abusive behavior, which was also assessed by experts. The results of this study indicated that abusive supervision is a 3-dimensional construct: anger-active abuse (6 items), humiliation-active abuse (4 items), and passive abuse (15 items). These scales have internal reliabilities of 0.947, 0.922, and 0.845, respectively.
Multi-scale modeling of composites
Azizi, Reza
A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale......-Mandel’s energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain condition. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic...... to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.
Generalized probabilistic scale space for image restoration.
Wong, Alexander; Mishra, Akshaya K
2010-10-01
A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.
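For contrast with the probabilistic extension above, here is a minimal sketch of the classical linear (Gaussian) scale space it generalizes; this is textbook scale-space theory, not the authors' sampling-based method.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def scale_space(image, sigmas):
    """Classical linear scale space: the image blurred at increasing sigma."""
    levels = []
    for s in sigmas:
        k = gaussian_kernel1d(s)
        # separable 2-D convolution: rows first, then columns
        blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
        blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
        levels.append(blurred)
    return levels

img = np.zeros((32, 32)); img[16, 16] = 1.0  # impulse image
levels = scale_space(img, sigmas=[1.0, 2.0, 4.0])
# coarser scales spread the impulse, so the peak value decreases monotonically
print([round(l.max(), 4) for l in levels])
```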
Scale invariance from phase transitions to turbulence
Lesne, Annick
2012-01-01
For a century, from the Van der Waals mean-field description of gases (1874) to the introduction of renormalization group (RG) techniques (1970), thermodynamics and statistical physics were simply unable to account for the incredible universality observed in numerous critical phenomena. The great success of RG techniques is not only to solve perfectly this challenge of critical behaviour in thermal transitions, but to introduce extremely useful tools in a wide field of everyday situations where a system exhibits scale invariance. The introduction of scaling, scale invariance and universality concepts has been a significant turn in modern physics and, more generally, in natural sciences. Since then, a new "physics of scaling laws and critical exponents", rooted in scaling approaches, allows quantitative descriptions of numerous phenomena, ranging from phase transitions to earthquakes, polymer conformations, heartbeat rhythm, diffusion, interface growth and roughening, DNA sequences, dynamical systems, chaos ...
The Phenomenology of Small-Scale Turbulence
Sreenivasan, K. R.; Antonia, R. A.
I have sometimes thought that what makes a man's work classic is often just this multiplicity [of interpretations], which invites and at the same time resists our craving for a clear understanding. Wright (1982, p. 34), on Wittgenstein's philosophy Small-scale turbulence has been an area of especially active research in the recent past, and several useful research directions have been pursued. Here, we selectively review this work. The emphasis is on scaling phenomenology and kinematics of small-scale structure. After providing a brief introduction to the classical notions of universality due to Kolmogorov and others, we survey the existing work on intermittency, refined similarity hypotheses, anomalous scaling exponents, derivative statistics, intermittency models, and the structure and kinematics of small-scale structure - the latter aspect coming largely from the direct numerical simulation of homogeneous turbulence in a periodic box.
The scaling of experiments on volcanic systems
Olivier Merle
2015-06-01
In this article, the basic principles of the scaling procedure are first reviewed by a presentation of scale factors. Then, taking an idealized example of a brittle volcanic cone intruded by a viscous magma, the way to choose appropriate analogue materials for both the brittle and ductile parts of the cone is explained by the use of model ratios. Lines of similarity are described to show that an experiment simulates a range of physical processes instead of a unique natural case. The pi theorem is presented as an alternative scaling procedure and discussed through the same idealized example to make the comparison with the model ratio procedure. The appropriateness of the use of gelatin as analogue material for simulating dyke formation is investigated. Finally, the scaling of some particular experiments such as pyroclastic flows or volcanic explosions is briefly presented to show the diversity of scaling procedures in volcanology.
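The model-ratio bookkeeping described above can be sketched numerically. Assuming gravity-driven deformation, the stress ratio is σ* = ρ*·g*·L*, and viscous similarity fixes the time ratio through μ* = σ*·t*; all numbers below are illustrative assumptions, not values from the article.

```python
# Scale-factor (model ratio) bookkeeping for a brittle cone intruded by
# viscous magma. All numerical values are illustrative assumptions.

def stress_ratio(rho_star, g_star, length_star):
    """sigma* = rho* g* L*  (ratio of gravitational stresses, model/nature)."""
    return rho_star * g_star * length_star

def time_ratio(mu_star, sigma_star):
    """For viscous flow, mu* = sigma* * t*  =>  t* = mu* / sigma*."""
    return mu_star / sigma_star

rho_star = 1400.0 / 2600.0   # analogue sand vs rock density
g_star = 1.0                 # experiment runs at natural gravity
length_star = 1e-5           # 10 km edifice -> 10 cm model
mu_star = 1e4 / 1e10         # silicone putty vs (assumed) dome-lava viscosity

sigma_star = stress_ratio(rho_star, g_star, length_star)
t_star = time_ratio(mu_star, sigma_star)
print(f"stress ratio {sigma_star:.2e}, time ratio {t_star:.2f}")
# one week of natural deformation then maps to t_star * 7 days in the lab
```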
Scale Dependence of Dark Energy Antigravity
Perivolaropoulos, L.
2002-09-01
We investigate the effects of negative pressure induced by dark energy (cosmological constant or quintessence) on the dynamics at various astrophysical scales. Negative pressure induces a repulsive term (antigravity) in Newton's law which dominates on large scales. Assuming a value of the cosmological constant consistent with the recent SnIa data, we determine the critical scale $r_c$ beyond which antigravity dominates the dynamics ($r_c \sim 1$ Mpc) and discuss some of the dynamical effects implied. We show that dynamically induced mass estimates on the scale of the Local Group and beyond are significantly modified due to negative pressure. We also briefly discuss possible dynamical tests (e.g. effects on the local Hubble flow) that can be applied on relatively small scales (a few Mpc) to determine the density and equation of state of dark energy.
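The critical scale quoted in the abstract follows from balancing the Newtonian attraction GM/r² against the repulsive term Λc²r/3, giving r_c = (3GM/Λc²)^(1/3). A quick check with rough, assumed numbers for the Local Group:

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
LAMBDA = 1.1e-52       # m^-2, cosmological constant (assumed concordance value)
MPC = 3.086e22         # m
M_SUN = 1.989e30       # kg

def critical_radius(mass_kg):
    """Radius where the repulsive Lambda term (Lambda c^2 r / 3) balances
    Newtonian attraction (G M / r^2):  r_c = (3 G M / (Lambda c^2))**(1/3)."""
    return (3.0 * G * mass_kg / (LAMBDA * C**2)) ** (1.0 / 3.0)

# Local Group mass ~ 2e12 solar masses (rough, assumed)
r_c = critical_radius(2e12 * M_SUN)
print(f"{r_c / MPC:.1f} Mpc")  # order 1 Mpc, as in the abstract
```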
Opportunities for reactor scale experimental physics
1999-01-01
A reactor scale tokamak plasma will exhibit three areas of physics phenomenology not accessible by contemporary experimental facilities. These are: (1) instabilities generated by energetic alpha particles; (2) self-heating phenomena; and (3) reactor scale physics, which includes integration of diverse physics phenomena, each with its own scaling properties. In each area, selected examples are presented that demonstrate the importance and uniqueness of physics results from reactor scale facilities for both inductive and steady state reactor options. It is concluded that the physics learned in such investigations will be original physics not attainable with contemporary facilities. In principle, a reactor scale facility could have a good measure of flexibility to optimize the tokamak approach to magnetic fusion energy. (author)
Chiral battery, scaling laws and magnetic fields
Anand, Sampurn; Bhatt, Jitesh R.; Pandey, Arun Kumar, E-mail: sampurn@prl.res.in, E-mail: jeet@prl.res.in, E-mail: arunp@prl.res.in [Physical Research Laboratory, Ahmedabad, 380009 (India)]
2017-07-01
We study the generation and evolution of magnetic fields in the presence of a chiral imbalance and the gravitational anomaly, which gives an additional contribution to the vortical current. The contribution due to the gravitational anomaly is proportional to T², and can generate a seed magnetic field irrespective of whether the plasma is chirally charged or neutral. We estimate the order of magnitude of the magnetic field to be 10³⁰ G at T ∼ 10⁹ GeV, with a typical length scale of the order of 10⁻¹⁸ cm, which is much smaller than the Hubble radius at that temperature (10⁻⁸ cm). Moreover, such a system possesses scaling symmetry. We show that the T² term in the vortical current, along with scaling symmetry, leads to more power transfer from smaller to larger length scales than the chiral anomaly alone without scaling symmetry.
Small scale structure on cosmic strings
Albrecht, A.
1989-01-01
I discuss our current understanding of cosmic string evolution, focusing on the question of small-scale structure on strings, where most of the disagreements lie. I present a physical picture designed to put the role of the small-scale structure into more intuitive terms. In this picture one can see how the small-scale structure can feed back in a major way on the overall scaling solution. I also argue that it is easy for small-scale numerical errors to feed back in just such a way. The intuitive discussion presented here may form the basis for an analytic treatment of the small-scale structure, which I argue would in any case be extremely valuable in filling the gaps in our present understanding of cosmic string evolution. 24 refs., 8 figs
Modified dispersion relations, inflation, and scale invariance
Bianco, Stefano; Friedhoff, Victor Nicolai; Wilson-Ewing, Edward
2018-02-01
For a certain type of modified dispersion relations, the vacuum quantum state for very short wavelength cosmological perturbations is scale-invariant and it has been suggested that this may be the source of the scale-invariance observed in the temperature anisotropies in the cosmic microwave background. We point out that for this scenario to be possible, it is necessary to redshift these short wavelength modes to cosmological scales in such a way that the scale-invariance is not lost. This requires nontrivial background dynamics before the onset of standard radiation-dominated cosmology; we demonstrate that one possible solution is inflation with a sufficiently large Hubble rate, for this slow roll is not necessary. In addition, we also show that if the slow-roll condition is added to inflation with a large Hubble rate, then for any power law modified dispersion relation quantum vacuum fluctuations become nearly scale-invariant when they exit the Hubble radius.
Lario, J.; Bardaji, T.; Silva, P.G.; Zazo, C.; Goy, J.L.
2016-07-01
This paper discusses possibilities to improve the Environmental Seismic Intensity Scale (ESI-07 scale), a scale based on the effects of earthquakes in the environment. This scale comprises twelve intensity degrees and considers primary and secondary effects, among them the occurrence of tsunamis. Terminology and physical tsunami parameters corresponding to different intensity levels are often misleading and confusing. The present work proposes: i) a revised and updated catalogue of environmental and geological effects of tsunamis, gathering all the available information on Tsunami Environmental Effects (TEEs) produced by recent earthquake-tsunamis; ii) a specific intensity scale (TEE-16) for the effects of tsunamis in the natural environment at coastal areas. The proposed scale could be used in future tsunami events and in historic and paleo-tsunami studies. The new TEE-16 scale incorporates the size-specific parameters already considered in the ESI-07 scale, such as wave height, run-up and inland extension of inundation, and a comprehensive and more accurate terminology that covers all the different intensity levels identifiable in the geological record (intensities VI-XII). The TEE-16 scale integrates the description and quantification of the potential sedimentary and erosional features (beach scours, transported boulders and classical tsunamites) derived from different tsunami events at diverse coastal environments (e.g. beaches, estuaries, rocky cliffs). This new approach represents an innovative advance over the tsunami descriptions provided by the ESI-07 scale, and allows the full application of the proposed scale in paleoseismological studies. The analysis of the revised and updated tsunami environmental damage suggests that local intensities recorded in coastal areas do not correlate well with the TEE-16 intensity (normally higher), but show a good correlation with the earthquake magnitude (Mw). Tsunamis generated by earthquakes can then be
Scale interactions in a mixing layer – the role of the large-scale gradients
Fiscaletti, D.
2016-02-15
© 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low-pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.
Multi-scale approximation of Vlasov equation
Mouton, A.
2009-09-01
One of the most important difficulties of numerical simulation of magnetized plasmas is the existence of multiple time and space scales, which can be very different. In order to produce good simulations of these multi-scale phenomena, it is necessary to develop models and numerical methods adapted to these problems. Nowadays, the two-scale convergence theory introduced by G. Nguetseng and G. Allaire is one of the tools which can be used to rigorously derive multi-scale limits and to obtain new limit models which can be discretized with a usual numerical method: such a procedure is called a two-scale numerical method. The purpose of this thesis is to develop a two-scale semi-Lagrangian method and to apply it to a gyrokinetic Vlasov-like model in order to simulate a plasma subjected to a large external magnetic field. However, the physical phenomena we have to simulate are quite complex, and there are many open questions about the behaviour of a two-scale numerical method, especially when such a method is applied to a nonlinear model. In a first part, we develop a two-scale finite volume method and apply it to the weakly compressible 1D isentropic Euler equations. Even if this mathematical context is far from a Vlasov-like model, it is a relatively simple framework in which to study the behaviour of a two-scale numerical method applied to a nonlinear model. In a second part, we develop a two-scale semi-Lagrangian method for the two-scale model developed by E. Frenod, F. Salvarani and E. Sonnendrucker in order to simulate axisymmetric charged particle beams. Even if the studied physical phenomena are quite different from magnetic fusion experiments, the mathematical context of the one-dimensional paraxial Vlasov-Poisson model is very simple for establishing the basis of a two-scale semi-Lagrangian method. In a third part, we use the two-scale convergence theory in order to improve M. Bostan's weak-* convergence results about the finite
Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications
Liu, Yubao; Warner, Tom; Liu, Yuewei
2011-01-01
This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center f...
Psychometric properties of the Positive Mental Health Scale (PMH-scale)
Lukat, J.; Margraf, J.; Lutz, R.; Veld, W.M. van der; Becker, E.S.
2016-01-01
Background: In recent years, it has been increasingly recognized that the absence of mental disorder is not the same as the presence of positive mental health (PMH). With the PMH-scale we propose a short, unidimensional scale for the assessment of positive mental health. The scale consists of 9
Mokken scale analysis : Between the Guttman scale and parametric item response theory
van Schuur, Wijbrandt H.
2003-01-01
This article introduces a model of ordinal unidimensional measurement known as Mokken scale analysis. Mokken scaling is based on principles of Item Response Theory (IRT) that originated in the Guttman scale. I compare the Mokken model with both Classical Test Theory (reliability or factor analysis)
Brown, Elissa J.; And Others
1997-01-01
The psychometric adequacy of the Social Interaction Anxiety Scale and the Social Phobia Scale (both by R. P. Mattick and J. C. Clarke, 1989) was studied with 165 patients with anxiety disorders and 21 people without anxiety. Results support the usefulness of the scales for screening and treatment design and evaluation. (SLD)
Time Scale in Least Square Method
Özgür Yeniay
2014-01-01
The study of dynamic equations on time scales is a new area of mathematics. Time scale calculus builds a bridge between the real numbers and the integers. Two derivatives have been introduced on time scales, called the delta and nabla derivatives; the delta derivative is defined in the forward direction and the nabla derivative in the backward direction. Within the scope of this study, we consider obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the time-scale definitions of the derivative and obtained the coefficients of the model. Here, there exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. The discrepancy between them equals the total vertical deviation between the regression equations of the forward and backward jump operators and the observed values, divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We believe that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.
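The ordinary least-squares baseline that the abstract compares its time-scale estimators against can be sketched with the standard closed-form formulas; the sketch below covers only that classical baseline (the delta/nabla variants from the paper are not reproduced), and the data are illustrative, not the study's.

```python
# Closed-form simple linear regression (ordinary least squares).
# This is only the classical baseline the study compares its
# time-scale (delta/nabla) estimators against.

def ols_fit(x, y):
    """Return (slope, intercept) minimizing the sum of squared residuals."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    s_xx = sum((xi - x_bar) ** 2 for xi in x)
    slope = s_xy / s_xx
    return slope, y_bar - slope * x_bar

# Integer-valued regressor, as in the time-scale setting (T = Z).
x = [0, 1, 2, 3, 4]
y = [1, 3, 5, 7, 9]           # exactly y = 2x + 1
slope, intercept = ols_fit(x, y)
print(slope, intercept)       # 2.0 1.0
```

On a time scale the difference quotients replace this derivative-based minimization, which is why the forward and backward jump operators yield two distinct coefficient sets.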
Frost Multidimensional Perfectionism Scale: the portuguese version
Ana Paula Monteiro Amaral
2013-01-01
BACKGROUND: The Frost Multidimensional Perfectionism Scale is one of the most widely used measures of perfectionism worldwide. OBJECTIVE: To analyze the psychometric properties of the Portuguese version of the Frost Multidimensional Perfectionism Scale. METHODS: Two hundred and seventeen students (178 females) from two Portuguese universities filled in the scale, and a subgroup (n = 166) completed a retest after a four-week interval. RESULTS: The scale reliability was good (Cronbach alpha = .857). Corrected item-total correlations ranged from .019 to .548. A test-retest correlation of .765 suggested good temporal stability. A principal component analysis with Varimax rotation was performed and, based on the scree plot, two robust factorial structures were found (four and six factors). Principal component analysis using Monte Carlo PCA for parallel analysis confirmed the six-factor solution. Concurrent validity with the Hewitt and Flett MPS was high, as was discriminant validity with positive and negative affect (Profile of Mood States, POMS). DISCUSSION: The two factorial structures (four and six dimensions) of the Portuguese version of the Frost Multidimensional Perfectionism Scale replicate results obtained by different authors with different samples and cultures. This suggests the scale is a robust instrument for assessing perfectionism in clinical and research settings as well as in transcultural studies.
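The Cronbach alpha reported above is computed from item variances by a standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch with toy scores (not the study's data) follows.

```python
# Cronbach's alpha, the internal-consistency coefficient reported
# above (alpha = .857 for the Portuguese FMPS). The scores here are
# toy data (rows = respondents, columns = items), not the study's.

def variance(values):
    """Unbiased sample variance."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                    # number of items
    items = list(zip(*rows))            # column-wise item scores
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Perfectly correlated items give the maximum alpha of 1.0.
scores = [[1, 2], [2, 3], [3, 4], [4, 5]]
print(round(cronbach_alpha(scores), 3))   # 1.0
```

Real item sets are noisier, which is why values like .857 are considered good rather than suspiciously perfect.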
Rearden, Bradley T.; Jessee, Matthew Anderson
2016-01-01
The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.
Rearden, Bradley T.
2010-01-01
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides a 'plug-and-play' framework with nearly 80 computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE's graphical user interfaces assist with accurate system modeling and convenient access to desired results. SCALE 6.1, scheduled for release in the fall of 2010, provides improved reliability and introduces a number of enhanced features, some of which are briefly described here. SCALE 6.1 provides state-of-the-art capabilities for criticality safety, reactor physics, and radiation shielding in a robust yet user-friendly package. The new features and improved reliability of this latest release of SCALE are intended to improve safety and efficiency throughout the nuclear community.
Multi-scale biomedical systems: measurement challenges
Summers, R
2016-01-01
Multi-scale biomedical systems are those that represent interactions in materials, sensors, and systems from a holistic perspective. It is possible to view such multi-scale activity using measurement of spatial scale or time scale, though in this paper only the former is considered. The biomedical application paradigm comprises interactions that range from quantum biological phenomena at scales of 10^-12 for one individual to epidemiological studies of disease spread in populations that in a pandemic lead to measurement at a scale of 10^7. It is clear that there are measurement challenges at either end of this spatial scale, but those challenges that relate to the use of new technologies that deal with big data and health service delivery at the point of care are also considered. The measurement challenges lead to the use, in many cases, of model-based measurement and the adoption of virtual engineering. It is these measurement challenges that will be uncovered in this paper. (paper)
Mirror dark matter and large scale structure
Ignatiev, A.Yu.; Volkas, R.R.
2003-01-01
Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S′, with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S′ in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter
Multitude scaling laws in axisymmetric turbulent wake
Layek, G. C.; Sunita
2018-03-01
We establish theoretically multitude scaling laws of a self-similar (statistical) axisymmetric turbulent wake. In the infinite Reynolds number limit, the flow evolves as a general power law and a new exponential law of streamwise distance, consistent with the criterion of the equilibrium similarity hypothesis. We found power-law scalings for components of the homogeneous dissipation rate (ε) obeying the non-Richardson-Kolmogorov cascade as ε_u ∼ k_u^{3/2}/(l Re_l^m), ε_v ∼ k_v^{3/2}/l, k_v ∼ k_u/Re_l^{2m}, where k_u and k_v are kinetic-energy components related to the Reynolds stress, l is the local length scale, and Re_l is the Reynolds number. The Richardson-Kolmogorov cascade corresponds to m = 0. For m ≈ 1, the power law agrees with non-equilibrium scaling laws observed in recent experiments on the axisymmetric wake. By contrast, the exponential scaling law follows the above dissipation law, with different regions of existence, for power index m = 3. At finite Reynolds number with kinematic viscosity ν, the scalings obey the dissipation laws ε_u ∼ ν k_u/l² and ε_v ∼ ν k_v/l² with k_v ∼ k_u/Re_l^n. The value of n is preferably 0 or 2. Different possibilities of scaling laws and the symmetry-breaking process are discussed at length.
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we
Scaled CMOS Technology Reliability Users Guide
White, Mark
2010-01-01
The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability of commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope β = 1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm² for several scaled SDRAM generations is
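The FIT unit used for the normalized soft-error rates is failures per 10^9 device-hours; the conversion arithmetic can be sketched as below. All numbers are hypothetical illustrations, not the paper's measurements.

```python
# FIT (failures in time) = failures per 1e9 device-hours, here
# also normalized per gigabit as in the SDRAM retention-time study.
# All numbers are made-up illustrations of the arithmetic only.

def fit_rate(failures, devices, hours):
    """Raw failure rate in FIT for a stress-test population."""
    return failures / (devices * hours) * 1e9

def fit_per_gb(fit, density_gb):
    """Normalize a per-device FIT figure to FIT/Gb."""
    return fit / density_gb

fit = fit_rate(failures=3, devices=1000, hours=1000)   # 3 fails in 1e6 device-hours
print(fit)                    # 3000.0 FIT per device
print(fit_per_gb(fit, 0.5))   # 6000.0 FIT/Gb for a hypothetical 512 Mb part
```

In practice the raw rate would first be de-accelerated from stress to use conditions via the derived acceleration model before normalizing.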
Preferential flow from pore to landscape scales
Koestel, J. K.; Jarvis, N.; Larsbo, M.
2017-12-01
In this presentation, we give a brief personal overview of some recent progress in quantifying preferential flow in the vadose zone, based on our own work and that of other researchers. One key challenge is to bridge the gap between the scales at which preferential flow occurs (i.e. pore to Darcy scales) and the scales of interest for management (i.e. fields, catchments, regions). We present results of recent studies that exemplify the potential of 3-D non-invasive imaging techniques to visualize and quantify flow processes at the pore scale. These studies should lead to a better understanding of how the topology of macropore networks controls key state variables like matric potential and thus the strength of preferential flow under variable initial and boundary conditions. Extrapolation of this process knowledge to larger scales will remain difficult, since measurement technologies to quantify macropore networks at these larger scales are lacking. Recent work suggests that the application of key concepts from percolation theory could be useful in this context. Investigation of the larger Darcy-scale heterogeneities that generate preferential flow patterns at the soil profile, hillslope and field scales has been facilitated by hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help to parameterize models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).
New Empirical Earthquake Source‐Scaling Laws
Thingbaijam, Kiran Kumar S.
2017-12-13
We develop new empirical scaling laws for rupture width W, rupture length L, rupture area A, and average slip D, based on a large database of rupture models. The database incorporates recent earthquake source models in a wide magnitude range (M 5.4–9.2) and events of various faulting styles. We apply general orthogonal regression, instead of ordinary least-squares regression, to account for measurement errors of all variables and to obtain mutually self-consistent relationships. We observe that L grows more rapidly with M compared to W. The fault-aspect ratio (L/W) tends to increase with fault dip, which generally increases from reverse-faulting, to normal-faulting, to strike-slip events. At the same time, subduction-interface earthquakes have significantly higher W (hence a larger rupture area A) compared to other faulting regimes. For strike-slip events, the growth of W with M is strongly inhibited, whereas the scaling of L agrees with the L-model behavior (D correlated with L). However, at a regional scale for which seismogenic depth is essentially fixed, the scaling behavior corresponds to the W model (D not correlated with L). Self-similar scaling behavior in M–log A is observed to be consistent for all the cases, except for normal-faulting events. Interestingly, the ratio D/W (a proxy for average stress drop) tends to increase with M, except for shallow crustal reverse-faulting events, suggesting the possibility of scale-dependent stress drop. The observed variations in source-scaling properties for different faulting regimes can be interpreted in terms of geological and seismological factors. We find substantial differences between our new scaling relationships and those of previous studies. Therefore, our study provides critical updates on source-scaling relations needed in seismic–tsunami-hazard analysis and engineering applications.
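General orthogonal regression differs from ordinary least squares in that it treats both variables as error-prone and minimizes perpendicular rather than vertical distances. For the two-variable case with equal error variances the slope has a textbook closed form, sketched below with made-up data (not the authors' rupture-model database).

```python
import math

# Orthogonal (total least squares) regression for one predictor,
# assuming equal error variance in x and y. Because it minimizes
# perpendicular distances, x and y are treated symmetrically,
# which is why such fits are preferred to OLS when both
# variables carry measurement error.

def orthogonal_fit(x, y):
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    s_xx = sum((xi - xb) ** 2 for xi in x)
    s_yy = sum((yi - yb) ** 2 for yi in y)
    s_xy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    slope = (s_yy - s_xx + math.sqrt((s_yy - s_xx) ** 2 + 4 * s_xy ** 2)) / (2 * s_xy)
    return slope, yb - slope * xb

# Hypothetical pairs lying exactly on a line of slope 2.
x = [0.0, 1.0, 2.0]
y = [0.0, 2.0, 4.0]
print(orthogonal_fit(x, y))   # (2.0, 0.0)
```

Unlike OLS, swapping x and y here returns the reciprocal slope exactly, which is the "mutually self-consistent" property the abstract mentions.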
Large scale network-centric distributed systems
Sarbazi-Azad, Hamid
2014-01-01
A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu
Grizzly bear habitat selection is scale dependent.
Ciarniello, Lana M; Boyce, Mark S; Seip, Dale R; Heard, Douglas C
2007-07-01
The purpose of our study is to show how ecologists' interpretation of habitat selection by grizzly bears (Ursus arctos) is altered by the scale of observation and also how management questions would be best addressed using predetermined scales of analysis. Using resource selection functions (RSF) we examined how variation in the spatial extent of availability affected our interpretation of habitat selection by grizzly bears inhabiting mountain and plateau landscapes. We estimated separate models for females and males using three spatial extents: within the study area, within the home range, and within predetermined movement buffers. We employed two methods for evaluating the effects of scale on our RSF designs. First, we chose a priori six candidate models, estimated at each scale, and ranked them using the Akaike Information Criterion (AIC). Using this method, results changed among scales for males but not for females. For female bears, models that included the full suite of covariates predicted habitat use best at each scale. For male bears that resided in the mountains, models based on forest successional stages ranked highest at the study-wide and home range extents, whereas models containing covariates based on terrain features ranked highest at the buffer extent. For male bears on the plateau, each scale estimated a different highest-ranked model. Second, we examined differences among model coefficients across the three scales for one candidate model. We found that both the magnitude and direction of coefficients were dependent upon the scale examined; results varied between landscapes, scales, and sexes. Greenness, reflecting lush green vegetation, was a strong predictor of the presence of female bears in both landscapes and males that resided in the mountains. Male bears on the plateau were the only animals to select areas that exposed them to a high risk of mortality by humans. Our results show that grizzly bear habitat selection is scale dependent. Further, the
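Ranking candidate models with AIC, as done for the RSF candidate sets above, reduces to AIC = -2 ln L + 2k; for least-squares fits with Gaussian errors this becomes n·ln(RSS/n) + 2k up to a constant. The toy data and candidate models below are illustrative only (the study's RSFs are logistic-type models, not linear fits).

```python
import math

# AIC ranking of two nested least-squares candidate models, mimicking
# the candidate-model comparison described above. For Gaussian errors,
# AIC = n*ln(RSS/n) + 2k up to an additive constant, where k counts
# the fitted parameters (including the error variance).

def aic_ls(rss, n, k):
    return n * math.log(rss / n) + 2 * k

x = [0.0, 1.0, 2.0, 3.0]
y = [1.1, 2.9, 5.2, 6.8]                    # roughly y = 2x + 1

# Candidate 1: intercept-only model.
ybar = sum(y) / len(y)
rss1 = sum((yi - ybar) ** 2 for yi in y)

# Candidate 2: straight line fitted by OLS.
xbar = sum(x) / len(x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
rss2 = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

ranked = sorted([("mean-only", aic_ls(rss1, 4, 2)),
                 ("linear", aic_ls(rss2, 4, 3))], key=lambda t: t[1])
print(ranked[0][0])           # the linear model ranks first
```

The lowest-AIC model wins; the extra parameter of the linear model is justified here because it reduces the residual sum of squares by far more than the 2k penalty.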
Scaling properties of the transverse mass spectra
Schaffner-Bielich, J.
2002-01-01
Motivated by the formation of an initial state of gluon-saturated matter, we discuss scaling relations for the transverse mass spectra at BNL's Relativistic Heavy-Ion Collider (RHIC). We show on linear plots that the transverse mass spectra for various hadrons can be described by a universal function in m_t. The transverse mass spectra for different centralities can be rescaled into each other. Finally, we demonstrate that m_t-scaling is also present in proton-antiproton collider data and compare it to m_t-scaling at RHIC. (orig.)
Store Image: Scale implementation Part 3
Ronel du Preez
2008-10-01
This paper is the final in the three-part series regarding store image. The purposes of this article are to (1) implement the developed scale to assess whether it exhibits acceptable psychometric properties of reliability and validity, (2) assess the model fit of the developed scale, and (3) formulate recommendations for future research. Results indicated that the Apparel Store Image Scale (ASIS) shows acceptable reliability and model fit. A refined definition of store image was proposed, together with a Final Model of Apparel Store Image. Recommendations for future research are made.
Corroded scale analysis from water distribution pipes
Rajaković-Ognjanović Vladana N.
2011-01-01
The subject of this study was the steel pipes that are part of Belgrade's drinking water supply network. In order to investigate the mutual effects of corrosion and water quality, the corrosion scales on the pipes were analyzed. The idea was to improve control of corrosion processes and prevent the impact of corrosion on water quality degradation. The instrumental methods used for corrosion scale characterization were: scanning electron microscopy (SEM), for the investigation of the surfaces of the corrosion scales of the analyzed samples; X-ray diffraction (XRD), for the analysis of the solid forms present inside the scales; scanning electron microscopy (SEM), for the microstructural analysis of the corroded scales; and the BET adsorption isotherm, for surface area determination. Depending on the composition of the water next to the pipe surface, corrosion of iron results in the formation of different compounds and solid phases. The composition and structure of the iron scales in drinking water distribution pipes depend on the type of the metal and the composition of the aqueous phase. Their formation is probably governed by several factors that include water quality parameters such as pH, alkalinity, buffer intensity, natural organic matter (NOM) concentration, and dissolved oxygen (DO) concentration. Factors such as water flow patterns, seasonal fluctuations in temperature, and microbiological activity, as well as water treatment practices such as the application of corrosion inhibitors, can also influence corrosion scale formation and growth. Therefore, the corrosion scales found in iron and steel pipes are expected to have unique features for each site. Compounds that are found in iron corrosion scales often include goethite, lepidocrocite, magnetite, hematite, ferrous oxide, siderite, ferrous hydroxide, ferric hydroxide, ferrihydrite, calcium carbonate and green rusts. Iron scales have characteristic features that include: corroded floor, porous core that contains
A Fast Mellin and Scale Transform
Davide Rocchesso
2007-01-01
A fast algorithm for the discrete-scale (and β-Mellin) transform is proposed. It performs a discrete-time, discrete-scale approximation of the continuous-time transform, with subquadratic asymptotic complexity. The algorithm is based on a well-known relation between the Mellin and Fourier transforms, and it is practical and accurate. The paper gives some theoretical background on the Mellin, β-Mellin, and scale transforms. Then the algorithm is presented and analyzed in terms of computational complexity and precision. The effects of different interpolation procedures used in the algorithm are discussed.
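The Mellin-Fourier relation the algorithm exploits comes from the substitution x = e^t, which turns the Mellin integral into a Fourier-type integral along t. A direct numerical check of that substitution (not the paper's fast algorithm) is sketched below; the truncation limits and step count are arbitrary choices.

```python
import math

# Mellin transform M(s) = integral_0^inf f(x) x^(s-1) dx.
# Substituting x = e^t gives integral_-inf^inf f(e^t) e^(s t) dt,
# the change of variable underlying the Mellin-Fourier relation
# used by the fast algorithm. Here we simply evaluate that
# integral on a truncated t-grid by the trapezoidal rule.

def mellin(f, s, t_min=-30.0, t_max=8.0, n=40000):
    dt = (t_max - t_min) / n
    total = 0.0
    for i in range(n + 1):
        t = t_min + i * dt
        w = 0.5 if i in (0, n) else 1.0     # trapezoidal end weights
        total += w * f(math.exp(t)) * math.exp(s * t)
    return total * dt

# Sanity check: for f(x) = e^(-x), M(s) is the Gamma function,
# so M(2) should come out very close to Gamma(2) = 1.
value = mellin(lambda x: math.exp(-x), 2.0)
print(round(value, 4))        # ≈ 1.0
```

A fast implementation replaces this quadrature with log-domain resampling followed by an FFT, which is where the subquadratic complexity comes from.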
Factorial correlators: angular scaling within QCD jets
Peschanski, R.
2001-01-01
Factorial correlators measure the amount of dynamical correlation in the multiplicity between two separated phase-space windows. We present the analytical derivation of factorial correlators for a QCD jet described at double logarithmic (DL) accuracy. We obtain a new angular scaling property for properly normalized correlators between two solid-angle cells or two rings around the jet axis. Normalized QCD factorial correlators scale with the angular distance and are independent of the window size. Scaling violations are expected beyond the DL approximation, in particular from the subjet structure. Experimental tests are feasible and would thus be welcome. (orig.)
Scaling properties of Polish rain series
Licznar, P.
2009-04-01
Scaling properties and the multifractal nature of precipitation time series had not been studied for local Polish conditions until recently, owing to the lack of long series of high-resolution data. The first Polish study of scaling phenomena in precipitation time series was made on the basis of pluviograph data from the Wroclaw University of Environmental and Life Sciences meteorological station, located in the south-western part of the country. The 38 annual rainfall records from the years 1962-2004 were converted into digital format and transformed into a standard format of 5-minute time series. The scaling properties and multifractal character of this material were studied by means of several different techniques: power spectral density analysis, functional box-counting, probability distribution/multiple scaling, and trace moment methods. The results demonstrated the general scaling character of the time series over time scales ranging from 5 minutes up to at least 24 hours. At the same time, some characteristic breaks in the scaling behaviour were recognized. It is believed that these breaks were artificial, arising from the limited measuring precision of the pluviograph rain gauge. In particular, strong limitations in the recording of low-intensity precipitation by the pluviograph rain gauge were found to be the main reason for the artificial break in the energy spectra, as reported by other authors before. The analysis of the codimension and moment scaling functions showed signs of a first-order multifractal phase transition. Such behaviour is typical of dressed multifractal processes that are observed by spatial or temporal averaging on scales larger than the inner scale of those processes. The fractal dimension of the rainfall process support, derived from analysis of the geometry of the codimension and moment scaling functions, was found to be 0.45. The same fractal dimension estimated by means of the functional box-counting method was equal to 0.58. At the final part of the study
Economical scale of nuclear energy application
2001-01-01
The nuclear energy industry is supported by two wheels: radiation applications and energy applications. The two differ in several respects, such as the numbers of employees and researchers, the number and scale of operations, their effects on society, the regions affected by their industrial activities, safety problems, views on nuclear non-proliferation and safeguards, energy security, their relationship to environmental problems, and efforts on waste disposal. This report describes the economic scale of radiation applications in the fields of industry, agriculture, and medicine and medical treatment, and the economic scale of energy applications in nuclear power generation and its instruments and apparatus. (G.K.)
A New Class of Scaling Correction Methods
Mei Li-Jie; Wu Xin; Liu Fu-Yao
2012-01-01
When conventional integrators such as Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool for avoiding this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods in which scale factors act only on the related components of the integrated momenta. They can exactly preserve some first integrals of motion in discrete or continuous dynamical systems, so that the rapid growth of roundoff or truncation errors is suppressed significantly. (general)
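A minimal single-factor scaling correction (not the authors' component-wise velocity variant) can be illustrated on a harmonic oscillator: after each raw Euler step, the state is rescaled so the energy integral is restored exactly. Step size, step count, and initial state below are arbitrary.

```python
import math

# Scaling correction on a 1D harmonic oscillator, H = (v**2 + x**2)/2.
# Explicit Euler alone inflates the energy every step; rescaling the
# state by s = sqrt(E0/E) after each step pins the first integral
# back to its initial value. This is a single-factor sketch of the
# idea; the paper's methods scale only selected momentum components.

def energy(x, v):
    return 0.5 * (v * v + x * x)

def euler_step(x, v, h):
    return x + h * v, v - h * x

x, v, h = 1.0, 0.0, 0.01
e0 = energy(x, v)
for _ in range(10000):
    x, v = euler_step(x, v, h)
    s = math.sqrt(e0 / energy(x, v))   # correction factor
    x, v = s * x, s * v
print(abs(energy(x, v) - e0))          # stays at roundoff level
```

Without the correction, Euler multiplies the energy by (1 + h²) each step, so over 10,000 steps it would grow by roughly a factor of e; the rescaling suppresses that secular drift entirely.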
The scale of biomass production in Japan
Matsumura, Yukihiko [School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashihiroshima-shi 739-8527 (Japan); Inoue, Takashi; Fukuda, Katsura [Global Warming Research Department, Mitsubishi Research Institute, Inc., 2-3-6 Ohtemachi, Chiyoda-ku, Tokyo 100-8141 (Japan); Komoto, Keiichi; Hada, Kenichiro [Renewable energy Team, Environment, Natural Resources and Energy Division, Mizuho Information and Research Institute, Inc., 2-3 Kanda-nishikicho, Chiyoda-ku, Tokyo 101-8443 (Japan); Hirata, Satoshi [Technical Institute, Kawasaki Heavy Industries, Ltd., 1-1 Kawasakicho, Akashi-shi, Hyogo 673-8666 (Japan); Minowa, Tomoaki [Biomass Recycle Research Laboratory, National Institute of Advanced and Industrial Science and Technology, 2-2-2 Hiro, Suehiro, Kure-shi, Hiroshima 737-0197 (Japan); Yamamoto, Hiromi [Socioeconomic Research Center, Central Research Institute of Electric Power Industry, 1-6-1 Ohtemachi, Chiyoda-ku, Tokyo 100-8126 (Japan)
2005-11-01
Policymakers working to introduce and promote the use of bioenergy in Japan require detailed information on the scales of the different types of biomass resources generated. In this research, the first of its type in Japan, the investigators reviewed various statistical resources to quantify the scale distribution of forest residues, waste wood from manufacturing, waste wood from construction, cattle manure, sewage sludge, night soil, household garbage, and waste food oil. As a result, the scale of biomass generation in Japan was found to be relatively small: on average, no more than several tons in dry weight per day. (author)
JY1 time scale: a new Kalman-filter time scale designed at NIST
Yao, Jian; Parker, Thomas E; Levine, Judah
2017-01-01
We report on a new Kalman-filter hydrogen-maser time scale (the JY1 time scale) designed at the National Institute of Standards and Technology (NIST). The JY1 time scale is composed of a few hydrogen masers and a commercial Cs clock. The Cs clock is used as a reference clock to ease operations with existing data. Unlike other time scales, the JY1 time scale uses three basic time-scale equations instead of only one. Also, this time scale can detect a clock error (i.e., a time error, frequency error, or frequency drift error) automatically. These features make the JY1 time scale stiff and less likely to be affected by an abnormal clock. Tests show that the JY1 time scale deviates from UTC by less than ±5 ns for ∼100 d, when the time scale is initially aligned to UTC and then is completely free running. Once the time scale is steered to a Cs fountain, it can maintain the time with little error even if the Cs fountain stops working for tens of days. This can be helpful when we do not have a continuously operated fountain, when the continuously operated fountain accidentally stops, or when optical clocks run only occasionally. (paper)
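The general principle behind any weighted-ensemble time scale can be sketched in a few lines (a toy model; this is not NIST's JY1 algorithm or its three time-scale equations, and the clock noise levels are hypothetical): member clocks are combined with inverse-variance weights, so the ensemble is more stable than its best single member.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_clocks = 1000, 4
# Simulated daily time-error steps of four clocks with different
# (hypothetical) white-noise levels, in ns per day^0.5.
sigmas = np.array([1.0, 2.0, 3.0, 4.0])
steps = rng.normal(0.0, sigmas, size=(n_days, n_clocks))

# Basic weighted-ensemble principle: inverse-variance weights, normalized.
w = (1.0 / sigmas**2) / np.sum(1.0 / sigmas**2)

# Variance of the ensemble's daily steps vs. the best single clock.
ens_step_var = np.var(steps @ w)     # theory: 1 / sum(1/sigma_i^2) ~= 0.70
best_step_var = np.var(steps[:, 0])  # theory: 1.0
```

The ensemble step variance comes out below that of the best clock, which is why combining even mediocre clocks improves a time scale; a Kalman-filter time scale refines this by also estimating each clock's frequency and drift states.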
Refining and validating the Social Interaction Anxiety Scale and the Social Phobia Scale.
Carleton, R Nicholas; Collimore, Kelsey C; Asmundson, Gordon J G; McCabe, Randi E; Rowa, Karen; Antony, Martin M
2009-01-01
The Social Interaction Anxiety Scale and Social Phobia Scale are companion measures for assessing symptoms of social anxiety and social phobia. The scales have good reliability and validity across several samples; however, exploratory and confirmatory factor analyses have yielded solutions comprising substantially different item content and factor structures. These discrepancies are likely the result of analyzing items from each scale separately or simultaneously. The current investigation sets out to assess items from those scales, both simultaneously and separately, using exploratory and confirmatory factor analyses in an effort to resolve the factor structure. Participants consisted of a clinical sample (n = 353; 54% women) and an undergraduate sample (n = 317; 75% women) who completed the Social Interaction Anxiety Scale and Social Phobia Scale, along with additional fear-related measures to assess convergent and discriminant validity. A three-factor solution with a reduced set of items was found to be most stable, irrespective of whether the items from each scale are assessed together or separately. Items from the Social Interaction Anxiety Scale represented one factor, whereas items from the Social Phobia Scale represented two other factors. Initial support for scale and factor validity, along with implications and recommendations for future research, is provided. (c) 2009 Wiley-Liss, Inc.
Large-scale numerical simulations of plasmas
Hamaguchi, Satoshi
2004-01-01
The recent trend of large-scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)
A pedagogical look at Jeans' density scale
Chu, K-H W
2007-01-01
We illustrate the derivations of Jeans' criteria for the gravitational instabilities in a static homogeneous Newtonian system for pedagogical objectives. The critical Jeans density surface is presented in terms of dimensionless sound speeds and (characteristic) length scales.
Small scale structure formation in chameleon cosmology
Brax, Ph.; Bruck, C. van de; Davis, A.C.; Green, A.M.
2006-01-01
Chameleon fields are scalar fields whose mass depends on the ambient matter density. We investigate the effects of these fields on the growth of density perturbations on sub-galactic scales and the formation of the first dark matter halos. Density perturbations on comoving scales R<1 pc go non-linear and collapse to form structure much earlier than in standard ΛCDM cosmology. The resulting mini-halos are hence more dense and resilient to disruption. We therefore expect (provided that the density perturbations on these scales have not been erased by damping processes) that the dark matter distribution on small scales would be more clumpy in chameleon cosmology than in the ΛCDM model
Geometrical scaling of jet fragmentation photons
Hattori, Koichi, E-mail: koichi.hattori@riken.jp [RIKEN BNL Research Center, Brookhaven National Laboratory, Upton NY 11973 (United States); Theoretical Research Division, Nishina Center, RIKEN, Wako, Saitama 351-0198 (Japan); McLerran, Larry, E-mail: mclerran@bnl.gov [RIKEN BNL Research Center, Brookhaven National Laboratory, Upton NY 11973 (United States); Physics Dept., Bdg. 510A, Brookhaven National Laboratory, Upton, NY-11973 (United States); Physics Dept., China Central Normal University, Wuhan (China); Schenke, Björn, E-mail: bschenke@bnl.gov [Physics Dept., Bdg. 510A, Brookhaven National Laboratory, Upton, NY-11973 (United States)
2016-12-15
We discuss jet fragmentation photons in ultrarelativistic heavy-ion collisions. We argue that, if the jet distribution satisfies geometrical scaling and an anisotropic spectrum, these properties are transferred to photons during the jet fragmentation.
A biological rationale for musical scales.
Gill, Kamraan Z; Purves, Dale
2009-12-03
Scales are collections of tones that divide octaves into specific intervals used to create music. Since humans can distinguish about 240 different pitches over an octave in the mid-range of hearing, in principle a very large number of tone combinations could have been used for this purpose. Nonetheless, compositions in Western classical, folk and popular music as well as in many other musical traditions are based on a relatively small number of scales that typically comprise only five to seven tones. Why humans employ only a few of the enormous number of possible tone combinations to create music is not known. Here we show that the component intervals of the most widely used scales throughout history and across cultures are those with the greatest overall spectral similarity to a harmonic series. These findings suggest that humans prefer tone combinations that reflect the spectral characteristics of conspecific vocalizations. The analysis also highlights the spectral similarity among the scales used by different cultures.
Hardy type inequalities on time scales
Agarwal, Ravi P; Saker, Samir H
2016-01-01
The book is devoted to dynamic inequalities of Hardy type and extensions and generalizations via convexity on a time scale T. In particular, the book contains the time scale versions of classical Hardy type inequalities, Hardy and Littlewood type inequalities, Hardy-Knopp type inequalities via convexity, Copson type inequalities, Copson-Beesack type inequalities, Leindler type inequalities, Levinson type inequalities and Pachpatte type inequalities, Bennett type inequalities, Chan type inequalities, and Hardy type inequalities with two different weight functions. These dynamic inequalities contain the classical continuous and discrete inequalities as special cases when T = R and T = N and can be extended to different types of inequalities on different time scales such as T = hN, h > 0, T = qN for q > 1, etc. In this book the authors followed the history and development of these inequalities. Each section is self-contained and one can see the relationship between the time scale versions of the inequalities and...
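For orientation, the classical continuous Hardy inequality that these dynamic versions reduce to in the case T = R (a standard textbook statement, not quoted from the book) is:

```latex
\int_0^\infty \left( \frac{1}{x}\int_0^x f(t)\,dt \right)^{p} dx
  \;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\,dx,
  \qquad p > 1,\quad f \ge 0,
```

where the constant (p/(p-1))^p is best possible; the case T = N recovers Hardy's original series inequality.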
Scaling for Dynamical Systems in Biology.
Ledder, Glenn
2017-11-01
Asymptotic methods can greatly simplify the analysis of all but the simplest mathematical models and should therefore be commonplace in such biological areas as ecology and epidemiology. One essential difficulty that limits their use is that they can only be applied to a suitably scaled dimensionless version of the original dimensional model. Many books discuss nondimensionalization, but with little attention given to the problem of choosing the right scales and dimensionless parameters. In this paper, we illustrate the value of using asymptotics on a properly scaled dimensionless model, develop a set of guidelines that can be used to make good scaling choices, and offer advice for teaching these topics in differential equations or mathematical biology courses.
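A concrete instance of the kind of scaling choice discussed above (the logistic equation is a standard textbook example, not necessarily one from the paper): choosing the carrying capacity K as the population scale and 1/r as the time scale absorbs both parameters and leaves a parameter-free dimensionless model.

```latex
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right),
\qquad N = K\,u,\quad \tau = r\,t
\;\;\Longrightarrow\;\;
\frac{du}{d\tau} = u\,(1-u).
```

Less obvious models have several candidate scales, and asymptotic analysis only works cleanly when the chosen scales make each dimensionless term order one in the regime of interest.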
Superconducting materials for large scale applications
Dew-Hughes, D.
1975-01-01
Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)
LABORATORY SCALE STEAM INJECTION TREATABILITY STUDIES
Laboratory scale steam injection treatability studies were first developed at The University of California-Berkeley. A comparable testing facility has been developed at USEPA's Robert S. Kerr Environmental Research Center. Experience has already shown that many volatile organic...
Development of The Harmony Restoration Measurement Scale ...
Development of The Harmony Restoration Measurement Scale (Cosmogram) Part 1. ... is one, who is in harmony or at peace with his world of relationships (Cosmos).
Large-scale computing with Quantum Espresso
Giannozzi, P.; Cavazzoni, C.
2009-01-01
This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.
MMPI screening scales for somatization disorder.
Wetzel, R D; Brim, J; Guze, S B; Cloninger, C R; Martin, R L; Clayton, P J
1999-08-01
44 items on the MMPI were identified which appear to correspond to some of the symptoms in nine of the 10 groups on the Perley-Guze checklist for somatization disorder (hysteria). This list was organized into two scales, one reflecting the total number of symptoms endorsed and the other the number of organ systems with at least one endorsed symptom. Full MMPIs were then obtained from 29 women with primary affective disorder and 37 women with somatization disorder as part of a follow-up study of a consecutive series of 500 psychiatric clinic patients seen at Washington University. Women with the diagnosis of somatization disorder scored significantly higher on the somatization disorder scales created from the 44 items than did women with only major depression. These new scales appeared to be slightly more effective in identifying somatization disorder than the use of the standard MMPI scales for hypochondriasis and hysteria. Further development is needed.
Successful adaptation to climate change across scales
Adger, W.N.; Arnell, N.W.; University of Southampton; Tompkins, E.L.; University of East Anglia, Norwich; University of Southampton
2005-01-01
Climate change impacts and responses are presently observed in physical and ecological systems. Adaptation to these impacts is increasingly being observed in both physical and ecological systems as well as in human adjustments to resource availability and risk at different spatial and societal scales. We review the nature of adaptation and the implications of different spatial scales for these processes. We outline a set of normative evaluative criteria for judging the success of adaptations at different scales. We argue that elements of effectiveness, efficiency, equity and legitimacy are important in judging success in terms of the sustainability of development pathways into an uncertain future. We further argue that each of these elements of decision-making is implicit within presently formulated scenarios of socio-economic futures of both emission trajectories and adaptation, though with different weighting. The process by which adaptations are to be judged at different scales will involve new and challenging institutional processes. (author)
SENSITIVITY EVALUATION OF THE ELECTRONIC CAR SCALE
Tomasz KĄDZIOŁKA
2014-06-01
Full Text Available During everyday activities related to work or household duties we encounter a lot of equipment without realizing how complex its construction is. This can be different types of equipment that make everyday work easier, such as food processors, dishwashers, ecological furnaces and gas stoves; mechanisms intended for protection against rain, such as personal umbrellas; devices ensuring higher safety, such as ABS in a car; or equipment used for measuring different quantities, such as thermometers, barometers and, finally, scales. Due to the tasks they perform, scales should be classified as measuring equipment, which in metrology is used, for example, to determine linear and angular dimensions or deviations from nominal dimensions. In this article we present the results of electronic scale sensitivity tests. For comparison of the test results, two electronic scales with different measuring ranges were analyzed.
H2@Scale Resource and Market Analysis
Ruth, Mark
2017-05-04
The 'H2@Scale' concept is based on the potential for wide-scale utilization of hydrogen as an energy intermediate, where the hydrogen is produced from low-cost energy resources and used in both the transportation and industrial sectors. H2@Scale has the potential to address grid resiliency, energy security, and cross-sectoral emissions reductions. This presentation summarizes the status of an ongoing analysis effort to quantify the benefits of H2@Scale. It includes initial results regarding market potential, resource potential, and the impacts when electrolytic hydrogen is produced with renewable electricity to meet the potential market demands. It also proposes additional analysis efforts to better quantify each of these factors.
Biomass for energy - small scale technologies
Salvesen, F.; Joergensen, P.F. [KanEnergi, Rud (Norway)
1997-12-31
The bioenergy markets and potential in EU region, the different types of biofuels, the energy technology, and the relevant applications of these for small-scale energy production are reviewed in this presentation
Scaling Research Results: Design and Evaluation | IDRC ...
Design and evaluation: The project will provide helpful guidance to IDRC management and ... scaling and programming for scalable research. Offer the monograph in multiple forms, ...
Comments on intermediate-scale models
Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.
1987-04-23
Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory.
Useful scaling parameters for the pulse tube
Lee, J.M.; Kittel, P.; Timmerhaus, K.D.
1996-01-01
A set of dimensionless scaling parameters for use in correlating performance data for pulse tube refrigerators is presented. The dimensionless groups result from scaling the mass and energy conservation equations and the equation of motion for an axisymmetric, two-dimensional ideal-gas system. Viscous effects and conduction heat transfer between the gas and the tube wall are allowed for. The scaling procedure reduces the original 23 dimensional variables to a set of 11 dimensionless scaling groups. Dimensional analysis is used to verify that the 11 dimensionless groups obtained are the minimum number needed to describe the system. The authors also examine 6 limiting cases which progressively reduce the number of dimensionless groups from 11 to 3. The physical interpretations of the parameters are described, and their usefulness for understanding how heat transfer and mass streaming affect ideal enthalpy flow is outlined.
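The dimensional-analysis step used to verify a minimal set of groups can be illustrated with a generic Buckingham-pi computation (using textbook pipe-flow variables, not the 23 pulse-tube variables, which the abstract does not list): the number of independent dimensionless groups equals the number of variables minus the rank of the dimension matrix, and the groups' exponents span its null space.

```python
import numpy as np
from numpy.linalg import matrix_rank, svd

# Columns: rho, U, D, mu; rows: exponents of M, L, T in each variable.
dim = np.array([
    [1,  0, 0,  1],   # mass
    [-3, 1, 1, -1],   # length
    [0, -1, 0, -1],   # time
])

n_vars = dim.shape[1]
rank = matrix_rank(dim)
n_groups = n_vars - rank            # Buckingham pi: 4 - 3 = 1 group

# Null space of the dimension matrix gives the exponents of the group.
_, s, vt = svd(dim)
null = vt[rank:].T                  # basis of the null space, shape (4, 1)
pi = null[:, 0] / null[0, 0]        # normalize so rho has exponent 1
# pi is proportional to [1, 1, 1, -1]: rho*U*D/mu, the Reynolds number
```

The same bookkeeping, applied to a 23-variable list with a rank-deficient dimension matrix and additional physical constraints, is how one checks that a proposed set of groups is both complete and minimal.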
RFQ scaling-law implications and examples
Wadlinger, E.A.
1986-01-01
We demonstrate the utility of the RFQ scaling laws that have been previously derived. These laws are relations between accelerator parameters (electric field, frequency, etc.) and beam parameters (current, energy, emittance, etc.) that act as guides for designing radio-frequency quadrupoles (RFQs) by showing the various tradeoffs involved in RFQ designs. These scaling laws give a unique family of curves, at any given synchronous particle phase, that relates the beam current, emittance, particle mass, and space-charge tune depression to the RFQ frequency and maximum vane-tip electric field, assuming equipartitioning and equal longitudinal and transverse tune depressions. These scaling curves are valid at any point in any given RFQ where there is a bunched and equipartitioned beam. We show several examples for designing RFQs, examine the performance characteristics of an existing device, and study various RFQ performance limitations required by the scaling laws.
Validating the Rett Syndrome Gross Motor Scale
Downs, Jenny; Stahlhut, Michelle; Wong, Kingsley
2016-01-01
Rett syndrome is a pervasive neurodevelopmental disorder associated with a pathogenic mutation on the MECP2 gene. Impaired movement is a fundamental component, and the Rett Syndrome Gross Motor Scale was developed to measure gross motor abilities in this population. The current study investigated the validity and reliability of the Rett Syndrome Gross Motor Scale. Video data showing gross motor abilities, supplemented with parent report data, was collected for 255 girls and women registered with the Australian Rett Syndrome Database, and the factor structure and relationships between motor scores, age ... (0.93-0.98). The standard error of measurement for the total score was 2 points, and we would be 95% confident that a change of 4 points on the 45-point scale would be greater than within-subject measurement error. The Rett Syndrome Gross Motor Scale could be an appropriate measure of gross motor skills in clinical practice.
Steffensen's Integral Inequality on Time Scales
Ozkan Umut Mutlu
2007-01-01
Full Text Available We establish generalizations of Steffensen's integral inequality on time scales via the diamond-α dynamic integral, which is defined as a linear combination of the delta and nabla integrals.
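For reference, the classical continuous Steffensen inequality that the time-scale versions generalize (the case T = R; a standard statement, not quoted from the paper): if f is nonincreasing and 0 ≤ g ≤ 1 on [a, b], and λ = ∫_a^b g(t) dt, then

```latex
\int_{b-\lambda}^{b} f(t)\,dt
  \;\le\; \int_{a}^{b} f(t)\,g(t)\,dt
  \;\le\; \int_{a}^{a+\lambda} f(t)\,dt .
```

Intuitively, the weighted integral of f is bracketed by concentrating the total weight λ at the right or left end of the interval, where the nonincreasing f is smallest or largest.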
Scaling up Effects in the Organic Laboratory
Persson, Anna; Lindstrom, Ulf M.
2004-01-01
A simple and effective way of exposing chemistry students to some of the effects of scaling up an organic reaction is described. It gives the students an experience of issues they may encounter in an industrial setting.
TMD Evolution at Moderate Hard Scales
Rogers, Ted [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Old Dominion Univ., Norfolk, VA (United States); Collins, John C. [Pennsylvania State Univ., University Park, PA (United States)
2016-01-01
We summarize some of our recent work on non-perturbative transverse momentum dependent (TMD) evolution, emphasizing aspects that are necessary for dealing with moderately low scale processes like semi-inclusive deep inelastic scattering.
Challenging the assumptions for thermal sensation scales
Schweiker, Marcel; Fuchs, Xaver; Becker, Susanne
2016-01-01
Scales are widely used to assess the personal experience of thermal conditions in built environments. Most commonly, thermal sensation is assessed, mainly to determine whether a particular thermal condition is comfortable for individuals. A seven-point thermal sensation scale has been used extensively, which is suitable for describing a one-dimensional relationship between physical parameters of indoor environments and subjective thermal sensation. However, human thermal comfort is not merely a physiological but also a psychological phenomenon. Thus, it should be investigated how scales for its assessment could benefit from a multidimensional conceptualization. The common assumptions related to the usage of thermal sensation scales are challenged, empirically supported by two analyses. These analyses show that the relationship between temperature and subjective thermal sensation is non...
Contact engineering for nano-scale CMOS
Hussain, Muhammad Mustafa; Fahad, Hossain M.; Qaisi, Ramy M.
2012-01-01
... One of the critical requirements of transistor structure and fabrication is efficient contact engineering. To catch up with high-performance information processing, transistors are going through a continuous scaling process. However, it also imposes new ...
Scaling of load in communications networks.
Narayan, Onuttom; Saniee, Iraj
2010-09-01
We show that the load at each node in a preferential attachment network scales as a power of the degree of the node. For a network whose degree distribution is p(k) ∼ k^{-γ}, we show that the load is l(k) ∼ k^{η} with η = γ - 1, implying that the probability distribution for the load is p(l) ∼ 1/l^{2}, independent of γ. The results are obtained through scaling arguments supported by finite-size scaling studies. They contradict earlier claims, but are in agreement with the exact solution for the special case of tree graphs. Results are also presented for real communications networks at the IP layer, using the latest available data. Our analysis of the data shows relatively poor power-law degree distributions as compared to the scaling of the load versus degree. This emphasizes the importance of the load in network analysis.
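The change of variables behind the universal load exponent can be checked numerically (an illustrative sketch; the choice of gamma, sample size, and the Hill estimator are our assumptions, not the paper's method): if p(k) ∼ k^(-γ) and the load is l ∼ k^(γ-1), then p(l) ∼ l^(-2) for any γ.

```python
import numpy as np

rng = np.random.default_rng(42)
gamma = 3.0        # degree exponent: p(k) ~ k^(-gamma)
k_min = 1.0
n = 100_000

# Inverse-CDF sampling of a continuous power-law degree k >= k_min.
u = rng.random(n)
k = k_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))

# Claimed scaling: load l ~ k^(gamma-1)  =>  p(l) ~ l^(-2), independent of gamma.
load = k ** (gamma - 1.0)

# Hill estimator of the tail exponent alpha in p(l) ~ l^(-alpha).
alpha = 1.0 + n / np.sum(np.log(load / load.min()))
```

Repeating the experiment with a different gamma leaves the fitted load exponent at 2, which is the point of the universality claim.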
Chemical Transfer (Single Small-Scale) Facility
Federal Laboratory Consortium — Description/History: Chemistry laboratory. The Chemical Transfer Facility (CTF) is the only U.S. single small-scale facility, a single repository for the Army's...
Large-scale regions of antimatter
Grobov, A. V.; Rubin, S. G.
2015-01-01
A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.
Attia, S.; Paterson, S. R.; Jiang, D.; Miller, R. B.
2017-12-01
Structural studies of orogenic deformation fields are mostly based on small-scale structures ubiquitous in field exposures, hand samples, and under microscopes. Relating deformation histories derived from such structures to changing lithospheric-scale deformation and boundary conditions is not trivial due to the vast scale separation (10^-6 to 10^7 m) between characteristic lengths of small-scale structures and lithospheric plates. Rheological heterogeneity over the range of orogenic scales will lead to deformation partitioning throughout intervening scales of structural development. Spectacular examples of structures documenting deformation partitioning are widespread within hot (i.e., magma-rich) orogens such as the well-studied central Sierra Nevada and Cascades core of western North America: (1) deformation partitioned into localized, narrow, triclinic shear zones separated by broad domains of distributed pure shear at micro- to 10 km scales; (2) deformation partitioned between plutons and surrounding metamorphic host rocks as shown by pluton-wide magmatic fabrics consistently oriented differently than coeval host-rock fabrics; (3) partitioning recorded by different fabric intensities, styles, and orientations established from meter-scale grid mapping to 100 km scale domainal analyses; and (4) variations in the causes of strain and kinematics within fold-dominated domains. These complex, partitioned histories require synthesized mapping, geochronology, and structural data at all scales to evaluate partitioning and, in the absence of correct scaling, can lead to incorrect interpretations of histories. Forward modeling capable of addressing deformation partitioning in materials containing multiple scales of rheologically heterogeneous elements of varying characteristic lengths provides the ability to upscale the large synthesized datasets described above to plate-scale tectonic processes and boundary conditions. By comparing modeling predictions from the recently developed ...
Campbell, Alistair; Hemsley, Samantha
2009-01-01
The validity and reliability of the Outcome Rating Scale (ORS) and the Session Rating Scale (SRS) were evaluated against existing longer measures, including the Outcome Questionnaire-45, Working Alliance Inventory, Depression Anxiety Stress Scale-21, Quality of Life Scale, Rosenberg Self-Esteem Scale and General Self-efficacy Scale. The measures…
Stellar candles for the extragalactic distance scale
Gieren, Wolfgang
2003-01-01
This volume reviews the current status with respect to both theory and observation of the extragalactic distance scale. A sufficient accuracy is required both for a precise determination of the cosmological parameters and also in order to achieve a better understanding of physical processes in extragalactic systems. The "standard candles", used to set up the extragalactic distance scale, reviewed in this book include cepheid variables, RR Lyrae variables, novae, Type Ia and Type II supernovae as well as globular clusters and planetary nebulae.
Development of the intoxicated personality scale.
Ward, Rose Marie; Brinkman, Craig S; Miller, Ashlin; Doolittle, James J
2015-01-01
To develop the Intoxicated Personality Scale (IPS). Data were collected from 436 college students via an online survey. Through an iterative measurement development process, the resulting IPS was created. The 5 subscales (Good Time, Risky Choices, Risky Sex, Emotional, and Introvert) of the IPS positively related to alcohol consumption, alcohol problems, drinking motives, alcohol expectancies, and personality. The results suggest that the Intoxicated Personality Scale may be a useful tool for predicting problematic alcohol consumption, alcohol expectancies, and drinking motives.
A Fractal Perspective on Scale in Geography
Bin Jiang
2016-06-01
Full Text Available Scale is a fundamental concept that has attracted persistent attention in geography literature over the past several decades. However, it creates enormous confusion and frustration, particularly in the context of geographic information science, because of scale-related issues such as image resolution and the modifiable areal unit problem (MAUP. This paper argues that the confusion and frustration arise from traditional Euclidean geometric thinking, in which locations, directions, and sizes are considered absolute, and it is now time to revise this conventional thinking. Hence, we review fractal geometry, together with its underlying way of thinking, and compare it to Euclidean geometry. Under the paradigm of Euclidean geometry, everything is measurable, no matter how big or small. However, most geographic features, due to their fractal nature, are essentially unmeasurable or their sizes depend on scale. For example, the length of a coastline, the area of a lake, and the slope of a topographic surface are all scale-dependent. Seen from the perspective of fractal geometry, many scale issues, such as the MAUP, are inevitable. They appear unsolvable, but can be dealt with. To effectively deal with scale-related issues, we present topological and scaling analyses illustrated by street-related concepts such as natural streets, street blocks, and natural cities. We further contend that one of the two spatial properties, spatial heterogeneity, is de facto the fractal nature of geographic features, and it should be considered the first effect among the two, because it is global and universal across all scales, which should receive more attention from practitioners of geography.
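The scale-dependence of fractal lengths noted above (e.g., a coastline) can be made concrete with the standard Koch-curve construction (a textbook illustration, not drawn from this paper): each refinement divides the ruler by 3 and multiplies the measured length by 4/3, so the "length" diverges rather than converging to a fixed number.

```python
import math

rulers, lengths = [], []
for depth in range(8):
    ruler = (1.0 / 3.0) ** depth     # ruler (segment) size at this depth
    length = (4.0 / 3.0) ** depth    # total measured length at this depth
    rulers.append(ruler)
    lengths.append(length)

# Richardson plot: slope of log(length) vs log(ruler) equals 1 - D,
# where D is the fractal dimension.
slope = (math.log(lengths[-1]) - math.log(lengths[0])) / (
    math.log(rulers[-1]) - math.log(rulers[0]))
D = 1.0 - slope                      # log 4 / log 3, about 1.26
```

The measured length grows without bound as the ruler shrinks, which is exactly why "the length of a coastline" is scale-dependent and why Euclidean measurement assumptions break down for such features.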
Finite size scaling and lattice gauge theory
Berg, B.A.
1986-01-01
Finite-size (Fisher) scaling is investigated for four-dimensional SU(2) and SU(3) lattice gauge theories without quarks. It allows one to disentangle violations of (asymptotic) scaling and finite-volume corrections. Mass spectrum, string tension, deconfinement temperature and lattice β-function are considered. For appropriate volumes, Monte Carlo investigations seem to be able to control the finite-volume continuum limit. Contact is made with Luescher's small-volume expansion and possibly also with the asymptotic large-volume behavior. 41 refs., 19 figs
Accurate multiplicity scaling in isotopically conjugate reactions
Golokhvastov, A.I.
1989-01-01
The generation of accurate scaling of multiplicity distributions is presented. The distributions of π⁻ mesons (negative particles) and π⁺ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor applied to the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs.
Genome-scale neurogenetics: methodology and meaning.
McCarroll, Steven A; Feng, Guoping; Hyman, Steven E
2014-06-01
Genetic analysis is currently offering glimpses into molecular mechanisms underlying such neuropsychiatric disorders as schizophrenia, bipolar disorder and autism. After years of frustration, success in identifying disease-associated DNA sequence variation has followed from new genomic technologies, new genome data resources, and global collaborations that could achieve the scale necessary to find the genes underlying highly polygenic disorders. Here we describe early results from genome-scale studies of large numbers of subjects and the emerging significance of these results for neurobiology.
Cardinal Scales for Public Health Evaluation
Harvey, Charles M.; Østerdal, Lars Peter
Policy studies often evaluate health for a population by summing the individuals' health as measured by a scale that is ordinal or that depends on risk attitudes. We develop a method using a different type of preferences, called preference intensity or cardinal preferences, to construct scales that measure changes in health. The method is based on a social welfare model that relates preferences between changes in an individual's health to preferences between changes in health for a population...
Composite asymptotic expansions and scaling wall turbulence.
Panton, Ronald L
2007-03-15
In this article, the assumptions and reasoning that yield composite asymptotic expansions for wall turbulence are discussed. Particular attention is paid to the scaling quantities that are used to render the variables non-dimensional and of order one. An asymptotic expansion is proposed for the streamwise Reynolds stress that accounts for the active and inactive turbulence by using different scalings. The idea is tested with the data from the channel flows and appears to have merit.
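The composite-expansion idea for the mean velocity can be illustrated with the classical inner log law plus Coles' outer wake function, counting the common part (the log law) once. The constants below are typical textbook values, not taken from this article, whose focus is the analogous construction for the streamwise Reynolds stress:

```python
import numpy as np

def composite_velocity(y_plus, delta_plus, kappa=0.41, B=5.0, Pi=0.2):
    """Composite expansion sketch for the mean velocity profile:
    u+ = (1/kappa) ln(y+) + B + (2*Pi/kappa) * sin^2(pi/2 * y/delta).
    Inner (log law) and outer (wake) expansions are added, with the
    shared log-law part counted only once."""
    eta = y_plus / delta_plus                  # outer variable y/delta
    log_law = np.log(y_plus) / kappa + B      # inner scaling, order one in y+
    wake = (2.0 * Pi / kappa) * np.sin(np.pi * eta / 2.0) ** 2
    return log_law + wake
```

For y+ well inside the log layer with y/δ ≪ 1, the wake term is negligible and the composite profile reduces to the log law; near y = δ the wake contribution reaches its maximum 2Π/κ.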
Economy of scale still holds true
Anon.
1985-01-01
The economic merits of larger generating units have been questioned and have become the subject of doubt and controversy. A 1980 study by Sargent and Lundy concluded that economy of scale still held, but some of the basic factors and major assumptions used in that study have since changed. An update of those results, which also examines whether reduced load-growth rates affect the study's conclusions, finds that economy of scale still applies.
Mouse Activity across Time Scales: Fractal Scenarios
Lima, G. Z. dos Santos; Lobão-Soares, B.; do Nascimento, G. C.; França, Arthur S. C.; Muratori, L.; Ribeiro, S.; Corso, G.
2014-01-01
In this work we devise a classification of mouse activity patterns based on accelerometer data using Detrended Fluctuation Analysis (DFA). We use two characteristic mouse behavioural states as benchmarks: waking in free activity and slow-wave sleep (SWS). In both situations we find roughly the same pattern: over short time intervals we observe high correlation in activity, a typical 1/f complex pattern, while over large time intervals there is anti-correlation. High correlation over short intervals is related to highly coordinated muscle activity. In the waking state we associate the high correlation both with muscle activity and with stereotyped mouse movements (grooming, walking, etc.). Conversely, the anti-correlation observed over large time scales during SWS appears related to a feedback autonomic response. The transition from the correlated regime at short scales to the anti-correlated regime at large scales is set, during SWS, by the respiratory cycle interval and, during the waking state, by the time scale corresponding to the duration of the stereotyped mouse movements. Furthermore, we find that the waking state is characterized by longer time scales than SWS and by a softer transition from correlation to anti-correlation. Moreover, this soft transition in the waking state encompasses a behavioural time-scale window that gives rise to a multifractal pattern. We believe that the observed multifractality in mouse activity is formed by the integration of several stereotyped movements, each with a characteristic time correlation. Finally, we compare scaling properties of body-acceleration fluctuation time series during sleep and wake periods for healthy mice. Interestingly, differences between sleep and wake in the scaling exponents are comparable to those reported in previous work on the human heartbeat. Complementarily, the nature of these sleep-wake dynamics could lead to a better
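The DFA procedure underlying the classification above can be sketched in a few lines. This is a generic DFA-1 implementation (integrate, detrend linearly in windows, fit the log-log slope), not the authors' exact analysis pipeline:

```python
import numpy as np

def dfa_exponent(x, scales):
    """DFA-1: slope alpha of log F(s) versus log s.
    alpha ~ 0.5: uncorrelated; alpha > 0.5: persistent (correlated);
    alpha < 0.5: anti-correlated."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        sq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrending
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))         # fluctuation at scale s
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(0)
alpha_white = dfa_exponent(rng.standard_normal(8192), [16, 32, 64, 128, 256])
# white noise: alpha close to 0.5
```

Applied to accelerometer magnitude series, exponents above 0.5 at short windows and below 0.5 at long windows would reproduce the correlated-to-anti-correlated crossover described above.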
Large-scale Complex IT Systems
Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard
2011-01-01
This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...
Large-scale complex IT systems
Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard
2012-01-01
12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...
Scaling violations at ultra-high energies
Tung, W.K.
1979-01-01
The paper discusses some of the features of high-energy lepton-hadron scattering, including the observed (Bjorken) scaling behavior. The cross-sections, in which all hadron final states are summed over, are examined, and the general formulas for the differential cross-section are given. The subjects of scaling, scaling breaking, and their phenomenological consequences are studied, and a list of what ultra-high-energy neutrino physics can teach about QCD is given.
Scaling Sparse Matrices for Optimization Algorithms
Gajulapalli Ravindra S; Lasdon Leon S
2006-01-01
To iteratively solve large-scale optimization problems in various contexts such as planning, operations, and design, we need to generate descent directions that are based on linear system solutions. Irrespective of the optimization algorithm or the solution method employed for the linear systems, the ill-conditioning introduced by problem characteristics, the algorithm, or both needs to be addressed. In [GL01] we used an intuitive heuristic approach in scaling linear systems that improved performan...
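One simple way to address such ill-conditioning is iterative row/column equilibration. The sketch below scales by the inverse square roots of row and column maxima (a Ruiz-style heuristic); it illustrates the general idea only and is not the heuristic of [GL01]:

```python
import numpy as np

def equilibrate(A, iters=5):
    """Ruiz-style equilibration: alternately scale rows and columns by
    the inverse square roots of their largest absolute entries, so that
    Dr @ A @ Dc has row/column maxima near 1.
    Assumes no zero rows or columns."""
    m, n = A.shape
    dr, dc = np.ones(m), np.ones(n)
    B = np.abs(A).astype(float)
    for _ in range(iters):
        r = 1.0 / np.sqrt(B.max(axis=1))   # row scaling factors
        B *= r[:, None]
        dr *= r
        c = 1.0 / np.sqrt(B.max(axis=0))   # column scaling factors
        B *= c[None, :]
        dc *= c
    return np.diag(dr), np.diag(dc)

# A badly scaled matrix: entries span twelve orders of magnitude.
A = np.array([[1e6, 2.0], [3.0, 1e-6]])
Dr, Dc = equilibrate(A)
```

Here the condition number drops by many orders of magnitude from A to Dr @ A @ Dc, which is exactly the kind of improvement that helps the linear-system solves inside an optimization loop.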
Competition and Outsourcing with Scale Economies
Gérard P. Cachon; Patrick T. Harker
2002-01-01
Scale economies are commonplace in operations, yet because of analytical challenges, relatively little is known about how firms should compete in their presence. This paper presents a model of competition between two firms that face scale economies (i.e., each firm's cost per unit of demand is decreasing in demand). A general framework is used, which incorporates competition between two service providers with price- and time-sensitive demand (a queuing game), and competition between two reta...
Large-scale grid management; Storskala Nettforvaltning
Langdal, Bjoern Inge; Eggen, Arnt Ove
2003-07-01
The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.
Toward seamless hydrologic predictions across spatial scales
Samaniego, Luis; Kumar, Rohini; Thober, Stephan; Rakovec, Oldrich; Zink, Matthias; Wanders, Niko; Eisner, Stephanie; Müller Schmied, Hannes; Sutanudjaja, Edwin H.; Warrach-Sagi, Kirsten; Attinger, Sabine
2017-09-01
Land surface and hydrologic models (LSMs/HMs) are used at diverse spatial resolutions ranging from catchment-scale (1-10 km) to global-scale (over 50 km) applications. Applying the same model structure at different spatial scales requires that the model estimates similar fluxes independent of the chosen resolution, i.e., fulfills a flux-matching condition across scales. An analysis of state-of-the-art LSMs and HMs reveals that most do not have consistent hydrologic parameter fields. Multiple experiments with the mHM, Noah-MP, PCR-GLOBWB, and WaterGAP models demonstrate the pitfalls of deficient parameterization practices currently used in most operational models, which are insufficient to satisfy the flux-matching condition. These examples demonstrate that J. Dooge's 1982 statement on the unsolved problem of parameterization in these models remains true. Based on a review of existing parameter regionalization techniques, we postulate that the multiscale parameter regionalization (MPR) technique offers a practical and robust method that provides consistent (seamless) parameter and flux fields across scales. Herein, we develop a general model protocol to describe how MPR can be applied to a particular model and present an example application using the PCR-GLOBWB model. Finally, we discuss potential advantages and limitations of MPR in obtaining the seamless prediction of hydrological fluxes and states across spatial scales.
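The core MPR idea can be sketched in a few lines: apply a transfer function linking parameters to geophysical predictors at the finest available resolution, then upscale the resulting parameter field to the model resolution. The linear transfer function and arithmetic-mean upscaling operator below are placeholders for illustration; mHM's actual transfer functions and upscaling operators are more elaborate:

```python
import numpy as np

def mpr_parameter(predictor_fine, a, b, block):
    """MPR sketch: transfer function at fine resolution, then block-mean
    upscaling. The calibrated coefficients (a, b) are resolution-
    independent, so the same pair yields consistent parameter fields at
    any target resolution (any block size)."""
    p_fine = a + b * predictor_fine      # hypothetical linear transfer function
    nr = p_fine.shape[0] // block
    nc = p_fine.shape[1] // block
    p = p_fine[:nr * block, :nc * block]
    # arithmetic-mean upscaling operator over block x block cells
    return p.reshape(nr, block, nc, block).mean(axis=(1, 3))

# e.g. a 4x4 predictor field (say, sand fraction) upscaled by a factor of 2
field = np.arange(16.0).reshape(4, 4)
coarse = mpr_parameter(field, a=0.0, b=1.0, block=2)
```

Because upscaling happens after the transfer function is applied, recalibrating separately at each resolution (the deficient practice criticized above) is avoided.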
Software challenges in extreme scale systems
Sarkar, Vivek; Harrod, William; Snavely, Allan E
2009-01-01
Computer systems anticipated in the 2015 - 2020 timeframe are referred to as Extreme Scale because they will be built using massive multi-core processors with hundreds of cores per chip. The largest capability Extreme Scale system is expected to deliver Exascale performance of the order of 10^18 operations per second. These systems pose new critical challenges for software in the areas of concurrency, energy efficiency and resiliency. In this paper, we discuss the implications of the concurrency and energy efficiency challenges on future software for Extreme Scale Systems. From an application viewpoint, the concurrency and energy challenges boil down to the ability to express and manage parallelism and locality by exploring a range of strong scaling and new-era weak scaling techniques. For expressing parallelism and locality, the key challenges are the ability to expose all of the intrinsic parallelism and locality in a programming model, while ensuring that this expression of parallelism and locality is portable across a range of systems. For managing parallelism and locality, the OS-related challenges include parallel scalability, spatial partitioning of OS and application functionality, direct hardware access for inter-processor communication, and asynchronous rather than interrupt-driven events, which are accompanied by runtime system challenges for scheduling, synchronization, memory management, communication, performance monitoring, and power management. We conclude by discussing the importance of software-hardware co-design in addressing the fundamental challenges for application enablement on Extreme Scale systems.
Linear scaling of density functional algorithms
Stechel, E.B.; Feibelman, P.J.; Williams, A.R.
1993-01-01
An efficient density functional algorithm (DFA) that scales linearly with system size will revolutionize electronic structure calculations. Density functional calculations are reliable and accurate in determining many condensed-matter and molecular ground-state properties. However, because current DFAs, including methods related to that of Car and Parrinello, scale with the cube of the system size, density functional studies are not routinely applied to large systems. Linear scaling is achieved by constructing functions that are both localized and fully occupied, thereby eliminating the need to calculate global eigenfunctions. It is, however, widely believed that exponential localization requires the existence of an energy gap between the occupied and unoccupied states. Despite this, the authors demonstrate that linear scaling can still be achieved for metals. Using a linear scaling algorithm, they have explicitly constructed localized, almost fully occupied orbitals for the quintessential metallic system, jellium. The algorithm is readily generalizable to any system geometry and Hamiltonian. The authors discuss the conceptual issues involved, convergence properties, and scaling for this new algorithm.
Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jessee, Matthew Anderson [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-08-01
The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.
Scale calculus and the Schroedinger equation
Cresson, Jacky
2003-01-01
This paper is twofold. In the first part, we extend classical differential calculus to continuous nondifferentiable functions by developing the notion of scale calculus. The scale calculus is based on a new approach to continuous nondifferentiable functions: constructing a one-parameter family of differentiable functions f(t,ε) such that f(t,ε)→f(t) when ε goes to zero. This leads to several new notions, such as representations, fractal functions, and ε-differentiability. The basic objects of the scale calculus are the left and right quantum operators and the scale operator, which generalizes the classical derivative. We then discuss some algebraic properties of these operators and define a natural bialgebra, called the quantum bialgebra, associated with them. Finally, we discuss a convenient geometric object associated with our study. In the second part, we define a first quantization procedure of classical mechanics following the scale relativity theory developed by Nottale. We obtain a nonlinear Schroedinger equation via the classical Newton's equation of dynamics using the scale operator. Under special assumptions we recover the classical Schroedinger equation, and we discuss the relevance of these assumptions.
Validated assessment scales for the lower face.
Narins, Rhoda S; Carruthers, Jean; Flynn, Timothy C; Geister, Thorin L; Görtelmeyer, Roman; Hardas, Bhushan; Himmrich, Silvia; Jones, Derek; Kerscher, Martina; de Maio, Maurício; Mohrmann, Cornelia; Pooth, Rainer; Rzany, Berthold; Sattler, Gerhard; Buchner, Larry; Benter, Ursula; Breitscheidel, Lusine; Carruthers, Alastair
2012-02-01
Aging in the lower face leads to lines, wrinkles, depression of the corners of the mouth, and changes in lip volume and lip shape, with increased sagging of the skin of the jawline. Refined, easy-to-use, validated, objective standards assessing the severity of these changes are required in clinical research and practice. To establish the reliability of eight lower face scales assessing nasolabial folds, marionette lines, upper and lower lip fullness, lip wrinkles (at rest and dynamic), the oral commissure and jawline, aesthetic areas, and the lower face unit. Four 5-point rating scales were developed to objectively assess upper and lower lip wrinkles, oral commissures, and the jawline. Twelve experts rated identical lower face photographs of 50 subjects in two separate rating cycles using eight 5-point scales. Inter- and intrarater reliability of responses was assessed. Interrater reliability was substantial or almost perfect for all lower face scales, aesthetic areas, and the lower face unit. Intrarater reliability was high for all scales, areas and the lower face unit. Our rating scales are reliable tools for valid and reproducible assessment of the aging process in lower face areas. © 2012 by the American Society for Dermatologic Surgery, Inc. Published by Wiley Periodicals, Inc.
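Rater agreement in studies like this is typically quantified with a chance-corrected statistic. As an illustration (the abstract does not state which statistic was used), Cohen's kappa for two raters on a 5-point scale can be computed as:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal category frequencies."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                       # observed agreement
    cats = np.union1d(r1, r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance
    return (po - pe) / (1.0 - pe)

# two raters scoring 8 photographs on a 1-5 scale (made-up ratings)
rater_a = [1, 2, 2, 3, 4, 5, 3, 2]
rater_b = [1, 2, 3, 3, 4, 5, 3, 1]
kappa = cohens_kappa(rater_a, rater_b)
```

Values above roughly 0.6 are conventionally read as substantial agreement, matching the qualitative labels ("substantial", "almost perfect") used in the abstract.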
Political consultation and large-scale research
Bechmann, G.; Folkers, H.
1977-01-01
Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy, and the political sphere. In this position, large-scale research and policy consulting lack the institutional guarantees and the rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under considerations of profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting has neither the political system's assigned competence to make decisions, nor can it be judged successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three points, the thesis that this is a new form of institutionalization of science: (1) external control, (2) the organizational form, and (3) the theoretical conception of large-scale research and policy consulting. (orig.) [de
Automatically Determining Scale Within Unstructured Point Clouds
Kadamen, Jayren; Sithole, George
2016-06-01
Three-dimensional models obtained from imagery have an arbitrary scale and therefore have to be scaled. Automatically scaling these models requires the detection of objects in them, which can be computationally intensive. Real-time object detection may pose problems for applications such as indoor navigation. This investigation poses the idea that relational cues, specifically height ratios, within indoor environments may offer an easier means to obtain scales for models created using imagery. The investigation aimed to show two things: (a) that the size of objects, especially their height off the ground, is consistent within an environment, and (b) that, based on this consistency, objects can be identified and their general size used to scale a model. To test the idea, a hypothesis is first tested on a terrestrial lidar scan of an indoor environment. Later, as a proof of concept, the same test is applied to a model created using imagery. The most notable finding was that the detection of objects can be more readily done by studying the ratio between the dimensions of objects whose dimensions are defined by human physiology. For example, the dimensions of desks and chairs are related to the height of an average person. In the test, the differences between generalised and actual dimensions of objects were assessed. A maximum difference of 3.96% (2.93 cm) was observed from automated scaling. By analysing the ratio between the heights (distance from the floor) of the tops of objects in a room, identification was also achieved.
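The scaling step itself reduces to simple arithmetic once a physiology-linked object is identified. The heights below are invented for illustration; the paper's generalized dimensions and test objects differ:

```python
def scale_from_object(model_height, assumed_true_height):
    """Scale factor for an arbitrarily scaled model, from one detected
    object whose real-world height is assumed known."""
    return assumed_true_height / model_height

def percent_error(estimated, actual):
    return 100.0 * abs(estimated - actual) / actual

# hypothetical: a desk top detected at 0.37 model units, with an
# assumed generalized desk height of 0.74 m
s = scale_from_object(0.37, 0.74)        # scale factor 2.0
door_est = 1.02 * s                      # another object, scaled: 2.04 m
err = percent_error(door_est, 2.00)      # vs its actual 2.00 m height
```

The reported 3.96% maximum difference is exactly this kind of discrepancy, between an object's scaled (generalized) dimension and its actual one.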
Scaling Climate Change Communication for Behavior Change
Rodriguez, V. C.; Lappé, M.; Flora, J. A.; Ardoin, N. M.; Robinson, T. N.
2014-12-01
Ultimately, effective climate change communication results in a change in behavior, whether the change is individual, household, or collective actions within communities. We describe two efforts to promote climate-friendly behavior via climate communication and behavior change theory. Importantly, these efforts are designed to scale climate communication principles focused on behavior change rather than solely emphasizing climate knowledge or attitudes. Both cases are embedded in rigorous evaluations (randomized controlled trial and quasi-experimental) of primary and secondary outcomes, as well as supplementary analyses that have implications for program refinement and program scaling. In the first case, the Girl Scouts "Girls Learning Environment and Energy" (GLEE) trial is scaling the program via a Massive Open Online Course (MOOC) for Troop Leaders to teach the effective home electricity and food and transportation energy reduction programs. The second case, the Alliance for Climate Education (ACE) Assembly Program, is advancing the already-scaled assembly program by using communication principles to further engage youth and their families and communities (school and local communities) in individual and collective actions. Scaling of each program uses online learning platforms, social media and "behavior practice" videos, mastery practice exercises, virtual feedback, and virtual social engagement to advance climate-friendly behavior change. All of these communication practices aim to simulate and advance in-person train-the-trainers technologies. As part of this presentation we outline scaling principles derived from these two climate change communication and behavior change programs.
Scaling Principles for Understanding and Exploiting Adhesion
Crosby, Alfred
A grand challenge in the science of adhesion is the development of a general design paradigm for adhesive materials that can sustain large forces across an interface yet be detached with minimal force upon command. Essential to this challenge is the generality of achieving this performance under a wide set of external conditions and across an extensive range of forces. Nature has provided some guidance through various examples, e.g. geckos, for how to meet this challenge; however, a single solution is not evident upon initial investigation. To help provide insight into nature's ability to scale reversible adhesion and adapt to different external constraints, we have developed a general scaling theory that describes the force capacity of an adhesive interface in the context of biological locomotion. We have demonstrated that this scaling theory can be used to understand the relative performance of a wide range of organisms, including numerous gecko species and insects, as well as an extensive library of synthetic adhesive materials. We will present the development and testing of this scaling theory, and how this understanding has helped guide the development of new composite materials for high capacity adhesives. We will also demonstrate how this scaling theory has led to the development of new strategies for transfer printing and adhesive applications in manufacturing processes. Overall, the developed scaling principles provide a framework for guiding the design of adhesives.
Large-Scale Outflows in Seyfert Galaxies
Colbert, E. J. M.; Baum, S. A.
1995-12-01
Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble the radio halos found in some edge-on starburst galaxies (e.g., M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g., a jet) or a circumnuclear starburst.
Mesoscale to Synoptic Scale Cloud Variability
Rossow, William B.
1998-01-01
The atmospheric circulation and its interaction with the oceanic circulation involve non-linear and non-local exchanges of energy and water over a very large range of space and time scales. These exchanges are revealed, in part, by the related variations of clouds, which occur on a similar range of scales as the atmospheric motions that produce them. Collection of comprehensive measurements of the properties of the atmosphere, clouds and surface allows for diagnosis of some of these exchanges. The multi-satellite-network approach of the International Satellite Cloud Climatology Project (ISCCP) comes closest to providing complete coverage of the relevant range of space and time scales over which the clouds, atmosphere and ocean vary. A nearly 15-yr dataset is now available that covers the range from 3 hr and 30 km to decadal and planetary scales. This paper considers three topics: (1) cloud variations at the smallest scales and how they may influence radiation-cloud interactions; (2) cloud variations at "moderate" scales and how they may cause natural climate variability; and (3) cloud variations at the largest scales and how they affect the climate. The emphasis in this discussion is on the more mature subject of cloud-radiation interactions. There is now a need to begin similarly detailed diagnostic studies of water exchange processes.
Nanometer scale materials - characterization and fabrication
Murday, J.S.; Colton, R.J.; Rath, B.B.
1993-01-01
Materials and solid state scientists have made excellent progress in understanding material behavior at length scales from microns to meters. Below a micron, the lack of analytical prowess has been a deterrent. At the atomic scale, chemistry and atomic/molecular physics have also contributed significant understanding of matter. The maturity of these three communities (materials, solid state physics, and atomic/molecular physics/chemistry), coupled with the development of analytical capability for nanometer-sized structures, promises to broaden our grasp of materials behavior into the last unexplored realm of size scales: the nanometer. The motivation for this effort is driven both by the expectation of novel properties and by the potential solution of long-standing technological issues. Critical scale lengths for many material properties fall in the nanometer range; examples include superconductor coherence lengths, electron inelastic mean free paths, electron wavelengths in solids, and critical lengths for dislocation generation. Structures of nanometer size will undoubtedly show behavior unexpected from experience at larger and smaller scales. Many technological problems such as adhesion, friction, corrosion, elasticity and fracture are believed to depend critically on nanometer-scale phenomena. The millennia-old effort to improve materials behavior has undoubtedly been slowed by our inability to 'observe' in this size range. (orig.)
Scaling the heterogeneously heated convective boundary layer
Van Heerwaarden, C.; Mellado, J.; De Lozar, A.
2013-12-01
We have studied the heterogeneously heated convective boundary layer (CBL) by means of large-eddy simulations (LES) and direct numerical simulations (DNS). What distinguishes our study from previous work on this subject are very long simulations, in which the system travels through multiple states, from which we have derived scaling laws. In our setup, a stratified atmosphere is heated from below by square patches with a high surface buoyancy flux, surrounded by regions with little or no flux. By letting a boundary layer grow in time we let the system evolve from the so-called meso-scale to the micro-scale regime. In the former the heterogeneity is large and strong circulations can develop, while in the latter the heterogeneity is small and no longer influences the boundary-layer structure. Within each simulation we can observe the formation of a peak in kinetic energy, which represents the 'optimal' heterogeneity size in the meso-scale, and the subsequent decay of the peak and the development towards the transition to the micro-scale. We have created a non-dimensional parameter space that describes all properties of this system. By studying this evolution for different combinations of parameters, we have derived three important conclusions. First, there exists a horizontal length scale of the heterogeneity (L) that is a function of the boundary-layer height (h) and the Richardson number (Ri) of the inversion at the top of the boundary layer. This relationship has the form L = h Ri^(3/8). Second, this horizontal length scale L allows the time evolution, and thus the state of the system, to be expressed as the ratio of this length scale to the distance between two patches, Xp. This ratio describes to what extent the circulation fills up the space between two patch centers. The timings of the transition from the meso- to the micro-scale collapse under this scaling for all simulations sharing the same flux
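The two scaling results quoted above translate directly into formulas. A small sketch (the variable names and example values are illustrative, not from the study):

```python
def heterogeneity_length(h, Ri):
    """L = h * Ri**(3/8): horizontal length scale of the circulation,
    from boundary-layer height h and inversion Richardson number Ri."""
    return h * Ri ** (3.0 / 8.0)

def regime_ratio(h, Ri, patch_distance):
    """L / Xp, the state of the system: small values correspond to the
    meso-scale regime (circulations far from filling the inter-patch
    space); values near 1 mark the approach to the micro-scale
    transition."""
    return heterogeneity_length(h, Ri) / patch_distance

# e.g. h = 1000 m and Ri = 10 give L of roughly 2.4 km
L = heterogeneity_length(1000.0, 10.0)
```

Because L grows with the boundary-layer height h, a growing CBL sweeps L/Xp upward in time, which is how a single long simulation traverses both regimes.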
BENCH SCALE SALTSTONE PROCESS DEVELOPMENT MIXING STUDY
Cozzi, A.; Hansen, E.
2011-08-03
The Savannah River National Laboratory (SRNL) was requested to develop a bench-scale test facility, using a mixer, transfer pump, and transfer line, to determine the impact of conveying the grout through the transfer lines to the vault on grout properties. Bench-scale testing focused on the effect the transfer line has on the rheological properties of the grout as it is processed through the line. Rheological and other physical properties of grout samples were obtained prior to and after pumping through a transfer line. The Bench Scale Mixing Rig (BSMR) consisted of two mixing tanks, a grout feed tank, a transfer pump, and a transfer hose. The mixing tanks were used to batch the grout, which was then transferred into the grout feed tank. The contents of the feed tank were then pumped through the transfer line (hose) using a progressive cavity pump. The grout flow rate and pump discharge pressure were monitored. Four sampling stations were located along the length of the transfer line: at 5, 105, and 205 feet past the transfer pump, and at 305 feet, the discharge of the hose. Scaling between the full-scale piping at Saltstone and bench-scale testing at SRNL was performed by maintaining the same shear rate and total shear at the wall of the transfer line. Scaling down resulted in a shorter transfer line, a lower average velocity, the same transfer time, and similar pressure drops. The flow in the bench-scale transfer line is laminar. The flow in the full-scale pipe is in the transition region, but is more laminar than turbulent. The resulting plug in laminar flow at the bench scale produces a region of no mixing. Hence mixing, or shearing, at the bench scale should be less than that observed at full scale, where this plug is nonexistent due to the turbulent flow. The bench-scale tests should therefore be considered conservative, owing to the highly laminar flow condition. Two BSMR runs were performed. In both cases, wall
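For a Newtonian fluid in laminar pipe flow the wall shear rate is 8V/D, so the matching conditions quoted above (same wall shear rate, same total shear) can be sketched as follows. This Newtonian sketch is illustrative only; grout is non-Newtonian, so the actual bench-scale design would use the appropriate rheological model:

```python
def bench_scale_line(D_full, V_full, L_full, D_bench):
    """Match laminar wall shear rate (8V/D, Newtonian) and total shear
    (shear rate x transfer time) between full- and bench-scale lines.
    Returns the bench-scale velocity, line length, and transfer time."""
    gamma_wall = 8.0 * V_full / D_full       # wall shear rate, 1/s
    V_bench = gamma_wall * D_bench / 8.0     # same shear rate in smaller hose
    t_transfer = L_full / V_full             # total shear fixed => same time
    L_bench = V_bench * t_transfer           # hence a shorter line
    return V_bench, L_bench, t_transfer

# hypothetical numbers: halving the line diameter halves the velocity
# and the required line length, at the same transfer time
V_b, L_b, t = bench_scale_line(D_full=0.05, V_full=1.0, L_full=100.0, D_bench=0.025)
```

This reproduces the qualitative outcome reported above: a shorter transfer line, a lower average velocity, and the same transfer time.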
Ekin, Jack W; Goodrich, Loren; Splett, Jolene; Bordini, Bernardo; Richter, David
2016-01-01
A scaling study of several thousand Nb$_{3}$Sn critical-current $(I_c)$ measurements is used to derive the Extrapolative Scaling Expression (ESE), a relation that can quickly and accurately extrapolate limited datasets to obtain the full three-dimensional dependence of $I_c$ on magnetic field (B), temperature (T), and mechanical strain (ε). The relation has the advantage of being easy to implement, offers significant savings in sample-characterization time, and provides a useful tool for magnet design. A thorough data-based analysis of the general parameterization of the Unified Scaling Law (USL) shows the existence of three universal scaling constants for practical Nb$_{3}$Sn conductors. The study also identifies the scaling parameters that are conductor specific and need to be fitted to each conductor. This investigation includes two new, rare, and very large $I_c(B,T,\varepsilon)$ datasets (each with nearly a thousand $I_c$ measurements spanning magnetic fields from 1 to 16 T, temperatures from ~2.26 to 14 K, and intrinsic strain...
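As a loose illustration of fitting a scaling relation to a limited $I_c(B)$ dataset, the sketch below uses the classical Fietz-Webb pinning-force form $F_p = I_c B \propto b^p(1-b)^q$ with $b = B/B_{c2}$ (p = 0.5, q = 2 are values often quoted for Nb$_3$Sn), not the paper's actual ESE parameterization; all "measurements" and parameter grids are synthetic:

```python
# Hedged sketch: recover scaling parameters (C, Bc2) from synthetic Ic(B)
# data using the Fietz-Webb pinning-force form Fp = Ic*B = C * b^p * (1-b)^q.
# This stands in for least-squares fitting of a scaling law; it is NOT the
# ESE parameterization from the paper.

def ic_model(B, C, Bc2, p=0.5, q=2.0):
    """Critical current implied by pinning-force scaling (0 outside 0<b<1)."""
    b = B / Bc2
    return C * b**p * (1 - b)**q / B if 0 < b < 1 else 0.0

# Synthetic "measurements" generated from known parameters
C_true, Bc2_true = 2.0e4, 24.0
fields = [2, 4, 6, 8, 10, 12, 14, 16]          # tesla (illustrative)
data = [(B, ic_model(B, C_true, Bc2_true)) for B in fields]

# Crude grid search over (C, Bc2) stands in for a real fit routine
best = min(
    ((C, Bc2, sum((ic_model(B, C, Bc2) - I)**2 for B, I in data))
     for C in [1.8e4, 1.9e4, 2.0e4, 2.1e4]
     for Bc2 in [22.0, 23.0, 24.0, 25.0]),
    key=lambda t: t[2],
)
print(best[:2])   # recovers the generating parameters (2.0e4, 24.0)
```

The point of the ESE approach described above is that once the conductor-specific parameters are fitted from such a limited dataset, the full three-dimensional $I_c(B,T,\varepsilon)$ surface can be extrapolated.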
Effect of wettability on scale-up of multiphase flow from core-scale to reservoir fine-grid-scale
Chang, Y.C.; Mani, V.; Mohanty, K.K. [Univ. of Houston, TX (United States)
1997-08-01
Typical field-simulation grid blocks are internally heterogeneous. The objective of this work is to study how the wettability of the rock affects the scale-up of its multiphase flow properties from core scale to fine-grid reservoir-simulation scale (~10′ × 10′ × 5′). Reservoir models need another level of upscaling to coarse-grid simulation scale, which is not addressed here. Heterogeneity is modeled here as a correlated random field parameterized in terms of its variance and two-point variogram. Variogram models of both finite (spherical) and infinite (fractal) correlation length are included as special cases. Local core-scale porosity, permeability, capillary pressure function, relative permeability functions, and initial water saturation are assumed to be correlated. Water injection is simulated, and effective flow properties and flow equations are calculated. For strongly water-wet media, capillarity has a stabilizing/homogenizing effect on multiphase flow. For small variance in permeability and small correlation length, effective relative permeability can be described by capillary-equilibrium models. At higher variance and moderate correlation length, the average flow can be described by a dynamic relative permeability. As the oil-wetness of the rock increases, the capillary stabilizing effect decreases and the deviation from this average flow increases. For fractal fields with large variance in permeability, an effective relative permeability is not adequate to describe the flow.
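A minimal sketch of two of the ingredients involved: a 1-D correlated random log-permeability field built by moving-average filtering of white noise (a crude stand-in for the variogram-based fields in the study), together with the classical arithmetic, harmonic, and geometric effective-permeability estimates. Field size, correlation length, and seed are arbitrary:

```python
import math
import random

# Hedged sketch: correlated log-permeability field via a moving-average
# filter of Gaussian white noise, plus single-phase effective-permeability
# estimates. Illustrative only; not the variogram models used in the study.

random.seed(42)
n, corr_len = 200, 10
noise = [random.gauss(0, 1) for _ in range(n + corr_len)]
# Averaging over a window of length corr_len induces spatial correlation
log_k = [sum(noise[i:i + corr_len]) / math.sqrt(corr_len) for i in range(n)]
k = [math.exp(v) for v in log_k]               # core-scale permeabilities

k_arith = sum(k) / n                           # flow parallel to layers
k_harm = n / sum(1.0 / v for v in k)           # flow across layers
k_geom = math.exp(sum(log_k) / n)              # 2-D isotropic estimate
assert k_harm <= k_geom <= k_arith             # classical averaging bounds
```

The multiphase, wettability-dependent upscaling studied in the paper is far richer than these single-phase averages, but the same kind of correlated input field underlies it.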
Statistical scaling of pore-scale Lagrangian velocities in natural porous media.
Siena, M; Guadagnini, A; Riva, M; Bijeljic, B; Pereira Nunes, J P; Blunt, M J
2014-08-01
We investigate the scaling behavior of sample statistics of pore-scale Lagrangian velocities in two different rock samples, Bentheimer sandstone and Estaillades limestone. The samples are imaged using x-ray computer tomography with micron-scale resolution. The scaling analysis relies on the study of the way qth-order sample structure functions (statistical moments of order q of absolute increments) of Lagrangian velocities depend on separation distances, or lags, traveled along the mean flow direction. In the sandstone block, sample structure functions of all orders exhibit a power-law scaling within a clearly identifiable intermediate range of lags. Sample structure functions associated with the limestone block display two diverse power-law regimes, which we infer to be related to two overlapping spatially correlated structures. In both rocks and for all orders q, we observe linear relationships between logarithmic structure functions of successive orders at all lags (a phenomenon that is typically known as extended power scaling, or extended self-similarity). The scaling behavior of Lagrangian velocities is compared with the one exhibited by porosity and specific surface area, which constitute two key pore-scale geometric observables. The statistical scaling of the local velocity field reflects the behavior of these geometric observables, with the occurrence of power-law-scaling regimes within the same range of lags for sample structure functions of Lagrangian velocity, porosity, and specific surface area.
Kelly, Dana L.
1999-11-01
The scale anchoring method was used to analyze and describe the TIMSS primary- and middle-school (Populations 1 and 2) mathematics and science achievement scales. Scale anchoring is a way of attaching meaning to a scale by describing what students know and can do at specific points on the scale. Student achievement was scrutinized at four points on the TIMSS primary- and middle-school achievement scales: the 25th, 50th, 75th, and 90th international percentiles for fourth and eighth grades. The scale anchoring method was adapted for the TIMSS data, and the items that students scoring at each of the four scale points were likely to answer correctly (with a 65 percent probability) were identified. The items were assembled in binders organized by anchor level and content area. Two ten-member panels of subject-matter specialists were convened to scrutinize the items, draft descriptions of student proficiency at the four scale points, and identify example TIMSS items to illustrate performance at each level. Following the panel meetings, the descriptions were refined through an iterative review process. The result is a content-referenced interpretation of the TIMSS scales through which TIMSS achievement results can be better communicated and understood.
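The 65-percent anchoring rule can be sketched with a toy logistic item-response curve. This is illustrative only: the item names, difficulties, and anchor thetas below are invented, and the actual TIMSS procedure involves additional criteria not shown here:

```python
import math

# Hedged sketch of the anchoring rule: an item "anchors" at a scale point
# if students at that point answer it correctly with probability >= 0.65.
# The logistic curve and every number below are illustrative assumptions.

def p_correct(theta, difficulty, discrimination=1.0):
    """Toy logistic item-response curve (not the TIMSS scaling model)."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# Hypothetical anchor points (percentiles mapped onto a theta scale)
anchor_points = {"25th": -0.67, "50th": 0.0, "75th": 0.67, "90th": 1.28}
items = {"item_A": -1.0, "item_B": 0.0, "item_C": 1.0}   # difficulties

anchored = {
    label: [name for name, d in items.items() if p_correct(theta, d) >= 0.65]
    for label, theta in anchor_points.items()
}
print(anchored)
```

Easier items anchor at lower scale points and progressively harder items anchor only at the higher percentiles, which is what makes the resulting scale description content-referenced.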
Principle of Parsimony, Fake Science, and Scales
Yeh, T. C. J.; Wan, L.; Wang, X. S.
2017-12-01
Considering the difficulty of predicting the exact motions of water molecules, and the scale of our interests (bulk behaviors of many molecules), Fick's law (the diffusion concept) was created to predict the solute diffusion process in space and time. G.I. Taylor (1921) demonstrated that the random motion of molecules reaches the Fickian regime in less than a second if our sampling scale is large enough to reach the ergodic condition. Fick's law is widely accepted for describing molecular diffusion as such; this fits the definition of the parsimony principle at the scale of our concern. Similarly, the advection-dispersion or convection-dispersion equation (ADE or CDE) has been found quite satisfactory for analysis of concentration breakthroughs of solute transport in uniformly packed soil columns. This is because the solute is often released over the entire cross-section of the column, which samples many pore-scale heterogeneities and meets the ergodicity assumption. Further, the uniformly packed column contains a large number of stationary pore-size heterogeneities, so the solute reaches the Fickian regime after traveling a short distance along the column. Moreover, breakthrough curves are concentrations integrated over the column cross-section (the scale of our interest), and they meet the ergodicity assumption embedded in the ADE and CDE. In contrast, the scales of heterogeneity in most groundwater pollution problems evolve as contaminants travel. They are much larger than the scale of our observations and our interests, so the ergodic and Fickian conditions are difficult to meet. Upscaling Fick's law for solute dispersion, and deriving universal rules of dispersion for field- or basin-scale pollution migration, is merely a misuse of the parsimony principle and leads to fake science (i.e., the development of theories for predicting processes that cannot be observed). The appropriate principle of parsimony for these situations dictates mapping of large-scale
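Taylor's observation about the onset of the Fickian regime can be illustrated with an ensemble of random walkers: once the regime is reached, plume variance grows linearly in time and a constant dispersion (diffusion) coefficient is meaningful. The walker count, step statistics, and seed below are arbitrary:

```python
import random

# Hedged sketch: an ensemble of independent Gaussian random walkers as a
# toy model of molecular diffusion. In the Fickian regime the ensemble
# variance grows linearly in time, var(t) ~ 2*D*t with constant D.

random.seed(0)
n_walkers, n_steps = 5000, 200
positions = [0.0] * n_walkers
variances = []
for t in range(1, n_steps + 1):
    positions = [x + random.gauss(0, 1) for x in positions]
    mean = sum(positions) / n_walkers
    variances.append(sum((x - mean) ** 2 for x in positions) / n_walkers)

# With unit step variance, the apparent D = var/(2t) should hover near 0.5
D_late = variances[-1] / (2 * n_steps)
assert 0.4 < D_late < 0.6
```

The ensemble average over many walkers is what plays the role of the ergodicity assumption above; a single walker, or too few, would not show a stable linear variance growth, which is the point made about evolving field-scale heterogeneity.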