WorldWideScience

Sample records for scale relativity methods

  1. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

Full Text Available Study of dynamic equations on time scales is a new area of mathematics. Time scale calculus builds a bridge between the real numbers and the integers. Two derivatives on a time scale have been introduced, called the delta and nabla derivatives: the delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of the time scale and obtained the coefficients of the model. Here there are two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. In such a situation the result corresponds to the sum of the vertical deviations between the regression equations of the forward and backward jump operators and the observed values, divided by two. We also estimated coefficients for the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We think that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.
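A minimal numerical sketch of the baseline the abstract compares against, ordinary least squares on integer time points, with the delta/nabla connection noted in comments; the data are invented for illustration, not taken from the paper:

```python
import numpy as np

# Ordinary least squares fit y ≈ b0 + b1*t on integer time points,
# solved via the normal equations (here through np.linalg.lstsq).
t = np.arange(1, 9, dtype=float)
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 7.0, 7.9, 9.2])

X = np.column_stack([np.ones_like(t), t])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]

# On the integer time scale, the delta (forward) and nabla (backward)
# derivatives of f are f(t+1)-f(t) and f(t)-f(t-1); a time-scale least
# squares fit built on each operator yields the two coefficient sets
# whose averaged deviations the abstract relates to this ordinary fit.
print(round(b0, 4), round(b1, 4))
```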

  2. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is O(N), or linear scaling, DFT, in which the computational effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3], but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3 to 6 September 2007. A noteworthy feature of the workshop was that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and onto large-scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is, non-linear-scaling) methods; this highlights the important question of crossover, that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  3. Multiple time scale methods in tokamak magnetohydrodynamics

    International Nuclear Information System (INIS)

    Jardin, S.C.

    1984-01-01

Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure, magnetic fields and currents to remain in force-balance equilibrium as they evolve. The large-mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena without changing the resistive time scale behavior. Other methods addressing the intermediate time scales are also discussed.

  4. Degeneracy relations in QCD and the equivalence of two systematic all-orders methods for setting the renormalization scale

    Directory of Open Access Journals (Sweden)

    Huan-Yu Bi

    2015-09-01

Full Text Available The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to those of the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high-order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the Rδ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R_e+e− and the Higgs partial width Γ(H→bb̄). Both methods lead to the same resummed (‘conformal’) series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {βi}-terms in the pQCD expansion are taken into account. We also show that the special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.

  5. Optimal renormalization scales and commensurate scale relations

    International Nuclear Information System (INIS)

    Brodsky, S.J.; Lu, H.J.

    1996-01-01

Commensurate scale relations relate observables to observables and thus are independent of theoretical conventions, such as the choice of intermediate renormalization scheme. The physical quantities are related at commensurate scales, which satisfy a transitivity rule ensuring that predictions are independent of the choice of an intermediate renormalization scheme. QCD can thus be tested in a new and precise way by checking that the observables track both in their relative normalization and in their commensurate scale dependence. For example, the radiative corrections to the Bjorken sum rule at a given momentum transfer Q can be predicted from measurements of the e+e− annihilation cross section at a corresponding commensurate energy scale √s ∝ Q, thus generalizing Crewther's relation to non-conformal QCD. The coefficients that appear in this perturbative expansion take the form of a simple geometric series and thus have no renormalon divergent behavior. The authors also discuss scale-fixed relations between the threshold corrections to the heavy quark production cross section in e+e− annihilation and the heavy quark coupling α_V, which is measurable in lattice gauge theory.
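The generic structure of such a relation between two effective charges can be written schematically as follows (a sketch of the form described in these records, not the paper's full derivation):

```latex
% Commensurate scale relation between effective charges A and B (schematic):
\alpha_A(Q_A) \;=\; \alpha_B(Q_B)\left(1 + r_{A/B}\,\frac{\alpha_B(Q_B)}{\pi} + \cdots\right),
\qquad Q_A \propto Q_B,
% with the ratio Q_A/Q_B fixed so that the coefficient r_{A/B} is
% independent of the number of contributing quark flavors, as in
% BLM scale setting.
```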

  6. Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method

    International Nuclear Information System (INIS)

    Cadinu, F.; Kozlowski, T.; Dinh, T.N.

    2007-01-01

Over the past decades, analyses of transient processes and accidents in nuclear power plants have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as a simulation vehicle, largely due to their deficient treatment of multi-dimensional flow (e.g. in the downcomer and lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used broadly to perform analysis of multi-dimensional flow, predominantly in non-nuclear industry and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework has led to the development of the heterogeneous multi-scale method (HMM).

  7. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods in which the scale factors act only on the related components of the integrated momenta. They can exactly preserve some first integrals of motion in discrete or continuous dynamical systems, so that the rapid growth of roundoff or truncation errors is suppressed significantly. (general)
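A toy illustration of the scaling-correction idea (a sketch, not the authors' exact scheme): after each step of a deliberately crude integrator for a unit harmonic oscillator, a scale factor applied only to the velocity restores the conserved energy, suppressing the secular drift that explicit Euler would otherwise produce:

```python
import math

# Explicit Euler step for x'' = -x; on its own, this makes the
# energy E = (v**2 + x**2)/2 grow steadily.
def step(x, v, dt):
    return x + v * dt, v - x * dt

x, v, dt = 1.0, 0.0, 0.01
E0 = 0.5 * (v * v + x * x)

for _ in range(10000):
    x, v = step(x, v, dt)
    # Scaling correction: the scale factor acts only on the velocity
    # (the "related component"), resetting the kinetic energy so that
    # the total energy returns to its initial value E0.
    kinetic_target = E0 - 0.5 * x * x
    if kinetic_target > 0.0 and v != 0.0:
        v = math.copysign(math.sqrt(2.0 * kinetic_target), v)

E = 0.5 * (v * v + x * x)
print(abs(E - E0))   # residual drift, small thanks to the correction
```

Without the correction block, the same loop multiplies the energy by roughly (1 + dt²) per step, a large cumulative error over 10,000 steps.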

  8. Scale relativity: from quantum mechanics to chaotic dynamics.

    Science.gov (United States)

    Nottale, L.

Scale relativity is a new approach to the problem of the origin of fundamental scales and of scaling laws in physics, which consists in generalizing Einstein's principle of relativity to the case of scale transformations of resolutions. We recall here how it leads to the concept of fractal space-time and to the introduction of a new complex time-derivative operator, which allows one to recover the Schrödinger equation and then to generalize it. In high energy quantum physics, it leads to the introduction of a Lorentzian renormalization group, in which the Planck length is reinterpreted as a lowest, unpassable scale, invariant under dilatations. These methods are successively applied to two problems: in quantum mechanics, that of the mass spectrum of elementary particles; in chaotic dynamics, that of the distribution of planets in the Solar System.

  9. Stepwise integral scaling method and its application to severe accident phenomena

    International Nuclear Information System (INIS)

    Ishii, M.; Zhang, G.

    1993-10-01

Severe accidents in light water reactors are characterized by the occurrence of multiphase flow with complicated phase changes, chemical reactions and various bifurcation phenomena. Because of the inherent difficulties associated with full-scale testing, scaled-down and simulation experiments are an essential part of severe accident analyses. However, one of the most significant shortcomings in the area is the lack of a well-established and reliable scaling method and scaling criteria. In view of this, the stepwise integral scaling method is developed for severe accident analyses. This new scaling method is quite different from the conventional approach. However, its focus on dominant transport mechanisms and its use of the integral response of the system make this method relatively simple to apply to very complicated multiphase flow problems. In order to demonstrate its applicability and usefulness, three case studies have been made. The phenomena considered are (1) corium dispersion in DCH, (2) corium spreading in the BWR MARK-I containment, and (3) in-core boil-off and heating processes. The results of these studies clearly indicate the effectiveness of the stepwise integral scaling method. Such a simple and systematic scaling method has not previously been available for severe accident analyses.

  10. Methods of scaling threshold color difference using printed samples

    Science.gov (United States)

    Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier

    2012-01-01

A series of printed samples, on a semi-gloss paper substrate and with magnitudes around the threshold color difference, were prepared for scaling the visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against the Z-scores. The visual color differences were thus obtained and checked with the STRESS factor. The results indicated that only the scales changed; the relative scales between pairs in the data are preserved.
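The probability-to-Z-score step described above can be sketched with the inverse standard-normal CDF (a probit transform); the proportions below are made-up illustrative values, not the paper's data:

```python
from statistics import NormalDist

# Observed proportions of "difference is perceptible" judgments for
# three hypothetical sample pairs (illustrative values only).
proportions = {"pair_A": 0.25, "pair_B": 0.50, "pair_C": 0.84}

# Probit transform: map each proportion to a Z-score via the inverse
# CDF of the standard normal distribution.
z_scores = {pair: NormalDist().inv_cdf(p) for pair, p in proportions.items()}

for pair, z in sorted(z_scores.items()):
    print(pair, round(z, 3))
```

A proportion of 0.5 maps to Z = 0 (the threshold), while proportions above and below 0.5 map to positive and negative Z-scores respectively, putting all pairs on one interval scale.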

  11. A new method for large-scale assessment of change in ecosystem functioning in relation to land degradation

    Science.gov (United States)

    Horion, Stephanie; Ivits, Eva; Verzandvoort, Simone; Fensholt, Rasmus

    2017-04-01

Ongoing pressures on European land are manifold, with extreme climate events and non-sustainable use of land resources amongst the most important drivers altering the functioning of ecosystems. The protection and conservation of European natural capital is one of the key objectives of the 7th Environmental Action Plan (EAP). The EAP stipulates that European land must be managed in a sustainable way by 2020, and the UN Sustainable Development Goals define a Land Degradation Neutral world as one of their targets. This implies that land degradation (LD) assessment of European ecosystems must be performed repeatedly, allowing for the assessment of the current state of LD as well as of changes compared to a baseline adopted by the UNCCD for the objective of land degradation neutrality. However, scientifically robust methods are still lacking for large-scale assessment of LD and for repeated, consistent mapping of the state of terrestrial ecosystems. Historical land degradation assessments based on various methods exist, but the methods are generally non-replicable or difficult to apply at continental scale (Allan et al. 2007). The current lack of research methods applicable at large spatial scales is notably caused by the non-robust definition of LD, the scarcity of field data on LD, and the complex interplay of the processes driving LD (Vogt et al., 2011). Moreover, the link between LD and changes in land use (how land-use change relates to changes in vegetation productivity and ecosystem functioning) is not straightforward. In this study we used the segmented trend method developed by Horion et al. (2016) for large-scale systematic assessment of hotspots of change in ecosystem functioning in relation to LD. This method alleviates shortcomings of the widely used linear trend model, which does not account for abrupt change, nor adequately captures the actual changes in ecosystem functioning (de Jong et al. 2013; Horion et al. 2016). Here we present a new methodology for
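A segmented (breakpoint) trend fit of the kind contrasted above with a single linear trend can be sketched as follows: try every interior breakpoint, fit a line on each side, and keep the split with the lowest total squared error. The series is synthetic, not the study's data, and real segmented-trend tools handle multiple breakpoints and significance testing:

```python
import numpy as np

def linefit_sse(t, y):
    """Sum of squared residuals of an ordinary least squares line fit."""
    X = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(resid @ resid)

def best_breakpoint(t, y, min_seg=3):
    """Exhaustive search for the single breakpoint minimizing total SSE."""
    best = None
    for k in range(min_seg, len(t) - min_seg):
        sse = linefit_sse(t[:k], y[:k]) + linefit_sse(t[k:], y[k:])
        if best is None or sse < best[1]:
            best = (k, sse)
    return best

# Synthetic series: steady greening, then an abrupt shift to decline.
t = np.arange(20, dtype=float)
y = np.where(t < 10, 1.0 + 0.2 * t, 5.0 - 0.3 * (t - 10))

k, sse = best_breakpoint(t, y)
print(k, round(sse, 6))   # recovered breakpoint index and residual error
```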

  12. Level density in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki

    2005-01-01

It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex-scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)

  13. Laboratory-scale measurements of effective relative permeability for layered sands

    Energy Technology Data Exchange (ETDEWEB)

    Butts, M.G.; Korsgaard, S.

    1996-12-31

Predictions of the impact of remediation or the extent of contamination resulting from spills of gasoline, solvents and other petroleum products must often be made in complex geological environments. Such problems can be treated by introducing the concept of effective parameters that incorporate the effects of soil layering or other heterogeneities into a large-scale flow description. Studies that derive effective multiphase parameters are few, and approximations are required to treat the non-linear multiphase flow equations. The purpose of this study is to measure effective relative permeabilities for well-defined multi-layered soils at the laboratory scale. Relative permeabilities were determined for homogeneous and layered, unconsolidated sands using the method of Jones and Roszelle (1978). The experimental data show that endpoint relative permeabilities are important in defining the shape of the relative permeability curves, but these cannot be predicted by estimation methods based on capillary pressure data. The most significant feature of the measured effective relative permeability curves is that the entrapped (residual) oil saturation is significantly larger than the residual saturation of the individual layers. This observation agrees with previous theoretical predictions of large-scale entrapment (Butts, 1993, 1995). Enhanced entrapment in heterogeneous soils has several important implications for spill remediation, for example, the reduced efficiency of direct recovery. (au) 17 refs.

  14. Laboratory-scale measurements of effective relative permeability for layered sands

    International Nuclear Information System (INIS)

    Butts, M.G.; Korsgaard, S.

    1996-01-01

Predictions of the impact of remediation or the extent of contamination resulting from spills of gasoline, solvents and other petroleum products must often be made in complex geological environments. Such problems can be treated by introducing the concept of effective parameters that incorporate the effects of soil layering or other heterogeneities into a large-scale flow description. Studies that derive effective multiphase parameters are few, and approximations are required to treat the non-linear multiphase flow equations. The purpose of this study is to measure effective relative permeabilities for well-defined multi-layered soils at the laboratory scale. Relative permeabilities were determined for homogeneous and layered, unconsolidated sands using the method of Jones and Roszelle (1978). The experimental data show that endpoint relative permeabilities are important in defining the shape of the relative permeability curves, but these cannot be predicted by estimation methods based on capillary pressure data. The most significant feature of the measured effective relative permeability curves is that the entrapped (residual) oil saturation is significantly larger than the residual saturation of the individual layers. This observation agrees with previous theoretical predictions of large-scale entrapment (Butts, 1993, 1995). Enhanced entrapment in heterogeneous soils has several important implications for spill remediation, for example, the reduced efficiency of direct recovery. (au) 17 refs

  15. Psychometric properties of the satisfaction with food-related Life Scale

    DEFF Research Database (Denmark)

    Schnettler, Berta; Miranda, Horacio; Sepúlveda, José

    2013-01-01

Objective: To evaluate the psychometric properties of the Satisfaction with Food-related Life (SWFL) scale and its relation to the Satisfaction with Life Scale (SWLS) in southern Chile. Methods: A survey was applied to a sample of 316 persons in the principal cities of southern Chile, distributed with proportional attachment per city. Results: The results of the confirmatory factor analysis showed an adequate level of internal consistency and a good fit (root mean square error of approximation = 0.071, goodness-of-fit index = 0.95, adjusted goodness-of-fit index = 0.92) to the SWFL data (1-dimensional…

  16. Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations

    Science.gov (United States)

    Mantz, A.; Allen, S. W.

    2011-01-01

Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.

  17. Work related injuries and associated factors among small scale ...

    African Journals Online (AJOL)

Objective: This study aims to assess the magnitude of work-related injury and associated factors among small-scale industrial workers in Mizan-Aman town, Bench Maji Zone, Southwest Ethiopia. Method: A cross-sectional study was conducted from February to May 2016. Data were collected using a structured face to ...

  18. Commensurate scale relations: Precise tests of quantum chromodynamics without scale or scheme ambiguity

    International Nuclear Information System (INIS)

    Brodsky, S.J.; Lu, H.J.

    1994-10-01

We derive commensurate scale relations which relate perturbatively calculable QCD observables to each other, including the annihilation ratio R_e+e−, the heavy quark potential, τ decay, and radiative corrections to structure function sum rules. For each such observable one can define an effective charge, such as α_R(√s)/π ≡ R_e+e−(√s)/(3Σe_q²) − 1. The commensurate scale relation connecting the effective charges for observables A and B has the form α_A(Q_A) = α_B(Q_B)(1 + r_A/B α_B/π + …), where the coefficient r_A/B is independent of the number of flavors contributing to coupling renormalization, as in BLM scale-fixing. The ratio of scales Q_A/Q_B is unique at leading order and guarantees that the observables A and B pass through new quark thresholds at the same physical scale. In higher orders a different renormalization scale Q*_n is assigned to each order n of the perturbative series, such that the coefficients of the series are identical to those of a conformally invariant theory. The commensurate scale relations and scales satisfy the renormalization group transitivity rule, which ensures that predictions in PQCD are independent of the choice of an intermediate renormalization scheme C. In particular, scale-fixed predictions can be made without reference to theoretically constructed singular renormalization schemes such as MS. QCD can thus be tested in a new and precise way by checking that the effective charges of observables track both in their relative normalization and in their commensurate scale dependence. The commensurate scale relations which relate the radiative corrections to the annihilation ratio R_e+e− to the radiative corrections for the Bjorken and Gross-Llewellyn Smith sum rules are particularly elegant and interesting.

  19. Test equating, scaling, and linking methods and practices

    CERN Document Server

    Kolen, Michael J

    2014-01-01

    This book provides an introduction to test equating, scaling, and linking, including those concepts and practical issues that are critical for developers and all other testing professionals.  In addition to statistical procedures, successful equating, scaling, and linking involves many aspects of testing, including procedures to develop tests, to administer and score tests, and to interpret scores earned on tests. Test equating methods are used with many standardized tests in education and psychology to ensure that scores from multiple test forms can be used interchangeably.  Test scaling is the process of developing score scales that are used when scores on standardized tests are reported. In test linking, scores from two or more tests are related to one another. Linking has received much recent attention, due largely to investigations of linking similarly named tests from different test publishers or tests constructed for different purposes. In recent years, researchers from the education, psychology, and...

  20. Testing general relativity at cosmological scales: Implementation and parameter correlations

    International Nuclear Information System (INIS)

    Dossett, Jason N.; Ishak, Mustapha; Moldenhauer, Jacob

    2011-01-01

The testing of general relativity at cosmological scales has become a possible and timely endeavor that is not only motivated by the pressing question of cosmic acceleration but also by the proposals of some extensions to general relativity that would manifest themselves at large scales of distance. We analyze here correlations between modified gravity growth parameters and some core cosmological parameters using the latest cosmological data sets including the refined Cosmic Evolution Survey 3D weak lensing. We provide the parametrized modified growth equations and their evolution. We implement known functional and binning approaches, and propose a new hybrid approach to evolve the modified gravity parameters in redshift (time) and scale. The hybrid parametrization combines a binned redshift dependence and a smooth evolution in scale avoiding a jump in the matter power spectrum. The formalism developed to test the consistency of current and future data with general relativity is implemented in a package that we make publicly available and call ISiTGR (Integrated Software in Testing General Relativity), an integrated set of modified modules for the publicly available packages CosmoMC and CAMB, including a modified version of the integrated Sachs-Wolfe-galaxy cross correlation module of Ho et al. and a new weak-lensing likelihood module for the refined Hubble Space Telescope Cosmic Evolution Survey weak gravitational lensing tomography data. We obtain parameter constraints and correlation coefficients, finding that modified gravity parameters are significantly correlated with σ_8 and mildly correlated with Ω_m, for all evolution methods. The degeneracies between σ_8 and modified gravity parameters are found to be substantial for the functional form and also for some specific bins in the hybrid and binned methods, indicating that these degeneracies will need to be taken into consideration when using future high precision data.

  1. Relations between overturning length scales at the Spanish planetary boundary layer

    Science.gov (United States)

    López, Pilar; Cano, José L.

    2016-04-01

We analyze the behavior of the maximum Thorpe displacement (d_T)max and the Thorpe scale L_T in the atmospheric boundary layer (ABL), extending previous research with new data and improving our studies related to the novel use of the Thorpe method applied to the ABL. The maximum Thorpe displacements vary between −900 m and 950 m for the different field campaigns. The maximum Thorpe displacement is always greater under convective conditions than under stable ones, independently of its sign. The Thorpe scale L_T ranges between 0.2 m and 680 m for the different data sets, which cover different stratified mixing conditions (shear-driven turbulence and convective regions). The Thorpe scale does not exceed several tens of meters under stable and neutral stratification conditions related to instantaneous density gradients. In contrast, under convective conditions, Thorpe scales are relatively large; they exceed hundreds of meters, which may be related to convective bursts. We analyze the relation between (d_T)max and the Thorpe scale L_T and deduce that they satisfy a power law. We also deduce that the exponents of the power laws differ between convective and shear-driven conditions. These different power laws could identify overturns created by different mechanisms.
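The Thorpe quantities used above can be sketched in a few lines: sort a measured profile into its gravitationally stable order, record how far each sample moved (the Thorpe displacement d_T), and take the rms as the Thorpe scale L_T. The potential-temperature profile below is synthetic, not campaign data:

```python
import numpy as np

z = np.arange(8, dtype=float)   # heights (m), evenly spaced
# Potential temperature (K); out-of-order values mark overturns.
theta = np.array([300.0, 300.4, 300.2, 300.1, 300.6, 300.5, 300.9, 301.0])

# Indices that reorder the profile so theta increases with height
# (gravitationally stable for a dry atmosphere).
order = np.argsort(theta, kind="stable")

d_T = z - z[order]                         # Thorpe displacement per level
L_T = float(np.sqrt(np.mean(d_T ** 2)))    # Thorpe scale: rms displacement
max_dT = float(np.max(np.abs(d_T)))        # maximum Thorpe displacement

print(round(L_T, 3), max_dT)
```

The displacements sum to zero by construction; their rms (L_T) and extreme value ((d_T)max) are the two overturning length scales whose power-law relation the record examines.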

  2. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Science.gov (United States)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly damped forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solution (considered to be exact) and improve on other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  3. The scaling of stress distribution under small scale yielding by T-scaling method and application to prediction of the temperature dependence on fracture toughness

    International Nuclear Information System (INIS)

    Ishihara, Kenichi; Hamada, Takeshi; Meshii, Toshiyuki

    2017-01-01

    In this paper, a new method for scaling the crack-tip stress distribution under small scale yielding conditions is proposed and named the T-scaling method. This method makes it possible to identify the different stress distributions for materials with different tensile properties but an identical load in terms of K or J. Then, by assuming that the temperature dependence of a material is represented by the temperature dependence of its stress-strain relationship, a method is proposed to predict the fracture load at an arbitrary temperature from an already known fracture load at a reference temperature. This method combines the T-scaling method with the knowledge that “the fracture stress for slip-induced cleavage fracture is temperature independent.” Once the fracture load is predicted, the fracture toughness Jc at the temperature under consideration can be evaluated by running an elastic-plastic finite element analysis. Finally, the above-mentioned framework to predict the Jc temperature dependence of a material in the ductile-to-brittle transition region was validated for the 0.55% carbon steel JIS S55C. The proposed framework appears to offer a way to resolve the difficulty the master curve approach faces in the relatively higher temperature region, by requiring only tensile tests. (author)

  4. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)
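
In the simplest single-scale, finite-dimensional setting, regularization in a Hilbert scale reduces to penalized least squares with a scale operator in the penalty. A minimal numpy sketch (L stands in for a discretization of a power of the operator generating the scale; all names are illustrative):

```python
import numpy as np

def tikhonov_hilbert_scale(A, y, L, alpha):
    """Minimize ||A x - y||^2 + alpha * ||L x||^2 via the normal equations.
    L plays the role of (a discretization of) the scale operator L^s."""
    M = A.T @ A + alpha * (L.T @ L)
    return np.linalg.solve(M, A.T @ y)
```

As alpha → 0 the solution approaches the unregularized least-squares solution; larger alpha enforces smoothness in the norm induced by L.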

  5. On the evolution of cluster scaling relations

    International Nuclear Information System (INIS)

    Diemer, Benedikt; Kravtsov, Andrey V.; More, Surhud

    2013-01-01

    Understanding the evolution of scaling relations between the observable properties of clusters and their total mass is key to realizing their potential as cosmological probes. In this study, we investigate whether the evolution of cluster scaling relations is affected by the spurious evolution of mass caused by the evolving reference density with respect to which halo masses are defined (pseudo-evolution). We use the relation between mass, M, and velocity dispersion, σ, as a test case, and show that the deviation from the M-σ relation of cluster-sized halos caused by pseudo-evolution is smaller than 10% for a wide range of mass definitions. The reason for this small impact is a tight relation between the velocity dispersion and mass profiles, σ(<R) ∝ [M(<R)/R]^(1/2); such a relation is generically expected for a variety of density profiles, as long as halos are in approximate Jeans equilibrium. Thus, as the outer 'virial' radius used to define the halo mass, R, increases due to pseudo-evolution, halos approximately preserve their M-σ relation. This result highlights the fact that tight scaling relations are the result of tight equilibrium relations between radial profiles of physical quantities. We find exceptions at very small and very large radii, where the profiles deviate from the relations they exhibit at intermediate radii. We discuss the implications of these results for other cluster scaling relations and argue that pseudo-evolution should have a small effect on most scaling relations, except for those that involve the stellar masses of galaxies. In particular, we show that the relation between stellar-mass fraction and total mass is affected by pseudo-evolution and is largely shaped by it for halo masses ≲ 10^14 M_☉.

  6. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  7. Dual-scale Galerkin methods for Darcy flow

    Science.gov (United States)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach always has similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  8. Commensurate scale relations and the Abelian correspondence principle

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1998-06-01

    Commensurate scale relations are perturbative QCD predictions which relate observable to observable at fixed relative scales, independent of the choice of intermediate renormalization scheme or other theoretical conventions. A prominent example is the generalized Crewther relation which connects the Bjorken and Gross-Llewellyn Smith deep inelastic scattering sum rules to measurements of the e+e- annihilation cross section. Commensurate scale relations also provide an extension of the standard minimal subtraction scheme which is analytic in the quark masses, has non-ambiguous scale-setting properties, and inherits the physical properties of the effective charge α_V(Q^2) defined from the heavy quark potential. The author also discusses a property of perturbation theory, the Abelian correspondence principle, which provides an analytic constraint on non-Abelian gauge theory for N_C → 0

  9. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as the power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
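
The Kronecker-product modeling mentioned above can be sketched in a few lines: a small seed probability matrix is Kronecker-powered to the target size, and an adjacency matrix is sampled edge by edge. A minimal sketch (the seed values are illustrative, not a fitted model):

```python
import numpy as np

def kronecker_probabilities(seed, k):
    """Edge-probability matrix of a stochastic Kronecker graph:
    the k-fold Kronecker power of the seed matrix."""
    P = seed.copy()
    for _ in range(k - 1):
        P = np.kron(P, seed)
    return P

def sample_graph(P, rng=None):
    """Sample a directed adjacency matrix with independent Bernoulli edges."""
    rng = np.random.default_rng(rng)
    return (rng.random(P.shape) < P).astype(int)
```

A 2x2 seed powered k times yields a 2^k-node graph whose coarse block structure echoes the seed at every scale, which is exactly the self-similar replication principle described above.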

  10. Examining Similarity Structure: Multidimensional Scaling and Related Approaches in Neuroimaging

    Directory of Open Access Journals (Sweden)

    Svetlana V. Shinkareva

    2013-01-01

    This paper covers similarity analyses, a subset of multivariate pattern analysis techniques that are based on similarity spaces defined by multivariate patterns. These techniques offer several advantages and complement other methods for brain data analyses, as they allow for comparison of representational structure across individuals, brain regions, and data acquisition methods. Particular attention is paid to multidimensional scaling and related approaches that yield spatial representations or provide methods for characterizing individual differences. We highlight unique contributions of these methods by reviewing recent applications to functional magnetic resonance imaging data and emphasize areas of caution in applying and interpreting similarity analysis methods.
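
Classical (Torgerson) multidimensional scaling, the simplest of the approaches reviewed, embeds items from a pairwise distance matrix via double centering and an eigendecomposition. A minimal numpy sketch:

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed items in k dimensions from a matrix of pairwise distances D
    using classical (Torgerson) MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                    # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]               # keep the k largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

If D is Euclidean, the embedding reproduces the distances up to rotation, reflection, and translation; for the similarity matrices used in neuroimaging, dissimilarities are first converted to distances.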

  11. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour; Chacón-Rebollo, Tomás

    2015-01-01

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base

  12. Balancing related methods for minimal realization of periodic systems

    OpenAIRE

    Varga, A.

    1999-01-01

    We propose balancing related numerically reliable methods to compute minimal realizations of linear periodic systems with time-varying dimensions. The first method belongs to the family of square-root methods with guaranteed enhanced computational accuracy and can be used to compute balanced minimal order realizations. An alternative balancing-free square-root method has the advantage of a potentially better numerical accuracy in case of poorly scaled original systems. The key numerical co...

  13. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented

  14. A hybrid method for provincial scale energy-related carbon emission allocation in China.

    Science.gov (United States)

    Bai, Hongtao; Zhang, Yingxuan; Wang, Huizhi; Huang, Yanying; Xu, He

    2014-01-01

    Achievement of carbon emission reduction targets proposed by national governments relies on provincial/state allocations. In this study, a hybrid method for provincial energy-related carbon emissions allocation in China was developed to provide a good balance between production- and consumption-based approaches. In this method, provincial energy-related carbon emissions are decomposed into direct emissions of local activities other than thermal power generation and indirect emissions as a result of electricity consumption. Based on the carbon reduction efficiency principle, the responsibility for embodied emissions of provincial product transactions is assigned entirely to the production area. The responsibility for carbon generation during the production of thermal power is borne by the electricity consumption area, which ensures that different regions with resource endowments have rational development space. Empirical studies were conducted to examine the hybrid method and three indices, per capita GDP, resource endowment index and the proportion of energy-intensive industries, were screened to preliminarily interpret the differences among China's regional carbon emissions. Uncertainty analysis and a discussion of this method are also provided herein.
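
The decomposition described above — direct non-power emissions stay with the producing province, while thermal-power emissions follow electricity consumption shares — can be illustrated with a toy allocation (all province names and numbers are hypothetical):

```python
def hybrid_allocation(direct, power_total, consumption_share):
    """direct: per-province non-power emissions (e.g. Mt CO2);
    power_total: total thermal-power generation emissions;
    consumption_share: each province's share of electricity consumption (sums to 1)."""
    return {p: direct[p] + power_total * consumption_share[p] for p in direct}
```

By construction the allocation conserves the national total: the sum over provinces equals total direct emissions plus total power-sector emissions.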

  15. Giant molecular cloud scaling relations: the role of the cloud definition

    Science.gov (United States)

    Khoperskov, S. A.; Vasiliev, E. O.; Ladeyschikov, D. A.; Sobolev, A. M.; Khoperskov, A. V.

    2016-01-01

    We investigate the physical properties of molecular clouds in disc galaxies with different morphologies: a galaxy without prominent structure, a spiral barred galaxy and a galaxy with flocculent structure. Our N-body/hydrodynamical simulations take into account non-equilibrium H2 and CO chemical kinetics, self-gravity, star formation and feedback processes. For the simulated galaxies, the scaling relations of giant molecular clouds, or so-called Larson's relations, are studied for two types of cloud definition (or extraction method): the first is based on total column density position-position (PP) data sets and the second is indicated by the CO (1-0) line emission used in position-position-velocity (PPV) data. We find that the cloud populations obtained using both cloud extraction methods generally have similar physical parameters, except that for the CO data the mass spectrum of clouds has a tail with low-mass objects M ~ 10^3-10^4 M⊙. Owing to a varying column density threshold, the power-law indices in the scaling relations are significantly changed. In contrast, the relations are invariant to the CO brightness temperature threshold. Finally, we find that the mass spectra of clouds for PPV data are almost insensitive to the galactic morphology, whereas the spectra for PP data demonstrate significant variation.
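
Larson-type scaling relations are power laws, usually estimated by a least-squares fit in log-log space, where the fitted slope is the power-law index discussed above. A minimal sketch of such a fit:

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = C * x**a by linear least squares in log-log space.
    Returns (C, a): the normalization and the power-law index."""
    a, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return float(np.exp(log_c)), float(a)
```

Applied to, e.g., cloud sizes and velocity dispersions, the returned index a is what changes when the column density threshold of the cloud definition is varied.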

  16. Modified dispersion relations, inflation, and scale invariance

    Science.gov (United States)

    Bianco, Stefano; Friedhoff, Victor Nicolai; Wilson-Ewing, Edward

    2018-02-01

    For a certain type of modified dispersion relations, the vacuum quantum state for very short wavelength cosmological perturbations is scale-invariant, and it has been suggested that this may be the source of the scale-invariance observed in the temperature anisotropies in the cosmic microwave background. We point out that for this scenario to be possible, it is necessary to redshift these short wavelength modes to cosmological scales in such a way that the scale-invariance is not lost. This requires nontrivial background dynamics before the onset of standard radiation-dominated cosmology; we demonstrate that one possible solution is inflation with a sufficiently large Hubble rate, for which slow roll is not necessary. In addition, we also show that if the slow-roll condition is added to inflation with a large Hubble rate, then for any power-law modified dispersion relation, quantum vacuum fluctuations become nearly scale-invariant when they exit the Hubble radius.

  17. Scaling relations for eddy current phenomena

    International Nuclear Information System (INIS)

    Dodd, C.V.; Deeds, W.E.

    1975-11-01

    Formulas are given for various electromagnetic quantities for coils in the presence of conductors, with the scaling parameters factored out so that small-scale model experiments can be related to large-scale apparatus. Particular emphasis is given to such quantities as eddy current heating, forces, power, and induced magnetic fields. For axially symmetric problems, closed-form integrals are available for the vector potential and all the other quantities obtainable from it. For unsymmetrical problems, a three-dimensional relaxation program can be used to obtain the vector potential and then the derivable quantities. Data on experimental measurements are given to verify the validity of the scaling laws for forces, inductances, and impedances. Indirectly these also support the validity of the scaling of the vector potential and all of the other quantities obtained from it

  18. Method of producing nano-scaled inorganic platelets

    Science.gov (United States)

    Zhamu, Aruna; Jang, Bor Z.

    2012-11-13

    The present invention provides a method of exfoliating a layered material (e.g., transition metal dichalcogenide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites.

  19. Variable scaling method and Stark effect in hydrogen atom

    International Nuclear Information System (INIS)

    Choudhury, R.K.R.; Ghosh, B.

    1983-09-01

    By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator, we have found simple formulas for the energy eigenvalues of the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique after Armstrong, and then the variable scaling method has been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)

  20. Planck-scale-modified dispersion relations in FRW spacetime

    Science.gov (United States)

    Rosati, Giacomo; Amelino-Camelia, Giovanni; Marcianò, Antonino; Matassa, Marco

    2015-12-01

    In recent years, Planck-scale modifications of the dispersion relation have been attracting increasing interest also from the viewpoint of possible applications in astrophysics and cosmology, where spacetime curvature cannot be neglected. Nonetheless, the interplay between Planck-scale effects and spacetime curvature is still poorly understood, particularly in cases where curvature is not constant. These challenges have been so far postponed by relying on an ansatz, first introduced by Jacob and Piran. We propose here a general strategy of analysis of the effects of modifications of the dispersion relation in Friedmann-Robertson-Walker spacetimes, applicable both to cases where the relativistic equivalence of frames is spoiled ("preferred-frame scenarios") and to the alternative possibility of "DSR-relativistic theories," theories that are fully relativistic but with relativistic laws deformed so that the modified dispersion relation is observer independent. We show that the Jacob-Piran ansatz implicitly assumes that spacetime translations are not affected by the Planck scale, while under rather general conditions, the same Planck-scale quantum-spacetime structures producing modifications of the dispersion relation also affect translations. Through the explicit analysis of one of the effects produced by modifications of the dispersion relation, an effect amounting to Planck-scale corrections to travel times, we show that our concerns are not merely conceptual but rather can have significant quantitative implications.

  1. Origins of scaling relations in nonequilibrium growth

    International Nuclear Information System (INIS)

    Escudero, Carlos; Korutcheva, Elka

    2012-01-01

    Scaling and hyperscaling laws provide exact relations among critical exponents describing the behavior of a system at criticality. For nonequilibrium growth models with a conserved drift, there exist few of them. One such relation is α + z = 4, found to be inexact in a renormalization group calculation for several classical models in this field. Herein, we focus on the two-dimensional case and show that it is possible to construct conserved surface growth equations for which the relation α + z = 4 is exact in the renormalization group sense. We explain the presence of this scaling law in terms of the existence of geometric principles dominating the dynamics. (paper)

  2. Universal Scaling Relations in Scale-Free Structure Formation

    Science.gov (United States)

    Guszejnov, Dávid; Hopkins, Philip F.; Grudić, Michael Y.

    2018-04-01

    A large number of astronomical phenomena exhibit remarkably similar scaling relations. The most well-known of these is the mass distribution dN/dM ∝ M^-2, which (to first order) describes stars, protostellar cores, clumps, giant molecular clouds, star clusters and even dark matter halos. In this paper we propose that this ubiquity is not a coincidence and that it is the generic result of scale-free structure formation where the different scales are uncorrelated. We show that all such systems produce a mass function proportional to M^-2 and a column density distribution with a power-law tail of dA/d lnΣ ∝ Σ^-1. In the case where structure formation is controlled by gravity the two-point correlation becomes ξ_2D ∝ R^-1. Furthermore, structures formed by such processes (e.g. young star clusters, DM halos) tend to a ρ ∝ R^-3 density profile. We compare these predictions with observations, analytical fragmentation cascade models, semi-analytical models of gravito-turbulent fragmentation and detailed "full physics" hydrodynamical simulations. We find that these power laws are good first-order descriptions in all cases.
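
A dN/dM ∝ M^-2 mass function truncated to [M_min, M_max] can be drawn by inverse-transform sampling, since its CDF inverts in closed form. A sketch (illustrative, not the authors' code):

```python
import numpy as np

def sample_m2(n, m_min, m_max, seed=None):
    """Draw n masses from dN/dM ∝ M**-2 on [m_min, m_max].
    CDF: F(M) = (1/m_min - 1/M) / (1/m_min - 1/m_max), inverted analytically."""
    u = np.random.default_rng(seed).random(n)
    return 1.0 / (1.0 / m_min - u * (1.0 / m_min - 1.0 / m_max))
```

Because the M^-2 law puts equal mass in every logarithmic interval, the samples span the full range while most objects sit near m_min.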

  3. A Method of Vector Map Multi-scale Representation Considering User Interest on Subdivision Grid

    Directory of Open Access Journals (Sweden)

    YU Tong

    2016-12-01

    Compared with traditional spatial data models and methods, global subdivision grids show a great advantage in the organization and expression of massive spatial data. In view of this, a method of vector map multi-scale representation considering user interest on a subdivision grid is proposed. First, a spatial interest field is built using a large number of POI data points to describe the spatial distribution of user interest in geographic information. Second, spatial factors are classified and graded, and their representation scale ranges are determined. Finally, different levels of subdivision surfaces are divided based on GeoSOT subdivision theory, and the corresponding relation between subdivision level and scale is established. According to the user interest of the subdivision surfaces, spatial features can be expressed in different degrees of detail, realizing multi-scale representation of spatial data based on user interest. The experimental results show that this method can not only satisfy users' general-to-detail and important-to-secondary spatial cognition demands, but also achieve a better multi-scale representation effect.
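
The correspondence between subdivision level and map scale follows the quadtree rule that each level halves the cell edge. A generic sketch (a 360° base cell is assumed for illustration; the actual GeoSOT grid uses its own extended base, so treat this as a simplified stand-in):

```python
import math

def cell_edge_degrees(level, base=360.0):
    """Edge length (degrees) of a quadtree cell at a given subdivision level."""
    return base / (2 ** level)

def level_for_resolution(target_deg, base=360.0):
    """Coarsest level whose cells are no larger than target_deg."""
    return max(0, math.ceil(math.log2(base / target_deg)))
```

Mapping each representation scale range to such a level is what lets features of different importance be attached to coarser or finer subdivision surfaces.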

  4. Temperature scaling method for Markov chains.

    Science.gov (United States)

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
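
The idea of reusing one chain at several temperatures is closely related to Boltzmann reweighting: configurations sampled at one temperature receive weights appropriate to another. This sketch shows that related reweighting step, not the authors' exact TeS algorithm (k_B set to 1 for illustration):

```python
import math

def reweight(energies, t_from, t_to, k_b=1.0):
    """Normalized weights that rescale samples drawn at t_from to t_to."""
    d_beta = 1.0 / (k_b * t_to) - 1.0 / (k_b * t_from)
    w = [math.exp(-d_beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

def reweighted_mean(observable, energies, t_from, t_to):
    """Estimate <observable> at t_to from samples generated at t_from."""
    w = reweight(energies, t_from, t_to)
    return sum(o * wi for o, wi in zip(observable, w))
```

When t_to equals t_from the weights are uniform and the plain sample mean is recovered; moving t_to below t_from shifts weight toward low-energy configurations, as expected. Such reweighting degrades when the two temperatures are far apart, which is the practical limit of any chain-scaling scheme.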

  5. A novel fruit shape classification method based on multi-scale analysis

    Science.gov (United States)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns in the automated inspection and sorting of fruits, and it remains a difficult problem. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and its boundary energy distribution at multiple scales is explored for shape extraction. MSED offers not only the main energy, which represents primary shape information at the lower scales, but also subordinate energy, which represents local shape information at the higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification, namely: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelets and classification by a BP neural network. Here, shape resampling draws 256 boundary pixels from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function is defined and an effective method to select a start point is given through maximal expectation, which overcomes the inconvenience of traditional methods and yields rotation invariance. The method classifies normal citrus and serious abnormality relatively well, with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than traditional methods. The global result meets the requirements of fruit grading.
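
The energy-decomposition step can be approximated without a wavelet library by splitting the FFT power of the resampled boundary signature into dyadic frequency bands — a rough stand-in for the paper's wavelet-based MSED (band layout and normalization here are assumptions):

```python
import numpy as np

def multiscale_energy(radius_signature, n_bands=4):
    """Fraction of boundary-signal energy in dyadic frequency bands,
    ordered from finest (high frequency) to coarsest."""
    r = np.asarray(radius_signature, dtype=float)
    p = np.abs(np.fft.rfft(r - r.mean())) ** 2   # power spectrum, DC removed
    bands, hi = [], len(p)
    for _ in range(n_bands):
        lo = hi // 2
        bands.append(p[lo:hi].sum())             # energy in [lo, hi)
        hi = lo
    total = p[1:].sum()
    return [b / total for b in bands] if total > 0 else [0.0] * n_bands
```

A smooth, nearly circular fruit concentrates energy in the coarse bands, while surface defects and lobes push energy into the fine bands, which is the discriminative signal used for classification.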

  6. Experimental methods for laboratory-scale ensilage of lignocellulosic biomass

    International Nuclear Information System (INIS)

    Tanjore, Deepti; Richard, Tom L.; Marshall, Megan N.

    2012-01-01

    Anaerobic fermentation is a potential storage method for lignocellulosic biomass in biofuel production processes. Since biomass is seasonally harvested, stocks are often dried or frozen at laboratory scale prior to fermentation experiments. Such treatments prior to fermentation studies cause irreversible changes in the plant cells, influencing the initial state of biomass and thereby the progression of the fermentation process itself. This study investigated the effects of drying, refrigeration, and freezing relative to freshly harvested corn stover in lab-scale ensilage studies. Particle sizes, as well as post-ensilage drying temperatures for compositional analysis, were tested to identify the appropriate sample processing methods. After 21 days of ensilage, the lowest pH value (3.73 ± 0.03), lowest dry matter loss (4.28 ± 0.26 g (100 g)^-1 DM), and highest water soluble carbohydrate (WSC) concentrations (7.73 ± 0.26 g (100 g)^-1 DM) were observed in control biomass (stover ensiled within 12 h of harvest without any treatments). WSC concentration was significantly reduced in samples refrigerated for 7 days prior to ensilage (3.86 ± 0.49 g (100 g)^-1 DM). However, biomass frozen prior to ensilage produced statistically similar results to the fresh biomass control, especially in treatments with cell wall degrading enzymes. Grinding to decrease particle size reduced the variance amongst replicates for pH values of individual reactors to a minor extent. Drying biomass prior to extraction of WSCs resulted in degradation of the carbohydrates and a reduced estimate of their concentrations. The methods developed in this study can be used to improve ensilage experiments and thereby help in developing ensilage as a storage method for biofuel production. -- Highlights: ► Laboratory-scale methods to assess the influence of ensilage on biofuel production. ► Drying, freezing, and refrigeration of biomass influenced microbial fermentation. ► Freshly ensiled stover exhibited

  7. Non-Abelian gauge field theory in scale relativity

    International Nuclear Information System (INIS)

    Nottale, Laurent; Célérier, Marie-Noëlle; Lehner, Thierry

    2006-01-01

    Gauge field theory is developed in the framework of scale relativity. In this theory, space-time is described as a nondifferentiable continuum, which implies it is fractal, i.e., explicitly dependent on internal scale variables. Owing to the principle of relativity that has been extended to scales, these scale variables can themselves become functions of the space-time coordinates. Therefore, a coupling is expected between displacements in the fractal space-time and the transformations of these scale variables. In previous works, an Abelian gauge theory (electromagnetism) has been derived as a consequence of this coupling for global dilations and/or contractions. We consider here more general transformations of the scale variables by taking into account separate dilations for each of them, which yield non-Abelian gauge theories. We identify these transformations with the usual gauge transformations. The gauge fields naturally appear as a new geometric contribution to the total variation of the action involving these scale variables, while the gauge charges emerge as the generators of the scale transformation group. A generalized action is identified with the scale-relativistic invariant. The gauge charges are the conservative quantities, conjugates of the scale variables through the action, which find their origin in the symmetries of the ''scale-space.'' We thus recover in a geometric way the expression for the covariant derivative of gauge theory. Adding the requirement that under the scale transformations the fermion multiplets and the boson fields transform such that the derived Lagrangian remains invariant, we obtain gauge theories as a consequence of scale symmetries issued from a geometric space-time description

  8. PHIBSS: Unified Scaling Relations of Gas Depletion Time and Molecular Gas Fractions

    Science.gov (United States)

    Tacconi, L. J.; Genzel, R.; Saintonge, A.; Combes, F.; García-Burillo, S.; Neri, R.; Bolatto, A.; Contini, T.; Förster Schreiber, N. M.; Lilly, S.; Lutz, D.; Wuyts, S.; Accurso, G.; Boissier, J.; Boone, F.; Bouché, N.; Bournaud, F.; Burkert, A.; Carollo, M.; Cooper, M.; Cox, P.; Feruglio, C.; Freundlich, J.; Herrera-Camus, R.; Juneau, S.; Lippa, M.; Naab, T.; Renzini, A.; Salome, P.; Sternberg, A.; Tadaki, K.; Übler, H.; Walter, F.; Weiner, B.; Weiss, A.

    2018-02-01

    This paper provides an update of our previous scaling relations between galaxy-integrated molecular gas masses, stellar masses, and star formation rates (SFRs), in the framework of the star formation main sequence (MS), with the main goal of testing for possible systematic effects. For this purpose our new study combines three independent methods of determining molecular gas masses from CO line fluxes, far-infrared dust spectral energy distributions, and ∼1 mm dust photometry, in a large sample of 1444 star-forming galaxies between z = 0 and 4. The sample covers the stellar mass range log(M_*/M_⊙) = 9.0–11.8, and SFRs relative to that on the MS, δMS = SFR/SFR(MS), from 10^(-1.3) to 10^(2.2). Our most important finding is that all data sets, despite the different techniques and analysis methods used, follow the same scaling trends, once method-to-method zero-point offsets are minimized and uncertainties are properly taken into account. The molecular gas depletion time t_depl, defined as the ratio of molecular gas mass to SFR, scales as (1 + z)^(-0.6) × (δMS)^(-0.44) and is only weakly dependent on stellar mass. The ratio of molecular to stellar mass μ_gas depends on (1 + z)^(2.5) × (δMS)^(0.52) × (M_*)^(-0.36), which tracks the evolution of the specific SFR. The redshift dependence of μ_gas requires a curvature term, as may the mass dependences of t_depl and μ_gas. We find no or only weak correlations of t_depl and μ_gas with optical size R or surface density once one removes the above scalings, but we caution that optical sizes may not be appropriate for the high gas and dust columns at high z. Based on observations of an IRAM Legacy Program carried out with NOEMA, operated by the Institute for Radio Astronomy in the Millimetre Range (IRAM), which is funded by a partnership of INSU/CNRS (France), MPG (Germany), and IGN (Spain).
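The two reported power laws are simple enough to evaluate directly. A minimal sketch follows; the normalizations `t0` and `mu0` and the pivot mass `mstar0` are illustrative assumptions, not fitted values from the paper:

```python
def depletion_time(z, delta_ms, t0=1.0):
    """Molecular gas depletion time following the reported scaling
    t_depl ∝ (1 + z)^(-0.6) × (δMS)^(-0.44).  t0 (the value at z = 0
    on the main sequence) is an illustrative normalization."""
    return t0 * (1.0 + z) ** -0.6 * delta_ms ** -0.44

def gas_fraction(z, delta_ms, mstar, mu0=1.0, mstar0=10 ** 10.7):
    """Molecular-to-stellar mass ratio following
    μ_gas ∝ (1 + z)^(2.5) × (δMS)^(0.52) × (M_*)^(-0.36).
    mu0 and the pivot mass mstar0 are illustrative assumptions."""
    return mu0 * (1.0 + z) ** 2.5 * delta_ms ** 0.52 * (mstar / mstar0) ** -0.36

# A main-sequence galaxy (δMS = 1) at z = 2 has a shorter depletion time
# and a higher gas fraction than its z = 0 counterpart:
print(depletion_time(2.0, 1.0))              # ≈ 0.517, i.e. shorter
print(gas_fraction(2.0, 1.0, 10 ** 10.7))    # ≈ 15.6, i.e. much gas-richer
```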

  9. A new method for estimating carbon dioxide emissions from transportation at fine spatial scales

    Energy Technology Data Exchange (ETDEWEB)

    Shu Yuqin [School of Geographical Science, South China Normal University, Guangzhou 510631 (China); Lam, Nina S N; Reams, Margaret, E-mail: gis_syq@126.com, E-mail: nlam@lsu.edu, E-mail: mreams@lsu.edu [Department of Environmental Sciences, Louisiana State University, Baton Rouge, 70803 (United States)

    2010-10-15

    Detailed estimates of carbon dioxide (CO{sub 2}) emissions at fine spatial scales are useful to both modelers and decision makers who are faced with the problem of global warming and climate change. Globally, transport-related emissions of carbon dioxide are growing. This letter presents a new method based on the volume-preserving principle in the areal interpolation literature to disaggregate transportation-related CO{sub 2} emission estimates from the county-level scale to a 1 km{sup 2} grid scale. The proposed volume-preserving interpolation (VPI) method, together with the distance-decay principle, was used to derive emission weights for each grid cell based on its proximity to highways, roads, railroads, waterways, and airports. The total CO{sub 2} emission value summed from the grid cells within a county is made equal to the original county-level estimate, thus enforcing the volume-preserving property. The method was applied to downscale the transportation-related CO{sub 2} emission values by county (i.e. parish) for the state of Louisiana into 1 km{sup 2} grids. The results reveal a more realistic spatial pattern of CO{sub 2} emissions from transportation, which can be used to identify emission 'hot spots'. Of the four highest transportation-related CO{sub 2} emission hotspots in Louisiana, high-emission grid cells covered virtually the entire East Baton Rouge Parish and Orleans Parish, whereas CO{sub 2} emissions in Jefferson Parish (New Orleans suburb) and Caddo Parish (city of Shreveport) were more unevenly distributed. We argue that the new method is sound in principle, flexible in practice, and the resultant estimates are more accurate than previous gridding approaches.
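The core of the VPI idea can be sketched in a few lines: assign each grid cell a distance-decay weight based on its proximity to transport features, then renormalize so the cells sum exactly to the county total. The inverse-distance weight form and the parameter `beta` below are illustrative assumptions, not the paper's calibrated weights:

```python
import math

def downscale_emissions(county_total, cells, transport_points, beta=1.0):
    """Volume-preserving downscaling sketch.  `cells` are (x, y) grid-cell
    centroids, `transport_points` are (x, y) locations of transport
    features (roads, rail, etc.).  Weights decay with distance to the
    nearest feature and are renormalized so that the gridded values sum
    exactly to the county total -- the volume-preserving property."""
    weights = []
    for (cx, cy) in cells:
        d = min(math.hypot(cx - tx, cy - ty) for (tx, ty) in transport_points)
        weights.append(1.0 / (1.0 + d) ** beta)   # distance-decay weight
    total_w = sum(weights)
    return [county_total * w / total_w for w in weights]

grid = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]    # three cell centroids
roads = [(0.0, 0.0)]                            # one transport feature
vals = downscale_emissions(100.0, grid, roads)
print(round(sum(vals), 6))   # 100.0 -- the county total is preserved
```

Cells nearer the road receive larger shares, while the summed total stays fixed regardless of the weighting chosen.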

  10. The Tunneling Method for Global Optimization in Multidimensional Scaling.

    Science.gov (United States)

    Groenen, Patrick J. F.; Heiser, Willem J.

    1996-01-01

    A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
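The local-search half of the method minimizes the STRESS of a configuration under a Minkowski distance. A minimal sketch of that objective follows (the tunneling step itself, which hunts for a different configuration at the same STRESS value before resuming descent, is omitted):

```python
def minkowski_dist(a, b, p=2.0):
    """Minkowski distance with exponent p (p = 2 is Euclidean)."""
    return sum(abs(ai - bi) ** p for ai, bi in zip(a, b)) ** (1.0 / p)

def stress(config, deltas, p=2.0):
    """Raw STRESS: the sum over object pairs of squared differences
    between the dissimilarities deltas[i][j] and the distances of the
    current configuration."""
    s = 0.0
    for i in range(len(config)):
        for j in range(i + 1, len(config)):
            s += (deltas[i][j] - minkowski_dist(config[i], config[j], p)) ** 2
    return s

# Three collinear points reproduce their dissimilarities exactly:
deltas = {0: {1: 1.0, 2: 3.0}, 1: {2: 2.0}}
print(stress([(0.0,), (1.0,), (3.0,)], deltas))  # 0.0
```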

  11. Scaling Relations of Starburst-driven Galactic Winds

    International Nuclear Information System (INIS)

    Tanner, Ryan; Cecil, Gerald; Heitsch, Fabian

    2017-01-01

    Using synthetic absorption lines generated from 3D hydrodynamical simulations, we explore how the velocity of a starburst-driven galactic wind correlates with the star formation rate (SFR) and SFR density. We find strong correlations for neutral and low ionized gas, but no correlation for highly ionized gas. The correlations for neutral and low ionized gas only hold for SFRs below a critical limit set by the mass loading of the starburst, above which point the scaling relations flatten abruptly. Below this point the scaling relations depend on the temperature regime being probed by the absorption line, not on the mass loading. The exact scaling relation depends on whether the maximum or mean velocity of the absorption line is used. We find that the outflow velocity of neutral gas can be up to five times lower than the average velocity of ionized gas, with the velocity difference increasing for higher ionization states. Furthermore, the velocity difference depends on both the SFR and mass loading of the starburst. Thus, absorption lines of neutral or low ionized gas cannot easily be used as a proxy for the outflow velocity of the hot gas.

  12. Scaling Relations of Starburst-driven Galactic Winds

    Energy Technology Data Exchange (ETDEWEB)

    Tanner, Ryan [Department of Chemistry and Physics, Augusta University, Augusta, GA 30912 (United States); Cecil, Gerald; Heitsch, Fabian, E-mail: rytanner@augusta.edu [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3255 (United States)

    2017-07-10

    Using synthetic absorption lines generated from 3D hydrodynamical simulations, we explore how the velocity of a starburst-driven galactic wind correlates with the star formation rate (SFR) and SFR density. We find strong correlations for neutral and low ionized gas, but no correlation for highly ionized gas. The correlations for neutral and low ionized gas only hold for SFRs below a critical limit set by the mass loading of the starburst, above which point the scaling relations flatten abruptly. Below this point the scaling relations depend on the temperature regime being probed by the absorption line, not on the mass loading. The exact scaling relation depends on whether the maximum or mean velocity of the absorption line is used. We find that the outflow velocity of neutral gas can be up to five times lower than the average velocity of ionized gas, with the velocity difference increasing for higher ionization states. Furthermore, the velocity difference depends on both the SFR and mass loading of the starburst. Thus, absorption lines of neutral or low ionized gas cannot easily be used as a proxy for the outflow velocity of the hot gas.

  13. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kuhn, William L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rector, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  14. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    Science.gov (United States)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems and involves latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured with questionnaires that apply an attitude-scale model yield data in the form of scores, which must be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained with the method of successive interval (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient; the transformation method that produces scale data yielding path coefficients (parameter estimators) with smaller variances is therefore the better one. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using MSI- and SRS-transformed data are equally efficient. For simulation data with high correlation between items (0.7–0.9), on the other hand, the MSI method is about 1.3 times more efficient than the SRS method.
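The efficiency comparison reduces to a ratio of sampling variances of the path coefficients. A minimal sketch; the direction of the ratio (SRS variance over MSI variance) and the replicate values are assumptions for illustration:

```python
def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def relative_efficiency(coefs_srs, coefs_msi):
    """ER = Var(SRS-based path coefficients) / Var(MSI-based ones).
    ER = 1: the two transformations are equally efficient;
    ER > 1: MSI yields the smaller variance, i.e. is more efficient."""
    return variance(coefs_srs) / variance(coefs_msi)

# Replicated path-coefficient estimates (illustrative numbers only):
srs = [0.41, 0.47, 0.52, 0.44, 0.50]
msi = [0.45, 0.47, 0.49, 0.46, 0.48]
print(relative_efficiency(srs, msi) > 1.0)  # True: MSI more efficient here
```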

  15. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what…

  16. A Modified Conjugacy Condition and Related Nonlinear Conjugate Gradient Method

    Directory of Open Access Journals (Sweden)

    Shengwei Yao

    2014-01-01

    Full Text Available The conjugate gradient (CG) method has played a special role in solving large-scale nonlinear optimization problems due to its simplicity and very low memory requirements. In this paper, we propose a new conjugacy condition which is similar to that of Dai and Liao (2001). Based on this condition, the related nonlinear conjugate gradient method is given. Under some mild conditions, the given method is globally convergent with the strong Wolfe-Powell line search for general functions. The numerical experiments show that the proposed method is very robust and efficient.
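The shape of such a method can be sketched with a Dai-Liao-type parameter β = gₖ₊₁·(yₖ − t·sₖ)/(dₖ·yₖ). This is not the paper's exact update: a fixed step size and a non-negativity restart replace the strong Wolfe-Powell line search, as simplifications for illustration only:

```python
def nonlinear_cg(grad, x0, t=0.1, lr=0.25, iters=200):
    """Nonlinear CG sketch with a Dai-Liao-type conjugacy parameter
    beta = g_new . (y - t*s) / (d . y), where s is the last step and
    y the gradient change.  Fixed step size + restart safeguard stand in
    for a proper strong Wolfe-Powell line search."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                       # initial steepest-descent direction
    for _ in range(iters):
        s = [lr * di for di in d]               # step s_k = alpha * d_k
        x = [xi + si for xi, si in zip(x, s)]
        g_new = grad(x)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        dy = sum(di * yi for di, yi in zip(d, y))
        if abs(dy) < 1e-14:                     # gradient change negligible: done
            break
        beta = sum(gn * (yi - t * si) for gn, yi, si in zip(g_new, y, s)) / dy
        beta = max(beta, 0.0)                   # restart safeguard
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Minimize f(x) = (x0 - 1)^2 + (x1 + 2)^2; the gradient is 2*(x - target).
sol = nonlinear_cg(lambda x: [2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)], [0.0, 0.0])
print([round(v, 3) for v in sol])  # [1.0, -2.0]
```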

  17. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on the GPU using CUDA to accelerate the computation.

  18. Continuum Level Density in Complex Scaling Method

    International Nuclear Information System (INIS)

    Suzuki, R.; Myo, T.; Kato, K.

    2005-01-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique.

  19. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP)-model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha-1 yr-1 depending on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be either due to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha-1 yr-1 by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha-1 yr-1 are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model.

  20. Coulometric-potentiometric determination of autoprotolysis constant and relative acidity scale of water

    Directory of Open Access Journals (Sweden)

    Džudović Radmila M.

    2010-01-01

    Full Text Available The autoprotolysis constant and relative acidity scale of water were determined by applying the coulometric-potentiometric method and a hydrogen/palladium (H2/Pd) generator anode. In the described procedure for the evaluation of the autoprotolysis constant, a strong base generated coulometrically at the platinum cathode in situ in the electrolytic cell, in the presence of sodium perchlorate as the supporting electrolyte, is titrated with hydrogen ions obtained by the anodic oxidation of hydrogen dissolved in the palladium electrode. The titration was carried out with a glass-SCE electrode pair at 25.0±0.1°C. The value obtained, pKw = 13.91 ± 0.06, is in agreement with literature data. The range of the acidity scale of water is determined from the difference between the half-neutralization potentials of electrogenerated perchloric acid and of sodium hydroxide in a sodium perchlorate medium. The half-neutralization potentials were measured using both glass-SCE and (H2/Pd)ind-SCE electrode pairs. A wider range of the relative acidity scale of water was obtained with the glass-SCE electrode pair.
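Converting the half-neutralization potential difference into a scale width in pH units is a one-line Nernstian calculation. A sketch under the assumption of ideal Nernstian electrode response; the potential values used in the example are illustrative, not taken from the paper:

```python
def acidity_scale_range(e_half_acid_mV, e_half_base_mV, T=298.15):
    """Width of the relative acidity scale in pH units: the difference
    between the half-neutralization potentials of the electrogenerated
    strong acid and strong base, divided by the Nernstian slope
    2.303*R*T/F (about 59.2 mV per pH unit at 25 degC)."""
    R, F = 8.314462, 96485.332           # J/(mol K), C/mol
    slope_mV = 2.303 * R * T / F * 1000.0
    return (e_half_acid_mV - e_half_base_mV) / slope_mV

# An 828 mV span corresponds to about 14 pH units, the order of magnitude
# expected from pKw ≈ 13.9 (illustrative input values):
print(round(acidity_scale_range(414.0, -414.0), 1))  # 14.0
```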

  1. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-01-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to ''address uncertainties and increase confidence in the projected, full-scale mixing performance and operations'' in the Waste Treatment and Immobilization Plant (WTP).

  2. Connotations of pixel-based scale effect in remote sensing and the modified fractal-based analysis method

    Science.gov (United States)

    Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu

    2017-06-01

    Scale problems are a major source of concern in the field of remote sensing. Since remote sensing is a complex technological system, the connotations of scale and scale effect in remote sensing are not yet fully understood. Thus, this paper first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measure for analyzing pixel-based scale. However, traditional fractal dimension calculations do not consider the impact of spatial resolution, so the change of the scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes to use spatial resolution as the modified scale parameter of two fractal methods to further analyze the pixel-based scale effect. To verify the results of the two modified methods, MFBM (Modified Windowed Fractal Brownian Motion Based on the Surface Area) and MDBM (Modified Windowed Double Blanket Method), an existing scale effect analysis method (the information entropy method) is used for evaluation. Six sub-regions of building areas and farmland areas cut out from QuickBird images were used as the experimental data. The results of the experiment show that both the fractal dimension and the information entropy present the same trend with decreasing spatial resolution, and some inflection points appear at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. Therefore, the experimental results indicate that the modified fractal methods are effective in reflecting the pixel-based scale effect in remote sensing.
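The basic mechanics of estimating a fractal dimension across scales can be illustrated with plain box counting. This is a generic stand-in for the paper's windowed methods (MFBM/MDBM), with the box size playing the role of the scale parameter; it is not the authors' implementation:

```python
import math

def box_count_dimension(points, sizes):
    """Box-counting estimate of the fractal dimension of a 2D point set:
    count occupied boxes N(s) at each box size s and fit log N(s)
    against log(1/s) by least squares -- the slope is the dimension."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(px // s), int(py // s)) for (px, py) in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# A densely sampled straight line should give a dimension close to 1:
line = [(i * 0.01, 0.0) for i in range(1000)]
print(round(box_count_dimension(line, [0.1, 0.2, 0.4]), 2))  # 1.0
```

Repeating the fit at a series of resampled resolutions, as the paper proposes, then exposes the inflection points where the dimension-versus-resolution curve changes behavior.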

  3. Elements of a method to scale ignition reactor Tokamak

    International Nuclear Information System (INIS)

    Cotsaftis, M.

    1984-08-01

    Due to unavoidable uncertainties in present scaling laws when projected to the thermonuclear regime, a method is proposed to minimize these uncertainties when determining the main parameters of an ignited tokamak. The method mainly consists in searching for a domain, if any, in an adapted parameter space that allows ignition but is least sensitive to possible changes in the scaling laws. In other words, the ignition domain sought is the intersection of all possible ignition domains corresponding to all possible scaling laws produced by all possible transports.

  4. Scheme-Independent Predictions in QCD: Commensurate Scale Relations and Physical Renormalization Schemes

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.

    1998-01-01

    Commensurate scale relations are perturbative QCD predictions which relate observable to observable at fixed relative scale, such as the ''generalized Crewther relation'', which connects the Bjorken and Gross-Llewellyn Smith deep inelastic scattering sum rules to measurements of the e + e - annihilation cross section. All non-conformal effects are absorbed by fixing the ratio of the respective momentum transfer and energy scales. In the case of fixed-point theories, commensurate scale relations relate both the ratio of couplings and the ratio of scales as the fixed point is approached. The relations between the observables are independent of the choice of intermediate renormalization scheme or other theoretical conventions. Commensurate scale relations also provide an extension of the standard minimal subtraction scheme, which is analytic in the quark masses, has non-ambiguous scale-setting properties, and inherits the physical properties of the effective charge α V (Q 2 ) defined from the heavy quark potential. The application of the analytic scheme to the calculation of quark-mass-dependent QCD corrections to the Z width is also reviewed

  5. Cosmology and cluster halo scaling relations

    NARCIS (Netherlands)

    Araya-Melo, Pablo A.; van de Weygaert, Rien; Jones, Bernard J. T.

    2009-01-01

    We explore the effects of dark matter and dark energy on the dynamical scaling properties of galaxy clusters. We investigate the cluster Faber-Jackson (FJ), Kormendy and Fundamental Plane (FP) relations between the mass, radius and velocity dispersion of cluster-sized haloes in cosmological N-body

  6. Short scales to assess cannabis-related problems: a review of psychometric properties

    Directory of Open Access Journals (Sweden)

    Klempova Danica

    2008-12-01

    Full Text Available Abstract. Aims: The purpose of this paper is to summarize the psychometric properties of four short screening scales to assess problematic forms of cannabis use: the Severity of Dependence Scale (SDS), the Cannabis Use Disorders Identification Test (CUDIT), the Cannabis Abuse Screening Test (CAST) and Problematic Use of Marijuana (PUM). Methods: A systematic computer-based literature search was conducted within the databases of PubMed, PsychINFO and Addiction Abstracts. A total of 12 publications reporting measures of reliability or validity were identified: 8 concerning the SDS, 2 concerning the CUDIT and one concerning the CAST and PUM. Studies spanned adult and adolescent samples from general and specific user populations in a number of countries worldwide. Results: All screening scales tended to have moderate to high internal consistency (Cronbach's α ranging from .72 to .92). Test-retest reliability and item-total correlation have been reported for the SDS with acceptable results. Results of validation studies varied depending on study population and the standards used for validity assessment, but sensitivity, specificity and predictive power are generally satisfactory. Standard diagnostic cut-off points that can be generalized to different populations do not exist for any scale. Conclusion: Short screening scales to assess dependence and other problems related to the use of cannabis appear to be a time- and cost-saving way to estimate the overall prevalence of cannabis-related negative consequences and to identify at-risk persons prior to using more extensive diagnostic instruments. Nevertheless, further research is needed to assess the performance of the tests in different populations and against broader criteria of cannabis-related problems other than dependence.

  7. Functional Independent Scaling Relation for ORR/OER Catalysts

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Dickens, Colin F.

    2016-01-01

    Here, we show that the oxygen-oxygen bond in the OOH* intermediate is, however, not well described with the previously used class of exchange-correlation functionals. By quantifying and correcting the systematic error, an improved description of gaseous peroxide species versus experimental data and a reduction in calculational uncertainty is obtained. For adsorbates, we find that the systematic error largely cancels the vdW interaction missing in the original determination of the scaling relation. An improved scaling relation, which is fully independent of the applied exchange-correlation functional…

  8. Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration

    KAUST Repository

    Gasda, S. E.

    2009-04-23

    Large-scale implementation of geological CO2 sequestration requires quantification of risk and leakage potential. One potentially important leakage pathway for the injected CO2 involves existing oil and gas wells. Wells are particularly important in North America, where more than a century of drilling has created millions of oil and gas wells. Models of CO 2 injection and leakage will involve large uncertainties in parameters associated with wells, and therefore a probabilistic framework is required. These models must be able to capture both the large-scale CO 2 plume associated with the injection and the small-scale leakage problem associated with localized flow along wells. Within a typical simulation domain, many hundreds of wells may exist. One effective modeling strategy combines both numerical and analytical models with a specific set of simplifying assumptions to produce an efficient numerical-analytical hybrid model. The model solves a set of governing equations derived by vertical averaging with assumptions of a macroscopic sharp interface and vertical equilibrium. These equations are solved numerically on a relatively coarse grid, with an analytical model embedded to solve for wellbore flow occurring at the sub-gridblock scale. This vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid refinement for sub-scale features. Through a series of benchmark problems, we show that VESA compares well with traditional numerical simulations and to a semi-analytical model which applies to appropriately simple systems. We believe that the VESA model provides the necessary accuracy and efficiency for applications of risk analysis in many CO2 sequestration problems. © 2009 Springer Science+Business Media B.V.

  9. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  10. Spectral properties and scaling relations in off diagonally disordered chains

    International Nuclear Information System (INIS)

    Ure, J.E.; Majlis, N.

    1987-07-01

    We obtain the localization length L as a function of the energy E and the disorder width W for an off-diagonally disordered chain. This is done by performing numerical simulations involving the continued fraction representation of the transfer matrix. The scaling relation L = W^s is obtained, with values of the exponent s in agreement with calculations of other authors. We also obtain the relation L ∼ |E|^v for E → 0, and use it in the Herbert-Spencer-Thouless formula for L to describe the singularity of the density of states near E = 0. We show that the slightest diagonal disorder obliterates this singularity. A practical method is presented to calculate the Green function by exploiting its continued fraction expansion. (author). 20 refs, 4 figs
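For a nearest-neighbour chain, the diagonal Green function has a textbook continued-fraction form that such calculations exploit. A minimal sketch for a finite chain (a generic illustration of the expansion, not the authors' transfer-matrix code):

```python
def g00(E, onsite, hopping, eta=1e-6):
    """Diagonal Green function G_00(E + i*eta) of a finite nearest-
    neighbour chain via its continued-fraction expansion
    G_00 = 1/(z - a_0 - t_1^2/(z - a_1 - t_2^2/(...))).
    `onsite` holds the diagonal energies a_n, `hopping` the off-diagonal
    elements t_n (the disordered quantities in such chains); eta is a
    small imaginary broadening.  The local density of states at site 0
    is -Im(G_00)/pi."""
    z = complex(E, eta)
    tail = 0.0 + 0.0j
    # Build the fraction from the far end of the chain inward.
    for a, t in zip(reversed(onsite[1:]), reversed(hopping)):
        tail = t * t / (z - a - tail)
    return 1.0 / (z - onsite[0] - tail)

# Two-site chain with hopping 1: G_00(E) = 1/(E - 1/E), so G_00(2) = 2/3.
print(round(g00(2.0, [0.0, 0.0], [1.0]).real, 4))  # 0.6667
```

Drawing the `hopping` entries from a random distribution of width W reproduces the off-diagonal disorder studied in the paper.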

  11. VLSI scaling methods and low power CMOS buffer circuit

    International Nuclear Information System (INIS)

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

    Device scaling is an important part of very large scale integration (VLSI) design, driving the success of the VLSI industry through denser and faster integration of devices. As the technology node moves into the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance as devices are scaled. In this paper, different scaling methods are studied first. These scaling methods are used to identify their effects on the power dissipation and propagation delay of a CMOS buffer circuit. To mitigate the power dissipation in scaled devices, we propose a reliable leakage-reduction low power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are taken on the HSPICE tool with Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in the figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to prove the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which improves circuit reliability. (semiconductor integrated circuits)

  12. Single and two-phase similarity analysis of a reduced-scale natural convection loop relative to a full-scale prototype

    International Nuclear Information System (INIS)

    Botelho, David A.; Faccini, Jose L.H.

    2002-01-01

    The main topic of this paper is a new device being considered to improve nuclear reactor safety by employing natural circulation. A scaled experiment used to demonstrate the performance of the device is also described. We also applied a similarity analysis method for single and two-phase natural convection loop flow to the IEN CCN experiment and to an APEX-like experiment to verify the degree of similarity relative to a full-scale prototype like the AP600. Most of the CCN similarity numbers that represent important single and two-phase similarity conditions are comparable to the APEX-like loop non-dimensional numbers calculated employing the same methodology. Despite the much smaller geometric, pressure, and power scales, we conclude that the IEN CCN has single and two-phase natural circulation similarity numbers that represent the full-scale prototype fairly well. Even lacking most complementary primary and safety systems, this IEN circuit provided valuable experience for developing human, experimental, and analytical resources, besides its utilization as a training tool. (author)

  13. Linear-scaling quantum mechanical methods for excited states.

    Science.gov (United States)

    Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua

    2012-05-21

    The poor scaling of many existing quantum mechanical methods with respect to system size hinders their application to large systems. In this tutorial review, we focus on the latest research on linear-scaling or O(N) quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states fall into two categories, time-domain and frequency-domain methods. The former solve the dynamics of the electronic system in real time, while the latter directly evaluate the electronic response in the frequency domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states. It has been implemented in both the time and frequency domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of convergence problems. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used; however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and
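
    The Chebyshev time-integration singled out above can be sketched for a toy dense Hamiltonian. It uses the standard expansion exp(-iHt) = e^{-ibt} Σ_k (2 − δ_k0)(−i)^k J_k(at) T_k(H̃), where H̃ is H rescaled to spectrum [−1, 1] and J_k are Bessel functions; this generic sketch is not the LDM implementation itself:

```python
import numpy as np
from scipy.special import jv  # Bessel functions J_k

def chebyshev_propagate(H, psi0, t, order=40):
    """Approximate psi(t) = exp(-i H t) psi0 via a Chebyshev expansion.
    H (real symmetric here) is rescaled so its spectrum lies in [-1, 1]."""
    evals = np.linalg.eigvalsh(H)
    a = (evals[-1] - evals[0]) / 2 + 1e-12   # half-width of the spectrum
    b = (evals[-1] + evals[0]) / 2           # center of the spectrum
    Hs = (H - b * np.eye(len(H))) / a
    T_prev, T_curr = psi0, Hs @ psi0         # T_0 psi and T_1 psi
    acc = jv(0, a * t) * T_prev + 2 * (-1j) * jv(1, a * t) * T_curr
    for k in range(2, order):
        T_next = 2 * (Hs @ T_curr) - T_prev  # Chebyshev recurrence
        acc = acc + 2 * (-1j) ** k * jv(k, a * t) * T_next
        T_prev, T_curr = T_curr, T_next
    return np.exp(-1j * b * t) * acc
```

    In an O(N) setting H is sparse and localized, so each recurrence step costs O(N); the dense toy matrix here is only for clarity.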

  14. Conformal Symmetry as a Template: Commensurate Scale Relations and Physical Renormalization Schemes

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.

    1999-01-01

    Commensurate scale relations are perturbative QCD predictions which relate observable to observable at fixed relative scale, such as the "generalized Crewther relation", which connects the Bjorken and Gross-Llewellyn Smith deep inelastic scattering sum rules to measurements of the e+e- annihilation cross section. We show how conformal symmetry provides a template for such QCD predictions, providing relations between observables which are present even in theories which are not scale invariant. All non-conformal effects are absorbed by fixing the ratio of the respective momentum transfer and energy scales. In the case of fixed-point theories, commensurate scale relations relate both the ratio of couplings and the ratio of scales as the fixed point is approached. In the case of the α_V scheme defined from heavy quark interactions, virtual corrections due to fermion pairs are analytically incorporated into the Gell-Mann-Low function, thus avoiding the problem of explicitly computing and resumming quark mass corrections related to the running of the coupling. Applications to the decay width of the Z boson, the BFKL pomeron, and virtual photon scattering are discussed

  15. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    International Nuclear Information System (INIS)

    Williams, Mark L.; Rearden, Bradley T.

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
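
    The core of this kind of S/U propagation is the first-order "sandwich rule", and similarity assessment uses a correlation coefficient built from the same ingredients. A minimal sketch with toy numbers (actual SCALE sensitivity profiles are energy- and nuclide-resolved, so S and C are much larger):

```python
import numpy as np

def response_variance(S, C):
    """Sandwich rule: relative variance of a response R from the
    sensitivity vector S (dR/R per dX/X) and the relative covariance
    matrix C of the underlying data."""
    return float(S @ C @ S)

def similarity_coefficient(S1, S2, C):
    """Correlation-style similarity between a benchmark experiment (S1)
    and a design application (S2) through shared data uncertainties."""
    return float((S1 @ C @ S2) /
                 np.sqrt((S1 @ C @ S1) * (S2 @ C @ S2)))
```

    A similarity coefficient near 1 indicates the experiment and application respond to the same data uncertainties, which is the basis for using benchmarks to reduce application uncertainty.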

  16. Method Effects on an Adaptation of the Rosenberg Self-Esteem Scale in Greek and the Role of Personality Traits.

    Science.gov (United States)

    Michaelides, Michalis P; Koutsogiorgi, Chrystalla; Panayiotou, Georgia

    2016-01-01

    Rosenberg's Self-Esteem Scale is a balanced, 10-item scale designed to be unidimensional; however, research has repeatedly shown that its factorial structure is contaminated by method effects due to item wording. Beyond the substantive self-esteem factor, 2 additional factors linked to the positive and negative wording of items have been theoretically specified and empirically supported. Initial evidence has revealed systematic relations of the 2 method factors with variables expressing approach and avoidance motivation. This study assessed the fit of competing confirmatory factor analytic models for the Rosenberg Self-Esteem Scale using data from 2 samples of adult participants in Cyprus. Models that accounted for both positive and negative wording effects via 2 latent method factors had better fit compared to alternative models. Measures of experiential avoidance, social anxiety, and private self-consciousness were associated with the method factors in structural equation models. The findings highlight the need to specify models with wording effects for a more accurate representation of the scale's structure and support the hypothesis of method factors as response styles, which are associated with individual characteristics related to avoidance motivation, behavioral inhibition, and anxiety.

  17. Electron beam absorption in solid and in water phantoms: depth scaling and energy-range relations

    International Nuclear Information System (INIS)

    Grosswendt, B.; Roos, M.

    1989-01-01

    In electron dosimetry energy parameters are used with values evaluated from ranges in water. The electron ranges in water may be deduced from ranges measured in solid phantoms. Several procedures recommended by national and international organisations differ both in the scaling of the ranges and in the energy-range relations for water. Using the Monte Carlo method the application of different procedures for electron energies below 10 MeV is studied for different phantom materials. It is shown that deviations in the range scaling and in the energy-range relations for water may accumulate to give energy errors of several per cent. In consequence energy-range relations are deduced for several solid phantom materials which enable a single-step energy determination. (author)
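
    The two ingredients the abstract combines can be written down directly: a depth-scaling step from the solid phantom to water, and an energy-range relation in water. The numbers below are illustrative stand-ins (the rule of thumb E0 ≈ 2.33 R50 is widely quoted); the paper's point is precisely that protocol-specific coefficients differ:

```python
def depth_in_water(depth_medium_cm, c_eff):
    """Scale a depth measured in a solid phantom to the equivalent depth
    in water via an effective density factor c_eff (material-specific,
    assumed known here)."""
    return c_eff * depth_medium_cm

def mean_energy_mev(r50_water_cm):
    """Widely used rule of thumb E0 ~ 2.33 * R50 (R50 in cm of water)."""
    return 2.33 * r50_water_cm

# Hypothetical single-step determination: R50 measured as 4.3 cm in a
# plastic phantom with an assumed c_eff of 0.95.
e0 = mean_energy_mev(depth_in_water(4.3, 0.95))
```

    Chaining the two steps, as above, is where the paper shows errors in range scaling and in the energy-range relation can accumulate to several per cent.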

  18. Urban energy consumption and related carbon emission estimation: a study at the sector scale

    Science.gov (United States)

    Lu, Weiwei; Chen, Chen; Su, Meirong; Chen, Bin; Cai, Yanpeng; Xing, Tao

    2013-12-01

    With rapid economic development and energy consumption growth, China has become the largest energy consumer in the world. Impelled by extensive international concern, there is an urgent need to analyze the characteristics of energy consumption and related carbon emission, with the objective of saving energy, reducing carbon emission, and lessening environmental impact. Focusing on urban ecosystems, the biggest energy consumer, a method for estimating energy consumption and related carbon emission was established at the urban sector scale in this paper. Based on data for 1996-2010, the proposed method was applied to Beijing in a case study to analyze the consumption of different energy resources (i.e., coal, oil, gas, and electricity) and related carbon emission in different sectors (i.e., agriculture, industry, construction, transportation, household, and service sectors). The results showed that coal and oil contributed most to energy consumption and carbon emission among different energy resources during the study period, while the industrial sector consumed the most energy and emitted the most carbon among different sectors. Suggestions were put forward for energy conservation and emission reduction in Beijing. The analysis of energy consumption and related carbon emission at the sector scale is helpful for practical energy saving and emission reduction in urban ecosystems.
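
    The sector-scale accounting this method performs reduces to multiplying each energy carrier's consumption by an emission factor and summing over the sector's mix. The factors and quantities below are placeholders for illustration, not the coefficients or data used in the study:

```python
# Placeholder emission factors (t CO2 per t coal-equivalent); the study's
# coefficients for coal, oil, gas, and electricity are not reproduced here.
EMISSION_FACTORS = {"coal": 2.66, "oil": 2.08, "gas": 1.63, "electricity": 0.85}

def sector_emission(consumption_tce):
    """Total carbon emission of one sector from its energy mix,
    given as {carrier: consumption}."""
    return sum(EMISSION_FACTORS[fuel] * amount
               for fuel, amount in consumption_tce.items())

# Hypothetical industrial-sector mix:
industry = {"coal": 120.0, "oil": 45.0, "electricity": 60.0}
total = sector_emission(industry)
```

    Summing such sector totals over agriculture, industry, construction, transportation, household, and service sectors gives the city-level estimate.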

  19. Scale factor measure method without turntable for angular rate gyroscope

    Science.gov (United States)

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

    In this paper, a scale factor test method that requires no turntable is designed for the angular rate gyroscope. A test system is designed which consists of a test device, a data acquisition circuit, and data processing software based on the LabVIEW platform. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the measured gyroscope. By shaking the test device around its edge, which is parallel to the input axis of the gyroscopes, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. This test method is fast, and it keeps the test system miniaturized and easy to carry or move. Measuring a quartz MEMS gyroscope's scale factor multiple times by this method, the difference is less than 0.2%. Compared with testing on a turntable, the scale factor difference is less than 1%. The accuracy and repeatability of the test system appear good.
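
    The comparison principle is that both gyroscopes experience the same angular rate, so the unknown scale factor is the least-squares slope of the measured gyroscope's output against the rate inferred from the standard one. A synthetic-data sketch (the signal shape, noise level, and scale-factor values are assumptions, not the paper's data):

```python
import numpy as np

def scale_factor_from_comparison(out_meas, out_std, sf_std):
    """omega = out_std / sf_std for the reference gyro; the least-squares
    slope of the measured gyro's output against omega is its scale factor."""
    omega = out_std / sf_std
    slope, _intercept = np.polyfit(omega, out_meas, 1)
    return slope

rng = np.random.default_rng(1)
omega_true = np.sin(np.linspace(0.0, 10.0, 200))   # hand-shaken rate profile
sf_std, sf_true = 1.25, 0.80                       # assumed scale factors
out_std = sf_std * omega_true
out_meas = sf_true * omega_true + rng.normal(0.0, 1e-3, 200)
sf_est = scale_factor_from_comparison(out_meas, out_std, sf_std)
```

    Because only the ratio of outputs matters, the shaking motion need not be calibrated, which is why no turntable is required.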

  20. The validity of the density scaling method in primary electron transport for photon and electron beams

    International Nuclear Information System (INIS)

    Woo, M.K.; Cunningham, J.R.

    1990-01-01

    In the convolution/superposition method of photon beam dose calculations, inhomogeneities are usually handled by using some form of scaling involving the relative electron densities of the inhomogeneities. In this paper the accuracy of density scaling as applied to primary electrons generated in photon interactions is examined. Monte Carlo calculations are compared with density scaling calculations for air and cork slab inhomogeneities. For individual primary photon kernels as well as for photon interactions restricted to a thin layer, the results can differ significantly, by up to 50%, between the two calculations. However, for realistic photon beams where interactions occur throughout the whole irradiated volume, the discrepancies are much less severe. The discrepancies for the kernel calculation are attributed to the scattering characteristics of the electrons and the consequent oversimplified modeling used in the density scaling method. A technique called the kernel integration technique is developed to analyze the general effects of air and cork inhomogeneities. It is shown that the discrepancies become significant only under rather extreme conditions, such as immediately beyond the surface after a large air gap. In electron beams all the primary electrons originate from the surface of the phantom and the errors caused by simple density scaling can be much more significant. Various aspects relating to the accuracy of density scaling for air and cork slab inhomogeneities are discussed

  1. An allometric scaling relation based on logistic growth of cities

    International Nuclear Information System (INIS)

    Chen, Yanguang

    2014-01-01

    Highlights: • An allometric scaling based on logistic process can be used to model urban growth. • The traditional allometry is based on exponential growth instead of logistic growth. • The exponential allometry represents a local scaling of urban growth. • The logistic allometry represents a global scaling of urban growth. • The exponential allometry is an approximation relation of the logistic allometry. - Abstract: The relationships between urban area and population size have been empirically demonstrated to follow the scaling law of allometric growth. This allometric scaling is based on exponential growth of city size and can be termed “exponential allometry”, which is associated with the concepts of fractals. However, both city population and urban area comply with the course of logistic growth rather than exponential growth. In this paper, I will present a new allometric scaling based on logistic growth to solve the above-mentioned problem. The logistic growth is a process of replacement dynamics. Defining a pair of replacement quotients as new measurements, which are functions of urban area and population, we can derive an allometric scaling relation from the logistic processes of urban growth, which can be termed “logistic allometry”. The exponential allometric relation between urban area and population is the approximate expression of the logistic allometric equation when the city size is not large enough. The proper range of the allometric scaling exponent value is reconsidered through the logistic process. Then, a medium-sized city of Henan Province, China, is employed as an example to validate the new allometric relation. The logistic allometry is helpful for further understanding the fractal property and self-organized process of urban evolution in the right perspective
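
    The exponential allometry A = a·P^b that the paper generalizes is conventionally estimated by a log-log regression of urban area on population. A sketch on synthetic data (the coefficients are arbitrary, chosen only to demonstrate the fit):

```python
import numpy as np

def allometric_fit(population, area):
    """Fit b and a in A = a * P^b by ordinary least squares in
    log-log space; b is the allometric scaling exponent."""
    b, log_a = np.polyfit(np.log(population), np.log(area), 1)
    return b, np.exp(log_a)

# Synthetic city data exactly obeying A = 0.02 * P^0.85:
P = np.array([1e4, 5e4, 1e5, 5e5, 1e6])
A = 0.02 * P ** 0.85
b, a = allometric_fit(P, A)
```

    The paper's logistic allometry replaces the power law with replacement quotients, reducing to this exponential form only for cities that are not too large.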

  2. The initial development of the Pregnancy-related Anxiety Scale.

    Science.gov (United States)

    Brunton, Robyn J; Dryer, Rachel; Saliba, Anthony; Kohlhoff, Jane

    2018-05-30

    Pregnancy-related anxiety is a distinct anxiety characterised by pregnancy-specific concerns. This anxiety is consistently associated with adverse birth outcomes, and obstetric and paediatric risk factors, associations generally not seen with other anxieties. The need exists for a psychometrically sound scale for this anxiety type. This study, therefore, reports on the initial development of the Pregnancy-related Anxiety Scale. The item pool was developed following a literature review and the formulation of a definition for pregnancy-related anxiety. An Expert Review Panel reviewed the definition, item pool and test specifications. Pregnant women were recruited online (N=671). Using a subsample (N=262, M=27.94, SD=4.99), fourteen factors were extracted using Principal Components Analysis accounting for 63.18% of the variance. Further refinement resulted in 11 distinct factors. Confirmatory Factor Analysis further tested the model with a second subsample (N=369, M=26.59, SD=4.76). After additional refinement, the resulting model was a good fit with nine factors (childbirth, appearance, attitudes towards childbirth, motherhood, acceptance, anxiety, medical, avoidance, and baby concerns). Internal consistency reliability was good with the majority of subscales exceeding α=.80. The Pregnancy-related Anxiety Scale is easy to administer with higher scores indicative of greater pregnancy-related anxiety. The inclusion of reverse-scored items is a potential limitation with poorer reliability evident for these factors. Although still in its development stage, the Pregnancy-related Anxiety Scale will eventually be useful both clinically (affording early intervention) and in research settings. Copyright © 2018 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  3. A multi-scale network method for two-phase flow in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick

    2017-08-01

    Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore-networks, it is crucial that the networks are large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each subnetwork consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the subnetworks are computed. Lastly, using the fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.

  4. A multi-scale network method for two-phase flow in porous media

    International Nuclear Information System (INIS)

    Khayrat, Karim; Jenny, Patrick

    2017-01-01

    Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore-networks, it is crucial that the networks are large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each subnetwork consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the subnetworks are computed. Lastly, using the fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.

  5. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space

  6. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    Science.gov (United States)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.

  7. Heritage and scale: settings, boundaries and relations

    DEFF Research Database (Denmark)

    Harvey, David

    2015-01-01

    of individuals and communities, towns and cities, regions, nations, continents or globally – becomes ever more important. Partly reflecting this crisis of the national container, researchers have sought opportunities both through processes of ‘downscaling’, towards community, family and even personal forms...... relations. This paper examines how heritage is produced and practised, consumed and experienced, managed and deployed at a variety of scales, exploring how notions of scale, territory and boundedness have a profound effect on the heritage process. Drawing on the work of Doreen Massey and others, the paper...

  8. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il

    1992-02-01

    A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend: the depressurization phase and the pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase the highly important phenomena influencing the critical parameters are identified, and the scaling parameters governing them are generated by the present method. To validate the model used, the Marviken CFT and a 336-rod-bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but they show at least qualitative agreement with the experimental results. In order to validate whether the scaled-down model represents the important phenomena well, we simulate the nondimensional pressure response of a cold-leg 4-inch break transient for AP-600 and the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR

  9. Using scaling relations to understand trends in the catalytic activity of transition metals

    International Nuclear Information System (INIS)

    Jones, G; Bligaard, T; Abild-Pedersen, F; Noerskov, J K

    2008-01-01

    A method is developed to estimate the potential energy diagram for a full catalytic reaction for a range of late transition metals on the basis of a calculation (or an experimental determination) for a single metal. The method, which employs scaling relations between adsorption energies, is illustrated by calculating the potential energy diagram for the methanation reaction and ammonia synthesis for 11 different metals on the basis of results calculated for Ru. It is also shown that considering the free energy diagram for the reactions, under typical industrial conditions, provides additional insight into reactivity trends
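
    The adsorption-energy scaling relations such methods rely on take the linear form ΔE_AHx = γ·ΔE_A + ξ, with the slope γ = (x_max − x)/x_max set by bond counting (the form introduced by Abild-Pedersen and co-workers). A sketch with the metal-independent intercept ξ set to zero purely for illustration:

```python
def ahx_adsorption_energy(dE_A, x, x_max, xi=0.0):
    """Linear scaling dE_AHx = gamma * dE_A + xi, where
    gamma = (x_max - x) / x_max counts the remaining valency of the
    central atom A. xi is a metal-independent intercept (assumed 0 here)."""
    gamma = (x_max - x) / x_max
    return gamma * dE_A + xi

# CH3 (x = 3, x_max = 4 for carbon) on a metal with an assumed
# carbon adsorption energy dE_C = -6.0 eV:
e_ch3 = ahx_adsorption_energy(-6.0, x=3, x_max=4)
```

    Because γ is metal-independent, one reference calculation (e.g. on Ru) fixes the whole trend across the late transition metals, which is the essence of the method.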

  10. Scaling Relations for Viscous and Gravitational Flow Instabilities in Multiphase Multicomponent Compressible Flow

    Science.gov (United States)

    Moortgat, J.; Amooie, M. A.; Soltanian, M. R.

    2016-12-01

    Problems in hydrogeology and hydrocarbon reservoirs generally involve the transport of solutes in a single solvent phase (e.g., contaminants or dissolved injection gas), or the flow of multiple phases that may or may not exchange mass (e.g., brine, NAPL, oil, gas). Often, flow is viscously and gravitationally unstable due to mobility and density contrasts within a phase or between phases. Such instabilities have been studied in detail for single-phase incompressible fluids and for two-phase immiscible flow, but to a lesser extent for multiphase multicomponent compressible flow. The latter is the subject of this presentation. Robust phase stability analyses and phase split calculations, based on equations of state, determine the mass exchange between phases and the resulting phase behavior, i.e., phase densities, viscosities, and volumes. Higher-order finite element methods and fine grids are used to capture the small-scale onset of flow instabilities. A full matrix of composition dependent coefficients is considered for each Fickian diffusive phase flux. Formation heterogeneity can have a profound impact and is represented by realistic geostatistical models. Qualitatively, fingering in multiphase compositional flow is different from single-phase problems because 1) phase mobilities depend on rock wettability through relative permeabilities, and 2) the initial density and viscosity ratios between phases may change due to species transfer. To quantify mixing rates in different flow regimes and for varying degrees of miscibility and medium heterogeneities, we define the spatial variance, scalar dissipation rate, dilution index, skewness, and kurtosis of the molar density of introduced species. Molar densities, unlike compositions, include compressibility effects. 
The temporal evolution of these measures shows that, while transport at the small-scale (cm) is described by the classical advection-diffusion-dispersion relations, scaling at the macro-scale (> 10 m) shows
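
    Among the mixing measures listed, the dilution index has a compact definition (in the Kitanidis style): E = exp(−∫ p ln p dV), with p the molar-density field normalized to unit integral; larger E means the introduced species occupies more of the domain. A sketch on a uniform grid (the exact normalization used in the study may differ):

```python
import numpy as np

def dilution_index(c, cell_volume):
    """Dilution index E = exp(-sum(p * ln p) * dV) on a uniform mesh,
    with p = c / integral(c). For a uniform field E equals the total
    occupied volume; for a single-cell spike it equals one cell volume."""
    p = c / (c.sum() * cell_volume)
    p = p[p > 0]                      # 0 * ln 0 -> 0 by convention
    return np.exp(-np.sum(p * np.log(p)) * cell_volume)
```

    Tracking E(t) alongside the scalar dissipation rate and the moments of the molar density is one way to quantify the mixing regimes the abstract describes.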

  11. Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity

    NARCIS (Netherlands)

    Maher, G.D.; Hulshoff, S.J.

    2014-01-01

    The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain

  12. Estimates of the pion-nucleon sigma term using dispersion relations and taking into account the relation between chiral and scale invariance breaking

    International Nuclear Information System (INIS)

    Efrosinin, V.P.; Zaikin, D.A.

    1983-01-01

    We study the possible reasons for the disagreement between the estimates of the pion-nucleon sigma term obtained by the method of dispersion relations with extrapolation to the Cheng-Dashen point and by other methods which do not involve this extrapolation. One reason for the disagreement may be the nonanalyticity of the πN amplitude in the variable t for ν = 0. We propose a method for estimating the sigma term using the threshold data for the πN amplitude, in which the effect of this nonanalyticity is minimized. We discuss the relation between scale invariance violation and chiral symmetry breaking and give the corresponding estimate of the sigma term. The two estimates are similar (42 and 34 MeV) and are in agreement when the uncertainties of the two methods are taken into consideration

  13. The MIMIC Method with Scale Purification for Detecting Differential Item Functioning

    Science.gov (United States)

    Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien

    2009-01-01

    This study implements a scale purification procedure onto the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…
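
    The purification step itself is a fixed-point iteration over the anchor set. A procedure skeleton, with the MIMIC significance test abstracted into a callable (`dif_test` is a stand-in for the actual statistical test, not part of the cited method's code):

```python
def purified_dif_items(items, dif_test, max_passes=None):
    """Iterative scale purification: flag DIF items, rebuild the anchor
    from the non-flagged items, retest every item against the purified
    anchor, and stop when the flagged set is stable."""
    flagged = set()
    for _ in range(max_passes or len(items) + 1):
        anchor = [i for i in items if i not in flagged]
        new_flagged = {i for i in items if dif_test(i, anchor)}
        if new_flagged == flagged:
            break                      # fixed point reached
        flagged = new_flagged
    return flagged
```

    Purification matters because DIF items left in the anchor contaminate the matching criterion, which is exactly the bias the M-SP variant removes relative to M-ST.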

  14. Test methods of total dose effects in very large scale integrated circuits

    International Nuclear Information System (INIS)

    He Chaohui; Geng Bin; He Baoping; Yao Yujuan; Li Yonghong; Peng Honglun; Lin Dongsheng; Zhou Hui; Chen Yusheng

    2004-01-01

    A test method for total dose effects (TDE) is presented for very large scale integrated circuits (VLSI). The consumption current of the devices is measured while the function parameters of the devices (or circuits) are measured. The relation between data errors and consumption current can then be analyzed, and a mechanism for TDE in VLSI can be proposed. Experimental results of 60Co γ TDEs are given for SRAMs, EEPROMs, FLASH ROMs and a kind of CPU

  15. Deposit and scale prevention methods in thermal sea water desalination

    International Nuclear Information System (INIS)

    Froehner, K.R.

    1977-01-01

    Introductory remarks deal with the 'fouling factor' and its influence on the overall heat transfer coefficient of MSF evaporators. The composition of the matter dissolved in sea water and its thermal and chemical properties lead to the formation of alkaline scale or even hard sulphate scale on the heat exchanger tube walls, which can seriously hamper plant operation and economics. Among the scale prevention methods are 1) pH control by acid dosing (decarbonation), 2) 'threshold treatment' by dosing of inhibitors of different kinds, 3) mechanical cleaning by sponge rubber balls guided through the heat exchanger tubes, in general combined with methods no. 1 or 2, and 4) application of a slurry of scale crystal germs (seeding). Mention is made of several other scale prevention proposals. The problems encountered with marine life (suspension, deposits, growth) in desalination plants are touched upon. (orig.) [de]

  16. Accuracy of a digital weight scale relative to the Nintendo Wii in measuring limb load asymmetry.

    Science.gov (United States)

    Kumar, Ns Senthil; Omar, Baharudin; Joseph, Leonard H; Hamdan, Nor; Htwe, Ohnmar; Hamidun, Nursalbiyah

    2014-08-01

    [Purpose] The aim of the present study was to investigate the accuracy of a digital weight scale relative to the Wii in limb loading measurement during static standing. [Methods] This was a cross-sectional study conducted at a public university teaching hospital. The sample consisted of 24 participants (12 with osteoarthritis and 12 healthy) recruited through convenience sampling. Limb loading measurements were obtained using a digital weight scale and the Nintendo Wii in static standing over three trials under an eyes-open condition. The limb load asymmetry was computed as the symmetry index. [Results] The accuracy of measurement with the digital weight scale relative to the Nintendo Wii was analyzed using the receiver operating characteristic (ROC) curve and the Kolmogorov-Smirnov test (K-S test). The area under the ROC curve was found to be 0.67. Logistic regression confirmed the validity of the digital weight scale relative to the Nintendo Wii. The D statistic from the K-S test was found to be 0.16, which confirmed that there was no significant difference in measurement between the two devices. [Conclusion] The digital weight scale is an accurate tool for measuring limb load asymmetry. Its low price, easy availability, and maneuverability make it a good potential tool for measuring limb load asymmetry in clinical settings.
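    The symmetry index used in this record can be sketched as follows. The abstract does not give the exact formula, so a commonly used definition (difference between limb loads divided by their mean, expressed as a percentage) is assumed here purely for illustration.

```python
# Illustrative sketch: the abstract does not specify the exact symmetry-index
# formula; a commonly used definition is assumed here.

def symmetry_index(load_left: float, load_right: float) -> float:
    """Limb load symmetry index (%) from left/right limb loads (e.g., in kg).

    SI = 0 means perfectly symmetric loading; the sign indicates which
    limb carries more weight.
    """
    mean_load = 0.5 * (load_left + load_right)
    return 100.0 * (load_right - load_left) / mean_load

# A hypothetical participant loading 38 kg on the left and 42 kg on the right:
si = symmetry_index(38.0, 42.0)
print(f"symmetry index = {si:.1f}%")  # symmetry index = 10.0%
```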

  17. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)


    structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords. Sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. 1. Introduction. Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...

  18. The development and psychometric analysis of the Chinese HIV-Related Fatigue Scale.

    Science.gov (United States)

    Li, Su-Yin; Wu, Hua-Shan; Barroso, Julie

    2016-04-01

    To develop a Chinese version of the human immunodeficiency virus-related Fatigue Scale and examine its reliability and validity. Fatigue is found in more than 70% of people infected with human immunodeficiency virus. However, a scale to assess fatigue in human immunodeficiency virus-positive people has not yet been developed for use in Chinese-speaking countries. A methodologic study involving instrument development and psychometric evaluation was used. The human immunodeficiency virus-related Fatigue Scale was examined through a two-step procedure: (1) translation and back translation and (2) psychometric analysis. A sample of 142 human immunodeficiency virus-positive patients was recruited from the Infectious Disease Outpatient Clinic in central Taiwan. Their fatigue data were analysed with Cronbach's α for internal consistency. Two weeks later, the data of a random sample of 28 patients from the original 142 were analysed for test-retest reliability. The correlation between the World Health Organization Quality of Life Assessment-Human Immunodeficiency Virus and the Chinese version of the human immunodeficiency virus-related Fatigue Scale was analysed for concurrent validity. The Chinese version of the human immunodeficiency virus-related Fatigue Scale scores of human immunodeficiency virus-positive patients with highly active antiretroviral therapy and those without were compared to demonstrate construct validity. The internal consistency and test-retest reliability of the Chinese version of the human immunodeficiency virus-related Fatigue Scale were 0.97 and 0.686, respectively. In regard to concurrent validity, a negative correlation was found between the scores of the Chinese version of the human immunodeficiency virus-related Fatigue Scale and the World Health Organization Quality of Life Assessment-Human Immunodeficiency Virus. Additionally, the Chinese version of the human immunodeficiency virus-related Fatigue Scale could be used to effectively
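    The internal-consistency figure reported in this record is Cronbach's α. A minimal sketch of the computation on a respondents-by-items score matrix follows; the data below are synthetic, not the study's.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic 5-respondent, 3-item example (not the study's data):
scores = np.array([[3, 4, 3],
                   [2, 2, 3],
                   [4, 5, 4],
                   [1, 2, 1],
                   [5, 4, 5]], dtype=float)
print(f"alpha = {cronbach_alpha(scores):.2f}")  # alpha = 0.94
```

    As a sanity check, a matrix of identical items yields α = 1 exactly.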

  19. The development and validation of the Relational Self-Esteem Scale.

    Science.gov (United States)

    Du, Hongfei; King, Ronnel B; Chi, Peilian

    2012-06-01

    According to the tripartite model of the self (Brewer & Gardner, 1996), the self consists of three aspects: personal, relational, and collective. Correspondingly, individuals can achieve a sense of self-worth through their personal attributes (personal self-esteem), relationship with significant others (relational self-esteem), or social group membership (collective self-esteem). Existing measures on personal and collective self-esteem are available in the literature; however, no scale exists that assesses relational self-esteem. The authors developed a scale to measure individual differences in relational self-esteem and tested it with two samples of Chinese university students. Between and within-network approaches to construct validation were used. The scale showed adequate internal consistency reliability and results of the confirmatory factor analysis showed good fit. It also exhibited meaningful correlations with theoretically relevant constructs in the nomological network. Implications and directions for future research are discussed. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.

  20. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    Science.gov (United States)

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, we introduce the bootstrap method and propose a numerical discrimination for the transition type.

  1. An allometric scaling relation based on logistic growth of cities

    Science.gov (United States)

    Chen, Yanguang

    2014-08-01

    The relationships between urban area and population size have been empirically demonstrated to follow the scaling law of allometric growth. This allometric scaling is based on exponential growth of city size and can be termed "exponential allometry", which is associated with the concepts of fractals. However, both city population and urban area comply with the course of logistic growth rather than exponential growth. In this paper, I will present a new allometric scaling based on logistic growth to solve the abovementioned problem. The logistic growth is a process of replacement dynamics. Defining a pair of replacement quotients as new measurements, which are functions of urban area and population, we can derive an allometric scaling relation from the logistic processes of urban growth, which can be termed "logistic allometry". The exponential allometric relation between urban area and population is the approximate expression of the logistic allometric equation when the city size is not large enough. The proper range of the allometric scaling exponent value is reconsidered through the logistic process. Then, a medium-sized city of Henan Province, China, is employed as an example to validate the new allometric relation. The logistic allometry is helpful for further understanding the fractal property and self-organized process of urban evolution in the right perspective.
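    The allometric scaling law between urban area A and population P is conventionally written A = aP^b and estimated by least squares in log-log space. A minimal sketch on synthetic data follows; the exponent b = 0.85 and the noise model are assumptions for illustration only, not values from the paper.

```python
import numpy as np

# Synthetic illustration: generate city populations and areas obeying
# A = a * P**b with b = 0.85 (an assumed value, not from the paper),
# then recover b by least squares in log-log space.
rng = np.random.default_rng(0)
population = 10 ** rng.uniform(4, 7, size=200)            # 10^4 .. 10^7 people
area = 2.0 * population ** 0.85 * rng.lognormal(0.0, 0.1, size=200)

slope, intercept = np.polyfit(np.log(population), np.log(area), 1)
print(f"estimated allometric exponent b = {slope:.2f}")   # close to 0.85
```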

  2. An NDVI-assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation can effectively delineate the boundaries of objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing imagery. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the NDVI-based adaptive segmentation method can effectively create object boundaries for the different ground objects of remote sensing images.

  3. A multi-scale method of mapping urban influence

    Science.gov (United States)

    Timothy G. Wade; James D. Wickham; Nicola Zacarelli; Kurt H. Riitters

    2009-01-01

    Urban development can impact environmental quality and ecosystem services well beyond urban extent. Many methods to map urban areas have been developed and used in the past, but most have simply tried to map existing extent of urban development, and all have been single-scale techniques. The method presented here uses a clustering approach to look beyond the extant...

  4. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on structured support vector machines (SVMs) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  5. The Relation between Cosmological Redshift and Scale Factor for Photons

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Shuxun, E-mail: tshuxun@mail.bnu.edu.cn [Department of Astronomy, Beijing Normal University, Beijing 100875 (China); Department of Physics, Wuhan University, Wuhan 430072 (China)

    2017-09-10

    The cosmological constant problem has become one of the most important problems in modern cosmology. In this paper, we try to construct a model that can avoid the cosmological constant problem and has the potential to explain the apparent late-time accelerating expansion of the universe in both the luminosity distance and angular diameter distance measurement channels. The core of our model is to modify the relation between cosmological redshift and scale factor for photons. We point out three ways to test our hypothesis: supernova time dilation; gravitational waves and their electromagnetic counterparts emitted by binary neutron star systems; and the Sandage–Loeb effect. All of these methods are feasible now or in the near future.

  6. Explaining Method Effects Associated with Negatively Worded Items in Trait and State Global and Domain-Specific Self-Esteem Scales

    Science.gov (United States)

    Tomas, Jose M.; Oliver, Amparo; Galiana, Laura; Sancho, Patricia; Lila, Marisol

    2013-01-01

    Several investigators have interpreted method effects associated with negatively worded items in a substantive way. This research extends those studies in different ways: (a) it establishes the presence of method effects in further populations and particular scales, and (b) it examines the possible relations between a method factor associated…

  7. Ambiguous tests of general relativity on cosmological scales

    International Nuclear Information System (INIS)

    Zuntz, Joe; Baker, Tessa; Ferreira, Pedro G.; Skordis, Constantinos

    2012-01-01

    There are a number of approaches to testing General Relativity (GR) on linear scales using parameterized frameworks for modifying cosmological perturbation theory. It is sometimes assumed that the details of any given parameterization are unimportant if one uses it as a diagnostic for deviations from GR. In this brief report we argue that this is not necessarily so. First we show that adopting alternative combinations of modifications to the field equations significantly changes the constraints that one obtains. In addition, we show that using a parameterization with insufficient freedom significantly tightens the apparent theoretical constraints. Fundamentally we argue that it is almost never appropriate to consider modifications to the perturbed Einstein equations as being constraints on the effective gravitational constant, for example, in the same sense that solar system constraints are. The only consistent modifications are either those that grant near-total freedom, as in decomposition methods, or ones which map directly to a particular part of theory space

  8. Length scales in glass-forming liquids and related systems: a review

    International Nuclear Information System (INIS)

    Karmakar, Smarajit; Dasgupta, Chandan; Sastry, Srikanth

    2016-01-01

    The central problem in the study of glass-forming liquids and other glassy systems is the understanding of the complex structural relaxation and rapid growth of relaxation times seen on approaching the glass transition. A central conceptual question is whether one can identify one or more growing length scale(s) associated with this behavior. Given the diversity of molecular glass-formers and a vast body of experimental, computational and theoretical work addressing glassy behavior, a number of ideas and observations pertaining to growing length scales have been presented over the past few decades, but there is as yet no consensus view on this question. In this review, we will summarize the salient results and the state of our understanding of length scales associated with dynamical slow down. After a review of slow dynamics and the glass transition, pertinent theories of the glass transition will be summarized and a survey of ideas relating to length scales in glassy systems will be presented. A number of studies have focused on the emergence of preferred packing arrangements and discussed their role in glassy dynamics. More recently, a central object of attention has been the study of spatially correlated, heterogeneous dynamics and the associated length scale, studied in computer simulations and theoretical analysis such as inhomogeneous mode coupling theory. A number of static length scales have been proposed and studied recently, such as the mosaic length scale discussed in the random first-order transition theory and the related point-to-set correlation length. We will discuss these, elaborating on key results, along with a critical appraisal of the state of the art. Finally we will discuss length scales in driven soft matter, granular fluids and amorphous solids, and give a brief description of length scales in aging systems. Possible relations of these length scales with those in glass-forming liquids will be discussed. (review article)

  9. Methods of numerical relativity

    International Nuclear Information System (INIS)

    Piran, T.

    1983-01-01

    Numerical relativity is an alternative to analytical methods for obtaining solutions of the Einstein equations. Numerical methods are particularly useful for studying the generation of gravitational radiation by potentially strong sources. The author reviews the analytical background, the numerical analysis aspects and techniques, and some of the difficulties involved in numerical relativity. (Auth.)

  10. Relationship between the domains of the Multidimensional Students’ Life Satisfaction Scale, satisfaction with food-related life and happiness in university students

    DEFF Research Database (Denmark)

    Schnettler, Berta; Orellana, Ligia; Lobos, Germán

    2015-01-01

    Aim: to characterize types of university students based on satisfaction with life domains that affect eating habits, satisfaction with food-related life and subjective happiness. Materials and methods: a questionnaire was applied to a nonrandom sample of 305 students of both genders in five universities in Chile. The questionnaire included the abbreviated Multidimensional Student’s Life Satisfaction Scale (MSLSS), Satisfaction with Food-related Life Scale (SWFL) and the Subjective Happiness Scale (SHS). Eating habits, frequency of food consumption in and outside the place of residence

  11. Scaling up: Assessing social impacts at the macro-scale

    International Nuclear Information System (INIS)

    Schirmer, Jacki

    2011-01-01

    Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but is challenging as multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south east Australia is described.

  12. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: jerome.gleyzes@cea.fr, E-mail: lhui@astro.columbia.edu, E-mail: msimonov@sissa.it, E-mail: filippo.vernizzi@cea.fr [SISSA, via Bonomea 265, Trieste, 34136 (Italy)

    2014-06-01

    The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10⁻³–10⁻⁴ on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.

  13. Universal scaling relations for the energies of many-electron Hooke atoms

    Science.gov (United States)

    Odriazola, A.; Solanpää, J.; Kylänpää, I.; González, A.; Räsänen, E.

    2017-04-01

    A three-dimensional harmonic oscillator consisting of N ≥ 2 Coulomb-interacting charged particles, often called a (many-electron) Hooke atom, is a popular model in computational physics for, e.g., semiconductor quantum dots and ultracold ions. Starting from Thomas-Fermi theory, we show that the ground-state energy of such a system satisfies a nontrivial relation: E_gs = ω N^{4/3} f_gs(β N^{1/2}), where ω is the oscillator strength, β is the ratio between the Coulomb and oscillator characteristic energies, and f_gs is a universal function. We perform extensive numerical calculations to verify the applicability of the relation. In addition, we show that the chemical potentials and addition energies also satisfy approximate scaling relations. In all cases, analytic expressions for the universal functions are provided. The results have predictive power in estimating the key ground-state properties of the system in the large-N limit, and can be used in the development of approximative methods in electronic structure theory.
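    The scaling relation E_gs = ω N^{4/3} f_gs(β N^{1/2}) implies that energies for very different (ω, N, β) collapse onto one curve when E/(ω N^{4/3}) is plotted against β N^{1/2}. The sketch below illustrates this data collapse; the stand-in function f(x) = √(1 + x) is purely an assumption for illustration, not the paper's universal function.

```python
import numpy as np

# Illustrative data collapse for E_gs = omega * N**(4/3) * f(beta * sqrt(N)).
# The true universal function f_gs is not reproduced here; a stand-in
# f(x) = sqrt(1 + x) is assumed purely for illustration.
def f_stand_in(x):
    return np.sqrt(1.0 + x)

def ground_state_energy(omega, N, beta):
    return omega * N ** (4.0 / 3.0) * f_stand_in(beta * np.sqrt(N))

# Two very different systems whose scaling variables x = beta * sqrt(N) match:
E1 = ground_state_energy(omega=1.0, N=100, beta=0.5)    # x = 5.0
E2 = ground_state_energy(omega=0.3, N=400, beta=0.25)   # x = 5.0
y1 = E1 / (1.0 * 100 ** (4.0 / 3.0))    # rescaled energies: both lie on
y2 = E2 / (0.3 * 400 ** (4.0 / 3.0))    # the same universal curve
print(y1, y2)
```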

  14. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION, by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ... A typical iteration can be partitioned so that ..., where B is an m × m basis matrix. This partition effectively divides the variables into three classes. ... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library.

  15. Working memory performance inversely predicts spontaneous delta and theta-band scaling relations.

    Science.gov (United States)

    Euler, Matthew J; Wiltshire, Travis J; Niermeyer, Madison A; Butner, Jonathan E

    2016-04-15

    Electrophysiological studies have strongly implicated theta-band activity in human working memory processes. Concurrently, work on spontaneous, non-task-related oscillations has revealed the presence of long-range temporal correlations (LRTCs) within sub-bands of the ongoing EEG, and has begun to demonstrate their functional significance. However, few studies have yet assessed the relation of LRTCs (also called scaling relations) to individual differences in cognitive abilities. The present study addressed the intersection of these two literatures by investigating the relation of narrow-band EEG scaling relations to individual differences in working memory ability, with a particular focus on the theta band. Fifty-four healthy adults completed standardized assessments of working memory and separate recordings of their spontaneous, non-task-related EEG. Scaling relations were quantified in each of the five classical EEG frequency bands via the estimation of the Hurst exponent obtained from detrended fluctuation analysis. A multilevel modeling framework was used to characterize the relation of working memory performance to scaling relations as a function of general scalp location in Cartesian space. Overall, results indicated an inverse relationship between both delta and theta scaling relations and working memory ability, which was most prominent at posterior sensors, and was independent of either spatial or individual variability in band-specific power. These findings add to the growing literature demonstrating the relevance of neural LRTCs for understanding brain functioning, and support a construct- and state-dependent view of their functional implications. Copyright © 2016 Elsevier B.V. All rights reserved.
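    The scaling exponents in this record were obtained by detrended fluctuation analysis (DFA). A minimal numpy sketch of first-order DFA follows; for uncorrelated white noise the exponent is close to 0.5, while long-range temporal correlations push it above 0.5.

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Estimate the DFA-1 scaling (Hurst-like) exponent of a 1-D signal."""
    profile = np.cumsum(signal - np.mean(signal))    # integrated profile
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        msq = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)             # linear detrend per window
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(msq)))         # fluctuation F(s)
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope                                     # F(s) ~ s**H

rng = np.random.default_rng(1)
white = rng.standard_normal(10_000)
scales = np.array([16, 32, 64, 128, 256, 512])
h = dfa_exponent(white, scales)
print(f"DFA exponent for white noise: {h:.2f}")      # close to 0.5
```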

  16. Development of scaling factor prediction method for radionuclide composition in low-level radioactive waste

    International Nuclear Information System (INIS)

    Park, Jin Beak

    1995-02-01

    Low-level radioactive waste management requires knowledge of the nature and quantities of radionuclides in the immobilized or packaged waste. U.S. NRC rules require programs that measure the concentrations of all relevant nuclides either directly or indirectly, by relating difficult-to-measure radionuclides to other easy-to-measure radionuclides through the application of scaling factors. Scaling factors previously developed through statistical approaches are only generic and pose many difficult problems concerning sampling procedures. Generic scaling factors cannot take plant operation history into account. In this study, a method to predict plant-specific and operational-history-dependent scaling factors is developed. A realistic and detailed approach is taken to find scaling factors in the reactor coolant. This approach begins with fission product release mechanisms and the fundamental release properties of fuel-source nuclides such as fission products and transuranic nuclides. Scaling factors for various waste streams are derived from the predicted reactor coolant scaling factors with the aid of a radionuclide retention and buildup model. This model makes use of the radioactive material balance within the radioactive waste processing systems. Scaling factors for the reactor coolant and waste streams which include the effects of plant operation history have been developed according to input parameters describing that history.
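    The generic scaling-factor idea this record improves upon can be sketched as follows: a difficult-to-measure (DTM) nuclide is predicted from an easy-to-measure key nuclide via a ratio of activities. Taking the geometric mean of sampled ratios is a common convention in practice; it is NOT the plant-specific model of this thesis, and the activity values below are hypothetical.

```python
import math

# Generic scaling-factor sketch: estimate the activity of a
# difficult-to-measure (DTM) nuclide from an easy-to-measure key nuclide.
# The geometric mean of sampled activity ratios is a common convention,
# not the plant-specific model developed in the record above.

def scaling_factor(dtm_activities, key_activities):
    """Geometric mean of DTM/key activity ratios over sampled waste streams."""
    logs = [math.log(d / k) for d, k in zip(dtm_activities, key_activities)]
    return math.exp(sum(logs) / len(logs))

# Hypothetical sampled activities (Bq/g) in a few waste samples:
ni63 = [4.0e2, 6.0e2, 5.0e2]      # difficult to measure (beta emitter)
co60 = [1.0e3, 2.0e3, 1.0e3]      # easy to measure (gamma spectrometry)
sf = scaling_factor(ni63, co60)
# Predict Ni-63 in a new package from its measured Co-60 activity:
predicted_ni63 = sf * 1.5e3
print(f"scaling factor = {sf:.3f}")  # ≈ 0.39
```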

  17. Scaling Green-Kubo Relation and Application to Three Aging Systems

    Directory of Open Access Journals (Sweden)

    A. Dechant

    2014-02-01

    The Green-Kubo formula relates the spatial diffusion coefficient to the stationary velocity autocorrelation function. We derive a generalization of the Green-Kubo formula that is valid for systems with long-range or nonstationary correlations for which the standard approach is no longer valid. For the systems under consideration, the velocity autocorrelation function ⟨v(t+τ)v(t)⟩ asymptotically exhibits a certain scaling behavior and the diffusion is anomalous, ⟨x²(t)⟩ ≃ 2D_ν t^ν. We show how both the anomalous diffusion coefficient D_ν and the exponent ν can be extracted from this scaling form. Our scaling Green-Kubo relation thus extends an important relation between transport properties and correlation functions to generic systems with scale-invariant dynamics. This includes stationary systems with slowly decaying power-law correlations, as well as aging systems, systems whose properties depend on the age of the system. Even for systems that are stationary in the long-time limit, we find that the long-time diffusive behavior can strongly depend on the initial preparation of the system. In these cases, the diffusivity D_ν is not unique, and we determine its values, respectively, for a stationary or nonstationary initial state. We discuss three applications of the scaling Green-Kubo relation: free diffusion with nonlinear friction corresponding to cold atoms diffusing in optical lattices, the fractional Langevin equation with external noise recently suggested to model active transport in cells, and the Lévy walk with numerous applications, in particular, blinking quantum dots. These examples underline the wide applicability of our approach, which is able to treat very different mechanisms of anomalous diffusion.
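    The anomalous-diffusion law ⟨x²(t)⟩ ≃ 2D_ν t^ν in this record can be checked numerically: fit a power law to the ensemble-averaged mean-squared displacement. The sketch below simulates ordinary random walks (so ν should come out near 1); an anomalous process would give ν ≠ 1 under the same fitting procedure.

```python
import numpy as np

# Estimate the anomalous-diffusion exponent nu from <x^2(t)> ~ 2*D*t**nu.
# Ordinary Brownian-like walks are simulated here, so nu should be ~1.
rng = np.random.default_rng(2)
n_walkers, n_steps = 2000, 1000
steps = rng.standard_normal((n_walkers, n_steps))
x = np.cumsum(steps, axis=1)                  # trajectories x(t)
msd = np.mean(x ** 2, axis=0)                 # ensemble-averaged MSD
t = np.arange(1, n_steps + 1)
nu, _ = np.polyfit(np.log(t), np.log(msd), 1) # power-law fit in log-log space
print(f"estimated exponent nu = {nu:.2f}")    # close to 1 for normal diffusion
```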

  18. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄)

  19. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄)

  20. Scale dependency of American marten (Martes americana) habitat relations [Chapter 12]

    Science.gov (United States)

    Andrew J. Shirk; Tzeidle N. Wasserman; Samuel A. Cushman; Martin G. Raphael

    2012-01-01

    Animals select habitat resources at multiple spatial scales; therefore, explicit attention to scale-dependency when modeling habitat relations is critical to understanding how organisms select habitat in complex landscapes. Models that evaluate habitat variables calculated at a single spatial scale (e.g., patch, home range) fail to account for the effects of...

  1. NGC 1275: An Outlier of the Black Hole-Host Scaling Relations

    Directory of Open Access Journals (Sweden)

    Eleonora Sani

    2018-02-01

    The active galaxy NGC 1275 lies at the center of the Perseus cluster of galaxies, being an archetypal BH-galaxy system that is supposed to fit well with the M_BH-host scaling relations obtained for quiescent galaxies. Since it harbors an obscured AGN, only recently has our group been able to estimate its black hole mass. Here our aim is to pinpoint NGC 1275 on the less dispersed scaling relations, namely the M_BH–σ⋆ and M_BH–L_bul planes. Starting from our previous work (Ricci et al., 2017a), we estimate that NGC 1275 falls well outside the intrinsic dispersion of the M_BH–σ⋆ plane, being displaced by 1.2 dex in black hole mass with respect to the scaling relation. We then perform a 2D morphological decomposition analysis on Spitzer/IRAC images at 3.6 μm and find that, beyond the bright compact nucleus that dominates the central emission, NGC 1275 follows a de Vaucouleurs profile with no sign of significant star formation nor clear merger remnants. Nonetheless, its displacement on the M_BH–L_{3.6,bul} plane with respect to the scaling relation is as high as that observed in the M_BH–σ⋆ plane. We explore various scenarios to interpret such behavior, of which the most realistic one is the evolutionary pattern followed by NGC 1275 to approach the scaling relation. We indeed speculate that NGC 1275 might be a specimen of those galaxies in which the black hole is still adjusting to its host.

  2. The Debye light scattering equation’s scaling relation reveals the purity of synthetic dendrimers

    Energy Technology Data Exchange (ETDEWEB)

    Tseng, Hui-Yu; Chen, Hsiao-Ping [National Chung Cheng University, Department of Chemistry and Biochemistry (China); Tang, Yi-Hsuan [Kaohsiung Medical University, Department of Medicinal and Applied Chemistry (China); Chen, Hui-Ting [Kaohsiung Medical University, Department of Fragrance and Cosmetic Science (China); Kao, Chai-Lin, E-mail: clkao@kmu.edu.tw [Kaohsiung Medical University, Department of Medicinal and Applied Chemistry (China); Wang, Shau-Chun, E-mail: chescw@ccu.edu.tw [National Chung Cheng University, Department of Chemistry and Biochemistry (China)

    2016-03-15

Spherical dendrimer structures cannot be modeled using conventional polymer models of random-coil or rod-like configurations when calibrating the static light scattering (LS) detectors used to determine the molecular weight (M.W.) of a dendrimer or to directly assess the purity of a synthetic compound. In this paper, we used the Debye equation-based scaling relation, which predicts that the static LS intensity per unit concentration is linearly proportional to the M.W. of a synthetic dendrimer in a dilute solution, as a tool to examine the purity of high-generational compounds and to monitor the progress of dendrimer preparations. Without using expensive equipment, such as nuclear magnetic resonance or mass spectrometry, this method only requires an affordable flow-injection set-up with an LS detector. Solutions of the purified dendrimers, including the poly(amidoamine) (PAMAM) dendrimer and its fourth- to seventh-generation pyridine derivatives with sizes in the range of 5–9 nm, were used to establish the scaling relation with high linearity. Artificially impure mixtures of sixth- and seventh-generation dendrimers revealed significant deviations from linearity. The raw synthesized products of the pyridine-modified PAMAM dendrimer, which included incompletely reacted dendrimers, were also examined to gauge the reaction progress. As a reaction toward a particular generational derivative of the PAMAM dendrimers proceeded over time, deviations from the linear scaling relation decreased. The difference between the polydispersity index of the incompletely converted products and that of the pure compounds was only about 0.01. The Debye equation-based scaling relation is therefore much more useful than the polydispersity index for monitoring conversion processes toward an indicated functionality number in a given preparation.

  3. The Debye light scattering equation’s scaling relation reveals the purity of synthetic dendrimers

    International Nuclear Information System (INIS)

    Tseng, Hui-Yu; Chen, Hsiao-Ping; Tang, Yi-Hsuan; Chen, Hui-Ting; Kao, Chai-Lin; Wang, Shau-Chun

    2016-01-01

Spherical dendrimer structures cannot be modeled using conventional polymer models of random-coil or rod-like configurations when calibrating the static light scattering (LS) detectors used to determine the molecular weight (M.W.) of a dendrimer or to directly assess the purity of a synthetic compound. In this paper, we used the Debye equation-based scaling relation, which predicts that the static LS intensity per unit concentration is linearly proportional to the M.W. of a synthetic dendrimer in a dilute solution, as a tool to examine the purity of high-generational compounds and to monitor the progress of dendrimer preparations. Without using expensive equipment, such as nuclear magnetic resonance or mass spectrometry, this method only requires an affordable flow-injection set-up with an LS detector. Solutions of the purified dendrimers, including the poly(amidoamine) (PAMAM) dendrimer and its fourth- to seventh-generation pyridine derivatives with sizes in the range of 5–9 nm, were used to establish the scaling relation with high linearity. Artificially impure mixtures of sixth- and seventh-generation dendrimers revealed significant deviations from linearity. The raw synthesized products of the pyridine-modified PAMAM dendrimer, which included incompletely reacted dendrimers, were also examined to gauge the reaction progress. As a reaction toward a particular generational derivative of the PAMAM dendrimers proceeded over time, deviations from the linear scaling relation decreased. The difference between the polydispersity index of the incompletely converted products and that of the pure compounds was only about 0.01. The Debye equation-based scaling relation is therefore much more useful than the polydispersity index for monitoring conversion processes toward an indicated functionality number in a given preparation.

  4. BOX-COX REGRESSION METHOD IN TIME SCALING

    Directory of Open Access Journals (Sweden)

    ATİLLA GÖKTAŞ

    2013-06-01

Full Text Available The Box-Cox regression method, with power transformations λj for j = 1, 2, ..., k, can be used when the dependent variable and the error term of a linear regression model do not satisfy the continuity and normality assumptions. We discuss how the smallest mean square error is obtained by choosing the optimum power transformation λj, for j = 1, 2, ..., k, of Y. The Box-Cox regression method is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of using the Box-Cox regression method are discussed in the context of differentiation and differential analysis of the time scale concept.
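As an illustration of the power-transformation idea (not the authors' time-scale derivation), SciPy's maximum-likelihood Box-Cox fit recovers a near-logarithmic transform for heavily skewed data; the dataset below is invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Heavily right-skewed positive data: lognormal, so log() is the ideal transform
y = rng.lognormal(mean=0.0, sigma=1.0, size=500)

# Maximum-likelihood estimate of the Box-Cox power lambda
y_t, lam = stats.boxcox(y)
print(f"estimated lambda: {lam:.3f}")   # close to 0, i.e. a log transform

# The transform removes most of the skewness
print(stats.skew(y), stats.skew(y_t))
```

A λ near 1 would indicate no transform is needed; values near 0 or 0.5 suggest log or square-root transforms, respectively.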

  5. DISK GALAXY SCALING RELATIONS IN THE SFI++: INTRINSIC SCATTER AND APPLICATIONS

    International Nuclear Information System (INIS)

    Saintonge, Amelie; Spekkens, Kristine

    2011-01-01

We study the scaling relations between the luminosities, sizes, and rotation velocities of disk galaxies in the SFI++, with a focus on the size-luminosity (RL) and size-rotation velocity (RV) relations. Using isophotal radii instead of disk scale lengths as a size indicator, we find relations that are significantly tighter than previously reported: the correlation coefficients of the template RL and RV relations are r = 0.97 and r = 0.85, respectively, which rival that of the more widely studied LV (Tully-Fisher) relation. The scatter in the SFI++ RL relation is 2.5-4 times smaller than previously reported for various samples, which we attribute to the reliability of isophotal radii relative to disk scale lengths. After carefully accounting for all measurement errors, our scaling relation error budgets are consistent with a constant intrinsic scatter in the LV and RV relations for velocity widths log W ≳ 2.4, with evidence for increasing intrinsic scatter below this threshold. The scatter in the RL relation is consistent with constant intrinsic scatter that is biased by incompleteness at the low-L end. Possible applications of the unprecedentedly tight SFI++ RV and RL relations are investigated. Just like the Tully-Fisher relation, the RV relation can be used as a distance indicator: we derive distances to galaxies with primary Cepheid distances that are accurate to 25%, and reverse the problem to measure a Hubble constant H₀ = 72 ± 7 km s⁻¹ Mpc⁻¹. Combining the small intrinsic scatter of our RL relation (ε_int = 0.034 ± 0.001 log[h⁻¹ kpc]) with a simple model for disk galaxy formation, we find an upper limit on the range of disk spin parameters that is a factor of ∼7 smaller than that of the halo spin parameters predicted by cosmological simulations. This likely implies that the halos hosting Sc galaxies have a much narrower distribution of spin parameters than previously thought.
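Once a scaling relation yields per-galaxy distances, the Hubble constant follows from the ratio of recession velocity to distance; a toy sketch with invented (distance, velocity) pairs, not the SFI++ measurements:

```python
# Toy Hubble-constant estimate from (distance, recession velocity) pairs.
# All values are invented for illustration; the paper derives
# H0 = 72 +/- 7 km/s/Mpc from its RV-relation distances.
galaxies = [
    (35.0, 2520.0),   # (distance in Mpc, velocity in km/s)
    (50.0, 3600.0),
    (80.0, 5760.0),
]
h0 = sum(v / d for d, v in galaxies) / len(galaxies)
print(f"H0 ~ {h0:.0f} km/s/Mpc")
```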

  6. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

Chacón Rebollo, Tomás; Dia, Ben Mansour

    2015-01-01

This paper introduces a variational multi-scale method in which the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncating this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales due to the addition of the sub-grid spectral scales.

  7. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

    Chacón Rebollo, Tomás

    2015-03-01

This paper introduces a variational multi-scale method in which the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncating this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales due to the addition of the sub-grid spectral scales.

  8. Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.

    Science.gov (United States)

    Tomas, Jose M.; Oliver, Amparo

    1999-01-01

    Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)

  9. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    Science.gov (United States)

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
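The residual water-balance calculation described above reduces to a one-line formula at annual scale; a minimal sketch with illustrative basin values, not data from the review:

```python
# Annual basin water balance: ET = P - Q - dS, with dS assumed ~0 at annual
# time scales, as in the residual approach described in the review.
def basin_et(precip_mm: float, discharge_mm: float, storage_change_mm: float = 0.0) -> float:
    """Estimate basin-scale evapotranspiration (mm/yr) as the water-balance residual."""
    return precip_mm - discharge_mm - storage_change_mm

# Hypothetical basin: 900 mm/yr precipitation, 250 mm/yr discharge
print(basin_et(900.0, 250.0))
```

Such residual estimates are what the review describes as the reference against which remote-sensing ET models are validated.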

  10. Development of polygon elements based on the scaled boundary finite element method

    International Nuclear Information System (INIS)

    Chiong, Irene; Song Chongmin

    2010-01-01

We aim to extend the scaled boundary finite element method to construct conforming polygon elements. The development of polygonal finite elements is highly anticipated in computational mechanics, as greater flexibility and accuracy can be achieved using these elements. The scaled boundary polygonal finite element will enable new developments in mesh generation, better accuracy from a higher-order approximation, and better transition elements in finite element meshes. Polygon elements of arbitrary numbers of edges and order have been developed successfully. The edges of an element are discretised with line elements. The displacement solution of the scaled boundary finite element method is used in the development of shape functions. They are shown to be smooth and continuous within the element, and to satisfy compatibility and completeness requirements. Furthermore, eigenvalue decomposition has been used to depict element modes, and the outcomes indicate the ability of the scaled boundary polygonal element to express rigid-body and constant-strain modes. Numerical tests are presented; the patch test is passed and constant-strain modes are verified. Accuracy and convergence of the method are also presented, and the performance of the scaled boundary polygonal finite element is verified on Cook's swept panel problem. Results show that the scaled boundary polygonal finite element method outperforms a traditional mesh, achieving accuracy and convergence with fewer nodes. The proposed method is also shown to be truly flexible, applying to arbitrary n-gons formed of irregular and non-convex polygons.

  11. Research on performance evaluation and anti-scaling mechanism of green scale inhibitors by static and dynamic methods

    International Nuclear Information System (INIS)

    Liu, D.

    2011-01-01

Increasing environmental concerns and discharge limitations have imposed additional challenges in treating process waters. Thus, the concept of 'Green Chemistry' was proposed and green scale inhibitors became a focus of water treatment technology. Finding economical and environmentally friendly inhibitors is one of the major current research goals. In this dissertation, the inhibition performance of different phosphonates as CaCO₃ scale inhibitors in simulated cooling water was evaluated. Homo-, co-, and ter-polymers were also investigated for their performance as Ca-phosphonate inhibitors. Adding polymers as inhibitors together with phosphonates could reduce Ca-phosphonate precipitation and enhance the inhibition efficiency for CaCO₃ scale. The synergistic effect of poly-aspartic acid (PASP) and poly-epoxy-succinic acid (PESA) on scale inhibition has been studied using both static and dynamic methods. Results showed that the anti-scaling performance of PASP combined with PESA was superior to that of PASP or PESA alone for CaCO₃, CaSO₄ and BaSO₄ scale. The influence of dosage, temperature and Ca²⁺ concentration was also investigated in a simulated cooling water circuit. Moreover, SEM analysis demonstrated the modification of crystalline morphology in the presence of PASP and PESA. In this work, we also investigated the respective inhibition effectiveness of copper and zinc ions for scaling in drinking water by the method of Rapid Controlled Precipitation (RCP). The results indicated that zinc and copper ions are highly efficient inhibitors at low concentrations, and SEM and IR analyses showed that copper and zinc ions could affect calcium carbonate germination and change the crystal morphology. Moreover, the influence of temperature and dissolved CO₂ on the scaling potential of a mineral water (Salvetat) in the presence of copper and zinc ions was studied by laboratory experiments. An ideal scale inhibitor should be a solid form

  12. Large-scale circulation departures related to wet episodes in northeast Brazil

    Science.gov (United States)

    Sikdar, D. N.; Elsner, J. B.

    1985-01-01

Large-scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged, and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: low-level easterlies beneath upper-level westerlies; weak meridional winds; high relative humidity over the Amazon basin and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large-scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical high SLPs have large departures during wet episodes, implying a short-period oscillation in the Southern Hemisphere Hadley circulation.

  13. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    Science.gov (United States)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacements are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by ⟨δ²⟩ ∼ 2D_ν t^β Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, ⟨x²⟩ ∼ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy, ⟨v²⟩ ∼ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of ⟨δ²⟩ and ⟨x²⟩ are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
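As a sanity check of the β = 0 case (finite kinetic energy, where the time and ensemble scalings coincide), a short simulation of ordinary Brownian motion shows the time-averaged MSD growing linearly in the lag time Δ. This is an illustrative sketch, not one of the scale-invariant models studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
# Brownian path with D = 1: increment variance 2*D*dt per step
x = np.cumsum(rng.normal(scale=np.sqrt(2 * dt), size=n))

def ta_msd(x, lag):
    """Time-averaged MSD of a single trajectory at an integer lag."""
    d = x[lag:] - x[:-lag]
    return np.mean(d * d)

lags = np.array([10, 100, 1000])
msd = np.array([ta_msd(x, k) for k in lags])
# For normal diffusion the TA-MSD grows linearly: log-log slope ~ 1
slope = np.polyfit(np.log(lags * dt), np.log(msd), 1)[0]
print(f"fitted exponent: {slope:.2f}")
```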

  14. Gamma Ray Tomographic Scan Method for Large Scale Industrial Plants

    International Nuclear Information System (INIS)

    Moon, Jin Ho; Jung, Sung Hee; Kim, Jong Bum; Park, Jang Geun

    2011-01-01

Gamma-ray tomography systems have been used to investigate chemical processes for the last decade. There have been many applications of gamma-ray tomography at laboratory scale, but few at industrial scale. Non-tomographic equipment with gamma-ray sources is often used in process diagnosis; gamma radiography, gamma column scanning and the radioisotope tracer technique are examples of gamma-ray applications in industry. While much non-tomographic gamma-ray equipment operates outdoors, most gamma-ray tomographic systems have remained indoor equipment. As gamma tomography has developed, however, demand for its application to real-scale plants has also increased. To develop an industrial-scale system, we introduced a gamma-ray tomographic system with fixed detectors and a rotating source. The general system configuration is similar to fourth-generation geometry, but the main effort has been made to enable rapid installation of the system at a real-scale industrial plant. This work is a first attempt to apply fourth-generation industrial gamma tomographic scanning experimentally. Individual 0.5-inch NaI detectors were arranged in a circle around the industrial plant for gamma-ray detection. This tomographic scan method reduces mechanical complexity and requires a much smaller space than a conventional CT. Those properties make it easy to obtain measurement data for a real-scale plant

  15. Projection Of The Stellar To Halo Mass Relation Into The Scaling Relations Of A Disc Galaxy Population

    Science.gov (United States)

    Mancillas, Brisa; Ávila-Reese, Vladimir; Rodríguez-Puebla, Aldo; Valls-Gabaud, David

    2017-06-01

Several pieces of evidence suggest that disk formation is the generic process of galaxy assembly, while the spheroidal component arises from the merging/interactions of disks as well as from their secular evolution. To understand galaxy formation and evolution, a cosmological framework is required. The current cosmological paradigm is summarized in the so-called Λ-cold dark matter (ΛCDM) model. The statistical connection between the masses of observed galaxies and those of the simulated CDM halos in large volumes leads to the galaxy-halo mass relation, which summarizes the main astrophysical processes of galaxy formation and evolution (gas heating and cooling, star formation, SN- and AGN-driven feedback, etc.). An important question is how this relation, constrained by semi-empirical methods (e.g., Rodriguez-Puebla et al. 2014), is "projected" into the disk galaxy scaling relations and other galaxy correlations. To explore this question, we generate a synthetic catalog of thousands of disk/halo systems by means of an extended Mo, Mao & White (1998) model, using as input the baryonic-to-halo mass relation, fbar(Mh), of local disk galaxies as recently constrained by Calette et al. (2015).

  16. Large Scale Leach Test Facility: Development of equipment and methods, and comparison to MCC-1 leach tests

    International Nuclear Information System (INIS)

    Pellarin, D.J.; Bickford, D.F.

    1985-01-01

This report describes the test equipment and methods, and documents the results of the first large-scale MCC-1 experiments in the Large Scale Leach Test Facility (LSLTF). Two experiments were performed using 1-ft-long samples sectioned from the middle of canister MS-11. The leachant used in the experiments was ultrapure deionized water, an aggressive and well-characterized leachant providing high sensitivity for liquid sample analyses. All of the original test plan objectives were successfully met. Equipment and procedures have been developed for large-sample-size leach testing. The statistical reliability of the method has been determined, and 'benchmark' data developed to relate small-scale leach testing to full-size waste forms. The facility is unique, and provides sampling reliability and flexibility not possible in smaller laboratory-scale tests. Future use of this facility should simplify and accelerate the development of leaching models and repository-specific data. The change in leachability of less than a factor of 3, corresponding to a 200,000:1 increase in sample volume, enhances the credibility of the small-scale test data that preceded this work, and supports the ability of the DWPF waste form to meet repository criteria

  17. Oscillating red giants in eclipsing binary systems: empirical reference value for asteroseismic scaling relation

    Science.gov (United States)

    Themeßl, N.; Hekker, S.; Southworth, J.; Beck, P. G.; Pavlovski, K.; Tkachenko, A.; Angelou, G. C.; Ball, W. H.; Barban, C.; Corsaro, E.; Elsworth, Y.; Handberg, R.; Kallinger, T.

    2018-05-01

    The internal structures and properties of oscillating red-giant stars can be accurately inferred through their global oscillation modes (asteroseismology). Based on 1460 days of Kepler observations we perform a thorough asteroseismic study to probe the stellar parameters and evolutionary stages of three red giants in eclipsing binary systems. We present the first detailed analysis of individual oscillation modes of the red-giant components of KIC 8410637, KIC 5640750 and KIC 9540226. We obtain estimates of their asteroseismic masses, radii, mean densities and logarithmic surface gravities by using the asteroseismic scaling relations as well as grid-based modelling. As these red giants are in double-lined eclipsing binaries, it is possible to derive their independent dynamical masses and radii from the orbital solution and compare it with the seismically inferred values. For KIC 5640750 we compute the first spectroscopic orbit based on both components of this system. We use high-resolution spectroscopic data and light curves of the three systems to determine up-to-date values of the dynamical stellar parameters. With our comprehensive set of stellar parameters we explore consistencies between binary analysis and asteroseismic methods, and test the reliability of the well-known scaling relations. For the three red giants under study, we find agreement between dynamical and asteroseismic stellar parameters in cases where the asteroseismic methods account for metallicity, temperature and mass dependence as well as surface effects. We are able to attain agreement from the scaling laws in all three systems if we use Δνref, emp = 130.8 ± 0.9 μHz instead of the usual solar reference value.
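The asteroseismic scaling relations referred to above, in their standard form, turn the global seismic observables (ν_max, Δν, Teff) into mass and radius estimates. The sketch below uses the empirical reference Δν_ref = 130.8 μHz quoted in the abstract; the solar reference values and the sample red-giant inputs are illustrative assumptions, not the paper's measurements:

```python
# Standard asteroseismic scaling relations; solar references are assumptions.
NU_MAX_SUN = 3090.0   # muHz
DNU_REF = 130.8       # muHz, empirical reference from the study
TEFF_SUN = 5777.0     # K

def scaling_mass_radius(nu_max, delta_nu, teff):
    """Return (M/Msun, R/Rsun) from the global seismic observables."""
    r = (nu_max / NU_MAX_SUN) * (delta_nu / DNU_REF) ** -2 * (teff / TEFF_SUN) ** 0.5
    m = (nu_max / NU_MAX_SUN) ** 3 * (delta_nu / DNU_REF) ** -4 * (teff / TEFF_SUN) ** 1.5
    return m, r

# Illustrative red-giant observables (invented, not from the paper)
m, r = scaling_mass_radius(nu_max=46.0, delta_nu=4.6, teff=4800.0)
print(f"M = {m:.2f} Msun, R = {r:.1f} Rsun")
```

The paper's point is that the reference value Δν_ref in these relations matters: replacing the usual solar Δν with the empirical 130.8 ± 0.9 μHz brings the seismic masses and radii into agreement with the dynamical ones.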

  18. A multiple-scaling method of the computation of threaded structures

    International Nuclear Information System (INIS)

    Andrieux, S.; Leger, A.

    1989-01-01

    The numerical computation of threaded structures usually leads to very large finite elements problems. It was therefore very difficult to carry out some parametric studies, especially in non-linear cases involving plasticity or unilateral contact conditions. Nevertheless, these parametric studies are essential in many industrial problems, for instance for the evaluation of various repairing processes of the closure studs of PWR. It is well known that such repairing generally involves several modifications of the thread geometry, of the number of active threads, of the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method, which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part, and a local part. The local problem is solved by F.E.M. on the precise geometry of the thread of some elementary loadings. The global one is formulated on the gudgeon scale and is reduced to a monodimensional one. The resolution of this global problem leads to the unsignificant computational cost. Then, a post-processing gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scales approach, the method is described. The validation by comparison with a direct F.E. computation and some further applications are presented

  19. Method of producing carbon coated nano- and micron-scale particles

    Science.gov (United States)

    Perry, W. Lee; Weigle, John C; Phillips, Jonathan

    2013-12-17

    A method of making carbon-coated nano- or micron-scale particles comprising entraining particles in an aerosol gas, providing a carbon-containing gas, providing a plasma gas, mixing the aerosol gas, the carbon-containing gas, and the plasma gas proximate a torch, bombarding the mixed gases with microwaves, and collecting resulting carbon-coated nano- or micron-scale particles.

  20. Scaling Relations between Gas and Star Formation in Nearby Galaxies

    Science.gov (United States)

    Bigiel, Frank; Leroy, Adam; Walter, Fabian

    2011-04-01

High resolution, multi-wavelength maps of a sizeable set of nearby galaxies have made it possible to study how the surface densities of H i, H2 and star formation rate (ΣHI, ΣH2, ΣSFR) relate on scales of a few hundred parsecs. At these scales, individual galaxy disks are comfortably resolved, making it possible to assess gas-SFR relations with respect to environment within galaxies. ΣH2, traced by CO intensity, shows a strong correlation with ΣSFR, and the ratio between these two quantities, the molecular gas depletion time, appears to be constant at about 2 Gyr in large spiral galaxies. Within the star-forming disks of galaxies, ΣSFR shows almost no correlation with ΣHI. In the outer parts of galaxies, however, ΣSFR does scale with ΣHI, though with large scatter. Combining data from these different environments yields a distribution with multiple regimes in Σgas - ΣSFR space. If the underlying assumptions to convert observables to physical quantities are matched, even combined datasets based on different SFR tracers, methodologies and spatial scales occupy a well-defined locus in Σgas - ΣSFR space.
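The molecular gas depletion time quoted above is simply the ratio of the two surface densities; a minimal sketch with illustrative values, not measurements from the study:

```python
# Molecular gas depletion time: t_dep = Sigma_H2 / Sigma_SFR
# Illustrative surface densities (invented), chosen to give the ~2 Gyr
# depletion time quoted for large spirals.
sigma_h2 = 10.0      # Msun / pc^2
sigma_sfr = 5.0e-9   # Msun / yr / pc^2

t_dep_gyr = sigma_h2 / sigma_sfr / 1e9
print(f"depletion time: {t_dep_gyr:.1f} Gyr")
```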

  1. A spatial method to calculate small-scale fisheries effort in data poor scenarios.

    Science.gov (United States)

    Johnson, Andrew Frederick; Moreno-Báez, Marcia; Giron-Nava, Alfredo; Corominas, Julia; Erisman, Brad; Ezcurra, Exequiel; Aburto-Oropeza, Octavio

    2017-01-01

    To gauge the collateral impacts of fishing we must know where fishing boats operate and how much they fish. Although small-scale fisheries land approximately the same amount of fish for human consumption as industrial fleets globally, methods of estimating their fishing effort are comparatively poor. We present an accessible, spatial method of calculating the effort of small-scale fisheries based on two simple measures that are available, or at least easily estimated, in even the most data-poor fisheries: the number of boats and the local coastal human population. We illustrate the method using a small-scale fisheries case study from the Gulf of California, Mexico, and show that our measure of Predicted Fishing Effort (PFE), measured as the number of boats operating in a given area per day adjusted by the number of people in local coastal populations, can accurately predict fisheries landings in the Gulf. Comparing our values of PFE to commercial fishery landings throughout the Gulf also indicates that the current number of small-scale fishing boats in the Gulf is approximately double what is required to land theoretical maximum fish biomass. Our method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This new method provides an important first step towards estimating the fishing effort of small-scale fleets globally.
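A sketch of the Predicted Fishing Effort (PFE) idea described above. The authors' exact adjustment is not reproduced here, so this simply weights boat-days by the local share of the coastal population, with invented numbers:

```python
# Predicted Fishing Effort (PFE): boats operating per day, adjusted by the
# local coastal population. The population-share weighting below is an
# illustrative assumption, not the authors' exact formula.
def predicted_fishing_effort(n_boats: int, local_pop: int, total_pop: int) -> float:
    return n_boats * (local_pop / total_pop)

# Hypothetical fishing areas along a coastline
areas = [
    {"name": "area_A", "boats": 120, "pop": 8_000},
    {"name": "area_B", "boats": 40, "pop": 2_000},
]
total_pop = sum(a["pop"] for a in areas)
for a in areas:
    print(a["name"], predicted_fishing_effort(a["boats"], a["pop"], total_pop))
```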

  2. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit-moment-equation method, the direct implicit method, orbit averaging, and subcycling
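Of the methods listed, subcycling is the simplest to sketch: the fast dynamics are advanced with many small substeps inside each large outer step. The toy below stands in for a fast species with a single harmonic oscillator and is purely illustrative, not a plasma model from the survey:

```python
# Subcycling sketch: advance "fast" dynamics with n_sub small substeps per
# large outer step. A harmonic oscillator (frequency omega) stands in for
# the fast species; semi-implicit Euler keeps the substeps stable.
def subcycled_step(x, v, dt_slow, n_sub, omega=10.0):
    """Advance (x, v) over one slow step using dt_slow / n_sub substeps."""
    dt = dt_slow / n_sub
    for _ in range(n_sub):
        v -= omega**2 * x * dt   # kick
        x += v * dt              # drift
    return x, v

x, v = 1.0, 0.0
for _ in range(100):             # 100 slow steps of length 0.1
    x, v = subcycled_step(x, v, dt_slow=0.1, n_sub=50)

# The symplectic substeps keep the energy close to its initial value (50.0)
energy = 0.5 * v**2 + 0.5 * 10.0**2 * x**2
print(energy)
```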

  3. Scaling Relations for Adsorption Energies on Doped Molybdenum Phosphide Surfaces

    International Nuclear Information System (INIS)

    Fields, Meredith; Tsai, Charlie; Chen, Leanne D.; Abild-Pedersen, Frank; Nørskov, Jens K.; Chan, Karen

    2017-01-01

    Molybdenum phosphide (MoP), a well-documented catalyst for applications ranging from hydrotreating reactions to electrochemical hydrogen evolution, has yet to be mapped from a more fundamental perspective, particularly in the context of transition-metal scaling relations. In this work, we use periodic density functional theory to extend linear scaling arguments to doped MoP surfaces and understand the behavior of the phosphorus active site. The derived linear relationships for hydrogenated C, N, and O species on a variety of doped surfaces suggest that phosphorus experiences a shift in preferred bond order depending on the degree of hydrogen substitution on the adsorbate molecule. This shift in phosphorus hybridization, dependent on the bond order of the adsorbate to the surface, can result in selective bond weakening or strengthening of chemically similar species. As a result, we discuss how this behavior deviates from transition-metal, sulfide, carbide, and nitride scaling relations, and we discuss potential applications in the context of electrochemical reduction reactions.

  4. Constructing sites at a large scale - towards new design (education) methods

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2010-01-01

Since the 1990s the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. The current paradigm of planning by projects reinforces the role of the design disciplines within the development of our urban landscapes. At the same time, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective the formulation of the design problem or brief becomes an integrated part of the design process. This paper discusses new design (education) methods based on a relational concept of urban sites and design processes, using actor-network theory as the theoretical frame.

  5. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    Energy Technology Data Exchange (ETDEWEB)

    Lenormand, R.; Thiele, M.R. [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to the permeability heterogeneity and viscous fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, the pseudo relative permeability is derived by using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternate method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the "MHD" equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small-scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large-scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as functions of D, H and M. The main result is that this approach leads to time-dependent pseudo relative permeabilities. Finally, the calculated values are compared to those derived by history matching using fine-grid numerical simulations.

  6. Relative scale and the strength and deformability of rock masses

    Science.gov (United States)

    Schultz, Richard A.

    1996-09-01

The strength and deformation of rocks depend strongly on the degree of fracturing, which can be assessed in the field and related systematically to these properties. Appropriate Mohr envelopes obtained from the Rock Mass Rating (RMR) classification system and the Hoek-Brown criterion for outcrops and other large-scale exposures of fractured rocks show that rock-mass cohesive strength, tensile strength, and unconfined compressive strength can be reduced by as much as a factor of ten relative to values for the unfractured material. The rock-mass deformation modulus is also reduced relative to Young's modulus. A "cook-book" example illustrates the use of RMR in field applications. The smaller values of rock-mass strength and deformability imply that there is a particular scale of observation whose identification is critical to applying laboratory measurements and associated failure criteria to geologic structures.
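For context, the Hoek-Brown criterion cited above is commonly written σ1 = σ3 + σci·(mb·σ3/σci + s)^a with a = 0.5 in its original form, where s = 1 for intact rock and s ≪ 1 for a fractured rock mass. A minimal sketch (the parameter values below are illustrative, not taken from the paper):

```python
def hoek_brown_sigma1(sigma3, sigma_ci, m_b, s, a=0.5):
    """Major principal stress at failure for the original Hoek-Brown
    criterion: sigma1 = sigma3 + sigma_ci*(m_b*sigma3/sigma_ci + s)**a."""
    return sigma3 + sigma_ci * (m_b * sigma3 / sigma_ci + s) ** a

# Unconfined compressive strength (sigma3 = 0), illustrative parameters:
intact = hoek_brown_sigma1(0.0, sigma_ci=100.0, m_b=10.0, s=1.0)  # intact rock
mass = hoek_brown_sigma1(0.0, sigma_ci=100.0, m_b=1.0, s=0.01)    # rock mass
assert mass < intact / 9.0  # fracturing cuts strength by about a factor of ten
```

With s = 0.01 the unconfined strength drops from 100 to 10 in these units, which is the order-of-magnitude reduction the abstract describes.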

  7. Measuring emotions during epistemic activities: the Epistemically-Related Emotion Scales.

    Science.gov (United States)

    Pekrun, Reinhard; Vogl, Elisabeth; Muis, Krista R; Sinatra, Gale M

    2017-09-01

    Measurement instruments assessing multiple emotions during epistemic activities are largely lacking. We describe the construction and validation of the Epistemically-Related Emotion Scales, which measure surprise, curiosity, enjoyment, confusion, anxiety, frustration, and boredom occurring during epistemic cognitive activities. The instrument was tested in a multinational study of emotions during learning from conflicting texts (N = 438 university students from the United States, Canada, and Germany). The findings document the reliability, internal validity, and external validity of the instrument. A seven-factor model best fit the data, suggesting that epistemically-related emotions should be conceptualised in terms of discrete emotion categories, and the scales showed metric invariance across the North American and German samples. Furthermore, emotion scores changed over time as a function of conflicting task information and related significantly to perceived task value and use of cognitive and metacognitive learning strategies.

  8. On the link between column density distribution and density scaling relation in star formation regions

    Science.gov (United States)

    Veltchev, Todor; Donkov, Sava; Stanchev, Orlin

    2017-07-01

We present a method to derive the density scaling relation ⟨ρ⟩ ∝ L^{-α} in regions of star formation or in their turbulent vicinities from straightforward binning of the column-density distribution (N-pdf). The outcome of the method is studied for three types of N-pdf: power law (7/5 ≤ α ≤ 5/3), lognormal (0.7 ≲ α ≲ 1.4) and a combination of lognormals. In the last case, the method of Stanchev et al. (2015) was also applied for comparison and a very weak (or close to zero) correlation was found. We conclude that the considered `binning approach' reflects rather the local morphology of the N-pdf, with no reference to the physical conditions in a considered region. The rough consistency of the derived slopes with the widely adopted Larson's (1981) value α ≈ 1.1 is suggested to support claims that the density-size relation in molecular clouds is indeed an artifact of the observed N-pdf.
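The abstract does not spell out the binning procedure, but the final step, recovering the slope α of a density-size relation ⟨ρ⟩ ∝ L^(−α) from binned (L, ⟨ρ⟩) pairs, is a log-log least-squares fit. A generic sketch with synthetic data (illustrative only, not the paper's pipeline):

```python
import numpy as np

def fit_density_size_slope(L, rho):
    """Fit rho ∝ L**(-alpha) by linear regression in log-log space."""
    slope, _intercept = np.polyfit(np.log10(L), np.log10(rho), 1)
    return -slope

# Synthetic bins following a Larson-like alpha = 1.1 exactly
L = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # sizes (pc), hypothetical
rho = 100.0 * L ** -1.1                    # arbitrary normalization
alpha = fit_density_size_slope(L, rho)
assert abs(alpha - 1.1) < 1e-9
```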

  9. Feasibility of a shorter Goal Attainment Scaling method for a pediatric spasticity clinic - The 3-milestones GAS.

    Science.gov (United States)

    Krasny-Pacini, A; Pauly, F; Hiebel, J; Godon, S; Isner-Horobeti, M-E; Chevignard, M

    2017-07-01

    Goal Attainment Scaling (GAS) is a method for writing personalized evaluation scales to quantify progress toward defined goals. It is useful in rehabilitation but is hampered by the experience required to adequately "predict" the possible outcomes relating to a particular goal before treatment and the time needed to describe all 5 levels of the scale. Here we aimed to investigate the feasibility of using GAS in a clinical setting of a pediatric spasticity clinic with a shorter method, the "3-milestones" GAS (goal setting with 3 levels and goal rating with the classical 5 levels). Secondary aims were to (1) analyze the types of goals children's therapists set for botulinum toxin treatment and (2) compare the score distribution (and therefore the ability to predict outcome) by goal type. Therapists were trained in GAS writing and prepared GAS scales in the regional spasticity-management clinic they attended with their patients and families. The study included all GAS scales written during a 2-year period. GAS score distribution across the 5 GAS levels was examined to assess whether the therapist could reliably predict outcome and whether the 3-milestones GAS yielded similar distributions as the original GAS method. In total, 541 GAS scales were written and showed the expected score distribution. Most scales (55%) referred to movement quality goals and fewer (29%) to family goals and activity domains. The 3-milestones GAS method was feasible within the time constraints of the spasticity clinic and could be used by local therapists in cooperation with the hospital team. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  10. Scaling relations in elastic scattering cross sections between multiply charged ions and hydrogen

    International Nuclear Information System (INIS)

    Rodriguez, V.D.

    1991-01-01

    Differential elastic scattering cross sections of bare ions from hydrogen are calculated using the eikonal approximation. The results satisfy a scaling relation involving the scattering angle, the ion charge and a factor related to the ion mass. A semiclassical explanation in terms of a distant collision hypothesis for small scattering angle is proposed. A unified picture of related scaling rules found in direct processes is discussed. (author)

  11. Method of producing exfoliated graphite, flexible graphite, and nano-scaled graphene platelets

    Science.gov (United States)

    Zhamu, Aruna; Shi, Jinjun; Guo, Jiusheng; Jang, Bor Z.

    2010-11-02

    The present invention provides a method of exfoliating a layered material (e.g., graphite and graphite oxide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of graphite, graphite oxide, or a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites. Nano-scaled graphene platelets are much lower-cost alternatives to carbon nano-tubes or carbon nano-fibers.

  12. Task-Management Method Using R-Tree Spatial Cloaking for Large-Scale Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Yan Li

    2017-12-01

Full Text Available With the development of sensor technology and the popularization of the data-driven service paradigm, spatial crowdsourcing systems have become an important way of collecting map-based location data. However, large-scale task management and location privacy are important factors for participants in spatial crowdsourcing. In this paper, we propose the use of an R-tree spatial cloaking-based task-assignment method for large-scale spatial crowdsourcing. We use an estimated R-tree based on the requested crowdsourcing tasks to reduce the crowdsourcing server-side insertion cost and enable scalability. By using Minimum Bounding Rectangle (MBR)-based spatial anonymous data without exact position data, this method preserves the location privacy of participants in a simple way. In our experiment, we showed that our proposed method is faster than the current method, and is very efficient as the scale increases.
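The MBR-based cloaking idea can be illustrated in a few lines: report only the minimum bounding rectangle of several nearby participant locations instead of exact coordinates (a toy sketch; the paper's R-tree construction and task-assignment logic are not reproduced here).

```python
def minimum_bounding_rectangle(points):
    """MBR of (x, y) points: ((min_x, min_y), (max_x, max_y))."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

# Three participants cloaked into one rectangle; exact positions are
# replaced by the rectangle before being sent to the crowdsourcing server.
cloaked = minimum_bounding_rectangle([(2.0, 3.0), (5.0, 1.0), (4.0, 6.0)])
assert cloaked == ((2.0, 1.0), (5.0, 6.0))
```

In an R-tree these rectangles are exactly the node keys, which is why cloaked data can be inserted and queried without revealing point coordinates.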

  13. Boundary layers and scaling relations in natural thermal convection

    Science.gov (United States)

    Shishkina, Olga; Lohse, Detlef; Grossmann, Siegfried

    2017-11-01

We analyse the boundary layer (BL) equations in natural thermal convection, which includes vertical convection (VC), where the fluid is confined between two differently heated vertical walls, horizontal convection (HC), where the fluid is heated at one part of the bottom plate and cooled at some other part, and Rayleigh-Bénard convection (RBC). For BL dominated regimes we derive the scaling relations of the Nusselt and Reynolds numbers (Nu, Re) with the Rayleigh and Prandtl numbers (Ra, Pr). For VC the scaling relations are obtained directly from the BL equations, while for HC they are derived by applying the Grossmann-Lohse theory to the case of VC. In particular, for RBC with large Pr we derive Nu ~ Pr^0 Ra^{1/3} and Re ~ Pr^{-1} Ra^{2/3}. The work is supported by the Deutsche Forschungsgemeinschaft (DFG) under the Grant Sh 405/4 - Heisenberg fellowship.

  14. [Organizational well-being and work-related stress in health care organizations: validation of the Work-related Stress Assessment Scale].

    Science.gov (United States)

    Coluccia, Anna; Lorini, Francesca; Ferretti, Fabio; Pozza, Andrea; Gaetani, Marco

    2015-01-01

The issue of the assessment of work-related stress has stimulated, in recent years, the production of several theoretical paradigms and assessment tools. In this paper we present a new scale for the assessment of organizational well-being and work-related stress specific to healthcare organizations (Work-related Stress Assessment Scale - WSAS). The goal of the authors is to examine the psychometric properties of the scale, so that it can be used in the healthcare setting as a work-related stress assessment tool. The answers of 230 healthcare professionals belonging to different roles have been analyzed. The study was realized in 16 units of the University Hospital "S. Maria alle Scotte" of Siena. The exploratory factor analysis (EFA) revealed the presence of five factors with good internal consistency and reliability: "relationship to the structure of proximity" (α = 0.93), "change" (α = 0.92), "organization of work" (α = 0.81), "relationship with the company/governance" (α = 0.87), and "working environment" (α = 0.83). The analysis of SEM (structural equation models) confirmed the goodness of the factor solution (NNFI = 0.835, CFI = 0.921, RMSEA = 0.060). The good psychometric qualities, shortness and simplicity of the WSAS make it a useful aid in the assessment of work-related stress in health care organizations.

  15. New parametrization for the scale dependent growth function in general relativity

    International Nuclear Information System (INIS)

    Dent, James B.; Dutta, Sourish; Perivolaropoulos, Leandros

    2009-01-01

We study the scale-dependent evolution of the growth function δ(a,k) of cosmological perturbations in dark energy models based on general relativity. This scale dependence is more prominent on cosmological scales of 100 h^{-1} Mpc or larger. We derive a new scale-dependent parametrization which generalizes the well-known Newtonian approximation result f_0(a) ≡ d ln δ_0/d ln a = Ω(a)^γ (γ = 6/11 for ΛCDM), which is a good approximation on scales less than 50 h^{-1} Mpc. Our generalized parametrization is of the form f(a) = f_0(a)/(1 + ξ(a,k)), where ξ(a,k) = 3 H_0^2 Ω_{0m}/(a k^2). We demonstrate that this parametrization fits the exact result of a full general relativistic evaluation of the growth function up to horizon scales for both ΛCDM and dynamical dark energy. In contrast, the scale-independent parametrization does not provide a good fit on scales beyond 5% of the horizon scale (k ≅ 0.01 h Mpc^{-1}).
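The parametrization can be evaluated directly. A sketch for flat ΛCDM, with H_0 expressed in h Mpc^{-1} units (c = 1, so H_0 ≈ 1/2998) and an illustrative matter density Ω_{0m} = 0.3 (these parameter choices are mine, not the paper's):

```python
H0 = 1.0 / 2998.0   # Hubble constant in h/Mpc units (c = 1)
OM0 = 0.3           # present-day matter density parameter (illustrative)

def omega_m(a):
    """Matter density parameter at scale factor a for flat LambdaCDM."""
    return OM0 * a**-3 / (OM0 * a**-3 + 1.0 - OM0)

def growth_rate(a, k, gamma=6.0 / 11.0):
    """Scale-dependent growth rate f = f0/(1 + xi), where
    f0 = Omega_m(a)^gamma and xi = 3 H0^2 Omega_0m / (a k^2)."""
    f0 = omega_m(a) ** gamma
    xi = 3.0 * H0**2 * OM0 / (a * k**2)
    return f0 / (1.0 + xi)

# On sub-horizon scales (large k) the correction vanishes and f -> f0;
# near the horizon (k ~ 0.001 h/Mpc) the suppression becomes visible.
f_sub_horizon = growth_rate(1.0, k=1.0)       # k in h/Mpc
f_near_horizon = growth_rate(1.0, k=0.001)
assert f_near_horizon < f_sub_horizon
```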

  16. Large-scale circulation departures related to wet episodes in north-east Brazil

    Science.gov (United States)

    Sikdar, Dhirendra N.; Elsner, James B.

    1987-01-01

Large scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged; and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: low-level easterlies beneath upper-level westerlies; weak meridional winds; high relative humidity over the Amazon basin; and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical high SLPs have large departures during wet episodes, implying a short period oscillation in the Southern Hemisphere Hadley circulation.

  17. Violence-Related Attitudes and Beliefs: Scale Construction and Psychometrics

    Science.gov (United States)

    Brand, Pamela A.; Anastasio, Phyllis A.

    2006-01-01

    The 50-item Violence-Related Attitudes and Beliefs Scale (V-RABS) includes three subscales measuring possible causes of violent behavior (environmental influences, biological influences, and mental illness) and four subscales assessing possible controls of violent behavior (death penalty, punishment, prevention, and catharsis). Each subscale…

  18. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-07-21

In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
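A toy illustration of the sparse-map concept (a sketch of the abstract idea, not the authors' library): a sparse map from index set A to index set B can be held as a dict of sets, and two of the elementary operations named above, chaining and intersection, are then a few lines each.

```python
def chain(map_ab, map_bc):
    """Compose two sparse maps: each a maps to every c reachable via some b."""
    return {a: set().union(*[map_bc.get(b, set()) for b in bs])
            for a, bs in map_ab.items()}

def intersect(map1, map2):
    """Elementwise intersection of two sparse maps over a shared domain."""
    return {a: map1.get(a, set()) & map2.get(a, set())
            for a in set(map1) | set(map2)}

# Hypothetical index sets: basis functions -> atoms, atoms -> aux functions
bf_to_atom = {0: {0}, 1: {0, 1}}
atom_to_aux = {0: {10, 11}, 1: {12}}
bf_to_aux = chain(bf_to_atom, atom_to_aux)
assert bf_to_aux == {0: {10, 11}, 1: {10, 11, 12}}
```

Chained maps like `bf_to_aux` are what restrict tensor loops to their nonzero blocks, which is where the linear scaling comes from.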

  19. [Construction of a physiological aging scale for healthy people based on a modified Delphi method].

    Science.gov (United States)

    Long, Yao; Zhou, Xuan; Deng, Pengfei; Liao, Xiong; Wu, Lei; Zhou, Jianming; Huang, Helang

    2016-04-01

To build a physiological aging scale for healthy people.
 We collected age-related physiological items through literature screening and expert interviews. Two rounds of Delphi were implemented. The importance, feasibility and degree of authority of the physiological index system were graded. Using the analytic hierarchy process, we determined the weights of dimensions and items.
 Using the Delphi method, 17 physiological and other professional experts offered the following results: the coefficient of expert authority Cr was 0.86±0.03, and the coordination coefficients for the first and second rounds were 0.264 (χ² = 229.691, P ...). The aging scale for healthy people included 3 dimensions, namely physical form, feeling and movement, and functional status. Each dimension had 8 items. The weight coefficients for the 3 dimensions were 0.54, 0.16, and 0.30, respectively. The Cronbach's α coefficient of the scale was 0.893, the reliability was 0.796, and the variance of the common factor was 58.17%.
 The physiological aging scale constructed with the improved Delphi method is satisfactory and can provide a reference for the evaluation of aging.
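The Cronbach's α reported above is computed from item and total-score variances, α = k/(k−1)·(1 − Σs_i²/s_total²). A minimal sketch with hypothetical item scores (not the study's data):

```python
def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Three hypothetical items rated by four respondents
alpha = cronbach_alpha([[3, 4, 3, 5], [2, 4, 3, 5], [3, 5, 4, 5]])
assert 0.0 < alpha <= 1.0
```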

  20. Scaling relations between structure and rheology of ageing casein particle gels

    NARCIS (Netherlands)

    Mellema, M.

    2000-01-01

    Mellema, M. (Michel), Scaling relations between structure and rheology of ageing casein particle gels , PhD Thesis, Wageningen University, 150 + 10 pages, references by chapter, English and Dutch summaries (2000).

    The relation between (colloidal)

  1. Experimental Evaluation for the Microvibration Performance of a Segmented PC Method Based High Technology Industrial Facility Using 1/2 Scale Test Models

    Directory of Open Access Journals (Sweden)

    Sijun Kim

    2017-01-01

Full Text Available The precast concrete (PC) method used in the construction process of high technology industrial facilities is limited when applied to those with greater span lengths, due to the transport length restriction (maximum length of 15~16 m in Korea) set by traffic laws. In order to resolve this, this study introduces a structural system with a segmented PC system, and a 1/2 scale model with a width of 9000 mm (hereafter Segmented Model) is manufactured to evaluate vibration performance. Since a real vibrational environment cannot be reproduced for vibration testing using a scale model, a comparative analysis of their relative performances is conducted in this study. For this purpose, a 1/2 scale model with a width of 7200 mm (hereafter Nonsegmented Model) of a high technology industrial facility is additionally prepared using the conventional PC method. By applying the same experiment method for both scale models and comparing the results, the relative vibration performance of the Segmented Model is observed. Through impact testing, the natural frequencies of the two scale models are compared. Also, in order to analyze the estimated response induced by the equipment, the vibration responses due to the exciter are compared. The experimental results show that the Segmented Model exhibits similar or superior performances when compared to the Nonsegmented Model.

  2. Surface temperature and evapotranspiration: application of local scale methods to regional scales using satellite data

    International Nuclear Information System (INIS)

    Seguin, B.; Courault, D.; Guerif, M.

    1994-01-01

Remotely sensed surface temperatures have proven useful for monitoring evapotranspiration (ET) rates and crop water use because of their direct relationship with sensible and latent energy exchange processes. Procedures for using thermal infrared (IR) data obtained with hand-held radiometers deployed at ground level are now well established and even routine for many agricultural research and management purposes. The availability of IR data from meteorological satellites at scales from 1 km (NOAA-AVHRR) to 5 km (METEOSAT) permits extension of local, ground-based approaches to larger scale crop monitoring programs. Regional observations of surface minus air temperature (i.e., the stress degree day) and remote estimates of daily ET were derived from satellite data over sites in France, the Sahel, and North Africa and are summarized here. Results confirm that similar approaches can be applied at local and regional scales despite differences in pixel size and heterogeneity. This article analyzes methods for obtaining these data and outlines the potential utility of satellite data for operational use at the regional scale. (author)
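Operational estimates of this kind often use a simplified daily relation of the Seguin-Itier type, ETd ≈ Rnd + A − B·(Ts − Ta), where (Ts − Ta) is the midday surface minus air temperature (the stress degree day) and A, B are empirical site coefficients. A sketch with illustrative, uncalibrated coefficient values:

```python
def daily_et(rn_daily, ts_midday, ta_midday, a=1.0, b=0.25):
    """Simplified Seguin-Itier-type estimate of daily evapotranspiration
    (mm/day): ETd = Rnd + A - B*(Ts - Ta), with Rnd the daily net
    radiation in water-equivalent mm/day and (Ts - Ta) the midday
    surface-air temperature difference. The values of a and b here are
    illustrative, not calibrated for any site."""
    return rn_daily + a - b * (ts_midday - ta_midday)

# A hotter surface (larger stress degree day) implies less evapotranspiration
assert daily_et(5.0, ts_midday=32.0, ta_midday=25.0) < daily_et(5.0, 27.0, 25.0)
```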

  3. Environmental consequence analyses of fish farm emissions related to different scales and exemplified by data from the Baltic--a review.

    Science.gov (United States)

    Gyllenhammar, Andreas; Håkanson, Lars

    2005-08-01

The aim of this work is to review studies to evaluate how emissions from fish cage farms cause eutrophication effects in marine environments. The focus is on four different scales: (i) the conditions at the site of the farm, (ii) the local scale related to the coastal area where the farm is situated, (iii) the regional scale encompassing many coastal areas and (iv) the international scale including several regional coastal areas. The aim is to evaluate the role of nutrient emissions from fish farms in a general way, but all selected examples come from the Baltic Sea. An important part of this evaluation concerns the method to define the boundaries of a given coastal area. If this is done arbitrarily, one would obtain arbitrary results in the environmental consequence analysis. In this work, the boundary lines between the coast and the sea are drawn using GIS methods (geographical information systems) according to the topographical bottleneck method, which opens a way to determine many fundamental characteristics in the context of mass balance calculations. In mass balance modelling, the fluxes from the fish farm should be compared to other fluxes to, within and from coastal areas. Results collected in this study show that: (1) at the smallest scale, the impact area of a fish cage farm often corresponds to the size of a "football field" (50-100 m) if the annual fish production is about 50 tons; (2) at the local scale (1 ha to 100 km²), there exists a simple load diagram (effect-load-sensitivity) to relate the environmental response and effects to a specific load from a fish cage farm, which makes it possible to obtain a first estimate of the maximum allowable fish production in a specific coastal area; (3) at the regional scale (100-10,000 km²), it is possible to create negative nutrient fluxes, i.e., use fish farming as a method to reduce the nutrient loading to the sea. The breaking point is to use more than about 1.1 g wet weight regionally caught wild fish per gram

  4. Bioclim Deliverable D6b: application of statistical down-scaling within the BIOCLIM hierarchical strategy: methods, data requirements and underlying assumptions

    International Nuclear Information System (INIS)

    2004-01-01

The overall aim of BIOCLIM is to assess the possible long term impacts due to climate change on the safety of radioactive waste repositories in deep formations. The coarse spatial scale of the Earth-system Models of Intermediate Complexity (EMICs) used in BIOCLIM compared with the BIOCLIM study regions and the needs of performance assessment creates a need for down-scaling. Most of the developmental work on down-scaling methodologies undertaken by the international research community has focused on down-scaling from the general circulation model (GCM) scale (with a typical spatial resolution of 400 km by 400 km over Europe in the current generation of models) using dynamical down-scaling (i.e., regional climate models (RCMs), which typically have a spatial resolution of 50 km by 50 km for models whose domain covers the European region) or statistical methods (which can provide information at the point or station scale) in order to construct scenarios of anthropogenic climate change up to 2100. Dynamical down-scaling (with the MAR RCM) is used in BIOCLIM WP2 to down-scale from the GCM (i.e., IPSL_CM4_D) scale. In the original BIOCLIM description of work, it was proposed that UEA would apply statistical down-scaling to IPSL_CM4_D output in WP2 as part of the hierarchical strategy. Statistical down-scaling requires the identification of statistical relationships between the observed large-scale and regional/local climate, which are then applied to large-scale GCM output, on the assumption that these relationships remain valid in the future (the assumption of stationarity). Thus it was proposed that UEA would investigate the extent to which it is possible to apply relationships between the present-day large-scale and regional/local climate to the relatively extreme conditions of the BIOCLIM WP2 snapshot simulations. Potential statistical down-scaling methodologies were identified from previous work performed at UEA. Appropriate station data from the case
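The core of the statistical approach described above is fitting a transfer function between observed large-scale predictors and the local climate, then applying it to GCM output under the stationarity assumption. A minimal sketch using a linear transfer function and synthetic data (illustrative only; real applications use multiple predictors and careful validation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed" series: a large-scale predictor (e.g. grid-box
# temperature) and a local station predictand linked by a linear relation.
large_scale_obs = rng.normal(10.0, 3.0, size=200)
station_obs = 0.8 * large_scale_obs + 2.0 + rng.normal(0.0, 0.3, size=200)

# Fit the transfer function on the observational period...
slope, intercept = np.polyfit(large_scale_obs, station_obs, 1)

# ...then apply it to (hypothetical) GCM output, assuming the relation
# remains valid in the simulated climate (the stationarity assumption).
gcm_output = np.array([12.0, 14.5, 9.0])
downscaled = slope * gcm_output + intercept
```

The stationarity caveat discussed in the abstract is exactly the question of whether `slope` and `intercept` fitted on the present-day climate still hold for the extreme snapshot simulations.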

  5. Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration

    KAUST Repository

    Gasda, S. E.; Nordbotten, J. M.; Celia, M. A.

    2009-01-01

The vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid

  6. Development and psychometric testing of the Nursing Workplace Relational Environment Scale (NWRES).

    Science.gov (United States)

    Duddle, Maree; Boughton, Maureen

    2009-03-01

    The aim of this study was to develop and test the psychometric properties of the Nursing Workplace Relational Environment Scale (NWRES). A positive relational environment in the workplace is characterised by a sense of connectedness and belonging, support and cooperation among colleagues, open communication and effectively managed conflict. A poor relational environment in the workplace may contribute to job dissatisfaction and early turnover of staff. Quantitative survey. A three-stage process was used to design and test the NWRES. In Stage 1, an extensive literature review was conducted on professional working relationships and the nursing work environment. Three key concepts; collegiality, workplace conflict and job satisfaction were identified and defined. In Stage 2, a pool of items was developed from the dimensions of each concept and formulated into a 35-item scale which was piloted on a convenience sample of 31 nurses. In Stage 3, the newly refined 28-item scale was administered randomly to a convenience sample of 150 nurses. Psychometric testing was conducted to establish the construct validity and reliability of the scale. Exploratory factor analysis resulted in a 22-item scale. The factor analysis indicated a four-factor structure: collegial behaviours, relational atmosphere, outcomes of conflict and job satisfaction which explained 68.12% of the total variance. Cronbach's alpha coefficient for the NWRES was 0.872 and the subscales ranged from 0.781-0.927. The results of the study confirm the reliability and validity of the NWRES. Replication of this study with a larger sample is indicated to determine relationships among the subscales. The results of this study have implications for health managers in terms of understanding the impact of the relational environment of the workplace on job satisfaction and retention.

  7. Universal Dark Halo Scaling Relation for the Dwarf Spheroidal Satellites

    Science.gov (United States)

    Hayashi, Kohei; Ishiyama, Tomoaki; Ogiya, Go; Chiba, Masashi; Inoue, Shigeki; Mori, Masao

    2017-07-01

    Motivated by a recently found interesting property of the dark halo surface density within a radius, r_max, giving the maximum circular velocity, V_max, we investigate it for dark halos of the Milky Way’s and Andromeda’s dwarf satellites based on cosmological simulations. We select and analyze the simulated subhalos associated with Milky-Way-sized dark halos and find that the values of their surface densities, Σ_Vmax, are in good agreement with those for the observed dwarf spheroidal satellites even without employing any fitting procedures. Moreover, all subhalos on the small scales of dwarf satellites are expected to obey the universal relation, irrespective of differences in their orbital evolutions, host halo properties, and observed redshifts. Therefore, we find that the universal scaling relation for dark halos on dwarf galaxy mass scales surely exists and provides us with important clues for understanding fundamental properties of dark halos. We also investigate orbital and dynamical evolutions of subhalos to understand the origin of this universal dark halo relation and find that most subhalos evolve generally along the r_max ∝ V_max sequence, even though these subhalos have undergone different histories of mass assembly and tidal stripping. This sequence, therefore, should be the key feature for understanding the nature of the universality of Σ_Vmax.

  8. Multi-scale method for the resolution of the neutronic kinetics equations

    International Nuclear Information System (INIS)

    Chauvet, St.

    2008-10-01

    In this PhD thesis and in order to improve the time/precision ratio of the numerical simulation calculations, we investigate multi-scale techniques for the resolution of the reactor kinetics equations. We choose to focus on the mixed dual diffusion approximation and the quasi-static methods. We introduce a space dependency for the amplitude function which only depends on the time variable in the standard quasi-static context. With this new factorization, we develop two mixed dual problems which can be solved with Cea's solver MINOS. An algorithm is implemented, performing the resolution of these problems defined on different scales (for time and space). We name this approach: the Local Quasi-Static method. We present here this new multi-scale approach and its implementation. The inherent details of amplitude and shape treatments are discussed and justified. Results and performances, compared to MINOS, are studied. They illustrate the improvement on the time/precision ratio for kinetics calculations. Furthermore, we open some new possibilities to parallelize computations with MINOS. For the future, we also introduce some improvement tracks with adaptive scales. (author)
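
    The factorization at the heart of the quasi-static family of methods can be made explicit. In the standard scheme the flux is split into a purely time-dependent amplitude and a shape function, while the local quasi-static method described here gives the amplitude a space dependency as well (notation assumed for illustration, not quoted from the thesis):

```latex
% standard quasi-static factorization: amplitude depends on time only
\phi(\mathbf{r},t) = A(t)\, S(\mathbf{r},t)

% local quasi-static generalization: space-dependent amplitude
\phi(\mathbf{r},t) = A(\mathbf{r},t)\, S(\mathbf{r},t)
```

    The amplitude is then advanced on a fine time scale while the shape is recomputed only on a coarse one, which is where the time/precision gain of quasi-static schemes comes from.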

  9. A comparison of multidimensional scaling methods for perceptual mapping

    NARCIS (Netherlands)

    Bijmolt, T.H.A.; Wedel, M.

    Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare
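
    The abstract above is truncated in the source record. As a hedged sketch of the basic idea (classical Torgerson scaling, not the maximum-likelihood methods the authors compare), a perceptual map can be recovered from a dissimilarity matrix as follows:

```python
import numpy as np

def classical_mds(D: np.ndarray, k: int = 2) -> np.ndarray:
    """Classical (Torgerson) MDS: embed an (n, n) dissimilarity matrix in k dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)          # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]        # keep the top-k components
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Hypothetical brand dissimilarities (symmetric, zero diagonal)
D = np.array([[0.0, 2.0, 5.0],
              [2.0, 0.0, 4.0],
              [5.0, 4.0, 0.0]])
coords = classical_mds(D, k=2)
```

    For dissimilarities that are exactly Euclidean, this reconstruction reproduces the original inter-point distances.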

  10. On the mass-coupling relation of multi-scale quantum integrable models

    Energy Technology Data Exchange (ETDEWEB)

    Bajnok, Zoltán; Balog, János [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary); Ito, Katsushi [Department of Physics, Tokyo Institute of Technology,2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 (Japan); Satoh, Yuji [Institute of Physics, University of Tsukuba,1-1-1 Tennodai, Tsukuba, Ibaraki 305-8571 (Japan); Tóth, Gábor Zsolt [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary)

    2016-06-13

    We determine exactly the mass-coupling relation for the simplest multi-scale quantum integrable model, the homogeneous sine-Gordon model with two independent mass-scales. We first reformulate its perturbed coset CFT description in terms of the perturbation of a projected product of minimal models. This representation enables us to identify conserved tensor currents on the UV side. These UV operators are then mapped via form factor perturbation theory to operators on the IR side, which are characterized by their form factors. The relation between the UV and IR operators is given in terms of the sought-for mass-coupling relation. By generalizing the Θ sum rule Ward identity we are able to derive differential equations for the mass-coupling relation, which we solve in terms of hypergeometric functions. We check these results against the data obtained by numerically solving the thermodynamic Bethe Ansatz equations, and find complete agreement.

  11. Continuum level density of a coupled-channel system in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Kruppa, Andras; Giraud, Bertrand G.

    2008-01-01

    We study the continuum level density (CLD) in the formalism of the complex scaling method (CSM) for coupled-channel systems. We apply the formalism to the ⁴He = [³H+p] + [³He+n] coupled-channel cluster model, where there are resonances at low energy. Numerical calculations of the CLD in the CSM with a finite number of L² basis functions are consistent with the exact result calculated from the S-matrix by solving the coupled-channel equations. We also study channel densities. In this framework, the extended completeness relation (ECR) plays an important role. (author)
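
    For reference, the continuum level density studied here is conventionally defined as the difference between the full and free level densities (standard CSM notation, reconstructed rather than quoted from the paper):

```latex
\Delta(E) \;=\; -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}
\left[\frac{1}{E - H + i0} \;-\; \frac{1}{E - H_{0} + i0}\right]
```

    In the complex-scaled, finite-L² calculation each resonance contributes a Lorentzian term \((\Gamma_r/2\pi)/[(E-E_r)^2 + \Gamma_r^2/4]\) to \(\Delta(E)\), while the rotated continuum solutions supply the remaining smooth background.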

  12. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature space, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods need to be explored that can take advantage of additional textural or other parameters.

  13. Correlates of the Rosenberg Self-Esteem Scale Method Effects

    Science.gov (United States)

    Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan

    2006-01-01

    Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…

  14. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have negative impact on the accuracy...... errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...
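
    The reference-object compensation idea reduces to a one-line correction: the ratio of a reference artefact's calibrated length to its measured length rescales the other measurements from the same scan. A sketch with hypothetical numbers (not the article's database method):

```python
def scale_corrected(measured: float, ref_measured: float, ref_calibrated: float) -> float:
    """Rescale a CT measurement with a scaling factor from a calibrated reference.

    A reference artefact (e.g. a ball bar) of calibrated length ref_calibrated
    is measured in the same CT scan as ref_measured; their ratio corrects the
    magnification-axis scaling error. Illustrative sketch only.
    """
    s = ref_calibrated / ref_measured   # scaling factor
    return measured * s

# Hypothetical: a 50.000 mm ball bar measured as 50.060 mm in the scan
length = scale_corrected(24.515, ref_measured=50.060, ref_calibrated=50.000)
```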

  15. An eigenfunction method for reconstruction of large-scale and high-contrast objects.

    Science.gov (United States)

    Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P

    2007-07-01

    A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.

  16. A graphical method for reducing and relating models in systems biology.

    Science.gov (United States)

    Gay, Steven; Soliman, Sylvain; Fages, François

    2010-09-15

    In Systems Biology, an increasing collection of models of various biological processes is currently developed and made available in publicly accessible repositories, such as biomodels.net for instance, through common exchange formats such as SBML. To date, however, there is no general method to relate different models to each other by abstraction or reduction relationships, and this task is left to the modeler for re-using and coupling models. In mathematical biology, model reduction techniques have been studied for a long time, mainly in the case where a model exhibits different time scales, or different spatial phases, which can be analyzed separately. These techniques are however far too restrictive to be applied on a large scale in systems biology, and do not take into account abstractions other than time or phase decompositions. Our purpose here is to propose a general computational method for relating models together, by considering primarily the structure of the interactions and abstracting from their dynamics in a first step. We present a graph-theoretic formalism with node merge and delete operations, in which model reductions can be studied as graph matching problems. From this setting, we derive an algorithm for deciding whether there exists a reduction from one model to another, and evaluate it on the computation of the reduction relations between all SBML models of the biomodels.net repository. In particular, in the case of the numerous models of MAPK signalling, and of the circadian clock, biologically meaningful mappings between models of each class are automatically inferred from the structure of the interactions. We conclude on the generality of our graphical method, on its limits with respect to the representation of the structure of the interactions in SBML, and on some perspectives for dealing with the dynamics. The algorithms described in this article are implemented in the open-source software modeling platform BIOCHAM available at http
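
    The node merge and delete operations can be prototyped directly on an interaction graph. A toy sketch with networkx (species names hypothetical; BIOCHAM's actual algorithm solves a general graph matching problem rather than checking a single candidate):

```python
import networkx as nx

def merge(g: nx.DiGraph, u, v, name) -> nx.DiGraph:
    """Merge nodes u and v into one node `name`; edges are inherited."""
    h = nx.contracted_nodes(g, u, v, self_loops=False)
    return nx.relabel_nodes(h, {u: name})

def delete(g: nx.DiGraph, n) -> nx.DiGraph:
    """Remove node n (and its edges) from a copy of g."""
    h = g.copy()
    h.remove_node(n)
    return h

# Toy detailed model with two intermediate complexes, and a candidate reduction
detailed = nx.DiGraph([("S", "ES1"), ("ES1", "ES2"), ("ES2", "P")])
reduced = nx.DiGraph([("S", "ES"), ("ES", "P")])

candidate = merge(detailed, "ES1", "ES2", "ES")
is_reduction = nx.is_isomorphic(candidate, reduced)
```

    Deciding whether some sequence of merges and deletes maps one model onto another is what makes the general problem a graph matching problem.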

  17. Psychological effects of relational job characteristics: validation of the scale for hospital nurses.

    Science.gov (United States)

    Santos, Alda; Castanheira, Filipa; Chambel, Maria José; Amarante, Michael Vieira; Costa, Carlos

    2017-07-01

    This study validates the Portuguese version of the psychological effects of the relational job characteristics scale among hospital nurses in Portugal and Brazil. Increasing attention has been given to the social dimension of work, following the transition to a service economy. Nevertheless, and despite the unquestionable relational characteristics of nursing work, scarce research has been developed among nurses under a relational job design framework. Moreover, it is important to develop instruments that study the effects of relational job characteristics among nurses. We followed Messick's framework for scale validation, comprising the steps regarding the response process and internal structure, as well as relationships with other variables (work engagement and burnout). Statistical analysis included exploratory factor analysis and confirmatory factor analysis. The psychological effects of the relational job characteristics scale provided evidence of good psychometric properties with Portuguese and Brazilian hospital nurses. Also, the psychological effects of the relational job characteristics are associated with nurses' work-related well-being: positively with work engagement and negatively concerning burnout. Hospitals that foster the relational characteristics of nursing work are contributing to their nurses' work-related well-being, which may be reflected in the quality of care and patient safety. © 2017 John Wiley & Sons Ltd.

  18. Research of the effectiveness of parallel multithreaded realizations of interpolation methods for scaling raster images

    Science.gov (United States)

    Vnukov, A. A.; Shershnev, M. B.

    2018-01-01

    The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by different methods. Comparison of the quality of scaling by different methods is given.
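
    As a hedged sketch of the approach (not the authors' implementation), nearest-neighbour scaling parallelizes naturally by splitting the output image into horizontal strips, one per worker:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def scale_nearest(img: np.ndarray, out_h: int, out_w: int, workers: int = 4) -> np.ndarray:
    """Nearest-neighbour image scaling, parallelized over horizontal strips."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h     # source row for each output row
    cols = np.arange(out_w) * in_w // out_w     # source column for each output column
    out = np.empty((out_h, out_w) + img.shape[2:], dtype=img.dtype)

    def fill(strip):                            # each worker fills one strip
        lo, hi = strip
        out[lo:hi] = img[rows[lo:hi]][:, cols]

    bounds = np.linspace(0, out_h, workers + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fill, zip(bounds[:-1], bounds[1:])))
    return out

small = np.arange(16, dtype=np.uint8).reshape(4, 4)
big = scale_nearest(small, 8, 8)
```

    Bilinear and bicubic variants differ only in how each output pixel is computed from its source neighbourhood; the strip decomposition stays the same.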

  19. Scale relation in logσ - logε diagrams for Zry-4

    International Nuclear Information System (INIS)

    Cuniberti, A.M.; Picasso, A.C.

    1991-01-01

    The stress relaxation assay allows access to information about plastic behaviour of the corresponding material. This work describes a stress relaxation test carried out on polycrystalline Zry-4 at 293 K to verify the existence of a scale relation related to the plastic state equation. (Author) [es

  20. Relative importance of climate changes at different time scales on net primary productivity-a case study of the Karst area of northwest Guangxi, China.

    Science.gov (United States)

    Liu, Huiyu; Zhang, Mingyang; Lin, Zhenshan

    2017-10-05

    Climate changes are considered to significantly impact net primary productivity (NPP). However, there are few studies on how climate changes at multiple time scales impact NPP. With the MODIS NPP product and station-based observations of sunshine duration, annual average temperature and annual precipitation, the impacts of climate changes at different time scales on annual NPP have been studied with the EEMD (ensemble empirical mode decomposition) method in the Karst area of northwest Guangxi, China, during 2000-2013. Moreover, with a partial least squares regression (PLSR) model, the relative importance of climatic variables for annual NPP has been explored. The results show that (1) only at the quasi 3-year time scale do sunshine duration and temperature have significantly positive relations with NPP. (2) Annual precipitation has no significant relation to NPP by direct comparison, but a significantly positive relation at the 5-year time scale; this is because the 5-year time scale is not the dominant scale of precipitation. (3) The changes of NPP may be dominated by inter-annual variabilities. (4) Multiple time scale analysis will greatly improve the performance of the PLSR model for estimating NPP. The variable importance in projection (VIP) scores of sunshine duration and temperature at the quasi 3-year time scale, and of precipitation at the quasi 5-year time scale, are greater than 0.8, indicating that they were important for NPP during 2000-2013. However, sunshine duration and temperature at the quasi 3-year time scale are much more important. Our results underscore the importance of multiple time scale analysis for revealing the relations of NPP to changing climate.

  1. Analysis of global multiscale finite element methods for wave equations with continuum spatial scales

    KAUST Repository

    Jiang, Lijian; Efendiev, Yalchin; Ginting, Victor

    2010-01-01

    In this paper, we discuss a numerical multiscale approach for solving wave equations with heterogeneous coefficients. Our interest comes from geophysics applications and we assume that there is no scale separation with respect to spatial variables. To obtain the solution of these multiscale problems on a coarse grid, we compute global fields such that the solution smoothly depends on these fields. We present a Galerkin multiscale finite element method using the global information and provide a convergence analysis when applied to solve the wave equations. We investigate the relation between the smoothness of the global fields and convergence rates of the global Galerkin multiscale finite element method for the wave equations. Numerical examples demonstrate that the use of global information renders better accuracy for wave equations with heterogeneous coefficients than the local multiscale finite element method. © 2010 IMACS.

  3. The resource-based relative value scale and physician reimbursement policy.

    Science.gov (United States)

    Laugesen, Miriam J

    2014-11-01

    Most physicians are unfamiliar with the details of the Resource-Based Relative Value Scale (RBRVS) and how changes in the RBRVS influence Medicare and private reimbursement rates. Physicians in a wide variety of settings may benefit from understanding the RBRVS, including physicians who are employees, because many organizations use relative value units as productivity measures. Despite the complexity of the RBRVS, its logic and ideal are simple: In theory, the resource usage (comprising physician work, practice expense, and liability insurance premium costs) for one service is relative to the resource usage of all others. Ensuring relativity when new services are introduced or existing services are changed is, therefore, critical. Since the inception of the RBRVS, the American Medical Association's Relative Value Scale Update Committee (RUC) has made recommendations to the Centers for Medicare & Medicaid Services on changes to relative value units. The RUC's core focus is to develop estimates of physician work, but work estimates also partly determine practice expense payments. Critics have attributed various health-care system problems, including declining and growing gaps between primary care and specialist incomes, to the RUC's role in the RBRVS update process. There are persistent concerns regarding the quality of data used in the process and the potential for services to be overvalued. The Affordable Care Act addresses some of these concerns by increasing payments to primary care physicians, requiring reevaluation of the data underlying work relative value units, and reviewing misvalued codes.
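
    The arithmetic behind the RBRVS is compact: each of the three RVU components is geographically adjusted and the sum is converted to dollars. A sketch with illustrative figures (not current CMS values):

```python
def medicare_payment(work_rvu, pe_rvu, mp_rvu,
                     work_gpci=1.0, pe_gpci=1.0, mp_gpci=1.0,
                     conversion_factor=36.0):
    """Medicare fee-schedule payment from relative value units (RVUs).

    Each component (physician work, practice expense, malpractice) is adjusted
    by its geographic practice cost index (GPCI), and the total RVU is
    multiplied by a dollar conversion factor. All numbers here are
    illustrative, not current CMS figures.
    """
    total_rvu = work_rvu * work_gpci + pe_rvu * pe_gpci + mp_rvu * mp_gpci
    return total_rvu * conversion_factor

# Hypothetical office-visit code: 1.3 work, 1.1 practice-expense, 0.1 malpractice RVUs
fee = medicare_payment(1.3, 1.1, 0.1)
```

    Because payment is linear in the RVUs, any revaluation of work RVUs by the RUC propagates directly into fees, which is why the relativity of the scale matters so much.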

  4. Structural validity of a 16-item abridged version of the Cervantes Health-Related Quality of Life scale for menopause: the Cervantes Short-Form Scale.

    Science.gov (United States)

    Coronado, Pluvio J; Borrego, Rafael Sánchez; Palacios, Santiago; Ruiz, Miguel A; Rejas, Javier

    2015-03-01

    The Cervantes Scale is a specific health-related quality of life questionnaire that was originally developed in Spanish to be used in Spain for women through and beyond menopause. It contains 31 items and is time-consuming. The aim of this study was to produce an abridged version with the same dimensional structure and with similar psychometric properties. A representative sample of 516 postmenopausal women (mean [SD] age, 57 [4.31] y) seen in outpatient gynecology clinics and extracted from an observational cross-sectional study was used. Item analysis, internal consistency reliability, item-total and item-dimension correlations, and item correlation with the 12-item Medical Outcomes Study Short Form Health Survey Version 2.0 were studied. Dimensional and full-model confirmatory factor analyses were used to check structure stability. A threefold cross-validation method was used to obtain stable estimates by means of multigroup analysis. The scale was reduced to a 16-item version, the Cervantes Short-Form Scale, containing four main dimensions (Menopause and Health, Psychological, Sexuality, and Couple Relations), with the first dimension composed of three subdimensions (Vasomotor Symptoms, Health, and Aging). Goodness-of-fit statistics were better than those of the extended version (χ²/df = 2.493; adjusted goodness-of-fit index, 0.802; parsimony comparative fit index, 0.749; root mean square error of approximation, 0.054). Internal consistency was good (Cronbach's α = 0.880). Correlations between the extended and the reduced dimensions were high and significant in all cases (P < 0.001; r values ranged from 0.90 for Sexuality to 0.969 for Vasomotor Symptoms). The Cervantes Scale can be reduced to a 16-item abridged version (Cervantes Short-Form Scale) that maintains the original dimensional structure and psychometric properties. At 51% of the original length, this version can be administered faster, making it especially suitable for routine medical practice.

  5. Scale-Dependent Assessment of Relative Disease Resistance to Plant Pathogens

    Directory of Open Access Journals (Sweden)

    Peter Skelsey

    2014-03-01

    Full Text Available Phenotyping trials may not take into account sufficient spatial context to infer quantitative disease resistance of recommended varieties in commercial production settings. Recent ecological theory—the dispersal scaling hypothesis—provides evidence that host heterogeneity and scale of host heterogeneity interact in a predictable and straightforward manner to produce a unimodal (“humpbacked” distribution of epidemic outcomes. This suggests that the intrinsic artificiality (scale and design of experimental set-ups may lead to spurious conclusions regarding the resistance of selected elite cultivars, due to the failure of experimental efforts to accurately represent disease pressure in real agricultural situations. In this model-based study we investigate the interaction of host heterogeneity and scale as a confounding factor in the inference from ex-situ assessment of quantitative disease resistance to commercial production settings. We use standard modelling approaches in plant disease epidemiology and a number of different agronomic scenarios. Model results revealed that the interaction of heterogeneity and scale is a determinant of relative varietal performance under epidemic conditions. This is a previously unreported phenomenon that could provide a new basis for informing the design of future phenotyping platforms, and optimising the scale at which quantitative disease resistance is assessed.

  6. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
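
    The subset-library idea is plain SQL: store sequence metadata once, then select only the rows likely to contain homologs before searching. A self-contained sketch with Python's sqlite3 (the schema is a simplified stand-in for seqdb_demo, not its actual definition):

```python
import sqlite3

# In-memory stand-in for the unit's seqdb_demo protein database (schema simplified)
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, len INTEGER, seq TEXT);
INSERT INTO protein VALUES
  ('P001', 'E. coli',       310, 'MKTAY'),
  ('P002', 'H. sapiens',    455, 'MADQL'),
  ('P003', 'E. coli',       120, 'MSHHW'),
  ('P004', 'S. cerevisiae', 610, 'MQLRN');
""")

# Build a subset library (only bacterial sequences over 200 residues) so the
# similarity search runs against the sequences most likely to contain homologs
subset = db.execute(
    "SELECT acc FROM protein WHERE taxon = 'E. coli' AND len > 200"
).fetchall()
```

    Search results can then be stored back into further tables keyed on the same accession, which is how the unit's large-scale comparative analyses are organized.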

  7. Who is Distressed? Applying the Diabetes-Related Distress Scale in a Diabetes Clinic

    Science.gov (United States)

    2017-06-09

    59 MDW/SGVU SUBJECT: Professional Presentation Approval, 7 Apr 2017. 1. Your paper, entitled Who is Distressed? Applying the Diabetes-Related Distress...Scale in a Diabetes Clinic presented at/published to American Diabetes Association 2017 Meeting, San Francisco, CA (National Conference), 9-16 June...as a publication/presentation, a new 59 MDW Form 3039 must be submitted for review and approval.) Using the Diabetes-Related Distress Scale in

  8. Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.

    Science.gov (United States)

    Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin

    2018-03-02

    Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to the grey language hesitant fuzzy group decision making, and the grey correlation degree is used to sort the schemes. The effectiveness and practicability of the decision-making method are further verified by the industry chain sustainable development ability evaluation example of a circular economy. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weight based on the grey correlation.

  9. Assessment of the methods for determining net radiation at different time-scales of meteorological variables

    Directory of Open Access Journals (Sweden)

    Ni An

    2017-04-01

    Full Text Available When modeling the soil/atmosphere interaction, it is of paramount importance to determine the net radiation flux. There are two common calculation methods for this purpose. Method 1 relies on use of air temperature, while Method 2 relies on use of both air and soil temperatures. Nowadays, there has been no consensus on the application of these two methods. In this study, the half-hourly data of solar radiation recorded at an experimental embankment are used to calculate the net radiation and long-wave radiation at different time-scales (half-hourly, hourly, and daily using the two methods. The results show that, compared with Method 2, which has been widely adopted in agronomical, geotechnical and geo-environmental applications, Method 1 is more feasible for its simplicity and accuracy at shorter time-scales. Moreover, at longer time-scales (daily, for instance, smaller variations of net radiation and long-wave radiation are obtained, suggesting that detailed soil temperature variations cannot be captured. In other words, shorter time-scales are preferred in determining the net radiation flux.
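
    The two methods differ only in which temperature drives the outgoing long-wave term. A hedged sketch of that structure (illustrative emissivities and a simplified radiation balance, not the paper's exact equations):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiation(rs, albedo, t_air_k, t_surf_k=None, eps_a=0.85, eps_s=0.95):
    """Net radiation flux (W/m^2) from a simplified radiation balance.

    Method 1: surface temperature unknown, so approximate it by air temperature.
    Method 2: use the measured soil-surface temperature t_surf_k.
    Emissivities and the balance itself are illustrative assumptions.
    """
    if t_surf_k is None:                     # Method 1
        t_surf_k = t_air_k
    lw_in = eps_a * SIGMA * t_air_k ** 4     # atmospheric long-wave down
    lw_out = eps_s * SIGMA * t_surf_k ** 4   # surface long-wave up
    return (1 - albedo) * rs + lw_in - lw_out

rn1 = net_radiation(600.0, 0.23, t_air_k=293.15)                   # Method 1
rn2 = net_radiation(600.0, 0.23, t_air_k=293.15, t_surf_k=303.15)  # Method 2
```

    With a surface warmer than the air, Method 2 yields a smaller net radiation than Method 1, since the outgoing long-wave term grows with the fourth power of surface temperature.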

  10. Cultural adaptation of the Tuberculosis-related stigma scale to Brazil.

    Science.gov (United States)

    Crispim, Juliane de Almeida; Touso, Michelle Mosna; Yamamura, Mellina; Popolin, Marcela Paschoal; Garcia, Maria Concebida da Cunha; Santos, Cláudia Benedita Dos; Palha, Pedro Fredemir; Arcêncio, Ricardo Alexandre

    2016-06-01

    The process of stigmatization associated with TB has been undervalued in national research, although this social aspect is important in the control of the disease, especially in marginalized populations. This paper introduces the stages of the process of cultural adaptation in Brazil of the Tuberculosis-related stigma scale for TB patients. It is a methodological study in which the items of the scale were translated and back-translated, with semantic validation with 15 individuals of the target population. After translation, the reconciled back-translated version was compared with the original version by the project coordinator in Southern Thailand, who approved the final version in Brazilian Portuguese. The semantic validation conducted with TB patients showed that, in general, the scale was well accepted and easily understood by the participants.

  11. Gauge-Independent Scales Related to the Standard Model Vacuum Instability

    CERN Document Server

    Espinosa, Jose R.; Konstandin, Thomas; Riotto, Antonio

    2017-01-01

    The measured (central) values of the Higgs and top quark masses indicate that the Standard Model (SM) effective potential develops an instability at high field values. The scale of this instability, determined as the Higgs field value at which the potential drops below the electroweak minimum, is about $10^{11}$ GeV. However, such a scale is unphysical as it is not gauge-invariant and suffers from a gauge-fixing uncertainty of up to two orders of magnitude. Subjecting our system, the SM, to several probes of the instability (adding higher order operators to the potential; letting the vacuum decay through critical bubbles; heating up the system to very high temperature; inflating it) and asking in each case physical questions, we are able to provide several gauge-invariant scales related with the Higgs potential instability.

  12. Selective vulnerability related to aging in large-scale resting brain networks.

    Science.gov (United States)

    Zhang, Hong-Ying; Chen, Wen-Xin; Jiao, Yun; Xu, Yao; Zhang, Xiang-Rong; Wu, Jing-Tao

    2014-01-01

    Normal aging is associated with cognitive decline. Evidence indicates that large-scale brain networks are affected by aging; however, it has not been established whether aging has equivalent effects on specific large-scale networks. In the present study, 40 healthy subjects including 22 older (aged 60-80 years) and 18 younger (aged 22-33 years) adults underwent resting-state functional MRI scanning. Four canonical resting-state networks, including the default mode network (DMN), executive control network (ECN), dorsal attention network (DAN) and salience network, were extracted, and the functional connectivities in these canonical networks were compared between the younger and older groups. We found distinct, disruptive alterations present in the large-scale aging-related resting brain networks: the ECN was affected the most, followed by the DAN. However, the DMN and salience networks showed limited functional connectivity disruption. The visual network served as a control and was similarly preserved in both groups. Our findings suggest that the aged brain is characterized by selective vulnerability in large-scale brain networks. These results could help improve our understanding of the mechanism of degeneration in the aging brain. Additional work is warranted to determine whether selective alterations in the intrinsic networks are related to impairments in behavioral performance.

  13. OBSERVED SCALING RELATIONS FOR STRONG LENSING CLUSTERS: CONSEQUENCES FOR COSMOLOGY AND CLUSTER ASSEMBLY

    International Nuclear Information System (INIS)

    Comerford, Julia M.; Moustakas, Leonidas A.; Natarajan, Priyamvada

    2010-01-01

    Scaling relations of observed galaxy cluster properties are useful tools for constraining cosmological parameters as well as cluster formation histories. One of the key cosmological parameters, σ8, is constrained using observed clusters of galaxies, although current estimates of σ8 from the scaling relations of dynamically relaxed galaxy clusters are limited by the large scatter in the observed cluster mass-temperature (M-T) relation. With a sample of eight strong lensing clusters at 0.3 < z < 0.8, we examine these scaling relations. Typically only relaxed clusters are used to estimate σ8, but combining the cluster concentration-mass relation with the M-T relation enables the inclusion of unrelaxed clusters as well. Thus, the resultant gains in the accuracy of σ8 measurements from clusters are twofold: the errors on σ8 are reduced and the cluster sample size is increased. Therefore, the statistics on σ8 determination from clusters are greatly improved by the inclusion of unrelaxed clusters. Exploring cluster scaling relations further, we find that the correlation between brightest cluster galaxy (BCG) luminosity and cluster mass offers insight into the assembly histories of clusters. We find preliminary evidence for a steeper BCG luminosity-cluster mass relation for strong lensing clusters than the general cluster population, hinting that strong lensing clusters may have had more active merging histories.

  14. A multiple-scale power series method for solving nonlinear ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2016-02-01

    The power series solution is a cheap and effective method for solving nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and nonlinear boundary value problems. A novel power series method is developed by considering multiple scales $R_k$ in the power term $(t/R_k)^k$; the scales are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In this method a huge value multiplied by a tiny value is avoided, which decreases the numerical instability that is the main cause of failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provides very accurate numerical solutions of the problems considered in this paper.
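
    The conditioning gain from per-power scales can be illustrated numerically. The sketch below is an assumption-laden illustration, not the paper's algorithm: it uses the simple choice R_k = max|t| for every scale, whereas the paper derives the scales from an integral.

```python
import numpy as np

# Collocation points on [0, 10]; the plain power basis t^k is badly
# ill-conditioned here because its columns span many orders of magnitude.
t = np.linspace(0.0, 10.0, 50)
K = 15  # highest power

# Conventional power-series (Vandermonde) matrix: A[i, k] = t_i^k.
A_plain = np.vander(t, K + 1, increasing=True)

# Multiple-scale matrix: A[i, k] = (t_i / R_k)^k, one scale per power.
# Simple illustrative choice R_k = max|t|; the paper instead derives
# the scales from an integral.
R = np.full(K + 1, t.max())
A_scaled = (t[:, None] / R[None, :]) ** np.arange(K + 1)

print(f"cond(plain)  = {np.linalg.cond(A_plain):.3e}")
print(f"cond(scaled) = {np.linalg.cond(A_scaled):.3e}")
```

    Scaling each column into [0, 1] is what avoids the "huge value times a tiny value" products the abstract refers to.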

  15. Image scaling curve generation

    NARCIS (Netherlands)

    2012-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency, and the image is then scaled according to this scaling curve.

  16. Image scaling curve generation.

    NARCIS (Netherlands)

    2011-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency, and the image is then scaled according to this scaling curve.

  17. Absolute flux scale for radioastronomy

    International Nuclear Information System (INIS)

    Ivanov, V.P.; Stankevich, K.S.

    1986-01-01

    The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the ''artificial moon'' method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized.

  18. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology combined a local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.
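
    As a minimal illustration of the Pareto-dominance test at the heart of such multi-objective evolutionary searches (the function and the sample energy/cost data below are hypothetical, not part of the E³-ISO method):

```python
def pareto_front(points):
    """Return the non-dominated subset, minimizing all objectives.

    A point p is dominated if some other point q is at least as good
    in every objective; the Pareto front keeps only undominated points.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (energy, cost) pairs: lower is better in both objectives.
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(pts))   # (3.0, 4.0) is dominated by (2.0, 3.0)
```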

  19. A low Fermi scale from a simple gaugino-scalar mass relation

    Energy Technology Data Exchange (ETDEWEB)

    Bruemmer, F. [International School for Advanced Studies, Trieste (Italy); Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Buchmueller, W. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-11-15

    In supersymmetric extensions of the Standard Model, the Fermi scale of electroweak symmetry breaking is determined by the pattern of supersymmetry breaking. We present an example, motivated by a higher-dimensional GUT model, where a particular mass relation between the gauginos, third-generation squarks and Higgs fields of the MSSM leads to a Fermi scale smaller than the soft mass scale. This is in agreement with the measured Higgs boson mass. The μ parameter is generated independently of supersymmetry breaking; however, as we argue, the μ problem becomes less acute due to the little hierarchy between the soft mass scale and the Fermi scale. The resulting superparticle mass spectra depend on the localization of quark and lepton fields in higher dimensions. In one case, the squarks of the first two generations as well as the gauginos and higgsinos can be in the range of the LHC. Alternatively, only the higgsinos may be accessible at colliders. The lightest superparticle is the gravitino.

  20. Micro-and/or nano-scale patterned porous membranes, methods of making membranes, and methods of using membranes

    KAUST Repository

    Wang, Xianbin; Chen, Wei; Wang, Zhihong; Zhang, Xixiang; Yue, Weisheng; Lai, Zhiping

    2015-01-01

    Embodiments of the present disclosure provide for materials that include a pre-designed patterned, porous membrane (e.g., micro- and/or nano-scale patterned), structures or devices that include a pre-designed patterned, porous membrane, methods of making pre-designed patterned, porous membranes, methods of separation, and the like.

  1. Micro-and/or nano-scale patterned porous membranes, methods of making membranes, and methods of using membranes

    KAUST Repository

    Wang, Xianbin

    2015-01-22

    Embodiments of the present disclosure provide for materials that include a pre-designed patterned, porous membrane (e.g., micro- and/or nano-scale patterned), structures or devices that include a pre-designed patterned, porous membrane, methods of making pre-designed patterned, porous membranes, methods of separation, and the like.

  2. Rapid high temperature field test method for evaluation of geothermal calcite scale inhibitors

    Energy Technology Data Exchange (ETDEWEB)

    Asperger, R.G.

    1982-08-01

    A test method is described which allows the rapid field testing of calcite scale inhibitors in high-temperature geothermal brines. Five commercial formulations, chosen on the basis of laboratory screening tests, were tested in brines with low total dissolved solids at ca. 500°F. Four were found to be effective; of these, 2 were found to be capable of removing recently deposited scale. One chemical was tested in the full-flow brine line for 6 weeks. It was shown to stop a severe surface scaling problem at the well's control valve, thus proving the viability of the rapid test method. (12 refs.)

  3. X-Ray Scaling Relations of Early-type Galaxies

    Science.gov (United States)

    Babyk, Iu. V.; McNamara, B. R.; Nulsen, P. E. J.; Hogan, M. T.; Vantyghem, A. N.; Russell, H. R.; Pulido, F. A.; Edge, A. C.

    2018-04-01

    X-ray luminosity, temperature, gas mass, total mass, and their scaling relations are derived for 94 early-type galaxies (ETGs) using archival Chandra X-ray Observatory observations. Consistent with earlier studies, the scaling relations, L_X ∝ T^(4.5±0.2), M ∝ T^(2.4±0.2), and L_X ∝ M^(2.8±0.3), are significantly steeper than expected from self-similarity. This steepening indicates that their atmospheres are heated above the level expected from gravitational infall alone. Energetic feedback from nuclear black holes and supernova explosions are likely heating agents. The tight L_X-T correlation for low-luminosity systems (i.e., below 10^40 erg s^-1) is at variance with hydrodynamical simulations, which generally predict higher temperatures for low-luminosity galaxies. We also investigate the relationship between total mass and pressure, Y_X = M_g × T, finding M ∝ Y_X^(0.45±0.04). We explore the gas mass to total mass fraction in ETGs and find a range of 0.1%-1.0%. We find no correlation of the gas-to-total mass fraction with temperature or total mass. Higher stellar velocity dispersions and higher metallicities are found in hotter, brighter, and more massive atmospheres. X-ray core radii derived from β-model fitting are used to characterize the degree of core and cuspiness of hot atmospheres.
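
    Scaling relations of this form are typically fit as power laws by ordinary least squares in log-log space. A hedged sketch with synthetic data (the slope 4.5 is taken from the abstract; the sample itself is simulated, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic ETG sample: log L_X = a * log T + b + intrinsic scatter.
# a = 4.5 matches the abstract's L_X - T slope; b and the scatter
# level are illustrative assumptions.
a_true, b_true = 4.5, 40.0
logT = rng.uniform(-0.5, 0.5, size=200)           # log10(T / keV)
logL = a_true * logT + b_true + rng.normal(0.0, 0.1, size=200)

# A power law L_X ∝ T^a is linear in log space, so OLS recovers a.
A = np.column_stack([logT, np.ones_like(logT)])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, logL, rcond=None)

print(f"fitted slope = {a_fit:.2f} (true {a_true})")
```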

  4. Cosmological hydrodynamical simulations of galaxy clusters: X-ray scaling relations and their evolution

    Science.gov (United States)

    Truong, N.; Rasia, E.; Mazzotta, P.; Planelles, S.; Biffi, V.; Fabjan, D.; Beck, A. M.; Borgani, S.; Dolag, K.; Gaspari, M.; Granato, G. L.; Murante, G.; Ragone-Figueroa, C.; Steinborn, L. K.

    2018-03-01

    We analyse cosmological hydrodynamical simulations of galaxy clusters to study the X-ray scaling relations between total masses and observable quantities such as X-ray luminosity, gas mass, X-ray temperature, and Y_X. Three sets of simulations are performed with an improved version of the smoothed particle hydrodynamics GADGET-3 code. These consider the following: non-radiative gas, star formation and stellar feedback, and the addition of feedback by active galactic nuclei (AGN). We select clusters with M_500 > 10^14 M⊙ E(z)^-1, mimicking the typical selection of Sunyaev-Zel'dovich samples. This yields a mass range large enough to enable robust fitting of the relations even at z ˜ 2. The results of the analysis show a general agreement with observations. The values of the slope of the mass-gas mass and mass-temperature relations at z = 2 are 10 per cent lower with respect to z = 0, due to the applied mass selection in the former case and to the effect of early mergers in the latter. We investigate the impact of the slope variation on the study of the evolution of the normalization. We conclude that cosmological studies through scaling relations should be limited to the redshift range z = 0-1, where we find that the slope, the scatter, and the covariance matrix of the relations are stable. The scaling between mass and Y_X is confirmed to be the most robust relation, being almost independent of the gas physics. At higher redshifts, the scaling relations are sensitive to the inclusion of AGNs, which influence low-mass systems. The detailed study of these objects will be crucial to evaluate the AGN effect on the ICM.

  5. Methods for large-scale international studies on ICT in education

    NARCIS (Netherlands)

    Pelgrum, W.J.; Plomp, T.; Voogt, Joke; Knezek, G.A.

    2008-01-01

    International comparative assessment is a research method applied for describing and analyzing educational processes and outcomes. They are used to ‘describe the status quo’ in educational systems from an international comparative perspective. This chapter reviews different large scale international

  6. Spatial patterns of correlated scale size and scale color in relation to color pattern elements in butterfly wings.

    Science.gov (United States)

    Iwata, Masaki; Otaki, Joji M

    2016-02-01

    Complex butterfly wing color patterns are coordinated throughout a wing by unknown mechanisms that provide undifferentiated immature scale cells with positional information for scale color. Because there is a reasonable level of correspondence between the color pattern element and scale size at least in Junonia orithya and Junonia oenone, a single morphogenic signal may contain positional information for both color and size. However, this color-size relationship has not been demonstrated in other species of the family Nymphalidae. Here, we investigated the distribution patterns of scale size in relation to color pattern elements on the hindwings of the peacock pansy butterfly Junonia almana, together with other nymphalid butterflies, Vanessa indica and Danaus chrysippus. In these species, we observed a general decrease in scale size from the basal to the distal areas, although the size gradient was small in D. chrysippus. Scales of dark color in color pattern elements, including eyespot black rings, parafocal elements, and submarginal bands, were larger than those of their surroundings. Within an eyespot, the largest scales were found at the focal white area, although there were exceptional cases. Similarly, ectopic eyespots that were induced by physical damage on the J. almana background area had larger scales than in the surrounding area. These results are consistent with the previous finding that scale color and size coordinate to form color pattern elements. We propose a ploidy hypothesis to explain the color-size relationship in which the putative morphogenic signal induces the polyploidization (genome amplification) of immature scale cells and that the degrees of ploidy (gene dosage) determine scale color and scale size simultaneously in butterfly wings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Verifying quantitative stigma and medication adherence scales using qualitative methods among Thai youth living with HIV/AIDS.

    Science.gov (United States)

    Fongkaew, Warunee; Viseskul, Nongkran; Suksatit, Benjamas; Settheekul, Saowaluck; Chontawan, Ratanawadee; Grimes, Richard M; Grimes, Deanna E

    2014-01-01

    HIV/AIDS-related stigma has been linked to poor adherence resulting in drug resistance and the failure to control HIV. This study used both quantitative and qualitative methods to examine stigma and its relationship to adherence in 30 HIV-infected Thai youth aged 14 to 21 years. Stigma was measured using the HIV stigma scale and its 4 subscales, and adherence was measured using a visual analog scale. Stigma and adherence were also examined by in-depth interviews. The interviews were to determine whether verbal responses would match the scale's results. The mean score of stigma perception from the overall scale and its 4 subscales ranged from 2.14 to 2.45 on a scale of 1 to 4, indicating moderate levels of stigma. The mean adherence score was 0.74. The stigma scale and its subscales did not correlate with adherence. In total, 17 of the respondents were interviewed. Contrary to the quantitative results, the interviewees reported that stigma led to poor adherence because the fear of disclosure often caused them to miss medication doses. The differences between the quantitative and the qualitative results highlight the importance of validating psychometric scales when they are translated and used in other cultures.

  8. A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer

    Directory of Open Access Journals (Sweden)

    Xiangqing Huang

    2017-10-01

    A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating accelerometer gravity gradient instrument (GGI). Adjusting and matching the acceleration-to-current transfer functions of the four accelerometers automatically is one of the basic and necessary technologies for rejecting the common mode accelerations in the development of GGI. In order to adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current; they are then applied together to the torque coil of the magnetic actuator. The injected current can be varied proportionally according to the external adjustment needs, and the change in the acceleration-to-current transfer function is then realized dynamically. The new adjustment method has the advantages of no extra assembly and ease of operation. Changes in the scale factor ranging from 33% smaller to 100% larger are verified experimentally by adjusting different external coefficients. The static noise of the accelerometer is compared under conditions with and without the injected current, and the experimental results show no change in the current noise level, which further confirms the validity of the presented method.

  9. A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer.

    Science.gov (United States)

    Huang, Xiangqing; Deng, Zhongguang; Xie, Yafei; Li, Zhu; Fan, Ji; Tu, Liangcheng

    2017-10-27

    A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating accelerometer gravity gradient instrument (GGI). Adjusting and matching the acceleration-to-current transfer functions of the four accelerometers automatically is one of the basic and necessary technologies for rejecting the common mode accelerations in the development of GGI. In order to adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current; they are then applied together to the torque coil of the magnetic actuator. The injected current can be varied proportionally according to the external adjustment needs, and the change in the acceleration-to-current transfer function is then realized dynamically. The new adjustment method has the advantages of no extra assembly and ease of operation. Changes in the scale factor ranging from 33% smaller to 100% larger are verified experimentally by adjusting different external coefficients. The static noise of the accelerometer is compared under conditions with and without the injected current, and the experimental results show no change in the current noise level, which further confirms the validity of the presented method.
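
    The effect of the injected current on the transfer function can be sketched with an idealized force-rebalance model (the model and the coefficient c are illustrative assumptions, not the authors' parameterization):

```python
# Idealized force-rebalance loop: the feedback current nulls the
# inertial force, m * a = k_I * (I_fb + c * I_fb), where the injected
# current is c times the normal feedback current and c is the external
# adjustment coefficient. Units and values are illustrative only.
def scale_factor(c, m=1.0, k_I=1.0):
    # Acceleration-to-current transfer: I_fb / a = m / (k_I * (1 + c)).
    return m / (k_I * (1.0 + c))

base = scale_factor(0.0)
print(scale_factor(0.5) / base)    # 33% smaller scale factor
print(scale_factor(-0.5) / base)   # 100% larger scale factor
```

    The quoted adjustment range ("33% smaller to 100% larger") corresponds, in this toy model, to c between +0.5 and -0.5.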

  10. Polarized atomic orbitals for linear scaling methods

    Science.gov (United States)

    Berghold, Gerd; Parrinello, Michele; Hutter, Jürg

    2002-02-01

    We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent of the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
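
    The purification idea mentioned above can be illustrated with the simpler McWeeny iteration on a toy orthogonal-basis Hamiltonian (a sketch under simplifying assumptions; the paper's canonical purification additionally handles nonorthogonal bases and conserves the electron number during the iteration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric 8x8 "Hamiltonian" in an orthogonal basis, with a
# chemical potential mu placed midgap so that 4 states are occupied.
H = rng.normal(size=(8, 8))
H = 0.5 * (H + H.T)
eigs = np.linalg.eigvalsh(H)
mu = 0.5 * (eigs[3] + eigs[4])

# Initial guess with eigenvalues in [0, 1]: occupied levels map above
# 1/2, unoccupied below, so purification sorts them to 1 and 0.
lam = np.max(np.abs(eigs - mu))
P = 0.5 * (np.eye(8) - (H - mu * np.eye(8)) / lam)

# McWeeny purification P -> 3P^2 - 2P^3 drives the eigenvalues to
# {0, 1}, i.e. toward the idempotent ground-state density matrix,
# using only matrix products (the basis of linear-scaling variants).
for _ in range(50):
    P2 = P @ P
    P = 3.0 * P2 - 2.0 * P2 @ P

print("trace =", round(float(np.trace(P)), 6))          # -> electron count 4
print("idempotency error =", np.linalg.norm(P @ P - P))
```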

  11. The Language Teaching Methods Scale: Reliability and Validity Studies

    Science.gov (United States)

    Okmen, Burcu; Kilic, Abdurrahman

    2016-01-01

    The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…

  12. Identifying food-related life style segments by a cross-culturally valid scaling device

    DEFF Research Database (Denmark)

    Brunsø, Karen; Grunert, Klaus G.

    1994-01-01

    food-related life style in a cross-culturally valid way. To this end, we have collected a pool of 202 items, collected data in three countries, and have constructed scales based on cross-culturally stable patterns. These scales have then been subjected to a number of tests of reliability and validity. We have then applied the set of scales to a fourth country, Germany, based on a representative sample of 1000 respondents. The scales had, with a few exceptions, moderately good reliabilities. A cluster analysis led to the identification of 5 segments, which differed on all 23 scales.

  13. LoCuSS: THE SUNYAEV–ZEL'DOVICH EFFECT AND WEAK-LENSING MASS SCALING RELATION

    International Nuclear Information System (INIS)

    Marrone, Daniel P.; Carlstrom, John E.; Gralla, Megan; Greer, Christopher H.; Hennessy, Ryan; Leitch, Erik M.; Plagge, Thomas; Smith, Graham P.; Okabe, Nobuhiro; Bonamente, Massimiliano; Hasler, Nicole; Culverhouse, Thomas L.; Hawkins, David; Lamb, James W.; Muchovej, Stephen; Joy, Marshall; Martino, Rossella; Mazzotta, Pasquale; Miller, Amber; Mroczkowski, Tony

    2012-01-01

    We present the first weak-lensing-based scaling relation between galaxy cluster mass, M_WL, and integrated Compton parameter Y_sph. Observations of 18 galaxy clusters at z ≅ 0.2 were obtained with the Subaru 8.2 m telescope and the Sunyaev-Zel'dovich Array. The M_WL-Y_sph scaling relations, measured at Δ = 500, 1000, and 2500 ρ_c, are consistent in slope and normalization with previous results derived under the assumption of hydrostatic equilibrium (HSE). We find an intrinsic scatter in M_WL at fixed Y_sph of 20%, larger than both previous measurements of M_HSE-Y_sph scatter and the scatter in true mass at fixed Y_sph found in simulations. Moreover, the scatter in our lensing-based scaling relations is morphology dependent, with 30%-40% larger M_WL for undisturbed compared to disturbed clusters at the same Y_sph at r_500. Further examination suggests that the segregation may be explained by the inability of our spherical lens models to faithfully describe the three-dimensional structure of the clusters, in particular, the structure along the line of sight. We find that the ellipticity of the brightest cluster galaxy, a proxy for halo orientation, correlates well with the offset in mass from the mean scaling relation, which supports this picture. This provides empirical evidence that line-of-sight projection effects are an important systematic uncertainty in lensing-based scaling relations.

  14. [Scale Relativity Theory in living beings morphogenesis: fractal, determinism and chance].

    Science.gov (United States)

    Chaline, J

    2012-10-01

    The Scale Relativity Theory has many biological applications from linear to non-linear and from classical mechanics to quantum mechanics. Self-similar laws have been used as models for the description of a huge number of biological systems. These laws may explain the origin of basal life structures. Log-periodic behaviors of acceleration or deceleration can be applied to branching macroevolution, to the time sequences of major evolutionary leaps. The existence of such a law does not mean that the role of chance in evolution is reduced, but instead that randomness and contingency may occur within a framework which may itself be structured in a partly statistical way. The Scale Relativity Theory can open new perspectives in evolution. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  15. Gauge-independent scales related to the Standard Model vacuum instability

    International Nuclear Information System (INIS)

    Espinosa, J.R.; Garny, M.; Konstandin, T.; Riotto, A.

    2016-08-01

    The measured (central) values of the Higgs and top quark masses indicate that the Standard Model (SM) effective potential develops an instability at high field values. The scale of this instability, determined as the Higgs field value at which the potential drops below the electroweak minimum, is about 10^11 GeV. However, such a scale is unphysical as it is not gauge invariant and suffers from a gauge-fixing uncertainty of up to two orders of magnitude. Subjecting our system, the SM, to several probes of the instability (adding higher order operators to the potential; letting the vacuum decay through critical bubbles; heating up the system to very high temperature; inflating it) and asking in each case physical questions, we are able to provide several gauge-invariant scales related with the Higgs potential instability.

  16. Effect of primordial non-Gaussianities on galaxy clusters scaling relations

    Science.gov (United States)

    Trindade, A. M. M.; da Silva, Antonio

    2017-07-01

    Galaxy clusters are a valuable source of cosmological information. Their formation and evolution depend on the underlying cosmology and on the statistical nature of the primordial density fluctuations. Here we investigate the impact of primordial non-Gaussianities (PNG) on the scaling properties of galaxy clusters. We performed a series of hydrodynamic N-body simulations featuring adiabatic gas physics and different levels of non-Gaussianity within the Λ cold dark matter framework. We focus on the T-M, S-M, Y-M and Y_X-M scalings relating the total cluster mass with temperature, entropy and Sunyaev-Zel'dovich integrated pressure, which reflect the thermodynamic state of the intracluster medium. Our results show that PNG have an impact on cluster scaling laws. The mass power-law indices of the scalings are almost unaffected by the existence of PNG, but the amplitude and redshift evolution of their normalizations are clearly affected. Changes in the Y-M and Y_X-M normalizations are as high as 22 per cent and 16 per cent when f_NL varies from -500 to 500, respectively. The results are consistent with the view that positive/negative f_NL affect cluster profiles due to an increase/decrease of cluster concentrations. At low values of f_NL, as suggested by present Planck constraints on a scale-invariant f_NL, the impact on the scaling normalizations is only a few per cent. However, if f_NL varies with scale, PNG may have larger amplitudes at cluster scales; thus, our results suggest that PNG should be taken into account when cluster data are used to infer or forecast cosmological parameters from existing or future cluster surveys.

  17. A family of conjugate gradient methods for large-scale nonlinear equations.

    Science.gov (United States)

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method needs little storage and the subproblem can be easily solved. Compared with existing solution methods for such problems, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
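
    The projection framework underlying such methods can be sketched as follows. Note the search direction here is plain steepest descent, -F(x), rather than the paper's conjugate-gradient direction, and the monotone test problem is hypothetical:

```python
import numpy as np

def solve_monotone(F, x0, tol=1e-8, max_iter=1000):
    """Hyperplane-projection method for monotone equations F(x) = 0.

    Minimal sketch of the derivative-free projection framework these
    methods share: a line search finds a trial point z, then the
    current iterate is projected onto the separating hyperplane.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx  # steepest-descent-like direction (not the CG one)
        # Backtracking line search for z = x + t*d with -F(z).d large
        # enough relative to the step.
        t = 1.0
        while True:
            z = x + t * d
            Fz = F(z)
            if -Fz @ d >= 1e-4 * t * (d @ d) or t < 1e-12:
                break
            t *= 0.5
        # Project x onto the hyperplane {y : <F(z), y - z> = 0}, which
        # separates x from the solution set when F is monotone.
        denom = Fz @ Fz
        if denom > 0.0:
            x = x - ((Fz @ (x - z)) / denom) * Fz
    return x

# Hypothetical monotone problem: F(x) = x + sin(x), root at the origin.
root = solve_monotone(lambda x: x + np.sin(x), np.array([2.0, -1.5, 0.7]))
print(root)
```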

  18. A comparison of three methods of assessing differential item functioning (DIF) in the Hospital Anxiety Depression Scale: ordinal logistic regression, Rasch analysis and the Mantel chi-square procedure.

    Science.gov (United States)

    Cameron, Isobel M; Scott, Neil W; Adler, Mats; Reid, Ian C

    2014-12-01

    It is important for clinical practice and research that measurement scales of well-being and quality of life exhibit only minimal differential item functioning (DIF). DIF occurs where different groups of people endorse items in a scale to different extents after being matched by the intended scale attribute. We investigate the equivalence or otherwise of common methods of assessing DIF. Three methods of measuring age- and sex-related DIF (ordinal logistic regression, Rasch analysis and the Mantel χ² procedure) were applied to Hospital Anxiety Depression Scale (HADS) data pertaining to a sample of 1,068 patients consulting primary care practitioners. Three items were flagged by all three approaches as having either age- or sex-related DIF with a consistent direction of effect; a further three items that were identified did not meet stricter criteria for important DIF using at least one method. When applying strict criteria for significant DIF, ordinal logistic regression was slightly less sensitive. Ordinal logistic regression, Rasch analysis and contingency table methods yielded consistent results when identifying DIF in the HADS depression and HADS anxiety scales. Regardless of the methods applied, investigators should use a combination of statistical significance, magnitude of the DIF effect and investigator judgement when interpreting the results.
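
    For a binary item, the Mantel-type contingency table approach reduces to the Cochran-Mantel-Haenszel chi-square, stratified on the matching variable. A hedged sketch on simulated data (for brevity the simulation matches on a coarsened latent trait, whereas in practice one stratifies on the observed rest score; all values are illustrative):

```python
import numpy as np

def mantel_haenszel_chi2(item, group, strata):
    """Cochran-Mantel-Haenszel chi-square for DIF on a binary item,
    accumulating observed-minus-expected counts across strata."""
    num, den = 0.0, 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 1) & (item[m] == 1))  # focal & endorsed
        n1, n0 = np.sum(group[m] == 1), np.sum(group[m] == 0)
        m1, m0 = np.sum(item[m] == 1), np.sum(item[m] == 0)
        n = n1 + n0
        if n < 2 or 0 in (n1, n0, m1, m0):
            continue  # stratum carries no information
        num += a - n1 * m1 / n
        den += n1 * n0 * m1 * m0 / (n * n * (n - 1))
    return num * num / den

rng = np.random.default_rng(1)
N = 4000
group = rng.integers(0, 2, N)              # 0 = reference, 1 = focal
theta = rng.normal(0.0, 1.0, N)            # latent attribute
strata = np.clip(np.round(theta), -2, 2)   # crude matching strata

def simulate_item(dif):
    # A positive dif makes the item easier to endorse for the focal
    # group at equal attribute level, i.e. it injects DIF.
    logit = theta + dif * group
    return (rng.random(N) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

stat_null = mantel_haenszel_chi2(simulate_item(0.0), group, strata)
stat_dif = mantel_haenszel_chi2(simulate_item(1.0), group, strata)
print(f"CMH chi2 without DIF: {stat_null:.2f}, with DIF: {stat_dif:.2f}")
```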

  19. Relating quality of life to Glasgow outcome scale health states.

    Science.gov (United States)

    Kosty, Jennifer; Macyszyn, Luke; Lai, Kevin; McCroskery, James; Park, Hae-Ran; Stein, Sherman C

    2012-05-01

There has recently been a call for the adoption of comparative effectiveness research (CER) and related research approaches for studying traumatic brain injury (TBI). These methods allow researchers to compare the effectiveness of different therapies in producing patient-oriented outcomes of interest. Heretofore, the only measures by which to compare such therapies have been mortality and rate of poor outcome. Better comparisons can be made if parametric, preference-based quality-of-life (QOL) values are available for intermediate outcomes, such as those described by the Glasgow Outcome Scale Extended (GOSE). Our objective was therefore to determine QOL for the health states described by the GOSE. We interviewed community members at least 18 years of age using the standard gamble method to assess QOL for descriptions of GOSE scores of 2-7 derived from the structured interview. Linear regression analysis was also performed to assess the effect of age, gender, and years of education on QOL. One hundred and one participants between the ages of 18 and 83 were interviewed (mean age 40 ± 19 years), including 55 men and 46 women. Functional impairment and QOL showed a strong inverse relationship, as assessed by both linear regression and the Spearman rank order coefficient. No consistent effect of age, gender, or years of education was seen. As expected, QOL decreased as functional outcome worsened, as described by the GOSE. The results of this study will provide the groundwork for future groups seeking to apply CER methods to clinical studies of TBI.

  20. Counting hard-to-count populations: the network scale-up method for public health

    Science.gov (United States)

    Bernard, H Russell; Hallett, Tim; Iovita, Alexandrina; Johnsen, Eugene C; Lyerla, Rob; McCarty, Christopher; Mahy, Mary; Salganik, Matthew J; Saliuk, Tetiana; Scutelniciuc, Otilia; Shelley, Gene A; Sirinirund, Petchsri; Weir, Sharon

    2010-01-01

    Estimating sizes of hidden or hard-to-reach populations is an important problem in public health. For example, estimates of the sizes of populations at highest risk for HIV and AIDS are needed for designing, evaluating and allocating funding for treatment and prevention programmes. A promising approach to size estimation, relatively new to public health, is the network scale-up method (NSUM), involving two steps: estimating the personal network size of the members of a random sample of a total population and, with this information, estimating the number of members of a hidden subpopulation of the total population. We describe the method, including two approaches to estimating personal network sizes (summation and known population). We discuss the strengths and weaknesses of each approach and provide examples of international applications of the NSUM in public health. We conclude with recommendations for future research and evaluation. PMID:21106509
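The scale-up step itself is simple enough to state in a few lines: multiply the total population size by the ratio of hidden-population members respondents report knowing to their total personal network sizes. The sketch below uses simulated survey data (all figures hypothetical; in a real application each respondent's network size d would itself be estimated with the summation or known-population method described above).

```python
import numpy as np

# Illustrative NSUM sketch with simulated survey data (numbers are hypothetical).
# d[i]: personal network size of respondent i; m[i]: hidden-population members
# that respondent i reports knowing.
rng = np.random.default_rng(0)
N_total = 1_000_000                 # size of the total population (assumed known)
d = rng.poisson(290, size=500)      # network sizes; ~290 is a commonly cited figure
true_prevalence = 0.002             # 0.2% of the population is "hidden"
m = rng.binomial(d, true_prevalence)

# Basic scale-up estimator: N_hidden ~= N_total * sum(m) / sum(d)
N_hidden = N_total * m.sum() / d.sum()
print(round(N_hidden))              # true hidden-population size here is 2000
```

The estimator inherits the biases discussed in the abstract (transmission and barrier effects), which is why the strengths and weaknesses of the two network-size approaches matter in practice.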

  1. Iteratively-coupled propagating exterior complex scaling method for electron-hydrogen collisions

    International Nuclear Information System (INIS)

    Bartlett, Philip L; Stelbovics, Andris T; Bray, Igor

    2004-01-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schroedinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources. (letter to the editor)

  2. Measuring the black hole mass in ultraluminous X-ray sources with the X-ray scaling method

    Science.gov (United States)

    Jang, I.; Gliozzi, M.; Satyapal, S.; Titarchuk, L.

    2018-01-01

    In our recent work, we demonstrated that a novel X-ray scaling method, originally introduced for Galactic black holes (BH), could be reliably extended to estimate the mass of supermassive black holes accreting at moderate to high level. Here, we apply this X-ray scaling method to ultraluminous X-ray sources (ULXs) to constrain their MBH. Using 49 ULXs with multiple XMM-Newton observations, we infer that ULXs host both stellar mass BHs and intermediate mass BHs. The majority of the sources of our sample seem to be consistent with the hypothesis of highly accreting massive stellar BHs with MBH ∼ 100 M⊙. Our results are in general agreement with the MBH values obtained with alternative methods, including model-independent variability methods. This suggests that the X-ray scaling method is an actual scale-independent method that can be applied to all BH systems accreting at moderate-high rate.

  3. Worldwide F(ST) estimates relative to five continental-scale populations.

    Science.gov (United States)

    Steele, Christopher D; Court, Denise Syndercombe; Balding, David J

    2014-11-01

We estimate the population genetics parameter FST (also referred to as the fixation index) from short tandem repeat (STR) allele frequencies, comparing many worldwide human subpopulations at approximately the national level with continental-scale populations. FST is commonly used to measure population differentiation, and is important in forensic DNA analysis to account for remote shared ancestry between a suspect and an alternative source of the DNA. We estimate FST comparing subpopulations with a hypothetical ancestral population, which is the approach most widely used in population genetics, and also compare a subpopulation with a sampled reference population, which is more appropriate for forensic applications. Both estimation methods are likelihood-based, in which FST is related to the variance of the multinomial-Dirichlet distribution for allele counts. Overall, we find low FST values; the posterior 97.5 percentile estimates are also about half the magnitude of STR-based estimates from population genetics surveys that focus on distinct ethnic groups rather than a general population. Our findings support the use of FST values up to 3% in forensic calculations, which corresponds to some current practice.
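The intuition behind FST as a variance measure can be shown with a toy moment estimator: compare the variance of an allele's frequency across subpopulations with its maximum possible value, p̄(1 − p̄). The five frequencies below are invented for illustration; the paper's likelihood-based multinomial-Dirichlet estimators are more involved.

```python
import numpy as np

# Toy moment estimator of FST for a single allele observed in 5 subpopulations.
# Frequencies are invented; real estimates combine many loci and alleles.
p = np.array([0.12, 0.15, 0.10, 0.14, 0.13])

p_bar = p.mean()
fst = p.var() / (p_bar * (1 - p_bar))   # Var(p) relative to p_bar*(1 - p_bar)
print(round(fst, 4))
```

For these closely matched frequencies the estimate is well below 1%, of the same order as the low values the abstract reports for general populations.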

  4. EMD-regression for modelling multi-scale relationships, and application to weather-related cardiovascular mortality

    Science.gov (United States)

    Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.

    2018-01-01

In a number of environmental studies, relationships between natural processes are often assessed through regression analyses using time series data. Such data are often multi-scale and non-stationary, leading to poor accuracy of the resulting regression models and therefore to results of moderate reliability. To deal with this issue, the present paper introduces the EMD-regression methodology, which consists of applying the empirical mode decomposition (EMD) algorithm to the data series and then using the resulting components in regression models. The proposed methodology presents a number of advantages. First, it accounts for the non-stationarity associated with the data series. Second, the approach acts as a scan of the relationship between a response variable and the predictors at different time scales, providing new insights about this relationship. To illustrate the proposed methodology, it is applied to study the relationship between weather and cardiovascular mortality in Montreal, Canada. The results provide new insights concerning the studied relationship. For instance, they show that humidity can cause excess mortality at the monthly time scale, a scale not visible in classical models. A comparison is also conducted with state-of-the-art methods, namely generalized additive models and distributed lag models, both widely used in weather-related health studies. The comparison shows that EMD-regression achieves better prediction performance and provides more detail than classical models concerning the relationship.
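The "decompose, then regress each component" idea can be illustrated without a full EMD implementation. The sketch below substitutes a crude two-scale decomposition (a 30-day moving average and its residual, both arbitrary choices) for EMD, and shows how regressing on the components recovers scale-specific coefficients that a single regression on the raw predictor would blur together; all data are simulated.

```python
import numpy as np

# Stand-in for EMD: split a daily temperature series into a slow component
# (30-day moving average) and a fast residual, then regress mortality on both.
# The scale-specific coefficients (0.5 fast, 2.5 slow) are recovered.
def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

rng = np.random.default_rng(1)
t = np.arange(730)                                        # two years, daily
temp = 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, t.size)
slow = moving_average(temp, 30)
fast = temp - slow
mortality = 0.5 * fast + 2.5 * slow + rng.normal(0, 1, t.size)

X = np.column_stack([np.ones(t.size), fast, slow])
beta, *_ = np.linalg.lstsq(X, mortality, rcond=None)
print(beta.round(2))   # coefficients for [intercept, fast, slow]
```

With real data the components would be the intrinsic mode functions produced by EMD, one regressor per time scale.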

  5. Global hydrobelts: improved reporting scale for water-related issues?

    Science.gov (United States)

    Meybeck, M.; Kummu, M.; Dürr, H. H.

    2012-08-01

Questions related to water such as its availability, water needs or stress, or management, are mapped at various resolutions at the global scale. They are reported at many scales, mostly along political or continental boundaries. As such, they ignore the fundamental heterogeneity of the hydroclimate and the natural boundaries of the river basins. Here, we describe the continental landmasses according to eight global-scale hydrobelts strictly limited by river basins, defined at a 30' (0.5°) resolution. The belts were defined and delineated, based primarily on the annual average temperature (T) and runoff (q), to maximise interbelt differences and minimise intrabelt variability. The belts were further divided into 29 hydroregions based on continental limits. This new global puzzle defines homogeneous and near-contiguous entities with similar hydrological and thermal regimes, glacial and postglacial basin histories, endorheism distribution and sensitivity to climate variations. The Mid-Latitude, Dry and Subtropical belts have northern and southern analogues and a general symmetry can be observed for T and q between them. The Boreal and Equatorial belts are unique. The hydroregions (median size 4.7 Mkm²) contrast strongly, with the average q ranging between 6 and 1393 mm yr⁻¹ and the average T between -9.7 and +26.3 °C. Unlike the hydroclimate, the population density between the North and South belts and between the continents varies greatly, resulting in pronounced differences between the belts with analogues in both hemispheres. The population density ranges from 0.7 to 0.8 p km⁻² for the North American Boreal and some Australian hydroregions to 280 p km⁻² for the Asian part of the Northern Mid-Latitude belt. The combination of population densities and hydroclimate features results in very specific expressions of water-related characteristics in each of the 29 hydroregions. Our initial tests suggest that hydrobelt and hydroregion divisions are often more

  6. Local-scaling density-functional method: Intraorbit and interorbit density optimizations

    International Nuclear Information System (INIS)

    Koga, T.; Yamamoto, Y.; Ludena, E.V.

    1991-01-01

The recently proposed local-scaling density-functional theory provides us with a practical method for the direct variational determination of the electron density function ρ(r). The structure of "orbits," which ensures the one-to-one correspondence between the electron density ρ(r) and the N-electron wave function Ψ({r_k}), is studied in detail. For the realization of the local-scaling density-functional calculations, procedures for intraorbit and interorbit optimizations of the electron density function are proposed. These procedures are numerically illustrated for the helium atom in its ground state at the beyond-Hartree-Fock level.

  7. A family of conjugate gradient methods for large-scale nonlinear equations

    Directory of Open Access Journals (Sweden)

    Dexiang Feng

    2017-09-01

Full Text Available Abstract In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method needs only low storage and its subproblem can be solved easily. Compared with existing methods for this problem, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
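A minimal sketch of a derivative-free projection method for monotone equations F(x) = 0 is given below. The Fletcher-Reeves-type beta in the direction update is only one possible member of such a family, not necessarily the paper's formula; the backtracking line search and hyperplane-projection step are the standard globalization ingredients that allow convergence without Lipschitz assumptions.

```python
import numpy as np

# Sketch of a CG projection method for monotone equations F(x) = 0.
# The FR-type beta is an assumption; the line search and projection step
# follow the usual derivative-free projection framework.
def cg_projection_solve(F, x0, tol=1e-8, max_iter=1000, sigma=1e-4, rho=0.5):
    x = x0.astype(float)
    Fx = F(x)
    d = -Fx
    prev_norm2 = Fx @ Fx
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        alpha = 1.0                      # backtracking line search
        while True:
            z = x + alpha * d
            Fz = F(z)
            if -(Fz @ d) >= sigma * alpha * (d @ d) or alpha < 1e-12:
                break
            alpha *= rho
        # project x onto the hyperplane {y : F(z) @ (y - z) = 0}
        x = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
        Fx = F(x)
        beta = (Fx @ Fx) / prev_norm2    # Fletcher-Reeves-type parameter
        prev_norm2 = Fx @ Fx
        d = -Fx + beta * d
    return x

F = lambda x: x + np.sin(x)              # monotone mapping; unique root x = 0
x = cg_projection_solve(F, np.full(50, 2.0))
print(np.linalg.norm(F(x)))
```

The storage cost is a handful of vectors, which is the "low storage" property the abstract emphasises for large-scale problems.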

  8. EI Scale: an environmental impact assessment scale related to the construction materials used in the reinforced concrete

    Directory of Open Access Journals (Sweden)

    Gilson Morales

    2010-12-01

Full Text Available This study aimed to create the EI Scale, an environmental impact assessment scale related to the construction materials used in the production of reinforced concrete structures. The main motivation was the need to classify environmental impact levels through indicators that assess the degree of damage caused by the process. The scale allowed information to be converted into an estimate of the environmental impact caused. Indicators were defined through the requirements and classification criteria of impact aspects, considering eco-design theory. Moreover, the scale allowed the environmental impact of materials and processes to be classified through four score categories, which were combined into a single final impact score. It was concluded that the EI Scale can be a cheap, accessible and relevant tool for controlling and reducing environmental impact, supporting planning and material specification that minimize the negative effects of construction on the environment.

  9. Multi-scale image segmentation method with visual saliency constraints and its application

    Science.gov (United States)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

Object-based image analysis methods have many advantages over pixel-based methods, so they are one of the current research hotspots. It is very important to obtain image objects by multi-scale image segmentation in order to carry out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize, and the object boundaries obtained are accurate. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features deserve more attention than the others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different can more likely be assigned to the same object. In addition, due to the constraint of the visual saliency model, the constraint ability over local-macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works

  10. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of

  11. Scaling as an Organizational Method

    DEFF Research Database (Denmark)

    Papazu, Irina Maria Clara Hansen; Nelund, Mette

    2018-01-01

    Organization studies have shown limited interest in the part that scaling plays in organizational responses to climate change and sustainability. Moreover, while scales are viewed as central to the diagnosis of the organizational challenges posed by climate change and sustainability, the role...... turn something as immense as the climate into a small and manageable problem, thus making abstract concepts part of concrete, organizational practice....

  12. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Science.gov (United States)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  13. Self-assembling membranes and related methods thereof

    Science.gov (United States)

    Capito, Ramille M; Azevedo, Helena S; Stupp, Samuel L

    2013-08-20

    The present invention relates to self-assembling membranes. In particular, the present invention provides self-assembling membranes configured for securing and/or delivering bioactive agents. In some embodiments, the self-assembling membranes are used in the treatment of diseases, and related methods (e.g., diagnostic methods, research methods, drug screening).

  14. Scaling relation and regime map of explosive gas–liquid flow of binary Lennard-Jones particle system

    KAUST Repository

    Inaoka, Hajime

    2012-02-01

We study explosive gas-liquid flows caused by rapid depressurization using a molecular dynamics model of Lennard-Jones particle systems. A unique feature of our model is that it consists of two types of particles: liquid particles, which tend to form liquid droplets, and gas particles, which remain in supercritical gaseous states under the depressurization realized by the simulations. The system has a pipe-like structure similar to the model of a shock tube. We observed physical quantities and flow regimes in systems with various combinations of initial particle number densities and initial temperatures. It is observed that a physical quantity Q, such as pressure, at position z measured along a pipe-like system at time t follows a scaling relation Q(z,t) = Q̃(z/t) with a scaling function Q̃(ζ). A similar scaling relation holds for the time evolution of flow regimes in a system. These scaling relations lead to a regime map of explosive flows in parameter spaces of local physical quantities. The validity of the scaling relations of physical quantities means that the physics of equilibrium systems, such as an equation of state, is applicable to the explosive flows in our simulations, though these flows involve highly nonequilibrium processes. In other words, if breaking of the scaling relations is observed, it means that the explosive flows cannot be fully described by the physics of equilibrium systems. We show the possibility of breaking of the scaling relations and discuss its implications in the last section. © 2011 Elsevier B.V. All rights reserved.
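A scaling relation of the form Q(z,t) = Q̃(ζ) with ζ = z/t can be checked numerically by a data-collapse test: profiles measured at different times should coincide when plotted against z/t. The sketch below uses an invented self-similar profile, not the paper's simulation data.

```python
import numpy as np

# Data-collapse check of a scaling relation Q(z, t) = Q~(z / t): profiles
# at different times coincide when plotted against zeta = z / t.
def Q(z, t):
    zeta = z / t
    return np.clip(1.0 - 0.5 * zeta, 0.0, 1.0)   # hypothetical pressure profile

z = np.linspace(0.0, 10.0, 101)
profile_early = Q(z, t=1.0)
profile_late = Q(2.0 * z, t=2.0)   # same zeta values, sampled at a later time

collapse_error = np.max(np.abs(profile_early - profile_late))
print(collapse_error)   # exactly 0.0: the profiles collapse onto one curve
```

For simulation data, a nonzero collapse error growing beyond measurement noise would signal the breaking of the scaling relations discussed in the last section.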

  15. Rapid, high-temperature, field test method for evaluation of geothermal calcium carbonate scale inhibitors

    Energy Technology Data Exchange (ETDEWEB)

    Asperger, R.G.

    1986-09-01

A new test method is described that allows the rapid field testing of calcium carbonate scale inhibitors at 500°F (260°C). The method evolved from use of a full-flow test loop on a well with a mass flow rate of about 1 × 10⁶ lbm/hr (126 kg/s). It is a simple, effective way to evaluate the effectiveness of inhibitors under field conditions. Five commercial formulations were chosen for field evaluation on the basis of nonflowing, laboratory screening tests at 500°F (260°C). Four of these formulations from different suppliers controlled calcium carbonate scale deposition as measured by the test method. Two of these could dislodge recently deposited scale that had not age-hardened. Performance-profile diagrams, which were measured for these four effective inhibitors, show the concentration interrelationship between brine calcium and inhibitor concentrations at which the formulations will and will not stop scale formation in the test apparatus. With these diagrams, one formulation was chosen for testing on the full-flow brine line. The composition was tested for 6 weeks and showed a dramatic decrease in the scaling occurring at the flow-control valve. This scaling was about to force a shutdown of a major, long-term flow test being done for reservoir economic evaluations. The inhibitor stopped the scaling, and the test was performed without interruption.

  16. Planck early results. XII. Cluster Sunyaev-Zeldovich optical scaling relations

    DEFF Research Database (Denmark)

    Poutanen, T.; Natoli, P.; Polenta, G.

    2011-01-01

    We present the Sunyaev-Zeldovich (SZ) signal-to-richness scaling relation (Y500 - N200) for the MaxBCG cluster catalogue. Employing a multi-frequency matched filter on the Planck sky maps, we measure the SZ signal for each cluster by adapting the filter according to weak-lensing calibrated mass-r...

  17. Testing Scaling Relations for Solar-like Oscillations from the Main Sequence to Red Giants Using Kepler Data

    DEFF Research Database (Denmark)

    Huber, D.; Bedding, T.R.; Stello, D.

    2011-01-01

We have analyzed solar-like oscillations in ~1700 stars observed by the Kepler Mission, spanning from the main sequence to the red clump. Using evolutionary models, we test asteroseismic scaling relations for the frequency of maximum power (νmax), the large frequency separation (Δν), and oscillation amplitudes. We show that the difference of the Δν-νmax relation for unevolved and evolved stars can be explained by different distributions in effective temperature and stellar mass, in agreement with what is expected from scaling relations. For oscillation amplitudes, we show that neither (L/M)^s scaling nor the revised scaling relation by Kjeldsen & Bedding is accurate for red-giant stars, and demonstrate that a revised scaling relation with a separate luminosity-mass dependence can be used to calculate amplitudes from the main sequence to red giants to a precision of ~25%. The residuals show...

  18. Studying the properties of photonic quasi-crystals by the scaling convergence method

    International Nuclear Information System (INIS)

    Ho, I-Lin; Ng, Ming-Yaw; Mai, Chien Chin; Ko, Peng Yu; Chang, Yia-Chung

    2013-01-01

    This work introduces the iterative scaling (or inflation) method to systematically approach and analyse the infinite structure of quasi-crystals. The resulting structures preserve local geometric orderings in order to prevent artificial disclination across the boundaries of super-cells, with realistic quasi-crystals coming out under high iteration (infinite super-cell). The method provides an easy way for decorations of quasi-crystalline lattices, and for compact reliefs with a quasi-periodic arrangement to underlying applications. Numerical examples for in-plane and off-plane properties of square-triangle quasi-crystals show fast convergence during iteratively geometric scaling, revealing characteristics that do not appear on regular crystals. (paper)

  19. Suicide-Related Experiences Among Blacks: An Empirical Test of a Suicide Potential Scale

    Science.gov (United States)

    Wenz, Friedrich V.

    1978-01-01

    Developing a Suicide Potential Scale for a number of socially differentiated, stratified census tract populations in a northern city, this paper argues that scores on this scale are related to actual suicidal behavior. These data support the position that variation in suicide among blacks is mainly determined by economic status. (Author)

  20. Comparison of Single and Multi-Scale Method for Leaf and Wood Points Classification from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie

    2018-04-01

The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterizations of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the most widely used classification methods. In geometry-based methods, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points as leaf or wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.
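A common way to build geometry-based salient features at several scales is to compute covariance-eigenvalue descriptors over neighbourhoods of different radii. The sketch below uses a "linearity" feature on a toy point cloud (a thin line and a volumetric blob standing in for wood and leaf points); the feature choice and radii are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Multi-scale covariance-eigenvalue ("linearity") features on a toy point cloud.
# Line-like "wood" points score near 1; a volumetric "leaf" blob scores low.
def linearity_features(points, radii):
    tree = cKDTree(points)
    feats = np.zeros((len(points), len(radii)))
    for j, r in enumerate(radii):
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, r)
            if len(idx) < 3:
                continue                      # too few neighbours at this scale
            ev = np.linalg.eigvalsh(np.cov(points[idx].T))   # ascending order
            feats[i, j] = (ev[2] - ev[1]) / ev[2]            # linearity
    return feats

rng = np.random.default_rng(3)
branch = np.column_stack([np.linspace(0, 1, 200),            # "wood": a thin line
                          rng.normal(0, 0.005, 200),
                          rng.normal(0, 0.005, 200)])
leaves = rng.uniform(0, 0.3, (200, 3)) + np.array([2.0, 0.0, 0.0])  # "leaf" blob

feats = linearity_features(np.vstack([branch, leaves]), radii=[0.08, 0.15])
wood_score, leaf_score = feats[:200].mean(), feats[200:].mean()
print(round(wood_score, 2), round(leaf_score, 2))
```

Stacking the feature columns from several radii gives the multi-scale feature vector; a single column is the single-scale case the paper compares against.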

  1. Large scale obscuration and related climate effects open literature bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional, effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  2. Large scale obscuration and related climate effects open literature bibliography

    International Nuclear Information System (INIS)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional, effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  3. Methods of Scientific Research: Teaching Scientific Creativity at Scale

    Science.gov (United States)

    Robbins, Dennis; Ford, K. E. Saavik

    2016-01-01

We present a scaling-up plan for AstroComNYC's Methods of Scientific Research (MSR), a course designed to improve undergraduate students' understanding of science practices. The course format and goals, notably the open-ended, hands-on, investigative nature of the curriculum, are reviewed. We discuss how the course's interactive pedagogical techniques empower students to learn creativity within the context of experimental design and control-of-variables thinking. To date the course has been offered to a limited number of students in specific programs. The goal of broadly implementing MSR is to reach more students, and earlier in their education, with the specific purpose of supporting and improving retention of students pursuing STEM careers. However, we also discuss challenges in preserving the effectiveness of the teaching and learning experience at scale.

  4. A stochastic immersed boundary method for fluid-structure dynamics at microscopic length scales

    International Nuclear Information System (INIS)

    Atzberger, Paul J.; Kramer, Peter R.; Peskin, Charles S.

    2007-01-01

In modeling many biological systems, it is important to take into account flexible structures which interact with a fluid. At the length scale of cells and cell organelles, thermal fluctuations of the aqueous environment become significant. In this work, it is shown how the immersed boundary method of [C.S. Peskin, The immersed boundary method, Acta Num. 11 (2002) 1-39.] for modeling flexible structures immersed in a fluid can be extended to include thermal fluctuations. A stochastic numerical method is proposed which deals with stiffness in the system of equations by handling systematically the statistical contributions of the fastest dynamics of the fluid and immersed structures over long time steps. An important feature of the numerical method is that time steps can be taken in which the degrees of freedom of the fluid are completely underresolved, partially resolved, or fully resolved while retaining a good level of accuracy. Error estimates in each of these regimes are given for the method. A number of theoretical and numerical checks are furthermore performed to assess its physical fidelity. For a conservative force, the method is found to simulate particles with the correct Boltzmann equilibrium statistics. It is shown in three dimensions that the diffusion of immersed particles simulated with the method has the correct scaling in the physical parameters. The method is also shown to reproduce a well-known hydrodynamic effect of a Brownian particle in which the velocity autocorrelation function exhibits an algebraic (τ^(-3/2)) decay for long times [B.J. Alder, T.E. Wainwright, Decay of the Velocity Autocorrelation Function, Phys. Rev. A 1(1) (1970) 18-21]. A few preliminary results are presented for more complex systems which demonstrate some potential application areas of the method.
Specifically, we present simulations of osmotic effects of molecular dimers, worm-like chain polymer knots, and a basic model of a molecular motor immersed in fluid subject to a
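
The Boltzmann-statistics check described in the abstract can be illustrated in miniature: an overdamped Langevin particle in a harmonic well must equilibrate to a position variance of kT/K. The following toy Euler-Maruyama sketch is not the paper's stochastic immersed boundary scheme; the parameters, discretization, and seed are all illustrative:

```python
import math, random

random.seed(1)

# Overdamped Langevin dynamics: gamma * dx/dt = -K * x + sqrt(2 kT gamma) * xi(t)
kT, K, gamma, dt = 1.0, 2.0, 1.0, 0.01
noise = math.sqrt(2.0 * kT * gamma * dt) / gamma

x, samples = 0.0, []
for step in range(200000):
    x += (-K * x / gamma) * dt + noise * random.gauss(0.0, 1.0)
    if step > 1000:                      # discard equilibration transient
        samples.append(x)

var = sum(v * v for v in samples) / len(samples)
print(var)   # ≈ kT / K = 0.5, the Boltzmann equilibrium variance
```

With kT = 1 and K = 2 the sample variance settles near 0.5, matching the Boltzmann prediction kT/K up to discretization and sampling error.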

  5. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2010-01-01

Full Text Available Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
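
Correlation-based matching of this kind builds on the normalized cross-correlation (NCC) of image patches. A minimal sketch of plain NCC follows (the patch values are invented; the oriented, multiscale MOCC variant the paper proposes is not shown):

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized patches
    (flattened to 1-D lists). Returns a value in [-1, 1]."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    da = [v - mean_a for v in patch_a]
    db = [v - mean_b for v in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

# A gain/offset change leaves NCC unchanged, which is why correlation
# measures are attractive for matching under lighting variation.
p = [10, 20, 30, 40, 50, 60, 70, 80, 90]
q = [2 * v + 5 for v in p]   # same structure, different gain and offset
print(ncc(p, q))             # → 1.0
```

MOCC adds orientation and multiscale handling on top of this basic similarity measure to gain the rotation and scale invariance described above.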

  6. A New Feature Extraction Method Based on EEMD and Multi-Scale Fuzzy Entropy for Motor Bearing

    Directory of Open Access Journals (Sweden)

    Huimin Zhao

    2016-12-01

Full Text Available Feature extraction is one of the most important, pivotal, and difficult problems in mechanical fault diagnosis, which directly relates to the accuracy of fault diagnosis and the reliability of early fault prediction. Therefore, a new fault feature extraction method, called the EDOMFE method, based on integrating ensemble empirical mode decomposition (EEMD), mode selection, and multi-scale fuzzy entropy, is proposed to accurately diagnose faults in this paper. The EEMD method is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs with different physical significance. The correlation coefficient analysis method is used to calculate and determine three improved IMFs, which are close to the original signal. The multi-scale fuzzy entropy, with its ability to effectively distinguish the complexity of different signals, is used to calculate the entropy values of the selected three IMFs in order to form a feature vector with the complexity measure, which is regarded as the input of the support vector machine (SVM) model for training and constructing a SVM classifier (EOMSMFD) based on EDOMFE and SVM for fulfilling fault pattern recognition. Finally, the effectiveness of the proposed method is validated by real bearing vibration signals of the motor with different loads and fault severities. The experimental results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal and that the proposed EOMSMFD method can accurately diagnose the fault types and fault severities for the inner race fault, the outer race fault, and the rolling element fault of the motor bearing. Therefore, the proposed method provides a new fault diagnosis technology for rotating machinery.
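
The mode-selection step can be sketched with a plain Pearson correlation: keep the components that correlate most strongly with the original signal. The toy "IMFs" below are illustrative stand-ins for actual EEMD output:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def select_imfs(signal, imfs, keep=3):
    """Rank candidate IMFs by |correlation| with the original signal
    and keep the `keep` most correlated ones."""
    return sorted(imfs, key=lambda imf: -abs(pearson(signal, imf)))[:keep]

# Toy components standing in for EEMD output (illustrative only):
trend = [1, 2, 3, 4, 5, 6]          # strongly present in the signal
osc = [1, -1, 1, -1, 1, -1]         # weakly present
flat = [5, 5, 5, 5, 5, 5]           # unrelated (zero variance)
signal = [2, 1, 4, 3, 6, 5]
print(select_imfs(signal, [flat, osc, trend], keep=2) == [trend, osc])  # → True
```

In the paper's pipeline the selected IMFs then feed the multi-scale fuzzy entropy stage, which is not sketched here.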

  7. Scaling relation of the anomalous Hall effect in (Ga,Mn)As

    Science.gov (United States)

    Glunk, M.; Daeubler, J.; Schoch, W.; Sauer, R.; Limmer, W.

    2009-09-01

We present magnetotransport studies performed on an extended set of (Ga,Mn)As samples at 4.2 K with longitudinal conductivities σ_xx ranging from the low-conductivity to the high-conductivity regime. The anomalous Hall conductivity σ_xy^AH is extracted from the measured longitudinal and Hall resistivities. A transition from σ_xy^AH = 20 Ω⁻¹ cm⁻¹ due to the Berry phase effect in the high-conductivity regime to a scaling relation σ_xy^AH ∝ σ_xx^1.6 for low-conductivity samples is observed. This scaling relation is consistent with a recently developed unified theory of the anomalous Hall effect in the framework of the Keldysh formalism. It turns out to be independent of crystallographic orientation, growth conditions, Mn concentration, and strain, and can therefore be considered universal for low-conductivity (Ga,Mn)As. The relation plays a crucial role when deriving values of the hole concentration from magnetotransport measurements in low-conductivity (Ga,Mn)As. In addition, the hole diffusion constants for the high-conductivity samples are determined from the measured longitudinal conductivities.
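
The exponent in a scaling relation of this kind is conventionally extracted as the slope of a least-squares line in log-log space. A minimal sketch with synthetic data (the prefactor and conductivity values are made up):

```python
import math

def loglog_slope(x, y):
    """Least-squares slope of log(y) vs. log(x): the scaling exponent
    beta in a power law y ∝ x**beta."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic low-conductivity data obeying sigma_xy = 0.01 * sigma_xx**1.6
sigma_xx = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
sigma_xy = [0.01 * s ** 1.6 for s in sigma_xx]
print(loglog_slope(sigma_xx, sigma_xy))   # → 1.6 (up to floating point)
```

On real data the fit would of course carry scatter, and the interesting question is whether the fitted exponent is stable across samples, as the study reports.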

  8. Coarse-graining using the relative entropy and simplex-based optimization methods in VOTCA

    Science.gov (United States)

    Rühle, Victor; Jochum, Mara; Koschke, Konstantin; Aluru, N. R.; Kremer, Kurt; Mashayak, S. Y.; Junghans, Christoph

    2014-03-01

    Coarse-grained (CG) simulations are an important tool to investigate systems on larger time and length scales. Several methods for systematic coarse-graining were developed, varying in complexity and the property of interest. Thus, the question arises which method best suits a specific class of system and desired application. The Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) provides a uniform platform for coarse-graining methods and allows for their direct comparison. We present recent advances of VOTCA, namely the implementation of the relative entropy method and downhill simplex optimization for coarse-graining. The methods are illustrated by coarse-graining SPC/E bulk water and a water-methanol mixture. Both CG models reproduce the pair distributions accurately. SYM is supported by AFOSR under grant 11157642 and by NSF under grant 1264282. CJ was supported in part by the NSF PHY11-25915 at KITP. K. Koschke acknowledges funding by the Nestle Research Center.
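
The relative entropy at the heart of the implemented method is the Kullback-Leibler divergence between the target (fine-grained) distribution and the CG model's distribution; coarse-graining minimizes it. A sketch of the discrete form S_rel = Σ p ln(p/q), on toy histograms rather than actual simulation output:

```python
import math

def relative_entropy(p, q):
    """Discrete Kullback-Leibler divergence sum(p * ln(p / q)).
    Nonnegative, and zero exactly when the distributions match."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy target distribution vs. two candidate CG models:
target = [0.1, 0.4, 0.4, 0.1]
cg_good = [0.15, 0.35, 0.35, 0.15]
cg_bad = [0.25, 0.25, 0.25, 0.25]
print(relative_entropy(target, cg_good) < relative_entropy(target, cg_bad))  # → True
```

A relative-entropy coarse-graining loop would adjust CG potential parameters to drive this divergence down; VOTCA's actual implementation works with ensemble averages rather than explicit histograms like these.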

  9. Natural Scales in Geographical Patterns

    Science.gov (United States)

    Menezes, Telmo; Roth, Camille

    2017-04-01

    Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.
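
The phase-transition detection step can be caricatured as finding discontinuities in a sequence of partition-similarity values: flag any jump far larger than the typical jump. The curve and threshold factor below are invented, not the paper's algorithm:

```python
def discontinuities(values, factor=5.0):
    """Indices where the jump between consecutive values exceeds
    `factor` times the median jump: a crude, parameter-light detector."""
    jumps = [abs(b - a) for a, b in zip(values, values[1:])]
    med = sorted(jumps)[len(jumps) // 2]
    return [i + 1 for i, j in enumerate(jumps) if med > 0 and j > factor * med]

# Toy "partition similarity vs. distance percentile" curve with two clear
# phase jumps, mimicking 3 natural scales separated by 2 transitions:
curve = [0.95, 0.94, 0.95, 0.60, 0.61, 0.60, 0.59, 0.30, 0.31, 0.30]
print(discontinuities(curve))   # → [3, 7]
```

Each detected index marks a boundary between plateaus, i.e. between candidate natural scales.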

  10. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Directory of Open Access Journals (Sweden)

    Q. Zhang

    2018-02-01

Full Text Available River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling – in the form of spectral slope (β) or other equivalent scaling parameters (e.g., the Hurst exponent) – are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2), and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb–Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among
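
The Lomb–Scargle periodogram discussed above is the standard spectral estimator for irregularly sampled series; the slope β is then read off a log-log fit of power vs. frequency. A minimal sketch of the classic Lomb–Scargle formula, used here only to locate a known periodicity in irregular samples (the sample times and frequency grid are illustrative):

```python
import math

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram for irregularly sampled data.
    t: sample times, y: values (mean is subtracted internally),
    freqs: trial frequencies in cycles per unit time."""
    n = len(y)
    ybar = sum(y) / n
    yc = [v - ybar for v in y]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # The offset tau makes the estimate invariant to time shifts.
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        ct = [math.cos(w * (ti - tau)) for ti in t]
        st = [math.sin(w * (ti - tau)) for ti in t]
        pc = sum(v * c for v, c in zip(yc, ct)) ** 2 / sum(c * c for c in ct)
        ps = sum(v * s for v, s in zip(yc, st)) ** 2 / sum(s * s for s in st)
        power.append(0.5 * (pc + ps))
    return power

# Irregularly sampled 0.1 Hz sinusoid: the periodogram peaks at 0.1.
t = [i + 0.3 * math.sin(7.0 * i) for i in range(100)]
y = [math.sin(2.0 * math.pi * 0.1 * ti) for ti in t]
freqs = [k / 100.0 for k in range(1, 51)]
power = lomb_scargle(t, y, freqs)
print(freqs[power.index(max(power))])   # → 0.1
```

For fractal-scaling estimation one would fit the slope of log(power) against log(frequency); the paper's point is that this estimate is biased when the gap structure is irregular.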

  11. Cone beam CT dose reduction in prostate radiotherapy using Likert scale methods.

    Science.gov (United States)

    Langmack, Keith A; Newton, Louise A; Jordan, Suzanne; Smith, Ruth

    2016-01-01

To use a Likert scale method to optimize image quality (IQ) for cone beam CT (CBCT) soft-tissue matching for image-guided radiotherapy of the prostate. 23 males with local/locally advanced prostate cancer had the CBCT IQ assessed using a 4-point Likert scale (4 = excellent, no artefacts; 3 = good, few artefacts; 2 = poor, just able to match; 1 = unsatisfactory, not able to match) at three levels of exposure. The lateral separations of the subjects were also measured. The Friedman test and Wilcoxon signed-rank tests were used to determine if the IQ was associated with the exposure level. We used the point-biserial correlation and a χ² test to investigate the relationship between the separation and IQ. The Friedman test showed that the IQ was related to exposure (p = 2 × 10⁻⁷) and the Wilcoxon signed-rank test demonstrated that the IQ decreased as exposure decreased (all p-values <0.005). We did not find a correlation between the IQ and the separation (correlation coefficient 0.045), but for separations <35 cm, it was possible to use the lowest exposure parameters studied. We can reduce exposure factors to 80% of those supplied with the system without hindering the matching process for all patients. For patients with lateral separations <35 cm, the exposure factors can be reduced further to 64% of the original values. Likert scales are a useful tool for measuring IQ in the optimization of CBCT IQ for soft-tissue matching in radiotherapy image guidance applications.
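
The Wilcoxon signed-rank test used above compares paired ordinal scores. A minimal sketch of the rank-sum statistic on made-up paired Likert scores (real use would also need a p-value from an exact table or a normal approximation):

```python
def wilcoxon_w(pairs):
    """Signed-rank statistic W+ for paired samples: rank the nonzero
    absolute differences (average ranks for ties) and sum the ranks
    of the positive differences."""
    diffs = [a - b for a, b in pairs if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tie group (1-based)
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

# Hypothetical IQ scores at full vs. reduced exposure (invented data):
pairs = [(4, 3), (4, 3), (3, 2), (4, 4), (3, 3), (4, 2), (3, 2), (4, 3)]
print(wilcoxon_w(pairs))   # → 21.0
```

A W+ this lopsided (all nonzero differences positive) is what a consistent drop in IQ at lower exposure looks like; the study reports exactly such one-sided results.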

  12. Single-field consistency relations of large scale structure

    International Nuclear Information System (INIS)

    Creminelli, Paolo; Noreña, Jorge; Simonović, Marko; Vernizzi, Filippo

    2013-01-01

    We derive consistency relations for the late universe (CDM and ΛCDM): relations between an n-point function of the density contrast δ and an (n+1)-point function in the limit in which one of the (n+1) momenta becomes much smaller than the others. These are based on the observation that a long mode, in single-field models of inflation, reduces to a diffeomorphism since its freezing during inflation all the way until the late universe, even when the long mode is inside the horizon (but out of the sound horizon). These results are derived in Newtonian gauge, at first and second order in the small momentum q of the long mode and they are valid non-perturbatively in the short-scale δ. In the non-relativistic limit our results match with [1]. These relations are a consequence of diffeomorphism invariance; they are not satisfied in the presence of extra degrees of freedom during inflation or violation of the Equivalence Principle (extra forces) in the late universe

  13. Seismic detection method for small-scale discontinuities based on dictionary learning and sparse representation

    Science.gov (United States)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei

    2017-02-01

Studying small-scale geologic discontinuities, such as faults, cavities and fractures, plays a vital role in analyzing the inner conditions of reservoirs, as these geologic structures and elements can provide storage spaces and migration pathways for petroleum. However, these geologic discontinuities have weak energy and are easily contaminated with noise, and therefore effectively extracting them from seismic data becomes a challenging problem. In this paper, a method for detecting small-scale discontinuities using dictionary learning and sparse representation is proposed that can recover high-resolution information by sparse coding. A K-SVD (K-means clustering via Singular Value Decomposition) sparse representation model that contains a two-stage iteration procedure, sparse coding and dictionary updating, is suggested for mathematically expressing these seismic small-scale discontinuities. Generally, the orthogonal matching pursuit (OMP) algorithm is employed for sparse coding. However, the method can only update one dictionary atom at a time. In order to improve calculation efficiency, a regularized version of the OMP algorithm is presented for simultaneously updating a number of atoms at a time. Two numerical experiments demonstrate the validity of the developed method for clarifying and enhancing small-scale discontinuities. The field example of carbonate reservoirs further demonstrates its effectiveness in revealing masked tiny faults and small-scale cavities.
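
The OMP inner loop can be sketched in a few lines: greedily pick the atom most correlated with the residual, then re-fit all selected coefficients by least squares on the chosen support. The 4-D dictionary below is a toy; the regularized multi-atom variant the paper proposes is not shown:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def omp(atoms, y, sparsity):
    """Orthogonal matching pursuit: pick the atom most correlated with the
    residual, then re-fit all chosen coefficients by least squares."""
    residual, support, coeffs = y[:], [], []
    for _ in range(sparsity):
        j = max(range(len(atoms)), key=lambda i: abs(dot(atoms[i], residual)))
        if j not in support:
            support.append(j)
        chosen = [atoms[i] for i in support]
        gram = [[dot(a, b) for b in chosen] for a in chosen]
        coeffs = solve(gram, [dot(a, y) for a in chosen])
        residual = [y[k] - sum(c * a[k] for c, a in zip(coeffs, chosen))
                    for k in range(len(y))]
    return dict(zip(support, coeffs))

# Toy dictionary of unit-norm atoms in R^4; y is exactly 2-sparse over it.
atoms = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
         [0.6, 0.8, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
y = [0.9, 1.2, 0.7, 0.0]          # = 1.5*atoms[2] + 0.7*atoms[3]
print(omp(atoms, y, sparsity=2))  # coefficients ≈ {2: 1.5, 3: 0.7}
```

In K-SVD this sparse-coding step alternates with a dictionary update, which is the stage the paper accelerates.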

  14. Accounting for Scale Heterogeneity in Healthcare-Related Discrete Choice Experiments when Comparing Stated Preferences: A Systematic Review.

    Science.gov (United States)

    Wright, Stuart J; Vass, Caroline M; Sim, Gene; Burton, Michael; Fiebig, Denzil G; Payne, Katherine

    2018-02-28

Scale heterogeneity, or differences in the error variance of choices, may account for a significant amount of the observed variation in the results of discrete choice experiments (DCEs) when comparing preferences between different groups of respondents. The aim of this study was to identify if, and how, scale heterogeneity has been addressed in healthcare DCEs that compare the preferences of different groups. A systematic review identified all healthcare DCEs published between 1990 and February 2016. The full text of each DCE was then screened to identify studies that compared preferences using data generated from multiple groups. Data were extracted and tabulated on year of publication, samples compared, tests for scale heterogeneity, and analytical methods to account for scale heterogeneity. Narrative analysis was used to describe if, and how, scale heterogeneity was accounted for when preferences were compared. A total of 626 healthcare DCEs were identified. Of these, 199 (32%) aimed to compare the preferences of different groups specified at the design stage, while 79 (13%) compared the preferences of groups identified at the analysis stage. Of the 278 included papers, 49 (18%) discussed potential scale issues, 18 (7%) used a formal method of analysis to account for scale between groups, and 2 (1%) accounted for scale differences between preference groups at the analysis stage. Scale heterogeneity was present in 65% (n = 13) of studies that tested for it. Analytical methods to test for scale heterogeneity included coefficient plots (n = 5, 2%), heteroscedastic conditional logit models (n = 6, 2%), Swait and Louviere tests (n = 4, 1%), generalised multinomial logit models (n = 5, 2%), and scale-adjusted latent class analysis (n = 2, 1%). Scale heterogeneity is a prevalent issue in healthcare DCEs. Despite this, few published DCEs have discussed such issues, and fewer still have used formal methods to identify and account for the impact of scale

  15. Scaling relation between earthquake magnitude and the departure time from P wave similar growth

    Science.gov (United States)

    Noda, Shunta; Ellsworth, William L.

    2016-01-01

    We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
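
The departure time Tdp can be caricatured as the first instant at which an event's filtered displacement leaves a common "similar growth" template by more than a tolerance. Everything below (template shape, tolerance, samples) is illustrative, not the paper's actual procedure:

```python
def departure_time(times, disp, template, tol=0.25):
    """First time at which the displacement deviates from the common
    growth template by more than `tol` (relative); None if it never does."""
    for t, d, ref in zip(times, disp, template):
        if ref and abs(d - ref) / abs(ref) > tol:
            return t
    return None

# Toy onset curves: both events grow as t**2 at first; the larger event
# departs upward once its rupture outgrows the common early behavior.
times = [0.1 * k for k in range(1, 21)]
template = [t ** 2 for t in times]
small_event = template[:]                          # never departs
big_event = [t ** 2 * (1.0 + max(0.0, t - 1.0)) for t in times]
print(departure_time(times, big_event, template))  # → ≈ 1.3
```

The paper's finding is that a quantity of this kind, measured on band-passed P-wave displacement, correlates with the final magnitude, so a later departure hints at a larger event while rupture is still in progress.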

  16. Landslide susceptibility mapping on a global scale using the method of logistic regression

    Directory of Open Access Journals (Sweden)

    L. Lin

    2017-08-01

Full Text Available This paper proposes a statistical model for mapping global landslide susceptibility based on logistic regression. After investigating explanatory factors for landslides in the existing literature, five factors were selected to model landslide susceptibility: relative relief, extreme precipitation, lithology, ground motion and soil moisture. When building the model, 70 % of landslide and non-landslide points were randomly selected for logistic regression, and the others were used for model validation. To evaluate the accuracy of predictive models, this paper adopts several criteria including the receiver operating characteristic (ROC) curve method. Logistic regression experiments found all five factors to be significant in explaining landslide occurrence on a global scale. During the modeling process, the percentage correct in the confusion matrix of landslide classification was approximately 80 % and the area under the curve (AUC) was nearly 0.87. During the validation process, the above statistics were about 81 % and 0.88, respectively. Such a result indicates that the model has strong robustness and stable performance. This model found that at a global scale, soil moisture can be dominant in the occurrence of landslides and the topographic factor may be secondary.
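
The modeling pipeline above (fit a logistic regression, then validate with ROC/AUC) can be sketched in miniature. The tiny two-factor dataset is invented; the real study uses five global factors and a 70/30 train/validation split:

```python
import math

def sigmoid(z):
    z = max(-30.0, min(30.0, z))          # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=3000):
    """Plain stochastic-gradient logistic regression; returns [bias, w1, ...]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            g = p - yi                    # gradient of the log-loss wrt z
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Invented (relief, soil-moisture) predictors; landslides cluster at high values.
X = [(0.9, 0.8), (0.8, 0.9), (0.7, 0.7), (0.6, 0.8),
     (0.2, 0.3), (0.3, 0.1), (0.1, 0.2), (0.4, 0.3)]
y = [1, 1, 1, 1, 0, 0, 0, 0]
w = train_logistic(X, y)
scores = [sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) for xi in X]
print(auc(scores, y))   # → 1.0 on this separable toy data
```

An AUC near 0.87-0.88, as reported above, means the model ranks a randomly chosen landslide point above a randomly chosen non-landslide point about 87-88 % of the time.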

  17. Determining the multi-scale hedge ratios of stock index futures using the lower partial moments method

    Science.gov (United States)

    Dai, Jun; Zhou, Haigang; Zhao, Shaoquan

    2017-01-01

This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. Then these parametric methods are compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by the parametric hedging model based on the features of sequence distributions. In addition, if minimum-LPM is selected as a hedge target, the hedging periods, degree of risk aversion, and target returns can each affect the multi-scale hedge ratios and hedge efficiency.

  18. Scale Development and Initial Tests of the Multidimensional Complex Adaptive Leadership Scale for School Principals: An Exploratory Mixed Method Study

    Science.gov (United States)

    Özen, Hamit; Turan, Selahattin

    2017-01-01

    This study was designed to develop the scale of the Complex Adaptive Leadership for School Principals (CAL-SP) and examine its psychometric properties. This was an exploratory mixed method research design (ES-MMD). Both qualitative and quantitative methods were used to develop and assess psychometric properties of the questionnaire. This study…

  19. The Spectroscopy and H-band Imaging of Virgo Cluster Galaxies (SHIVir) Survey: Scaling Relations and the Stellar-to-total Mass Relation

    Energy Technology Data Exchange (ETDEWEB)

    Ouellette, Nathalie N.-Q.; Courteau, Stéphane [Department of Physics, Engineering Physics and Astronomy, Queen’s University, Kingston, ON K7L 3N6 (Canada); Holtzman, Jon A. [Department of Physics and Astronomy, New Mexico State University, Las Cruces, NM, 88003-8001 (United States); Dutton, Aaron A. [Department of Physics, New York University Abu Dhabi, Abu Dhabi (United Arab Emirates); Cappellari, Michele [Sub-department of Astrophysics, Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom); Dalcanton, Julianne J. [Department of Astronomy, University of Washington, Seattle, WA, 98195 (United States); McDonald, Michael [MIT Kavli Institute for Astrophysics and Space Research, MIT, Cambridge, MA, 02139 (United States); Roediger, Joel C.; Côté, Patrick; Ferrarese, Laura [Herzberg Institute of Astrophysics, National Research Council, Victoria, BC, V9E 2E7 (Canada); Taylor, James E. [Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, N2L 3G1 (Canada); Tully, R. Brent [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822-1839 (United States); Peng, Eric W. [Department of Astronomy, Peking University, Beijing 100871 (China)

    2017-07-01

We present parameter distributions and fundamental scaling relations for 190 Virgo cluster galaxies in the SHIVir survey. The distribution of galaxy velocities is bimodal about V_circ ∼ 125 km s⁻¹, hinting at the existence of dynamically unstable modes in the inner regions of galaxies. An analysis of the Tully-Fisher relation (TFR) of late-type galaxies (LTGs) and the fundamental plane (FP) of early-type galaxies (ETGs) is presented, yielding a compendium of galaxy scaling relations. The slope and zero-point of the Virgo TFR match those of field galaxies, while scatter differences likely reflect distinct evolutionary histories. The velocities minimizing scatter for the TFR and FP are measured at large apertures where the baryonic fraction becomes subdominant. While TFR residuals remain independent of any galaxy parameters, FP residuals (i.e., the FP “tilt”) correlate strongly with the dynamical-to-stellar mass ratio, yielding stringent galaxy formation constraints. We construct a stellar-to-total mass relation (STMR) for ETGs and LTGs and find linear but distinct trends over the range M_* = 10^(8–11) M_⊙. Stellar-to-halo mass relations (SHMRs), which probe the extended dark matter halo, can be scaled down to masses estimated within the optical radius, showing a tight match with the Virgo STMR at low masses; possibly inadequate halo abundance matching prescriptions and broad radial scalings complicate this comparison at all masses. While ETGs appear to be more compact than LTGs of the same stellar mass in projected space, their mass-size relations in physical space are identical. The trends reported here may soon be validated through well-resolved numerical simulations.

  20. SCALE FACTOR DETERMINATION METHOD OF ELECTRO-OPTICAL MODULATOR IN FIBER-OPTIC GYROSCOPE

    Directory of Open Access Journals (Sweden)

    A. S. Aleynik

    2016-05-01

Full Text Available Subject of Research. We propose a method for dynamic measurement of the half-wave voltage of an electro-optic modulator as part of a fiber optic gyroscope. Excluding the impact of the angular acceleration on measurement of the electro-optical coefficient is achieved through the use of a homodyne demodulation method that allows separation of the Sagnac phase shift signal and an auxiliary signal for measuring the electro-optical coefficient in the frequency domain. Method. The essence of the method is the decomposition of each step of digital serrodyne modulation into two parts of equal duration. The first part is used for quadrature modulation signals. The second part comprises samples of the auxiliary signal used to determine the value of the scale factor of the modulator. Modeling is done both in a standalone model and as part of a general model of the gyroscope. The applicability of the proposed method is investigated, as well as its qualitative and quantitative characteristics: the absolute and relative accuracy of the electro-optic coefficient, the stability of the method under the effects of angular velocities and accelerations, and the method's resistance to noise in actual devices. Main Results. The simulation has shown the ability to measure an angular velocity changing under the influence of angular acceleration acting on the device, with simultaneous measurement of the electro-optical coefficient of the phase modulator and without interference between these processes. Practical Relevance. The ability presented in the paper to eliminate the influence of the angular acceleration on the measurement accuracy of the electro-optical coefficient of the phase modulator will allow implementing accurate measurement algorithms for fiber optic gyroscopes resistant to significant acceleration in real devices.

  1. Efficacy and Safety of Two Methadone Titration Methods for the Treatment of Cancer-Related Pain: The EQUIMETH2 Trial (Methadone for Cancer-Related Pain).

    Science.gov (United States)

    Poulain, Philippe; Berleur, Marie-Pierre; Lefki, Shimsi; Lefebvre, Danièle; Chvetzoff, Gisèle; Serra, Eric; Tremellat, Fibra; Derniaux, Alain; Filbet, Marilène

    2016-11-01

    In the European Association for Palliative Care recommendations for cancer pain management, there was no consensus regarding the indications, titration, or monitoring of methadone. This national, randomized, multicenter trial aimed to compare two methadone titration methods (stop-and-go vs. progressive) in patients with cancer-related pain who were inadequately relieved by or intolerant to Level 3 opioids. The primary end point was the rate of success/failure at Day 4, defined as pain relief (reduction of at least two points on the visual scale and a pain score methods were considered equally easy to perform by nearly 60% of the clinicians. Methadone is an effective and sustainable second-line alternative opioid for the treatment of cancer-related pain. The methods of titration are comparable in terms of efficacy, safety, and ease of use. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  2. Scaling local species-habitat relations to the larger landscape with a hierarchical spatial count model

    Science.gov (United States)

    Thogmartin, W.E.; Knutson, M.G.

    2007-01-01

Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.

  3. Subspace Barzilai-Borwein Gradient Method for Large-Scale Bound Constrained Optimization

    International Nuclear Information System (INIS)

    Xiao Yunhai; Hu Qingjie

    2008-01-01

An active set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are simply defined; the other components are determined by the Barzilai-Borwein gradient method. In this work, a nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising, and competitive with the well-known SPG method on a subset of bound constrained problems from the CUTEr collection
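
The core of the Barzilai-Borwein approach is a gradient step whose length is set from the last two iterates, here combined with projection onto the bounds. A minimal sketch on a small bound-constrained quadratic (the problem data are invented, and the active-set identification and nonmonotone line search of the paper are omitted):

```python
def project(x, lo, hi):
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def grad(x):
    # f(x) = (x0 - 3)^2 + (x1 + 1)^2, unconstrained minimum at (3, -1)
    return [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)]

def projected_bb(x, lo, hi, iters=50):
    """Projected gradient iteration with a Barzilai-Borwein (BB1) step
    length alpha = s^T s / s^T y computed from the last two iterates."""
    g = grad(x)
    alpha = 0.1                     # the first step length is a guess
    for _ in range(iters):
        x_new = project([xi - alpha * gi for xi, gi in zip(x, g)], lo, hi)
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        yv = [a - b for a, b in zip(g_new, g)]
        sty = sum(a * b for a, b in zip(s, yv))
        if sty > 1e-12:
            alpha = sum(a * a for a in s) / sty
        x, g = x_new, g_new
    return x

# Bounds [0, 2] x [-2, 2]: the constraint x0 <= 2 is active at the solution.
sol = projected_bb([0.0, 0.0], [0.0, -2.0], [2.0, 2.0])
print(sol)   # → approximately [2.0, -1.0]
```

The appeal of the BB step is that it injects second-order information at the cost of a plain gradient method, which is what makes it attractive at large scale.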

  4. A new multiscale model to describe a modified Hall-Petch relation at different scales for nano and micro materials

    Science.gov (United States)

    Fadhil, Sadeem Abbas; Alrawi, Aoday Hashim; Azeez, Jazeel H.; Hassan, Mohsen A.

    2018-04-01

In the present work, a multiscale model is presented and used to modify the Hall-Petch relation for different scales from nano to micro. The modified Hall-Petch relation is derived from a multiscale equation that determines the cohesive energy between the atoms and their neighboring grains. This brings with it a new term that was originally ignored even in atomistic models. The new term makes it easy to combine all other effects to derive one modified equation for the Hall-Petch relation that works for all scales together, without the need to divide the scales into two regimes, each with a different equation, as is usually done in other works. Because of that, applying the new relation does not require prior knowledge of the grain size distribution. This makes the newly derived relation more consistent and easier to apply at all scales. The new relation is used to fit the data for copper and nickel, and it applies well over the whole range of grain sizes from the nano to the micro scale.
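
For context, the classical Hall-Petch law that the paper modifies is σ_y = σ_0 + k·d^(-1/2), which is linear in x = d^(-1/2) and is usually fitted by least squares. A sketch on synthetic data (the values and units are invented; this is the classical relation, not the paper's multiscale modification):

```python
import math

def fit_hall_petch(d_grain, sigma_y):
    """Least-squares fit of the classical Hall-Petch law
    sigma_y = sigma_0 + k / sqrt(d), linear in x = d**-0.5."""
    x = [dv ** -0.5 for dv in d_grain]
    n = len(x)
    mx, my = sum(x) / n, sum(sigma_y) / n
    k = sum((a - mx) * (b - my) for a, b in zip(x, sigma_y)) / \
        sum((a - mx) ** 2 for a in x)
    return my - k * mx, k            # (sigma_0, k)

# Synthetic yield stresses obeying sigma_0 = 100, k = 50 (units illustrative)
d = [1000.0, 250.0, 100.0, 25.0, 4.0]      # grain sizes
s = [100.0 + 50.0 / math.sqrt(dv) for dv in d]
print(fit_hall_petch(d, s))   # → approximately (100.0, 50.0)
```

The well-known breakdown of this single straight line at nanoscale grain sizes is precisely what motivates modified relations like the one proposed above.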

  5. Non-Abelian Kubo formula and the multiple time-scale method

    International Nuclear Information System (INIS)

    Zhang, X.; Li, J.

    1996-01-01

The non-Abelian Kubo formula is derived from the kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern–Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. © 1996 Academic Press, Inc.

  6. LoCuSS: THE SUNYAEV-ZEL'DOVICH EFFECT AND WEAK-LENSING MASS SCALING RELATION

    Energy Technology Data Exchange (ETDEWEB)

    Marrone, Daniel P.; Carlstrom, John E.; Gralla, Megan; Greer, Christopher H.; Hennessy, Ryan; Leitch, Erik M.; Plagge, Thomas [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Smith, Graham P. [School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham, B15 2TT (United Kingdom); Okabe, Nobuhiro [Astronomical Institute, Tohoku University, Aramaki, Aoba-ku, Sendai, 980-8578 (Japan); Bonamente, Massimiliano; Hasler, Nicole [Department of Physics, University of Alabama, Huntsville, AL 35899 (United States); Culverhouse, Thomas L. [Radio Astronomy Lab, 601 Campbell Hall, University of California, Berkeley, CA 94720 (United States); Hawkins, David; Lamb, James W.; Muchovej, Stephen [Owens Valley Radio Observatory, California Institute of Technology, Big Pine, CA 93513 (United States); Joy, Marshall [Space Science-VP62, NASA Marshall Space Flight Center, Huntsville, AL 35812 (United States); Martino, Rossella; Mazzotta, Pasquale [Dipartimento di Fisica, Universita degli Studi di Roma ' Tor Vergata' , via della Ricerca Scientifica 1, 00133, Roma (Italy); Miller, Amber [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Mroczkowski, Tony, E-mail: dmarrone@email.arizona.edu [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); and others

    2012-08-01

We present the first weak-lensing-based scaling relation between galaxy cluster mass, M_WL, and integrated Compton parameter Y_sph. Observations of 18 galaxy clusters at z ≈ 0.2 were obtained with the Subaru 8.2 m telescope and the Sunyaev-Zel'dovich Array. The M_WL-Y_sph scaling relations, measured at Δ = 500, 1000, and 2500 ρ_c, are consistent in slope and normalization with previous results derived under the assumption of hydrostatic equilibrium (HSE). We find an intrinsic scatter in M_WL at fixed Y_sph of 20%, larger than both previous measurements of M_HSE-Y_sph scatter and the scatter in true mass at fixed Y_sph found in simulations. Moreover, the scatter in our lensing-based scaling relations is morphology dependent, with 30%-40% larger M_WL for undisturbed than for disturbed clusters at the same Y_sph at r_500. Further examination suggests that the segregation may be explained by the inability of our spherical lens models to faithfully describe the three-dimensional structure of the clusters, in particular the structure along the line of sight. We find that the ellipticity of the brightest cluster galaxy, a proxy for halo orientation, correlates well with the offset in mass from the mean scaling relation, which supports this picture. This provides empirical evidence that line-of-sight projection effects are an important systematic uncertainty in lensing-based scaling relations.

  7. Data and performance profiles applying an adaptive truncation criterion, within linesearch-based truncated Newton methods, in large scale nonconvex optimization

    Directory of Open Access Journals (Sweden)

    Andrea Caliciotti

    2018-04-01

In this paper, we report data and experiments related to the research article entitled "An adaptive truncation criterion, for linesearch-based truncated Newton methods in large scale nonconvex optimization" by Caliciotti et al. [1]. In particular, in Caliciotti et al. [1], large scale unconstrained optimization problems are considered by applying linesearch-based truncated Newton methods. In this framework, a key point is the reduction of the number of inner iterations needed, at each outer iteration, to approximately solve the Newton equation. A novel adaptive truncation criterion is introduced in Caliciotti et al. [1] to this aim. Here, we report the details of numerical experiences over a commonly used test set, namely CUTEst (Gould et al., 2015 [2]). Moreover, comparisons are reported in terms of performance profiles (Dolan and Moré, 2002 [3]), adopting different parameter settings. Finally, our linesearch-based scheme is compared with a renowned trust region method, namely TRON (Lin and Moré, 1999 [4]).
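The performance profiles mentioned here (Dolan and Moré, 2002) follow a standard recipe: for each solver, plot the fraction of problems it solves within a factor τ of the best solver. A minimal sketch, using a toy cost matrix rather than the paper's CUTEst results:

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More performance profile.

    costs: (n_problems, n_solvers) array of e.g. CPU times or
           inner-iteration counts; np.inf marks a failure.
    Returns rho with shape (len(taus), n_solvers), where rho[i, s]
    is the fraction of problems solved by solver s within a factor
    taus[i] of the best solver on each problem.
    """
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)   # best cost per problem
    ratios = costs / best                     # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Toy comparison of two solvers on four problems (times in seconds).
costs = [[1.0, 2.0],
         [3.0, 3.0],
         [2.0, np.inf],   # solver 2 failed on this problem
         [4.0, 2.0]]
rho = performance_profile(costs, taus=[1.0, 2.0])
# rho[0]: fraction of problems on which each solver was (tied-)fastest
```

Failures naturally fall out of every profile level because their ratio is infinite; this is why the profile at large τ reads off solver robustness.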

  8. Weak Lensing Calibrated M-T Scaling Relation of Galaxy Groups in the COSMOS Field

    NARCIS (Netherlands)

    Kettula, K.; Finoguenov, A.; Massey, R.; Rhodes, J.; Hoekstra, H.; Taylor, J.; Spinelli, P.; Tanaka, M.; Ilbert, O.; Capak, P.; McCracken, H.; Koekemoer, A.

    2013-01-01

    The scaling between X-ray observables and mass for galaxy clusters and groups is instrumental for cluster-based cosmology and an important probe for the thermodynamics of the intracluster gas. We calibrate a scaling relation between the weak lensing mass and X-ray spectroscopic temperature for 10

  9. Scaling relations for soliton compression and dispersive-wave generation in tapered optical fibers

    DEFF Research Database (Denmark)

    Lægsgaard, Jesper

    2018-01-01

    In this paper, scaling relations for soliton compression in tapered optical fibers are derived and discussed. The relations allow simple and semi-accurate estimates of the compression point and output noise level, which is useful, for example, for tunable dispersive-wave generation with an agile ...

10. Scales for Experience of Eating During Childhood, Eating-related Coping Skills, and Desirable Dietary Habits

    OpenAIRE

    江坂,美佐子; 田中,宏二

    2015-01-01

We conducted a survey of a total of 261 first- and second-year university and junior college students (92 men, 169 women) and created scales for experience of eating during childhood, eating-related coping skills, and desirable dietary habits. The scale for experience of eating during childhood comprised nine items and two factors (experience of enjoying eating at home and connection to dietary education at school). The scale for eating-related coping skills comprised seven items and ...

  11. How covalence breaks adsorption-energy scaling relations and solvation restores them

    DEFF Research Database (Denmark)

    Vallejo, Federico Calle; Krabbe, Alexander; García Lastra, Juan Maria

    2017-01-01

    It is known that breaking the scaling relations between the adsorption energies of *O, *OH, and *OOH is paramount in catalyzing more efficiently the reduction of O2 in fuel cells and its evolution in electrolyzers. Taking metalloporphyrins as a case study, we evaluate here the adsorption energies...

  12. Processor farming method for multi-scale analysis of masonry structures

    Science.gov (United States)

    Krejčí, Tomáš; Koudelka, Tomáš

    2017-07-01

This paper describes a processor farming method for coupled heat and moisture transport in masonry using a two-level approach. The motivation for the two-level description comes from difficulties connected with masonry structures, where the size of the stone blocks is much larger than the thickness of the mortar layers, so a very fine finite element mesh has to be used. The two-level approach is suitable for parallel computing because nearly all computations can be performed independently, with little synchronization. This approach is called processor farming. The master processor deals with the macro-scale level, i.e., the structure, while the slave processors deal with a homogenization procedure on the meso-scale level, which is represented by an appropriate representative volume element.
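The master/slave split described above can be sketched with Python's multiprocessing: the master farms out the independent meso-scale problems and gathers the results. This is only a structural sketch; `homogenize()` below is a stand-in mixture rule, not the actual representative-volume-element solver:

```python
from multiprocessing import Pool

def homogenize(cell):
    """Stand-in for the meso-scale computation. In the real method each
    slave solves a representative-volume-element problem; here we just
    blend the conductivities of a (stone, mortar, mortar_fraction) cell."""
    stone_k, mortar_k, mortar_fraction = cell
    return (1 - mortar_fraction) * stone_k + mortar_fraction * mortar_k

def macro_step(cells, n_slaves=2):
    """Master side: distribute the independent meso-scale problems to
    slave processes and gather the effective properties for the
    macro-scale (structural) solve."""
    with Pool(n_slaves) as pool:
        return pool.map(homogenize, cells)

if __name__ == "__main__":
    # Two macro-scale cells with different mortar volume fractions.
    cells = [(2.5, 0.8, 0.1), (2.5, 0.8, 0.3)]
    effective = macro_step(cells)
```

Because each meso-scale problem is independent, the only synchronization point is the gather at the end of each macro step, which is what makes the farming approach scale well.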

  13. Stress and adhesion of chromia-rich scales on ferritic stainless steels in relation with spallation

    Directory of Open Access Journals (Sweden)

    A. Galerie

    2004-03-01

Chromia scale spallation during oxidation or cooling of ferritic stainless steels is generally discussed in terms of mechanical stresses induced by volume changes or differential thermal expansion. In the present paper, growth and thermal stress measurements in scales grown on different ferritic steel grades have shown that the main stress accumulation occurs during isothermal scale growth and that thermal stresses are of minor importance. However, when spallation occurs, it is always during cooling. Undulation of the steel-oxide interface seems to play a major role at this stage, thus relating spallation to the metal's mechanical properties, thickness and surface preparation. A major influence on spallation of the minor stabilizing elements of the steels was observed, which could not be related to any difference in stress state. Therefore, an original inverted blister test was developed to derive quantitative values of the metal-oxide adhesion energy. These values clearly confirmed that this parameter is influenced by scale thickness and by minor additions, with titanium greatly increasing adhesion whereas niobium decreases it.

  14. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matchin...
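As a hedged illustration of the correlation scores such matchers build on (MOCC itself adds the multiscale orientation handling not reproduced here), a plain zero-mean normalized cross-correlation between two patches:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches.

    Returns a score in [-1, 1]; 1 means the patches match up to an
    affine brightness change. Plain NCC is sensitive to rotation and
    scale, which is exactly what MOCC-style extensions address.
    """
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()                      # remove mean brightness
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

patch = np.arange(9.0).reshape(3, 3)      # toy 3x3 intensity patch
score = ncc(patch, 2 * patch + 5)         # brightness/contrast change only
```

The invariance to gain and offset (but not to rotation or scale) is the basic property correlation-based matchers rely on.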

  15. [Acceptance and understandability of various methods of health valuations for the chronically ill: willingness to pay, visual analogue scale and rating scale].

    Science.gov (United States)

    Meder, M; Farin, E

    2009-11-01

Health valuations are one way of measuring patient preferences with respect to the results of their treatment. The study examines three different methods of health valuation: willingness to pay (WTP), visual analogue scale (VAS), and a rating question for evaluating subjective significance. The goal is to test the understandability and acceptance of these methods for implementation in questionnaires. In various rehabilitation centres, a total of six focus groups were conducted with 5-9 patients each, with a mean age of 57.1 years. The illnesses considered were chronic ischaemic heart disease, chronic back pain, and breast cancer. Patients filled out a questionnaire that was then discussed in the group. In addition to the quantitative evaluation of the questionnaire data, a qualitative analysis of the contents of the group discussion protocols was made. We have results from a total of 42 patients. 14.6% of the patients had "great difficulties" understanding the WTP or rated it as "completely incomprehensible"; this value was 7.3% for the VAS and 0% for the rating scale. With respect to acceptance, 31.0% of the patients indicated that they were "not really" or "not at all" willing to answer such a WTP question in a questionnaire; this was 6.6% for the VAS, and again 0% for the rating scale. The qualitative analysis provided an indication as to why some patients view the WTP question in particular in a negative light. Many difficulties in understanding it were related to the formulation of the question and the structure of the questionnaire. However, the patients' statements also made it apparent that the hypothetical nature of the WTP question was not always recognised. The most frequent reason for the lack of acceptance of the WTP was the patients' fear of negative financial consequences of their responses. With respect to understandability and acceptance, VAS questions appear to be better suited for reflecting patient preferences than WTP questions.

  16. Ward identities and consistency relations for the large scale structure with multiple species

    International Nuclear Information System (INIS)

    Peloso, Marco; Pietroni, Massimo

    2014-01-01

    We present fully nonlinear consistency relations for the squeezed bispectrum of Large Scale Structure. These relations hold when the matter component of the Universe is composed of one or more species, and generalize those obtained in [1,2] in the single species case. The multi-species relations apply to the standard dark matter + baryons scenario, as well as to the case in which some of the fields are auxiliary quantities describing a particular population, such as dark matter halos or a specific galaxy class. If a large scale velocity bias exists between the different populations new terms appear in the consistency relations with respect to the single species case. As an illustration, we discuss two physical cases in which such a velocity bias can exist: (1) a new long range scalar force in the dark matter sector (resulting in a violation of the equivalence principle in the dark matter-baryon system), and (2) the distribution of dark matter halos relative to that of the underlying dark matter field

  17. Validity and Reliability of Persian Version of HIV/AIDS Related Stigma Scale for People Living With HIV/AIDS in Iran

    Directory of Open Access Journals (Sweden)

    Davoud Pourmarzi

    2016-04-01

Objective: To assess perceived HIV/AIDS-related stigma, a comprehensive and well-developed stigma instrument is necessary. This study aimed to assess the validity and reliability of the Persian version of the HIV/AIDS-related stigma scale developed by Kang et al. for people living with HIV/AIDS in Iran. Materials and methods: The scale was forward translated by two bilingual academic members, and both translations were then discussed by an expert team. Back-translation was done by two other bilingual translators, followed by discussion with both of them. To evaluate understandability, the scale was administered to 10 persons living with HIV/AIDS (PLWHA). The final Persian version was administered to 80 PLWHA in Qom, Iran in 2014. Test-retest reliability was assessed in a sample of 20 PLWHA after a week by intra-class correlation coefficient (ICC). Results: Cronbach's alpha coefficient for the overall scale was 0.85. Cronbach's alpha coefficients for the five subscales were as follows: social rejection (9 items, α = 0.84), negative self-worth (4 items, α = 0.70), perceived interpersonal insecurity (2 items, α = 0.57), financial insecurity (3 items, α = 0.70), and discretionary disclosure (2 items, α = 0.83). Test-retest reliability was also confirmed, with ICC = 0.78. The correlation between items and their hypothesized subscale was greater than 0.5, and the correlation between an item and its own subscale was significantly higher than its correlation with other subscales. Conclusion: This study demonstrates that the Persian version of the HIV/AIDS-related stigma scale is valid and reliable for assessing HIV/AIDS-related stigma perceived by people living with HIV/AIDS in Iran.
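Cronbach's alpha values like those quoted come directly from the item-variance formula α = k/(k−1) · (1 − Σσ²_item / σ²_total). A minimal sketch on toy data (not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: three respondents answering three items on a 1-5 scale.
scores = [[4, 5, 4],
          [2, 3, 2],
          [5, 5, 4]]
alpha = cronbach_alpha(scores)
```

When items covary strongly (as here, where all three items move together across respondents), the total-score variance dominates the item variances and alpha approaches 1.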

18. The development and psychometric properties of a new scale to measure mental illness related stigma by health care providers: The Opening Minds Scale for Health Care Providers (OMS-HC)

    Directory of Open Access Journals (Sweden)

    Kassam Aliya

    2012-06-01

Background: Research on the attitudes of health care providers towards people with mental illness has repeatedly shown that they may be stigmatizing. Many scales used to measure attitudes towards people with mental illness that exist today are not adequate because they do not have items that relate specifically to the role of the health care provider. Methods: We developed and tested a new scale called the Opening Minds Scale for Health Care Providers (OMS-HC). After item-pool generation, stakeholder consultations and content validation, focus groups were held with 64 health care providers/trainees and six people with lived experience of mental illness to develop the scale. The OMS-HC was then tested with 787 health care providers/trainees across Canada to determine its psychometric properties. Results: The initial testing of the OMS-HC scale showed good internal consistency, Cronbach's alpha = 0.82, and satisfactory test-retest reliability, intraclass correlation = 0.66 (95% CI 0.54 to 0.75). The OMS-HC was only weakly correlated with social desirability, indicating that social desirability bias was not likely to be a major determinant of OMS-HC scores. A factor analysis favoured a two-factor structure, which accounted for 45% of the variance using 12 of the 20 items tested. Conclusions: The OMS-HC provides a good starting point for further validation as well as a tool that could be used in the evaluation of programs aimed at reducing mental illness related stigma by health care providers. The OMS-HC incorporates various dimensions of stigma with a modest number of items that can be used with busy health care providers.

  19. The development and psychometric properties of a new scale to measure mental illness related stigma by health care providers: The opening minds scale for Health Care Providers (OMS-HC)

    Science.gov (United States)

    2012-01-01

Background Research on the attitudes of health care providers towards people with mental illness has repeatedly shown that they may be stigmatizing. Many scales used to measure attitudes towards people with mental illness that exist today are not adequate because they do not have items that relate specifically to the role of the health care provider. Methods We developed and tested a new scale called the Opening Minds Scale for Health Care Providers (OMS-HC). After item-pool generation, stakeholder consultations and content validation, focus groups were held with 64 health care providers/trainees and six people with lived experience of mental illness to develop the scale. The OMS-HC was then tested with 787 health care providers/trainees across Canada to determine its psychometric properties. Results The initial testing of the OMS-HC scale showed good internal consistency, Cronbach's alpha = 0.82, and satisfactory test-retest reliability, intraclass correlation = 0.66 (95% CI 0.54 to 0.75). The OMS-HC was only weakly correlated with social desirability, indicating that social desirability bias was not likely to be a major determinant of OMS-HC scores. A factor analysis favoured a two-factor structure which accounted for 45% of the variance using 12 of the 20 items tested. Conclusions The OMS-HC provides a good starting point for further validation as well as a tool that could be used in the evaluation of programs aimed at reducing mental illness related stigma by health care providers. The OMS-HC incorporates various dimensions of stigma with a modest number of items that can be used with busy health care providers. PMID:22694771

  20. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    International Nuclear Information System (INIS)

    Downar, Thomas; Seker, Volkan

    2013-01-01

Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local 'hot' spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length scale, describing the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor or tens to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale, the level of a fuel kernel of the pebble in a PBR and of a fuel compact in a PMR, which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  1. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    Energy Technology Data Exchange (ETDEWEB)

    Downar, Thomas [Univ. of Michigan, Ann Arbor, MI (United States); Seker, Volkan [Univ. of Michigan, Ann Arbor, MI (United States)

    2013-04-30

Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length scale, describing the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor or tens to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale, the level of a fuel kernel of the pebble in a PBR and of a fuel compact in a PMR, which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  2. Multi-scale approximation of Vlasov equation

    International Nuclear Information System (INIS)

    Mouton, A.

    2009-09-01

One of the most important difficulties in the numerical simulation of magnetized plasmas is the existence of multiple time and space scales, which can be very different. In order to produce good simulations of these multi-scale phenomena, it is recommended to develop models and numerical methods which are adapted to these problems. Nowadays, the two-scale convergence theory introduced by G. Nguetseng and G. Allaire is one of the tools which can be used to rigorously derive multi-scale limits and to obtain new limit models which can be discretized with a usual numerical method: such a procedure is called a two-scale numerical method. The purpose of this thesis is to develop a two-scale semi-Lagrangian method and to apply it to a gyrokinetic Vlasov-like model in order to simulate a plasma subjected to a large external magnetic field. However, the physical phenomena we have to simulate are quite complex, and there are many open questions about the behaviour of a two-scale numerical method, especially when such a method is applied to a nonlinear model. In a first part, we develop a two-scale finite volume method and apply it to the weakly compressible 1D isentropic Euler equations. Even if this mathematical context is far from a Vlasov-like model, it is a relatively simple framework in which to study the behaviour of a two-scale numerical method applied to a nonlinear model. In a second part, we develop a two-scale semi-Lagrangian method for the two-scale model developed by E. Frenod, F. Salvarani and E. Sonnendrucker in order to simulate axisymmetric charged particle beams. Even if the studied physical phenomena are quite different from magnetic fusion experiments, the mathematical context of the one-dimensional paraxial Vlasov-Poisson model is very simple for establishing the basis of a two-scale semi-Lagrangian method. In a third part, we use the two-scale convergence theory in order to improve M. Bostan's weak-* convergence results about the finite

  3. 3D large-scale calculations using the method of characteristics

    International Nuclear Information System (INIS)

    Dahmani, M.; Roy, R.; Koclas, J.

    2004-01-01

    An overview of the computational requirements and the numerical developments made in order to be able to solve 3D large-scale problems using the characteristics method will be presented. To accelerate the MCI solver, efficient acceleration techniques were implemented and parallelization was performed. However, for the very large problems, the size of the tracking file used to store the tracks can still become prohibitive and exceed the capacity of the machine. The new 3D characteristics solver MCG will now be introduced. This methodology is dedicated to solve very large 3D problems (a part or a whole core) without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we define a new computing scheme that requires more CPU resources than the usual one, based on sweeps over large tracking files. The huge capacity of storage needed in some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be efficiently used. (author)

  4. Modeling aboveground tree woody biomass using national-scale allometric methods and airborne lidar

    Science.gov (United States)

    Chen, Qi

    2015-08-01

Estimating tree aboveground biomass (AGB) and carbon (C) stocks using remote sensing is a critical component of understanding the global C cycle and mitigating climate change. However, the importance of allometry for remote sensing of AGB has not been recognized until recently. The overarching goals of this study are to understand the differences and relationships among three national-scale allometric methods (CRM, Jenkins, and the regional models) of the Forest Inventory and Analysis (FIA) program in the U.S. and to examine the impacts of using alternative allometry on the fitting statistics of remote sensing-based woody AGB models. Airborne lidar data from three study sites in the Pacific Northwest, USA were used to predict woody AGB estimated from the different allometric methods. It was found that the CRM and Jenkins estimates of woody AGB are related via the CRM adjustment factor. In terms of lidar-biomass modeling, CRM had the smallest model errors, the Jenkins method had the largest, and the regional method was in between. The best model fit from CRM is attributed to its inclusion of tree height in calculating merchantable stem volume and the strong dependence of non-merchantable stem biomass on merchantable stem biomass. This study also argues that it is important to characterize the allometric model errors in order to gain a complete understanding of the remotely sensed AGB prediction errors.

  5. An improved method to characterise the modulation of small-scale turbulence by large-scale structures

    Science.gov (United States)

    Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta

    2015-11-01

A key aspect of turbulent boundary layer dynamics is "modulation," which refers to the degree to which the intensity of coherent large-scale structures (LS) amplifies or attenuates the intensity of the small-scale structures (SS) through large-scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed to define this envelope by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by a low-pass filtering step leads to an important loss of information associated with the effects of the local skewness of the PDF of the SS on the modulation process. An improved Hilbert-transform-based method is proposed to characterize the modulation of SS turbulence by LS structures.
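The Hilbert-transform envelope discussed above can be sketched in a few lines. The FFT construction of the analytic signal below is the standard one (zero the negative frequencies, double the positive ones); the signal is a synthetic amplitude-modulated toy, not DNS data:

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the analytic signal: an FFT-based Hilbert
    transform that zeroes negative frequencies and doubles positive
    ones, followed by taking the modulus."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

# Toy modulated signal: a fast carrier whose amplitude varies slowly,
# mimicking SS fluctuations modulated by an LS motion.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
slow = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)   # large-scale modulation
x = slow * np.cos(2 * np.pi * 64 * t)          # small-scale carrier
env = envelope(x)                              # recovers slow almost exactly
```

For this clean separation of scales the envelope recovers the modulation exactly; the abstract's point is that for real SS turbulence the subsequent low-pass filtering step discards skewness information that matters for the modulation analysis.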

  6. Maxwell iteration for the lattice Boltzmann method with diffusive scaling

    Science.gov (United States)

    Zhao, Weifeng; Yong, Wen-An

    2017-03-01

    In this work, we present an alternative derivation of the Navier-Stokes equations from Bhatnagar-Gross-Krook models of the lattice Boltzmann method with diffusive scaling. This derivation is based on the Maxwell iteration and can expose certain important features of the lattice Boltzmann solutions. Moreover, it will be seen to be much more straightforward and logically clearer than the existing approaches including the Chapman-Enskog expansion.

  7. Confirmation of general relativity on large scales from weak lensing and galaxy velocities

    Science.gov (United States)

    Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E.; Lombriser, Lucas; Smith, Robert E.

    2010-03-01

Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, EG, that combines measures of large-scale gravitational lensing, galaxy clustering and the structure growth rate. The combination is insensitive to 'galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of EG different from the general relativistic prediction because, in these theories, the 'gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that EG = 0.39 ± 0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of EG ≈ 0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f(R) theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.

  8. Mobility-related participation and user satisfaction

    DEFF Research Database (Denmark)

    Brandt, Aase; Kreiner, Svend; Iwarsson, Susanne

    2010-01-01

Purpose. The aim of this study was to investigate the constructs of mobility-related participation and user satisfaction, two important outcome dimensions within praxis and research on mobility device interventions. Method. To fulfill this aim, the validity and reliability of a 12-item scale on mobility-related participation and a 10-item scale on user satisfaction were examined in the context of older people's powered wheelchair use (n = 111). Rasch analysis and correlation analysis were applied. Results. Construct validity of both scales was confirmed. The reliability of the user satisfaction scale was good, while the mobility-related participation scale was not optimal in discriminating between persons with a high degree of mobility-related participation. It was demonstrated that mobility-related participation and user satisfaction are separate, not related constructs. Conclusions. It can

  9. Scale relativity theory and integrative systems biology: 2. Macroscopic quantum-type mechanics.

    Science.gov (United States)

    Nottale, Laurent; Auffray, Charles

    2008-05-01

    In these two companion papers, we provide an overview and a brief history of the multiple roots, current developments and recent advances of integrative systems biology and identify multiscale integration as its grand challenge. Then we introduce the fundamental principles and the successive steps that have been followed in the construction of the scale relativity theory, which aims at describing the effects of a non-differentiable and fractal (i.e., explicitly scale dependent) geometry of space-time. The first paper of this series was devoted, in this new framework, to the construction from first principles of scale laws of increasing complexity, and to the discussion of some tentative applications of these laws to biological systems. In this second review and perspective paper, we describe the effects induced by the internal fractal structures of trajectories on motion in standard space. Their main consequence is the transformation of classical dynamics into a generalized, quantum-like self-organized dynamics. A Schrödinger-type equation is derived as an integral of the geodesic equation in a fractal space. We then indicate how gauge fields can be constructed from a geometric re-interpretation of gauge transformations as scale transformations in fractal space-time. Finally, we introduce a new tentative development of the theory, in which quantum laws would hold also in scale space, introducing complexergy as a measure of organizational complexity. Initial possible applications of this extended framework to the processes of morphogenesis and the emergence of prokaryotic and eukaryotic cellular structures are discussed. Having founded elements of the evolutionary, developmental, biochemical and cellular theories on the first principles of scale relativity theory, we introduce proposals for the construction of an integrative theory of life and for the design and implementation of novel macroscopic quantum-type experiments and devices, and discuss their potential

  10. UPDATED MASS SCALING RELATIONS FOR NUCLEAR STAR CLUSTERS AND A COMPARISON TO SUPERMASSIVE BLACK HOLES

    International Nuclear Information System (INIS)

    Scott, Nicholas; Graham, Alister W.

    2013-01-01

    We investigate whether or not nuclear star clusters and supermassive black holes (SMBHs) follow a common set of mass scaling relations with their host galaxy's properties, and hence can be considered to form a single class of central massive object (CMO). We have compiled a large sample of galaxies with measured nuclear star cluster masses and host galaxy properties from the literature and fit log-linear scaling relations. We find that nuclear star cluster mass, M_NC, correlates most tightly with the host galaxy's velocity dispersion: log M_NC = (2.11 ± 0.31) log(σ/54) + (6.63 ± 0.09), but has a slope dramatically shallower than the relation defined by SMBHs. We find that the nuclear star cluster mass relations involving host galaxy (and spheroid) luminosity and stellar and dynamical mass intercept with, but are in general shallower than, the corresponding black hole scaling relations. In particular, M_NC ∝ M_Gal,dyn^(0.55±0.15); the nuclear cluster mass is not a constant fraction of its host galaxy or spheroid mass. We conclude that nuclear stellar clusters and SMBHs do not form a single family of CMOs.
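As a numerical aid, the quoted velocity-dispersion relation can be evaluated directly. The sketch below assumes, as is conventional for such relations, σ in km/s and cluster mass in log10 solar masses:

```python
import math

def log_nsc_mass(sigma_kms, slope=2.11, intercept=6.63):
    """Best-fit log10 nuclear star cluster mass from the quoted relation
    log M_NC = (2.11 ± 0.31) log(sigma/54) + (6.63 ± 0.09).
    Assumed units: sigma in km/s, mass in solar masses."""
    return slope * math.log10(sigma_kms / 54.0) + intercept

# At the normalization point sigma = 54 km/s the relation returns the intercept:
print(log_nsc_mass(54.0))    # 6.63
print(log_nsc_mass(100.0))   # higher dispersion -> higher cluster mass
```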

  11. Evaluating broad scale patterns among related species using resource experiments in tropical hummingbirds.

    Science.gov (United States)

    Weinstein, Ben G; Graham, Catherine H

    2016-08-01

    A challenge in community ecology is connecting biogeographic patterns with local scale observations. In Neotropical hummingbirds, closely related species often co-occur less frequently than expected (overdispersion) when compared to a regional species pool. While this pattern has been attributed to interspecific competition, it is important to connect these findings with local scale mechanisms of coexistence. We measured the importance of the presence of competitors and the availability of resources on selectivity at experimental feeders for Andean hummingbirds along a wide elevation gradient. Selectivity was measured as the time a bird fed at a feeder with a high sucrose concentration when presented with feeders of both low and high sucrose concentrations. Resource selection was measured using time-lapse cameras to identify which floral resources were used by each hummingbird species. We found that the increased abundance of preferred resources surrounding the feeder best explained increased species selectivity, and that related hummingbirds with similar morphology chose similar floral resources. We did not find strong support for direct agonism based on differences in body size or phylogenetic relatedness in predicting selectivity. These results suggest closely related hummingbird species have overlapping resource niches, and that the intensity of interspecific competition is related to the abundance of those preferred resources. If these competitive interactions have negative demographic effects, our results could help explain the pattern of phylogenetic overdispersion observed at regional scales. © 2016 by the Ecological Society of America.

  12. Some applications of the moving finite element method to fluid flow and related problems

    International Nuclear Information System (INIS)

    Berry, R.A.; Williamson, R.L.

    1983-01-01

    The Moving Finite Element (MFE) method is applied to one-dimensional, nonlinear wave-type partial differential equations which are characteristic of fluid dynamics and related flow phenomena. These equation systems tend to be difficult to solve because their transient solutions exhibit a spatial stiffness property, i.e., they represent physical phenomena of widely disparate length scales which must be resolved simultaneously. With the MFE method the node points automatically move (in theory) to optimal locations, giving a much better approximation than can be obtained with fixed-mesh methods (with a reasonable number of nodes) and with significantly reduced artificial viscosity or diffusion content. Three applications are considered. In order of increasing complexity they are: (1) a thermal quench problem, (2) an underwater explosion problem, and (3) a gas dynamics shock tube problem. The results are briefly shown

  13. The Work-Related Quality of Life Scale for Higher Education Employees

    Science.gov (United States)

    Edwards, Julian A.; Van Laar, Darren; Easton, Simon; Kinman, Gail

    2009-01-01

    Previous research suggests that higher education employees experience comparatively high levels of job stress. A range of instruments, both generic and job-specific, has been used to measure stressors and strains in this occupational context. The Work-related Quality of Life (WRQoL) scale is a measure designed to capture perceptions of the working…

  14. Incremental Validity of the Subscales of the Emotional Regulation Related to Testing Scale for Predicting Test Anxiety

    Science.gov (United States)

    Feldt, Ronald; Lindley, Kyla; Louison, Rebecca; Roe, Allison; Timm, Megan; Utinkova, Nikola

    2015-01-01

    The Emotional Regulation Related to Testing Scale (ERT Scale) assesses strategies students use to regulate emotion related to academic testing. It has four dimensions: Cognitive Appraising Processes (CAP), Emotion-Focusing Processes (EFP), Task-Focusing Processes (TFP), and Regaining Task-Focusing Processes (RTFP). The study examined the factor…

  15. Cardinal Scales for Public Health Evaluation

    DEFF Research Database (Denmark)

    Harvey, Charles M.; Østerdal, Lars Peter

    Policy studies often evaluate health for a population by summing the individuals' health as measured by a scale that is ordinal or that depends on risk attitudes. We develop a method using a different type of preferences, called preference intensity or cardinal preferences, to construct scales that measure changes in health. The method is based on a social welfare model that relates preferences between changes in an individual's health to preferences between changes in health for a population.

  16. Workshop report on large-scale matrix diagonalization methods in chemistry theory institute

    Energy Technology Data Exchange (ETDEWEB)

    Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S. [eds.

    1996-10-01

    The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting perspective on the progress in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of
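As a concrete illustration of extracting a few extremal eigenvalues from a large symmetric matrix, here is a sketch using a plain Lanczos iteration, a simpler relative of the Davidson method mentioned above, not the Davidson method itself. The toy matrix is an assumption chosen to mimic a diagonally dominant SCF-type problem:

```python
import numpy as np

def lanczos_extremal(matvec, n, k=120, seed=0):
    """Lanczos iteration with full reorthogonalization (a sketch, not
    Davidson): builds a k-dimensional Krylov tridiagonalization whose
    extreme Ritz values approximate the extreme eigenvalues of the
    symmetric operator represented by `matvec`."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = matvec(Q[:, j])
        alpha[j] = Q[:, j] @ w
        # Full reorthogonalization against all previous Lanczos vectors
        # (costly but keeps the small example numerically clean):
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:          # invariant subspace found early
                alpha, beta = alpha[:j + 1], beta[:j]
                break
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)         # Ritz values, ascending

# Toy check: a diagonally dominant symmetric matrix (hypothetical stand-in
# for an SCF-type eigenproblem).
n = 400
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * np.ones((n, n))
ritz = lanczos_extremal(lambda v: A @ v, n)
print(ritz[0], ritz[-1])   # extreme Ritz values ≈ extreme eigenvalues of A
```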

  17. The effect of pore-scale geometry and wettability on two-phase relative permeabilities within elementary cells

    Science.gov (United States)

    Bianchi Janetti, Emanuela; Riva, Monica; Guadagnini, Alberto

    2017-04-01

    We study the relative role of the complex pore space geometry and wettability of the solid matrix on the quantification of relative permeabilities characterizing steady state immiscible two-phase flow in porous media. We do so by considering elementary cells, which are typically employed in upscaling frameworks based on, e.g., homogenization or volume averaging. In this context one typically relies on the solution of pore-scale physics at a scale which is much smaller than that of an investigated porous system. Pressure-driven two-phase flow following simultaneous co-current injection of water and oil is numerically solved for a suite of regular and stochastically generated two-dimensional explicit elementary cells with fixed porosity and sharing main topological/morphological features. We show that relative permeabilities of the randomly generated elementary cells are significantly influenced by the formation of preferential percolation paths (principal pathways), giving rise to a strongly nonuniform distribution of fluid fluxes. These pathways are a result of the spatially variable resistance that the random pore structures exert on the fluid. The overall effect on relative permeabilities of the diverse organization of principal pathways, as driven by a given random realization at the scale of the unit cell, is significantly larger than that of the wettability of the host rock. In contrast to what can be observed for the random cells analyzed, relative permeabilities of regular cells display a clear trend with contact angle at the investigated scale. Our findings suggest the need to perform systematic upscaling studies in a stochastic context, to propagate the effects of uncertain pore space geometries to a probabilistic description of relative permeability curves at the continuum scale.

  18. Developing an Assessment Method of Active Aging: University of Jyvaskyla Active Aging Scale.

    Science.gov (United States)

    Rantanen, Taina; Portegijs, Erja; Kokko, Katja; Rantakokko, Merja; Törmäkangas, Timo; Saajanaho, Milla

    2018-01-01

    To develop an assessment method of active aging for research on older people. A multiphase process that included drafting by an expert panel, a pilot study for item analysis and scale validity, a feedback study with focus groups and questionnaire respondents, and a test-retest study. Altogether 235 people aged 60 to 94 years provided responses and/or feedback. We developed a 17-item University of Jyvaskyla Active Aging Scale with four aspects in each item (goals, ability, opportunity, and activity; range 0-272). The psychometric and item properties are good and the scale assesses a unidimensional latent construct of active aging. Our scale assesses older people's striving for well-being through activities pertaining to their goals, abilities, and opportunities. The University of Jyvaskyla Active Aging Scale provides a quantifiable measure of active aging that may be used in postal questionnaires or interviews in research and practice.

  19. Factors associated with metabolic syndrome and related medical costs by the scale of enterprise in Korea.

    Science.gov (United States)

    Kong, Hyung-Sik; Lee, Kang-Sook; Yim, Eun-Shil; Lee, Seon-Young; Cho, Hyun-Young; Lee, Bin Na; Park, Jee Young

    2013-10-21

    The purpose of this study was to identify the risk factors of metabolic syndrome (MS) and to analyze the relationship between the risk factors of MS and the medical costs of major diseases related to MS in Korean workers, according to the scale of the enterprise. Data were obtained from annual physical examinations, health insurance qualification and premiums, and health insurance benefits of 4,094,217 male and female workers who underwent medical examinations provided by the National Health Insurance Corporation in 2009. Logistic regression analyses were used to identify risk factors of MS, and multiple regression was used to find factors associated with medical expenditures due to major diseases related to MS. The study found that low-income workers were more likely to work in small-scale enterprises. The prevalence rate of MS in males and females, respectively, was 17.2% and 9.4% in small-scale enterprises, 15.9% and 8.9% in medium-scale enterprises, and 15.9% and 5.5% in large-scale enterprises. The risks of MS increased with age, lower income status, and smoking in small-scale enterprise workers. Medical costs increased in workers of older age and with a past smoking history. There was also a gender difference in the pattern of medical expenditures related to MS. Health promotion programs to manage metabolic syndrome should be developed to focus on workers who smoke, drink, and exercise little in small-scale enterprises.

  20. FDTD method for laser absorption in metals for large scale problems.

    Science.gov (United States)

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grids. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.

  1. SVC Planning in Large–scale Power Systems via a Hybrid Optimization Method

    DEFF Research Database (Denmark)

    Yang, Guang ya; Majumder, Rajat; Xu, Zhao

    2009-01-01

    Research on the allocation of FACTS devices has attracted considerable interest. In this paper, a hybrid model is proposed to optimise the number, locations, and parameter settings of static Var compensators (SVCs) deployed in large-scale power systems. The model utilises the result of vulnerability assessment to determine the candidate locations. A hybrid optimisation method comprising two stages is proposed to find the optimal SVC solution in the large-scale planning problem. In the first stage, a conventional genetic algorithm (GA) is exploited to generate a candidate solution pool. In the second stage, the candidates are presented to a linear planning model to investigate the system's optimal loadability, from which the optimal solution for SVC planning is obtained. The method is demonstrated on the IEEE 300-bus system.
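The two-stage structure described (a GA producing a candidate pool, then a linear planning model ranking the candidates) can be sketched in miniature. Everything below is a toy illustration under stated assumptions: the fitness function is a made-up surrogate, not the authors' loadability model, and the bus numbering and "weak bus" set are hypothetical:

```python
import random

def ga_candidate_pool(n_buses, candidate_buses, fitness, pop_size=30,
                      generations=40, pool_size=5, seed=1):
    """Stage 1 of the hybrid scheme, as a toy: a plain genetic algorithm
    evolves binary SVC-placement vectors restricted to the candidate
    buses, and returns the best distinct individuals as a candidate pool
    for a stage-2 planning model. `fitness` is a user-supplied surrogate."""
    rng = random.Random(seed)
    def random_ind():
        return tuple(rng.randint(0, 1) if b in candidate_buses else 0
                     for b in range(n_buses))
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_buses)      # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = rng.choice(sorted(candidate_buses))
            child[i] ^= rng.random() < 0.1       # occasional bit-flip mutation
            children.append(tuple(child))
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    pool, seen = [], set()
    for ind in pop:                              # best distinct individuals
        if ind not in seen:
            seen.add(ind)
            pool.append(ind)
        if len(pool) == pool_size:
            break
    return pool

# Toy surrogate fitness: reward SVCs on "weak" buses flagged by a
# (hypothetical) vulnerability assessment, penalize placements elsewhere.
weak = {2, 5, 7}
fit = lambda ind: sum(2.0 if i in weak else -1.0
                      for i, g in enumerate(ind) if g)
pool = ga_candidate_pool(10, candidate_buses={1, 2, 4, 5, 7, 8}, fitness=fit)
print(pool[0])   # best placement vector found
```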

  2. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    International Nuclear Information System (INIS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank; Valeev, Edward F.

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  3. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as a concrete manifestation of financial time series rich in potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through a dissimilarity matrix based on modified cross-sample entropy, and then provide three-dimensional perceptual maps of the results through a multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals a clear clustering both in synthetic data and in 18 indices from diverse stock markets. This implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, driven by country or region and by differing financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions than MDSC.
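The classical MDS machinery that both variants plug into is standard. A minimal sketch of Torgerson's classical scaling follows; for checking purposes the dissimilarity matrix here is a plain Euclidean one, whereas in the paper's variants its entries would come from a cross-sample entropy measure:

```python
import numpy as np

def classical_mds(D, dim=3):
    """Classical (Torgerson) multidimensional scaling: embeds n points in
    `dim` dimensions from a symmetric n-by-n dissimilarity matrix D via
    double centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dim]        # keep the largest eigenvalues
    w_top = np.clip(w[order], 0.0, None)     # guard against tiny negatives
    return V[:, order] * np.sqrt(w_top)

# Sanity check: pairwise distances of a known 2-D configuration are
# recovered exactly when the input dissimilarities are Euclidean.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, dim=2)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.allclose(D, D_rec))   # True
```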

  4. A fast method for large-scale isolation of phages from hospital ...

    African Journals Online (AJOL)

    This plaque-forming method could be adopted to isolate E. coli phage easily, rapidly and in large quantities. Among the 18 isolated E. coli phages, 10 of them had a broad host range in E. coli and warrant further study. Key words: Escherichia coli phages, large-scale isolation, drug resistance, biological properties.

  5. EI Scale: an environmental impact assessment scale related to the construction materials used in the reinforced concrete

    OpenAIRE

    Gilson Morales; Antonio Edésio Jungles; Sheila Elisa Scheidemantel Klein; Juliana Guarda

    2010-01-01

    This study aimed to create the EI Scale, an environmental impact assessment scale related to the construction materials used in reinforced concrete structure production. The main motivation was the need to classify environmental impact levels through indicators that assess the damage level of the process. The scale allowed converting information to estimate the environmental impact caused. Indicators were defined through the requirements and classification criteria of the impact aspects consid...

  6. Finite-size scaling method for the Berezinskii–Kosterlitz–Thouless transition

    International Nuclear Information System (INIS)

    Hsieh, Yun-Da; Kao, Ying-Jer; Sandvik, Anders W

    2013-01-01

    We test an improved finite-size scaling method for reliably extracting the critical temperature T_BKT of a Berezinskii–Kosterlitz–Thouless (BKT) transition. Using known single-parameter logarithmic corrections to the spin stiffness ρ_s at T_BKT in combination with the Kosterlitz–Nelson relation between the transition temperature and the stiffness, ρ_s(T_BKT) = 2T_BKT/π, we define a size-dependent transition temperature T_BKT(L_1, L_2) based on a pair of system sizes L_1, L_2, e.g., L_2 = 2L_1. We use Monte Carlo data for the standard two-dimensional classical XY model to demonstrate that this quantity is well behaved and can be reliably extrapolated to the thermodynamic limit using the next expected logarithmic correction beyond the ones included in defining T_BKT(L_1, L_2). For the Monte Carlo calculations we use GPU (graphical processing unit) computing to obtain high-precision data for L up to 512. We find that the sub-leading logarithmic corrections have significant effects on the extrapolation. Our result T_BKT = 0.8935(1) is several error bars above the previously best estimates of the transition temperature, T_BKT ≈ 0.8929. If only the leading log correction is used, the result is, however, consistent with the lower value, suggesting that previous works have underestimated T_BKT because of the neglect of sub-leading logarithms. Our method is easy to implement in practice and should be applicable to generic BKT transitions.

  7. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    1997-01-01

    This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical

  8. The spatial extent of rainfall events and its relation to precipitation scaling

    NARCIS (Netherlands)

    Lochbihler, K.U.; Lenderink, Geert; Siebesma, A.P.

    2017-01-01

    Observations show that subdaily precipitation extremes increase with dew point temperature at a rate exceeding the Clausius-Clapeyron (CC) relation. The understanding of this so-called super CC scaling is still incomplete, and observations of convective cell properties could provide important

  9. ADVANTAGES OF RAPID METHOD FOR DETERMINING SCALE MASS AND DECARBURIZED LAYER OF ROLLED COIL STEEL

    Directory of Open Access Journals (Sweden)

    E. V. Parusov

    2016-08-01

    Purpose. To determine universal empirical relationships that allow operational calculation of scale mass and decarburized layer depth based on the parameters of the technological process for rolled coil steel production. Methodology. The research was carried out on industrial batches of rolled steel of SAE 1006 and SAE 1065 grades. Scale removability was determined in accordance with the procedure of the «Bekaert» company by the specifications GA-03-16, GA-03-18, GS-03-02, and GS-06-01. The depth of the decarburized layer was identified in accordance with GOST 1763-68 (method M). Findings. Analysis of experimental data allowed us to determine the rational coil formation temperature for the investigated steel grades, which provides the best possible removal of scale from the metal surface, a minimal amount of scale, as well as compliance of the metal surface color with the requirements of European consumers. Originality. The work established correlations of the basic quality indicators of rolled coil high-carbon steel (scale mass, depth of the decarburized layer, and inter-lamellar distance in pearlite) with one of the main parameters (coil formation temperature) of the deformation and heat treatment mode. The resulting regression equations can be used, without metallographic analysis, to determine with minimal error the quantitative values of the total scale mass, depth of the decarburized layer, and average inter-lamellar distance in pearlite of rolled coil high-carbon steel. Practical value. Based on the specifications of the «Bekaert» company (GA-03-16, GA-03-18, GS-03-02 and GS-06-01), a method of testing mechanical descaling of rolled coil steel of low- and high-carbon grades was developed and approved in the environment of PJSC «ArcelorMittal Kryvyi Rih». The work resulted in development of the rapid method for determination of total and remaining scale mass on the rolled coil steel

  10. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    Science.gov (United States)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ^6Li in model spaces up to N_max = 22 and to reveal the ^4He + d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that are not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.

  11. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Li; He, Ya-Ling [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Kang, Qinjun [Computational Earth Science Group (EES-16), Los Alamos National Laboratory, Los Alamos, NM (United States); Tao, Wen-Quan, E-mail: wqtao@mail.xjtu.edu.cn [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.

  12. Wave-particle duality through an extended model of the scale relativity theory

    International Nuclear Information System (INIS)

    Ioannou, P D; Nica, P; Agop, M; Paun, V; Vizureanu, P

    2008-01-01

    Considering that the chaotic effect of the associated wave packet on the particle itself results in movements on fractal (continuous and non-differentiable) curves of fractal dimension D_F, wave-particle duality is obtained through an extension of the scale relativity theory. From an equation of motion for the complex speed field it results that, in a fractal fluid, convection, dissipation and dispersion reciprocally compensate at any scale (differentiable or non-differentiable). From here, for an irrotational movement, a generalized Schrödinger equation is obtained. The absence of dispersion implies a generalized Navier-Stokes type equation, whereas, for irrotational movement and fractal dimension D_F = 2, the usual Schrödinger equation results. The absence of dissipation implies a generalized Korteweg-de Vries type equation. In this framework, at the differentiable scale, the duality is achieved through the flowing regimes of the fractal fluid, i.e. the wave character by means of the non-quasi-autonomous flowing regime and the particle character by means of the quasi-autonomous flowing regime. These flowing regimes are separated by the '0.7 structure'. At the non-differentiable scale, a fractal potential acts as an energy accumulator and controls the duality through coherence. The correspondence between the differentiable and non-differentiable scales implies a Cantor space-time. Moreover, the wave-particle duality implies a fractal at any scale.

  13. Scaling of mode shapes from operational modal analysis using harmonic forces

    Science.gov (United States)

    Brandt, A.; Berardengo, M.; Manzoni, S.; Cigada, A.

    2017-10-01

    This paper presents a new method for scaling mode shapes obtained by means of operational modal analysis. The method is capable of scaling mode shapes on any structure, also structures with closely coupled modes, and the method can be used in the presence of ambient vibration from traffic or wind loads, etc. Harmonic excitation can be relatively easily accomplished by using general-purpose actuators, also for force levels necessary for driving large structures such as bridges and high-rise buildings. The signal processing necessary for mode shape scaling by the proposed method is simple and the method can easily be implemented in most measurement systems capable of generating a sine wave output. The tests necessary to scale the modes are short compared to typical operational modal analysis test time. The proposed method is thus easy to apply and inexpensive relative to some other methods for scaling mode shapes that are available in the literature. Although it is not necessary per se, we propose to excite the structure at, or close to, the eigenfrequencies of the modes to be scaled, since this provides better signal-to-noise ratio in the response sensors, thus permitting the use of smaller actuators. An extensive experimental activity on a real structure was carried out and the results reported demonstrate the feasibility and accuracy of the proposed method. Since the method utilizes harmonic excitation for the mode shape scaling, we propose to call the method OMAH.

  14. [Methods and Applications to estimate the conversion factor of Resource-Based Relative Value Scale for nurse-midwife's delivery service in the national health insurance].

    Science.gov (United States)

    Kim, Jinhyun; Jung, Yoomi

    2009-08-01

    This paper analyzed alternative methods of calculating the conversion factor for nurse-midwife's delivery services in the national health insurance and estimated the optimal reimbursement level for the services. A cost accounting model and Sustainable Growth Rate (SGR) model were developed to estimate the conversion factor of Resource-Based Relative Value Scale (RBRVS) for nurse-midwife's services, depending on the scope of revenue considered in financial analysis. The data and sources from the government and the financial statements from nurse-midwife clinics were used in analysis. The cost accounting model and SGR model showed a 17.6-37.9% increase and 19.0-23.6% increase, respectively, in nurse-midwife fee for delivery services in the national health insurance. The SGR model measured an overall trend of medical expenditures rather than an individual financial status of nurse-midwife clinics, and the cost analysis properly estimated the level of reimbursement for nurse-midwife's services. Normal vaginal delivery in nurse-midwife clinics is considered cost-effective in terms of insurance financing. Upon a declining share of health expenditures on midwife clinics, designing a reimbursement strategy for midwife's services could be an opportunity as well as a challenge when it comes to efficient resource allocation.

  15. Universal scaling behaviors of meteorological variables’ volatility and relations with original records

    Science.gov (United States)

    Lu, Feiyu; Yuan, Naiming; Fu, Zuntao; Mao, Jiangyu

    2012-10-01

    Volatility series (defined as the magnitude of the increments between successive elements) of five different meteorological variables over China are analyzed by means of detrended fluctuation analysis (DFA for short). Universal scaling behaviors are found in all volatility records, whose scaling exponents take similar distributions with similar mean values and standard deviations. To reconfirm the relation between long-range correlations in volatility and nonlinearity in original series, DFA is also applied to the magnitude records (defined as the absolute values of the original records). The results clearly indicate that the nonlinearity of the original series is more pronounced in the magnitude series.
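    The analysis described above — build the volatility series as the magnitude of successive increments, integrate the profile, detrend it in windows, and read the scaling exponent off a log-log fit — can be sketched in a few lines. This is a generic first-order DFA, not the authors' code; the white-noise input and the scale choices are illustrative assumptions (for an uncorrelated series the exponent should land near 0.5).

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: fluctuation F(s) per scale."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s                     # non-overlapping windows
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        ms = []
        for seg in segs:                        # remove a linear trend per window
            coef = np.polyfit(t, seg, 1)
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

rng = np.random.default_rng(0)
series = rng.standard_normal(4097)              # stand-in for a daily record
volatility = np.abs(np.diff(series))            # magnitude of the increments
scales = np.array([16, 32, 64, 128, 256])
F = dfa(volatility, scales)
# scaling exponent = slope of log F(s) against log s
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

A long-range-correlated volatility series would instead give a slope noticeably above 0.5, which is the signature of nonlinearity discussed in the record.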

  16. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
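    Of the reduction families surveyed, the singular-value-decomposition-based approach lends itself to a compact illustration. The sketch below is a generic proper orthogonal decomposition (POD) of a toy linear system, not any specific method from the review; the 20-variable system with two slow directions is an invented example meant only to show how an SVD of trajectory snapshots yields a small reduced model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                          # toy "species" count

# Stable linear system x' = A x with 2 slow and 18 fast decay directions
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
decay = np.concatenate(([0.1, 0.2], rng.uniform(5.0, 10.0, n - 2)))
A = Q @ np.diag(-decay) @ Q.T

# Trajectory snapshots (forward Euler); start with strong slow-mode content
dt, steps = 0.01, 500
x = 2.0 * Q[:, 0] + 2.0 * Q[:, 1] + 0.3 * rng.standard_normal(n)
snaps = np.empty((n, steps))
for k in range(steps):
    snaps[:, k] = x
    x = x + dt * (A @ x)

# POD: the dominant left singular vectors of the snapshot matrix span the
# slow subspace; project the dynamics onto them
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
r = 2
V = U[:, :r]
A_r = V.T @ A @ V                               # reduced r x r model

# The late-time state is well captured by the 2-dimensional basis
x_end = snaps[:, -1]
rel_err = np.linalg.norm(x_end - V @ (V.T @ x_end)) / np.linalg.norm(x_end)
```

For nonlinear biochemical networks the projection step is more delicate, which is exactly why the review distinguishes several families of methods.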

  17. ALGORITHM FOR DYNAMIC SCALING RELATIONAL DATABASE IN CLOUDS

    Directory of Open Access Journals (Sweden)

    Alexander V. Boichenko

    2014-01-01

    Full Text Available This article analyzes the main methods of scaling databases (replication, sharding) and their support in popular relational databases and NoSQL solutions with different data models: document-oriented, key-value, column-oriented, and graph. The article presents an algorithm for the dynamic scaling of a relational database (DB) that takes into account the specifics of the different types of logical database model. This article was prepared with the support of RFBR (grant № 13-07-00749).
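    The article's algorithm itself is not reproduced in the abstract, but the sharding building block it analyzes can be illustrated with a generic consistent-hashing router, which keeps key movement small when shards are added dynamically. Everything below (node names, virtual-node count) is a hypothetical sketch, not the authors' algorithm.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent-hash shard router: adding a node remaps only ~1/N of keys."""

    def __init__(self, nodes, vnodes=64):
        self._ring = []                        # sorted (hash, node) points
        for node in nodes:
            self.add_node(node, vnodes)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node, vnodes=64):
        # each shard owns many virtual points for an even key distribution
        for i in range(vnodes):
            self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    def route(self, key):
        """Shard owning `key`: the first ring point clockwise of the key's hash."""
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["db1", "db2", "db3"])
shard = ring.route("customer:42")
```

When a fourth shard is added, only the keys whose hashes fall into the new node's arcs move, which is what makes hash-based sharding attractive for dynamic scaling.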

  18. TESTING THE ASTEROSEISMIC SCALING RELATIONS FOR RED GIANTS WITH ECLIPSING BINARIES OBSERVED BY KEPLER

    Energy Technology Data Exchange (ETDEWEB)

    Gaulme, P.; McKeever, J.; Jackiewicz, J.; Rawls, M. L. [Department of Astronomy, New Mexico State University, P.O. Box 30001, MSC 4500, Las Cruces, NM 88003-8001 (United States); Corsaro, E. [Laboratoire AIM, CEA/DRF-CNRS, Université Paris 7 Diderot, IRFU/SAp, Centre de Saclay, F-91191 Gif-sur-Yvette (France); Mosser, B. [LESIA, Observatoire de Paris, PSL Research University, CNRS, Université Pierre et Marie Curie, Université Denis Diderot, F-92195 Meudon (France); Southworth, J. [Astrophysics Group, Keele University, Staffordshire, ST5 5BG (United Kingdom); Mahadevan, S.; Bender, C.; Deshpande, R., E-mail: gaulme@nmsu.edu [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States)

    2016-12-01

    Given the potential of ensemble asteroseismology for understanding fundamental properties of large numbers of stars, it is critical to determine the accuracy of the scaling relations on which these measurements are based. From several powerful validation techniques, all indications so far show that stellar radius estimates from the asteroseismic scaling relations are accurate to within a few percent. Eclipsing binary systems hosting at least one star with detectable solar-like oscillations constitute the ideal test objects for validating asteroseismic radius and mass inferences. By combining radial velocity (RV) measurements and photometric time series of eclipses, it is possible to determine the masses and radii of each component of a double-lined spectroscopic binary. We report the results of a four-year RV survey performed with the échelle spectrometer of the Astrophysical Research Consortium’s 3.5 m telescope and the APOGEE spectrometer at Apache Point Observatory. We compare the masses and radii of 10 red giants (RGs) obtained by combining radial velocities and eclipse photometry with the estimates from the asteroseismic scaling relations. We find that the asteroseismic scaling relations overestimate RG radii by about 5% on average and masses by about 15% for stars at various stages of RG evolution. Systematic overestimation of mass leads to underestimation of stellar age, which can have important implications for ensemble asteroseismology used for Galactic studies. As part of a second objective, where asteroseismology is used for understanding binary systems, we confirm that oscillations of RGs in close binaries can be suppressed enough to be undetectable, a hypothesis that was proposed in a previous work.
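    The relations being tested are the standard solar-like oscillation scaling relations, which give stellar mass and radius from the frequency of maximum power (nu_max), the large frequency separation (Dnu), and the effective temperature. A minimal sketch follows; the solar reference values are typical literature choices that vary slightly between calibrations, and the red-giant inputs are illustrative, not taken from the paper's sample.

```python
# Solar reference values (typical literature choices; exact calibrations vary)
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN = 135.1       # muHz
TEFF_SUN = 5777.0     # K

def seismic_mass_radius(nu_max, dnu, teff):
    """Mass and radius (solar units) from the standard scaling relations."""
    r = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    m = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    return m, r

# A typical red giant: nu_max ~ 30 muHz, dnu ~ 4 muHz, Teff ~ 4800 K
m, r = seismic_mass_radius(30.0, 4.0, 4800.0)
```

The paper's result — radii overestimated by about 5% and masses by about 15% — refers to the output of exactly these relations compared against dynamical masses and radii from eclipse modelling.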

  19. The MUSIC of galaxy clusters - II. X-ray global properties and scaling relations

    Science.gov (United States)

    Biffi, V.; Sembolini, F.; De Petris, M.; Valdarnini, R.; Yepes, G.; Gottlöber, S.

    2014-03-01

    We present the X-ray properties and scaling relations of a large sample of clusters extracted from the Marenostrum MUltidark SImulations of galaxy Clusters (MUSIC) data set. We focus on a sub-sample of 179 clusters at redshift z ~ 0.11 with masses above 3.2 × 10^14 h^-1 M⊙. We employed the X-ray photon simulator PHOX to obtain synthetic Chandra observations and derive observable-like global properties of the intracluster medium (ICM), such as X-ray temperature (TX) and luminosity (LX). TX is found to slightly underestimate the true mass-weighted temperature, although it traces the cluster total mass fairly well. We also study the effects of TX on scaling relations with cluster intrinsic properties: total mass (M500) and gas mass (Mg,500); the integrated Compton parameter (YSZ) of the Sunyaev-Zel'dovich (SZ) thermal effect; and YX = Mg,500 TX. We confirm that YX is a very good mass proxy, with a scatter on M500-YX and YSZ-YX lower than 5 per cent. The study of scaling relations among X-ray, intrinsic, and SZ properties indicates that simulated MUSIC clusters reasonably resemble the self-similar prediction, especially for correlations involving TX. The observational approach also allows for a more direct comparison with real clusters, from which we find deviations mainly due to the physical description of the ICM, affecting TX and, particularly, LX.

  20. SCALE-UP OF RAPID SMALL-SCALE ADSORPTION TESTS TO FIELD-SCALE ADSORBERS: THEORETICAL BASIS AND EXPERIMENTAL RESULTS FOR A CONSTANT DIFFUSIVITY

    Science.gov (United States)

    Granular activated carbon (GAC) is an effective treatment technique for the removal of some toxic organics from drinking water or wastewater, however, it can be a relatively expensive process, especially if it is designed improperly. A rapid method for the design of large-scale f...

  1. Evaluation of treatment related fear using a newly developed fear scale for children: "Fear assessment picture scale" and its association with physiological response.

    Science.gov (United States)

    Tiwari, Nishidha; Tiwari, Shilpi; Thakur, Ruchi; Agrawal, Nikita; Shashikiran, N D; Singla, Shilpy

    2015-01-01

    Dental treatment is usually a poignant phenomenon for children. Projective scales are preferred over psychometric scales to recognize it and to obtain the self-report from children. The aims were to evaluate treatment related fear using a newly developed fear scale for children, the fear assessment picture scale (FAPS), and anxiety with the colored version of the modified facial affective scale (MFAS) - three faces, along with physiologic responses (pulse rate and oxygen saturation) obtained by pulse oximeter before and during the pulpectomy procedure. In total, 60 children of age 6-8 years who were visiting the dental hospital for the first time and needed pulpectomy treatment were selected. Children selected were of sound physical, physiological, and mental condition. Two projective scales were used: one to assess fear (FAPS) and one to assess anxiety (colored version of MFAS - three faces). These were correlated with the physiological responses (oxygen saturation and pulse rate) of children obtained by pulse oximeter before and during the pulpectomy procedure. Shapiro-Wilk test, McNemar's test, Wilcoxon signed ranks test, Kruskal-Wallis test, and Mann-Whitney test were applied in the study. The physiological responses showed association with FAPS and MFAS, though not significant. However, oxygen saturation with MFAS showed a significant change between "no anxiety" and "some anxiety" as quantified by the Kruskal-Wallis test (value 6.287, P = 0.043). The test is easy and fast to apply on children and reduces the chair-side time.

  2. Mitigation of Power frequency Magnetic Fields. Using Scale Invariant and Shape Optimization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Salinas, Ener; Yueqiang Liu; Daalder, Jaap; Cruz, Pedro; Antunez de Souza, Paulo Roberto Jr; Atalaya, Juan Carlos; Paula Marciano, Fabianna de; Eskinasy, Alexandre

    2006-10-15

    The present report describes the development and application of two novel methods for implementing mitigation techniques of magnetic fields at power frequencies. The first method makes use of scaling rules for electromagnetic quantities, while the second one applies a 2D shape optimization algorithm based on gradient methods. Before this project, the first method had already been successfully applied (by some of the authors of this report) to electromagnetic designs involving purely conductive material (e.g. copper, aluminium), which implied a linear formulation. Here we went beyond this approach and developed a formulation involving ferromagnetic (i.e. non-linear) materials. Surprisingly, we obtained good equivalent replacements for test transformers by varying the input current. Although the validity of this equivalence is constrained to regions not too close to the source, the results can still be considered useful, as most field mitigation techniques are precisely developed for reducing the magnetic field in regions relatively far from the sources. The shape optimization method was applied in this project to calculate the optimal geometry of a purely conductive plate to mitigate the magnetic field originating from underground cables. The objective function was a weighted combination of magnetic energy at the region of interest and dissipated heat at the shielding material. To our surprise, shapes of complex structure, difficult to interpret (and probably even harder to anticipate), were the result of the applied process. However, the practical implementation (using some approximation of these shapes) gave excellent experimental mitigation factors.

  3. Soil organic matter dynamics and CO2 fluxes in relation to landscape scale processes: linking process understanding to regional scale carbon mass-balances

    Science.gov (United States)

    Van Oost, Kristof; Nadeu, Elisabet; Wiaux, François; Wang, Zhengang; Stevens, François; Vanclooster, Marnik; Tran, Anh; Bogaert, Patrick; Doetterl, Sebastian; Lambot, Sébastien; Van wesemael, Bas

    2014-05-01

    In this paper, we synthesize the main outcomes of a collaborative project (2009-2014) initiated at the UCL (Belgium). The main objective of the project was to increase our understanding of soil organic matter dynamics in complex landscapes and use this to improve predictions of regional scale soil carbon balances. In a first phase, the project characterized the emergent spatial variability in soil organic matter storage and key soil properties at the regional scale. Based on the integration of remote sensing, geomorphological and soil analysis techniques, we quantified the temporal and spatial variability of soil carbon stock and pool distribution at the local and regional scales. This work showed a linkage between lateral fluxes of C associated with sediment transport and the spatial variation in carbon storage at multiple spatial scales. In a second phase, the project focused on characterizing key controlling factors and process interactions at the catena scale. In-situ experiments of soil CO2 respiration showed that the soil carbon response at the catena scale was spatially heterogeneous and was mainly controlled by the catenary variation of soil physical attributes (soil moisture, temperature, C quality). The hillslope scale characterization relied on advanced hydrogeophysical techniques such as GPR (Ground Penetrating Radar), EMI (Electromagnetic induction), ERT (Electrical Resistivity Tomography), and geophysical inversion and data mining tools. Finally, we report on the integration of these insights into a coupled and spatially explicit model and its application. Simulations showed that C stocks and the redistribution of mass and energy fluxes are closely coupled; they induce structured spatial and temporal patterns with non-negligible associated uncertainties. We discuss the main outcomes of these activities in relation to sink-source behavior and the relevance of erosion processes for larger-scale C budgets.

  4. Atomistic simulations of materials: Methods for accurate potentials and realistic time scales

    Science.gov (United States)

    Tiwary, Pratyush

    This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousands of atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well

  5. Methods for assessing the socioeconomic impacts of large-scale resource developments: implications for nuclear repository siting

    International Nuclear Information System (INIS)

    Murdock, S.H.; Leistritz, F.L.

    1983-03-01

    This report provides an overview of the major methods presently available for assessing the socioeconomic impacts of large-scale resource developments and discusses the implications and applications of such methods for nuclear-waste-repository siting. The report: (1) summarizes conceptual approaches underlying, and methodological alternatives for, the conduct of impact assessments in each substantive area, and then enumerates advantages and disadvantages of each alternative; (2) describes factors related to the impact-assessment process, impact events, and the characteristics of rural areas that affect the magnitude and distribution of impacts and the assessment of impacts in each area; (3) provides a detailed review of those methodologies actually used in impact assessment for each area, describes advantages and problems encountered in the use of each method, and identifies the frequency of use and the general level of acceptance of each technique; and (4) summarizes the implications of each area of projection for the repository-siting process and the applicability of the methods for each area to the special and standard features of repositories, and makes general recommendations concerning specific methods and procedures that should be incorporated in assessments for siting areas.

  6. Rosenberg Self-Esteem Scale: Method Effects, Factorial Structure and Scale Invariance Across Migrant Child and Urban Child Populations in China.

    Science.gov (United States)

    Wu, Yang; Zuo, Bin; Wen, Fangfang; Yan, Lei

    2017-01-01

    Using confirmatory factor analyses, this study examined the method effects on a Chinese version of the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965 ) in a sample of migrant and urban children in China. In all, 982 children completed the RSES, and 9 models and 9 corresponding variants were specified and tested. The results indicated that the method effects are associated with both positively and negatively worded items and that Item 8 should be treated as a positively worded item. Additionally, the method effects models were invariant across migrant and urban children in China.

  7. Scaling criteria and an assessment of Semiscale Mod-3 scaling for small-break loss-of-coolant transients

    International Nuclear Information System (INIS)

    Larson, T.K.; Anderson, J.L.; Shimeck, D.J.

    1982-01-01

    Various methods of scaling fluid thermal-hydraulic test facilities and their relative merits and disadvantages are examined in light of nuclear reactor safety considerations. Particular emphasis is placed on examination of the scaling of the Semiscale Mod-3 system and determination of the thermal-hydraulic phenomena thought to be important during a small break loss-of-coolant accident in a pressurized water reactor (PWR). The influence of geometric and dynamic scaling concerns in the Mod-3 system on small break behavior is addressed from an engineering viewpoint, and corrective measures contemplated or required to make results from Semiscale tests more meaningful relative to expected PWR response are discussed

  8. Scaling relations for plasma production and acceleration of rotating plasma flows

    International Nuclear Information System (INIS)

    Ikehata, Takashi; Tanabe, Toshio; Mase, Hiroshi; Sekine, Ryusuke; Hasegawa, Kazuyuki.

    1989-01-01

    Scaling relations for the plasma production and acceleration in the rotating plasma gun, which has been developed as a new means of plasma centrifuge, are investigated theoretically and experimentally. Two operational modes are studied: the gas-discharge mode for gaseous elements and the vacuum-discharge mode for solid elements. Relations of the plasma density and velocities to the discharge current and the magnetic field are derived. The agreement between experiment and theory is quite good. It is found that the fully-ionized rotating plasmas produced in the gas-discharge mode are the most advantageous for realizing efficient plasma centrifuge. (author)

  9. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    Science.gov (United States)

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to a factor of 1000 for the MRF-fast imaging with steady-state precession sequence and more than a factor of 15 for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
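    The core tool named in the abstract, randomized singular value decomposition, can be sketched generically. This is a textbook range-finder implementation with oversampling and power iterations, not the authors' MRF code; the synthetic low-rank "dictionary" below stands in for a real MRF dictionary (rows as fingerprint atoms, columns as time points), and all sizes are invented for illustration.

```python
import numpy as np

def randomized_svd(D, rank, n_oversample=10, n_iter=2, seed=0):
    """Approximate the top-`rank` singular triplets of D via a random sketch."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    k = rank + n_oversample
    Q = D @ rng.standard_normal((n, k))        # random sketch of the range of D
    for _ in range(n_iter):                    # power iterations sharpen the spectrum
        Q, _ = np.linalg.qr(Q)
        Q, _ = np.linalg.qr(D.T @ Q)
        Q = D @ Q
    Q, _ = np.linalg.qr(Q)                     # orthonormal range basis, m x k
    B = Q.T @ D                                # small k x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Compress a synthetic rank-50 "dictionary" (2000 atoms x 400 time points)
rng = np.random.default_rng(1)
D = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 400))
U, s, Vt = randomized_svd(D, rank=50)
D_hat = U @ np.diag(s) @ Vt
rel_err = np.linalg.norm(D - D_hat) / np.linalg.norm(D)
```

The memory advantage comes from never forming the full dense SVD of the dictionary: only the k-column sketch and the small k x n problem are kept in memory.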

  10. Mental Illness Related Internalized Stigma: Psychometric Properties of the Brief ISMI Scale in Greece.

    Science.gov (United States)

    Paraskevoulakou, Alexia; Vrettou, Kassiani; Pikouli, Katerina; Triantafillou, Evgenia; Lykou, Anastasia; Economou, Marina

    2017-09-01

    Since evaluation regarding the impact of mental illness related internalized stigma is scarce, there is a great need for psychometric instruments which could contribute to understanding its adverse effects among Greek patients with severe mental illness. The Brief Internalized Stigma of Mental Illness (ISMI) scale is one of the most widely used measures designed to assess the subjective experience of stigma related to mental illness. The present study aimed to investigate the psychometric properties of the Greek version of the Brief ISMI scale. In addition to presenting psychometric findings, we explored the relationship of the Greek version of the Brief ISMI subscales with indicators of self-esteem and quality of life. 272 outpatients (108 males, 164 females) meeting the DSM-IV TR criteria for severe mental disorder (schizophrenia, bipolar disorder, major depression) completed the Brief ISMI, the RSES, and the WHOQOL-BREF scales. Patients reported age and educational level. A retest was conducted with 124 patients. Cronbach's alpha coefficient was 0.83. The test-retest reliability coefficients varied from 0.81 to 0.91, indicating substantial agreement. The ICC was 0.83 for the total score and 0.69 and 0.77 for the two factors, respectively. Factor analysis provided strong evidence for a two factor model. Factors 1 and 2 were named respectively "how others view me" and "how I view myself". They were negatively correlated with both the RSES and WHOQOL-BREF scales, as well as with educational level. Factor 2 was significantly associated with the type of diagnosis. The Greek version of the Brief ISMI scale can be used as a reliable and valid tool for assessing mental illness related internalized stigma among Greek patients with severe mental illness.

  11. Interspecific scaling patterns of talar articular surfaces within primates and their closest living relatives

    Science.gov (United States)

    Yapuncich, Gabriel S; Boyer, Doug M

    2014-01-01

    The articular facets of interosseous joints must transmit forces while maintaining relatively low stresses. To prevent overloading, joints that transmit higher forces should therefore have larger facet areas. The relative contributions of body mass and muscle-induced forces to joint stress are unclear, but generate opposing hypotheses. If mass-induced forces dominate, facet area should scale with positive allometry to body mass. Alternatively, muscle-induced forces should cause facets to scale isometrically with body mass. Within primates, both scaling patterns have been reported for articular surfaces of the femoral and humeral heads, but more distal elements are less well studied. Additionally, examination of complex articular surfaces has largely been limited to linear measurements, so that 'true area' remains poorly assessed. To re-assess these scaling relationships, we examine the relationship between body size and articular surface areas of the talus. Area measurements were taken from microCT scan-generated surfaces of all talar facets from a comprehensive sample of extant euarchontan taxa (primates, treeshrews, and colugos). Log-transformed data were regressed on literature-derived log-body mass using reduced major axis and phylogenetic least squares regressions. We examine the scaling patterns of muscle mass and physiological cross-sectional area (PCSA) to body mass, as these relationships may complicate each model. Finally, we examine the scaling pattern of hindlimb muscle PCSA to talar articular surface area, a direct test of the effect of mass-induced forces on joint surfaces. Among most groups, there is an overall trend toward positive allometry for articular surfaces. The ectal (= posterior calcaneal) facet scales with positive allometry among all groups except 'sundatherians', strepsirrhines, galagids, and lorisids. The medial tibial facet scales isometrically among all groups except lemuroids. Scaling coefficients are not correlated with sample

  12. TESTING SCALING RELATIONS FOR SOLAR-LIKE OSCILLATIONS FROM THE MAIN SEQUENCE TO RED GIANTS USING KEPLER DATA

    Energy Technology Data Exchange (ETDEWEB)

    Huber, D.; Bedding, T. R.; Stello, D. [Sydney Institute for Astronomy (SIfA), School of Physics, University of Sydney, NSW 2006 (Australia); Hekker, S. [Astronomical Institute 'Anton Pannekoek', University of Amsterdam, Science Park 904, 1098 XH Amsterdam (Netherlands); Mathur, S. [High Altitude Observatory, NCAR, P.O. Box 3000, Boulder, CO 80307 (United States); Mosser, B. [LESIA, CNRS, Universite Pierre et Marie Curie, Universite Denis Diderot, Observatoire de Paris, 92195 Meudon cedex (France); Verner, G. A.; Elsworth, Y. P.; Hale, S. J.; Chaplin, W. J. [School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT (United Kingdom); Bonanno, A. [INAF Osservatorio Astrofisico di Catania (Italy); Buzasi, D. L. [Eureka Scientific, 2452 Delmer Street Suite 100, Oakland, CA 94602-3017 (United States); Campante, T. L. [Centro de Astrofisica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Kallinger, T. [Department of Physics and Astronomy, University of British Columbia, Vancouver (Canada); Silva Aguirre, V. [Max-Planck-Institut fuer Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching (Germany); De Ridder, J. [Instituut voor Sterrenkunde, K.U.Leuven (Belgium); Garcia, R. A. [Laboratoire AIM, CEA/DSM-CNRS, Universite Paris 7 Diderot, IRFU/SAp, Centre de Saclay, 91191, Gif-sur-Yvette (France); Appourchaux, T. [Institut d'Astrophysique Spatiale, UMR 8617, Universite Paris Sud, 91405 Orsay Cedex (France); Frandsen, S. [Danish AsteroSeismology Centre (DASC), Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C (Denmark); Houdek, G., E-mail: dhuber@physics.usyd.edu.au [Institute of Astronomy, University of Vienna, 1180 Vienna (Austria); and others

    2011-12-20

    We have analyzed solar-like oscillations in ~1700 stars observed by the Kepler Mission, spanning from the main sequence to the red clump. Using evolutionary models, we test asteroseismic scaling relations for the frequency of maximum power (nu_max), the large frequency separation (Delta_nu), and oscillation amplitudes. We show that the difference of the Delta_nu-nu_max relation for unevolved and evolved stars can be explained by different distributions in effective temperature and stellar mass, in agreement with what is expected from scaling relations. For oscillation amplitudes, we show that neither (L/M)^s scaling nor the revised scaling relation by Kjeldsen and Bedding is accurate for red-giant stars, and demonstrate that a revised scaling relation with a separate luminosity-mass dependence can be used to calculate amplitudes from the main sequence to red giants to a precision of ~25%. The residuals show an offset particularly for unevolved stars, suggesting that an additional physical dependency is necessary to fully reproduce the observed amplitudes. We investigate correlations between amplitudes and stellar activity, and find evidence that the effect of amplitude suppression is most pronounced for subgiant stars. Finally, we test the location of the cool edge of the instability strip in the Hertzsprung-Russell diagram using solar-like oscillations and find the detections in the hottest stars compatible with a domain of hybrid stochastically excited and opacity driven pulsation.

  13. Scaling relations between trabecular bone volume fraction and microstructure at different skeletal sites.

    Science.gov (United States)

    Räth, Christoph; Baum, Thomas; Monetti, Roberto; Sidorenko, Irina; Wolf, Petra; Eckstein, Felix; Matsuura, Maiko; Lochmüller, Eva-Maria; Zysset, Philippe K; Rummeny, Ernst J; Link, Thomas M; Bauer, Jan S

    2013-12-01

    In this study, we investigated the scaling relations between trabecular bone volume fraction (BV/TV) and parameters of the trabecular microstructure at different skeletal sites. Cylindrical bone samples with a diameter of 8 mm were harvested from different skeletal sites of 154 human donors in vitro: 87 from the distal radius, 59/69 from the thoracic/lumbar spine, 51 from the femoral neck, and 83 from the greater trochanter. μCT images were obtained with an isotropic spatial resolution of 26 μm. BV/TV and trabecular microstructure parameters (TbN, TbTh, TbSp, scaling indices (mean and σ of α and αz), and Minkowski Functionals (Surface, Curvature, Euler)) were computed for each sample. The regression coefficient β was determined for each skeletal site as the slope of a linear fit in the double-logarithmic representations of the correlations of BV/TV versus the respective microstructure parameter. Statistically significant correlation coefficients ranging from r=0.36 to r=0.97 were observed for BV/TV versus microstructure parameters, except for Curvature and Euler. The regression coefficients β were 0.19 to 0.23 (TbN), 0.21 to 0.30 (TbTh), -0.28 to -0.24 (TbSp), 0.58 to 0.71 (Surface), 0.12 to 0.16 (mean α), 0.07 to 0.11 (mean αz), -0.44 to -0.30 (σ(α)), and -0.39 to -0.14 (σ(αz)) at the different skeletal sites. The 95% confidence intervals of β overlapped for almost all microstructure parameters at the different skeletal sites. The scaling relations were independent of vertebral fracture status and similar for subjects aged 60-69, 70-79, and >79 years. In conclusion, the bone volume fraction-microstructure scaling relations showed a rather universal character. © 2013.
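    The regression coefficient beta used throughout this record is simply the slope of a linear fit in log-log space. A minimal sketch follows, with synthetic BV/TV and TbN-like data generated from an assumed power law (beta = 0.22, chosen to fall in the reported TbN range); the prefactor, scatter, and BV/TV range are illustrative assumptions, not the study's data.

```python
import numpy as np

def scaling_exponent(bvtv, param):
    """Slope beta of the double-logarithmic fit param ~ (BV/TV)^beta."""
    beta, _ = np.polyfit(np.log(bvtv), np.log(param), 1)
    return beta

# Synthetic check against a known power law (beta = 0.22)
rng = np.random.default_rng(0)
bvtv = rng.uniform(0.05, 0.35, 100)                   # plausible BV/TV values
tbn = 1.8 * bvtv ** 0.22 * np.exp(rng.normal(0.0, 0.02, 100))  # mild scatter
beta = scaling_exponent(bvtv, tbn)
```

Fitting in log-log space is what makes beta a scale-free exponent, which is why the study can compare it directly across skeletal sites.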

  14. Fault diagnosis of rolling element bearing using a new optimal scale morphology analysis method.

    Science.gov (United States)

    Yan, Xiaoan; Jia, Minping; Zhang, Wan; Zhu, Lin

    2018-02-01

    Periodic transient impulses are key indicators of rolling element bearing defects. Efficient acquisition of the impact impulses associated with the defects is crucial for the precise detection of bearing defects. However, transient features of rolling element bearings are generally immersed in stochastic noise and harmonic interference. Therefore, in this paper, a new optimal scale morphology analysis method, named adaptive multiscale combination morphological filter-hat transform (AMCMFH), is proposed for rolling element bearing fault diagnosis, which can both reduce stochastic noise and preserve signal details. In this method, firstly, an adaptive selection strategy based on the feature energy factor (FEF) is introduced to determine the optimal structuring element (SE) scale of the multiscale combination morphological filter-hat transform (MCMFH). Subsequently, MCMFH with the optimal SE scale is applied to obtain the impulse components from the bearing vibration signal. Finally, fault types of the bearing are confirmed by extracting the defect frequency from the envelope spectrum of the impulse components. The validity of the proposed method is verified through simulation analysis and bearing vibration data from a laboratory bench. Results indicate that the proposed method has a good capability to recognize localized faults appearing on rolling element bearings from vibration signals. The study supplies a novel technique for the detection of faulty bearings. Copyright © 2018. Published by Elsevier Ltd.
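
A generic multiscale morphological top-hat scan can be sketched as follows. This is not the authors' AMCMFH implementation: the combination filter below is simply the average of the white and black top-hats, kurtosis stands in for their feature energy factor (FEF) when selecting the structuring element (SE) scale, and the test signal is synthetic:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def combination_tophat(x, size):
    # Average of the white top-hat (x - opening) and the black top-hat
    # (closing - x): passes narrow impulses of either polarity while
    # suppressing broad harmonic components.
    return ((x - grey_opening(x, size)) + (grey_closing(x, size) - x)) / 2.0

def kurtosis(y):
    # Impulsiveness measure, used here as a stand-in for the FEF.
    y = y - y.mean()
    return (y**4).mean() / (y**2).mean() ** 2

# Synthetic bearing-like signal: periodic impulses + harmonic + noise.
rng = np.random.default_rng(1)
n = 2048
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * t / 50)      # harmonic interference
x[::128] += 3.0                           # periodic transient impulses
x += 0.3 * rng.standard_normal(n)         # stochastic noise

# Scan SE scales and keep the one maximizing impulsiveness.
scales = range(3, 31, 2)
best = max(scales, key=lambda s: kurtosis(combination_tophat(x, s)))
filtered = combination_tophat(x, best)
```

The fault frequency would then be read off the envelope spectrum of `filtered`, as the abstract describes.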

  15. A Revised Method For Estimating Oxide Basicity Per The Smith Scale With Example Application To Glass Durability

    International Nuclear Information System (INIS)

    Reynolds, J.G.

    2011-01-01

    Previous researchers have developed correlations between oxide electronegativity and oxide basicity. The present paper revises those correlations using a newer method of calculating the electronegativity of the oxygen anion. Basicity is expressed using the Smith α parameter scale. A linear relation was found between the oxide electronegativity and the Smith α parameter, with an R² of 0.92. An example application of this new correlation to the durability of high-level nuclear waste glass is demonstrated. The durability of waste glass was found to be directly proportional to the quantity and basicity of the oxides of tetrahedrally coordinated network-forming ions.

  16. Multi-Scale Entropy Analysis as a Method for Time-Series Analysis of Climate Data

    Directory of Open Access Journals (Sweden)

    Heiko Balzter

    2015-03-01

    Full Text Available Evidence is mounting that the temporal dynamics of the climate system are changing at the same time as the average global temperature is increasing due to multiple climate forcings. A large number of extreme weather events such as prolonged cold spells, heatwaves, droughts and floods have been recorded around the world in the past 10 years. Such changes in the temporal scaling behaviour of climate time-series data can be difficult to detect. While there are easy and direct ways of analysing climate data by calculating the means and variances for different levels of temporal aggregation, these methods can miss more subtle changes in their dynamics. This paper describes multi-scale entropy (MSE) analysis as a tool to study climate time-series data and to identify temporal scales of variability and their change over time in climate time-series. MSE estimates the sample entropy of the time-series after coarse-graining at different temporal scales. An application of MSE to Central European, variance-adjusted, mean monthly air temperature anomalies (CRUTEM4v) is provided. The results show that the temporal scales of the current climate (1960–2014) are different from the long-term average (1850–1960). For temporal scale factors longer than 12 months, the sample entropy increased markedly compared to the long-term record. Such an increase can be explained by systems theory with greater complexity in the regional temperature data. From 1961 the patterns of monthly air temperatures are less regular at time-scales greater than 12 months than in the earlier time period. This finding suggests that, at these inter-annual time scales, the temperature variability has become less predictable than in the past. It is possible that climate system feedbacks are expressed in altered temporal scales of the European temperature time-series data. A comparison with the variance and Shannon entropy shows that MSE analysis can provide additional information on the
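
The two ingredients of MSE described in the abstract, coarse-graining followed by sample entropy, can be sketched with a minimal Costa-style implementation (an illustrative sketch, not the paper's code):

```python
import numpy as np

def coarse_grain(x, scale):
    # Non-overlapping block averages of length `scale`.
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m, r):
    # SampEn(m, r): negative log of the conditional probability that
    # template matches of length m (Chebyshev distance within r)
    # remain matches at length m + 1.
    x = np.asarray(x, dtype=float)
    def matches(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):
            d = np.abs(templ[i + 1:] - templ[i]).max(axis=1)
            c += int(np.count_nonzero(d <= r))
        return c
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 else np.inf

def multiscale_entropy(x, scales, m=2, r_frac=0.15):
    # Tolerance r is fixed from the *original* series, so that
    # coarse-graining changes the entropy estimate (Costa-style MSE).
    r = r_frac * np.std(x)
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]

# White noise loses sample entropy under coarse-graining; signals with
# long-range structure do not. That contrast is the signature MSE exploits.
rng = np.random.default_rng(2)
mse_noise = multiscale_entropy(rng.standard_normal(3000), range(1, 6))
print([round(v, 2) for v in mse_noise])
```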

  17. Large-scale tides in general relativity

    Energy Technology Data Exchange (ETDEWEB)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  18. Superhydrophobic multi-scale ZnO nanostructures fabricated by chemical vapor deposition method.

    Science.gov (United States)

    Zhou, Ming; Feng, Chengheng; Wu, Chunxia; Ma, Weiwei; Cai, Lan

    2009-07-01

    ZnO nanostructures were synthesized on Si(100) substrates by the chemical vapor deposition (CVD) method. Different morphologies of ZnO nanostructures, such as nanoparticle films, micro-pillars, and micro-nano multi-structures, were obtained under different conditions. XRD and TEM results showed good-quality ZnO crystal growth. Selected area electron diffraction analysis indicates that individual nano-wires are single crystals. The wettability of ZnO was studied with a contact angle measuring apparatus. We found that the wettability can be changed from hydrophobic to super-hydrophobic when the structure changes from a smooth particle film to a single micro-pillar, nano-wire, or micro-nano multi-scale structure. Compared with the particle film, whose contact angle (CA) is 90.7 degrees, the CA of the single-scale microstructure is 130-140 degrees and that of the sparse micro-nano multi-scale structure is 140-150 degrees. For a dense micro-nano multi-scale structure such as the nano-lawn, the CA can reach 168.2 degrees. The results indicate that the surface microstructure is very important to wettability: the micro-nano multi-scale structure outperforms the single-scale structure, and the dense multi-scale structure outperforms the sparse one.

  19. High precision micro-scale Hall Effect characterization method using in-line micro four-point probes

    DEFF Research Database (Denmark)

    Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong

    2008-01-01

    Accurate characterization of ultra shallow junctions (USJ) is important in order to understand the principles of junction formation and to develop the appropriate implant and annealing technologies. We investigate the capabilities of a new micro-scale Hall effect measurement method where the Hall effect is measured with collinear micro four-point probes (M4PP). We derive the sensitivity to electrode position errors and describe a position error suppression method to enable rapid, reliable Hall effect measurements with just two measurement points. We show, with both Monte Carlo simulations and experimental measurements, that the repeatability of a micro-scale Hall effect measurement is better than 1%. We demonstrate the ability to spatially resolve the Hall effect on the micro-scale by characterization of a USJ with a single laser stripe anneal. The micro sheet resistance variations resulting from

  20. Evaluation of ground motion scaling methods for analysis of structural systems

    Science.gov (United States)

    O'Donnell, A. P.; Beltsar, O.A.; Kurama, Y.C.; Kalkan, E.; Taflanidis, A.A.

    2011-01-01

    Ground motion selection and scaling is undoubtedly the most important component of any seismic risk assessment study that involves time-history analysis. Ironically, it is also the single parameter with the least guidance provided in current building codes, resulting in the use of mostly subjective choices in design. The relevant research to date has been primarily on single-degree-of-freedom systems, with only a few studies using multi-degree-of-freedom systems. Furthermore, the previous research is based solely on numerical simulations, with no experimental data available for the validation of the results. By contrast, the research effort described in this paper focuses on an experimental evaluation of selected ground motion scaling methods based on small-scale shake-table experiments of re-configurable linear-elastic and nonlinear multi-story building frame structure models. Ultimately, the experimental results will lead to the development of guidelines and procedures to achieve reliable demand estimates from nonlinear response history analysis in seismic design. In this paper, an overview of this research effort is discussed and preliminary results based on linear-elastic dynamic response are presented. © ASCE 2011.

  1. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K. [eds.]

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  2. Fractional Nottale's Scale Relativity and emergence of complexified gravity

    International Nuclear Information System (INIS)

    EL-Nabulsi, Ahmad Rami

    2009-01-01

    Fractional calculus of variations has recently gained significance in studying weak dissipative and nonconservative dynamical systems ranging from classical mechanics to quantum field theories. In this paper, fractional Nottale's Scale Relativity (NSR) for an arbitrary fractal dimension is introduced within the framework of the fractional action-like variational approach recently introduced by the author. The formalism is based on fractional differential operators that generalize the differential operators of conventional NSR but reduce to the standard formalism in the integer limit. Our main aim is to build the fractional setting for the NSR dynamical equations. Many interesting consequences arise, in particular the emergence of complexified gravity and complex time.

  3. A Hamiltonian-based derivation of Scaled Boundary Finite Element Method for elasticity problems

    International Nuclear Information System (INIS)

    Hu Zhiqiang; Lin Gao; Wang Yi; Liu Jun

    2010-01-01

    The Scaled Boundary Finite Element Method (SBFEM) is a semi-analytical approach for solving partial differential equations. For problems in elasticity, the governing equations can be obtained by a mechanically based formulation, a scaled-boundary-transformation-based formulation, or the principle of virtual work. The governing equations are described in the frame of the Lagrange system and the unknowns are displacements; in the solution procedure, however, auxiliary variables are introduced and the equations are solved in the state space. Based on the observation that the duality system proposed by W.X. Zhong for solving elasticity problems is similar to the above solution approach, this paper combines the discretization of the SBFEM with the duality system to derive the governing equations in the Hamiltonian system by introducing dual variables. The Precise Integration Method (PIM) used in the duality system is also an efficient method for solving the governing equations of the SBFEM for the displacement and boundary stiffness matrix, especially for cases that cause numerical difficulties in the usual eigenvalue method. Numerical examples are used to demonstrate the validity and effectiveness of the PIM for the solution of the boundary static stiffness.
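
The Precise Integration Method mentioned in the abstract is, at its core, a 2^N subdivision-and-doubling scheme for the matrix exponential that propagates the increment exp(Aτ) − I rather than the full exponential, avoiding round-off when τ is tiny. A minimal sketch of that idea (not the SBFEM solver itself):

```python
import numpy as np

def pim_expm(A, t, N=20):
    """Precise Integration Method sketch: exp(A t) via 2^N subdivision.

    The incremental part T = exp(A tau) - I is doubled N times using
    exp(2s) - I = 2(exp(s) - I) + (exp(s) - I)^2, so the identity is
    only added back at the very end.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    tau = t / 2.0**N
    At = A * tau
    # 4-term Taylor series of exp(A tau) - I; tau is so small that the
    # truncation error is far below machine precision.
    T = At + At @ At / 2.0 + At @ At @ At / 6.0 + At @ At @ At @ At / 24.0
    for _ in range(N):
        T = 2.0 * T + T @ T
    return np.eye(n) + T
```

As a check, for the rotation generator A = [[0, 1], [-1, 0]] the result matches the analytic exponential [[cos t, sin t], [-sin t, cos t]] to near machine precision.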

  4. Multi-scale multi-physics computational chemistry simulation based on ultra-accelerated quantum chemical molecular dynamics method for structural materials in boiling water reactor

    International Nuclear Information System (INIS)

    Miyamoto, Akira; Sato, Etsuko; Sato, Ryo; Inaba, Kenji; Hatakeyama, Nozomu

    2014-01-01

    In collaboration with experimental experts we have reported in the present conference (Hatakeyama, N. et al., “Experiment-integrated multi-scale, multi-physics computational chemistry simulation applied to corrosion behaviour of BWR structural materials”) the results of multi-scale multi-physics computational chemistry simulations applied to the corrosion behaviour of BWR structural materials. In macro-scale, a macroscopic simulator of the anode polarization curve was developed to solve the spatially one-dimensional electrochemical equations on the material surface at the continuum level, in order to understand the corrosion behaviour of a typical BWR structural material, SUS304. The experimental anode polarization behaviours of each pure metal were reproduced by fitting all the rates of electrochemical reactions, and the anode polarization curve of SUS304 was then calculated with the same parameters and found to reproduce the experimental behaviour successfully. In meso-scale, a kinetic Monte Carlo (KMC) simulator was applied to an actual-time simulation of the morphological corrosion behaviour under the influence of an applied voltage. In micro-scale, an ultra-accelerated quantum chemical molecular dynamics (UA-QCMD) code was applied to various metallic oxide surfaces of Fe2O3, Fe3O4, and Cr2O3, modelled together with water molecules and dissolved metallic ions on the surfaces; the dissolution and segregation behaviours were then successfully simulated dynamically by using UA-QCMD. In this paper we describe details of the multi-scale, multi-physics computational chemistry method, especially the UA-QCMD method. This method is approximately 10,000,000 times faster than conventional first-principles molecular dynamics methods based on density-functional theory (DFT), and its accuracy was also validated for various metals and metal oxides against DFT results. To assure multi-scale multi-physics computational chemistry simulation based on the UA-QCMD method for

  5. THE NON-CAUSAL ORIGIN OF THE BLACK-HOLE-GALAXY SCALING RELATIONS

    International Nuclear Information System (INIS)

    Jahnke, Knud; Maccio, Andrea V.

    2011-01-01

    We show that the M_BH-M_bulge scaling relations observed from the local to the high-z universe can be largely or even entirely explained by a non-causal origin, i.e., they do not imply the need for any physically coupled growth of black hole (BH) and bulge mass, for example, through feedback by active galactic nuclei (AGNs). Provided some physics for the absolute normalization, the creation of the scaling relations can be fully explained by the hierarchical assembly of BH and stellar mass through galaxy merging, from an initially uncorrelated distribution of BH and stellar masses in the early universe. We show this with a suite of dark matter halo merger trees for which we make assumptions about (uncorrelated) BH and stellar mass values at early cosmic times. We then follow the halos in the presence of global star formation and BH accretion recipes that (1) work without any coupling of the two properties per individual galaxy and (2) correctly reproduce the observed star formation and BH accretion rate density in the universe. With disk-to-bulge conversion in mergers included, our simulations even create the observed slope of ∼1.1 for the M_BH-M_bulge relation at z = 0. This also implies that AGN feedback is not a required (though still a possible) ingredient in galaxy evolution. In light of this, other mechanisms that can be invoked to truncate star formation in massive galaxies are equally justified.
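
The central-limit argument of the abstract can be illustrated with a deliberately crude toy model (not the authors' merger-tree simulation): when each present-day galaxy is the sum over a widely varying number of progenitors, a tight log-log correlation between BH and stellar mass emerges even though the two masses start uncorrelated and never physically interact. All distributions below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_gal = 2000

# Initially *uncorrelated* BH and stellar masses (lognormal scatter),
# mimicking the assumed conditions at early cosmic times.
m_bh0 = rng.lognormal(0.0, 1.0, n_gal)
m_star0 = rng.lognormal(0.0, 1.0, n_gal)
r_initial = np.corrcoef(np.log(m_bh0), np.log(m_star0))[0, 1]

# Hierarchical assembly: each z=0 galaxy sums the masses of its
# progenitors; the progenitor count spans three decades, and BH and
# stellar masses simply add, with no coupling between them.
n_prog = (10 ** rng.uniform(0, 3, n_gal)).astype(int) + 1
m_bh = np.array([rng.lognormal(0.0, 1.0, k).sum() for k in n_prog])
m_star = np.array([rng.lognormal(0.0, 1.0, k).sum() for k in n_prog])
r_final = np.corrcoef(np.log(m_bh), np.log(m_star))[0, 1]

print(f"log-log correlation: initial {r_initial:.2f}, "
      f"after merging {r_final:.2f}")
```

The correlation appears because both masses trace the shared merger history (the progenitor count), not because one mass regulates the other, which is the non-causal origin the paper argues for.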

  6. Development and preliminary testing of a scale to assess pain-related fear in children and adolescents.

    Science.gov (United States)

    Huguet, Anna; McGrath, Patrick J; Pardos, Judit

    2011-08-01

    It is assumed that pain-related fear, a present response to an immediate danger or threat such as pain, plays a significant role in the experience of pediatric pain. However, there are no measures to adequately measure this construct in children and adolescents. The purpose of this study was to develop and test the psychometric properties of a scale to assess pain-related fear to be used with Catalan-speaking children and adolescents between 7 and 16 years old. We initially developed a list of items that reflected the physiological, cognitive, and behavioral components of pain-related fear. We also queried an international group of experts, and interviewed children and adolescents. After pilot testing the initial version with a sample of 10 children, we administered the questionnaire to a sample of schoolchildren (n = 273) and children from medical clinics (n = 164) through individual interviews. Additional information was also collected during the interview to study the psychometric properties of the scale. Ten days after the initial interview, participating schoolchildren were requested to answer the questionnaire again. Item analysis and exploratory factor analysis with data from the school sample produced 2 meaningful factors (namely, Fearful thoughts and Fearful physical feelings and behaviors). Findings also showed that the Pediatric Pain Fear Scale (total scale and the 2 subscales) was both reliable and valid. This scale could help researchers to gain a better understanding about the role of pain-related fear in children and adolescents and support clinical decision-making. This article presents a new measure of fear associated with pain in children and adolescents. This measure could potentially help researchers to gain a better understanding about the role of pain-related fear in children and adolescents and support clinical decision-making. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.

  7. CoRE: A context-aware relation extraction method for relation completion

    KAUST Repository

    Li, Zhixu; Sharaf, Mohamed Abdel Fattah; Sitbon, Laurianne; Du, Xiaoyong; Zhou, Xiaofang

    2014-01-01

    We identify relation completion (RC) as one recurring problem that is central to the success of novel big data applications such as Entity Reconstruction and Data Enrichment. Given a semantic relation R, RC attempts to link entity pairs between two entity lists under the relation R. To accomplish the RC goals, we propose to formulate search queries for each query entity α based on some auxiliary information, so as to detect its target entity β from the set of retrieved documents. For instance, a pattern-based method (PaRE) uses extracted patterns as the auxiliary information in formulating search queries. However, high-quality patterns may decrease the probability of finding suitable target entities. As an alternative, we propose the CoRE method, which uses context terms learned surrounding the expression of a relation as the auxiliary information in formulating queries. The experimental results based on several real-world web data collections demonstrate that CoRE achieves much higher accuracy than PaRE for the purpose of RC. © 1989-2012 IEEE.

  9. Committee Representation and Medicare Reimbursements-An Examination of the Resource-Based Relative Value Scale.

    Science.gov (United States)

    Gao, Y Nina

    2018-04-06

    The Resource-Based Relative Value Scale Update Committee (RUC) submits recommended reimbursement values for physician work (wRVUs) under Medicare Part B. The RUC includes rotating representatives from medical specialties. To identify changes in physician reimbursements associated with RUC rotating seat representation. Relative Value Scale Update Committee members 1994-2013; Medicare Part B Relative Value Scale 1994-2013; Physician/Supplier Procedure Summary Master File 2007; Part B National Summary Data File 2000-2011. I match service and procedure codes to specialties using 2007 Medicare billing data. Subsequently, I model wRVUs as a function of RUC rotating committee representation and level of code specialization. An annual RUC rotating seat membership is associated with a statistically significant 3-5 percent increase in Medicare expenditures for codes billed to that specialty. For codes that are performed by a small number of physicians, the association between reimbursement and rotating subspecialty representation is positive, 0.177 (SE = 0.024). For codes that are performed by a large number of physicians, the association is negative, -0.183 (SE = 0.026). Rotating representation on the RUC is correlated with overall reimbursement rates. The resulting differential changes may exacerbate existing reimbursement discrepancies between generalist and specialist practitioners. © Health Research and Educational Trust.

  10. Measuring death-related anxiety in advanced cancer: preliminary psychometrics of the Death and Dying Distress Scale.

    Science.gov (United States)

    Lo, Christopher; Hales, Sarah; Zimmermann, Camilla; Gagliese, Lucia; Rydall, Anne; Rodin, Gary

    2011-10-01

    The alleviation of distress associated with death and dying is a central goal of palliative care, despite the lack of routine measurement of this outcome. In this study, we introduce the Death and Dying Distress Scale (DADDS), a new, brief measure we have developed to assess death-related anxiety in advanced cancer and other palliative populations. We describe its preliminary psychometrics based on a sample of 33 patients with advanced or metastatic cancer. The DADDS broadly captures distress about the loss of time and opportunity, the process of death and dying, and its impact on others. The initial version of the scale has a one-factor structure and good internal reliability. Dying and death-related distress was positively associated with depression and negatively associated with spiritual, emotional, physical, and functional well-being, providing early evidence of construct validity. This distress was relatively common, with 45% of the sample scoring in the upper reaches of the scale, suggesting that the DADDS may be a relevant outcome for palliative intervention. We conclude by presenting a revised 15-item version of the scale for further study in advanced cancer and other palliative populations.

  11. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    International Nuclear Information System (INIS)

    Cruz, Roberto de la; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-01-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction–diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction–diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. a population-dynamical model which accounts for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we are neglecting noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction–diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-graining model so that, within the mean-field region, the age-distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge

  12. Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Directory of Open Access Journals (Sweden)

    Lijuan Duan

    2017-01-01

    Full Text Available Hashing has been widely deployed to perform the Approximate Nearest Neighbor (ANN) search for large-scale image retrieval to solve the problem of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned by one deep hashing network may not provide the full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code which has stronger expression ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks to index images. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.

  13. Age-related changes in the plasticity and toughness of human cortical bone at multiple length-scales

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, Elizabeth A.; Schaible, Eric; Bale, Hrishikesh; Barth, Holly D.; Tang, Simon Y.; Reichert, Peter; Busse, Bjoern; Alliston, Tamara; Ager III, Joel W.; Ritchie, Robert O.

    2011-08-10

    The structure of human cortical bone evolves over multiple length-scales from its basic constituents of collagen and hydroxyapatite at the nanoscale to osteonal structures at near-millimeter dimensions, which all provide the basis for its mechanical properties. To resist fracture, bone’s toughness is derived intrinsically through plasticity (e.g., fibrillar sliding) at structural scales typically below a micron and extrinsically (i.e., during crack growth) through mechanisms (e.g., crack deflection/bridging) generated at larger structural scales. Biological factors such as aging lead to a markedly increased fracture risk, which is often associated with an age-related loss in bone mass (bone quantity). However, we find that age-related structural changes can significantly degrade the fracture resistance (bone quality) over multiple length-scales. Using in situ small-/wide-angle x-ray scattering/diffraction to characterize sub-micron structural changes, and synchrotron x-ray computed tomography and in situ fracture-toughness measurements in the scanning electron microscope to characterize effects at micron-scales, we show how these age-related structural changes at differing size-scales degrade both the intrinsic and extrinsic toughness of bone. Specifically, we attribute the loss in toughness to increased non-enzymatic collagen cross-linking, which suppresses plasticity at nanoscale dimensions, and to an increased osteonal density, which limits the potency of crack-bridging mechanisms at micron-scales. The link between these processes is that the increased stiffness of the cross-linked collagen requires energy to be absorbed by “plastic” deformation at higher structural levels, which occurs by the process of microcracking.

  14. Mercury Pollution from Small-Scale Gold Mining Can Be Stopped by Implementing the Gravity-Borax Method

    DEFF Research Database (Denmark)

    Køster-Rasmussen, Rasmus; Westergaard, Maria L; Brasholt, Marie

    2016-01-01

    Mercury is used globally to extract gold in artisanal and small-scale gold mining. The mercury-free gravity-borax method for gold extraction was introduced in two mining communities using mercury in the provinces Kalinga and Camarines Norte. This article describes project activities...... organization facilitated the shift in Kalinga. In conclusion, the gravity-borax method is a doable alternative to mercury use in artisanal and small-scale gold mining, but support from the civil society is needed....

  15. Intrinsic symmetry of the scaling laws and generalized relations for critical indices

    International Nuclear Information System (INIS)

    Plechko, V.N.

    1982-01-01

    It is shown that the scaling laws for critical indices can be expressed as a consequence of a simple symmetry principle. Heuristic relations for critical indices generalizing the scaling laws to the case of arbitrary order parameters are presented, which manifestly have a symmetric form and include the standard scaling laws as a particular case.

  16. Scale and scaling in agronomy and environmental sciences

    Science.gov (United States)

    Scale is of paramount importance in environmental studies, engineering, and design. The unique course covers the following topics: scale and scaling, methods and theories, scaling in soils and other porous media, scaling in plants and crops; scaling in landscapes and watersheds, and scaling in agro...

  17. Relating Lagrangian passive scalar scaling exponents to Eulerian scaling exponents in turbulence

    OpenAIRE

    Schmitt , François G

    2005-01-01

    Intermittency is a basic feature of fully developed turbulence, for both velocity and passive scalars. Intermittency is classically characterized by the Eulerian scaling exponents of structure functions. The same approach can be used in a Lagrangian framework to characterize the temporal intermittency of the velocity and passive scalar concentration of an element of fluid advected by a turbulent intermittent field. Here we focus on Lagrangian passive scalar scaling exponents, and discuss their p...

  18. An uncertainty principle for star formation - II. A new method for characterising the cloud-scale physics of star formation and feedback across cosmic history

    Science.gov (United States)

    Kruijssen, J. M. Diederik; Schruba, Andreas; Hygate, Alexander P. S.; Hu, Chia-Yu; Haydon, Daniel T.; Longmore, Steven N.

    2018-05-01

    The cloud-scale physics of star formation and feedback represent the main uncertainty in galaxy formation studies. Progress is hampered by the limited empirical constraints outside the restricted environment of the Local Group. In particular, the poorly quantified time evolution of the molecular cloud lifecycle, star formation, and feedback obstructs robust predictions on scales smaller than the disc scale height that are resolved in modern galaxy formation simulations. We present a new statistical method to derive the evolutionary timeline of molecular clouds and star-forming regions. By quantifying the excess or deficit of the gas-to-stellar flux ratio around peaks of gas or star formation tracer emission, we directly measure the relative rarity of these peaks, which allows us to derive their lifetimes. We present a step-by-step, quantitative description of the method and demonstrate its practical application. The method's accuracy is tested in nearly 300 experiments using simulated galaxy maps, showing that it is capable of constraining the molecular cloud lifetime and feedback time-scale to <0.1 dex precision. Access to the evolutionary timeline provides a variety of additional physical quantities, such as the cloud-scale star formation efficiency, the feedback outflow velocity, the mass loading factor, and the feedback energy or momentum coupling efficiencies to the ambient medium. We show that the results are robust for a wide variety of gas and star formation tracers, spatial resolutions, galaxy inclinations, and galaxy sizes. Finally, we demonstrate that our method can be applied out to high redshift (z ≲ 4) with a feasible time investment on current large-scale observatories. This is a major shift from previous studies that constrained the physics of star formation and feedback in the immediate vicinity of the Sun.

  19. Evaluation of treatment related fear using a newly developed fear scale for children: "Fear assessment picture scale" and its association with physiological response

    Directory of Open Access Journals (Sweden)

    Nishidha Tiwari

    2015-01-01

    Full Text Available Introduction: Dental treatment is often a distressing experience for children. Projective scales are preferred over psychometric scales for recognizing treatment-related fear and obtaining self-reports from children. Aims: The aims were to evaluate treatment-related fear using a newly developed fear scale for children, the fear assessment picture scale (FAPS), and anxiety with the colored version of the modified facial affective scale (MFAS - three faces), along with physiologic responses (pulse rate and oxygen saturation) obtained by pulse oximeter before and during the pulpectomy procedure. Settings and Design: In total, 60 children aged 6-8 years who were visiting the dental hospital for the first time and needed pulpectomy treatment were selected. Children selected were of sound physical, physiological, and mental condition. Two projective scales were used: one to assess fear - FAPS - and one to assess anxiety - the colored version of MFAS - three faces. These were correlated with the physiological responses (oxygen saturation and pulse rate) of children obtained by pulse oximeter before and during the pulpectomy procedure. Statistical Analysis Used: The Shapiro-Wilk test, McNemar's test, Wilcoxon signed-rank test, Kruskal-Wallis test, and Mann-Whitney test were applied in the study. Results: The physiological responses showed an association with FAPS and MFAS, though not a significant one. However, oxygen saturation with MFAS showed a significant change between "no anxiety" and "some anxiety", as quantified by a Kruskal-Wallis test value of 6.287, P = 0.043 (<0.05), before the pulpectomy procedure. Conclusions: The FAPS can prove to be a pragmatic tool in detecting fear among young children. The test is easy and fast to apply to children and reduces chair-side time.

  20. A PORTRAIT OF COLD GAS IN GALAXIES AT 60 pc RESOLUTION AND A SIMPLE METHOD TO TEST HYPOTHESES THAT LINK SMALL-SCALE ISM STRUCTURE TO GALAXY-SCALE PROCESSES

    International Nuclear Information System (INIS)

    Leroy, Adam K.; Hughes, Annie; Schruba, Andreas; Rosolowsky, Erik; Blanc, Guillermo A.; Escala, Andres; Bolatto, Alberto D.; Colombo, Dario; Kramer, Carsten; Kruijssen, J. M. Diederik; Meidt, Sharon; Querejeta, Miguel; Schinnerer, Eva; Sliwa, Kazimierz; Pety, Jerome; Sandstrom, Karin

    2016-01-01

    The cloud-scale density, velocity dispersion, and gravitational boundedness of the interstellar medium (ISM) vary within and among galaxies. In turbulent models, these properties play key roles in the ability of gas to form stars. New high-fidelity, high-resolution surveys offer the prospect to measure these quantities across galaxies. We present a simple approach to make such measurements and to test hypotheses that link small-scale gas structure to star formation and galactic environment. Our calculations capture the key physics of the Larson scaling relations, and we show good correspondence between our approach and a traditional “cloud properties” treatment. However, we argue that our method is preferable in many cases because of its simple, reproducible characterization of all emission. Using low-J ¹²CO data from recent surveys, we characterize the molecular ISM at 60 pc resolution in the Antennae, the Large Magellanic Cloud (LMC), M31, M33, M51, and M74. We report the distributions of surface density, velocity dispersion, and gravitational boundedness at 60 pc scales and show galaxy-to-galaxy and intragalaxy variations in each. The distribution of flux as a function of surface density appears roughly lognormal with a 1σ width of ∼0.3 dex, though the center of this distribution varies from galaxy to galaxy. The 60 pc resolution line width and molecular gas surface density correlate well, which is a fundamental behavior expected for virialized or free-falling gas. Varying the measurement scale for the LMC and M31, we show that the molecular ISM has higher surface densities, lower line widths, and more self-gravity at smaller scales.
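    As a minimal illustration of how a flux-weighted width in dex could be measured from pixel-level maps (the function name and per-pixel inputs are hypothetical, not the authors' code):

```python
import numpy as np

def flux_weighted_width_dex(surface_density, flux):
    """1-sigma width, in dex, of the flux distribution taken as a
    function of log10 surface density (hypothetical per-pixel inputs)."""
    logs = np.log10(surface_density)
    mean = np.average(logs, weights=flux)            # flux-weighted mean
    var = np.average((logs - mean) ** 2, weights=flux)
    return np.sqrt(var)
```

    For a lognormal flux distribution, this returns the 1σ width of the underlying Gaussian in log10 surface density.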

  1. A PORTRAIT OF COLD GAS IN GALAXIES AT 60 pc RESOLUTION AND A SIMPLE METHOD TO TEST HYPOTHESES THAT LINK SMALL-SCALE ISM STRUCTURE TO GALAXY-SCALE PROCESSES

    Energy Technology Data Exchange (ETDEWEB)

    Leroy, Adam K. [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Hughes, Annie [CNRS, IRAP, 9 av. du Colonel Roche, BP 44346, F-31028 Toulouse cedex 4 (France); Schruba, Andreas [Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse 1, D-85748 Garching (Germany); Rosolowsky, Erik [Department of Physics, University of Alberta, Edmonton, AB (Canada); Blanc, Guillermo A.; Escala, Andres [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago (Chile); Bolatto, Alberto D. [Department of Astronomy, Laboratory for Millimeter-wave Astronomy, and Joint Space Institute, University of Maryland, College Park, MD 20742 (United States); Colombo, Dario [Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany); Kramer, Carsten [Instituto Radioastronomía Milimétrica (IRAM), Av. Divina Pastora 7, Nucleo Central, E-18012 Granada (Spain); Kruijssen, J. M. Diederik [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Mönchhofstrasse 12-14, D-69120 Heidelberg (Germany); Meidt, Sharon; Querejeta, Miguel; Schinnerer, Eva; Sliwa, Kazimierz [Max Planck Institute für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Pety, Jerome [Institut de Radioastronomie Millimtrique (IRAM), 300 Rue de la Piscine, F-38406 Saint-Martin-d’Hères (France); Sandstrom, Karin [Center for Astrophysics and Space Sciences, Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093 (United States); and others

    2016-11-01

    The cloud-scale density, velocity dispersion, and gravitational boundedness of the interstellar medium (ISM) vary within and among galaxies. In turbulent models, these properties play key roles in the ability of gas to form stars. New high-fidelity, high-resolution surveys offer the prospect to measure these quantities across galaxies. We present a simple approach to make such measurements and to test hypotheses that link small-scale gas structure to star formation and galactic environment. Our calculations capture the key physics of the Larson scaling relations, and we show good correspondence between our approach and a traditional “cloud properties” treatment. However, we argue that our method is preferable in many cases because of its simple, reproducible characterization of all emission. Using low-J ¹²CO data from recent surveys, we characterize the molecular ISM at 60 pc resolution in the Antennae, the Large Magellanic Cloud (LMC), M31, M33, M51, and M74. We report the distributions of surface density, velocity dispersion, and gravitational boundedness at 60 pc scales and show galaxy-to-galaxy and intragalaxy variations in each. The distribution of flux as a function of surface density appears roughly lognormal with a 1σ width of ∼0.3 dex, though the center of this distribution varies from galaxy to galaxy. The 60 pc resolution line width and molecular gas surface density correlate well, which is a fundamental behavior expected for virialized or free-falling gas. Varying the measurement scale for the LMC and M31, we show that the molecular ISM has higher surface densities, lower line widths, and more self-gravity at smaller scales.

  2. The scale-dependent market trend: Empirical evidences using the lagged DFA method

    Science.gov (United States)

    Li, Daye; Kou, Zhun; Sun, Qiankun

    2015-09-01

    In this paper we carry out an empirical study and test the efficiency of 44 important market indexes at multiple scales. A modified method based on lagged detrended fluctuation analysis is utilized to extract the information on long-term correlations carried by the non-zero lags while keeping the margin of error small when measuring the local Hurst exponent. Our empirical results illustrate that a common pattern can be found in the majority of the measured market indexes, which tend to be persistent (local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with economic cycles, it can be concluded that economic cycles contribute to anti-persistence at large time scales but other factors are also at work. The empirical results support the view that financial markets are multi-fractal, and they indicate that deviations from efficiency, and the type of model appropriate for describing the trend of market prices, depend on the forecasting horizon.
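    The paper's modified, lagged variant of detrended fluctuation analysis is not reproduced in the abstract, but the standard DFA procedure on which it builds can be sketched as follows (a minimal sketch; the function name and scale choices are assumptions, not the authors' code):

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of a series by standard (lag-0) DFA.

    H > 0.5 indicates persistence, H < 0.5 anti-persistence.
    """
    y = np.cumsum(x - np.mean(x))  # integrated (profile) series
    flucts = []
    for n in scales:
        n_boxes = len(y) // n
        f2 = []
        for i in range(n_boxes):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            # remove the local linear trend within each box
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # slope of log F(n) against log n estimates the Hurst exponent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]
```

    For uncorrelated noise the estimate is close to 0.5; persistent series yield larger values and anti-persistent series smaller ones.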

  3. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    2002-01-01

    This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of space in the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g

  4. A new method to determine large scale structure from the luminosity distance

    International Nuclear Information System (INIS)

    Romano, Antonio Enea; Chiang, Hsu-Wen; Chen, Pisin

    2014-01-01

    The luminosity distance can be used to determine the properties of large scale structure around the observer. To this purpose we develop a new inversion method to map luminosity distance to a Lemaitre–Tolman–Bondi (LTB) metric based on the use of the exact analytical solution for Einstein equations. The main advantages of this approach are an improved numerical accuracy and stability, an exact analytical setting of the initial conditions for the differential equations which need to be solved, and validity for any sign of the functions determining the LTB geometry. Given the fully analytical form of the differential equations, this method also simplifies the calculation of the redshift expansion around the apparent horizon point where the numerical solution becomes unstable. We test the method by inverting the supernovae Ia luminosity distance function corresponding to the best fit ΛCDM model. We find that only a limited range of initial conditions is compatible with observations; otherwise a transition from redshift to blueshift can occur at relatively low redshift. Although LTB solutions without a cosmological constant have been shown not to be compatible with all available sets of observational data, those studies normally fit data assuming a special functional ansatz for the inhomogeneity profile, which often depends only on a few parameters. Inversion methods, on the contrary, are able to fully explore the freedom in fixing the functions which determine a LTB solution. Another important application concerns LTB solutions not as cosmological models, but rather as tools to study the effects on observations made by a generic observer located in an inhomogeneous region of the Universe where a fully non-perturbative treatment involving exact solutions of Einstein equations is required. (paper)

  5. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as from parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible, and it ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  6. Scaling considerations related to interactions of hydrologic, pedologic and geomorphic processes (Invited)

    Science.gov (United States)

    Sidle, R. C.

    2013-12-01

    Hydrologic, pedologic, and geomorphic processes are strongly interrelated and affected by scale. These interactions exert important controls on runoff generation, preferential flow, contaminant transport, surface erosion, and mass wasting. Measurement of hydraulic conductivity (K) and infiltration capacity at small scales generally underestimates these values for application at larger field, hillslope, or catchment scales. Both vertical and slope-parallel saturated flow and related contaminant transport are often influenced by interconnected networks of preferential flow paths, which are not captured in K measurements derived from soil cores. Using such K values in models may underestimate water and contaminant fluxes and runoff peaks. As shown in small-scale runoff plot studies, infiltration rates are typically lower than integrated infiltration across a hillslope or in headwater catchments. The resultant greater infiltration-excess overland flow in small plots compared to larger landscapes is attributed to the lack of preferential flow continuity; plot border effects; greater homogeneity of rainfall inputs, topography and soil physical properties; and magnified effects of hydrophobicity in small plots. Isolated areas with high infiltration capacity can greatly reduce surface runoff and surface erosion at the hillslope scale. These hydropedologic and hydrogeomorphic processes are also relevant to both the occurrence and timing of landslides. The focus of many landslide studies has typically been either on small-scale vadose zone processes and how these affect soil mechanical properties, or on larger scale, more descriptive geomorphic studies. One of the issues in translating laboratory-based investigations on the geotechnical behavior of soils to field scales where landslides occur is the characterization of large-scale hydrological processes and flow paths that occur in heterogeneous and anisotropic porous media. These processes are not only affected

  7. Scoping Evaluation of the IRIS Radiation Environment by Using the FW-CADIS Method and SCALE MAVRIC Code

    International Nuclear Information System (INIS)

    Petrovic, B.

    2008-01-01

    IRIS is an advanced pressurized water reactor of integral configuration. This integral configuration with its relatively large reactor vessel and thick downcomer (1.7 m) results in a significant reduction of radiation field and material activation. It thus enables setting up aggressive dose reduction objectives, but at the same time presents challenges for the shielding analysis, which needs to be performed over a large spatial domain and include flux attenuation by many orders of magnitude. The Monte Carlo method enables accurately representing irregular geometry and potential streaming paths, but may require significant computational effort to reduce statistical uncertainty to within the acceptable range. Variance reduction methods do exist, but they are designed to provide results for individual detectors and in limited regions, whereas in the scoping phase of the IRIS shielding analysis the results are sought throughout the whole containment. To facilitate such analysis, the SCALE MAVRIC code was employed. Based on the recently developed FW-CADIS method, MAVRIC uses forward and adjoint deterministic transport theory calculations to generate effective biasing parameters for Monte Carlo simulations throughout the problem. Previous studies have confirmed the potential of this method for obtaining Monte Carlo solutions with acceptable statistics over large spatial domains. The objective of this work was to evaluate the capability of FW-CADIS/MAVRIC to efficiently perform the required shielding analysis of IRIS. For that purpose, a representative model was prepared, retaining the main problem characteristics, i.e., a large spatial domain (over 10 m in each dimension) and significant attenuation (over 12 orders of magnitude), but geometrically rather simplified. The obtained preliminary results indicate that the FW-CADIS method implemented through the MAVRIC sequence in SCALE will enable determination of radiation field throughout the large spatial domain of the IRIS nuclear

  8. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of Dark Matter models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  9. Comparative Study of Laboratory-Scale and Prototypic Production-Scale Fuel Fabrication Processes and Product Characteristics

    International Nuclear Information System (INIS)

    2014-01-01

    An objective of the High Temperature Gas Reactor fuel development and qualification program for the United States Department of Energy has been to qualify fuel fabricated in prototypic production-scale equipment. The quality and characteristics of the tristructural isotropic (TRISO) coatings on fuel kernels are influenced by the equipment scale and processing parameters. Some characteristics affecting product quality were suppressed while others became more significant in the larger equipment. Changes to the composition and method of producing resinated graphite matrix material have eliminated the use of hazardous, flammable liquids and enabled it to be procured as a vendor-supplied feed stock. A new method of overcoating TRISO particles with the resinated graphite matrix eliminates the use of hazardous, flammable liquids, produces highly spherical particles with a narrow size distribution, and attains product yields in excess of 99%. Compact fabrication processes have been scaled up and automated, with relatively minor changes in compact quality compared to manual laboratory-scale processes. The impact on the statistical variability of the processes and the products as equipment was scaled up is discussed. The prototypic production-scale processes produce test fuels that meet fuel quality specifications.

  10. A typology of health marketing research methods--combining public relations methods with organizational concern.

    Science.gov (United States)

    Rotarius, Timothy; Wan, Thomas T H; Liberman, Aaron

    2007-01-01

    Research plays a critical role throughout virtually every conduit of the health services industry. The key terms of research, public relations, and organizational interests are discussed. Combining public relations as a strategic methodology with the organizational concern as a factor, a typology of four different research methods emerges. These four health marketing research methods are: investigative, strategic, informative, and verification. The implications of these distinct and contrasting research methods are examined.

  11. On estimates of the pion-nucleon sigma term by the dispersion relations and taking into account the interrelation between the chiral and scale invariance breaking

    International Nuclear Information System (INIS)

    Efrosinin, V.P.; Zaikin, D.A.

    1983-01-01

    Possible reasons for the disagreement between estimates of the pion-nucleon σ term obtained by the method of dispersion relations with extrapolation to the Cheng-Dashen point and by alternative methods making no use of such extrapolation are investigated. One of the reasons may be that the πN amplitude is not analytic in the variable t at ν=0. A method which is not so strongly influenced by the nonanalyticity is suggested to estimate the σ term, making use of the threshold data for the πN amplitude. The relation between scale and chiral invariance breaking is discussed, and the resulting estimate of the σ term is presented. Both estimates give close results (42 and 34 MeV) which do not contradict one another within the uncertainties of the methods.

  12. Scaling Relations of Local Magnitude versus Moment Magnitude for Sequences of Similar Earthquakes in Switzerland

    KAUST Repository

    Bethmann, F.

    2011-03-22

    Theoretical considerations and empirical regressions show that, in the magnitude range between 3 and 5, local magnitude, ML, and moment magnitude, Mw, scale 1:1. Previous studies suggest that for smaller magnitudes this 1:1 scaling breaks down. However, the scatter between ML and Mw at small magnitudes is usually large and the resulting scaling relations are therefore uncertain. In an attempt to reduce these uncertainties, we first analyze the ML versus Mw relation based on 195 events, induced by the stimulation of a geothermal reservoir below the city of Basel, Switzerland. Values of ML range from 0.7 to 3.4. From these data we derive a scaling of ML ∼ 1.5 Mw over the given magnitude range. We then compare peak Wood-Anderson amplitudes to the low-frequency plateau of the displacement spectra for six sequences of similar earthquakes in Switzerland in the range of 0.5 ≤ ML ≤ 4.1. Because effects due to the radiation pattern and to the propagation path between source and receiver are nearly identical at a particular station for all events in a given sequence, the scatter in the data is substantially reduced. Again we obtain a scaling equivalent to ML ∼ 1.5 Mw. Based on simulations using synthetic source time functions for different magnitudes and Q values estimated from spectral ratios between downhole and surface recordings, we conclude that the observed scaling can be explained by attenuation and scattering along the path. Other effects that could explain the observed magnitude scaling, such as a possible systematic increase of stress drop or rupture velocity with moment magnitude, are masked by attenuation along the path.
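    The reported piecewise behaviour, ML scaling as ∼1.5 Mw at small magnitudes but 1:1 (unit slope) between magnitudes 3 and 5, can be sketched as follows; the crossover magnitude and the offset c are illustrative assumptions, not fitted values from the paper:

```python
def ml_from_mw(mw, c=0.0):
    """Piecewise ML-Mw scaling suggested by the abstract: slope 1.5
    below an assumed crossover at Mw = 3, unit slope above it.
    The crossover and the offset c are hypothetical, for illustration."""
    if mw < 3.0:
        return 1.5 * mw + c
    # unit slope, chosen so the two branches join continuously at Mw = 3
    return mw + 1.5 + c
```

    Note that "1:1 scaling" constrains only the slope, so the constant offset between the branches is left free.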

  13. Top-spray fluid bed coating: Scale-up in terms of relative droplet size and drying force

    DEFF Research Database (Denmark)

    Hede, Peter Dybdahl; Bach, P.; Jensen, Anker Degn

    2008-01-01

    in terms of particle size fractions larger than 425 µm determined by sieve analysis. Results indicated that the particle size distribution may be reproduced across scale with statistically valid precision by keeping the drying force and the relative droplet size constant across scale. It is also shown...

  14. Scaling of Metabolic Scaling within Physical Limits

    Directory of Open Access Journals (Sweden)

    Douglas S. Glazier

    2014-10-01

    Full Text Available Both the slope and elevation of scaling relationships between log metabolic rate and log body size vary taxonomically and in relation to physiological or developmental state, ecological lifestyle and environmental conditions. Here I discuss how the recently proposed metabolic-level boundaries hypothesis (MLBH) provides a useful conceptual framework for explaining and predicting much, but not all, of this variation. This hypothesis is based on three major assumptions: (1) various processes related to body volume and surface area exert state-dependent effects on the scaling slope for metabolic rate in relation to body mass; (2) the elevation and slope of metabolic scaling relationships are linked; and (3) both intrinsic (anatomical, biochemical and physiological) and extrinsic (ecological) factors can affect metabolic scaling. According to the MLBH, the diversity of metabolic scaling relationships occurs within physical boundary limits related to body volume and surface area. Within these limits, specific metabolic scaling slopes can be predicted from the metabolic level (or scaling elevation) of a species or group of species. In essence, metabolic scaling itself scales with metabolic level, which is in turn contingent on various intrinsic and extrinsic conditions operating in physiological or evolutionary time. The MLBH represents a “meta-mechanism” or collection of multiple, specific mechanisms that have contingent, state-dependent effects. As such, the MLBH is Darwinian in approach (the theory of natural selection is also meta-mechanistic), in contrast to currently influential metabolic scaling theory that is Newtonian in approach (i.e., based on unitary deterministic laws). Furthermore, the MLBH can be viewed as part of a more general theory that includes other mechanisms that may also affect metabolic scaling.

  15. Finite element analysis of multi-material models using a balancing domain decomposition method combined with the diagonal scaling preconditioner

    International Nuclear Information System (INIS)

    Ogino, Masao

    2016-01-01

    Actual problems in science and industrial applications are modeled by multi-materials and large-scale unstructured meshes, and finite element analysis has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative methods for linear finite element equations suffer from slow convergence or fail to converge at all. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method, and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. Some research has addressed this issue, but it is not suitable for cases with complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-materials, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)
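    A diagonal scaling (Jacobi) preconditioner on its own can be sketched in a few lines; this is only the simpler ingredient of the record's method, since Scaled-BDD combines it with a balancing domain decomposition preconditioner that is far more involved and not reproduced here. The test matrix below is hypothetical.

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-8, maxit=500):
    """Conjugate gradient iteration with a diagonal (Jacobi) scaling
    preconditioner: M^-1 = diag(A)^-1."""
    Minv = 1.0 / np.diag(A)          # preconditioner: inverse of diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    for it in range(maxit):
        rz = r @ z
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        beta = (r @ z) / rz
        p = z + beta * p
    return x, it + 1

# An SPD test matrix with wildly different "material" coefficients on
# the diagonal, mimicking a multi-material stiffness matrix
A = np.diag([1.0, 1.0, 1e6, 1e6]) + 0.1 * np.ones((4, 4))
b = np.ones(4)
x, iters = pcg_jacobi(A, b)
print(np.allclose(A @ x, b))  # True
```

    Diagonal scaling equalizes the very different row magnitudes that multi-material models produce, which is exactly the situation where an unscaled balancing preconditioner degrades.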

  16. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed by incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  17. Spatial dependence of predictions from image segmentation: a method to determine appropriate scales for producing land-management information

    Science.gov (United States)

    A challenge in ecological studies is defining scales of observation that correspond to relevant ecological scales for organisms or processes. Image segmentation has been proposed as an alternative to pixel-based methods for scaling remotely-sensed data into ecologically-meaningful units. However, to...

  18. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Daily, Jeffrey A. [Washington State Univ., Pullman, WA (United States)

    2015-05-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore’s law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequence homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores
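    The optimal pairwise scoring that underlies this kind of homology detection can be sketched with a minimal Smith-Waterman local-alignment score (quadratic-time dynamic programming). The dissertation's contribution is running such optimal alignment at scale on distributed-memory machines; the scoring parameters below are illustrative defaults, not the ones used in the work.

```python
def sw_score(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score via O(len(a)*len(b)) DP,
    keeping only one previous row of the score matrix."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            s = max(0,
                    prev[j - 1] + (match if ca == cb else mismatch),
                    prev[j] + gap,          # gap in b
                    cur[j - 1] + gap)       # gap in a
            cur.append(s)
            best = max(best, s)
        prev = cur
    return best

print(sw_score("ACGT", "ACGT"))  # 8: four matches at +2 each
print(sw_score("AAAA", "CCCC"))  # 0: no positive-scoring local alignment
```

    Doing this for all pairs of millions of sequences is what makes the problem a supercomputing one: the record's exact-matching filters and work stealing exist to prune and balance the quadratic number of such pairwise computations.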

  19. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    International Nuclear Information System (INIS)

    Daily, Jeffrey A.

    2015-01-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore's law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequence homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or 'homologous') on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores

  20. Laboratory-scale method for enzymatic saccharification of lignocellulosic biomass at high-solids loadings

    Directory of Open Access Journals (Sweden)

    Dibble Clare J

    2009-11-01

    Full Text Available Abstract Background Screening new lignocellulosic biomass pretreatments and advanced enzyme systems at process relevant conditions is a key factor in the development of economically viable lignocellulosic ethanol. Shake flasks, the reaction vessel commonly used for screening enzymatic saccharifications of cellulosic biomass, do not provide adequate mixing at high-solids concentrations when shaking is not supplemented with hand mixing. Results We identified roller bottle reactors (RBRs as laboratory-scale reaction vessels that can provide adequate mixing for enzymatic saccharifications at high-solids biomass loadings without any additional hand mixing. Using the RBRs, we developed a method for screening both pretreated biomass and enzyme systems at process-relevant conditions. RBRs were shown to be scalable between 125 mL and 2 L. Results from enzymatic saccharifications of five biomass pretreatments of different severities and two enzyme preparations suggest that this system will work well for a variety of biomass substrates and enzyme systems. A study of intermittent mixing regimes suggests that mass transfer limitations of enzymatic saccharifications at high-solids loadings are significant but can be mitigated with a relatively low amount of mixing input. Conclusion Effective initial mixing to promote good enzyme distribution and continued, but not necessarily continuous, mixing is necessary in order to facilitate high biomass conversion rates. The simplicity and robustness of the bench-scale RBR system, combined with its ability to accommodate numerous reaction vessels, will be useful in screening new biomass pretreatments and advanced enzyme systems at high-solids loadings.

  1. Scale-invariant gravity: geometrodynamics

    International Nuclear Information System (INIS)

    Anderson, Edward; Barbour, Julian; Foster, Brendan; Murchadha, Niall O

    2003-01-01

    We present a scale-invariant theory, conformal gravity, which closely resembles the geometrodynamical formulation of general relativity (GR). While previous attempts to create scale-invariant theories of gravity have been based on Weyl's idea of a compensating field, our direct approach dispenses with this and is built by extension of the method of best matching w.r.t. scaling developed in the parallel particle dynamics paper by one of the authors. In spatially compact GR, there is an infinity of degrees of freedom that describe the shape of 3-space which interact with a single volume degree of freedom. In conformal gravity, the shape degrees of freedom remain, but the volume is no longer a dynamical variable. Further theories and formulations related to GR and conformal gravity are presented. Conformal gravity is successfully coupled to scalars and the gauge fields of nature. It should describe the solar system observations as well as GR does, but its cosmology and quantization will be completely different

  2. Vibrational frequency scaling factors for correlation consistent basis sets and the methods CC2 and MP2 and their spin-scaled SCS and SOS variants

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no [Centre for Theoretical and Computational Chemistry CTCC, Department of Chemistry, University of Tromsø, N-9037 Tromsø (Norway); Törk, Lisa; Hättig, Christof, E-mail: christof.haettig@rub.de [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44801 Bochum (Germany)

    2014-11-21

    We present scaling factors for vibrational frequencies calculated within the harmonic approximation and the correlated wave-function methods coupled cluster singles and doubles model (CC2) and Møller-Plesset perturbation theory (MP2) with and without a spin-component scaling (SCS or spin-opposite scaling (SOS)). Frequency scaling factors and the remaining deviations from the reference data are evaluated for several non-augmented basis sets of the cc-pVXZ family of generally contracted correlation-consistent basis sets as well as for the segmented contracted TZVPP basis. We find that the SCS and SOS variants of CC2 and MP2 lead to a slightly better accuracy for the scaled vibrational frequencies. The determined frequency scaling factors can also be used for vibrational frequencies calculated for excited states through response theory with CC2 and the algebraic diagrammatic construction through second order and their spin-component scaled variants.
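    A frequency scaling factor of the kind determined here is conventionally obtained by least squares: minimizing the sum of squared deviations between scaled harmonic frequencies c·ω and reference fundamentals ν gives the closed form c = Σωᵢνᵢ / Σωᵢ². The frequencies below are made-up placeholder values, not the CC2/MP2 data of the record.

```python
import numpy as np

# Hypothetical harmonic frequencies (cm^-1) and reference fundamentals
omega = np.array([1650.0, 3100.0, 1200.0])   # computed harmonic values
nu    = np.array([1600.0, 2950.0, 1170.0])   # reference frequencies

# Least-squares scaling factor minimizing sum_i (c*omega_i - nu_i)^2
c = np.sum(omega * nu) / np.sum(omega ** 2)
scaled = c * omega
print(round(c, 4))
```

    A single multiplicative factor cannot remove all anharmonicity error, which is why the record also reports the remaining deviations from the reference data after scaling.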

  3. A Real-time Generalization and Multi-scale Visualization Method for POI Data in Volunteered Geographic Information

    Directory of Open Access Journals (Sweden)

    YANG Min

    2015-02-01

    Full Text Available With the development of mobile and Web technologies, there has been an increasing number of map-based mashups displaying different kinds of POI data in volunteered geographic information. Due to the lack of suitable mechanisms for multi-scale visualization, the display of POI data often results in icon clustering problems, with icons touching and overlapping each other. This paper introduces a multi-scale visualization method for urban facility POI data that combines classic generalization methods with the online environment. Firstly, we organize the POI data into a hierarchical structure by preprocessing on the server side; the POI features are then retrieved based on the display scale on the client side, and a displacement operation is executed to resolve local icon conflicts. Experiments show that this approach not only meets real-time online requirements, but also achieves a better multi-scale representation of POI data.
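    The core scale-dependence of the icon-conflict problem can be sketched with a greedy selection pass: whether two POI icons collide depends on the display scale, not on the map coordinates alone. This is a simplified stand-in for the paper's hierarchical selection and displacement pipeline; the point data and pixel sizes are hypothetical.

```python
def select_icons(points, scale, icon_px=16):
    """Greedy scale-dependent selection: keep a POI only if its
    icon_px-wide screen-space icon does not overlap an already-kept
    icon. `points` are (x, y) map coordinates; `scale` is pixels per
    map unit at the current zoom level."""
    kept = []
    for x, y in points:
        px, py = x * scale, y * scale
        if all(abs(px - kx) >= icon_px or abs(py - ky) >= icon_px
               for kx, ky in kept):
            kept.append((px, py))
    return kept

pts = [(0, 0), (1, 0), (5, 0)]
print(len(select_icons(pts, scale=10)))   # 2: zoomed out, two icons collide
print(len(select_icons(pts, scale=100)))  # 3: zoomed in, all icons fit
```

    The paper's method improves on this in two ways: the hierarchy is precomputed server-side so selection is cheap at render time, and conflicting icons are displaced rather than simply dropped.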

  4. Large-scale photochemical reactions of nanocrystalline suspensions: a promising green chemistry method.

    Science.gov (United States)

    Veerman, Marcel; Resendiz, Marino J E; Garcia-Garibay, Miguel A

    2006-06-08

    Photochemical reactions in the solid state can be scaled up from a few milligrams to 10 grams by using colloidal suspensions of a photoactive molecular crystal prepared by the solvent shift method. Pure products are recovered by filtration, and the use of H2O as a suspension medium makes this method a very attractive one from a green chemistry perspective. Using the photodecarbonylation of dicumyl ketone (DCK) as a test system, we show that reaction efficiencies in colloidal suspensions rival those observed in solution. [reaction: see text]

  5. An accurate calibration method for accelerometer nonlinear scale factor on a low-cost three-axis turntable

    International Nuclear Information System (INIS)

    Pan, Jianye; Zhang, Chunxi; Cai, Qingzhong

    2014-01-01

    Strapdown inertial navigation system (SINS) requirements are very demanding on gyroscopes and accelerometers as well as on calibration. To improve the accuracy of SINS, high-accuracy calibration is needed. Adding the accelerometer nonlinear scale factor into the model and reducing estimation errors is essential for improving calibration methods. In this paper, the inertial navigation error model is simplified, including only velocity and tilt errors. Based on the simplified error model, the relationship between the navigation errors (the rates of change of velocity errors) and the inertial measurement unit (IMU) calibration parameters is presented. A tracking model is designed to estimate the rates of change of velocity errors. With a special calibration procedure consisting of six rotation sequences, the accelerometer nonlinear scale factor errors can be computed by the estimates of the rates of change of velocity errors. Simulation and laboratory test results show that the accelerometer nonlinear scale factor can be calibrated with satisfactory accuracy on a low-cost three-axis turntable in several minutes. The comparison with the traditional calibration method highlights the superior performance of the proposed calibration method without precise orientation control. In addition, the proposed calibration method saves a lot of time in comparison with the multi-position calibration method. (paper)
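    The role of the nonlinear scale factor can be illustrated with a simple sensor model: if the accelerometer output is K1·a + K2·a² + bias, a multi-position test exposing the axis to known specific forces lets a quadratic fit recover K2. The model and numbers below are a hypothetical illustration, not the paper's velocity-error-based estimation procedure.

```python
import numpy as np

g = 9.80665  # gravity magnitude used as the calibration reference

# Hypothetical sensor model: reading = K1*a + K2*a^2 + bias.
# In a multi-position test the sensitive axis sees known specific
# forces; here we simulate noiseless readings and recover the model.
true_K1, true_K2, true_bias = 1.002, 3e-4, 0.05
a_ref = g * np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # applied accelerations
reading = true_K1 * a_ref + true_K2 * a_ref ** 2 + true_bias

# Quadratic least-squares fit; np.polyfit returns highest degree first,
# so the leading coefficient is the nonlinear scale factor K2
K2, K1, bias = np.polyfit(a_ref, reading, 2)
print(round(K2, 6), round(K1, 4), round(bias, 3))
```

    The paper's contribution is recovering K2 without precise orientation control, by estimating the rates of change of velocity errors over six rotation sequences instead of relying on exactly known reference accelerations as this sketch does.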

  6. Dynamical properties of the growing continuum using multiple-scale method

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2008-12-01

    Full Text Available The theory of growth and remodeling is applied to a 1D continuum. This can serve, e.g., as a model of a muscle fibre or a piezoelectric stack. A hyperelastic material described by the free energy potential suggested by Fung is used, and the change of stiffness is taken into account. The corresponding equations define a dynamical system with two degrees of freedom. Its stability and the properties of its bifurcations are studied using the multiple-scale method. The conditions under which a degenerate Hopf bifurcation occurs are presented.

  7. Indomethacin nanocrystals prepared by different laboratory scale methods: effect on crystalline form and dissolution behavior

    Energy Technology Data Exchange (ETDEWEB)

    Martena, Valentina; Censi, Roberta [University of Camerino, School of Pharmacy (Italy); Hoti, Ela; Malaj, Ledjan [University of Tirana, Department of Pharmacy (Albania); Di Martino, Piera, E-mail: piera.dimartino@unicam.it [University of Camerino, School of Pharmacy (Italy)

    2012-12-15

    The objective of this study is to select very simple and well-known laboratory scale methods able to reduce the particle size of indomethacin down to the nanometric scale. The effect on the crystalline form and the dissolution behavior of the different samples was deliberately evaluated in the absence of any surfactants as stabilizers. Nanocrystals of indomethacin (IDM; native crystals are in the γ form) were obtained by three laboratory scale methods: A (Batch A: crystallization by solvent evaporation in a nano-spray dryer), B (Batches B-15 and B-30: wet milling and lyophilization), and C (Batches C-20-N and C-40-N: cryo-milling in the presence of liquid nitrogen). Nanocrystals obtained by method A (Batch A) crystallized into a mixture of α and γ polymorphic forms. IDM obtained by the two other methods remained in the γ form, and a different tendency toward decreased crystallinity was observed, with a more considerable decrease in crystalline degree for IDM milled for 40 min in the presence of liquid nitrogen. The intrinsic dissolution rate (IDR) revealed a higher dissolution rate for Batches A and C-40-N, due to the higher IDR of the α form relative to the γ form for Batch A, and the lower crystallinity degree for both Batches A and C-40-N. These factors, as well as the decrease in particle size, influenced the IDM dissolution rate from the particle samples. Modifications in the solid physical state that may occur using different particle size reduction treatments have to be taken into consideration during the scale-up and industrial development of new solid dosage forms.

  8. The development of a Codependency Scale

    Directory of Open Access Journals (Sweden)

    Cremonte, Mariana

    2013-04-01

    Full Text Available Codependency is defined as a dysfunctional pattern of relating to others, present in relatives of those with a substance use disorder or other chronic disease. It is characterized by emotional dependence, extreme focus on the other person, and self-neglect. Aim: to present the results of the development and validation of a new measure to evaluate codependency. Method: The Argentinean codependency scale was administered to a convenience sample of 347 subjects between 15 and 80 years of age in Mar del Plata. Exploratory factor analysis (principal axis extraction) was used. The number of factors was determined by parallel analysis. Internal consistency was assessed using Cronbach's alpha. Item-level statistics were also estimated. Results: Three factors were obtained. The internal consistency of the total scale and the three subscales was satisfactory. Relatives of alcohol- or drug-dependent persons and of people with other chronic diseases had significantly higher scores on the codependency measure than those in the general population group.
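    The internal-consistency statistic used here, Cronbach's alpha, has a simple closed form: alpha = k/(k-1) · (1 - Σ item variances / total-score variance). The sketch below computes it on made-up item responses, purely to show the formula; it is not the study's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of subject rows, each row holding
    one score per item."""
    k = len(items[0])                       # number of items
    def var(xs):                            # sample variance (ddof=1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: 5 subjects x 3 items on a Likert-type scale
data = [[3, 4, 3], [2, 2, 3], [4, 5, 5], [1, 2, 1], [3, 3, 4]]
alpha = cronbach_alpha(data)
print(round(alpha, 3))
```

    High alpha arises when the item scores covary strongly, which inflates the total-score variance relative to the sum of item variances.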

  9. Methods Dealing with Complexity in Selecting Joint Venture Contractors for Large-Scale Infrastructure Projects

    Directory of Open Access Journals (Sweden)

    Ru Liang

    2018-01-01

    Full Text Available The magnitude of business dynamics has increased rapidly due to the increased complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly difficult for a single contractor to “go it alone.” As a consequence, joint venture contractors with diverse strengths and weaknesses cooperatively bid for such projects. Understanding project complexity and deciding on the optimal joint venture contractor is challenging. This paper studies how to select joint venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem: one based on ideal points and the other on balanced ideal advantages. Both methods consider individual differences in expert judgment and contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China is used to demonstrate how to apply the two methods and their advantages.
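    The record does not spell out its ideal-point method, so the sketch below shows a generic TOPSIS-style ideal-point ranking: candidates are scored on weighted attributes and ranked by relative closeness to the ideal point versus the anti-ideal point. All scores, weights, and attribute names here are hypothetical.

```python
import numpy as np

# Rows = candidate joint ventures, columns = benefit attributes scored
# 1-10 (e.g. technical strength, financial capacity, experience)
scores = np.array([
    [8.0, 6.0, 7.0],   # candidate A
    [6.0, 9.0, 5.0],   # candidate B
    [7.0, 7.0, 8.0],   # candidate C
])
weights = np.array([0.5, 0.3, 0.2])

norm = scores / np.linalg.norm(scores, axis=0)   # vector-normalize columns
v = norm * weights
ideal, anti = v.max(axis=0), v.min(axis=0)       # ideal / anti-ideal points
d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal
d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)              # higher = better
print(int(np.argmax(closeness)))                 # index of top candidate
```

    The paper's methods go further by aggregating individual expert judgments before this ranking step; a real application would also distinguish benefit from cost attributes when forming the ideal point.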

  10. Stepwise integral scaling method for severe accident analysis and its application to corium dispersion in direct containment heating

    International Nuclear Information System (INIS)

    Ishii, M.; Zhang, G.; No, H. C.; Eltwila, F.

    1994-01-01

    Accident sequences which lead to severe core damage and to the possible release of radioactive fission products into the environment have a very low probability. However, interest in this area has increased significantly due to the occurrence of the small-break loss-of-coolant accident at TMI-2, which led to partial core damage, and of the Chernobyl accident in the former USSR, which led to extensive core disassembly and significant release of fission products over several countries. In particular, the latter accident raised international concern over the potential consequences of severe accidents in nuclear reactor systems. One of the significant shortcomings in the analysis of severe accidents is the lack of well-established and reliable scaling criteria for various multiphase flow phenomena. Such scaling criteria are essential to severe accident analysis, because full-scale tests are basically impossible to perform. They are required for (1) designing scaled-down or simulation experiments, (2) evaluating data and extrapolating the data to prototypic conditions, and (3) developing correctly scaled physical models and correlations. In view of this, a new scaling method is developed for the analysis of severe accidents. Its approach is quite different from conventional methods. In order to demonstrate its applicability, this new stepwise integral scaling method has been applied to the analysis of the corium dispersion problem in direct containment heating. (orig.)

  11. An efficient method based on the uniformity principle for synthesis of large-scale heat exchanger networks

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Chen, Shang

    2016-01-01

    Highlights: • Two dimensionless uniformity factors are presented for heat exchanger networks. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by the Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors that describe the heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping are deduced. Additionally, a novel algorithm that combines deterministic and stochastic optimizations to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, named the Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely, the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.

  12. Application of the Hybrid Simulation Method for the Full-Scale Precast Reinforced Concrete Shear Wall Structure

    Directory of Open Access Journals (Sweden)

    Zaixian Chen

    2018-02-01

    Full Text Available The hybrid simulation (HS) testing method combines physical testing and numerical simulation, and provides a viable alternative for evaluating structural seismic performance. Most studies have focused on the accuracy, stability and reliability of the HS method in small-scale tests. It is a challenge to evaluate the seismic performance of a twelve-story precast reinforced concrete shear-wall structure using the HS method, taking the full-scale bottom three stories of the structural model as the physical substructure and an elastic non-linear model as the numerical substructure. This paper employs an equivalent force control (EFC) method with an implicit integration algorithm to deal with the numerical integration of the equation of motion (EOM) and the control of the loading device. Because of the arrangement of the test model, an elastic non-linear numerical model is used to simulate the numerical substructure. A non-subdivision strategy for the displacement inflection point of the numerical substructure is used to simplify the simulation of the numerical substructure and thus reduce measurement error. The parameters of the EFC method are calculated based on analytical and numerical studies and applied to the actual full-scale HS test. Finally, the accuracy and feasibility of the EFC-based HS method are verified experimentally through substructure HS tests of the precast reinforced concrete shear-wall structure model, and the test results for the descending stage can be conveniently obtained from the EFC-based HS method.

  13. On BLM scale fixing in exclusive processes

    International Nuclear Information System (INIS)

    Anikin, I.V.; Pire, B.; Szymanowski, L.; Teryaev, O.V.; Wallon, S.

    2005-01-01

    We discuss the BLM scale fixing procedure in exclusive electroproduction processes in the Bjorken regime with rather large x_B. We show that in the case of vector meson production, dominated here by quark exchange, the usual way to apply the BLM method fails due to singularities present in the equations fixing the BLM scale. We argue that the BLM scale should be extracted from the squared amplitudes, which are directly related to observables. (orig.)

  14. On BLM scale fixing in exclusive processes

    Energy Technology Data Exchange (ETDEWEB)

    Anikin, I.V. [JINR, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Universite Paris-Sud, LPT, Orsay (France); Pire, B. [Ecole Polytechnique, CPHT, Palaiseau (France); Szymanowski, L. [Soltan Institute for Nuclear Studies, Warsaw (Poland); Univ. de Liege, Inst. de Physique, Liege (Belgium); Teryaev, O.V. [JINR, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Wallon, S. [Universite Paris-Sud, LPT, Orsay (France)

    2005-07-01

    We discuss the BLM scale fixing procedure in exclusive electroproduction processes in the Bjorken regime with rather large x_B. We show that in the case of vector meson production, dominated here by quark exchange, the usual way to apply the BLM method fails due to singularities present in the equations fixing the BLM scale. We argue that the BLM scale should be extracted from the squared amplitudes, which are directly related to observables. (orig.)

  15. Large Scale Water Vapor Sources Relative to the October 2000 Piedmont Flood

    Science.gov (United States)

    Turato, Barbara; Reale, Oreste; Siccardi, Franco

    2003-01-01

    Very intense mesoscale or synoptic-scale rainfall events can occasionally be observed in the Mediterranean region without any deep cyclone developing over the areas affected by precipitation. In these perplexing cases the synoptic situation can superficially look similar to cases in which very little precipitation occurs. These situations could possibly baffle the operational weather forecasters. In this article, the major precipitation event that affected Piedmont (Italy) between 13 and 16 October 2000 is investigated. This is one of the cases in which no intense cyclone was observed within the Mediterranean region at any time, only a moderate system was present, and yet exceptional rainfall and flooding occurred. The emphasis of this study is on the moisture origin and transport. Moisture and energy balances are computed on different space- and time-scales, revealing that precipitation exceeds evaporation over an area inclusive of Piedmont and the northwestern Mediterranean region, on a time-scale encompassing the event and about two weeks preceding it. This is suggestive of an important moisture contribution originating from outside the region. A synoptic and dynamic analysis is then performed to outline the potential mechanisms that could have contributed to the large-scale moisture transport. The central part of the work uses a quasi-isentropic water-vapor back trajectory technique. The moisture sources obtained by this technique are compared with the results of the balances and with the synoptic situation, to unveil possible dynamic mechanisms and physical processes involved. It is found that moisture sources on a variety of atmospheric scales contribute to this event. First, an important contribution is caused by the extratropical remnants of former tropical storm Leslie. The large-scale environment related to this system allows a significant amount of moisture to be carried towards Europe. This happens on a time-scale of about 5-15 days preceding the

  16. New asteroseismic scaling relations based on the Hayashi track relation applied to red giant branch stars in NGC 6791 and NGC 6819

    International Nuclear Information System (INIS)

    Wu, T.; Li, Y.; Hekker, S.

    2014-01-01

    Stellar mass M, radius R, and gravity g are important basic parameters in stellar physics. Accurate values for these parameters can be obtained from the gravitational interaction between stars in multiple systems or from asteroseismology. Stars in a cluster are thought to be formed coevally from the same interstellar cloud of gas and dust. The cluster members are therefore expected to have some properties in common. These common properties strengthen our ability to constrain stellar models and asteroseismically derived M, R, and g when tested against an ensemble of cluster stars. Here we derive new scaling relations based on a relation for stars on the Hayashi track (√T_eff ∝ g^p R^q) to determine the masses and metallicities of red giant branch stars in the open clusters NGC 6791 and NGC 6819 from the global oscillation parameters Δν (the large frequency separation) and ν_max (the frequency of maximum oscillation power). The Δν and ν_max values are derived from Kepler observations. From the analysis of these new relations we derive: (1) direct observational evidence that the masses of red giant branch stars in a cluster are the same within their uncertainties, (2) new methods to derive M and z of the cluster in a self-consistent way from Δν and ν_max, with lower intrinsic uncertainties, and (3) the mass dependence in the Δν-ν_max relation for red giant branch stars.
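    For context, the classical asteroseismic scaling relations (the baseline the paper's new Hayashi-track-based relations improve upon) give M and R directly from Δν, ν_max and T_eff. The sketch below implements those classical relations with approximate solar reference values; the red-giant inputs are made-up but realistic.

```python
# Approximate solar reference values for the classical scaling relations
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN = 135.1       # muHz
TEFF_SUN = 5777.0     # K

def seismic_mass_radius(nu_max, dnu, teff):
    """Classical scaling relations:
    M/Msun = (nu_max/nu_max_sun)^3 (dnu/dnu_sun)^-4 (Teff/Teff_sun)^1.5
    R/Rsun = (nu_max/nu_max_sun)   (dnu/dnu_sun)^-2 (Teff/Teff_sun)^0.5
    """
    m = ((nu_max / NU_MAX_SUN) ** 3
         * (dnu / DNU_SUN) ** -4
         * (teff / TEFF_SUN) ** 1.5)
    r = ((nu_max / NU_MAX_SUN)
         * (dnu / DNU_SUN) ** -2
         * (teff / TEFF_SUN) ** 0.5)
    return m, r

# Typical red-giant-branch values (illustrative only)
m, r = seismic_mass_radius(nu_max=30.0, dnu=4.0, teff=4800.0)
print(round(m, 2), round(r, 1))
```

    Because M scales with the third and fourth powers of the observables, small errors in ν_max and Δν are strongly amplified; combining the relations with cluster-ensemble constraints, as the paper does, is one way to reduce these intrinsic uncertainties.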

  17. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    International Nuclear Information System (INIS)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; Nørskov, Jens K.

    2017-01-01

    Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by training a Gaussian process to predict adsorption energies from group-additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
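One ingredient named above, a transition-state (Brønsted-Evans-Polanyi-type) scaling relation, is simply a linear fit of activation energies against reaction energies. A minimal sketch with hypothetical (ΔE, Ea) pairs; this is a generic fit, not the paper's trained surrogate.

```python
def fit_linear_scaling(x, y):
    """Ordinary least-squares fit y ≈ a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical (reaction energy, activation energy) pairs in eV
dE = [-0.8, -0.3, 0.1, 0.5, 0.9]
Ea = [0.45, 0.78, 1.02, 1.31, 1.58]
alpha, beta = fit_linear_scaling(dE, Ea)
print(f"Ea ≈ {alpha:.2f}*dE + {beta:.2f}")
```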

  18. Scaling of lifting forces in relation to object size in whole body lifting

    NARCIS (Netherlands)

    Kingma, I.; van Dieen, J.H.; Toussaint, H.M.

    2005-01-01

    Subjects prepare for a whole body lifting movement by adjusting their posture and scaling their lifting forces to the expected object weight. The expectancy is based on visual and haptic size cues. This study aimed to find out whether lifting force overshoots related to object size cues disappear or

  19. Full-scale demonstration of EBS construction technology I. Block, pellet and in-situ compaction method

    International Nuclear Information System (INIS)

    Toguri, Satohito; Asano, Hidekazu; Takao, Hajime; Matsuda, Takeshi; Amemiya, Kiyoshi

    2008-01-01

    (i) Bentonite Block: Applicability of the manufacturing technology for buffer material was verified by manufacturing a full-scale bentonite ring consisting of one-eighth (1/8) segment blocks (Outside Diameter (OD): 2,220 mm; H: 300 mm). Density characteristics, dimensions and scale effects, considering the tunnel environment during transportation, were evaluated. Vacuum suction technology was selected as the handling technology for the ring. Hoisting characteristics of the vacuum suction technology were presented through evaluation of the mechanical properties of the buffer material, the friction between blocks, etc., using a full-scale bentonite ring (OD 2,200 mm, H 300 mm). Designs of the bentonite block and emplacement equipment were presented in consideration of manufacturability of the block, stability of handling and improvement of emplacement efficiency. (ii) Bentonite Pellet Filling: Basic characteristics such as water penetration, swelling and thermal conductivity of various kinds of bentonite pellet were collected by laboratory-scale tests. Applicability of pellet filling technology was evaluated by a horizontal filling test using a simulated full-scale drift tunnel (OD 2,200 mm, L 6 m). Filling density, grain size distribution, etc. were also measured. (iii) In-Situ Compaction of Bentonite: The dynamic compaction method (heavy weight fall method) was selected as the in-situ compaction technology. A compaction test using a full-scale disposal pit (OD 2,360 mm) was carried out. Basic specifications of the compacting equipment and the applicability of in-situ compaction technology were presented. Density, density distribution of the buffer material and energy acting on the wall of the pit were also measured. (author)

  20. GRAPHICS-IMAGE MIXED METHOD FOR LARGE-SCALE BUILDINGS RENDERING

    Directory of Open Access Journals (Sweden)

    Y. Zhou

    2018-05-01

    Full Text Available Urban 3D model data are huge and unstructured; LOD and out-of-core algorithms are usually used to reduce the amount of data drawn in each frame and so improve rendering efficiency. When the scene is large enough, however, even complex optimization algorithms struggle to achieve good results. Building on these traditional approaches, we propose a graphics-image mixed method for large-scale building rendering. First, the view field is divided into several regions; the graphics-image mixed method renders the scene both to the screen and to an FBO, and the FBO is then blended with the screen. The algorithm was tested on huge CityGML model data for the urban areas of New York, containing 188,195 public building models, and compared with the Cesium platform. The experiments show that the system ran smoothly and confirm that the algorithm can roam a more massive building scene under the same hardware conditions, and can render the scene without visual loss.

  1. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    Science.gov (United States)

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity.
We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of
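The coupling step described above samples the equilibrium age distribution whenever cells cross from the mean-field to the stochastic region. A toy sketch, assuming for illustration an exponential equilibrium age distribution (the model's actual distribution may differ), sampled by inverse CDF:

```python
import math
import random

def sample_equilibrium_age(rate, rng):
    """Inverse-CDF sample from an exponential equilibrium age distribution.

    Assumes the age distribution has settled to p(a) = rate * exp(-rate * a),
    a simplifying stand-in for the model's actual equilibrium form.
    """
    u = rng.random()
    return -math.log(1.0 - u) / rate

rng = random.Random(42)
ages = [sample_equilibrium_age(0.5, rng) for _ in range(10000)]
mean_age = sum(ages) / len(ages)
print(f"sample mean age ≈ {mean_age:.2f} (theory: {1 / 0.5:.2f})")
```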

  2. Inhibitory effect of glutamic acid on the scale formation process using electrochemical methods.

    Science.gov (United States)

    Karar, A; Naamoune, F; Kahoul, A; Belattar, N

    2016-08-01

    The formation of calcium carbonate CaCO3 in water has important implications in geoscience research, ocean chemistry studies, CO2 emission issues and biology. In industry, the scaling phenomenon may cause technical problems, such as reduced heat-transfer efficiency in cooling systems and obstruction of pipes. This paper focuses on the use of glutamic acid (GA) for reducing CaCO3 scale formation on metallic surfaces in the water of the Bir Aissa region. The anti-scaling properties of GA, used as a complexing agent of Ca(2+) ions, were evaluated by chronoamperometry and electrochemical impedance spectroscopy in conjunction with microscopic examination. A chemical and electrochemical study of this water shows a high calcium concentration. Characterization using X-ray diffraction reveals that while the CaCO3 scale formed chemically is a mixture of calcite, aragonite and vaterite, the one deposited electrochemically is pure calcite. The effect of temperature on the efficiency of the inhibitor was investigated. At 30 and 40°C, complete scaling inhibition was obtained at a GA concentration of 18 mg/L, with a 90.2% efficiency rate. However, the efficiency of GA decreased at 50 and 60°C.
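Inhibition efficiency figures like the 90.2% quoted above are conventionally computed from blank versus inhibited residual currents; both the convention and the current values below are assumptions for illustration, not data from the record.

```python
def inhibition_efficiency(i_blank, i_inhibited):
    """Percent inhibition from residual limiting currents.

    A common convention (assumed here, not taken from the paper):
    IE% = (i_blank - i_inhibited) / i_blank * 100
    """
    return (i_blank - i_inhibited) / i_blank * 100.0

# Hypothetical residual current densities (uA/cm^2)
print(f"IE = {inhibition_efficiency(205.0, 20.1):.1f}%")
```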

  3. A NEW SCALING RELATION FOR H II REGIONS IN SPIRAL GALAXIES: UNVEILING THE TRUE NATURE OF THE MASS-METALLICITY RELATION

    Energy Technology Data Exchange (ETDEWEB)

    Rosales-Ortega, F. F.; Diaz, A. I. [Departamento de Fisica Teorica, Universidad Autonoma de Madrid, E-28049 Madrid (Spain); Sanchez, S. F.; Iglesias-Paramo, J.; Vilchez, J. M.; Mast, D. [Instituto de Astrofisica de Andalucia (CSIC), Camino Bajo de Huetor s/n, Aptdo. 3004, E-18080 Granada (Spain); Bland-Hawthorn, J. [Sydney Institute for Astronomy, School of Physics A28, University of Sydney, NSW 2006 (Australia); Husemann, B., E-mail: frosales@cantab.net [Leibniz-Institut fuer Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam (Germany)

    2012-09-10

    We demonstrate the existence of a local mass-metallicity-star formation relation using spatially resolved optical spectroscopy of H II regions in the local universe. One of the projections of this distribution, the local mass-metallicity relation, extends over a wide range in this parameter space: three orders of magnitude in mass and a factor of eight in metallicity. We explain the new relation as the combined effect of the differential distributions of mass and metallicity in the disks of galaxies, and a selective star formation efficiency. We use this local relation to reproduce, with notable agreement, the mass-metallicity relation seen in galaxies, and conclude that the latter is a scale-up integrated effect of a local relation, supporting the inside-out growth and downsizing scenarios of galaxy evolution.

  4. Geant4-related R&D for new particle transport methods

    CERN Document Server

    Augelli, M; Evans, T; Gargioni, E; Hauf, S; Kim, C H; Kuster, M; Pia, M G; Filho, P Queiroz; Quintieri, L; Saracco, P; Santos, D Souza; Weidenspointner, G; Zoglauer, A

    2009-01-01

    An R&D project was launched in 2009 to address fundamental methods in radiation transport simulation and revisit Geant4 kernel design to cope with new experimental requirements. The project focuses on simulation at different scales in the same experimental environment: this set of problems requires new methods across the current boundaries of condensed-random-walk and discrete transport schemes. The project also foresees exploiting and extending existing Geant4 features to apply Monte Carlo and deterministic transport methods in the same simulation environment. An overview of this new R&D associated with Geant4 is presented, together with the first developments in progress.

  5. Mokken scale analysis of mental health and well-being questionnaire item responses: a non-parametric IRT method in empirical research for applied health researchers

    Directory of Open Access Journals (Sweden)

    Stochl Jan

    2012-06-01

    Full Text Available Abstract Background Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than, for example, the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Methods Scalability of data from (1) a cross-sectional health survey (the Scottish Health Education Population Survey) and (2) a general population birth cohort study (the National Child Development Study) illustrates the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12-item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. Results and conclusions After an initial analysis example in which we select items by phrasing (six positively versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12) – when binary scored – were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech’s “well-being” and “distress” clinical scales). An illustration of ordinal item analysis
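Mokken scaling judges scalability through Loevinger's H coefficient. A minimal, textbook-style computation for binary items (rows of 0/1 responses); this is a generic sketch, not the software tooling used in the record.

```python
from itertools import combinations

def loevinger_H(data):
    """Overall scalability coefficient H for binary item responses.

    data: list of response vectors (rows = persons, columns = items, 0/1).
    H = 1 - (observed Guttman errors) / (errors expected under independence).
    """
    n = len(data)
    k = len(data[0])
    p = [sum(row[j] for row in data) / n for j in range(k)]
    obs = exp = 0.0
    for a, b in combinations(range(k), 2):
        easy, hard = (a, b) if p[a] >= p[b] else (b, a)
        # Guttman error: failing the easier item while passing the harder one
        obs += sum(1 for row in data if row[easy] == 0 and row[hard] == 1)
        exp += n * (1 - p[easy]) * p[hard]
    return 1.0 - obs / exp

# A perfect Guttman scale produces no errors, so H = 1
perfect = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(loevinger_H(perfect))
```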

  6. Quantum cosmological relational model of shape and scale in 1D

    International Nuclear Information System (INIS)

    Anderson, Edward

    2011-01-01

    Relational particle models are useful toy models for quantum cosmology and the problem of time in quantum general relativity. This paper shows how to extend existing work on concrete examples of relational particle models in 1D to include a notion of scale. This is useful as regards forming a tight analogy with quantum cosmology and the emergent semiclassical time and hidden time approaches to the problem of time. This paper shows furthermore that the correspondence between relational particle models and classical and quantum cosmology can be strengthened using judicious choices of the mechanical potential. This gives relational particle mechanics models with analogues of spatial curvature, cosmological constant, dust and radiation terms. A number of these models are then tractable at the quantum level. These models can be used to study important issues (1) in canonical quantum gravity: the problem of time, the semiclassical approach to it and timeless approaches to it (such as the naive Schroedinger interpretation and records theory) and (2) in quantum cosmology, such as in the investigation of uniform states, robustness and the qualitative understanding of the origin of structure formation.

  7. Conformal methods in general relativity

    CERN Document Server

    Valiente Kroon, Juan A

    2016-01-01

    This book offers a systematic exposition of conformal methods and how they can be used to study the global properties of solutions to the equations of Einstein's theory of gravity. It shows that combining these ideas with differential geometry can elucidate the existence and stability of the basic solutions of the theory. Introducing the differential geometric, spinorial and PDE background required to gain a deep understanding of conformal methods, this text provides an accessible account of key results in mathematical relativity over the last thirty years, including the stability of de Sitter and Minkowski spacetimes. For graduate students and researchers, this self-contained account includes useful visual models to help the reader grasp abstract concepts and a list of further reading, making this the perfect reference companion on the topic.

  8. Comparison of two different methods for evaluating the hydrodynamic performance of an industrial-scale fish-rearing unit

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; McLean, Ewen

    2004-01-01

    Laboratory-scale physical and mathematical models were evaluated for their utility in examining the hydrodynamic performance of a commercial fish-rearing tank. Each method was appraised with the common objective of predicting characteristic hydrodynamic behaviour of a full-scale tank. The two...

  9. Accessible methods for the dynamic time-scale decomposition of biochemical systems.

    Science.gov (United States)

    Surovtsova, Irina; Simus, Natalia; Lorenz, Thomas; König, Artjom; Sahle, Sven; Kummer, Ursula

    2009-11-01

    The growing complexity of biochemical models calls for means to rationally dissect the networks into meaningful and rather independent subnetworks. Such an approach should ensure an understanding of the system without relying on heuristics. Important for the success of such an approach are its accessibility and the clarity of the presentation of the results. In order to achieve this goal, we developed a method which is a modification of the classical approach of time-scale separation. This modified method, as well as the more classical approach, has been implemented for time-dependent application within the widely used software COPASI. The implementation includes different possibilities for the representation of the results, including 3D visualization. The methods are included in COPASI, which is free for academic use and available at www.copasi.org. irina.surovtsova@bioquant.uni-heidelberg.de Supplementary data are available at Bioinformatics online.
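Time-scale separation methods like the one described rest on the spread of the system Jacobian's eigenvalue magnitudes: widely separated eigenvalues indicate fast modes that can be relaxed. A self-contained sketch for a 2x2 Jacobian (a generic illustration, not the COPASI implementation):

```python
import math

def timescales_2x2(a, b, c, d):
    """Characteristic time scales 1/|Re(lambda)| of the Jacobian [[a, b], [c, d]].

    Computed from the trace/determinant form of the eigenvalues; widely
    separated time scales are the signature exploited by separation methods.
    """
    tr = a + d
    det = a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        s = math.sqrt(disc)
        res = [(tr + s) / 2.0, (tr - s) / 2.0]
    else:
        res = [tr / 2.0, tr / 2.0]   # complex pair: real part only
    return sorted(1.0 / abs(r) for r in res)

# Stiff toy system: one fast mode (eigenvalue -100) and one slow mode (-1)
fast, slow = timescales_2x2(-100.0, 0.0, 0.0, -1.0)
print(f"fast ~ {fast} s, slow ~ {slow} s")
```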

  10. Cross-cultural measurement invariance in the satisfaction with food-related life scale in older adults from two developing countries.

    Science.gov (United States)

    Schnettler, Berta; Miranda-Zapata, Edgardo; Lobos, Germán; Lapo, María; Grunert, Klaus G; Adasme-Berríos, Cristian; Hueche, Clementina

    2017-05-30

    Nutrition is one of the major determinants of successful aging. The Satisfaction with Food-related Life (SWFL) scale measures a person's overall assessment regarding their food and eating habits. The SWFL scale has been used in older adult samples across different countries in Europe, Asia and America; however, no studies have evaluated the cross-cultural measurement invariance of the scale in older adult samples. Therefore, we evaluated the measurement invariance of the SWFL scale across older adults from Chile and Ecuador. Stratified random sampling was used to recruit a sample of older adults of both genders from Chile (mean age = 71.38, SD = 6.48, range = 60-92) and from Ecuador (mean age = 73.70, SD = 7.45, range = 60-101). Participants reported their levels of satisfaction with food-related life by completing the SWFL scale, which consists of five items grouped into a single dimension. Confirmatory factor analysis (CFA) was used to examine cross-cultural measurement invariance of the SWFL scale. Results showed that the SWFL scale exhibited partial measurement invariance, with invariance of all factor loadings, invariance in all but one item's threshold (item 1) and invariance in all items' uniqueness (residuals), which leads us to conclude that there is a reasonable level of partial measurement invariance for the CFA model of the SWFL scale when comparing the Chilean and Ecuadorian older adult samples. The lack of invariance in item 1 confirms previous studies with adults and emerging adults in Chile that suggest this item is culture-sensitive. We recommend revising the wording of the first item of the SWFL in order to relate the statement to the person's life. The SWFL scale shows partial measurement invariance across older adults from Chile and Ecuador. A 4-item version of the scale (excluding item 1) provides the basis for international comparisons of satisfaction with food-related life in older adults from developing

  11. FT-IR spectra of the anti-HIV nucleoside analogue d4T (Stavudine). Solid state simulation by DFT methods and scaling by different procedures

    Science.gov (United States)

    Alcolea Palafox, M.; Kattan, D.; Afseth, N. K.

    2018-04-01

    A theoretical and experimental vibrational study of the anti-HIV d4T (stavudine or Zerit) nucleoside analogue was carried out. The predicted spectra of the three most stable conformers in the biologically active anti-form of the isolated state were compared, as were the conformers of the natural nucleoside thymidine. The calculated spectra were scaled using different scaling procedures and three DFT methods. The TLSE procedure leads to the lowest error and is thus recommended for scaling. With the population of these conformers the IR gas-phase spectra were predicted. The crystal unit cells of the different polymorphic forms of d4T were simulated through dimer forms using DFT methods, and the scaled spectra of these dimer forms were compared. The FT-IR spectrum was recorded in the solid state in the 400-4000 cm-1 range. The respective vibrational bands were analyzed and assigned to different normal modes of vibration by comparison with the scaled vibrational values of the different dimer forms. Through this comparison, the polymorphic form of the solid-state sample was identified. The study indicates that d4T exists only in the ketonic form in the solid state. The results obtained are in agreement with those determined for related anti-HIV nucleoside analogues.
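Scaling procedures of the kind compared above adjust computed harmonic wavenumbers toward observed fundamentals. The simplest variant is a single least-squares scale factor, shown here with hypothetical wavenumbers; the TLSE procedure named in the record follows the authors' own, more elaborate definition.

```python
def optimal_scale_factor(calc, obs):
    """Least-squares scale factor lambda minimizing sum of (lambda*calc - obs)^2."""
    return sum(c * o for c, o in zip(calc, obs)) / sum(c * c for c in calc)

# Hypothetical DFT harmonic wavenumbers (cm^-1) vs observed fundamentals
calc = [3248.0, 1755.0, 1520.0, 1105.0]
obs = [3105.0, 1691.0, 1458.0, 1060.0]
lam = optimal_scale_factor(calc, obs)
print(f"scale factor = {lam:.4f}")
```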

  12. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    Science.gov (United States)

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Abstract Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data are model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
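As a contrast to the model-driven approaches discussed, the simplest data-driven normalization scales each run to a common median. A sketch with made-up intensities; cyclic-LOESS additionally removes intensity-dependent trends, which this plain scaling does not.

```python
import statistics

def median_normalize(runs):
    """Scale each run so its median intensity matches the grand median.

    A minimal data-driven normalization: no internal standards or quality
    control runs are required, only the measured intensities themselves.
    """
    medians = [statistics.median(r) for r in runs]
    grand = statistics.median(medians)
    return [[x * grand / m for x in r] for r, m in zip(runs, medians)]

# Three hypothetical measurement runs with a systematic intensity offset
runs = [[100.0, 200.0, 400.0], [50.0, 100.0, 200.0], [200.0, 400.0, 800.0]]
normed = median_normalize(runs)
print([statistics.median(r) for r in normed])
```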

  13. Sensitivity and responsiveness of the health-related quality of life in stroke patients-40 (HRQOLISP-40) scale.

    Science.gov (United States)

    Vincent-Onabajo, Grace O; Owolabi, Mayowa O; Hamzat, Talhatu K

    2014-01-01

    To investigate the sensitivity and responsiveness of the Health-Related Quality of Life in Stroke Patients-40 (HRQOLISP-40) scale in evaluating stroke patients from onset to 12 months. Fifty-five patients with first-incidence stroke were followed up for 12 months. The HRQOLISP-40 scale was used to assess health-related quality of life (HRQOL) while stroke severity was assessed with the Stroke Levity Scale. Sensitivity to change was assessed by analyzing changes in the HRQOLISP-40 scores between pairs of months with paired samples t-test. Standardized effect size (SES) and standardized response mean (SRM) were used to express responsiveness. Overall HRQOL and domains in the physical sphere of the HRQOLISP-40 were sensitive to change at different time intervals in the first 12 months post-stroke. Marked responsiveness (SES and SRM >0.7) was demonstrated by the overall scale, and the physical, psycho-emotional and cognitive domains at varying time intervals. For instance, SRM was greater than 0.7 between 1 and 6, 3 and 12, 1 and 9, and 1 and 12 months for both the physical and psycho-emotional domains. The HRQOLISP-40 is a sensitive and responsive stroke-specific quality of life measure that can be used to evaluate the outcome of stroke rehabilitation. Enhancing the health-related quality of life (HRQOL) of stroke survivors can be regarded as the ultimate goal of stroke rehabilitation. Sensitive and responsive stroke-specific HRQOL measures are required for use in evaluative studies, and clinical trials and practice. The Health-Related Quality of Life in Stroke Patients-40 (HRQOLISP-40) is a sensitive and responsive stroke-specific scale.
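The responsiveness indices used above have standard definitions: SES divides the mean change by the SD of baseline scores, while SRM divides it by the SD of the change scores. A sketch with hypothetical scores, not the study's data:

```python
import statistics

def responsiveness(baseline, followup):
    """Standardized effect size (SES) and standardized response mean (SRM).

    SES = mean change / SD of baseline scores
    SRM = mean change / SD of the change scores
    """
    change = [f - b for b, f in zip(baseline, followup)]
    mean_change = statistics.mean(change)
    ses = mean_change / statistics.stdev(baseline)
    srm = mean_change / statistics.stdev(change)
    return ses, srm

# Hypothetical HRQOL scores for five patients at two assessments
base = [40.0, 45.0, 50.0, 55.0, 60.0]
follow = [52.0, 55.0, 62.0, 64.0, 70.0]
ses, srm = responsiveness(base, follow)
print(f"SES = {ses:.2f}, SRM = {srm:.2f}")
```

Both indices are well above the 0.7 threshold the record treats as marked responsiveness.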

  14. Methods of studying oxide scales grown on zirconium alloys in autoclaves and in a PWR

    International Nuclear Information System (INIS)

    Blank, H.; Bart, G.; Thiele, H.

    1992-01-01

    The analysis of water-side corrosion of zirconium alloys has been a field of research for more than 25 years, but the details of the mechanisms involved still cannot be put into a coherent picture. Improved methods are required to establish the details of the microstructure of the oxide scales. A new approach has been made for a general analysis of oxide specimens from scales grown on the zirconium-based cladding alloys of PWR rods in order to analyse the morphology of these scales, the topography of the oxide/metal interface and the crystal structures close to this interface: a) Instead of using the conventional pickling solutions, the Zr-alloys are dissolved using a 'softer' solution (Br2 in an organic solvent) in order to avoid damage to the oxide at the oxide/metal interface to be analysed by SEM (scanning electron microscopy). A second advantage of this method is easy etching of the grain structure of Zr-alloys for SEM analysis; b) By using the particular properties of the oxide scales, the corrosion-rate-determining innermost part of the oxide layer at the oxide/metal interface can be separated from the rest of the oxide scale and then analysed by SEM, STEM (scanning transmission electron microscopy), TEM (transmission electron microscopy) and electron diffraction after dissolution of the alloy. Examples are given from oxides grown on Zr-alloys in a pressurized water reactor and in autoclaves. (author) 8 figs., 3 tabs., 9 refs

  15. Comparison of two down-scaling methods for climate study and climate change on the mountain areas in France

    International Nuclear Information System (INIS)

    Piazza, Marie; Page, Christian; Sanchez-Gomez, Emilia; Terray, Laurent; Deque, Michel

    2013-01-01

    Mountain regions are highly vulnerable to climate change and are likely to be among the areas most impacted by global warming. But climate projections for the end of the 21st century are developed with general circulation models of climate, which do not have sufficient horizontal resolution to accurately evaluate the impacts of warming on these regions. Several techniques are then used to perform a spatial down-scaling (on the order of 10 km). There are two categories of down-scaling methods: dynamical methods, which require significant computational resources to run high-resolution regional climate simulations, and statistical methods, which require few resources but need a long, good-quality observation dataset. In this study, climate projections over France from the global atmospheric model ARPEGE are down-scaled with a dynamical method, using the ALADIN-Climate regional model, and with a statistical method, using the DSClim software developed at CERFACS. The two down-scaling methods are presented and the results on the climate of the French mountains are evaluated for the current climate. Both methods give similar results for average snowfall. However, extremes of total precipitation (droughts, intense precipitation events) are largely underestimated by the statistical method. The results of both methods are then compared for two future climate projections, according to the greenhouse gas emissions scenario A1B of the IPCC. The two methods agree on fewer frost days, a significant decrease in the amounts of solid precipitation and an average increase in the percentage of dry days of more than 10%. The results obtained for Corsica are more heterogeneous, but they are questionable because the reduced spatial domain is probably too small for meaningful statistical sampling. (authors)
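Statistical down-scaling of the kind performed by DSClim can be illustrated, in a much-simplified form, by quantile mapping between a coarse-model climatology and local observations; DSClim itself uses weather-type analogues, so this is only a generic sketch with made-up values.

```python
import bisect

def quantile_map(value, model_sorted, obs_sorted):
    """Map a model value to the observed distribution at the same quantile rank.

    A generic statistical-downscaling building block: find where the value
    sits in the model climatology, then read off the matching local quantile.
    """
    n = len(model_sorted)
    rank = min(bisect.bisect_left(model_sorted, value), n - 1)
    return obs_sorted[round(rank * (len(obs_sorted) - 1) / (n - 1))]

model_clim = sorted([0.0, 1.0, 2.0, 4.0, 8.0])      # coarse-model precipitation
station_clim = sorted([0.0, 0.5, 3.0, 7.0, 15.0])   # local observations
print(quantile_map(4.0, model_clim, station_clim))
```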

  16. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and by the DUA method. The DUA method gives a more accurate result based upon only two model executions, compared to fifty executions in the statistical case.
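The DUA approach propagates parameter uncertainty through derivatives. A first-order sketch on the borehole-flow benchmark mentioned above, using central differences in place of a computer-calculus compiler like GRESS/ADGEN; the nominal values and standard deviations below are illustrative assumptions.

```python
import math

def borehole_flow(rw, r, Tu, Hu, Tl, Hl, L, Kw):
    """Flow rate (m^3/yr) through a borehole; a standard benchmark function."""
    ln_r = math.log(r / rw)
    return (2.0 * math.pi * Tu * (Hu - Hl) /
            (ln_r * (1.0 + 2.0 * L * Tu / (ln_r * rw ** 2 * Kw) + Tu / Tl)))

def propagate_uncertainty(f, x, sigma, h=1e-6):
    """First-order (derivative-based) uncertainty propagation.

    sigma_y^2 = sum_i (df/dx_i)^2 * sigma_i^2, derivatives by central difference.
    """
    var = 0.0
    for i, s in enumerate(sigma):
        xp, xm = list(x), list(x)
        xp[i] += h * abs(x[i])
        xm[i] -= h * abs(x[i])
        d = (f(*xp) - f(*xm)) / (xp[i] - xm[i])
        var += (d * s) ** 2
    return math.sqrt(var)

# Nominal parameter values and (hypothetical) standard deviations
x = [0.1, 500.0, 80000.0, 1050.0, 80.0, 750.0, 1500.0, 10000.0]
sigma = [0.01, 50.0, 5000.0, 20.0, 10.0, 20.0, 100.0, 1000.0]
q = borehole_flow(*x)
print(f"Q = {q:.1f} +/- {propagate_uncertainty(borehole_flow, x, sigma):.1f}")
```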

  17. Application of wavelet scaling function expansion continuous-energy resonance calculation method to MOX fuel problem

    International Nuclear Information System (INIS)

    Yang, W.; Wu, H.; Cao, L.

    2012-01-01

    More and more MOX fuels have been used all over the world in the past several decades. Compared with UO2 fuel, MOX fuel has some new features. For example, the neutron spectrum is harder and more resonance interference effects arise within the resonance energy range because more resonant nuclides are contained in the MOX fuel. In this paper, the wavelet scaling function expansion method is applied to study the resonance behavior of plutonium isotopes within MOX fuel. The wavelet scaling function expansion continuous-energy self-shielding method was developed recently. It has been validated and verified by comparison to Monte Carlo calculations. In this method, continuous-energy cross-sections are utilized within the resonance energy range, which means it is capable of solving problems with serious resonance interference effects without iteration calculations. Therefore, this method is naturally suited to MOX fuel resonance calculations. Furthermore, plutonium isotopes show fierce oscillations of the total cross-section within the thermal energy range, especially 240Pu and 242Pu. To take the thermal resonance effect of plutonium isotopes into consideration, the wavelet scaling function expansion continuous-energy resonance calculation code WAVERESON is enhanced by applying the free-gas scattering kernel to obtain the continuous-energy scattering source within the thermal energy range (2.1 eV to 4.0 eV), in contrast to the resonance energy range in which the elastic scattering kernel is utilized. Finally, all of the calculation results of WAVERESON are compared with MCNP calculations. (authors)

  18. Pain point system scale (PPSS): a method for postoperative pain estimation in retrospective studies

    Directory of Open Access Journals (Sweden)

    Gkotsi A

    2012-11-01

    Full Text Available Anastasia Gkotsi,1 Dimosthenis Petsas,2 Vasilios Sakalis,3 Asterios Fotas,3 Argyrios Triantafyllidis,3 Ioannis Vouros,3 Evangelos Saridakis,2 Georgios Salpiggidis,3 Athanasios Papathanasiou3 1Department of Experimental Physiology, Aristotle University of Thessaloniki, Thessaloniki, Greece; 2Department of Anesthesiology, 3Department of Urology, Hippokration General Hospital, Thessaloniki, Greece Purpose: Pain rating scales are widely used for pain assessment. Nevertheless, a new tool is required for pain assessment in retrospective studies. Methods: The postoperative pain episodes, during the first postoperative day, of three patient groups were analyzed. Each pain episode was assessed with a visual analog scale, a numerical rating scale, a verbal rating scale, and a new tool, the pain point system scale (PPSS), based on the analgesics administered. The type of analgesic was defined by an artificial neural network system based on the authors' clinic protocol, patient comorbidities, pain assessment tool scores, and preadministered medications. At each pain episode, each patient was asked to complete the three pain scales. Bartlett's test and the Kaiser-Meyer-Olkin criterion were used to evaluate sample sufficiency. The proper scoring system was defined by varimax rotation. Spearman's and Pearson's coefficients assessed the correlation of the PPSS with the known pain scales. Results: A total of 262 pain episodes were evaluated in 124 patients. The PPSS scored one point for each dose of paracetamol, three points for each nonsteroidal anti-inflammatory drug or codeine dose, and seven points for each dose of opioids. The correlation between the visual analog scale and the PPSS was found to be strong and linear (rho: 0.715; P < 0.001; Pearson: 0.631; P < 0.001). Conclusion: The PPSS correlated well with the known pain scales and could be used safely in the evaluation of postoperative pain in retrospective studies. Keywords: pain scale, retrospective studies, pain point system
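
    The scoring rule reported in the Results (one point per paracetamol dose, three per NSAID or codeine dose, seven per opioid dose) is simple enough to sketch directly; the helper function and example episode below are illustrative:

```python
# Point weights as reported in the study: one point per dose of
# paracetamol, three per NSAID or codeine dose, seven per opioid dose.
PPSS_POINTS = {"paracetamol": 1, "nsaid": 3, "codeine": 3, "opioid": 7}

def ppss_score(doses):
    """Sum PPSS points over a list of (drug_class, n_doses) records."""
    return sum(PPSS_POINTS[drug] * n for drug, n in doses)

# Hypothetical first postoperative day: 2 paracetamol, 1 NSAID, 1 opioid
print(ppss_score([("paracetamol", 2), ("nsaid", 1), ("opioid", 1)]))  # 12
```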

  19. Precision Scaling Relations for Disk Galaxies in the Local Universe

    Science.gov (United States)

    Lapi, A.; Salucci, P.; Danese, L.

    2018-05-01

    We build templates of rotation curves as a function of the I-band luminosity via the mass modeling (by the sum of a thin exponential disk and a cored halo profile) of suitably normalized, stacked data from wide samples of local spiral galaxies. We then exploit such templates to determine fundamental stellar and halo properties for a sample of about 550 local disk-dominated galaxies with high-quality measurements of the optical radius R opt and of the corresponding rotation velocity V opt. Specifically, we determine the stellar M ⋆ and halo M H masses, the halo size R H and velocity scale V H, and the specific angular momenta of the stellar j ⋆ and dark matter j H components. We derive global scaling relationships involving such stellar and halo properties both for the individual galaxies in our sample and for their mean within bins; the latter are found to be in pleasing agreement with previous determinations by independent methods (e.g., abundance matching techniques, weak-lensing observations, and individual rotation curve modeling). Remarkably, the size of our sample and the robustness of our statistical approach allow us to attain an unprecedented level of precision over an extended range of mass and velocity scales, with 1σ dispersion around the mean relationships of less than 0.1 dex. We thus set new standard local relationships that must be reproduced by detailed physical models, which offer a basis for improving the subgrid recipes in numerical simulations, that provide a benchmark to gauge independent observations and check for systematics, and that constitute a basic step toward the future exploitation of the spiral galaxy population as a cosmological probe.

  20. An enquiry into the method of paired comparison: reliability, scaling, and Thurstone's Law of Comparative Judgment

    Science.gov (United States)

    Thomas C. Brown; George L. Peterson

    2009-01-01

    The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...

  1. Assessment of nuclear data needs for broad-group SCALE library related to WWER spent fuel applications

    International Nuclear Information System (INIS)

    Zalesky, K.; Markova, L.

    1999-12-01

    A preliminary study was made of the feasibility of generating a broad-group SCALE library for WWER spent fuel applications. The SCALE code system has been installed and is being used in many countries operating WWER-type reactors for criticality and shielding analyses as well as spent fuel isotopic inventory calculations, but still without extensive validation and verification for the WWER environment. This study should contribute to QA for SCALE code system applications to WWER calculations, as a basis on which generation of the specific WWER SCALE library can be prepared. Possible ways of developing the broad-group library are described. (author)

  2. A mixed-methods study of system-level sustainability of evidence-based practices in 12 large-scale implementation initiatives.

    Science.gov (United States)

    Scudder, Ashley T; Taber-Thomas, Sarah M; Schaffner, Kristen; Pemberton, Joy R; Hunter, Leah; Herschell, Amy D

    2017-12-07

    In recent decades, evidence-based practices (EBPs) have been broadly promoted in community behavioural health systems in the United States of America, yet reported EBP penetration rates remain low. Determining how to systematically sustain EBPs in complex, multi-level service systems has important implications for public health. This study examined factors impacting the sustainability of parent-child interaction therapy (PCIT) in large-scale initiatives in order to identify potential predictors of sustainment. A mixed-methods approach to data collection was used. Qualitative interviews and quantitative surveys examining sustainability processes and outcomes were completed by participants from 12 large-scale initiatives. Sustainment strategies fell into nine categories, including infrastructure, training, marketing, integration and building partnerships. Strategies involving integration of PCIT into existing practices and quality monitoring predicted sustainment, while financing also emerged as a key factor. The reported factors and strategies impacting sustainability varied across initiatives; however, integration into existing practices, monitoring quality and financing appear central to high levels of sustainability of PCIT in community-based systems. More detailed examination of the progression of specific activities related to these strategies may aid in identifying priorities to include in strategic planning of future large-scale initiatives. ClinicalTrials.gov ID NCT02543359; Protocol number PRO12060529.

  3. Evaluation method of economic efficiency of industrial scale research based on an example of coking blend pre-drying technology

    Directory of Open Access Journals (Sweden)

    Żarczyński Piotr

    2017-01-01

    Full Text Available Research on new and innovative solutions, technologies, and products carried out on an industrial scale is the most reliable method of verifying the validity of their implementation. The results obtained with this research method give almost one hundred percent certainty, although industrial-scale research also requires the largest expenditure. This method is therefore not commonly applied in industrial practice. When deciding whether to implement new and innovative technologies, it is reasonable to carry out industrial research, both for its cognitive value and for its economic efficiency. Research on an industrial scale may prevent investment failure as well as lead to improvements in the technology, which is the source of economic efficiency. In this paper, an evaluation model of the economic efficiency of industrial-scale research is presented. The model is based on the discount method and the decision tree model. A practical application of the proposed evaluation model is presented for the example of coal charge pre-drying technology before coke making in a coke oven battery, the implementation of which may be preceded by industrial-scale research on a new type of coal charge dryer.
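
    The discount-method and decision-tree reasoning can be sketched as a comparison of expected net present values with and without an industrial-scale trial; every number below (costs, probabilities, discount rate) is a hypothetical illustration, not data from the paper:

```python
# Every number here (costs, probability, discount rate) is a
# hypothetical illustration, not data from the paper.
def npv(cashflows, rate):
    """Discounted sum of (year, amount) cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in cashflows)

rate = 0.08
trial_cost = 1.0          # cost of the industrial-scale research trial
p_success = 0.7           # probability the trial confirms the technology
loss_on_failure = 5.0     # write-off if a full-scale implementation fails

# Ten years of annual benefit from a successful implementation
npv_benefit = npv([(t, 1.5) for t in range(1, 11)], rate)

# Branch 1: implement directly, bearing the full risk of failure
ev_direct = p_success * npv_benefit - (1 - p_success) * loss_on_failure

# Branch 2: run the trial first; a failed trial costs only the trial itself
ev_trial = -trial_cost + p_success * npv_benefit

print(ev_direct, ev_trial)
```

    With these assumed figures the trial branch has the higher expected value, which is the mechanism by which industrial-scale research "prevents investment failure" in the model.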

  4. A NEW SCALING RELATION FOR H II REGIONS IN SPIRAL GALAXIES: UNVEILING THE TRUE NATURE OF THE MASS-METALLICITY RELATION

    International Nuclear Information System (INIS)

    Rosales-Ortega, F. F.; Díaz, A. I.; Sánchez, S. F.; Iglesias-Páramo, J.; Vílchez, J. M.; Mast, D.; Bland-Hawthorn, J.; Husemann, B.

    2012-01-01

    We demonstrate the existence of a local mass, metallicity, star formation relation using spatially resolved optical spectroscopy of H II regions in the local universe. One of the projections of this distribution—the local mass-metallicity relation—extends over a wide range in this parameter space: three orders of magnitude in mass and a factor of eight in metallicity. We explain the new relation as the combined effect of the differential distributions of mass and metallicity in the disks of galaxies, and a selective star formation efficiency. We use this local relation to reproduce—with a noticeable agreement—the mass-metallicity relation seen in galaxies, and conclude that the latter is a scale-up integrated effect of a local relation, supporting the inside-out growth and downsizing scenarios of galaxy evolution.

  5. III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.

    Science.gov (United States)

    Davis-Kean, Pamela E; Jager, Justin

    2017-06-01

    For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited both in their power to detect differences and in the demographic diversity needed to generalize clearly and broadly. Thus, in this chapter we discuss the value of using existing large-scale data sets to test the complex questions of child development, and how to develop future large-scale data sets that are both representative and can answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.

  6. Large-scale nanofabrication of periodic nanostructures using nanosphere-related techniques for green technology applications (Conference Presentation)

    Science.gov (United States)

    Yen, Chen-Chung; Wu, Jyun-De; Chien, Yi-Hsin; Wang, Chang-Han; Liu, Chi-Ching; Ku, Chen-Ta; Chen, Yen-Jon; Chou, Meng-Cheng; Chang, Yun-Chorng

    2016-09-01

    Nanotechnology has been developed for decades and many interesting optical properties have been demonstrated. However, its further development depends on finding economical ways to fabricate such nanostructures on a large scale. Here, we demonstrate low-cost fabrication using nanosphere-related techniques, such as Nanosphere Lithography (NSL) and Nanospherical-Lens Lithography (NLL). NSL is a low-cost nanofabrication technique that can fabricate nano-triangle arrays covering a very large area. NLL is a very similar technique that uses polystyrene nanospheres to focus incoming ultraviolet light and expose the underlying photoresist (PR) layer. PR hole arrays form after developing, and metal nanodisk arrays can be fabricated by subsequent metal evaporation and lift-off. Nanodisk or nano-ellipse arrays with various sizes and aspect ratios are routinely fabricated in our research group. We also demonstrate the fabrication of more complicated nanostructures, such as nanodisk oligomers: by combining several other key technologies, such as angled exposure and deposition, we can modify these methods to obtain various metallic nanostructures. The metallic structures are of high fidelity and large scale. They can be transformed into semiconductor nanostructures and used in several green technology applications.

  7. Concepts: Integrating population survey data from different spatial scales, sampling methods, and species

    Science.gov (United States)

    Dorazio, Robert; Delampady, Mohan; Dey, Soumen; Gopalaswamy, Arjun M.; Karanth, K. Ullas; Nichols, James D.

    2017-01-01

    Conservationists and managers are continually under pressure from the public, the media, and political policy makers to provide “tiger numbers,” not just for protected reserves, but also for large spatial scales, including landscapes, regions, states, nations, and even globally. Estimating the abundance of tigers within relatively small areas (e.g., protected reserves) is becoming increasingly tractable (see Chaps. 9 and 10), but doing so for larger spatial scales still presents a formidable challenge. Those who seek “tiger numbers” are often not satisfied by estimates of tiger occupancy alone, regardless of the reliability of the estimates (see Chaps. 4 and 5). As a result, wherever tiger conservation efforts are underway, either substantially or nominally, scientists and managers are frequently asked to provide putative large-scale tiger numbers based either on a total count or on an extrapolation of some sort (see Chaps. 1 and 2).

  8. Aeroelastic scaling laws for gust load alleviation control system

    Directory of Open Access Journals (Sweden)

    Tang Bo

    2016-02-01

    Full Text Available Gust load alleviation (GLA) tests are widely conducted to study the effectiveness of control laws and methods. The physical parameters of the models in these tests are aeroelastically scaled, while scaling of the GLA control system has remained unaddressed. This paper concentrates on the scaling laws of the GLA control system. Through theoretical demonstration, the scaling criterion of a classical PID control system is derived, and a scaling methodology is provided and verified. By adopting the scaling laws in this paper, the gust response of the scaled model can be related directly and theoretically to the full-scale aircraft under both open-loop and closed-loop conditions. The influences of different scaling choices of an important non-dimensional parameter, the Froude number, are also studied. Furthermore, for practical application, a compensating method is given for cases where the theoretically scaled actuators or sensors cannot be obtained. The scaling laws of some non-linear elements in the control system, such as the rate and amplitude saturations in the actuator, are also studied and examined by numerical simulation.
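
    Since the abstract highlights the Froude number as the key non-dimensional parameter, the standard Froude scaling ratios can be sketched as follows; the 1:10 model ratio is an illustrative assumption, not a value from the paper:

```python
import math

def froude_ratios(length_ratio, density_ratio=1.0):
    """Model-to-full-scale ratios under Froude scaling (same gravity).

    Matching the Froude number V^2 / (g * L) with g fixed forces
    velocity ~ sqrt(L); the remaining ratios follow dimensionally.
    """
    lam = length_ratio
    return {
        "velocity":  math.sqrt(lam),
        "time":      math.sqrt(lam),          # T = L / V
        "frequency": 1.0 / math.sqrt(lam),
        "mass":      density_ratio * lam ** 3,
        "force":     density_ratio * lam ** 3,  # weight scales as m * g
    }

# A hypothetical 1:10 geometrically scaled GLA wind-tunnel model
r = froude_ratios(0.1)
print(r["velocity"], r["frequency"])  # ≈ 0.316, ≈ 3.162
```

    The frequency ratio above is why a Froude-scaled model's structural modes (and hence its gust response and control-law bandwidth) run faster than the full-scale aircraft's.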

  9. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Science.gov (United States)

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
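
    The transition matrix T and the diagonal criterion defining a CMMM can be illustrated with a minimal maximum-likelihood sketch; note that the paper's actual method is Bayesian and also infers the partition P, which this toy example does not attempt:

```python
import numpy as np

def transition_matrix(traj, n_states, lag):
    """Row-normalized count matrix of state-to-state transitions
    observed at a mesoscopic time step of `lag` frames."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-lag], traj[lag:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def is_cmmm(T, threshold=0.5):
    """Consistency criterion from the abstract: every diagonal element
    (self-transition probability) is at least `threshold`."""
    return bool(np.all(np.diag(T) >= threshold))

# Toy two-state trajectory that mostly dwells in each state
traj = [0] * 20 + [1] * 20 + [0] * 20
T = transition_matrix(traj, n_states=2, lag=1)
print(T)
print(is_cmmm(T))  # True: both states are strongly self-transitioning
```

    Increasing `lag` coarsens the time scale; in the paper's framework, the choice of τ and of the partition are made jointly by the Bayesian analysis rather than fixed by hand as here.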

  10. A NEW METHOD TO DIRECTLY MEASURE THE JEANS SCALE OF THE INTERGALACTIC MEDIUM USING CLOSE QUASAR PAIRS

    International Nuclear Information System (INIS)

    Rorai, Alberto; Hennawi, Joseph F.; White, Martin

    2013-01-01

    only 20 close quasar pair spectra can pinpoint the Jeans scale to ≅ 5% precision, independent of the amplitude T_0 and slope γ of the temperature-density relation of the IGM, T = T_0(ρ/ρ̄)^(γ−1). This exquisite sensitivity arises because even long-wavelength one-dimensional Fourier modes (∼10 Mpc, i.e., two orders of magnitude larger than the Jeans scale) are nevertheless dominated by projected small-scale three-dimensional (3D) power. Hence phase angle differences between all modes of quasar pair spectra actually probe the shape of the 3D power spectrum on scales comparable to the pair separation. We show that this new method for measuring the Jeans scale is unbiased and is insensitive to a battery of systematics that typically plague Lyα forest measurements, such as continuum fitting errors, imprecise knowledge of the noise level and/or spectral resolution, and metal-line absorption.

  11. A NEW METHOD TO DIRECTLY MEASURE THE JEANS SCALE OF THE INTERGALACTIC MEDIUM USING CLOSE QUASAR PAIRS

    Energy Technology Data Exchange (ETDEWEB)

    Rorai, Alberto; Hennawi, Joseph F. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); White, Martin [Department of Astronomy, University of California at Berkeley, 601 Campbell Hall, Berkeley, CA 94720-3411 (United States)

    2013-10-01

    only 20 close quasar pair spectra can pinpoint the Jeans scale to ≅ 5% precision, independent of the amplitude T_0 and slope γ of the temperature-density relation of the IGM, T = T_0(ρ/ρ̄)^(γ−1). This exquisite sensitivity arises because even long-wavelength one-dimensional Fourier modes (∼10 Mpc, i.e., two orders of magnitude larger than the Jeans scale) are nevertheless dominated by projected small-scale three-dimensional (3D) power. Hence phase angle differences between all modes of quasar pair spectra actually probe the shape of the 3D power spectrum on scales comparable to the pair separation. We show that this new method for measuring the Jeans scale is unbiased and is insensitive to a battery of systematics that typically plague Lyα forest measurements, such as continuum fitting errors, imprecise knowledge of the noise level and/or spectral resolution, and metal-line absorption.

  12. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    Science.gov (United States)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling oblique ionograms plays an important role in obtaining the ionospheric structure at the midpoint of an oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of an oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and Es layer, such as the maximum observation frequency, critical frequency, and virtual height. The method adopts a quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine the best-fit values of seven parameters and the initial values of the three remaining QP-model parameters, which define the search spaces that serve as input data for the HGA. The HGA then searches for the best-fit values of these three parameters within their search spaces, based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
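
    The fitting step, in which the HGA searches three model parameters by maximizing the fitness between a synthesized trace and the real trace, can be sketched with a plain real-coded genetic algorithm. The quadratic "trace" model, the bounds, and the GA settings below are illustrative stand-ins, not the QP model or the paper's hybrid algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the QP-model trace: group delay as a simple
# quadratic in frequency (hypothetical; not the real ionospheric model).
def trace(freq, a, b, c):
    return a + b * freq + c * freq ** 2

true_params = np.array([300.0, -20.0, 2.0])
freqs = np.linspace(5.0, 20.0, 40)
observed = trace(freqs, *true_params) + rng.normal(0.0, 1.0, freqs.size)

bounds = np.array([[0.0, 600.0], [-50.0, 0.0], [0.0, 5.0]])

def fitness(p):
    # Negative mean squared error between synthesized and observed traces
    return -np.mean((trace(freqs, *p) - observed) ** 2)

# Plain real-coded GA: tournament selection, blend crossover,
# Gaussian mutation, and elitism (the current best always survives)
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 3))
for _ in range(300):
    scores = np.array([fitness(p) for p in pop])
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
    parents = pop[winners]
    alpha = rng.uniform(0.0, 1.0, size=(len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += rng.normal(0.0, 0.01, children.shape) * (bounds[:, 1] - bounds[:, 0])
    children = np.clip(children, bounds[:, 0], bounds[:, 1])
    children[0] = pop[scores.argmax()]  # elitism
    pop = children

best = pop[np.argmax([fitness(p) for p in pop])]
print(best, -fitness(best))  # best-fit parameters and their MSE
```

    A hybrid GA like the paper's would additionally refine candidates with a local search step; the skeleton of selection, crossover, and mutation is the same.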

  13. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  14. Relative amplitude of medium-scale traveling ionospheric disturbances as deduced from global GPS network

    Science.gov (United States)

    Voeykov, S. V.; Afraimovich, E. L.; Kosogorov, E. A.; Perevalova, N. P.; Zhivetiev, I. V.

    We worked out a new method for estimation of the relative amplitude dI/I of total electron content (TEC) variations corresponding to medium-scale (30-300 km) traveling ionospheric disturbances (MS TIDs). Daily and latitudinal dependences of dI/I and dI/I probability distributions were obtained for 52 days of 1999-2005 with different levels of geomagnetic activity. Statistical estimations were obtained from the analysis of 10^6 series of TEC of 2.3-hour duration. To obtain statistically significant results, three latitudinal regions were chosen: a North American high-latitude region (50-80°N, 200-300°E; 59 GPS receivers), a North American mid-latitude region (20-50°N, 200-300°E; 817 receivers), and an equatorial belt (-20 to 20°N, 0-360°E; 76 receivers). We found that the average daily value of the relative amplitude of TEC variations dI/I changes from 0.3 to 10 proportionally to the value of the geomagnetic index Kp. This dependence is strong at high latitudes (dI/I = 0.37·Kp^1.5), somewhat weaker at mid latitudes (dI/I = 0.2·Kp^0.35), and weakest at the equatorial belt (dI/I = 0.1·Kp^0.6). The most important and most interesting result of our work is that during geomagnetically quiet conditions the relative amplitude of TEC variations at night considerably exceeds daytime values, by 3-5 times at equatorial and high latitudes and by 2 times at mid latitudes. But during strong magnetic storms the relative amplitude dI/I at high

  15. Restoring large-scale brain networks in PTSD and related disorders: a proposal for neuroscientifically-informed treatment interventions

    Directory of Open Access Journals (Sweden)

    Ruth A. Lanius

    2015-03-01

    Full Text Available Background: Three intrinsic connectivity networks in the brain, namely the central executive, salience, and default mode networks, have been identified as crucial to the understanding of higher cognitive functioning, and the functioning of these networks has been suggested to be impaired in psychopathology, including posttraumatic stress disorder (PTSD). Objective: (1) To describe three main large-scale networks of the human brain; (2) to discuss the functioning of these neural networks in PTSD and related symptoms; and (3) to offer hypotheses for neuroscientifically-informed interventions based on treating the abnormalities observed in these neural networks in PTSD and related disorders. Method: Literature relevant to this commentary was reviewed. Results: Increasing evidence for altered functioning of the central executive, salience, and default mode networks in PTSD has been demonstrated. We suggest that each network is associated with specific clinical symptoms observed in PTSD, including cognitive dysfunction (central executive network), increased and decreased arousal/interoception (salience network), and an altered sense of self (default mode network). Specific testable neuroscientifically-informed treatments aimed at restoring each of these neural networks and the related clinical dysfunction are proposed. Conclusions: Neuroscientifically-informed treatment interventions will be essential to future research agendas aimed at targeting specific PTSD and related symptoms.

  16. The predictive validity of the Drinking-Related Cognitions Scale in alcohol-dependent patients under abstinence-oriented treatment

    Directory of Open Access Journals (Sweden)

    Sawayama Toru

    2012-05-01

    Full Text Available Abstract Background Cognitive factors associated with drinking behavior such as positive alcohol expectancies, self-efficacy, perception of impaired control over drinking and perception of drinking problems are considered to have a significant influence on treatment effects and outcome in alcohol-dependent patients. However, the development of a rating scale on lack of perception or denial of drinking problems and impaired control over drinking has not been substantial, even though these are important factors in patients under abstinence-oriented treatment as well as participants in self-help groups such as Alcoholics Anonymous (AA. The Drinking-Related Cognitions Scale (DRCS is a new self-reported rating scale developed to briefly measure cognitive factors associated with drinking behavior in alcohol-dependent patients under abstinence-oriented treatment, including positive alcohol expectancies, abstinence self-efficacy, perception of impaired control over drinking, and perception of drinking problems. Here, we conducted a prospective cohort study to explore the predictive validity of DRCS. Methods Participants in this study were 175 middle-aged and elderly Japanese male patients who met the DSM-IV Diagnostic Criteria for Alcohol Dependence. DRCS scores were recorded before and after the inpatient abstinence-oriented treatment program, and treatment outcome was evaluated one year after discharge. Results Of the 175 participants, 30 were not available for follow-up; thus the number of subjects for analysis in this study was 145. When the total DRCS score and subscale scores were compared before and after inpatient treatment, a significant increase was seen for both scores. Both the total DRCS score and each subscale score were significantly related to total abstinence, percentage of abstinent days, and the first drinking occasion during the one-year post-treatment period. Therefore, good treatment outcome was significantly predicted by low

  17. Test-retest reliability of Antonovsky's 13-item sense of coherence scale in patients with hand-related disorders

    DEFF Research Database (Denmark)

    Hansen, Alice Ørts; Kristensen, Hanne Kaae; Cederlund, Ragnhild

    2017-01-01

    to be a powerful tool to measure the ICF component personal factors, which could have an impact on patients' rehabilitation outcomes. Implications for rehabilitation Antonovsky's SOC-13 scale showed test-retest reliability for patients with hand-related disorders. The SOC-13 scale could be a suitable tool to help...... measure personal factors....

  18. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    Science.gov (United States)

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting, and with all configurations tested separately with the -server parameter deactivated and activated, altogether 12800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used that allows mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
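
    As an illustration of the kind of computation being benchmarked, here is a minimal sketch of the VIKOR method, which the study found most suitable; the decision matrix and weights are hypothetical, and all criteria are assumed to be benefit criteria that vary across alternatives:

```python
import numpy as np

def vikor(F, weights, v=0.5):
    """Rank alternatives (rows of F, benefit criteria) by the VIKOR Q
    index; lower Q is better. v weighs group utility against regret."""
    F = np.asarray(F, dtype=float)
    w = np.asarray(weights, dtype=float)
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    # Weighted, normalized distance of each alternative from the ideal
    d = w * (f_best - F) / (f_best - f_worst)
    S, R = d.sum(axis=1), d.max(axis=1)   # group utility and max regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

# Hypothetical agent choosing among three options on three criteria
F = [[7, 9, 9],
     [8, 7, 8],
     [9, 6, 8]]
Q = vikor(F, weights=[0.4, 0.3, 0.3])
print(Q, Q.argmin())  # argmin gives the compromise solution
```

    In an ACE model this function would run once per agent per decision, which is why the per-call cost measured in the study matters at the 10 000-agent scale.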

  19. Development and Validation of a PTSD-Related Impairment Scale

    Science.gov (United States)

    2012-06-01

    Social Adjustment Scale (SAS-SR) [58]; Dyadic Adjustment Scale (DAS) [59]; Life Stressors and Social Resources Inventory (LISRES) [60], a 200-item measure that gauges ongoing life stressors and social resources in domains including spouse/partner and finances. ... Lengthy measures (e.g., ICF checklist, LISRES; Moos, Penn, & Billings, 1988) may not be practical or desirable in many healthcare settings or in large-scale

  20. Scale Space Methods for Analysis of Type 2 Diabetes Patients' Blood Glucose Values

    Directory of Open Access Journals (Sweden)

    Stein Olav Skrøvseth

    2011-01-01

    Full Text Available We describe how scale space methods can be used for quantitative analysis of blood glucose concentrations from type 2 diabetes patients. Blood glucose values were recorded voluntarily by the patients over one full year as part of a self-management process, in which the time and frequency of the recordings were decided by the patients. This makes the dataset unique in its extent, though with large variation in the reliability of the recordings. Scale space and frequency space techniques are suited to revealing important features of unevenly sampled data, and are useful for identifying medically relevant features both for patients, as part of their self-management process, and for physicians, to whom they provide useful information.
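
    A simple way to see how scale-dependent smoothing handles unevenly sampled recordings is a Gaussian-kernel (Nadaraya-Watson) estimate evaluated at several scales; the synthetic glucose series below is hypothetical, not the study's data, and the full scale-space machinery goes beyond this sketch:

```python
import numpy as np

def scale_space(t_obs, y_obs, t_grid, scales):
    """Gaussian-kernel smoothing of irregularly sampled data at a range
    of scales: one Nadaraya-Watson estimate per scale."""
    out = []
    for s in scales:
        w = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / s) ** 2)
        out.append((w @ y_obs) / w.sum(axis=1))  # normalized weighted mean
    return np.array(out)  # shape: (n_scales, len(t_grid))

# Irregular, patient-driven sampling times (hypothetical glucose data)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 365, 200))                              # days
y = 7 + np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.8, t.size)  # mmol/L

grid = np.linspace(0, 365, 100)
levels = scale_space(t, y, grid, scales=[2, 10, 50])
print(levels.shape)  # (3, 100)
```

    Small scales follow short-term fluctuations (and noise), while large scales retain only slow trends, which is the separation of features by scale that the analysis relies on.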

  1. Valuing Treatments for Parkinson Disease Incorporating Process Utility: Performance of Best-Worst Scaling, Time Trade-Off, and Visual Analogue Scales.

    Science.gov (United States)

    Weernink, Marieke G M; Groothuis-Oudshoorn, Catharina G M; IJzerman, Maarten J; van Til, Janine A

    2016-01-01

    The objective of this study was to compare treatment profiles including both health outcomes and process characteristics in Parkinson disease using best-worst scaling (BWS), time trade-off (TTO), and visual analogue scales (VAS). From a model comprising seven attributes with three levels, six unique profiles were selected representing process-related factors and health outcomes in Parkinson disease. A Web-based survey (N = 613) was conducted in a general population to estimate process-related utilities using profile-based BWS (case 2), multiprofile-based BWS (case 3), TTO, and VAS. The rank order of the six profiles was compared, convergent validity among methods was assessed, and individual analysis focused on the differentiation between pairs of profiles across the methods used. The aggregated health-state utilities for the six treatment profiles were highly comparable for all methods and no rank reversals were identified. On the individual level, the convergent validity between all methods was strong; however, respondents differentiated less in the utility of closely related treatment profiles with a VAS or TTO than with BWS. For TTO and VAS, this resulted in nonsignificant differences in mean utilities for closely related treatment profiles. This study suggests that all methods are equally able to measure process-related utility when the aim is to estimate the overall value of treatments. On an individual level, such as in shared decision making, BWS allows for better prioritization of treatment alternatives, especially if they are closely related. The decision-making problem and the need for explicit trade-offs between attributes should determine the choice of method. Copyright © 2016. Published by Elsevier Inc.

  2. Stand-scale soil respiration estimates based on chamber methods in a Bornean tropical rainforest

    Science.gov (United States)

    Kume, T.; Katayama, A.; Komatsu, H.; Ohashi, M.; Nakagawa, M.; Yamashita, M.; Otsuki, K.; Suzuki, M.; Kumagai, T.

    2009-12-01

    This study was undertaken to estimate stand-scale soil respiration in an aseasonal tropical rainforest on Borneo Island. To this end, we identified critical and practical factors explaining spatial variations in soil respiration, based on soil respiration measurements conducted at 25 points in a 40 × 40 m subplot of a 4 ha study plot for five years, in relation to soil, root, and forest structural factors. We found a significant positive correlation between soil respiration and forest structural parameters. The most important factor was the mean DBH within 6 m of the measurement points, which had a significant linear relationship with soil respiration. Using the derived linear regression and an inventory dataset, we estimated the 4 ha-scale soil respiration. The 4 ha-scale estimate (6.0 μmol m-2 s-1) was nearly identical to the subplot-scale measurements (5.7 μmol m-2 s-1), which were roughly comparable to the nocturnal CO2 fluxes calculated using the eddy covariance technique. To confirm the spatial representativeness of the soil respiration estimates in the subplot, we performed variogram analysis. The semivariance of DBH(6) in the 4 ha plot showed autocorrelation within a separation distance of about 20 m, with no clear spatial dependence at separation distances greater than 20 m. This confirmed that the 40 × 40 m subplot could represent the whole forest structure in the 4 ha plot. In addition, we discuss characteristics of the stand-scale soil respiration at this site by comparing them with those of other forests reported in the literature in terms of the soil C balance. Soil respiration at our site was noticeably greater, relative to the incident litterfall amount, than in other tropical and temperate forests, probably owing to the larger total belowground C allocation by emergent trees. Overall, this study suggests that the arrangement of emergent trees and their belowground C allocation could be
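
    The variogram analysis rests on the empirical semivariance, gamma(h) = mean of 0.5·(z_i − z_j)² over point pairs separated by roughly h. A minimal sketch with an invented grid (not the study's DBH data):

```python
from itertools import combinations

def semivariogram(points, values, bin_width, n_bins):
    """Empirical semivariance by distance bin: a minimal 2-D sketch of
    the kind of variogram analysis applied to DBH(6) in the study.
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs ~h apart."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for (p1, z1), (p2, z2) in combinations(zip(points, values), 2):
        d = ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
        b = int(d // bin_width)
        if b < n_bins:
            sums[b] += 0.5 * (z1 - z2) ** 2
            counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Values on a small grid with a left-right trend: semivariance grows
# with separation distance, i.e. nearby points are more alike.
pts = [(x, y) for x in range(5) for y in range(5)]
vals = [x + 0.1 * y for x, y in pts]
gamma = semivariogram(pts, vals, bin_width=1.5, n_bins=3)
```

    In a real analysis the distance at which gamma(h) levels off (the range, about 20 m in the study) marks the limit of spatial autocorrelation.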

  3. An easy, low-cost method to transfer large-scale graphene onto polyethylene terephthalate as a transparent conductive flexible substrate

    International Nuclear Information System (INIS)

    Chen, Chih-Sheng; Hsieh, Chien-Kuo

    2014-01-01

    In this study, we develop a low-cost method for transferring a large-scale graphene film onto a flexible transparent substrate. An easily accessible method for home-made chemical vapor deposition (CVD) and a commercial photograph laminator were utilized to fabricate the low-cost graphene-based transparent conductive flexible substrate. The graphene was developed based on CVD growth on nickel foil using a carbon gas source, and the graphene thin film was easily transferred onto the laminating film via a heated photograph laminator. Field emission scanning electron microscopy and atomic force microscopy were utilized to examine the morphological characteristics of the graphene surface. Raman spectroscopy and transmission electron microscopy were utilized to examine the microstructure of the graphene. The optical–electronic properties of the transferred graphene flexible thin film were measured by ultraviolet–visible spectrometry and a four-point probe. The advantage of this method is that large-scale graphene-based thin films can be easily obtained. We provide an economical method for fabricating a graphene-based transparent conductive flexible substrate. - Highlight: • We synthesized the large-scale graphene by thermal CVD method. • A low-cost commercial photograph laminator was used to transfer graphene. • A large-scale transparent and flexible graphene substrate was obtained easily

  5. Prediction of Coal Face Gas Concentration by Multi-Scale Selective Ensemble Hybrid Modeling

    Directory of Open Access Journals (Sweden)

    WU Xiang

    2014-06-01

    Full Text Available A selective ensemble hybrid modeling prediction method based on wavelet transformation is proposed to improve the fitting and generalization capability of existing prediction models of coal face gas concentration, which shows strong stochastic volatility. The Mallat algorithm was employed for multi-scale decomposition and single-scale reconstruction of the gas concentration time series. Each subsequence was then predicted by sparsely weighted multiple unstable ELM (extreme learning machine) predictors within the SERELM (sparse ensemble regressors of ELM) method. Finally, the predicted values of these models were superimposed to obtain the predicted values of the original sequence. The proposed method takes advantage of the multi-scale analysis of the wavelet transform, the accuracy and speed of ELM prediction, and the generalization ability of the L1-regularized selective ensemble learning method. The results show that forecast accuracy increases markedly with the proposed method: the average relative error is 0.65%, the maximum relative error is 4.16%, and the probability of a relative error below 1% reaches 0.785.
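
    The Mallat decomposition step can be illustrated with the simplest wavelet, the Haar pair (the abstract does not state which wavelet was used, so this is a generic sketch):

```python
def haar_step(signal):
    """One level of the Mallat pyramid with the Haar wavelet: split a
    series of even length into a coarse approximation and detail
    coefficients (illustrative stand-in for the paper's transform)."""
    s = 2 ** -0.5
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_decompose(signal, levels):
    """Multi-scale decomposition: repeatedly coarsen the approximation.
    Each sub-series could then be forecast separately (e.g. by an ELM)
    and the predictions recombined, as in the SERELM scheme."""
    details = []
    for _ in range(levels):
        signal, d = haar_step(signal)
        details.append(d)
    return signal, details

# A length-8 'gas concentration' series: 3 levels leave one coarse value.
approx, details = haar_decompose([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0], 3)
```

    Because the Haar transform is orthonormal, the energy (sum of squares) of the coefficients equals that of the original series, so no information is lost in the split.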

  6. Renormalization group and relations between scattering amplitudes in a theory with different mass scales

    International Nuclear Information System (INIS)

    Gulov, A.V.; Skalozub, V.V.

    2000-01-01

    In the Yukawa model with two different mass scales, the renormalization group equation is used to obtain relations between scattering amplitudes at low energies. Considering fermion-fermion scattering as an example, a basic one-loop renormalization group relation is derived which makes it possible to reduce the problem to the scattering of light particles on an external field substituting for a heavy virtual state. Applications of the results to the problem of searching for new physics beyond the Standard Model are discussed

  7. Social network extraction based on Web: 1. Related superficial methods

    Science.gov (United States)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    Often the nature of an object affects the methods used to resolve issues related to it. So it is with methods for extracting social networks from the Web, which involve differently structured data types. This paper presents several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and the related superficial methods. We derive complexity inequalities between the methods and their computations. We find that different results from the same tools order the methods from more complex to simpler: extraction of a social network involving co-occurrence is more complex than extraction using occurrences.

  8. Schinus terebinthifolius countercurrent chromatography (Part II): Intra-apparatus scale-up and inter-apparatus method transfer.

    Science.gov (United States)

    Costa, Fernanda das Neves; Vieira, Mariana Neves; Garrard, Ian; Hewitson, Peter; Jerz, Gerold; Leitão, Gilda Guimarães; Ignatova, Svetlana

    2016-09-30

    Countercurrent chromatography (CCC) is being widely used across the world for purification of various materials, especially in natural product research. The predictability of CCC scale-up has been successfully demonstrated using specially designed instruments from the same manufacturer. In reality, most CCC users do not have access to such instruments and do not have enough experience to transfer methods from one CCC column to another. This unique study by three international teams is based on an innovative approach to simplifying scale-up between different CCC machines, using fractionation of Schinus terebinthifolius berries dichloromethane extract as a case study. The optimized separation methodology, recently developed by the authors (Part I), was repeatedly performed on CCC columns of different design available in most research laboratories across the world. Hexane - ethyl acetate - methanol - water (6:1:6:1, v/v/v/v) was used as the solvent system, with masticadienonic and 3β-masticadienolic acids as target compounds to monitor stationary phase retention and calculate peak resolution. It has been demonstrated that volumetric, linear and length scale-up transfer factors based on column characteristics can be directly applied to columns of different i.d., volume and length, independently of instrument make, in an intra-apparatus scale-up and inter-apparatus method transfer. Copyright © 2016 Elsevier B.V. All rights reserved.
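
    The three transfer factors are simple ratios of column characteristics. A sketch of the arithmetic under common assumptions (volume ratio scaling flow and load, cross-sectional area for the linear factor); the coil volumes, bores and lengths below are invented for illustration, not those of the study's instruments:

```python
def ccc_transfer_factors(vol1, vol2, bore1, bore2, len1, len2):
    """Hypothetical helper computing the three CCC transfer factors:
    volumetric (column volume ratio), linear (tubing cross-section
    ratio) and length (tubing length ratio)."""
    volumetric = vol2 / vol1
    linear = (bore2 / bore1) ** 2   # cross-sectional area ratio
    length = len2 / len1
    return volumetric, linear, length

# Moving a method from an 18 mL coil (0.8 mm bore, 35 m tubing) to a
# 136 mL coil (1.6 mm bore, 66 m tubing) -- all figures assumed.
kv, kl, kn = ccc_transfer_factors(18.0, 136.0, 0.8, 1.6, 35.0, 66.0)
flow2 = 1.0 * kv   # mL/min: scale flow rate with the column-volume ratio
load2 = 50.0 * kv  # mg: scale sample load the same way
```

    Scaling flow and load by the volume ratio keeps the number of column volumes per unit time constant, which is one common way to preserve resolution across columns.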

  9. [Development of social activities-related daily life satisfaction scale for the elderly and evaluation of its reliability and validity].

    Science.gov (United States)

    Okamoto, Hideaki

    2010-07-01

    The purpose of this study was to develop a Social Activities-Related Daily Life Satisfaction Scale specifically applicable to elderly people in communities and to evaluate its reliability and validity. Sixteen items were extracted from an initial pool and assessed for inclusion in the scale by correlation and exploratory factor analyses. To confirm validity, confirmatory factor analysis was conducted and correlation coefficients were calculated. In addition, t-tests were performed on the subscale scores in relation to activity. To assess reliability, Cronbach's coefficient alpha values were calculated. Data for 755 older adults aged 65 to 84 years were obtained from a mail survey in Ichikawa City, Chiba Prefecture. Exploratory factor analyses indicated that four factors, "satisfaction with learning" (four items), "satisfaction with usefulness to others and society" (four items), "satisfaction with health and physical strength" (three items), and "satisfaction with friends" (three items), should be extracted. Confirmatory factor analysis of the 14-item four-factor model showed high goodness-of-fit indices (GFI = 0.943, AGFI = 0.915, RMSEA = 0.068). Concurrent validity was established by comparing scale scores with five external variables (Activity and Daily Life Satisfaction Scale for the Elderly, Life Satisfaction Index K, etc). Student's t-tests revealed that each subscale score was positively associated with the activity variable. The overall Cronbach's coefficient alpha for the scale was 0.919, and values for its four subscales ranged from 0.814 to 0.887. A Social Activities-Related Daily Life Satisfaction Scale was derived consisting of four subscales, "satisfaction with learning", "satisfaction with usefulness to others and society", "satisfaction with health and physical strength", and "satisfaction with friends". 
The results of the present study suggested that the Social Activities-Related Daily Life Satisfaction Scale
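
    The reliability figures reported above are Cronbach's alpha, computable directly from item-score columns; a minimal sketch with invented responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list per item,
    respondents in the same order):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
    Uses sample variances (n-1 denominator)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three hypothetical 5-point subscale items answered by five respondents.
alpha = cronbach_alpha([[4, 5, 3, 4, 2],
                        [4, 4, 3, 5, 2],
                        [5, 5, 2, 4, 1]])
```

    Values above roughly 0.8, like the subscale alphas of 0.814 to 0.887 reported here, are conventionally read as good internal consistency.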

  10. STRAIGHTENING THE DENSITY-DISPLACEMENT RELATION WITH A LOGARITHMIC TRANSFORM

    International Nuclear Information System (INIS)

    Falck, Bridget L.; Neyrinck, Mark C.; Aragon-Calvo, Miguel A.; Lavaux, Guilhem; Szalay, Alexander S.

    2012-01-01

    We investigate the use of a logarithmic density variable in estimating the Lagrangian displacement field motivated by the success of a logarithmic transformation in restoring information to the matter power spectrum. The logarithmic relation is an extension of the linear relation, motivated by the continuity equation, in which the density field is assumed to be proportional to the divergence of the displacement field; we compare the linear and logarithmic relations by measuring both of these fields directly in a cosmological N-body simulation. The relative success of the logarithmic and linear relations depends on the scale at which the density field is smoothed. Thus we explore several ways of measuring the density field, including Cloud-In-Cell smoothing, adaptive smoothing, and the (scale-independent) Delaunay tessellation, and we use both a Fourier-space and a geometrical tessellation approach to measuring the divergence. We find that the relation between the divergence of the displacement field and the density is significantly tighter and straighter with a logarithmic density variable, especially at low redshifts and for very small (∼2 h⁻¹ Mpc) smoothing scales. We find that the grid-based methods are more reliable than the tessellation-based method of calculating both the density and the divergence fields, though in both cases the logarithmic relation works better in the appropriate regime, which corresponds to nonlinear scales for the grid-based methods and low densities for the tessellation-based method.

  11. Illness Attitudes Scale dimensions and their associations with anxiety-related constructs in a nonclinical sample.

    Science.gov (United States)

    Stewart, S H; Watt, M C

    2000-01-01

    The Illness Attitudes Scale (IAS) is a self-rated measure that consists of nine subscales designed to assess fears, attitudes and beliefs associated with hypochondriacal concerns and abnormal illness behavior [Kellner, R. (1986). Somatization and hypochondriasis. New York: Praeger; Kellner, R. (1987). Abridged manual of the Illness Attitudes Scale. Department of Psychiatry, School of Medicine, University of New Mexico]. The purposes of the present study were to explore the hierarchical factor structure of the IAS in a nonclinical sample of young adult volunteers and to examine the relations of each illness attitudes dimension to a set of anxiety-related measures. One-hundred and ninety-seven undergraduate university students (156 F, 41 M; mean age = 21.9 years) completed the IAS as well as measures of anxiety sensitivity, trait anxiety and panic attack history. The results of principal components analyses with oblique (Oblimin) rotation suggested that the IAS is best conceptualized as a four-factor measure at the lower order level (with lower-order dimensions tapping illness-related Fears, Behavior, Beliefs and Effects, respectively), and a unifactorial measure at the higher-order level (i.e. higher-order dimension tapping General Hypochondriacal Concerns). The factor structure overlapped to some degree with the scoring of the IAS proposed by Kellner (1986, 1987), as well as with the factor structures identified in previously-tested clinical and nonclinical samples [Ferguson, E. & Daniel, E. (1995). The Illness Attitudes Scale (IAS): a psychometric evaluation on a nonclinical population. Personality and Individual Differences, 18, 463-469; Hadjistavropoulos, H. D. & Asmundson, G. J. G. (1998). Factor analytic investigation of the Illness Attitudes Scale in a chronic pain sample. Behaviour Research and Therapy, 36, 1185-1195; Hadjistavropoulos, H. D., Frombach, I. & Asmundson, G. J. G. (in press). Exploratory and confirmatory factor analytic investigations of the

  12. Multigrid preconditioned conjugate-gradient method for large-scale wave-front reconstruction.

    Science.gov (United States)

    Gilles, Luc; Vogel, Curtis R; Ellerbroek, Brent L

    2002-09-01

    We introduce a multigrid preconditioned conjugate-gradient (MGCG) iterative scheme for computing open-loop wave-front reconstructors for extreme adaptive optics systems. We present numerical simulations for a 17-m class telescope with n = 48756 sensor measurement grid points within the aperture, which indicate that our MGCG method has a rapid convergence rate for a wide range of subaperture average slope measurement signal-to-noise ratios. The total computational cost is of order n log n. Hence our scheme provides for fast wave-front simulation and control in large-scale adaptive optics systems.
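
    At the heart of the MGCG scheme is the conjugate-gradient iteration; the sketch below shows plain, unpreconditioned CG on a toy system (the multigrid preconditioner and the n ≈ 49 000 wave-front problem are well beyond a few lines):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Plain (unpreconditioned) CG for a symmetric positive-definite
    matrix A (list of rows): the core iteration that the MGCG scheme
    accelerates with a multigrid preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:          # squared residual norm small enough
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Small SPD system standing in for the wave-front normal equations;
# the exact solution is [1/11, 7/11].
A = [[4.0, 1.0], [1.0, 3.0]]
x = conjugate_gradient(A, [1.0, 2.0])
```

    A good preconditioner makes the effective condition number nearly independent of n, which is what keeps the MGCG iteration count flat as the sensor grid grows.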

  13. Two micro-models of tourism capitalism and the (re)scaling of state-business relations

    NARCIS (Netherlands)

    Erkuş-Öztürk, H.; Terhorst, P.

    2011-01-01

    This paper aims to show that (i) there are two micro-models of tourism capitalism in Antalya (Turkey) and (ii) different trajectories of (re)scaling of state-business relations form an integral part of each model of tourism capitalism. The paper bridges two debates in the literature that generally

  14. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling.

    Science.gov (United States)

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. 
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
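
    The AIF-scaling insensitivity of kep follows from the linearity of the FXL Tofts convolution: scaling the AIF by s and K(trans) (and ve) by 1/s leaves the tissue curve, and hence kep = K(trans)/ve, unchanged. A numerical sketch with an invented bolus-shaped AIF:

```python
import math

def tofts_ct(ktrans, kep, cp, dt):
    """Discrete FXL Tofts convolution:
    Ct(t) = Ktrans * sum_tau Cp(tau) * exp(-kep * (t - tau)) * dt."""
    n = len(cp)
    return [ktrans * sum(cp[j] * math.exp(-kep * (i - j) * dt) * dt
                         for j in range(i + 1))
            for i in range(n)]

# A toy gamma-variate-like bolus AIF (arbitrary units), dt = 0.1 min.
dt = 0.1
aif = [t * math.exp(-2.0 * t) for t in (i * dt for i in range(60))]

# Scale the AIF by s; dividing Ktrans (and ve) by the same factor leaves
# kep = Ktrans/ve and the tissue curve unchanged.
s = 1.8
ct_ref = tofts_ct(0.25, 0.5, aif, dt)                # Ktrans = 0.25, kep = 0.5
ct_scaled = tofts_ct(0.25 / s, 0.5, [s * c for c in aif], dt)
```

    The two tissue curves agree exactly, which is why a fitted kep does not depend on the unknown AIF amplitude, while K(trans) and ve absorb it.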

  15. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks by the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  16. Validation of the bipolar disorder etiology scale based on psychological behaviorism theory and factors related to the onset of bipolar disorder.

    Directory of Open Access Journals (Sweden)

    Jae Woo Park

    Full Text Available OBJECTIVES: The aim of this study was to identify psychosocial factors related to the onset of bipolar I disorder (BD. To do so, the Bipolar Disorder Etiology Scale (BDES, based on psychological behaviorism, was developed and validated. Using the BDES, common factors related to both major depressive disorder (MDD and BD and specific factors related only to BD were investigated. METHOD: The BDES, which measures 17 factors based on psychological behaviorism hypotheses, was developed and validated. This scale was administered to 113 non-clinical control subjects, 30 subjects with MDD, and 32 people with BD. ANOVA and post hoc analyses were conducted. Subscales on which MDD and BD groups scored higher than controls were classified as common factors, while those on which the BD group scored higher than MDD and control groups were classified as specific factors. RESULTS: The BDES has acceptable reliability and validity. Twelve common factors influence both MDD and BD and one specific factor influences only BD. Common factors include the following: learning grandiose self-labeling, learning dangerous behavior, reinforcing impulsive behavior, exposure to irritability, punishment of negative emotional expression, lack of support, sleep problems, antidepressant problems, positive arousal to threat, lack of social skills, and pursuit of short-term pleasure. The specific factor is manic emotional response. CONCLUSIONS: Manic emotional response was identified as a specific factor related to the onset of BD, while parents' grandiose labeling is a candidate for a specific factor. Many factors are related to the onset of both MDD and BD.

  17. Aging on a different scale--chronological versus pathology-related aging.

    Science.gov (United States)

    Melis, Joost P M; Jonker, Martijs J; Vijg, Jan; Hoeijmakers, Jan H J; Breit, Timo M; van Steeg, Harry

    2013-10-01

    In the next decades the elderly population will increase dramatically, demanding appropriate solutions in health care and aging research that focus on healthy aging to prevent high burdens and costs. Research targeting tissue-specific and individual aging is therefore paramount for progress in aging research. In a recently published study we attempted to take a step toward interpreting aging data on a chronological as well as a pathological scale. For this, we sampled five major tissues at regular time intervals during the entire C57BL/6J murine lifespan from a controlled in vivo aging study, measured the whole transcriptome and incorporated temporal as well as physical health aspects into the analyses. In total, we used 18 different age-related pathological parameters and transcriptomic profiles of liver, kidney, spleen, lung and brain and created a database that can now be used for a broad systems biology approach. In our study, we focused on the dynamics of biological processes during chronological aging and on the comparison between chronological and pathology-related aging.

  18. On one method of realization of commutation relation algebra

    International Nuclear Information System (INIS)

    Sveshnikov, K.A.

    1983-01-01

    A method for constructing commutation relation representations is suggested, based on the purely algebraic construction of an adjoined algebraic representation with a specially selected composition law. A purely combinatorial construction realizing the commutation relation representation is obtained, proceeding from the formal equivalence between the action of an operator on a vector and the appending of a symbol to a sequence of symbols. The method essentially has the structure of a computational algorithm that assigns a rule for forming ''words'' from an initial set of ''letters''; in other words, a computer language with definite relations between words (an analogy between quantum mechanics and computer linguistics is applied)

  19. A review of empirical research related to the use of small quantitative samples in clinical outcome scale development.

    Science.gov (United States)

    Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S

    2016-11-01

    There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.

  20. The use of bootstrap methods for analysing health-related quality of life outcomes (particularly the SF-36)

    Directory of Open Access Journals (Sweden)

    Campbell Michael J

    2004-12-01

    Full Text Available Abstract Health-Related Quality of Life (HRQoL) measures are becoming increasingly used in clinical trials as primary outcome measures. Investigators are now asking statisticians for advice on how to analyse studies that have used HRQoL outcomes. HRQoL outcomes, like the SF-36, are usually measured on an ordinal scale. However, most investigators assume that there exists an underlying continuous latent variable that measures HRQoL, and that the actual measured outcomes (the ordered categories) reflect contiguous intervals along this continuum. The ordinal scaling of HRQoL measures means they tend to generate data that have discrete, bounded and skewed distributions. Thus, standard methods of analysis such as the t-test and linear regression that assume Normality and constant variance may not be appropriate. For this reason, conventional statistical advice would suggest that non-parametric methods be used to analyse HRQoL data. The bootstrap is one such computer-intensive non-parametric method for analysing data. We used the bootstrap for hypothesis testing and the estimation of standard errors and confidence intervals for parameters, in four datasets (which illustrate the different aspects of study design). We then compared and contrasted the bootstrap with standard methods of analysing HRQoL outcomes. The standard methods included t-tests, linear regression, summary measures and General Linear Models. Overall, in the datasets we studied, using the SF-36 outcome, bootstrap methods produce results similar to conventional statistical methods. This is likely because the t-test and linear regression are robust to the violations of assumptions that HRQoL data are likely to cause (i.e. non-Normality). While particular to our datasets, these findings are likely to generalise to other HRQoL outcomes, which have discrete, bounded and skewed distributions. Future research with other HRQoL outcome measures, interventions and populations, is required to
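
    The percentile bootstrap the paper uses can be sketched briefly; the scores below are invented stand-ins for a skewed, bounded SF-36 dimension:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval: resample with
    replacement, recompute the statistic, take empirical quantiles.
    No Normality assumption, so it suits the discrete, bounded and
    skewed distributions HRQoL scores tend to have."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical 0-100 dimension scores, skewed toward the ceiling.
scores = [100, 100, 95, 90, 85, 85, 75, 70, 50, 45, 30, 25]
lo, hi = bootstrap_ci(scores, n_boot=2000)
```

    The same resampling loop supports hypothesis tests and standard-error estimates by replacing the statistic and the quantities read off the replicate distribution.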

  1. Contextual Factors Related to Stereotype Threat and Student Success in Science Technology Engineering Mathematics Education: A Mixed Methods Study

    Science.gov (United States)

    Leker, Lindsey Beth

    Stereotype threat is a widely researched phenomenon shown to impact performance in testing and evaluation situations (Katz, Roberts, & Robinson, 1965; Steele & Aronson, 1995). When related to gender, stereotype threat can lead women to score lower than men on standardized math exams (Spencer, Steele, & Quinn, 1999). Stereotype threat may be one reason women have lower enrollment in most science, technology, engineering, and mathematics (STEM) majors, hold a smaller number of STEM careers than men, and have a higher attrition rate in STEM professions (Hill, Corbet, & Rose, 2010; Picho & Brown 2011; Sorby & Baartmans, 2000). Most research has investigated stereotype threat using experiments yielding mixed results (Stoet & Geary, 2012). Thus, there is a need to explore stereotype threat using quantitative surveys and qualitative methods to examine other contextual factors that contribute to gender difference in STEM fields. This dissertation outlined a mixed methods study designed to, first, qualitatively explore stereotype threat and contextual factors related to high achieving women in STEM fields, as well as women who have failed and/or avoided STEM fields. Then, the quantitative portion of the study used the themes from the qualitative phase to create a survey that measured stereotype threat and other contextual variables related to STEM success and failure/avoidance. Fifteen participants were interviewed for the qualitative phase of the study and six themes emerged. The quantitative survey was completed by 242 undergraduate participants. T-tests, correlations, regressions, and mediation analyses were used to analyze the data. There were significant relationships between stereotype threat and STEM confidence, STEM anxiety, giving up in STEM, and STEM achievement. Overall, this mixed methods study advanced qualitative research on stereotype threat, developed a much-needed scale for the measurement of stereotype threat, and tested the developed scale.

  2. The nonlinear Galerkin method: A multi-scale method applied to the simulation of homogeneous turbulent flows

    Science.gov (United States)

    Debussche, A.; Dubois, T.; Temam, R.

    1993-01-01

    Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates of the time variations of the small eddies and the nonlinear interaction terms are derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently is proposed. Using mathematical developments, estimates are derived for all the parameters involved in the algorithm, which then becomes a completely self-adaptive procedure. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.
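
    The core idea of the multilevel scheme, advancing the energy-containing large scales every time step while the frozen small scales are updated only every few steps with a proportionally larger step, can be illustrated on a toy system of independently decaying modes (a hypothetical sketch, not the paper's Galerkin treatment of the Navier-Stokes equations):

```python
import math

def multirate_euler(lams_slow, lams_fast, u0, dt, n_steps, k_sub):
    """Explicit-Euler toy of a multilevel scheme: 'large-scale' modes are
    advanced every step, while 'small-scale' modes are frozen and only
    advanced every k_sub steps, using the larger step k_sub*dt."""
    slow = [u0] * len(lams_slow)
    fast = [u0] * len(lams_fast)
    for step in range(1, n_steps + 1):
        # large scales: updated at every iteration
        slow = [u - dt * lam * u for u, lam in zip(slow, lams_slow)]
        # small scales: infrequent update with a proportionally larger step
        if step % k_sub == 0:
            h = k_sub * dt
            fast = [u - h * lam * u for u, lam in zip(fast, lams_fast)]
    return slow, fast
```

    The infrequently updated modes cost k_sub times fewer evaluations, while their absolute error stays negligible because they carry little energy, which mirrors the remark in the abstract.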

  3. Scale changes in air quality modelling and assessment of associated uncertainties

    International Nuclear Information System (INIS)

    Korsakissok, Irene

    2009-01-01

    After an introduction of issues related to a scale change in the field of air quality (existing scales for emissions, transport, turbulence and loss processes, hierarchy of data and models, methods of scale change), the author first presents Gaussian models which have been implemented within the Polyphemus modelling platform. These models are assessed by comparison with experimental observations and with other commonly used Gaussian models. The second part reports the coupling of the puff-based Gaussian model with the Eulerian Polair3D model for the sub-mesh processing of point sources. This coupling is assessed at the continental scale for a passive tracer, and at the regional scale for photochemistry. Different statistical methods are assessed

  4. The applicability of the decisional conflict scale in nursing home placement decision among Chinese family caregivers: A mixed methods approach

    Directory of Open Access Journals (Sweden)

    Yu-Ping Chang

    2017-12-01

    This study aimed to (1) examine relationships between uncertainty, perceived information, personal values, social support, and filial obligation among Chinese family caregivers faced with nursing home placement of an older adult family member with dementia; and (2) describe the applicability of the Decisional Conflict Scale in nursing home placement decision making among Chinese family caregivers through the integration of quantitative and qualitative data. We used a mixed-methods approach. Quantitative data analysis consisted of descriptive and correlational statistics. We utilized thematic analysis for the qualitative data. Data transformation and data comparison techniques were used to combine qualitative and quantitative data. Thirty Chinese family caregivers living in Taiwan and caring for an older adult with dementia participated in this study. Quantitative findings indicated significant associations between perceived information, personal values, social support, and filial obligation on the one hand and nursing home placement decisional conflict on the other. Mixed-method data analysis additionally revealed conflicting differences between the traditional role of Chinese family collective decision making and the contemporary role of a single family member acting as surrogate decision maker. Although the Decisional Conflict Scale can be utilized when exploring nursing home placement for an older adult with dementia among Chinese family caregivers, applicability issues existed regarding cultural beliefs and values related to filial piety and family collectivism. Findings strongly support the need for researchers to consider cultural beliefs and values when selecting tools that assess health-related decision making across cultures. Further research is needed to explore the role culture plays in nursing home decision making.

  5. Development and validation of the work-family-school role conflicts and role-related social support scales among registered nurses with multiple roles.

    Science.gov (United States)

    Xu, Lijuan; Song, Rhayun

    2013-10-01

    The purpose of this study was to develop work-family-school role conflicts and role-related social support scales, and to validate the psychometrics of those scales among registered nurses with multiple roles. The concepts, generation of items, and the scale domains of work-family-school role conflicts and role-related social support scales were constructed based on a review of the literature. The validity and reliability of the scales were examined by administering them to 201 registered nurses who were recruited from 8 university hospitals in South Korea. The content validity was examined by nursing experts using a content validity index. Exploratory factor analysis and confirmatory factor analysis were used to establish the construct validity. The correlation with depression was examined to assess concurrent validity. Finally, internal consistency was assessed using Cronbach's alpha coefficients. The work-family-school role conflicts scale comprised ten items with three factors: work-school-to-family conflict (three items), family-school-to-work conflict (three items), and work-family-to-school conflict (four items). The role-related social support scale comprised nine items with three factors: support from family (three items), support from work (three items), and support from school (three items). Cronbach's alphas were 0.83 and 0.76 for the work-family-school role conflicts and role-related social support scales, respectively. Both instruments exhibited acceptable construct and concurrent validity. The validity and reliability of the developed scales indicate their potential usefulness for the assessment of work-family-school role conflict and role-related social support among registered nurses with multiple roles in Korea. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Mokken scale analysis of mental health and well-being questionnaire item responses: a non-parametric IRT method in empirical research for applied health researchers.

    Science.gov (United States)

    Stochl, Jan; Jones, Peter B; Croudace, Tim J

    2012-06-11

    Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than for example the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12 item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. After an initial analysis example in which we select items by phrasing (six positive versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12)--when binary scored--were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech's "well-being" and "distress" clinical scales). An illustration of ordinal item analysis confirmed that all 14 positively worded items of the Warwick-Edinburgh Mental
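
    The scalability criterion at the heart of Mokken's method can be made concrete for binary items: Loevinger's coefficient H compares observed Guttman errors against those expected if items were statistically independent (H = 1 for a perfect Guttman scale; values above roughly 0.3 are conventionally taken as scalable). A minimal sketch, not tied to the GHQ-12 data described above:

```python
from itertools import combinations

def loevinger_h(data):
    """Scalability coefficient H for binary (0/1) item responses:
    H = 1 - (observed Guttman errors) / (errors expected under
    marginal independence), summed over all item pairs."""
    n = len(data)              # respondents
    k = len(data[0])           # items
    totals = [sum(row[j] for row in data) for j in range(k)]
    obs_sum = exp_sum = 0.0
    for i, j in combinations(range(k), 2):
        # order the pair so that `easy` is the more popular item
        easy, hard = (i, j) if totals[i] >= totals[j] else (j, i)
        # Guttman error: endorsing the harder item but not the easier one
        obs = sum(1 for row in data if row[hard] == 1 and row[easy] == 0)
        exp = n * (totals[hard] / n) * (1 - totals[easy] / n)
        obs_sum += obs
        exp_sum += exp
    return 1.0 - obs_sum / exp_sum
```

    For a perfect Guttman pattern such as `[[1,0,0],[1,1,0],[1,1,1],[0,0,0]]` the function returns exactly 1.0; adding inconsistent response rows pushes H below 1.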

  7. A Real-Time Analysis Method for Pulse Rate Variability Based on Improved Basic Scale Entropy

    Directory of Open Access Journals (Sweden)

    Yongxin Chou

    2017-01-01

    Base scale entropy analysis (BSEA) is a nonlinear method for analyzing the heart rate variability (HRV) signal. However, the time consumption of BSEA is too long, and it is unknown whether BSEA is suitable for analyzing the pulse rate variability (PRV) signal. Therefore, we proposed a method named sliding window iterative base scale entropy analysis (SWIBSEA) by combining BSEA and sliding window iterative theory. The blood pressure signals of healthy young and old subjects are chosen from the authoritative international database MIT/PhysioNet/Fantasia to generate PRV signals as the experimental data. Then, BSEA and SWIBSEA are used to analyze the experimental data; the results show that SWIBSEA reduces the time consumption and the buffer cache space while yielding the same entropy as BSEA. Meanwhile, the changes of base scale entropy (BSE) for healthy young and old subjects are the same as those of the HRV signal. Therefore, SWIBSEA can be used for deriving information from long-term and short-term PRV signals in real time, which has potential for dynamic PRV signal analysis in portable and wearable medical devices.
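
    The windowed symbolisation behind base scale entropy can be sketched as follows; the 4-symbol rule and the per-window base scale used here are one common variant and are assumptions, not necessarily the exact definitions used by the paper's BSEA/SWIBSEA:

```python
import math
from collections import Counter

def base_scale_entropy(x, m=4, a=0.2):
    """Toy base-scale entropy: each m-point window is symbolised relative
    to its own mean, with the 'base scale' (RMS of successive differences
    inside the window) setting the symbol thresholds; the result is the
    Shannon entropy of the distribution of symbol words."""
    words = Counter()
    for i in range(len(x) - m + 1):
        v = x[i:i + m]
        mu = sum(v) / m
        bs = math.sqrt(sum((v[k + 1] - v[k]) ** 2 for k in range(m - 1)) / (m - 1))
        word = []
        for s in v:
            if bs == 0:            # flat window: single symbol
                word.append(0)
            elif s > mu + a * bs:
                word.append(0)
            elif s > mu:
                word.append(1)
            elif s > mu - a * bs:
                word.append(2)
            else:
                word.append(3)
        words[tuple(word)] += 1
    n = sum(words.values())
    return -sum((c / n) * math.log2(c / n) for c in words.values())
```

    A constant signal yields entropy 0, and a strictly alternating signal yields exactly two equally frequent words, hence 1 bit; the sliding-window iterative variant described in the abstract would update the word counts incrementally instead of recomputing them per window.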

  8. Assessment methods for rehabilitation.

    Science.gov (United States)

    Biefang, S; Potthoff, P

    1995-09-01

    Diagnostics and evaluation in medical rehabilitation should be based on methods that are as objective as possible. In this context quantitative methods are an important precondition. We conducted for the German Pensions Insurance Institutions (which are in charge of the medical and vocational rehabilitation of workers and employees) a survey on assessment methods for rehabilitation which included an evaluation of American literature, with the aim to indicate procedures that can be considered for adaptation in Germany and to define further research requirements. The survey identified: (1) standardized procedures and instrumented tests for the assessment of musculoskeletal, cardiopulmonary and neurophysiological function; (2) personality, intelligence, achievement, neuropsychological and alcoholism screening tests for the assessment of mental or cognitive function; (3) rating scales and self-administered questionnaires for the assessment of Activities of Daily Living and Instrumental Activities of Daily Living (ADL/IADL Scales); (4) generic profiles and indexes as well as disease-specific measures for the assessment of health-related quality of life and health status; and (5) rating scales for vocational assessment. German equivalents or German versions exist only for a part of the procedures identified. Translation and testing of Anglo-Saxon procedures should have priority over the development of new German methods. The following procedures will be taken into account: (a) instrumented tests for physical function, (b) IADL Scales, (c) generic indexes of health-related quality of life, (d) specific quality of life and health status measures for disorders of the circulatory system, metabolic system, digestive organs, respiratory tract and for cancer, and (e) vocational rating scales.

  9. Music Therapy as Psychotherapy in Psychiatry at all Levels of the GAF Scale

    DEFF Research Database (Denmark)

    Pedersen, Inge Nygaard

    2009-01-01

    Presentation and discussion on how to apply different music therapy methods and techniques in psychiatry at different levels of the GAF (Global Assessment of Functioning) scale, described in combination with McGlashan's relational process levels and other therapeutic principles as illustrated in 5 books on 'relational treatment in psychiatry' by Lars Thorgaard (DK) and Ejvind Haga (N). Is music therapy as psychotherapy applicable also at the lower GAF scores? Which methods/techniques?

  10. 2002 Status of the Armed Forces Survey - Workplace and Gender Relations: Report on Scales and Measures

    National Research Council Canada - National Science Library

    Ormerod, Alayne

    2003-01-01

    ...: Workplace and Gender Relations Survey (2002 WGR). This report describes advances from previous surveys and presents results on scale development as obtained from 19,960 respondents to this survey...

  11. Validity and reliability analysis of the planned behavior theory scale related to the testicular self-examination in a Turkish context.

    Science.gov (United States)

    Iyigun, Emine; Tastan, Sevinc; Ayhan, Hatice; Kose, Gulsah; Acikel, Cengizhan

    2016-06-01

    This study aimed to determine the validity and reliability of the Planned Behavior Theory Scale as related to testicular self-examination. The study was carried out in a health-profession higher-education school in Ankara, Turkey, from April to June 2012. The study participants comprised 215 male students. Study data were collected by using a questionnaire, a planned behavior theory scale related to testicular self-examination, and Champion's Health Belief Model Scale (CHBMS). The sub-dimensions of the planned behavior theory scale, namely intention, attitude, subjective norms and self-efficacy, were found to have Cronbach's alpha values of between 0.81 and 0.89. Exploratory factor analysis showed that items of the scale had five factors that accounted for 75% of the variance. Of these, the sub-dimension of intention was found to have the highest level of contribution. A significant correlation was found between the sub-dimensions of the testicular self-examination planned behavior theory scale and those of the CHBMS. The Planned Behavior Theory Scale is a valid and reliable measurement for Turkish society.

  12. Two scale damage model and related numerical issues for thermo-mechanical high cycle fatigue

    International Nuclear Information System (INIS)

    Desmorat, R.; Kane, A.; Seyedi, M.; Sermage, J.P.

    2007-01-01

    On the idea that fatigue damage is localized at the microscopic scale, a scale smaller than the mesoscopic one of the Representative Volume Element (RVE), a three-dimensional two scale damage model has been proposed for High Cycle Fatigue applications. It is extended here to aniso-thermal cases and then to thermo-mechanical fatigue. The modeling consists in the micro-mechanics analysis of a weak micro-inclusion subjected to plasticity and damage, embedded in an elastic meso-element (the RVE of continuum mechanics). The consideration of plasticity coupled with damage equations at the micro-scale, together with the Eshelby-Kroner localization law, allows one to compute the value of microscopic damage up to failure for any kind of loading, 1D or 3D, cyclic or random, isothermal or aniso-thermal, mechanical, thermal or thermo-mechanical. A robust numerical scheme is proposed in order to make the computations fast. A post-processor for damage and fatigue (DAMAGE-2005) has been developed. It applies to complex thermo-mechanical loadings. Examples are given of physical phenomena related to High Cycle Fatigue that are represented by the two scale damage model, such as the mean stress effect and the non-linear accumulation of damage. Examples of thermal and thermo-mechanical fatigue, as well as complex applications on a real-size test structure subjected to thermo-mechanical fatigue, are detailed. (authors)

  13. Estimation and applicability of attenuation characteristics for source parameters and scaling relations in the Garhwal Kumaun Himalaya region, India

    Science.gov (United States)

    Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.

    2018-06-01

    Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessment of the seismic hazard potential of a region. In this study, source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear wave quality factor (Qβ(f)) values for each station at different frequencies have been applied to eliminate any bias in the determination of source parameters. The Qβ(f) values have been estimated by using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S wave quality factor relation is obtained as Qβ(f) = (152.9 ± 7) f^(0.82±0.005) by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral (low-frequency spectral level and corner frequency) and source (static stress drop, seismic moment, apparent stress and radiated energy) parameters are obtained assuming an ω^-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effect using the spectral decay parameter "kappa". The frequency resolution limit was resolved by quantifying the bias in corner frequency, stress drop and radiated energy estimates due to the finite-bandwidth effect. The data of the region show shallow-focused earthquakes with low stress drop. The estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drop and apparent stress can be explained by partial stress drop and low effective stress models. Presence of subsurface fluid at seismogenic depth certainly manipulates the dynamics of the region. However, the limited event selection may strongly bias the scaling relation even after taking as much as possible precaution in considering effects of finite bandwidth, attenuation and site corrections
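
    The power-law form Qβ(f) = Q0 f^n is linear in log-log coordinates, so its two parameters can be recovered by ordinary least squares on (log f, log Q). A minimal sketch (the coda-normalization step that produces the Q estimates themselves is not reproduced here):

```python
import math

def fit_q_powerlaw(freqs, q_vals):
    """Least-squares fit of Q(f) = Q0 * f**n in log-log space:
    log Q = log Q0 + n * log f is a straight line, so simple linear
    regression on (log f, log Q) recovers Q0 and the exponent n."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(q) for q in q_vals]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), slope  # (Q0, n)
```

    Feeding back synthetic values generated from Q0 = 152.9 and n = 0.82 (the central values quoted above) recovers both parameters to machine precision.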

  14. A novel laboratory scale method for studying heat treatment of cake flour

    OpenAIRE

    Chesterton, AKS; Wilson, David Ian; Sadd, PI; Moggridge, Geoffrey Dillwyn

    2014-01-01

    A lab-scale method for replicating the time–temperature history experienced by cake flours undergoing heat treatment was developed based on a packed bed configuration. The performance of heat-treated flours was compared with untreated and commercially heat-treated flour by test baking a high ratio cake formulation. Both cake volume and AACC shape measures were optimal after 15 min treatment at 130 °C, though their values varied between harvests. Separate oscillatory rheometry tests of cake ba...

  15. Development and validation of the Chinese version of dry eye related quality of life scale.

    Science.gov (United States)

    Zheng, Bang; Liu, Xiao-Jing; Sun, Yue-Qian Fiona; Su, Jia-Zeng; Zhao, Yang; Xie, Zheng; Yu, Guang-Yan

    2017-07-17

    To develop the Chinese version of a quality of life scale for dry eye patients based on the Impact of Dry Eye on Everyday Life (IDEEL) questionnaire, and to assess the reliability and validity of the developed scale. The original IDEEL was adapted cross-culturally to the Chinese language and further developed following standard procedures. A total of 100 Chinese patients diagnosed with dry eye syndrome were included to investigate the psychometric properties of the Chinese version of the scale. Psychometric tests included internal consistency (Cronbach's α coefficients), construct validity (exploratory factor analysis), and known-groups validity (analysis of variance). The Chinese version of the Dry Eye Related Quality of Life (CDERQOL) Scale contains 45 items classified into 5 domains. Good to excellent internal consistency reliability was demonstrated for all 5 domains (Cronbach's α coefficients range from 0.716 to 0.913). Construct validity assessment indicated a factorial structure of the CDERQOL scale consistent with the hypothesized construct, with the exception of the "Dry Eye Symptom-Bother" domain. All domain scores differed significantly across the three severity groups of dry eye patients. The CDERQOL is thus a reliable and valid quality-of-life measure for dry eye syndrome among the Chinese population, and could be used as a supplementary diagnostic and treatment-effectiveness measure.

  16. An empirical velocity scale relation for modelling a design of large mesh pelagic trawl

    NARCIS (Netherlands)

    Ferro, R.S.T.; Marlen, van B.; Hansen, K.E.

    1996-01-01

    Physical models of fishing nets are used in fishing technology research at scales of 1:40 or smaller. As with all modelling involving fluid flow, a set of rules is required to determine the geometry of the model and its velocity relative to the water. Appropriate rules ensure that the model is

  17. Synthesis of Large-Scale Single-Crystalline Monolayer WS2 Using a Semi-Sealed Method

    Directory of Open Access Journals (Sweden)

    Feifei Lan

    2018-02-01

    As a two-dimensional semiconductor, WS2 has attracted great attention due to its rich physical properties and potential applications. However, it is still difficult to synthesize monolayer single-crystalline WS2 at a large scale. Here, we report the growth of large-scale triangular single-crystalline WS2 with a semi-sealed installation by chemical vapor deposition (CVD). Through this method, triangular single-crystalline WS2 with an average length of more than 300 µm was obtained. The largest one was about 405 μm in length. WS2 triangles with different sizes and thicknesses were analyzed by optical microscope and atomic force microscope (AFM). Their optical properties were evaluated by Raman and photoluminescence (PL) spectra. This report paves the way to fabricating large-scale single-crystalline monolayer WS2, which is useful for the growth of high-quality WS2 and its potential applications in the future.

  18. Trapped Bose-Einstein condensates with Planck-scale induced deformation of the energy-momentum dispersion relation

    International Nuclear Information System (INIS)

    Briscese, F.

    2012-01-01

    We show that harmonically trapped Bose-Einstein condensates can be used to constrain Planck-scale physics. In particular we prove that a Planck-scale induced deformation of the Minkowski energy-momentum dispersion relation δE ≃ ξ_1 mcp/(2M_p) produces a shift in the condensation temperature T_c of about ΔT_c/T_c^0 ≃ 10^-6 ξ_1 for typical laboratory conditions. Such a shift allows one to bound the deformation parameter up to |ξ_1| ≤ 10^4. Moreover we show that it is possible to enlarge ΔT_c/T_c^0 and improve the bound on ξ_1 by lowering the frequency of the harmonic trap. Finally we compare the Planck-scale induced shift in T_c with similar effects due to interboson interactions and finite size effects.
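
    The quoted bound follows from simple arithmetic: assuming the condensation temperature is measured to roughly percent-level precision (an assumption consistent with the numbers quoted above), the predicted shift must stay below that resolution,

```latex
\frac{\Delta T_c}{T_c^{0}} \simeq 10^{-6}\,\xi_1 \lesssim 10^{-2}
\quad\Longrightarrow\quad
|\xi_1| \lesssim \frac{10^{-2}}{10^{-6}} = 10^{4}.
```

    Lowering the trap frequency enlarges ΔT_c/T_c^0 for a given ξ_1, which is why the abstract notes that the bound can be improved in that way.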

  19. Development of a local-scale urban stream assessment method using benthic macroinvertebrates: An example from the Santa Clara Basin, California

    Science.gov (United States)

    Carter, J.L.; Purcell, A.H.; Fend, S.V.; Resh, V.H.

    2009-01-01

    Research that explores the biological response to urbanization on a site-specific scale is necessary for management of urban basins. Recent studies have proposed a method to characterize the biological response of benthic macroinvertebrates along an urban gradient for several climatic regions in the USA. Our study demonstrates how this general framework can be refined and applied on a smaller scale to an urbanized basin, the Santa Clara Basin (surrounding San Jose, California, USA). Eighty-four sampling sites on 14 streams in the Santa Clara Basin were used for assessing local stream conditions. First, an urban index composed of human population density, road density, and urban land cover was used to determine the extent of urbanization upstream from each sampling site. Second, a multimetric biological index was developed to characterize the response of macroinvertebrate assemblages along the urban gradient. The resulting biological index included metrics from 3 ecological categories: taxonomic composition (Ephemeroptera, Plecoptera, and Trichoptera), functional feeding group (shredder richness), and habit (clingers). The 90th-quantile regression line was used to define the best available biological conditions along the urban gradient, which we define as the predicted biological potential. This descriptor was then used to determine the relative condition of sites throughout the basin. Hierarchical partitioning of variance revealed that several site-specific variables (dissolved O2 and temperature) were significantly related to a site's deviation from its predicted biological potential. Spatial analysis of each site's deviation from its biological potential indicated geographic heterogeneity in the distribution of impaired sites. The presence and operation of local dams optimize water use, but modify natural flow regimes, which in turn influence stream habitat, dissolved O2, and temperature. Current dissolved O2 and temperature regimes deviate from natural
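
    The "predicted biological potential" idea, taking the upper envelope of the biological index as a function of the urban gradient, can be approximated crudely with a binned 90th percentile instead of a true quantile regression (a deliberately simplified stand-in for the paper's method; the data and variable names are hypothetical):

```python
def binned_quantile_line(urban_index, bio_index, n_bins=4, q=0.9):
    """Split the urban gradient into equal-width bins and take the q-th
    percentile (nearest-rank) of the biological index in each bin as the
    'best available' condition for that level of urbanization."""
    lo, hi = min(urban_index), max(urban_index)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for u, b in zip(urban_index, bio_index):
        idx = min(int((u - lo) / width), n_bins - 1)
        bins[idx].append(b)
    out = []
    for vals in bins:
        if not vals:
            out.append(None)        # empty bin: no potential defined
            continue
        vals.sort()
        k = min(len(vals) - 1, int(q * len(vals)))
        out.append(vals[k])
    return out
```

    A site's deviation from the bin value for its urbanization level then plays the role of the deviation from predicted biological potential described above.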

  20. Scaling Professional Problems of Teachers in Turkey with Paired Comparison Method

    Directory of Open Access Journals (Sweden)

    Yasemin Duygu ESEN

    2017-03-01

    In this study, teachers' professional problems were investigated and their significance levels were measured with the paired comparison method. The study was carried out in survey model. The study group consisted of 484 teachers working in public schools accredited by the Ministry of National Education (MEB) in Turkey. "The Teacher Professional Problems Survey", developed by the researchers, was used as the data collection tool. In data analysis, the scaling method with the third conditional equation of Thurstone's law of comparative judgement was used. According to the results of the study, the teachers' professional problems include teacher training and the quality of teachers, employee rights and financial problems, decrease of professional reputation, problems with MEB policies, problems with union activities, workload, problems with administration in school, physical conditions and the lack of infrastructure, problems with parents, and problems with students. According to teachers, the most significant problem is MEB educational policies. This is followed by decrease of professional reputation, physical conditions and the lack of infrastructure, the problems with students, employee rights and financial problems, the problems with administration in school, teacher training and the quality of teachers, the problems with parents, workload, and the problems with union activities. When teachers' professional problems were analyzed by the seniority variable, there was little difference in scale values. While teachers with 0-10 years of experience consider decrease of professional reputation the most important problem, teachers with 11-45 years of experience put the problems with MEB policies in first place.
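
    Thurstone's Case V solution, of which the conditional-equation variant used here is a close relative, reduces to z-transforming the paired-comparison proportions and averaging rows. A minimal sketch with hypothetical proportions (not the survey's actual data):

```python
from statistics import NormalDist

def thurstone_case_v(p):
    """Thurstone Case V scaling from a paired-comparison proportion
    matrix: p[i][j] is the proportion of judges rating item i above
    item j. The scale value of item i is the mean of the z-transformed
    proportions in row i, shifted so the smallest value is zero."""
    nd = NormalDist()
    k = len(p)
    # clip proportions away from 0/1 so the inverse CDF stays finite
    z = [[nd.inv_cdf(min(max(p[i][j], 1e-6), 1 - 1e-6)) if i != j else 0.0
          for j in range(k)] for i in range(k)]
    scale = [sum(row) / k for row in z]
    m = min(scale)
    return [s - m for s in scale]
```

    If the proportion matrix is internally consistent, i.e. generated as p[i][j] = Φ(s_i - s_j) from some true scale values, the procedure recovers those values exactly up to the additive shift.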

  1. Automatic computation of moment magnitudes for small earthquakes and the scaling of local to moment magnitude

    Science.gov (United States)

    Edwards, Benjamin; Allmann, Bettina; Fäh, Donat; Clinton, John

    2010-10-01

    Moment magnitudes (MW) are computed for small and moderate earthquakes using a spectral fitting method. 40 of the resulting values are compared with those from broadband moment tensor solutions and found to match with negligible offset and scatter for available MW values of between 2.8 and 5.0. Using the presented method, MW are computed for 679 earthquakes in Switzerland with a minimum ML = 1.3. A combined bootstrap and orthogonal L1 minimization is then used to produce a scaling relation between ML and MW. The scaling relation has a polynomial form and is shown to reduce the dependence of the predicted MW residual on magnitude relative to an existing linear scaling relation. The computation of MW using the presented spectral technique is fully automated at the Swiss Seismological Service, providing real-time solutions within 10 minutes of an event through a web-based XML database. The scaling between ML and MW is explored using synthetic data computed with a stochastic simulation method. It is shown that the scaling relation can be explained by the interaction of attenuation, the stress-drop and the Wood-Anderson filter. For instance, it is shown that the stress-drop controls the saturation of the ML scale, with low-stress drops (e.g. 0.1-1.0 MPa) leading to saturation at magnitudes as low as ML = 4.
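
    A polynomial scaling relation of the kind described can be fitted by ordinary least squares via the normal equations. This sketch substitutes plain L2 regression for the paper's combined bootstrap and orthogonal L1 minimization (which is more robust to outliers and to errors in both magnitudes), and the sample magnitudes below are made up:

```python
def polyfit_ml_mw(ml, mw, degree=2):
    """Ordinary least-squares polynomial fit of MW against ML by
    solving the normal equations A c = b with Gaussian elimination."""
    k = degree + 1
    A = [[sum(x ** (i + j) for x in ml) for j in range(k)] for i in range(k)]
    b = [sum((x ** i) * y for x, y in zip(ml, mw)) for i in range(k)]
    # forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    coeffs = [0.0] * k
    for r in range(k - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, k))) / A[r][r]
    return coeffs  # [c0, c1, c2]: MW ≈ c0 + c1*ML + c2*ML**2
```

    A quadratic term is what lets the fitted curve bend at the ends of the magnitude range, which is how a polynomial relation reduces the magnitude dependence of the MW residual relative to a purely linear scaling.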

  2. An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Haiyan Gu

    2018-04-01

    Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive "meaningful objects". While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm from graph theory is combined with the minimum heterogeneity rule (MHR) algorithm used in FNEA. The MST algorithm is used for the initial segmentation while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partition and a "reverse searching-forward processing" chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites indicated its efficiency in both accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, hyperspectral), while its accuracy is comparable with that of the FNEA method.
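
    The MST-plus-merging idea can be sketched with Kruskal's algorithm and a union-find structure: edges between neighbouring pixels are processed in order of increasing spectral difference, and the two regions are merged whenever a homogeneity threshold holds. This is a toy stand-in for the paper's MST initial segmentation and MHR merging; the pixel values, neighbour graph, and threshold are invented:

```python
def mst_merge_segments(values, edges, max_diff):
    """Kruskal-style region merging: sort edges by spectral difference
    and union the endpoint regions while the difference stays below a
    homogeneity threshold. Returns the number of resulting segments."""
    parent = list(range(len(values)))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for u, v in sorted(edges, key=lambda e: abs(values[e[0]] - values[e[1]])):
        if abs(values[u] - values[v]) <= max_diff:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    return len({find(i) for i in range(len(values))})
```

    Because edges are processed in weight order, the merges performed are exactly the low-weight MST edges, which is what makes an MST a natural backbone for an initial segmentation; a real MHR criterion would replace the fixed threshold with a heterogeneity change that accounts for region size, colour, and shape.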

  3. DGDFT: A massively parallel method for large scale density functional theory calculations.

    Science.gov (United States)

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  4. New Distributed Multipole Methods for Accurate Electrostatics for Large-Scale Biomolecular Simulations

    Science.gov (United States)

    Sagui, Celeste

    2006-03-01

    An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as this stabilizes much of the delicate 3D structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign ``partial charges'' to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate, in a physically meaningful way, the artifacts associated with the point charges used in the force fields (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules)? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way up to hexadecapoles without prohibitive extra costs. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.
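    The lowest orders of the multipole hierarchy mentioned above (which the authors extend to hexadecapoles) can be sketched for a set of point charges; the traceless Cartesian quadrupole convention used here is an assumption, as conventions differ.

    ```python
    import numpy as np

    def multipole_moments(q, r):
        """Monopole, dipole, and traceless Cartesian quadrupole moments
        of point charges q at positions r, taken about the origin."""
        q = np.asarray(q, float); r = np.asarray(r, float)
        mono = q.sum()
        dip = (q[:, None] * r).sum(axis=0)
        quad = np.zeros((3, 3))
        for qi, ri in zip(q, r):
            # Q_ab = sum_i q_i (3 r_a r_b - |r|^2 delta_ab)
            quad += qi * (3.0 * np.outer(ri, ri) - np.dot(ri, ri) * np.eye(3))
        return mono, dip, quad

    # ideal point dipole: +1 and -1 separated by 1 along z
    mono, dip, quad = multipole_moments([1.0, -1.0],
                                        [[0, 0, 0.5], [0, 0, -0.5]])
    print(mono, dip)   # monopole 0.0, dipole [0, 0, 1]
    ```

    In a distributed multipole analysis such expansions are computed per localized orbital (or per site) rather than for the whole molecule at once, which is what keeps the MEP accurate at short range.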

  5. DGDFT: A massively parallel method for large scale density functional theory calculations

    International Nuclear Information System (INIS)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-01-01

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  6. DGDFT: A massively parallel method for large scale density functional theory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  7. Effect of Integrated Pest Management Training on Ugandan Small-Scale Farmers

    DEFF Research Database (Denmark)

    Clausen, Anna Sabine; Jørs, Erik; Atuhaire, Aggrey

    2017-01-01

    Small-scale farmers in developing countries use hazardous pesticides taking few or no safety measures. Farmer field schools (FFSs) teaching integrated pest management (IPM) have been shown to reduce pesticide use among trained farmers. This cross-sectional study compares pesticide-related knowledge......-reported symptoms. The study supports IPM as a method to reduce pesticide use and potential exposure and to improve pesticide-related KAP among small-scale farmers in developing countries....

  8. Application of spectral Lanczos decomposition method to large scale problems arising in geophysics

    Energy Technology Data Exchange (ETDEWEB)

    Tamarchenko, T. [Western Atlas Logging Services, Houston, TX (United States)

    1996-12-31

    This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to the numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution to this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
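    The core of SLDM, approximating f(A)v from a small Krylov subspace rather than forming f(A) itself, can be sketched for a symmetric matrix with f(A) = exp(-A). The function choice and the dense reference check are illustrative; this is not the paper's full electromagnetic/elastic solver.

    ```python
    import numpy as np

    def lanczos_expv(A, v, m=30):
        """Approximate exp(-A) v in an m-dimensional Krylov subspace via
        the symmetric Lanczos process: project A onto a small tridiagonal
        matrix T and evaluate the matrix function there."""
        n = len(v)
        m = min(m, n)
        V = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(m):
            w = A @ V[:, j]
            alpha[j] = V[:, j] @ w
            w -= alpha[j] * V[:, j]
            if j > 0:
                w -= beta[j - 1] * V[:, j - 1]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                V[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        # exp(-T) e1 via eigendecomposition of the small tridiagonal T
        eps, U = np.linalg.eigh(T)
        e1 = np.zeros(m); e1[0] = 1.0
        y = U @ (np.exp(-eps) * (U.T @ e1))
        return np.linalg.norm(v) * (V @ y)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50)); A = 0.1 * (A + A.T)  # symmetric
    v = rng.standard_normal(50)
    approx = lanczos_expv(A, v, m=30)
    eps, U = np.linalg.eigh(A)
    exact = U @ (np.exp(-eps) * (U.T @ v))
    print(np.linalg.norm(approx - exact) < 1e-6)  # True
    ```

    The same projection handles the sine/cosine functions needed for the wave problems; only the small function of `T` changes.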

  9. New SCALE-4 features related to cross-section processing

    International Nuclear Information System (INIS)

    Petrie, L.M.; Landers, N.F.; Greene, N.M.; Parks, C.V.

    1991-01-01

    The SCALE code system has a standardized scheme for processing problem-dependent cross sections from problem-independent master libraries. Some improvements and new capabilities in the processing scheme have been incorporated into the new Version 4 release of the SCALE system. The new features include the capability to consider annular cylindrical and spherical unit cells, an improved Dancoff factor formulation, and changes to the NITAWL-II module to perform resonance self-shielding with reference to infinitely dilute values. A review of these major changes in the cross-section processing scheme for SCALE-4 is presented in this paper.

  10. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating, and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides the two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. A demonstration is presented at the end of the article, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.
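    As a toy illustration of comparing ARGs, the function below scores two small attributed graphs whose nodes are already in correspondence, combining node-attribute closeness with edge-set overlap. The weighting and the similarity formulas are assumptions for illustration, not the paper's actual similarity measure.

    ```python
    import numpy as np

    def arg_similarity(nodes_a, edges_a, nodes_b, edges_b, w_node=0.5):
        """Similarity in [0, 1] between two attributed relational graphs
        with node lists of equal length (nodes already in correspondence):
        a weighted mix of node-attribute closeness and edge overlap."""
        na, nb = np.asarray(nodes_a, float), np.asarray(nodes_b, float)
        d = np.linalg.norm(na - nb, axis=1)        # attribute distances
        node_sim = float(np.mean(1.0 / (1.0 + d)))
        ea, eb = set(map(tuple, edges_a)), set(map(tuple, edges_b))
        edge_sim = len(ea & eb) / max(len(ea | eb), 1)  # Jaccard overlap
        return w_node * node_sim + (1 - w_node) * edge_sim

    # identical graphs score 1.0
    nodes = [[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]]
    edges = [(0, 1), (1, 2)]
    print(arg_similarity(nodes, edges, nodes, edges))  # 1.0
    ```

    The paper's algorithm iterates such comparisons over candidate correspondences inside each road-network block to pick the best matching pairs.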

  11. Driving forces behind the stagnancy of China's energy-related CO2 emissions from 1996 to 1999: the relative importance of structural change, intensity change and scale change

    International Nuclear Information System (INIS)

    Libo Wu; Kaneko, S.; Matsuoka, S.

    2005-01-01

    It is noteworthy that the income elasticity of energy consumption in China shifted from positive to negative after 1996, accompanied by an unprecedented decline in energy-related CO₂ emissions. This paper therefore investigates the evolution of energy-related CO₂ emissions in China from 1985 to 1999 and the underlying driving forces, using the newly proposed three-level 'perfect decomposition' method and provincially aggregated data. The province-based estimates and analyses reveal a 'sudden stagnancy' of energy consumption, supply, and energy-related CO₂ emissions in China from 1996 to 1999. The speed of the decrease in energy intensity and the slowdown in the growth of average labor productivity of industrial enterprises may have been the dominant contributors to this 'stagnancy'. The findings of this paper point to the highest rate of deterioration of state-owned enterprises in early 1996, the industrial restructuring caused by changes in ownership, the shutdown of small-scale power plants, and the introduction of policies to improve energy efficiency as probable factors. Taking into account the characteristics of these key driving forces, we characterize China's decline in energy-related CO₂ emissions as a short-term fluctuation and consider it likely that China will resume an increasing trend from a lower starting point in the near future. (author)
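    A 'perfect decomposition' is one that leaves no residual; a standard example of this class is the log-mean Divisia index (LMDI), sketched below for the simplest two-factor case C = Q × I (scale × intensity). Equating the paper's three-level method with two-factor LMDI is an assumption for illustration; the point is that the factor contributions sum exactly to the total emission change.

    ```python
    from math import log

    def logmean(a, b):
        """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b)."""
        return (a - b) / (log(a) - log(b)) if a != b else a

    def lmdi_two_factor(Q0, I0, QT, IT):
        """Residual-free LMDI decomposition of the change in C = Q * I
        (scale * intensity) into a scale effect and an intensity effect
        that sum exactly to Delta C."""
        C0, CT = Q0 * I0, QT * IT
        L = logmean(CT, C0)
        scale_effect = L * log(QT / Q0)
        intensity_effect = L * log(IT / I0)
        return scale_effect, intensity_effect

    # economy grows (Q: 100 -> 130) while intensity falls (I: 2.0 -> 1.5)
    s, i = lmdi_two_factor(100.0, 2.0, 130.0, 1.5)
    print(round(s + i, 6), 130.0 * 1.5 - 100.0 * 2.0)  # both -5.0
    ```

    Positive scale effect plus a larger negative intensity effect reproduces exactly the kind of stagnating-emissions pattern the abstract describes for 1996-1999.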

  12. Notes on analytical study of holographic superconductors with Lifshitz scaling in external magnetic field

    International Nuclear Information System (INIS)

    Zhao, Zixu; Pan, Qiyuan; Jing, Jiliang

    2014-01-01

    We employ the matching method to analytically investigate holographic superconductors with Lifshitz scaling in an external magnetic field. We discuss systematically the restricted conditions for the matching method and find that this analytic method is not always powerful enough to explore the effect of the external magnetic field on the holographic superconductors unless the matching point is chosen in an appropriate range and the dynamical exponent z satisfies the relation z=d−1 or z=d−2. From the analytic treatment, we observe that Lifshitz scaling can hinder the condensation from forming, which can be used to back up the numerical results. Moreover, we study the effect of Lifshitz scaling on the upper critical magnetic field and reproduce the well-known relation obtained from Ginzburg–Landau theory.
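    The abstract does not spell out the "well-known relation obtained from Ginzburg–Landau theory"; in the standard textbook form (an assumption here), the upper critical field vanishes linearly near the critical temperature:

    ```latex
    B_{c2}(T) \;\propto\; \left(1 - \frac{T}{T_c}\right), \qquad T \to T_c^- .
    ```

    Reproducing this linear-in-temperature behaviour from the holographic side is the usual consistency check against Ginzburg–Landau theory.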

  13. Adaptation study of the Turkish version of the Gambling-Related Cognitions Scale (GRCS-T).

    Science.gov (United States)

    Arcan, K; Karanci, A N

    2015-03-01

    This study aimed to adapt and to test the validity and reliability of the Turkish version of the Gambling-Related Cognitions Scale (GRCS-T) that was developed by Raylu and Oei (Addiction 99(6):757-769, 2004a). The significance of erroneous cognitions in the development and maintenance of gambling problems, the importance of promoting gambling research in different cultures, and the limited information about individuals who gamble in Turkey, owing to limited research interest in gambling, inspired the present study. The sample consisted of 354 voluntary male participants who were above age 17 and betting on sports and horse races, selected through convenience sampling in betting terminals. The results of the confirmatory factor analysis following the original scale's five-factor structure indicated a good fit to the data. The analyses were carried out with 21 items due to the relatively inadequate psychometric properties of two GRCS-T items. Correlational analyses and group comparison tests supported the concurrent and criterion validity of the GRCS-T. Cronbach's alpha coefficient for the whole scale was 0.84, whereas the coefficients ranged between 0.52 and 0.78 for the subscales of the GRCS-T. The findings, suggesting that the GRCS-T is a valid and reliable instrument to identify gambling cognitions in Turkish samples, are discussed considering the possible influence of the sample make-up and cultural texture within the limitations of the present study and in the light of the relevant literature.
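    The reliability coefficient reported above (Cronbach's alpha of 0.84 for the whole scale) follows a standard formula that can be computed directly from an item-score matrix; the sketch below uses synthetic scores, not GRCS-T data.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) score matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        items = np.asarray(items, float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    # perfectly parallel items give the maximum alpha of 1
    scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]], float)
    print(cronbach_alpha(scores))  # 1.0
    ```

    Subscale coefficients (0.52-0.78 above) are obtained by applying the same formula to each subscale's item columns separately.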

  14. An efficient and novel computation method for simulating diffraction patterns from large-scale coded apertures on large-scale focal plane arrays

    Science.gov (United States)

    Shrekenhamer, Abraham; Gottesman, Stephen R.

    2012-10-01

    A novel and memory-efficient method for computing diffraction patterns produced on large-scale focal planes by large-scale Coded Apertures at wavelengths where diffraction effects are significant has been developed and tested. The scheme, readily implementable on portable computers, overcomes the memory limitations of present state-of-the-art simulation codes such as Zemax. The method consists of first calculating a set of reference complex field (amplitude and phase) patterns on the focal plane produced by a single (reference) central hole, extending to twice the focal plane array size, with one such pattern for each Line-of-Sight (LOS) direction and wavelength in the scene, and with the pattern amplitude corresponding to the square root of the spectral irradiance from each such LOS direction in the scene at selected wavelengths. Next, the set of reference patterns is transformed to generate pattern sets for other holes. The transformation consists of a translational pattern shift corresponding to each hole's position offset and an electrical phase shift corresponding to each hole's position offset and the incoming radiance's direction and wavelength. The set of complex patterns for each direction and wavelength is then summed coherently and squared for each detector to yield a set of power patterns unique for each direction and wavelength. Finally, the set of power patterns is summed to produce the full waveband diffraction pattern from the scene. With this tool researchers can now efficiently simulate diffraction patterns produced from scenes by large-scale Coded Apertures onto large-scale focal plane arrays to support the development and optimization of coded aperture masks and image reconstruction algorithms.
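    The shift-and-phase scheme described above can be sketched in toy form: take a reference complex field for the central hole, translate it for each hole offset, apply the corresponding electrical phase, sum coherently, and square. The delta-function reference and the phase convention below are illustrative assumptions.

    ```python
    import numpy as np

    def aperture_pattern(reference, holes, k, direction):
        """Coherently sum translated, phase-shifted copies of a reference
        complex field (pattern of a single central hole) to get the power
        pattern of a multi-hole coded aperture for one incoming direction.
        holes: list of (dy, dx) pixel offsets from the central hole."""
        field = np.zeros_like(reference, dtype=complex)
        for dy, dx in holes:
            # translational shift of the reference pattern ...
            shifted = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)
            # ... plus the electrical phase from the hole offset/direction
            phase = np.exp(1j * k * (dy * direction[0] + dx * direction[1]))
            field += phase * shifted
        return np.abs(field) ** 2          # coherent sum, then square

    ny, nx = 32, 32
    ref = np.zeros((ny, nx), complex); ref[ny // 2, nx // 2] = 1.0
    power = aperture_pattern(ref, holes=[(0, 0), (0, 4)], k=2 * np.pi,
                             direction=(0.0, 0.0))
    print(power.sum())  # 2.0: two non-overlapping unit impulses
    ```

    A realistic reference pattern (a diffracted hole field rather than a delta) makes the shifted copies overlap, and the per-direction, per-wavelength power patterns are then summed incoherently to form the full waveband image.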

  15. The Pore-scale modeling of multiphase flows in reservoir rocks using the lattice Boltzmann method

    Science.gov (United States)

    Mu, Y.; Baldwin, C. H.; Toelke, J.; Grader, A.

    2011-12-01

    Digital rock physics (DRP) is a new technology to compute the physical and fluid-flow properties of reservoir rocks. In this approach, pore-scale images of the porous rock are obtained and processed to create a highly accurate 3D digital rock sample, and then the rock properties are evaluated by advanced numerical methods at the pore scale. Ingrain's DRP technology is a breakthrough for oil and gas companies that need large volumes of accurate results faster than the current special core analysis (SCAL) laboratories can normally deliver. In this work, we compute the multiphase fluid-flow properties of 3D digital rocks using the D3Q19 immiscible LBM with two relaxation times (TRT). For efficient implementation on the GPU, we improved and reformulated the color-gradient model proposed by Gunstensen and Rothman. Furthermore, we use a single lattice with a sparse data structure: memory is allocated only for pore nodes on the GPU. We achieved more than 100 million fluid lattice updates per second (MFLUPS) for two-phase LBM on a single Fermi GPU and high parallel efficiency on multiple GPUs. We present and discuss our simulation results for important two-phase fluid-flow properties, such as capillary pressure and relative permeabilities. We also investigate the effects of resolution and wettability on multiphase flows. Comparison of direct measurement results with the LBM-based simulations shows the practical ability of DRP to predict two-phase flow properties of reservoir rock.
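    A minimal single-phase D2Q9 BGK collide-and-stream update conveys the structure of an LBM solver; the paper's actual scheme is a two-phase D3Q19 TRT color-gradient model on GPUs, so everything below is a simplified sketch.

    ```python
    import numpy as np

    # D2Q9 lattice: discrete velocities and their weights
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

    def equilibrium(rho, u):
        """Second-order equilibrium distributions f_eq(rho, u)."""
        cu = np.einsum('qd,dxy->qxy', c, u)
        usq = np.einsum('dxy,dxy->xy', u, u)
        return rho * w[:, None, None] * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

    def lbm_step(f, tau=0.8):
        """One BGK collide-and-stream update on a fully periodic domain."""
        rho = f.sum(axis=0)                           # macroscopic density
        u = np.einsum('qd,qxy->dxy', c, f) / rho      # macroscopic velocity
        f = f - (f - equilibrium(rho, u)) / tau       # collision
        for q in range(9):                            # streaming
            f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)
        return f

    nx = ny = 16
    f = equilibrium(np.ones((nx, ny)), np.zeros((2, nx, ny)))
    f[1] *= 1.01                                      # small perturbation
    mass0 = f.sum()
    f = lbm_step(f)
    print(np.isclose(f.sum(), mass0))  # True: mass is conserved
    ```

    The two-phase color-gradient extension adds a second distribution set per fluid plus a recoloring step at interfaces, and TRT replaces the single relaxation time `tau` with separate even/odd relaxation rates.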

  16. MEASUREMENT OF MERCURY IN FISH SCALES AS AN ASSESSMENT METHOD FOR PREDICTING MUSCLE TISSUE MERCURY CONCENTRATIONS IN LARGEMOUTH BASS

    Science.gov (United States)

    The relationship between total mercury (Hg) concentration in fish scales and in tissues of largemouth bass (Micropterus salmoides) from 20 freshwater sites was developed and evaluated to determine whether scale analysis would allow a nonlethal and convenient method for predicti...
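    The scale-to-muscle relationship described above is essentially a calibration line; a hedged sketch with purely synthetic (invented) concentrations shows how such a nonlethal predictor would be fitted and applied.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical calibration data: total Hg in scales vs. muscle (mg/kg).
    # These numbers are synthetic, for illustration -- not values from the study.
    scale_hg = rng.uniform(0.05, 0.60, size=40)
    muscle_hg = 1.8 * scale_hg + 0.02 + rng.normal(0.0, 0.01, size=40)

    # least-squares calibration line: muscle = a * scale + b
    a, b = np.polyfit(scale_hg, muscle_hg, 1)

    def predict_muscle(scale_value):
        """Predict muscle-tissue Hg from a nonlethal scale sample."""
        return a * scale_value + b

    print(round(a, 2))  # close to the assumed true slope of 1.8
    ```

    In practice the fit quality (r-squared, residual spread across the 20 sites) determines whether scale sampling is an acceptable surrogate for lethal muscle sampling.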

  17. Fractional Nottale's Scale Relativity and emergence of complexified gravity

    Energy Technology Data Exchange (ETDEWEB)

    EL-Nabulsi, Ahmad Rami [Department of Nuclear and Energy Engineering, Cheju National University, Ara-dong 1, Jeju 690-756 (Korea, Republic of)], E-mail: nabulsiahmadrami@yahoo.fr

    2009-12-15

    Fractional calculus of variations has recently gained significance in studying weak dissipative and nonconservative dynamical systems ranging from classical mechanics to quantum field theories. In this paper, fractional Nottale's Scale Relativity (NSR) for an arbitrary fractal dimension is introduced within the framework of the fractional action-like variational approach recently introduced by the author. The formalism is based on fractional differential operators that generalize the differential operators of conventional NSR but reduce to the standard formalism in the integer limit. Our main aim is to build the fractional setting for the NSR dynamical equations. Many interesting consequences arise, in particular the emergence of complexified gravity and complex time.
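    The "fractional action-like variational approach" referred to above is commonly written with a power-law weight on the Lagrangian; in a frequently cited form (an assumption here, since the abstract gives no formulas):

    ```latex
    S^{\alpha}[q] \;=\; \frac{1}{\Gamma(\alpha)} \int_{a}^{t}
      L\bigl(\tau, q(\tau), \dot q(\tau)\bigr)\,(t - \tau)^{\alpha - 1}\,\mathrm{d}\tau,
      \qquad 0 < \alpha \le 1,
    ```

    which reduces to the usual action for α = 1, mirroring the abstract's statement that the fractional operators reduce to the standard NSR formalism in the integer limit.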

  18. Particle generation methods applied in large-scale experiments on aerosol behaviour and source term studies

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Gomez, F.J.; Martin, M.

    1997-01-01

    In aerosol research, aerosols of known size, shape, and density are highly desirable because most aerosol properties depend strongly on particle size. However, constant and reproducible generation of aerosol particles whose size and concentration can be easily controlled can be achieved only in laboratory-scale tests. In large-scale experiments, different generation methods for various elements and compounds have been applied. This work presents, in brief form, a review of applications of these methods used in large-scale experiments on aerosol behaviour and source term. A description of the generation method and of the generated aerosol transport conditions is followed by the properties of the obtained aerosol, the aerosol instrumentation used, and the scheme of the aerosol generation system, wherever available. Information concerning the particular purposes of aerosol generation and reference number(s) is given at the end of each case. The methods reviewed are: evaporation-condensation, using furnace heating or a plasma torch; atomization of liquid, using compressed-air nebulizers, ultrasonic nebulizers, or atomization of liquid suspensions; and dispersion of powders. Among the projects included in this work are: ACE, LACE, GE Experiments, EPRI Experiments, LACE-Spain, UKAEA Experiments, BNWL Experiments, ORNL Experiments, MARVIKEN, SPARTA and DEMONA. The main chemical compounds studied are: Ba, Cs, CsOH, CsI, Ni, Cr, NaI, TeO₂, UO₂, Al₂O₃, Al₂SiO₅, B₂O₃, Cd, CdO, Fe₂O₃, MnO, SiO₂, AgO, SnO₂, Te, U₃O₈, BaO, CsCl, CsNO₃, urania, RuO₂, TiO₂, Al(OH)₃, BaSO₄, Eu₂O₃ and Sn. (Author)

  19. Examining related influential factors for dental calculus scaling utilization among people with disabilities in Taiwan, a nationwide population-based study.

    Science.gov (United States)

    Lai, Hsien-Tang; Kung, Pei-Tseng; Su, Hsun-Pi; Tsai, Wen-Chen

    2014-09-01

    Limited studies with large samples have been conducted on the utilization of dental calculus scaling among people with physical or mental disabilities. This study aimed to investigate the utilization of dental calculus scaling among the national disabled population. We analyzed the utilization of dental calculus scaling among disabled people using nationwide data from 2006 to 2008. Descriptive analysis and logistic regression were performed to identify factors influencing dental calculus scaling utilization. The dental calculus scaling utilization rate among people with physical or mental disabilities was 16.39%, and the annual utilization frequency was 0.2 times. The utilization rate was higher among female and non-aboriginal samples. The utilization rate decreased with increased age and disability severity, while it increased with income, education level, urbanization of residential area, and number of chronic illnesses. Gender, age, ethnicity (aboriginal or non-aboriginal), education level, urbanization of residential area, income, catastrophic illnesses, chronic illnesses, disability type, and disability severity significantly influenced the dental calculus scaling utilization rate. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Full-Scale Turbofan Engine Noise-Source Separation Using a Four-Signal Method

    Science.gov (United States)

    Hultgren, Lennart S.; Arechiga, Rene O.

    2016-01-01

    Contributions from the combustor to the overall propulsion noise of civilian transport aircraft are starting to become important due to turbofan design trends and expected advances in mitigation of other noise sources. During on-ground, static-engine acoustic tests, combustor noise is generally sub-dominant to other engine noise sources because of the absence of in-flight effects. Consequently, noise-source separation techniques are needed to extract combustor-noise information from the total noise signature in order to further progress. A novel four-signal source-separation method is applied to data from a static, full-scale engine test and compared to previous methods. The new method is, in a sense, a combination of two- and three-signal techniques and represents an attempt to alleviate some of the weaknesses of each of those approaches. This work is supported by the NASA Advanced Air Vehicles Program, Advanced Air Transport Technology Project, Aircraft Noise Reduction Subproject and the NASA Glenn Faculty Fellowship Program.
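    A building block underlying two-, three-, and four-signal separation techniques is the coherent output power: the portion of a far-field microphone's spectrum that is coherent with an internal (e.g. combustor) sensor. The sketch below estimates it with segment-averaged FFTs on synthetic signals; it illustrates only the two-signal case, not the paper's four-signal method.

    ```python
    import numpy as np

    def coherent_output_power(internal, farfield, nseg=64, seglen=256):
        """Two-signal coherent output power: the part of the far-field
        spectrum coherent with an internal sensor,
        Gyy_coh(f) = |Gxy|^2 / Gxx, with spectra averaged over segments."""
        Gxx = np.zeros(seglen // 2 + 1)
        Gyy = np.zeros(seglen // 2 + 1)
        Gxy = np.zeros(seglen // 2 + 1, complex)
        for i in range(nseg):
            x = internal[i * seglen:(i + 1) * seglen]
            y = farfield[i * seglen:(i + 1) * seglen]
            X, Y = np.fft.rfft(x), np.fft.rfft(y)
            Gxx += np.abs(X) ** 2; Gyy += np.abs(Y) ** 2
            Gxy += np.conj(X) * Y
        coh = np.abs(Gxy) ** 2 / (Gxx * Gyy)   # ordinary coherence
        return coh, np.abs(Gxy) ** 2 / Gxx     # coherent power in y

    rng = np.random.default_rng(1)
    n = 64 * 256
    source = rng.standard_normal(n)            # "combustor" noise
    internal = source
    farfield = source + rng.standard_normal(n) # + uncorrelated jet noise
    coh, _ = coherent_output_power(internal, farfield)
    print(coh.mean() > 0.3)  # True: about half the far-field power is coherent
    ```

    The three- and four-signal extensions add microphone pairs so that contamination of the internal sensor by non-combustor noise can also be removed, which is the weakness of the plain two-signal estimate sketched here.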