WorldWideScience

Sample records for multiple scale method

  1. Multiple time scale methods in tokamak magnetohydrodynamics

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force-balance equilibrium as they evolve. The large-mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.
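
The gain from stepping over the fast time scale implicitly can be seen in a toy sketch (a generic stiff oscillation, not Jardin's MHD scheme; all numbers are illustrative):

```python
# Fast oscillation y' = i*w*y with w >> 1, advanced with a time step dt
# much larger than 1/w. Explicit Euler blows up at this step size, while
# implicit (backward) Euler stays bounded and damps the fast oscillation,
# which is the behavior exploited when averaging over fast time scales.
w = 100.0            # fast frequency
dt = 0.1             # step size, w*dt = 10 >> 1
steps = 50

y_exp = 1.0 + 0.0j
y_imp = 1.0 + 0.0j
for _ in range(steps):
    y_exp = y_exp * (1 + 1j * w * dt)    # explicit Euler: |1 + i*w*dt| > 1
    y_imp = y_imp / (1 - 1j * w * dt)    # implicit Euler: |1/(1 - i*w*dt)| < 1

print(abs(y_exp), abs(y_imp))            # explicit diverges, implicit decays
```

The exact solution has |y| = 1 for all time; the implicit scheme averages the oscillation away rather than tracking it, which is acceptable when only the slower dynamics are of interest.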

  2. Multiple time-scale methods in particle simulations of plasmas

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
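
Of the schemes listed, subcycling is the easiest to sketch: the fast degrees of freedom are advanced with many small steps inside each large step of the slow ones. A minimal toy example (two uncoupled test equations, not a particle code):

```python
import math

# Toy subcycling sketch: a slow variable s (ds/dt = -s) is advanced with a
# large step DT, while a fast oscillator (x'' = -w^2 x) is "subcycled" with
# m small leapfrog steps dt = DT/m inside each large step.
w = 50.0                 # fast oscillator frequency
DT = 0.1                 # large (slow) time step; w*DT = 5 is far too big alone
m = 100                  # subcycles per large step; w*dt = 0.05 resolves w
T = 1.0

s = 1.0                  # slow variable
x, v = 1.0, 0.0          # fast oscillator
t = 0.0
while t < T - 1e-12:
    s += DT * (-s)                        # slow push, one large Euler step
    dt = DT / m
    for _ in range(m):                    # fast push, m small leapfrog steps
        v -= 0.5 * dt * w * w * x
        x += dt * v
        v -= 0.5 * dt * w * w * x
    t += DT

err_slow = abs(s - math.exp(-T))
err_fast = abs(x - math.cos(w * T))
print(err_slow, err_fast)
```

With this split the fast oscillation is integrated accurately even though a single leapfrog step of size DT would be unstable (w·DT > 2); the remaining error is dominated by the first-order slow push.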

  3. Regularization methods for ill-posed problems in multiple Hilbert scales

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed.

  4. Non-Abelian Kubo formula and the multiple time-scale method

    Zhang, X.; Li, J.

    1996-01-01

    The non-Abelian Kubo formula is derived from kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern-Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed.

  5. A multiple-scale power series method for solving nonlinear ordinary differential equations

    Chein-Shan Liu

    2016-02-01

    The power series solution is a cheap and effective method for solving nonlinear problems, such as the Duffing-van der Pol oscillator, the Volterra population model and nonlinear boundary value problems. A novel power series method is developed by introducing multiple scales $R_k$ into the power terms $(t/R_k)^k$; the scales are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. The method avoids multiplying a huge value by a tiny one, thereby decreasing the numerical instability that is the main cause of failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provides very accurate numerical solutions of the problems considered in this paper.
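
The conditioning argument can be illustrated with a deliberately simple choice of scale (a single R equal to the interval length T, rather than the paper's explicitly derived R_k):

```python
import numpy as np

# Fitting data on [0, T] with plain monomials t^k produces a badly
# conditioned Vandermonde matrix, because the columns mix huge values
# (t^k ~ T^k) with values of order one. Rescaling each term as (t/R)^k
# with R = T keeps every column O(1) and improves the conditioning.
T = 10.0
k = 10                                                  # polynomial degree
t = np.linspace(0.1, T, 50)

V_plain = np.vander(t, k + 1, increasing=True)          # columns t^0 .. t^k
V_scaled = np.vander(t / T, k + 1, increasing=True)     # columns (t/T)^k

print(np.linalg.cond(V_plain), np.linalg.cond(V_scaled))
```

The scaled basis is still a monomial basis, so it remains ill-conditioned at high degree; the point is only that removing the T^k spread between columns already buys many orders of magnitude.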

  6. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the solution procedure are simple and straightforward. The classical multiple time scale method (MS) and the multiple scales Lindstedt-Poincare method (MSLP) do not give the desired results for forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  7. Dynamical properties of the growing continuum using multiple-scale method

    Hynčík L.

    2008-12-01

    The theory of growth and remodeling is applied to a 1D continuum, which can serve, e.g., as a model of a muscle fibre or a piezo-electric stack. A hyperelastic material described by the free energy potential suggested by Fung is used, and the change of stiffness is taken into account. The corresponding equations define a dynamical system with two degrees of freedom. Its stability and the properties of its bifurcations are studied using the multiple-scale method. Conditions are given under which a degenerate Hopf bifurcation occurs.

  8. On the nonlinear dynamics of trolling-mode AFM: Analytical solution using multiple time scales method

    Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza

    2018-06-01

    Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by considerably reducing the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted on the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of the TR-AFM resonator utilizing the multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operating in air and liquid (for different penetration lengths of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by comparison with both the finite element solution of the main partial differential equations and experimental observations. The effect of the excitation angle of the resonator on the horizontal oscillation of the probe tip and the effects of different parameters on the frequency response of the system are investigated.

  9. Accurate scaling on multiplicity

    Golokhvastov, A.I.

    1989-01-01

    The commonly used formula of KNO scaling, P_n = Ψ(n/⟨n⟩), for discrete distributions (multiplicity distributions) is shown to contradict mathematically the normalization condition ΣP_n = 1. The effect is essential even at ISR energies. A consistent generalization of the concept of similarity for multiplicity distributions is obtained. The multiplicity distributions of negative particles in PP and also e⁺e⁻ inelastic interactions are similar over the whole studied energy range. Collider data are discussed.
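
The normalization inconsistency is easy to verify numerically. Assuming, purely for illustration, an exponential scaling function Ψ(z) = e^(−z) (normalized so that its integral over z is 1):

```python
import math

# If P_n is forced to the KNO form P_n = psi(n / mean) / mean with a fixed
# scaling function psi, the sum of the discrete probabilities over integer
# n is not exactly 1; the mismatch shrinks only slowly with the mean.
def kno_sum(mean, n_max=10000):
    # psi(z) = exp(-z); the integral over z of psi is 1, but the sum is not
    return sum(math.exp(-n / mean) / mean for n in range(n_max + 1))

for mean in (2.0, 5.0, 20.0):
    print(mean, kno_sum(mean))   # deviates from 1 by roughly 1/(2*mean)
```

The deviation survives at any finite mean multiplicity, which is the mathematical point of the abstract: an exact continuous KNO form cannot be exactly normalized on the integers.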

  10. Multiple time scale dynamics

    Kuehn, Christian

    2015-01-01

    This book provides an introduction to dynamical systems with multiple time scales. The approach it takes is to provide an overview of key areas, particularly topics that are less available in introductory form. The broad range of topics included makes it accessible for students and researchers new to the field to gain a quick and thorough overview. The first of its kind, this book merges a wide variety of different mathematical techniques into a more unified framework. The book is highly illustrated with many examples and exercises and an extensive bibliography. The target audience of this book is senior undergraduates, graduate students and researchers interested in using multiple time scale dynamics in nonlinear science, either from a theoretical or a mathematical modeling perspective.

  11. A multiple-scaling method of the computation of threaded structures

    Andrieux, S.; Leger, A.

    1989-01-01

    The numerical computation of threaded structures usually leads to very large finite element problems. It was therefore very difficult to carry out parametric studies, especially in nonlinear cases involving plasticity or unilateral contact conditions. Nevertheless, such parametric studies are essential in many industrial problems, for instance for the evaluation of various repair processes for the closure studs of PWRs. It is well known that such repairs generally involve several modifications of the thread geometry, the number of active threads, the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part and a local part. The local problem is solved by the finite element method on the precise geometry of the thread for some elementary loadings. The global problem is formulated at the gudgeon scale and is reduced to a one-dimensional one. The resolution of this global problem involves an insignificant computational cost. A post-processing step then gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scale approach, the method is described. A validation by comparison with a direct finite element computation and some further applications are presented.

  12. MULTIPLE SCALES FOR SUSTAINABLE RESULTS

    This session will highlight recent research that incorporates the use of multiple scales and innovative environmental accounting to better inform decisions that affect sustainability, resilience, and vulnerability at all scales. Effective decision-making involves assessment at mu...

  13. Calculation of axial secular frequencies in a nonlinear ion trap with hexapole, octupole, decapole and dodecapole superpositions by the combined methods of multiple scales and Lindstedt-Poincare

    Doroudi, A.; Emampour, M.; Emampour, M.

    2012-01-01

    In this paper, a combination of the method of multiple scales and the Lindstedt-Poincare method, a perturbative technique, is used to calculate the axial secular frequencies of a nonlinear ion trap in the presence of second-, third-, fourth- and fifth-order nonlinear terms of the potential distribution within the trap. The calculated frequencies are compared with the results of the multiple scales method alone and with the exact results.
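
The flavor of such a perturbative frequency calculation can be shown on a generic Duffing oscillator (a standard textbook case, not the ion-trap potential of the paper): for x'' + x + ε·x³ = 0 with x(0) = A, x'(0) = 0, the first-order Lindstedt-Poincare frequency is ω = 1 + 3εA²/8.

```python
import math

# Compare the first-order Lindstedt-Poincare frequency of a Duffing
# oscillator with a direct numerical integration (RK4). The first zero of
# x(t), starting from the turning point x = A, occurs at a quarter period.
eps, A = 0.1, 1.0
w_pert = 1.0 + 3.0 * eps * A**2 / 8.0

def f(x, v):
    return v, -(x + eps * x**3)

dt, x, v, t = 1e-4, A, 0.0, 0.0
while x > 0.0:
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = f(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = f(x + dt*k3x, v + dt*k3v)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    t += dt

w_num = 2.0 * math.pi / (4.0 * t)   # t at the first zero is ~ T/4
print(w_pert, w_num)
```

For these parameters the first-order estimate agrees with the numerical frequency to well under one percent; the higher-order nonlinear terms treated in the paper refine exactly this kind of estimate.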

  14. Production method and cost of commercial-scale offshore cultivation of kelp in the Faroe Islands using multiple partial harvesting

    Grandorf Bak, Urd; Mols-Mortensen, Agnes; Gregersen, Olavur

    2018-01-01

    was conducted. The total cost per kg dw of cultivated S. latissima decreased when the number of possible harvests without re-seeding was increased (from € 36.73 to € 9.27). This work has demonstrated that large-scale kelp cultivation is possible using multiple partial harvesting in the Faroe Islands...

  15. Neutron source multiplication method

    Clayton, E.D.

    1985-01-01

    Extensive use has been made of neutron source multiplication in thousands of measurements of critical masses and configurations, and in subcritical neutron-multiplication measurements in situ that provide data for criticality prevention and control in nuclear materials operations. There is continuing interest in developing reliable methods for monitoring the reactivity, or k_eff, of plant operations, but the required measurements are difficult to carry out and interpret on the far subcritical configurations usually encountered. The relationship between neutron multiplication and reactivity is briefly discussed, and data are presented to illustrate problems associated with the absolute measurement of neutron multiplication and reactivity in subcritical systems. A number of curves of inverse multiplication have been selected from a variety of experiments, showing variations observed in multiplication during the course of critical and subcritical experiments where different methods of reactivity addition were used, with different neutron source and detector positions. Concern is raised regarding the meaning and interpretation of k_eff as might be measured in a far subcritical system, because of the modal effects and spectrum differences that exist between the subcritical and critical systems. Because of this, the calculation of k_eff identical with unity for the critical assembly, although necessary, may not be sufficient to assure safety margins in calculations pertaining to far subcritical systems. Further study is needed on the interpretation and meaning of k_eff in the far subcritical system.
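
The inverse-multiplication curves mentioned above can be sketched numerically. All numbers below are made up for illustration; the underlying relation M = 1/(1 − k_eff) is the standard point-source multiplication formula.

```python
# Sketch of a 1/M extrapolation: the source multiplication M = 1/(1 - k_eff)
# diverges as k_eff -> 1, so 1/M = 1 - k_eff falls toward zero. If k_eff
# rises linearly with the mass loading (a hypothetical assumption here),
# extrapolating 1/M to zero estimates the critical mass.
masses = [1.0, 2.0, 3.0, 4.0]

def k_of_mass(m):
    return 0.18 * m                      # hypothetical: critical at m = 1/0.18

inv_M = [1.0 - k_of_mass(m) for m in masses]   # 1/M = 1 - k_eff

# Linear extrapolation of the last two points to 1/M = 0
m1, m2 = masses[-2], masses[-1]
y1, y2 = inv_M[-2], inv_M[-1]
slope = (y2 - y1) / (m2 - m1)
m_crit = m2 - y2 / slope
print(m_crit)   # ~5.56, i.e. 1/0.18
```

In practice the curve is rarely this linear, which is precisely the interpretation problem the abstract raises: source and detector positions and modal effects bend the measured 1/M curve in far subcritical configurations.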

  16. Estimating scaled treatment effects with multiple outcomes.

    Kennedy, Edward H; Kangovi, Shreya; Mitra, Nandita

    2017-01-01

    In classical study designs, the aim is often to learn about the effects of a treatment or intervention on a single outcome; in many modern studies, however, data on multiple outcomes are collected and it is of interest to explore effects on multiple outcomes simultaneously. Such designs can be particularly useful in patient-centered research, where different outcomes might be more or less important to different patients. In this paper, we propose scaled effect measures (via potential outcomes) that translate effects on multiple outcomes to a common scale, using mean-variance and median-interquartile range based standardizations. We present efficient, nonparametric, doubly robust methods for estimating these scaled effects (and weighted average summary measures), and for testing the null hypothesis that treatment affects all outcomes equally. We also discuss methods for exploring how treatment effects depend on covariates (i.e., effect modification). In addition to describing efficiency theory for our estimands and the asymptotic behavior of our estimators, we illustrate the methods in a simulation study and a data analysis. Importantly, and in contrast to much of the literature concerning effects on multiple outcomes, our methods are nonparametric and can be used not only in randomized trials to yield increased efficiency, but also in observational studies with high-dimensional covariates to reduce confounding bias.

  17. Beyond KNO multiplicative cascades and novel multiplicity scaling laws

    Hegyi, S

    1999-01-01

    The collapse of multiplicity distributions P_n onto a universal scaling curve arises when P_n is expressed as a function of the standardized multiplicity (n-c)/λ, with c and λ being location and scale parameters governed by leading-particle effects and the growth of the average multiplicity. It is demonstrated that self-similar multiplicative cascade processes such as QCD parton branching naturally lead to a novel type of scaling behavior of P_n which manifests itself in Mellin space through a location change controlled by the degree of multifractality and a scale change governed by the depth of the cascade. Applying the new scaling rule, it is shown how to restore the data-collapsing behavior of P_n measured in hh collisions at ISR and SPS energies.

  18. A method for mapping fire hazard and risk across multiple scales and its application in fire management

    Robert E. Keane; Stacy A. Drury; Eva C. Karau; Paul F. Hessburg; Keith M. Reynolds

    2010-01-01

    This paper presents modeling methods for mapping fire hazard and fire risk using a research model called FIREHARM (FIRE Hazard and Risk Model) that computes common measures of fire behavior, fire danger, and fire effects to spatially portray fire hazard over space. FIREHARM can compute a measure of risk associated with the distribution of these measures over time using...

  19. Study on TVD parameters sensitivity of a crankshaft using multiple scale and state space method considering quadratic and cubic non-linearities

    R. Talebitooti

    In this paper, the effect of the quadratic and cubic non-linearities of a system consisting of a crankshaft and a torsional vibration damper (TVD) is taken into account. The TVD consists of a non-linear elastomer material used for controlling the torsional vibration of the crankshaft. The method of multiple scales is used to solve the governing equations of the system, and the frequency response of the system for both harmonic and sub-harmonic resonances is extracted. In addition, the effects of the detuning parameters and other dimensionless parameters for a case of harmonic resonance are investigated. Moreover, the external forces, including both inertia and gas forces, are simultaneously applied to the model. Finally, in order to study the effectiveness of the parameters, the dimensionless governing equations of the system are solved using the state space method, and the effects of the torsional damper as well as all corresponding parameters of the system are discussed.

  20. Accuracy Improvement of the Method of Multiple Scales for Nonlinear Vibration Analyses of Continuous Systems with Quadratic and Cubic Nonlinearities

    Akira Abe

    2010-01-01

    The application of Galerkin's procedure to the equation of motion yields nonlinear ordinary differential equations with quadratic and cubic nonlinear terms. The steady-state responses are obtained by using the discretization approach of the method of multiple scales (MMS), in which the definition of the detuning parameter, expressing the relationship between the natural frequency and the driving frequency, is changed in an attempt to improve the accuracy of the solutions. The validity of the solutions is discussed by comparing them with solutions of the direct approach of the MMS and with the finite difference method.

  1. Method of complex scaling

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented.

  2. SDG and qualitative trend based model multiple scale validation

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods are weak in completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference, and the multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.
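
The positive-inference step can be sketched in a few lines. The variables and edge signs below are hypothetical reactor-like placeholders, not the model of the paper:

```python
# Minimal sketch of qualitative inference on a signed directed graph: each
# edge carries +1 or -1, and a trend propagates along edges by sign
# multiplication, producing a qualitative testing scenario for a given
# root disturbance.
edges = {                       # (source, target): sign  (hypothetical)
    ("feed_rate", "level"): +1,
    ("level", "pressure"): +1,
    ("cooling", "temperature"): -1,
    ("temperature", "pressure"): +1,
}

def propagate(root, trend):
    """Breadth-first positive inference of qualitative trends (+1/-1)."""
    trends = {root: trend}
    frontier = [root]
    while frontier:
        node = frontier.pop(0)
        for (src, dst), sign in edges.items():
            if src == node and dst not in trends:
                trends[dst] = trends[node] * sign
                frontier.append(dst)
    return trends

# increased cooling -> temperature falls -> pressure falls
print(propagate("cooling", +1))
```

Comparing such qualitative scenarios against the signs of simulated trajectories, variable by variable and scale by scale, is the comparison step the abstract describes.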

  3. Accurate multiplicity scaling in isotopically conjugate reactions

    Golokhvastov, A.I.

    1989-01-01

    A generalization of accurate scaling of multiplicity distributions is presented. The distributions of π⁻ mesons (negative particles) and π⁺ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor applied to the unit function Ψ(z) to obtain the desired multiplicity distribution.

  4. A New Class of Scaling Correction Methods

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from the hypersurface determined by the constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool for avoiding this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods in which scale factors act only on the related components of the integrated momenta. They can exactly preserve some first integrals of motion in discrete or continuous dynamical systems, so that the rapid growth of roundoff or truncation errors is suppressed significantly.
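
A minimal sketch of the idea, using the simplest single-factor variant (one scale factor applied to the whole state, not the velocity-only schemes of the paper): after each raw integrator step for the harmonic oscillator H = (x² + v²)/2, the state is rescaled so that the energy first integral is restored exactly.

```python
import math

# Explicit Euler makes the oscillator energy grow by a factor (1 + dt^2)
# every step; rescaling the state back onto the energy surface after each
# step removes that drift entirely.
dt, steps = 0.01, 1000
E0 = 0.5                       # energy of the initial state x = 1, v = 0

def euler_step(x, v):
    return x + dt * v, v - dt * x

x_raw, v_raw = 1.0, 0.0        # uncorrected trajectory
x_cor, v_cor = 1.0, 0.0        # scaling-corrected trajectory
for _ in range(steps):
    x_raw, v_raw = euler_step(x_raw, v_raw)
    x_cor, v_cor = euler_step(x_cor, v_cor)
    s = math.sqrt(2.0 * E0 / (x_cor**2 + v_cor**2))   # scale factor
    x_cor, v_cor = s * x_cor, s * v_cor

err_raw = abs(0.5 * (x_raw**2 + v_raw**2) - E0)
err_cor = abs(0.5 * (x_cor**2 + v_cor**2) - E0)
print(err_raw, err_cor)
```

The corrected energy error stays at the rounding level while the raw error grows steadily; the paper's velocity-only factors apply the same principle more selectively, touching only the momentum components associated with each preserved integral.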

  5. Overview of the Bushland Evapotranspiration and Agricultural Remote sensing EXperiment 2008 (BEAREX08): A field experiment evaluating methods for quantifying ET at multiple scales

    Evett, Steven R.; Kustas, William P.; Gowda, Prasanna H.; Anderson, Martha C.; Prueger, John H.; Howell, Terry A.

    2012-12-01

    In 2008, scientists from seven federal and state institutions worked together to investigate temporal and spatial variations of evapotranspiration (ET) and surface energy balance in a semi-arid irrigated and dryland agricultural region of the Southern High Plains in the Texas Panhandle. This Bushland Evapotranspiration and Agricultural Remote sensing EXperiment 2008 (BEAREX08) involved determination of micrometeorological fluxes (surface energy balance) in four weighing lysimeter fields (each 4.7 ha) containing irrigated and dryland cotton and in nearby bare soil, wheat stubble and rangeland fields using nine eddy covariance stations, three large aperture scintillometers, and three Bowen ratio systems. In coordination with satellite overpasses, flux and remote sensing aircraft flew transects over the surrounding fields and region encompassing an area contributing fluxes from 10 to 30 km upwind of the USDA-ARS lysimeter site. Tethered balloon soundings were conducted over the irrigated fields to investigate the effect of advection on local boundary layer development. Local ET was measured using four large weighing lysimeters, while field scale estimates were made by soil water balance with a network of neutron probe profile water sites and from the stationary flux systems. Aircraft and satellite imagery were obtained at different spatial and temporal resolutions. Plot-scale experiments dealt with row orientation and crop height effects on spatial and temporal patterns of soil surface temperature, soil water content, soil heat flux, evaporation from soil in the interrow, plant transpiration and canopy and soil radiation fluxes. 
The BEAREX08 field experiment was unique in its assessment of ET fluxes over a broad range in spatial scales; comparing direct and indirect methods at local scales with remote sensing based methods and models using aircraft and satellite imagery at local to regional scales, and comparing mass balance-based ET ground truth with eddy covariance

  6. Multiple-scale approach for the expansion scaling of superfluid quantum gases

    Egusquiza, I. L.; Valle Basagoiti, M. A.; Modugno, M.

    2011-01-01

    We present a general method, based on a multiple-scale approach, for deriving the perturbative solutions of the scaling equations governing the expansion of superfluid ultracold quantum gases released from elongated harmonic traps. We discuss how to treat the secular terms appearing in the usual naive expansion in the trap asymmetry parameter ε and calculate the next-to-leading correction for the asymptotic aspect ratio, with significant improvement over the previous proposals.
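
The secular-term problem the abstract refers to can be illustrated on a generic textbook example (a weakly damped oscillator, not the superfluid scaling equations themselves): for x'' + ε·x' + x = 0 with x(0) = 1, x'(0) = 0, the naive expansion in ε picks up a term growing like ε·t, while the multiple-scale solution absorbs it into a slow exponential decay.

```python
import math

# Compare the naive first-order expansion (with its secular eps*t term)
# and the leading multiple-scales approximation against the exact damped
# oscillator solution, at a time t where eps*t is O(1).
eps, t = 0.1, 20.0

w = math.sqrt(1.0 - eps**2 / 4.0)                      # exact damped frequency
exact = math.exp(-eps * t / 2.0) * (math.cos(w * t)
        + (eps / (2.0 * w)) * math.sin(w * t))

# naive expansion: x ~ cos t + eps*(-(t/2)*cos t + (1/2)*sin t)
naive = (1.0 - eps * t / 2.0) * math.cos(t) + (eps / 2.0) * math.sin(t)
# multiple scales, leading order: secular term resummed into exp(-eps*t/2)
ms = math.exp(-eps * t / 2.0) * math.cos(t)

err_naive = abs(naive - exact)
err_ms = abs(ms - exact)
print(err_naive, err_ms)
```

At ε·t = 2 the naive amplitude 1 − ε·t/2 has already passed through zero, while the multiple-scales amplitude e^(−εt/2) remains accurate; this is the same mechanism by which the authors remove secular terms from the expansion in the trap asymmetry parameter.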

  7. Modelling of rate effects at multiple scales

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.

  8. Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29

    Misajon, Rose

    2009-06-01

    Background Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) impact of MS. Although previous studies have found support for the psychometric properties of the MSIS-29 using traditional methods of scale evaluation, the scale has not been subjected to a detailed Rasch analysis. Therefore, the objective of this study was to use Rasch analysis to assess the internal validity of the scale, and its response format, item fit, targeting, internal consistency and dimensionality. Methods Ninety-two persons with definite MS residing in the community were recruited from a tertiary hospital database. Patients completed the MSIS-29 as part of a larger study. Rasch analysis was undertaken to assess the psychometric properties of the MSIS-29. Results Rasch analysis showed overall support for the psychometric properties of the two MSIS-29 subscales; however, it was necessary to reduce the response format of the MSIS-29-PHYS to a 3-point response scale. Both subscales were unidimensional, had good internal consistency, and were free from item bias for sex and age. Dimensionality testing indicated it was not appropriate to combine the two subscales to form a total MSIS score. Conclusion In this first study to use Rasch analysis to fully assess the psychometric properties of the MSIS-29, support was found for the two subscales but not for the use of the total scale. Further use of Rasch analysis on the MSIS-29 in larger and broader samples is recommended to confirm these findings.

  9. Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)

    Ramp, Melina; Khan, Fary; Misajon, Rose Anne; Pallant, Julie F

    2009-01-01

    Background Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) impact of MS. Although previous studies have found support for the psychometric properties of the MSIS-29 using traditional methods of scale evaluation, the scale has not been subjected to a detailed Rasch analysis. Therefore, the objective of this study was to use Rasch analysis to assess the internal validity of the scale, and its response format, item fit, targeting, internal consistency and dimensionality. Methods Ninety-two persons with definite MS residing in the community were recruited from a tertiary hospital database. Patients completed the MSIS-29 as part of a larger study. Rasch analysis was undertaken to assess the psychometric properties of the MSIS-29. Results Rasch analysis showed overall support for the psychometric properties of the two MSIS-29 subscales, however it was necessary to reduce the response format of the MSIS-29-PHYS to a 3-point response scale. Both subscales were unidimensional, had good internal consistency, and were free from item bias for sex and age. Dimensionality testing indicated it was not appropriate to combine the two subscales to form a total MSIS score. Conclusion In this first study to use Rasch analysis to fully assess the psychometric properties of the MSIS-29 support was found for the two subscales but not for the use of the total scale. Further use of Rasch analysis on the MSIS-29 in larger and broader samples is recommended to confirm these findings. PMID:19545445

  10. Rank Dynamics of Word Usage at Multiple Scales

    José A. Morales

    2018-05-01

    The recent dramatic increase in online data availability has allowed researchers to explore human culture with unprecedented detail, such as the growth and diversification of language. In particular, it provides statistical tools to explore whether word use is similar across languages, and if so, whether these generic features appear at different scales of language structure. Here we use the Google Books N-grams dataset to analyze the temporal evolution of word usage in several languages. We apply measures proposed recently to study rank dynamics, such as the diversity of N-grams in a given rank, the probability that an N-gram changes rank between successive time intervals, the rank entropy, and the rank complexity. Using different methods, results show that there are generic properties for different languages at different scales, such as a core of words necessary to minimally understand a language. We also propose a null model to explore the relevance of linguistic structure across multiple scales, concluding that N-gram statistics cannot be reduced to word statistics. We expect our results to be useful in improving text prediction algorithms, as well as in shedding light on the large-scale features of language use, beyond linguistic and cultural differences across human populations.
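
One of the rank measures mentioned above, rank entropy, can be sketched on a tiny synthetic dataset (invented words, not the Google Books N-grams): for each rank, look at which word occupies it in successive time snapshots and compute the normalized entropy of that occupancy distribution.

```python
import math
from collections import Counter

# Rank entropy: 0 when the same word always holds a rank, approaching 1
# when the occupant keeps changing between snapshots.
snapshots = [                      # word rankings at successive times
    ["the", "of", "cat", "dog"],
    ["the", "of", "dog", "bird"],
    ["the", "of", "bird", "cat"],
    ["the", "of", "cat", "fish"],
]

def rank_entropy(rank):
    words = [snap[rank] for snap in snapshots]
    counts = Counter(words)
    n = len(words)
    h = -sum(c / n * math.log(c / n) for c in counts.values())
    return h / math.log(n)          # normalize by the maximum entropy log(n)

for r in range(4):
    print(r, rank_entropy(r))
```

In this toy data the top ranks are frozen (entropy 0) while the tail churns (entropy near 1), mirroring the paper's observation that a stable core of words coexists with volatile low ranks.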

  11. Classification of Farmland Landscape Structure in Multiple Scales

    Jiang, P.; Cheng, Q.; Li, M.

    2017-12-01

    Farmland is one of the basic terrestrial resources that support the development and survival of human beings, and it thus plays a crucial role in the national security of every country. Pattern change is the intuitive spatial representation of the scale and quality variation of farmland. Through the characteristic development of spatial shapes as well as through changes in system structures, functions and so on, farmland landscape patterns may indicate the landscape health level. Currently, it is still difficult to perform positioning analyses of landscape pattern changes that reflect the landscape structure variations of farmland with an index model. Drawing on a number of spatial properties such as location and adjacency relations, distance decay, and fringe effects, and on the patch-corridor-matrix model, this study defines a type system of farmland landscape structure at the national, provincial, and city levels. According to this definition, a classification model of farmland landscape-structure type at the pixel scale is developed and validated based on mathematical-morphology concepts and spatial-analysis methods. Then, the laws that govern farmland landscape-pattern change at multiple scales are analyzed from the perspectives of spatial heterogeneity, spatio-temporal evolution, and function transformation. The results show that the classification model of farmland landscape-structure type can reflect farmland landscape-pattern change and its effects on farmland production function. Moreover, farmland landscape change at different scales displayed significant disparity in zonality, both within specific regions and in urban-rural areas.

  12. Case studies: Soil mapping using multiple methods

    Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald

    2010-05-01

    Soil is a non-renewable resource with fundamental functions like filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). Degradation of soils is by now a well-known fact not only to scientists; decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though it is still a matter of active research how to put these strategies into practice. Common to all strategies, however, is that a description of soil state and dynamics is required as a base step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied, with the advantage of fast working progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project we present here some case studies of soil mapping performed using multiple geophysical methods. We present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (GER). Depending on soil type and actual environmental conditions, different methods yield information of different quality. By applying diverse methods we want to determine which method, or combination of methods, gives the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also gives hints of variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor concerning a successful

  13. Optimization of breeding methods when introducing multiple ...

    Optimization of breeding methods when introducing multiple resistance genes from American to Chinese wheat. JN Qi, X Zhang, C Yin, H Li, F Lin. Abstract. Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this ...

  14. Multiple scaling power in liquid gallium under pressure conditions

    Li, Renfeng; Wang, Luhong; Li, Liangliang; Yu, Tony; Zhao, Haiyan; Chapman, Karena W.; Rivers, Mark L.; Chupas, Peter J.; Mao, Ho-kwang; Liu, Haozhe

    2017-06-01

    Generally, a single scaling exponent, Df, can characterize the fractal structures of metallic glasses according to the scaling power law. However, when the scaling power law is applied to liquid gallium upon compression, the results show multiple scaling exponents, with values beyond 3 within the first four coordination spheres in real space, indicating that the power law fails to describe the fractal feature in liquid gallium. The increase in the first coordination number with pressure means that the first coordination spheres at different pressures are not geometrically similar to each other. This multiple scaling power behavior is confined within a correlation length of ξ ≈ 14–15 Å under applied pressure, according to the decay of G(r) in liquid gallium. Beyond this length the liquid gallium system can roughly be viewed as homogeneous, as indicated by the scaling exponent, Ds, which is close to 3 beyond the first four coordination spheres.

  15. Scaling and mean normalized multiplicity in hadron-nucleus collisions

    Khan, M.Q.R.; Ahmad, M.S.; Hasan, R.

    1987-01-01

    Recently it has been reported that the dependence of the mean normalized multiplicity, R_A, in hadron-nucleus collisions upon the effective number of projectile encounters, , is projectile independent. We report the failure of this kind of scaling using the world data at accelerator and cosmic ray energies. In fact, we have found that the dependence of R_A upon the number of projectile encounters hA is projectile independent. This leads to a new kind of scaling. Further, the scaled multiplicity distributions are found to be independent of the nature and energy of the incident hadron in the energy range ≅ (17.2-300) GeV. (orig.)
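
    For reference, the mean normalized multiplicity discussed above is conventionally defined (notation assumed here, not taken from this record) as the ratio of the mean charged multiplicity in hadron-nucleus (hA) collisions to that in proton-proton collisions at the same energy:

    ```latex
    R_A \;=\; \frac{\langle n \rangle_{hA}}{\langle n \rangle_{pp}}
    ```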

  16. Scaling as an Organizational Method

    Papazu, Irina Maria Clara Hansen; Nelund, Mette

    2018-01-01

    Organization studies have shown limited interest in the part that scaling plays in organizational responses to climate change and sustainability. Moreover, while scales are viewed as central to the diagnosis of the organizational challenges posed by climate change and sustainability, the role...... turn something as immense as the climate into a small and manageable problem, thus making abstract concepts part of concrete, organizational practice....

  17. Scaling of charged particle multiplicity distributions in relativistic nuclear collisions

    Ahamd, N.; Hushnud; Azmi, M.D.; Zafar, M.; Irfan, M.; Khan, M.M.; Tufail, A.

    2011-01-01

    Validity of KNO scaling in hadron-hadron and hadron-nucleus collisions has been tested by several workers. Multiplicity distributions for p-emulsion interactions are found to be consistent with the KNO scaling hypothesis for pp collisions. The applicability of the scaling law was extended to FNAL energies by earlier workers. Slattery has shown that the KNO scaling hypothesis is in fine agreement with the data for pp interactions over a wide range of incident energies. An attempt is, therefore, made to examine the scaling hypothesis using multiplicity distributions of particles produced in 3.7A GeV/c 16O-, 4.5A GeV/c and 14.5A GeV/c 28Si-nucleus interactions
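
    For reference, the KNO scaling hypothesis tested above states that, when scaled by the mean multiplicity, the multiplicity distributions collapse onto a single energy-independent function ψ (standard form, notation assumed here):

    ```latex
    \langle n(s) \rangle \, P_n(s) \;=\; \psi\!\left(\frac{n}{\langle n(s) \rangle}\right)
    ```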

  18. Hybrid multiple criteria decision-making methods

    Zavadskas, Edmundas Kazimieras; Govindan, K.; Antucheviciene, Jurgita

    2016-01-01

    Formal decision-making methods can be used to help improve the overall sustainability of industries and organisations. Recently, there has been a great proliferation of works aggregating sustainability criteria by using diverse multiple criteria decision-making (MCDM) techniques. A number of revi...

  19. Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)

    Ramp, Melina; Khan, Fary; Misajon, Rose Anne; Pallant, Julie F

    2009-01-01

    Background: Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) im...

  20. Multiple Shooting and Time Domain Decomposition Methods

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  1. Assessing network scale-up estimates for groups most at risk of HIV/AIDS: evidence from a multiple-method study of heavy drug users in Curitiba, Brazil.

    Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I

    2011-11-15

    One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
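
    The basic network scale-up estimator named above can be sketched in a few lines; this is a hedged illustration with invented toy numbers, not the Curitiba data:

    ```python
    # Network scale-up method (NSUM), basic estimator:
    #   N_hidden ≈ N_total * (alters reported in the hidden group)
    #                      / (respondents' total personal network sizes)

    N_total = 1_800_000                              # hypothetical city population
    reported_alters = [2, 0, 1, 3, 0, 1]             # hidden-group members known per respondent
    network_sizes = [290, 310, 250, 400, 180, 330]   # estimated personal network sizes

    def scale_up_estimate(N, y, d):
        """Ratio estimator: pool reported alters over pooled network sizes."""
        return N * sum(y) / sum(d)

    print(round(scale_up_estimate(N_total, reported_alters, network_sizes)))  # -> 7159
    ```

    The generalized estimator mentioned in the abstract additionally adjusts for transmission and barrier effects (how visible hidden-group membership is to acquaintances), which this sketch omits.
    
    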

  2. Multiple scales in metapopulations of public goods producers

    Bauer, Marianne; Frey, Erwin

    2018-04-01

    Multiple scales in metapopulations can give rise to paradoxical behavior: in a conceptual model for a public goods game, the species associated with a fitness cost due to the public good production can be stabilized in the well-mixed limit due to the mere existence of these scales. The scales in this model involve a length scale corresponding to separate patches, coupled by mobility, and separate time scales for reproduction and interaction with a local environment. Contrary to the well-mixed high mobility limit, we find that for low mobilities, the interaction rate progressively stabilizes this species due to stochastic effects, and that the formation of spatial patterns is not crucial for this stabilization.

  3. HMC algorithm with multiple time scale integration and mass preconditioning

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
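
    The multiple time scale integration can be illustrated schematically: a cheap force is integrated with several fine substeps inside each coarse step of an expensive force (Sexton-Weingarten style). This is a hedged toy sketch with quadratic potentials standing in for the gauge and preconditioned fermion forces; it is not a lattice implementation:

    ```python
    # Two-level nested leapfrog: F1 (cheap, stiff) on the fine scale,
    # F2 (expensive, soft) on the coarse scale.

    def nested_leapfrog(q, p, F1, F2, tau, n_steps, m):
        dt = tau / n_steps
        for _ in range(n_steps):
            p += 0.5 * dt * F2(q)            # half kick from the outer (expensive) force
            for _ in range(m):               # m inner leapfrog steps on the fine scale
                p += 0.5 * (dt / m) * F1(q)
                q += (dt / m) * p
                p += 0.5 * (dt / m) * F1(q)
            p += 0.5 * dt * F2(q)            # closing outer half kick
        return q, p

    # Toy split potential: V(q) = 0.5*q^2 (inner) + 0.05*q^2 (outer)
    F1 = lambda q: -q
    F2 = lambda q: -0.1 * q
    q, p = nested_leapfrog(1.0, 0.0, F1, F2, tau=1.0, n_steps=10, m=5)
    ```

    The scheme is reversible and area-preserving at both levels, so the HMC accept/reject step remains exact; the total energy 0.5*p**2 + 0.55*q**2 is conserved here to leapfrog accuracy.
    
    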

  4. Multiple predictor smoothing methods for sensitivity analysis

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
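
    The idea of ranking inputs by nonparametrically smoothed variance explained can be sketched as follows. This is a hedged, pure-Python stand-in: a nearest-neighbour mean smoother plays the role of LOESS, and the data are a toy problem rather than the WIPP assessment:

    ```python
    import random

    def smooth(x, y, k=15):
        """Nearest-neighbour mean smoother: a crude stand-in for LOESS."""
        order = sorted(range(len(x)), key=lambda i: x[i])
        fitted = [0.0] * len(x)
        for pos, i in enumerate(order):
            lo, hi = max(0, pos - k), min(len(x), pos + k + 1)
            window = [y[order[j]] for j in range(lo, hi)]
            fitted[i] = sum(window) / len(window)
        return fitted

    def r_squared(y, fitted):
        ybar = sum(y) / len(y)
        ss_tot = sum((v - ybar) ** 2 for v in y)
        ss_res = sum((v - f) ** 2 for v, f in zip(y, fitted))
        return 1 - ss_res / ss_tot

    random.seed(0)
    n = 300
    x1 = [random.uniform(-1, 1) for _ in range(n)]
    x2 = [random.uniform(-1, 1) for _ in range(n)]
    y = [a * a + 0.1 * random.gauss(0, 1) for a in x1]   # nonlinear in x1, ignores x2

    scores = {name: r_squared(y, smooth(x, y)) for name, x in [("x1", x1), ("x2", x2)]}
    print(scores)  # x1 explains far more variance than x2
    ```

    A linear fit of y on x1 here would report almost no variance explained, since the relationship is a symmetric quadratic; this is precisely the situation in which the abstract argues smoothing-based procedures are more informative than linear or rank regression.
    
    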

  6. Sparing land for biodiversity at multiple spatial scales

    Johan eEkroos

    2016-01-01

    A common approach to the conservation of farmland biodiversity and the promotion of multifunctional landscapes, particularly in landscapes containing only small remnants of non-crop habitats, has been to maintain landscape heterogeneity and reduce land-use intensity. In contrast, it has recently been shown that devoting specific areas of non-crop habitats to conservation, segregated from high-yielding farmland (‘land sparing’), can more effectively conserve biodiversity than promoting low-yielding, less intensively managed farmland occupying larger areas (‘land sharing’). In the present paper we suggest that the debate over the relative merits of land sparing or land sharing is partly blurred by the differing spatial scales at which it is suggested that land sparing should be applied. We argue that there is no single correct spatial scale for segregating biodiversity protection and commodity production in multifunctional landscapes. Instead we propose an alternative conceptual construct, which we call ‘multiple-scale land sparing’, targeting biodiversity and ecosystem services in transformed landscapes. We discuss how multiple-scale land sparing may overcome the apparent dichotomy between land sharing and land sparing and help to find acceptable compromises that conserve biodiversity and landscape multifunctionality.

  7. Efficient Selection of Multiple Objects on a Large Scale

    Stenholt, Rasmus

    2012-01-01

    The task of multiple object selection (MOS) in immersive virtual environments is important and still largely unexplored. The difficulty of efficient MOS increases with the number of objects to be selected. E.g. in small-scale MOS, only a few objects need to be simultaneously selected. This may...... consuming. Instead, we have implemented and tested two of the existing approaches to 3-D MOS, a brush and a lasso, as well as a new technique, a magic wand, which automatically selects objects based on local proximity to other objects. In a formal user evaluation, we have studied how the performance...

  8. Curvaton paradigm can accommodate multiple low inflation scales

    Matsuda, Tomohiro

    2004-01-01

    Recent arguments show that some curvaton field may generate the cosmological curvature perturbation. As the curvaton is independent of the inflaton field, there is a hope that the fine tunings of inflation models can be cured by the curvaton scenario. More recently, however, Lyth discussed that there is a strong bound for the Hubble parameter during inflation even if one assumes the curvaton scenario. Although the most serious constraint was evaded, the bound seems rather crucial for many models of a low inflation scale. In this paper we try to remove the constraint. We show that the bound is drastically modified if there were multiple stages of inflation. (letter to the editor)

  9. Time Scale in Least Square Method

    Özgür Yeniay

    2014-01-01

    The study of dynamic equations on time scales is a new area in mathematics. Time scales build a bridge between the real numbers and the integers. Two derivatives have been introduced on time scales, called the delta and the nabla derivative: the delta derivative is defined in the forward direction, the nabla derivative in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Here, two different coefficients arise for the same model, one originating from the forward and one from the backward jump operator. The occurrence of such a situation corresponds to the total of the vertical deviations between the regression equations of the forward and backward jump operators and the observed values, divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We believe that time scale theory offers a new vision for least squares, especially when the assumptions of linear regression are violated.
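
    On the integer time scale the two derivatives reduce to plain forward and backward differences, which can be shown in a couple of lines (a hedged illustration of the definitions only, not of the paper's regression procedure):

    ```python
    # On the time scale T = Z, the forward jump is sigma(t) = t + 1 and the
    # backward jump is rho(t) = t - 1, so the two derivatives become:

    def delta_derivative(f, t):
        return f(t + 1) - f(t)      # forward difference (delta derivative)

    def nabla_derivative(f, t):
        return f(t) - f(t - 1)      # backward difference (nabla derivative)

    f = lambda t: t * t
    print(delta_derivative(f, 3), nabla_derivative(f, 3))  # -> 7 5
    ```

    For f(t) = t² the two derivatives at t = 3 bracket the classical derivative f'(3) = 6, and their average equals it exactly; this averaging of forward and backward operators echoes the "divided by two" relationship the abstract describes.
    
    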

  10. Multiple time scales of adaptation in auditory cortex neurons.

    Ulanovsky, Nachum; Las, Liora; Farkas, Dina; Nelken, Israel

    2004-11-17

    Neurons in primary auditory cortex (A1) of cats show strong stimulus-specific adaptation (SSA). In probabilistic settings, in which one stimulus is common and another is rare, responses to common sounds adapt more strongly than responses to rare sounds. This SSA could be a correlate of auditory sensory memory at the level of single A1 neurons. Here we studied adaptation in A1 neurons, using three different probabilistic designs. We showed that SSA has several time scales concurrently, spanning many orders of magnitude, from hundreds of milliseconds to tens of seconds. Similar time scales are known for the auditory memory span of humans, as measured both psychophysically and using evoked potentials. A simple model, with linear dependence on both short-term and long-term stimulus history, provided a good fit to A1 responses. Auditory thalamus neurons did not show SSA, and their responses were poorly fitted by the same model. In addition, SSA increased the proportion of failures in the responses of A1 neurons to the adapting stimulus. Finally, SSA caused a bias in the neuronal responses to unbiased stimuli, enhancing the responses to eccentric stimuli. Therefore, we propose that a major function of SSA in A1 neurons is to encode auditory sensory memory on multiple time scales. This SSA might play a role in stream segregation and in binding of auditory objects over many time scales, a property that is crucial for processing of natural auditory scenes in cats and of speech and music in humans.
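
    The model structure described above (linear dependence on both short-term and long-term stimulus history) can be caricatured with a two-time-constant toy simulation. All parameters below are invented for illustration and are not fitted to the paper's data:

    ```python
    import random
    from math import exp

    def simulate(p_common=0.9, n=2000, dt=0.5, tau_s=1.5, tau_l=30.0,
                 w_s=0.02, w_l=0.005, seed=1):
        """Each stimulus keeps a short- and a long-term memory trace of its own
        past presentations; the response declines linearly with both traces."""
        random.seed(seed)
        mem = {"common": [0.0, 0.0], "rare": [0.0, 0.0]}   # [short, long] traces
        responses = {"common": [], "rare": []}
        for _ in range(n):
            stim = "common" if random.random() < p_common else "rare"
            for trace in mem.values():                     # traces decay every trial
                trace[0] *= exp(-dt / tau_s)
                trace[1] *= exp(-dt / tau_l)
            s, l = mem[stim]
            responses[stim].append(max(0.0, 1.0 - w_s * s - w_l * l))
            mem[stim][0] += 1.0                            # only the presented tone adapts
            mem[stim][1] += 1.0
        return {k: sum(v) / len(v) for k, v in responses.items()}

    means = simulate()
    print(means)  # the common stimulus adapts more strongly than the rare one
    ```

    Because only the presented stimulus accumulates history, the frequent stimulus ends up with larger traces and a weaker mean response, reproducing the stimulus-specific character of the adaptation.
    
    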

  11. Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.

    Smith, Kent W.; Sasaki, M. S.

    1979-01-01

    A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
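
    The standard remedy in this setting, mean-centering the predictors before forming the product term, can be demonstrated directly (toy data; the note's own proposal is a different, non-ridge solution, so this sketch only illustrates the problem it addresses):

    ```python
    def corr(a, b):
        """Pearson correlation, pure Python."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb)

    x1 = [i / 10 for i in range(1, 101)]     # 0.1 .. 10.0
    x2 = [5 + 0.3 * v for v in x1]           # a second, correlated predictor

    raw_product = [a * b for a, b in zip(x1, x2)]
    m1, m2 = sum(x1) / len(x1), sum(x2) / len(x2)
    centered_product = [(a - m1) * (b - m2) for a, b in zip(x1, x2)]

    # The raw product is nearly collinear with x1; the centered product is not.
    print(abs(corr(x1, raw_product)), abs(corr(x1, centered_product)))
    ```

    Centering removes the component of the product that is linear in each predictor, which is exactly the source of the multicollinearity in regressions with multiplicative terms.
    
    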

  12. Continuum Level Density in Complex Scaling Method

    Suzuki, R.; Myo, T.; Kato, K.

    2005-01-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique
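
    A hedged sketch of the quantity involved (standard CSM notation assumed, not taken from this record): the continuum level density is obtained from the difference between the complex-scaled full and free resolvents,

    ```latex
    \Delta(E) \;=\; -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}
    \!\left[\frac{1}{E - H_\theta} \;-\; \frac{1}{E - H_\theta^{0}}\right]
    ```

    where H_θ and H_θ^0 are the complex-scaled full and free Hamiltonians; the discretized continuum eigenvalues in the CSM make this trace directly computable, which is why no smoothing technique is needed.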

  13. The Multiple Intelligences Teaching Method and Mathematics ...

    The Multiple Intelligences teaching approach has evolved and been embraced widely especially in the United States. The approach has been found to be very effective in changing situations for the better, in the teaching and learning of any subject especially mathematics. Multiple Intelligences teaching approach proposes ...

  14. A Multiple-Scale Analysis of Evaporation Induced Marangoni Convection

    Hennessy, Matthew G.

    2013-04-23

    This paper considers the stability of thin liquid layers of binary mixtures of a volatile (solvent) species and a nonvolatile (polymer) species. Evaporation leads to a depletion of the solvent near the liquid surface. If surface tension increases for lower solvent concentrations, sufficiently strong compositional gradients can lead to Bénard-Marangoni-type convection that is similar to the kind which is observed in films that are heated from below. The onset of the instability is investigated by a linear stability analysis. Due to evaporation, the base state is time dependent, thus leading to a nonautonomous linearized system which impedes the use of normal modes. However, the time scale for the solvent loss due to evaporation is typically long compared to the diffusive time scale, so a systematic multiple scales expansion can be sought for a finite-dimensional approximation of the linearized problem. This is determined to leading and to next order. The corrections indicate that the validity of the expansion does not depend on the magnitude of the individual eigenvalues of the linear operator, but it requires these eigenvalues to be well separated. The approximations are applied to analyze experiments by Bassou and Rharbi with polystyrene/toluene mixtures [Langmuir, 25 (2009), pp. 624-632]. © 2013 Society for Industrial and Applied Mathematics.
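
    The multiple scales construction can be sketched schematically (a generic WKB-type ansatz under assumed notation, not the paper's exact equations): with ε the ratio of the diffusive to the evaporative time scale and slow time T = εt, the finite-dimensional approximation a(t) of the linearized problem satisfies a system with slowly varying coefficients and is expanded as

    ```latex
    \frac{d\mathbf{a}}{dt} = A(\varepsilon t)\,\mathbf{a}, \qquad T = \varepsilon t,
    \qquad
    \mathbf{a}(t) \sim \bigl[\mathbf{a}_0(T) + \varepsilon\,\mathbf{a}_1(T) + \cdots\bigr]
    \exp\!\left(\frac{1}{\varepsilon}\int_0^{T} \lambda(s)\,ds\right),
    ```

    where λ(T) is an eigenvalue of A(T). As the abstract notes, the correction terms show that the expansion does not require the eigenvalues to be small, only well separated.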

  15. A Multiple-Scale Analysis of Evaporation Induced Marangoni Convection

    Hennessy, Matthew G.; Münch, Andreas

    2013-01-01

    This paper considers the stability of thin liquid layers of binary mixtures of a volatile (solvent) species and a nonvolatile (polymer) species. Evaporation leads to a depletion of the solvent near the liquid surface. If surface tension increases for lower solvent concentrations, sufficiently strong compositional gradients can lead to Bénard-Marangoni-type convection that is similar to the kind which is observed in films that are heated from below. The onset of the instability is investigated by a linear stability analysis. Due to evaporation, the base state is time dependent, thus leading to a nonautonomous linearized system which impedes the use of normal modes. However, the time scale for the solvent loss due to evaporation is typically long compared to the diffusive time scale, so a systematic multiple scales expansion can be sought for a finite-dimensional approximation of the linearized problem. This is determined to leading and to next order. The corrections indicate that the validity of the expansion does not depend on the magnitude of the individual eigenvalues of the linear operator, but it requires these eigenvalues to be well separated. The approximations are applied to analyze experiments by Bassou and Rharbi with polystyrene/toluene mixtures [Langmuir, 25 (2009), pp. 624-632]. © 2013 Society for Industrial and Applied Mathematics.

  16. Validation of the fatigue scale for motor and cognitive functions in a danish multiple sclerosis cohort

    Oervik, M. S.; Sejbaek, T.; Penner, I. K.

    2017-01-01

    Background: Our objective was to validate the Danish translation of the Fatigue Scale for Motor and Cognitive Functions (FSMC) in multiple sclerosis (MS) patients. Materials and methods: A Danish MS cohort (n = 84) was matched and compared to the original German validation cohort (n = 309) and a he... Positive correlations between the two fatigue scales implied high convergent validity (total scores: r = 0.851, p gender). Correcting for depression did not result in any significant adjustments of the correlations...

  17. Algorithmic Foundation of Spectral Rarefaction for Measuring Satellite Imagery Heterogeneity at Multiple Spatial Scales

    Rocchini, Duccio

    2009-01-01

    Measuring heterogeneity in satellite imagery is an important task to deal with. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600
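
    Rarefaction applied to pixel values can be sketched with the classic hypergeometric formula: the expected number of distinct DN values in a random subsample of n pixels. This is a hedged toy illustration (invented DN counts, not real imagery):

    ```python
    from math import comb

    def rarefaction(counts, n):
        """Expected number of distinct DN classes in a random subsample of n pixels.
        counts[i] is the number of pixels carrying the i-th distinct DN value."""
        N = sum(counts)
        return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

    dn_counts = [50, 30, 15, 4, 1]      # pixels per distinct DN value in one window
    curve = [rarefaction(dn_counts, n) for n in (1, 10, 50, 100)]
    print(curve)  # rises from 1 toward the total of 5 distinct DNs as n grows
    ```

    Comparing rarefaction curves computed within local windows (alpha diversity) against the curve for the whole image (gamma diversity) gives a scale-explicit heterogeneity measure of the kind the abstract proposes.
    
    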

  18. Multiple Scale Analysis of the Dynamic State Index (DSI)

    Müller, A.; Névir, P.

    2016-12-01

    The Dynamic State Index (DSI) is a novel parameter that indicates local deviations of the atmospheric flow field from a stationary, inviscid and adiabatic solution of the primitive equations of fluid mechanics. This is in contrast to classical methods, which often diagnose deviations from temporal or spatial mean states. We show some applications of the DSI to atmospheric flow phenomena on different scales. The DSI is derived from Energy-Vorticity Theory (EVT), which is based on two globally conserved quantities, the total energy and Ertel's potential enstrophy. Locally, these global quantities lead to the Bernoulli function and the PV, which, together with the potential temperature, build the DSI. If the Bernoulli function and the PV are balanced, the DSI vanishes and the basic state is obtained. Deviations from the basic state provide an indication of diabatic and non-stationary weather events. The DSI therefore offers a tool to diagnose, and even forecast, different atmospheric events on different scales. On the synoptic scale, the DSI can help to diagnose storms and hurricanes, where the dipole structure of the DSI also plays an important role. Within the scope of the collaborative research center "Scaling Cascades in Complex Systems" we show high correlations between the DSI and precipitation on the convective scale. Moreover, we compare the results with reduced models and different spatial resolutions.

  19. Integral criteria for large-scale multiple fingerprint solutions

    Ushmaev, Oleg S.; Novikov, Sergey O.

    2004-08-01

    We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. We then carry out the analysis of simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. The explicit and approximate relations for the optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of a real multiple fingerprint test show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.

  20. Receptivity to Kinetic Fluctuations: A Multiple Scales Approach

    Edwards, Luke; Tumin, Anatoli

    2017-11-01

    The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio Tw/Te > 1, contrary to prior expectations. Supported by AFOSR and ONR.

  1. Preface: Introductory Remarks: Linear Scaling Methods

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  2. Test equating, scaling, and linking methods and practices

    Kolen, Michael J

    2014-01-01

    This book provides an introduction to test equating, scaling, and linking, including those concepts and practical issues that are critical for developers and all other testing professionals.  In addition to statistical procedures, successful equating, scaling, and linking involves many aspects of testing, including procedures to develop tests, to administer and score tests, and to interpret scores earned on tests. Test equating methods are used with many standardized tests in education and psychology to ensure that scores from multiple test forms can be used interchangeably.  Test scaling is the process of developing score scales that are used when scores on standardized tests are reported. In test linking, scores from two or more tests are related to one another. Linking has received much recent attention, due largely to investigations of linking similarly named tests from different test publishers or tests constructed for different purposes. In recent years, researchers from the education, psychology, and...
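
    The simplest instance of test equating, linear (mean-sigma) equating, can be sketched in a few lines; the summary statistics below are invented for illustration, not taken from the book:

    ```python
    def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
        """Map a score x on form X onto the scale of form Y so that the two
        score distributions share the same mean and standard deviation."""
        return mu_y + (sd_y / sd_x) * (x - mu_x)

    # Hypothetical forms: X has mean 52, SD 8; Y has mean 55, SD 10.
    print(linear_equate(52, 52, 8, 55, 10))   # the mean maps onto the mean: 55.0
    print(linear_equate(60, 52, 8, 55, 10))   # one SD above maps one SD above: 65.0
    ```

    Equipercentile equating and the linking designs the book covers generalize this idea beyond matching the first two moments.
    
    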

  3. Validity and Reliability of the Turkish Version of the Monitoring My Multiple Sclerosis Scale.

    Polat, Cansu; Tülek, Zeliha; Kürtüncü, Murat; Eraksoy, Mefkure

    2017-06-01

    This research was conducted to adapt the Monitoring My Multiple Sclerosis (MMMS) scale, an instrument with which multiple sclerosis (MS) patients evaluate their own health and quality of life, into Turkish and to determine the psychometric properties of the scale. The methodological research was conducted in the outpatient MS clinic of a university hospital between January and September 2013. The sample consisted of 140 patients aged over 18 with a diagnosis of definite MS. Patients who had experienced attacks in the previous month or had any serious medical problems other than MS were excluded. The linguistic validity of the MMMS was tested by a backward-forward translation method and an expert panel. Reliability analysis was performed using test-retest correlations, item-total correlations, and internal consistency analysis. Confirmatory factor analysis and concurrent validity were used to determine construct validity. The Multiple Sclerosis Quality of Life-54 instrument was used to determine concurrent validity, and the Expanded Disability Status Scale, Hospital Anxiety and Depression Scale, and Mini Mental State Examination were used for further assessment of construct validity. We determined that the scale consisted of four factors with loadings ranging from 0.49 to 0.79. The correlation coefficients of the scale were between 0.47 and 0.76 for item-total scores and between 0.60 and 0.81 for item-subscale scores. Cronbach's alpha was 0.94 for the entire scale and between 0.64 and 0.89 for the subscales. Test-retest correlations were significant, as were correlations between the MMMS and the other scales. The Turkish MMMS provides adequate validity and reliability for assessing the impact of MS on quality of life and health status in patients.
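
    The internal consistency analysis above rests on Cronbach's alpha. As a minimal illustration of how that coefficient is computed (the item scores below are made up for demonstration, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical scores: 6 respondents answering 4 items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
alpha = cronbach_alpha(scores)
```

    Values close to 1 indicate that the items vary together, i.e. measure a common construct, which is what the 0.94 reported above conveys for the full scale.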

  4. Basic thinking patterns and working methods for multiple DFX

    Andreasen, Mogens Myrup; Mortensen, Niels Henrik

    1997-01-01

    This paper attempts to describe the theory and methodologies behind DFX and the linking of multiple DFXs together. The contribution is an articulation of basic thinking patterns and a description of some working methods for handling multiple DFX.

  5. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    Williams, Mark L.; Rearden, Bradley T.

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
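
    Sensitivity coefficients and covariance data combine through the standard first-order "sandwich rule" for response uncertainty, var(R) ≈ S C Sᵀ. A toy sketch with hypothetical numbers (illustrative only, not SCALE data or TSUNAMI output):

```python
import numpy as np

# Hypothetical relative sensitivity coefficients of a response (e.g. k-eff)
# to three nuclear data parameters (dR/R per dsigma/sigma).
S = np.array([0.45, -0.10, 0.25])

# Hypothetical relative covariance matrix of those parameters.
C = np.array([
    [0.0004, 0.0001, 0.0   ],
    [0.0001, 0.0009, 0.0   ],
    [0.0,    0.0,    0.0016],
])

rel_var = S @ C @ S          # sandwich rule: S C S^T
rel_std = np.sqrt(rel_var)   # relative standard deviation of the response
```

    The same S vector, compared against the sensitivity profile of a benchmark experiment, is also what drives the similarity coefficients mentioned above.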

  6. On Distance Scale Bias due to Stellar Multiplicity and Associations

    Anderson, Richard I.; Riess, Adam

    2018-01-01

    The Cepheid Period-luminosity relation (Leavitt Law) provides the most accurate footing for the cosmic distance scale (CDS). Recently, evidence has been presented that the value of the Hubble constant H0 measured via the cosmic distance scale differs by 3.4σ from the value inferred using Planck data assuming ΛCDM cosmology (Riess et al. 2016). This exciting result may point to missing physics in the cosmological model; however, before such a claim can be made, careful analyses must address possible systematics involved in the calibration of the CDS. A frequently made claim in the literature is that companion stars or cluster membership of Cepheids may bias the calibration of the CDS. To evaluate this claim, we have carried out the first detailed study of the impact of Cepheid multiplicity and cluster membership on the determination of H0. Using deep HST imaging of M31 we directly measured the mean photometric bias due to cluster companions on Cepheid-based distances. Together with the empirical determination of the frequency with which Cepheids appear in clusters, we quantify the combined H0 bias from close associations to be approximately 0.3% (0.20 km s-1 Mpc-1) for the passbands commonly used. Thus, we demonstrate that stellar associations cannot explain the aforementioned discrepancy observed in H0 and do not prevent achieving the community goal of measuring H0 with an accuracy of 1%. We emphasize the subtle, but important, difference between systematics relevant for calibrating the Leavitt Law (achieving a better understanding of stellar physics) and for accurately calibrating the CDS (measuring H0).

  7. A Multiphysics Framework to Learn and Predict in Presence of Multiple Scales

    Tomin, P.; Lunati, I.

    2015-12-01

    Modeling complex phenomena in the subsurface remains challenging due to the presence of multiple interacting scales, which can make it impossible to focus on purely macroscopic phenomena (relevant in most applications) and neglect the processes at the micro-scale. We present and discuss a general framework that allows us to deal with the situation in which the lack of scale separation requires the combined use of different descriptions at different scale (for instance, a pore-scale description at the micro-scale and a Darcy-like description at the macro-scale) [1,2]. The method is based on conservation principles and constructs the macro-scale problem by numerical averaging of micro-scale balance equations. By employing spatiotemporal adaptive strategies, this approach can efficiently solve large-scale problems [2,3]. In addition, being based on a numerical volume-averaging paradigm, it offers a tool to illuminate how macroscopic equations emerge from microscopic processes, to better understand the meaning of microscopic quantities, and to investigate the validity of the assumptions routinely used to construct the macro-scale problems. [1] Tomin, P., and I. Lunati, A Hybrid Multiscale Method for Two-Phase Flow in Porous Media, Journal of Computational Physics, 250, 293-307, 2013 [2] Tomin, P., and I. Lunati, Local-global splitting and spatiotemporal-adaptive Multiscale Finite Volume Method, Journal of Computational Physics, 280, 214-231, 2015 [3] Tomin, P., and I. Lunati, Spatiotemporal adaptive multiphysics simulations of drainage-imbibition cycles, Computational Geosciences, 2015 (under review)

  8. Level density in the complex scaling method

    Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki

    2005-01-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)

  9. Neural Computations in a Dynamical System with Multiple Time Scales.

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain derives from such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination of features with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  10. Multiple attenuation to reflection seismic data using Radon filter and Wave Equation Multiple Rejection (WEMR) method

    Erlangga, Mokhammad Puput [Geophysical Engineering, Institut Teknologi Bandung, Ganesha Street no.10 Basic Science B Buliding fl.2-3 Bandung, 40132, West Java Indonesia puput.erlangga@gmail.com (Indonesia)

    2015-04-16

    Separating signal from noise, whether incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise can remain mixed with the primary signal, and multiple reflections are one kind of coherent noise. In this research, we processed both synthetic and real (Mentawai) seismic data to attenuate multiple reflections. Several methods exist for attenuating multiples; one of them is the Radon filter, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, where the moveout difference is too small, the Radon filter is not sufficient to attenuate the multiples, and it also produces artifacts in the gathers. In addition to the Radon filter, we used the Wave Equation Multiple Elimination (WEMR) method to attenuate long-period multiples. The WEMR method attenuates long-period multiples through wave-equation inversion: from the inversion of the wave equation and the seismic wave amplitudes observed at the free surface, we obtain the water-bottom reflectivity, which is then used to eliminate the multiples. Because WEMR does not depend on the moveout difference, it can be applied to data with small moveout differences such as the Mentawai data, where the small moveout difference is caused by the limited far offset of only 705 m. We compared the multiple-free stacks obtained after Radon filtering and after WEMR processing, and conclude that the WEMR method attenuates long-period multiple reflections better than the Radon filter on the real (Mentawai) data.
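
    The Radon discrimination described above is built on the linear τ-p (slant-stack) transform. A toy sketch on synthetic data, using simple nearest-sample summation; this is a generic illustration, not the Mentawai dataset or the processing flow used in the study:

```python
import numpy as np

def slant_stack(d, dt, offsets, slownesses):
    """Linear tau-p transform by nearest-sample summation.
    d: (n_traces, n_t) gather; returns an (n_p, n_t) tau-p panel."""
    n_traces, n_t = d.shape
    panel = np.zeros((len(slownesses), n_t))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))      # moveout in samples
            if 0 <= shift < n_t:
                panel[ip, :n_t - shift] += d[ix, shift:]
    return panel

# Synthetic gather: one linear event with slowness p0 = 0.002 s/m on a
# short spread (maximum offset 700 m, similar in spirit to the 705 m above).
dt, p0 = 0.004, 0.002
offsets = np.arange(0.0, 800.0, 100.0)
d = np.zeros((len(offsets), 500))
for ix, x in enumerate(offsets):
    d[ix, 50 + int(round(p0 * x / dt))] = 1.0  # t = tau0 + p0 * x
slownesses = np.array([0.0, 0.001, 0.002, 0.003])
panel = slant_stack(d, dt, offsets, slownesses)
```

    The event's energy stacks coherently only at its own slowness, which is exactly the property the Radon filter exploits; when primary and multiple slownesses nearly coincide, their τ-p images overlap and the separation fails, as the abstract notes.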

  11. Methods for Large-Scale Nonlinear Optimization.

    1980-05-01

    STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that where B is an m X m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  12. Temperature scaling method for Markov chains.

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
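
    The central idea of scaling a chain from one temperature to another can be illustrated with plain Boltzmann reweighting. The sketch below uses a harmonic toy potential, not the water clusters or the specific TeS algorithm of the paper; it shows how averages at a second temperature can be extracted from a single chain sampled at the first:

```python
import math
import random

random.seed(0)

# Harmonic potential E(x) = x^2 / 2; sample at temperature T1 with a
# simple Metropolis Markov chain (units with k_B = 1).
T1, T2 = 1.0, 0.8
beta1, beta2 = 1.0 / T1, 1.0 / T2

x, chain = 0.0, []
for step in range(200000):
    xp = x + random.uniform(-1.0, 1.0)
    # Metropolis acceptance at temperature T1.
    if random.random() < math.exp(-beta1 * (xp * xp - x * x) / 2.0):
        x = xp
    chain.append(x)

# Reweight the T1 chain to estimate <x^2> at T2 without re-simulating.
w = [math.exp(-(beta2 - beta1) * xi * xi / 2.0) for xi in chain]
x2_T2 = sum(wi * xi * xi for wi, xi in zip(w, chain)) / sum(w)
# Exact answer for this potential: <x^2> = T, so about 0.8 at T2.
```

    The benefit mirrors the abstract's motivation: the expensive sampling is done once, and other temperatures are reached by inexpensive post-processing, with accuracy degrading as the two temperatures move apart and the weights become uneven.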

  13. Multiple time scale analysis of pressure oscillations in solid rocket motors

    Ahmed, Waqas; Maqsood, Adnan; Riaz, Rizwan

    2018-03-01

    In this study, acoustic pressure oscillations for single and coupled longitudinal acoustic modes in a Solid Rocket Motor (SRM) are investigated using the Multiple Time Scales (MTS) method. Two independent time scales are introduced: the oscillations occur on the fast time scale, whereas the amplitude and phase change on the slow time scale. Hopf bifurcation analysis is employed to investigate the properties of the solution. The supercritical bifurcation phenomenon is observed for the linearly unstable system. The amplitude of the oscillations results from equal energy gain and loss rates of the longitudinal acoustic modes. The effects of linear instability and longitudinal-mode frequency on the amplitude and phase of the oscillations are determined for both single and coupled modes. In both cases, the maximum amplitude of the oscillations decreases with the frequency of the acoustic mode and the linear instability of the SRM. The comparison of analytical MTS results and numerical simulations demonstrates an excellent agreement.
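
    The fast/slow split used by MTS can be demonstrated on the textbook van der Pol oscillator, whose slow-time amplitude equation has a closed-form solution. This is a generic MTS illustration, not the SRM acoustic model of the study:

```python
import math

def vdp_rhs(x, v, eps):
    """Van der Pol oscillator: x'' + x = eps * (1 - x^2) * x'."""
    return v, eps * (1.0 - x * x) * v - x

def rk4(x, v, eps, dt):
    k1 = vdp_rhs(x, v, eps)
    k2 = vdp_rhs(x + dt / 2 * k1[0], v + dt / 2 * k1[1], eps)
    k3 = vdp_rhs(x + dt / 2 * k2[0], v + dt / 2 * k2[1], eps)
    k4 = vdp_rhs(x + dt * k3[0], v + dt * k3[1], eps)
    return (x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def mts_amplitude(t, a0, eps):
    """Slow-time MTS solution of a' = (eps/2) a (1 - a^2/4):
    a^2 obeys a logistic equation with carrying capacity 4."""
    u0 = a0 * a0
    u = 4.0 * u0 * math.exp(eps * t) / (4.0 + u0 * (math.exp(eps * t) - 1.0))
    return math.sqrt(u)

eps, dt = 0.05, 0.001
x, v, t = 0.5, 0.0, 0.0
peaks, prev_v = [], v
for _ in range(int(160.0 / dt)):
    x, v = rk4(x, v, eps, dt)
    t += dt
    if prev_v > 0.0 >= v:          # velocity sign change: local maximum of x
        peaks.append((t, x))
    prev_v = v

# Each numerical peak should track the MTS amplitude prediction to O(eps),
# approaching the limit-cycle amplitude of 2.
err = max(abs(xp - mts_amplitude(tp, 0.5, eps)) for tp, xp in peaks)
```

    Exactly as in the abstract, the oscillation lives on the fast scale while its envelope evolves on the slow scale eps*t, and the saturation amplitude is set by the balance of energy gain and loss.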

  14. On multiple level-set regularization methods for inverse problems

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional Gα based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional Gα, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels

  15. Multiple network interface core apparatus and method

    Underwood, Keith D [Albuquerque, NM; Hemmert, Karl Scott [Albuquerque, NM

    2011-04-26

    A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.

  16. Multiple tag labeling method for DNA sequencing

    Mathies, R.A.; Huang, X.C.; Quesada, M.A.

    1995-07-25

    A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in the lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radioisotope labels. 5 figs.

  17. IRIS Arrays: Observing Wavefields at Multiple Scales and Frequencies

    Sumy, D. F.; Woodward, R.; Frassetto, A.

    2014-12-01

    The Incorporated Research Institutions for Seismology (IRIS) provides instruments for creating and operating seismic arrays at a wide range of scales. As an example, for over thirty years the IRIS PASSCAL program has provided instruments to individual Principal Investigators to deploy arrays of all shapes and sizes on every continent. These arrays have ranged from just a few sensors to hundreds or even thousands of sensors, covering areas with dimensions of meters to thousands of kilometers. IRIS also operates arrays directly, such as the USArray Transportable Array (TA) as part of the EarthScope program. Since 2004, the TA has rolled across North America, at any given time spanning a swath of approximately 800 km by 2,500 km, and thus far sampling 2% of the Earth's surface. This achievement includes all of the lower-48 U.S., southernmost Canada, and now parts of Alaska. IRIS has also facilitated specialized arrays in polar environments and on the seafloor. In all cases, the data from these arrays are freely available to the scientific community. As the community of scientists who use IRIS facilities and data look to the future they have identified a clear need for new array capabilities. In particular, as part of its Wavefields Initiative, IRIS is exploring new technologies that can enable large, dense array deployments to record unaliased wavefields at a wide range of frequencies. Large-scale arrays might utilize multiple sensor technologies to best achieve observing objectives and optimize equipment and logistical costs. Improvements in packaging and power systems can provide equipment with reduced size, weight, and power that will reduce logistical constraints for large experiments, and can make a critical difference for deployments in harsh environments or other situations where rapid deployment is required. We will review the range of existing IRIS array capabilities with an overview of previous and current deployments and examples of data and results. We

  18. Multiple-scale stochastic processes: Decimation, averaging and beyond

    Bo, Stefano, E-mail: stefano.bo@nordita.org [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden); Celani, Antonio [Quantitative Life Sciences, The Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, I-34151 - Trieste (Italy)

    2017-02-07

    The recent experimental progress in handling microscopic systems has made it possible to probe them at levels where fluctuations are prominent, calling for stochastic modeling in a large number of physical, chemical and biological phenomena. This has provided fruitful applications for established stochastic methods and motivated further developments. These systems often involve processes taking place on widely separated time scales. For efficient modeling one usually focuses on the slower degrees of freedom, and it is of great importance to accurately eliminate the fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. This procedure in general requires performing two different operations: decimation and coarse-graining. We introduce the asymptotic methods that form the basis of this procedure and discuss their application to a series of physical, biological and chemical examples. We then turn our attention to functionals of the stochastic trajectories, such as residence times, counting statistics, fluxes and entropy production, which have been increasingly studied in recent years. For such functionals, the elimination of the fast degrees of freedom can present additional difficulties, and naive procedures can lead to blatantly inconsistent results. Homogenization techniques for functionals are less covered in the literature and we present them pedagogically here, as natural extensions of the ones employed for the trajectories. We also discuss recent applications of these techniques to the thermodynamics of small systems and their interpretation in terms of information-theoretic concepts.
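
    The elimination of a fast variable can be sketched on a deterministic toy model: a fast degree of freedom relaxes onto a slaved value, after which the reduced (averaged) equation reproduces the slow dynamics. This minimal two-variable example is illustrative only, not one of the paper's stochastic examples:

```python
import math

eps = 1e-3          # time-scale separation parameter
dt = 1e-4           # small enough to resolve the fast scale
x, y = 1.0, 0.0     # y starts off its slaved value to expose the fast transient
t_end = 2.0

# Full system: slow  dx/dt = -y,  fast  dy/dt = (x - y) / eps.
for _ in range(int(t_end / dt)):
    dx = -y
    dy = (x - y) / eps
    x += dt * dx
    y += dt * dy

# Eliminating the fast variable: y relaxes to x on a time ~eps, so the
# reduced model is dx/dt = -x, with solution x(t) = x(0) * exp(-t).
x_reduced = 1.0 * math.exp(-t_end)
```

    After a transient of duration ~eps, the fast variable is enslaved (y tracks x to within O(eps)) and the full trajectory agrees with the reduced one; this is the deterministic skeleton of the decimation step discussed above.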

  19. Materials and nanosystems : interdisciplinary computational modeling at multiple scales

    Huber, S.E.

    2014-01-01

    Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large scales, first an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers, shall be discussed. In this respect, very accurate and hence computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction, which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but at the same time yield less accuracy and, especially, reduced error control. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science, is a further surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of

  20. Methods for monitoring multiple gene expression

    Berka, Randy [Davis, CA; Bachkirova, Elena [Davis, CA; Rey, Michael [Davis, CA

    2012-05-01

    The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.

  1. Methods for monitoring multiple gene expression

    Berka, Randy; Bachkirova, Elena; Rey, Michael

    2013-10-01

    The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.

  2. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
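
    The letter's point about point placement can be checked directly: for unit measurement errors, the standard error of a fitted slope is sigma / sqrt(sum (x - xbar)^2), so adding a point mid-range leaves the slope error unchanged while adding it at an end reduces it. A small sketch with illustrative scan positions (not the experimental scans of the letter):

```python
import math

def slope_stderr(xs, sigma=1.0):
    """Standard error of the least-squares slope for equal-weight points
    with measurement error sigma: sigma / sqrt(S_xx)."""
    xbar = sum(xs) / len(xs)
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sigma / math.sqrt(sxx)

two_point = slope_stderr([0.0, 1.0])        # two-point scan
mid_point = slope_stderr([0.0, 0.5, 1.0])   # extra point mid-range
end_point = slope_stderr([0.0, 1.0, 1.0])   # extra point at an end
```

    The mid-range point contributes nothing to S_xx, so the slope error is unchanged, while the end point increases the spread and shrinks the error, consistent with the argument above for scans with few points.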

  3. Fuzzy multiple attribute decision making methods and applications

    Chen, Shu-Jen

    1992-01-01

    This monograph is intended for an advanced undergraduate or graduate course as well as for researchers, who want a compilation of developments in this rapidly growing field of operations research. This is a sequel to our previous works: "Multiple Objective Decision Making--Methods and Applications: A state-of-the-Art Survey" (No.164 of the Lecture Notes); "Multiple Attribute Decision Making--Methods and Applications: A State-of-the-Art Survey" (No.186 of the Lecture Notes); and "Group Decision Making under Multiple Criteria--Methods and Applications" (No.281 of the Lecture Notes). In this monograph, the literature on methods of fuzzy Multiple Attribute Decision Making (MADM) has been reviewed thoroughly and critically, and classified systematically. This study provides readers with a capsule look into the existing methods, their characteristics, and applicability to the analysis of fuzzy MADM problems. The basic concepts and algorithms from the classical MADM methods have been used in the development of the f...

  4. Polarized atomic orbitals for linear scaling methods

    Berghold, Gerd; Parrinello, Michele; Hutter, Jürg

    2002-02-01

    We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent from the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
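
    The canonical purification mentioned above can be illustrated with the McWeeny iteration P -> 3P^2 - 2P^3, which drives the eigenvalues of a trial density matrix to 0 or 1. This grand-canonical sketch with Gershgorin spectral bounds is generic textbook material, not the paper's PAO implementation, and assumes an orthogonal basis and a chemical potential mu strictly inside the spectrum:

```python
import numpy as np

def purify(H, mu, iters=60):
    """Grand-canonical McWeeny purification: returns the density matrix
    projecting onto the eigenstates of H below the chemical potential mu."""
    n = H.shape[0]
    # Gershgorin bounds on the spectrum of H (no diagonalization needed).
    radii = np.abs(H).sum(axis=1) - np.abs(np.diag(H))
    emin = (np.diag(H) - radii).min()
    emax = (np.diag(H) + radii).max()
    # Initial guess with eigenvalues mapped into [0, 1].
    c = min(1.0 / (2.0 * (emax - mu)), 1.0 / (2.0 * (mu - emin)))
    P = 0.5 * np.eye(n) + c * (mu * np.eye(n) - H)
    for _ in range(iters):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * P2 @ P   # McWeeny step, only matrix products
    return P

# Small symmetric test Hamiltonian (arbitrary illustrative numbers).
H = np.array([
    [0.0, 0.3, 0.0, 0.1],
    [0.3, 1.0, 0.2, 0.0],
    [0.0, 0.2, 2.0, 0.3],
    [0.1, 0.0, 0.3, 3.0],
])
P = purify(H, mu=1.5)   # mu between the 2nd and 3rd energy levels
```

    Because the update uses only matrix multiplications, sparsity of P can be exploited for O(N) cost; the ill-conditioned overlap matrices discussed in the abstract are precisely what threatens such schemes in nonorthogonal bases.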

  5. Novel scaling of the multiplicity distributions in the sequential fragmentation process and in the percolation

    Botet, R.

    1996-01-01

    A novel scaling of the multiplicity distributions is found in the shattering phase of the sequential fragmentation process with inhibition. The same scaling law is shown to hold in the percolation process. (author)

  6. Optimization of Inventories for Multiple Companies by Fuzzy Control Method

    Kawase, Koichi; Konishi, Masami; Imai, Jun

    2008-01-01

    In this research, fuzzy control theory is applied to inventory control in the supply chain between multiple companies. The proposed control method deals with the amount of inventories in a supply chain spanning multiple companies. Referring to past demand and tardiness, inventory amounts of raw materials are determined by fuzzy inference. Appropriate inventory control becomes possible by optimizing the fuzzy control gain with the simulated annealing (SA) method. The variation of ...
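
    A Mamdani-style fuzzy inference step of the kind described, with triangular memberships and weighted-average defuzzification, might look as follows; the rule base, membership ranges and output values are hypothetical illustrations, not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def order_quantity(demand, tardiness):
    """Hypothetical rule base: low demand -> small order; high demand ->
    large order; high tardiness pushes toward a larger safety order."""
    low_d = tri(demand, -50, 0, 100)
    high_d = tri(demand, 0, 100, 200)
    high_t = tri(tardiness, 0, 10, 20)
    rules = [(low_d, 50.0), (high_d, 150.0), (high_t, 180.0)]
    # Weighted-average (height) defuzzification over the fired rules.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, out in rules)
    return num / den if den > 0 else 0.0

q_low = order_quantity(demand=20, tardiness=0)
q_high = order_quantity(demand=90, tardiness=8)
```

    In the paper's setting, the shapes and gains of such rules are the quantities tuned by simulated annealing rather than fixed by hand.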

  7. Interplay between multiple length and time scales in complex ...

    Administrator

    Processes in complex chemical systems, such as macromolecules, electrolytes, interfaces, ... by processes operating on a multiplicity of length .... real time. The design and interpretation of femtosecond experiments has required considerable ...

  8. Multiple histogram method and static Monte Carlo sampling

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to using multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations.
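
    The static Rosenbluth sampling underlying such histograms can be sketched on a lattice polymer: chains are grown step by step, and the product of available-site counts is the Rosenbluth weight, whose mean estimates the number of self-avoiding walks (for 3 steps on the square lattice this is exactly 36). This is a generic sketch of the sampler, not the force-extension calculation of the paper:

```python
import random

random.seed(1)

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def rosenbluth_walk(n_steps):
    """Grow one self-avoiding walk on the square lattice and return its
    Rosenbluth weight (product of free-neighbor counts), or 0 if trapped."""
    pos = (0, 0)
    occupied = {pos}
    weight = 1.0
    for _ in range(n_steps):
        free = [(pos[0] + dx, pos[1] + dy) for dx, dy in MOVES
                if (pos[0] + dx, pos[1] + dy) not in occupied]
        if not free:
            return 0.0           # the walk trapped itself
        weight *= len(free)      # accumulate the number of choices
        pos = random.choice(free)
        occupied.add(pos)
    return weight

# The mean weight is an unbiased estimator of c_n, the number of n-step
# self-avoiding walks; for n = 3 every walk has weight 4 * 3 * 3 = 36.
samples = [rosenbluth_walk(3) for _ in range(2000)]
c3_estimate = sum(samples) / len(samples)
```

    In a biased (e.g. force-extension) setting, the same weights serve as the reweighting factors that let histograms from different bias values be combined.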

  9. Cut-off scaling and multiplicative reformalization in the theory of critical phenomena

    Forgacs, G.; Solyom, J.; Zawadowski, A.

    1976-03-01

    In this paper a new method is developed to study critical fluctuations in systems of 4-ε dimensions around the phase transition point. This method unifies the Kadanoff scaling hypothesis, as formulated by Wilson with his renormalization-group technique, with the simple mathematical structure of the Lie equations of Gell-Mann-Low multiplicative renormalization. The basic idea of the new method is that a change in the physical cut-off can be compensated by an effective coupling in such a way that the Green's function and vertex in the original and transformed systems differ only by a multiplicative factor. The critical indices, the anomalous dimensions and the critical exponent describing the correction to scaling are determined to second order in ε. The specific heat exponent is also calculated; in four dimensions the effect of fluctuations appears in the form of logarithmic corrections. In the last sections the new method is compared to other ones and the differences are discussed. (Sz.N.Z.)

  10. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Y. Di

    2017-05-01

    Most multi-scale segmentation algorithms are not designed for high resolution remote sensing images and have difficulty communicating and using information across layers. In view of this, we propose a method for multi-scale segmentation of high resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and a band-weighted distance function is built to obtain the edge weight. According to this criterion, the initial segmentation objects of color images can be obtained by Kruskal's minimum spanning tree algorithm. Finally, segmentation images are obtained by the adaptive rule of Mumford–Shah region merging combined with spectral and texture information. The proposed method is evaluated using analog images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed method outperforms the fractal net evolution approach (FNEA) of the eCognition software in accuracy and is slightly inferior to FNEA in efficiency.

  11. Scaled multiple holes suction tip for microneurosurgery; Technical note

    Abdolkarim Rahmanian, Associate Professor of Neurosurgery

    2017-12-01

    Conclusion: The new suction tip permits easy and precise adjustment of suction power in microneurosurgical operations. Our scaled 3- and 4-hole suction tip is a simple and useful device for controlling suction power during microneurosurgical procedures.

  12. Multiple dynamical time-scales in networks with hierarchically

    Modular networks; hierarchical organization; synchronization. ... we show that such a topological structure gives rise to characteristic time-scale separation ... This suggests a possible functional role of such mesoscopic organization principle in ...

  13. The Great Chains of Computing: Informatics at Multiple Scales

    Kevin Kirby

    2011-10-01

    The perspective from which information processing is pervasive in the universe has proven to be an increasingly productive one. Phenomena from the quantum level to social networks have commonalities that can be usefully explicated using principles of informatics. We argue that the notion of scale is particularly salient here. An appreciation of what is invariant and what is emergent across scales, and of the variety of different types of scales, establishes a useful foundation for the transdiscipline of informatics. We survey the notion of scale and use it to explore the characteristic features of information statics (data), kinematics (communication), and dynamics (processing). We then explore the analogy to the principles of plenitude and continuity that feature in Western thought, under the name of the "great chain of being", from Plato through Leibniz and beyond, and show that the pancomputational turn is a modern counterpart of this ruling idea. We conclude by arguing that this broader perspective can enhance informatics pedagogy.

  14. Microstructural evolution at multiple scales during plastic deformation

    Winther, Grethe

    During plastic deformation metals develop microstructures which may be analysed on several scales, e.g. bulk textures, the scale of individual grains, intragranular phenomena in the form of orientation spreads as well as dislocation patterning by formation of dislocation boundaries in metals of m......, which is backed up by experimental data [McCabe et al. 2004; Wei et al., 2011; Hong, Huang, & Winther, 2013]. The current state of understanding as well as the major challenges are discusse....

  15. A multiple regression method for genomewide association studies ...

    Bujun Mei

    2018-06-07

    Jun 7, 2018 ... Similar to the typical genomewide association tests using LD ... new approach performed validly when the multiple regression based on linkage method was employed. .... the model, two groups of scenarios were simulated.

  16. National Earthquake Information Center Seismic Event Detections on Multiple Scales

    Patton, J.; Yeck, W. L.; Benz, H.; Earle, P. S.; Soto-Cordero, L.; Johnson, C. E.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center (NEIC) monitors seismicity on local, regional, and global scales using automatic picks from more than 2,000 near-real time seismic stations. This presents unique challenges in automated event detection due to the high variability in data quality, network geometries and density, and distance-dependent variability in observed seismic signals. To lower the overall detection threshold while minimizing false detection rates, NEIC has begun to test the incorporation of new detection and picking algorithms, including multiband (Lomax et al., 2012) and kurtosis (Baillard et al., 2014) pickers, and a new Bayesian associator (Glass 3.0). The Glass 3.0 associator allows for simultaneous processing of variably scaled detection grids, each with a unique set of nucleation criteria (e.g., nucleation threshold, minimum associated picks, nucleation phases) to meet specific monitoring goals. We test the efficacy of these new tools on event detection in networks of various scales and geometries, compare our results with previous catalogs, and discuss lessons learned. For example, we find that on local and regional scales, rapid nucleation of small events may require event nucleation with both P and higher-amplitude secondary phases (e.g., S or Lg). We provide examples of the implementation of a scale-independent associator for an induced seismicity sequence (local-scale), a large aftershock sequence (regional-scale), and for monitoring global seismicity. Baillard, C., Crawford, W. C., Ballu, V., Hibert, C., & Mangeney, A. (2014). An automatic kurtosis-based P-and S-phase picker designed for local seismic networks. Bulletin of the Seismological Society of America, 104(1), 394-409. Lomax, A., Satriano, C., & Vassallo, M. (2012). Automatic picker developments and optimization: FilterPicker - a robust, broadband picker for real-time seismic monitoring and earthquake early-warning, Seism. Res. Lett., 83, 531-540, doi: 10

  17. Multiple time scales in modeling the incidence of infections acquired in intensive care units

    Martin Wolkewitz

    2016-09-01

    Background: When patients are admitted to an intensive care unit (ICU), their risk of acquiring an infection depends strongly on their length of stay at risk in the ICU. In addition, the risk of infection is likely to vary over calendar time as a result of fluctuations in the prevalence of the pathogen on the ward. Hence the risk of infection is expected to depend on two time scales (time in ICU and calendar time) as well as on competing events (discharge or death) and their spatial location. The purpose of this paper is to develop and apply appropriate statistical models for the risk of ICU-acquired infection accounting for multiple time scales, competing risks and the spatial clustering of the data. Methods: A multi-center database from a Spanish surveillance network was used to study the occurrence of infection due to methicillin-resistant Staphylococcus aureus (MRSA). The analysis included 84,843 patient admissions between January 2006 and December 2011 from 81 ICUs. Stratified Cox models were used to study multiple time scales while accounting for spatial clustering of the data (patients within ICUs) and for death or discharge as competing events for MRSA infection. Results: Both time scales, time in ICU and calendar time, are highly associated with the MRSA hazard rate and cumulative risk. When using only one basic time scale, the interpretation and magnitude of several patient-individual risk factors differed. Risk factors concerning the severity of illness were more pronounced when using only calendar time. These differences disappeared when using both time scales simultaneously. Conclusions: The time-dependent dynamics of infections is complex and should be studied with models allowing for multiple time scales. For patient-individual risk factors we recommend stratified Cox regression models for competing events with ICU time as the basic time scale and calendar time as a covariate. The inclusion of calendar time and stratification by ICU

  18. Multiple independent identification decisions: a method of calibrating eyewitness identifications.

    Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul

    2004-02-01

    Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed. ((c) 2004 APA, all rights reserved)

  19. A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm

    Kamleh, Waseem

    2011-01-01

    Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
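
    As a rough illustration of the baseline scheme that this abstract generalises, the following is a minimal Python sketch of a Sexton-Weingarten-style nested leapfrog, in which each inner time step is an exact fraction of the outer one. The split harmonic forces and step counts are illustrative toys, not the lattice fermion actions of the paper.

    ```python
    # Minimal nested leapfrog sketch (Sexton-Weingarten style): the "slow" force
    # is kicked at the outer step size dt, while the "fast" force is integrated
    # with n_sub inner leapfrog steps of size dt / n_sub.

    def nested_leapfrog(q, p, dt, n_sub, f_slow, f_fast):
        """Advance (q, p) by one outer step dt."""
        p += 0.5 * dt * f_slow(q)          # outer half kick with the slow force
        h = dt / n_sub                     # inner step: an exact fraction of dt
        for _ in range(n_sub):
            p += 0.5 * h * f_fast(q)       # inner kick-drift-kick leapfrog
            q += h * p
            p += 0.5 * h * f_fast(q)
        p += 0.5 * dt * f_slow(q)          # closing outer half kick
        return q, p

    # Toy split oscillator: total force -(k1 + k2) * q with a stiff fast part.
    k1, k2 = 1.0, 100.0
    q, p = 1.0, 0.0
    for _ in range(1000):
        q, p = nested_leapfrog(q, p, 0.01, 10,
                               lambda x: -k1 * x, lambda x: -k2 * x)

    # For a symplectic integrator the energy should stay close to its initial
    # value 0.5 * (k1 + k2) = 50.5 for these initial conditions.
    E = 0.5 * p ** 2 + 0.5 * (k1 + k2) * q ** 2
    ```

    The generalisation discussed in the abstract relaxes the requirement that each scale be an exact multiple of the next smaller one; the sketch above keeps that restriction.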

  20. Efficiency scale and technological change in credit unions and multiple banks using the COSIF

    Wanderson Rocha Bittencourt

    2016-08-01

    The modernization of the financial intermediation process and adaptation to new technologies brought adjustments to operational processes, reducing the costs of information and borrowing and generating greater customer satisfaction due to increased competitiveness, in addition to long-run efficiency gains. In this context, this research analyzes the evolution of scale and technological efficiency of credit unions and multiple banks from 2009 to 2013. We used Data Envelopment Analysis (DEA), which allows the change in efficiency of institutions to be calculated through the Malmquist index. The results indicated that institutions employing larger volumes of assets in the composition of their resources showed gains in scale and technological efficiency, influencing the change in total factor productivity. It should be noted that in some years cooperatives showed greater advances in technology and scale efficiency than banks. However, this result can be explained by the fact that the average efficiency of credit unions was lower than that of banks in the analyzed sample, indicating a greater need for credit unions to improve internal processes compared with the multiple banks surveyed.

  1. New ISR and SPS collider multiplicity data and the Golokhvastov generalization of the KNO scaling

    Szwed, R.; Wrochna, G.

    1985-01-01

    The generalization of KNO scaling proposed by Golokhvastov (KNO-G scaling) is tested using pp multiplicity data, in particular results of the new high precision ISR measurements. Since the data obey KNO-G scaling over the full energy range √s=2.51-62.2 GeV with the scaling function psi(z), having only one free parameter, the superiority of the KNO-G over the standard approach is clearly demonstrated. The extrapolation within KNO-G scaling to the SPS Collider energy range and a comparison with the recent UA5 multiplicity results is presented. (orig.)

  2. Nonlinear MHD dynamics of tokamak plasmas on multiple time scales

    Kruger, S.E.; Schnack, D.D.; Brennan, D.P.; Gianakon, T.A.; Sovinec, C.R.

    2003-01-01

    Two types of numerical, nonlinear simulations using the NIMROD code are presented. In the first simulation, we model the disruption occurring in DIII-D discharge 87009 as an ideal MHD instability driven unstable by neutral-beam heating. The mode grows faster than exponential, but on a time scale that is a hybrid of the heating rate and the ideal MHD growth rate, as predicted by analytic theory. The second type of simulations, which occur on a much longer time scale, focus on the seeding of tearing modes by sawteeth. Pressure effects play a role both in the exterior region solutions and in the neoclassical drive terms. The results of both simulations are reviewed and their implications for experimental analysis are discussed. (author)

  3. Human learning: Power laws or multiple characteristic time scales?

    Gottfried Mayer-Kress

    2006-09-01

    The central proposal of A. Newell and Rosenbloom (1981) was that the power law is the ubiquitous law of learning. This proposition is discussed in the context of the key factors that led to the acceptance of the power law as the function of learning. We then outline the principles of an epigenetic landscape framework for considering the role of the characteristic time scales of learning and an approach to system identification of the processes of performance dynamics. In this view, the change of performance over time is the product of a superposition of characteristic exponential time scales that reflect the influence of different processes. This theoretical approach can reproduce the traditional power law of practice – within the experimental resolution of performance data sets – but we hypothesize that this function may prove to be a special and perhaps idealized case of learning.

  4. Gender Effect According to Item Directionality on the Perceived Stress Scale for Adults with Multiple Sclerosis

    Gitchel, W. Dent; Roessler, Richard T.; Turner, Ronna C.

    2011-01-01

    Assessment is critical to rehabilitation practice and research, and self-reports are a commonly used form of assessment. This study examines a gender effect according to item wording on the "Perceived Stress Scale" for adults with multiple sclerosis. Past studies have demonstrated two-factor solutions on this scale and other scales measuring…

  5. Transition in multiple-scale-lengths turbulence in plasmas

    Itoh, S.-I.; Yagi, M.; Kawasaki, M.; Kitazawa, A. [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, K. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2002-02-01

    The statistical theory of strong turbulence in inhomogeneous plasmas is developed for the cases where fluctuations with different scale-lengths coexist. Statistical nonlinear interactions between semi-micro and micro modes are first kept in the analysis as the drag, noise and drive. The nonlinear dynamics determines both the fluctuation levels and the cross-field turbulent transport for the fixed global parameters. A quenching or suppressing effect is induced by their nonlinear interplay, even if both modes are unstable when analyzed independently. Influence of the inhomogeneous global radial electric field is discussed. A new insight is given for the physics of the internal transport barrier. The thermal fluctuation of the scale length of λ_D is assumed to be statistically independent. The hierarchical structure is constructed according to the scale lengths. Transitions in turbulence are found and phase diagrams with cusp-type catastrophe are obtained. Dynamics is followed. Statistical properties of the subcritical excitation are discussed. The probability density function (PDF) and transition probability are obtained. Power laws are obtained in the PDF as well as in the transition probability. Generalization for the case where turbulence is composed of three classes of modes is also developed. A new catastrophe of turbulent states is obtained. (author)

  7. A model for AGN variability on multiple time-scales

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.

  8. HARMONIC ANALYSIS OF SVPWM INVERTER USING MULTIPLE-PULSES METHOD

    Mehmet YUMURTACI

    2009-01-01

    Space Vector Modulation (SVM) is a popular and important PWM technique for three-phase voltage source inverters in the control of induction motors. In this study, harmonic analysis of Space Vector PWM (SVPWM) is investigated using the multiple-pulses method. The multiple-pulses method calculates the Fourier coefficients of the individual positive and negative pulses of the output PWM waveform and adds them together, using the principle of superposition, to calculate the Fourier coefficients of the whole PWM output signal. Harmonic magnitudes can be calculated directly by this method without linearization, look-up tables or Bessel functions. In this study, the results obtained in the application of SVPWM for various values of the variable parameters are compared with the results obtained with the multiple-pulses method.
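
    The superposition idea summarised in this abstract can be sketched in a few lines of Python. The analytic per-pulse coefficients are standard Fourier results for rectangular pulses; the pulse list below is an ideal square wave used as a sanity check, not actual SVPWM switching angles.

    ```python
    import math

    # Each rectangular pulse of amplitude A spanning [t1, t2] (in radians, over a
    # 2*pi period) contributes closed-form terms to the nth Fourier coefficients;
    # summing the contributions of all pulses gives the coefficients of the whole
    # PWM waveform by superposition.

    def fourier_coeffs(pulses, n):
        """nth-harmonic (a_n, b_n) of a 2*pi-periodic train of rectangular pulses."""
        a_n = sum(A * (math.sin(n * t2) - math.sin(n * t1)) / (n * math.pi)
                  for A, t1, t2 in pulses)
        b_n = sum(A * (math.cos(n * t1) - math.cos(n * t2)) / (n * math.pi)
                  for A, t1, t2 in pulses)
        return a_n, b_n

    # Sanity check: an ideal square wave (+1 on [0, pi), -1 on [pi, 2*pi)) should
    # give the textbook result b_1 = 4/pi and a_1 = 0.
    square = [(1.0, 0.0, math.pi), (-1.0, math.pi, 2.0 * math.pi)]
    a1, b1 = fourier_coeffs(square, 1)
    ```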

  9. Research on neutron source multiplication method in nuclear critical safety

    Zhu Qingfu; Shi Yongqian; Hu Dingsheng

    2005-01-01

    The paper concerns research on the neutron source multiplication method in nuclear critical safety. Based on the neutron diffusion equation with an external neutron source, the effective sub-critical multiplication factor k_s is deduced; k_s differs from the effective neutron multiplication factor k_eff in the case of a sub-critical system with an external neutron source. The verification experiment on the sub-critical system indicates that the parameter measured with the neutron source multiplication method is k_s, and that k_s is related to the external neutron source position in the sub-critical system and to the external neutron source spectrum. The relation between k_s and k_eff and their effect on nuclear critical safety are discussed. (author)
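
    As a minimal numerical illustration of the idea behind this abstract, assuming the textbook subcritical relation M = 1/(1 - k_s) between the observed source multiplication M and the sub-critical factor, k_s can be inferred from a ratio of count rates. The rates below are invented for illustration, and the sketch ignores the source-position and spectrum effects the abstract highlights.

    ```python
    # Sketch: infer k_s from the ratio M of the multiplied count rate to the
    # bare-source count rate, using M = 1 / (1 - k_s)  =>  k_s = 1 - 1 / M.

    def k_s_from_counts(rate_with_multiplication, rate_source_only):
        """Estimate the sub-critical multiplication factor k_s."""
        M = rate_with_multiplication / rate_source_only
        return 1.0 - 1.0 / M

    # A detector seeing 10x the bare-source rate implies k_s = 0.9.
    ks = k_s_from_counts(1000.0, 100.0)
    ```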

  10. Relationships between avian richness and landscape structure at multiple scales using multiple landscapes

    Michael S. Mitchell; Scott H. Rutzmoser; T. Bently Wigley; Craig Loehle; John A. Gerwin; Patrick D. Keyser; Richard A. Lancia; Roger W. Perry; Christopher L. Reynolds; Ronald E. Thill; Robert Weih; Don White; Petra Bohall Wood

    2006-01-01

    Little is known about factors that structure biodiversity on landscape scales, yet current land management protocols, such as forest certification programs, place an increasing emphasis on managing for sustainable biodiversity at landscape scales. We used a replicated landscape study to evaluate relationships between forest structure and avian diversity at both stand...

  11. Preliminary validation study of the Spanish version of the satisfaction with life scale in persons with multiple sclerosis

    Lucas-Carrasco, Ramona; Sastre-Garriga, Jaume; Galan, Ingrid; Den Oudsten, Brenda L.; Power, Michael J.

    2014-01-01

    Purpose: To assess Life Satisfaction, using the Satisfaction with Life Scale (SWLS), and to analyze its psychometric properties in Multiple Sclerosis (MS). Method: Persons with MS (n = 84) recruited at the MS Centre of Catalonia (Spain) completed a battery of subjective assessments including the

  12. An Intuitionistic Multiplicative ORESTE Method for Patients’ Prioritization of Hospitalization

    Cheng Zhang

    2018-04-01

    The pressure on hospital beds is a common and intractable issue in public hospitals in China due to the large population. Assigning the order of hospitalization of patients is difficult because of complex patient information such as disease type, degree of emergency, and severity. It is critical to rank the patients taking full account of various factors. However, most of the evaluation criteria for hospitalization are qualitative, and classical ranking methods cannot derive the detailed relations between patients based on these criteria. Motivated by this, a comprehensive multiple criteria decision making method named the intuitionistic multiplicative ORESTE (in French: organisation, rangement et synthèse de données relationnelles) was proposed to handle the problem. The subjective and objective weights of criteria were considered in the proposed method. First, considering the vagueness of human perceptions of the alternatives, an intuitionistic multiplicative preference relation model is applied to represent the experts' preferences over pairwise alternatives with respect to the predetermined criteria. Then, a correlation coefficient-based weight determining method is developed to derive the objective weights of criteria. This method can overcome the biased results caused by highly correlated criteria. Afterwards, we improve the general ranking method ORESTE by introducing a new score function which considers both the subjective and objective weights of criteria. An intuitionistic multiplicative ORESTE method is then developed and further illustrated by a case study concerning the prioritization of patients.

  13. Large-Scale Data for Multiple-View Stereopsis

    Aanæs, Henrik; Jensen, Rasmus Ramsbøl; Vogiatzis, George

    2016-01-01

    The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis (MVS) methodology. The somewhat small size and variability of these data sets, however, limit their scope and the conclusions that can be derived from them. To facilitate further development within MVS, we here present a new and varied data set consisting of 80 scenes, seen from 49 or 64 accurate camera positions. This is accompanied by accurate structured light scans for reference and evaluation. In addition all images are taken under seven different lighting conditions. As a benchmark and to validate the use of our data set for obtaining reasonable and statistically significant findings about MVS, we have applied the three state-of-the-art MVS algorithms by Campbell et al., Furukawa et al., and Tola et al. to the data set.

  14. Numerical Investigation of Multiple-, Interacting-Scale Variable-Density Ground Water Flow Systems

    Cosler, D.; Ibaraki, M.

    2004-12-01

    The goal of our study is to elucidate the nonlinear processes that are important for multiple-, interacting-scale flow and solute transport in subsurface environments. In particular, we are focusing on the influence of small-scale instability development on variable-density ground water flow behavior in large-scale systems. Convective mixing caused by these instabilities may mix the fluids to a greater extent than would be the case with classical, Fickian dispersion. Most current numerical schemes for interpreting field-scale variable-density flow systems do not explicitly account for the complexities caused by small-scale instabilities and treat such processes as "lumped" Fickian dispersive mixing. Such approaches may greatly underestimate the mixing behavior and misrepresent the overall large-scale flow field dynamics. The specific objectives of our study are: (i) to develop an adaptive (spatial and temporal scales) three-dimensional numerical model that is fully capable of simulating field-scale variable-density flow systems with fine resolution (~1 cm); and (ii) to evaluate the importance of scale-dependent process interactions by performing a series of simulations on different problem scales ranging from laboratory experiments to field settings, including an aquifer storage and freshwater recovery (ASR) system similar to those planned for the Florida Everglades and in-situ contaminant remediation systems. We are examining (1) methods to create instabilities in field-scale systems, (2) porous media heterogeneity effects, and (3) the relation between heterogeneity characteristics (e.g., permeability variance and correlation length scales) and the mixing scales that develop for varying degrees of unstable stratification. 
Applications of our work include the design of new water supply and conservation measures (e.g., ASR systems), assessment of saltwater intrusion problems in coastal aquifers, and the design of in-situ remediation systems for aquifer restoration

  15. Scaling of chaotic multiplicity: A new observation in high-energy interactions

    Ghosh, D.; Ghosh, P.; Roy, J.

    1990-01-01

    We analyze high-energy-interaction data to study the dependence of the chaotic multiplicity on the pseudorapidity window and propose a new scaling function ψ̄(z̄) = ⟨n₁⟩/⟨n⟩_max, where ⟨n₁⟩ is the chaotic multiplicity and z̄ = ⟨n⟩/⟨n⟩_max is the reduced multiplicity, following the quantum-optical concept of particle production. It has been observed that the proposed ''chaotic multiplicity scaling'' is obeyed by pp, p̄p, and AA collisions at the different available energies
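
    The reduced variables in this abstract amount to two simple ratios; the trivial helper below, with invented multiplicity values, just makes the definitions concrete.

    ```python
    # psi_bar = <n_1> / <n>_max and z_bar = <n> / <n>_max, where <n_1> is the
    # mean chaotic multiplicity, <n> the mean multiplicity in the pseudorapidity
    # window, and <n>_max the maximum mean multiplicity.

    def reduced_pair(n1_mean, n_mean, n_max_mean):
        z_bar = n_mean / n_max_mean      # reduced multiplicity
        psi_bar = n1_mean / n_max_mean   # value of the proposed scaling function
        return z_bar, psi_bar

    # Invented example: <n_1> = 3, <n> = 6, <n>_max = 12.
    z_bar, psi_bar = reduced_pair(3.0, 6.0, 12.0)
    ```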

  16. High Agreement was Obtained Across Scores from Multiple Equated Scales for Social Anxiety Disorder using Item Response Theory.

    Sunderland, Matthew; Batterham, Philip; Calear, Alison; Carragher, Natacha; Baillie, Andrew; Slade, Tim

    2018-04-10

    There is no standardized approach to the measurement of social anxiety. Researchers and clinicians are faced with numerous self-report scales with varying strengths, weaknesses, and psychometric properties. The lack of standardization makes it difficult to compare scores across populations that utilise different scales. Item response theory offers one solution to this problem by equating different scales using an anchor scale to set a standardized metric. This study is the first to equate several scales for social anxiety disorder. Data from two samples (n=3,175 and n=1,052), recruited from the Australian community using online advertisements, were utilised to equate a network of 11 self-report social anxiety scales via a fixed-parameter item calibration method. Comparisons between actual and equated scores for most of the scales indicated a high level of agreement, with mean differences <0.10 (equivalent to a mean difference of less than one point on the standardized metric). This study demonstrates that scores from multiple scales that measure social anxiety can be converted to a common scale. Re-scoring observed scores to a common scale provides opportunities to combine research from multiple studies and ultimately better assess social anxiety in treatment and research settings.

  17. Symbolic interactionism as a theoretical perspective for multiple method research.

    Benzies, K M; Allen, M N

    2001-02-01

    Qualitative and quantitative research rely on different epistemological assumptions about the nature of knowledge. However, the majority of nurse researchers who use multiple method designs do not address the problem of differing theoretical perspectives. Traditionally, symbolic interactionism has been viewed as one perspective underpinning qualitative research, but it is also the basis for quantitative studies. Rooted in social psychology, symbolic interactionism has a rich intellectual heritage that spans more than a century. Underlying symbolic interactionism is the major assumption that individuals act on the basis of the meaning that things have for them. The purpose of this paper is to present symbolic interactionism as a theoretical perspective for multiple method designs with the aim of expanding the dialogue about new methodologies. Symbolic interactionism can serve as a theoretical perspective for conceptually clear and soundly implemented multiple method research that will expand the understanding of human health behaviour.

  18. A General Method for QTL Mapping in Multiple Related Populations Derived from Multiple Parents

    Yan AO

    2009-03-01

    It is well known that incorporating existing populations derived from multiple parents may improve QTL mapping and QTL-based breeding programs. However, no general maximum likelihood method has been available for this strategy. Based on QTL mapping in multiple related populations derived from two parents, a maximum likelihood estimation method is proposed that can incorporate several populations derived from three or more parents and can also handle different mating designs. Taking a circle design as an example, we conducted simulation studies of the effect of QTL heritability and sample size on the proposed method. The results showed that, under the same heritability, enhanced power of QTL detection and more precise and accurate parameter estimates were obtained when three F2 populations were analyzed jointly, compared with the joint analysis of any two F2 populations. Higher heritability, especially with larger sample sizes, increased the power of QTL detection and improved the estimation of parameters. Potential advantages of the method are as follows: first, existing results of QTL mapping in single populations can be compared and integrated with the proposed method, improving the power of QTL detection and the precision of QTL mapping. Second, owing to the multiple alleles carried by the multiple parents, the method can exploit genetic resources more fully, laying an important genetic groundwork for plant improvement.

  19. Method for measuring multiple scattering corrections between liquid scintillators

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source at different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point model theory that accounts for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
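
    As a back-of-envelope illustration of the time-of-flight kinematics involved, the sketch below converts flight time to neutron energy and flags detector pairs whose timing is consistent with inter-detector scattering; the geometry, tolerance, and function names are illustrative assumptions, not the authors' implementation:

```python
NEUTRON_MASS_MEV = 939.565   # neutron rest mass in MeV/c^2
C_CM_PER_NS = 29.9792458     # speed of light in cm/ns

def tof_energy_mev(distance_cm, tof_ns):
    """Non-relativistic kinetic energy E = (1/2) m v^2 from time of flight."""
    beta = distance_cm / tof_ns / C_CM_PER_NS
    return 0.5 * NEUTRON_MASS_MEV * beta ** 2

def crosstalk_candidate(sep_cm, dt_ns, e_scattered_mev, tol=0.2):
    """True if the inter-detector time difference matches the speed of a
    neutron of the given scattered energy, within a fractional tolerance."""
    v = C_CM_PER_NS * (2 * e_scattered_mev / NEUTRON_MASS_MEV) ** 0.5
    expected_dt = sep_cm / v
    return abs(dt_ns - expected_dt) / expected_dt < tol

# A ~2 MeV neutron covers 50 cm in roughly 25.6 ns.
print(round(tof_energy_mev(50.0, 25.6), 2))  # 1.99
```

A coincident pulse pair whose timing is far from `expected_dt` for any plausible scattered energy would instead be attributed to two distinct neutrons.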

  20. Neural Computations in a Dynamical System with Multiple Time Scales

    Yuanyuan Mi

    2016-09-01

    Full Text Available Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at single neurons, and short-term facilitation (STF) and depression (STD) at neuronal synapses. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit is for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in their dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  1. Friction modeling on multiple scales for Deep drawing processes

    Karupannasamy, Dinesh Kumar

    2013-01-01

    A deep drawing process is one of the widely used manufacturing techniques in the automotive industry because of its capability to produce complex shapes with sheet material, often performed using lubricants to ease the forming. Finite Element Methods (FEM) are popularly used at the design stage to

  2. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    H. Shen

    2012-08-01

    Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method can integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
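
    A minimal numerical sketch of this kind of MAP fusion, on a toy 1-D signal with quadratic data terms weighted by inverse noise variance and a smoothness prior solved by gradient descent (the model and parameters are invented for illustration, not the authors' observation models):

```python
import numpy as np

def map_fuse(obs, weights, lam=0.1, lr=0.5, iters=300):
    """Gradient descent on a quadratic MAP objective:
    sum_k w_k * ||z - y_k||^2 / 2 + lam * ||D z||^2 / 2,
    where D is a discrete difference operator (smoothness prior)."""
    z = np.zeros_like(obs[0])
    for _ in range(iters):
        grad = sum(w * (z - y) for y, w in zip(obs, weights))
        # smoothness term: lam * (D^T D) z via a [-1, 2, -1] stencil
        grad += lam * np.convolve(z, [-1.0, 2.0, -1.0], mode="same")
        z -= lr * grad
    return z

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, np.pi, 64))
# two "sensors" with different noise levels; weights ~ inverse variance
obs = [truth + rng.normal(0, 0.2, 64), truth + rng.normal(0, 0.1, 64)]
w = np.array([1 / 0.04, 1 / 0.01]); w = w / w.sum()
fused = map_fuse(obs, w)
print(np.abs(fused - truth).mean() < np.abs(obs[0] - truth).mean())  # True
```

With `lr=0.5` the step size stays below the stability bound of this quadratic objective, so the iteration converges to the regularized weighted average of the observations.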

  3. Upscaling permeability for three-dimensional fractured porous rocks with the multiple boundary method

    Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas

    2018-02-01

    Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and best fit the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for being incorporated into any existing flow-based upscaling procedures, which helps in reducing the uncertainty of groundwater models.
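
    Any flow-based upscaled permeability must lie between the classical arithmetic and harmonic averages; a minimal sketch of those bounds for a simple layered block (an illustration of the upscaling idea, not the multiple boundary method itself):

```python
def upscale_layered(perms, thicknesses):
    """Classical bounds for the upscaled permeability of a layered block:
    arithmetic mean for flow parallel to the layers, harmonic mean for
    flow perpendicular to them."""
    total = sum(thicknesses)
    k_parallel = sum(k * h for k, h in zip(perms, thicknesses)) / total
    k_perpendicular = total / sum(h / k for k, h in zip(perms, thicknesses))
    return k_parallel, k_perpendicular

# A fracture-like high-permeability layer dominates parallel flow
# but barely helps perpendicular flow.
k_par, k_perp = upscale_layered([100.0, 1.0], [0.1, 0.9])
print(round(k_par, 2), round(k_perp, 2))  # 10.9 1.11
```

The spread between the two bounds is exactly what makes direction-dependent, flow-based upscaling (such as the multiple boundary method above) necessary for fractured rocks.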

  4. Multiple-scale structures: from Faraday waves to soft-matter quasicrystals

    Samuel Savitz

    2018-05-01

    Full Text Available For many years, quasicrystals were observed only as solid-state metallic alloys, yet current research is now actively exploring their formation in a variety of soft materials, including systems of macromolecules, nanoparticles and colloids. Much effort is being invested in understanding the thermodynamic properties of these soft-matter quasicrystals in order to predict and possibly control the structures that form, and hopefully to shed light on the broader yet unresolved general questions of quasicrystal formation and stability. Moreover, the ability to control the self-assembly of soft quasicrystals may contribute to the development of novel photonics or other applications based on self-assembled metamaterials. Here a path is followed, leading to quantitative stability predictions, that starts with a model developed two decades ago to treat the formation of multiple-scale quasiperiodic Faraday waves (standing wave patterns in vibrating fluid surfaces) and which was later mapped onto systems of soft particles, interacting via multiple-scale pair potentials. The article reviews, and substantially expands, the quantitative predictions of these models, while correcting a few discrepancies in earlier calculations, and presents new analytical methods for treating the models. In so doing, a number of new stable quasicrystalline structures are found with octagonal, octadecagonal and higher-order symmetries, some of which may, it is hoped, be observed in future experiments.

  5. Multiple-scale structures: from Faraday waves to soft-matter quasicrystals.

    Savitz, Samuel; Babadi, Mehrtash; Lifshitz, Ron

    2018-05-01

    For many years, quasicrystals were observed only as solid-state metallic alloys, yet current research is now actively exploring their formation in a variety of soft materials, including systems of macromolecules, nanoparticles and colloids. Much effort is being invested in understanding the thermodynamic properties of these soft-matter quasicrystals in order to predict and possibly control the structures that form, and hopefully to shed light on the broader yet unresolved general questions of quasicrystal formation and stability. Moreover, the ability to control the self-assembly of soft quasicrystals may contribute to the development of novel photonics or other applications based on self-assembled metamaterials. Here a path is followed, leading to quantitative stability predictions, that starts with a model developed two decades ago to treat the formation of multiple-scale quasiperiodic Faraday waves (standing wave patterns in vibrating fluid surfaces) and which was later mapped onto systems of soft particles, interacting via multiple-scale pair potentials. The article reviews, and substantially expands, the quantitative predictions of these models, while correcting a few discrepancies in earlier calculations, and presents new analytical methods for treating the models. In so doing, a number of new stable quasicrystalline structures are found with octagonal, octadecagonal and higher-order symmetries, some of which may, it is hoped, be observed in future experiments.

  6. Tidal Channel Diatom Assemblages Reflect within Wetland Environmental Conditions and Land Use at Multiple Scales

    We characterized regional patterns of the tidal channel benthic diatom community and examined the relative importance of local wetland and surrounding landscape level factors measured at multiple scales in structuring this assemblage. Surrounding land cover was characterized at ...

  7. A study of multiplicity scaling of particles produced in 16O-nucleus collisions

    Ahmad, N.

    2015-01-01

    Koba-Nielsen-Olesen (KNO) scaling has been a dominant framework for studying the behaviour of the multiplicity distribution of charged particles produced in high-energy hadronic collisions. Several workers have attempted to investigate the multiplicity distributions of particles produced in hadron-hadron (h-h), hadron-nucleus (h-A) and nucleus-nucleus (A-A) collisions at relativistic energies. Multiplicity distributions in p-nucleus interactions in emulsion experiments are found to be consistent with KNO scaling. The applicability of the scaling of multiplicities was extended to FNAL energies by earlier workers. Slattery has shown that KNO scaling is in agreement with the data on pp interactions over a wide range of energies.
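
    KNO scaling asserts that the combination psi(z) = <n> P(n) depends on energy only through z = n/<n>, so distributions at different energies collapse onto one curve. The sketch below illustrates that collapse using geometric multiplicity distributions as a toy stand-in for the experimental ones:

```python
import numpy as np

def kno_scale(counts):
    """Map a multiplicity distribution P(n) to KNO variables
    z = n/<n> and psi = <n> * P(n)."""
    n = np.arange(len(counts), dtype=float)
    p = counts / counts.sum()
    mean_n = (n * p).sum()
    return n / mean_n, mean_n * p

def psi_at(z_target, mean):
    """psi(z_target) for a geometric multiplicity distribution of the
    given mean; as the mean grows, psi(z) approaches exp(-z)."""
    p_success = 1.0 / (1.0 + mean)
    n = np.arange(int(mean * 12))
    counts = p_success * (1 - p_success) ** n  # exact geometric P(n)
    z, psi = kno_scale(counts)
    return float(np.interp(z_target, z, psi))

# Two very different mean multiplicities give nearly the same psi(1),
# i.e. the distributions collapse in KNO variables.
print(round(psi_at(1.0, 5.0), 3), round(psi_at(1.0, 50.0), 3))
```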

  8. Multiple Scale Reaction-Diffusion-Advection Problems with Moving Fronts

    Nefedov, Nikolay

    2016-06-01

    In this work we discuss the further development of the general scheme of the asymptotic method of differential inequalities to investigate stability and motion of sharp internal layers (fronts) for nonlinear singularly perturbed parabolic equations, which are called in applications reaction-diffusion-advection equations. Our approach is illustrated for some new important cases of initial boundary value problems. We present results on stability and on the motion of the fronts.

  9. Multiple Scale Music Segmentation Using Rhythm, Timbre, and Harmony

    Kristoffer Jensen

    2007-01-01

    Full Text Available The segmentation of music into intro-chorus-verse-outro, and similar segments, is a difficult task. A method for performing automatic segmentation based on features related to rhythm, timbre, and harmony is presented and evaluated, both by comparing the features with each other and by comparing them with manual segmentations of a database of 48 songs. Standard information retrieval performance measures are used in the comparison, and it is shown that the timbre-related feature performs best.

  10. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
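
    The model-based idea can be sketched with a single log-distance path-loss model and a grid search over candidate positions whose predicted signal strengths best match the measurements. This is a deliberately simplified stand-in for MFAM, with invented anchor positions and propagation parameters:

```python
import math

def rss(dist, tx_power, path_loss_exp=2.5):
    """Log-distance path-loss model (parameters are illustrative)."""
    return tx_power - 10.0 * path_loss_exp * math.log10(max(dist, 0.1))

def locate(anchors, measured, step=0.25, extent=10.0):
    """Grid search for the position whose predicted RSS values (possibly
    from anchors at different frequencies/powers) best match the
    measurements in the least-squares sense."""
    best, best_err = None, float("inf")
    steps = int(extent / step) + 1
    for i in range(steps):
        for j in range(steps):
            x, y = i * step, j * step
            err = 0.0
            for (ax, ay, p), m in zip(anchors, measured):
                err += (rss(math.hypot(x - ax, y - ay), p) - m) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Three anchors with different TX powers (standing in for different
# frequencies); measurements are generated noise-free at the true
# position (3, 4), so the search recovers it exactly.
anchors = [(0.0, 0.0, -20.0), (10.0, 0.0, -30.0), (0.0, 10.0, -20.0)]
true_pos = (3.0, 4.0)
measured = [rss(math.hypot(true_pos[0] - ax, true_pos[1] - ay), p)
            for ax, ay, p in anchors]
print(locate(anchors, measured))  # (3.0, 4.0)
```

With noisy measurements the same search returns the least-squares position estimate; adding anchors at further frequencies simply adds terms to `err`.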

  11. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    Jure Tuta

    2018-03-01

    Full Text Available This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  12. Multiple Contexts, Multiple Methods: A Study of Academic and Cultural Identity among Children of Immigrant Parents

    Urdan, Tim; Munoz, Chantico

    2012-01-01

    Multiple methods were used to examine the academic motivation and cultural identity of a sample of college undergraduates. The children of immigrant parents (CIPs, n = 52) and the children of non-immigrant parents (non-CIPs, n = 42) completed surveys assessing core cultural identity, valuing of cultural accomplishments, academic self-concept,…

  13. Correction of measured multiplicity distributions by the simulated annealing method

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method for solving combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
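
    A minimal simulated-annealing loop, shown on a toy one-dimensional cost function rather than the unfolding problem of the paper; the cooling schedule and parameters are illustrative:

```python
import math, random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000, seed=1):
    """Minimal simulated annealing: always accept downhill moves, accept
    uphill moves with probability exp(-delta/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy example: recover the minimum of (x - 3)^2 by random perturbations.
best, fbest = anneal(lambda x: (x - 3.0) ** 2,
                     lambda x, rng: x + rng.uniform(-0.5, 0.5),
                     x0=10.0)
print(round(best, 1))  # close to 3.0
```

For a combinatorial problem such as distribution unfolding, `neighbor` would instead propose a small discrete change (e.g. moving one count between bins) and `cost` would measure the misfit to the observed distribution.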

  14. How do the multiple large-scale climate oscillations trigger extreme precipitation?

    Shi, Pengfei; Yang, Tao; Xu, Chong-Yu; Yong, Bin; Shao, Quanxi; Li, Zhenya; Wang, Xiaoyan; Zhou, Xudong; Li, Shu

    2017-10-01

    Identifying the links between variations in large-scale climate patterns and precipitation is of tremendous assistance in characterizing surpluses or deficits of precipitation, which is especially important for the evaluation of local water resources and ecosystems in semi-humid and semi-arid regions. Restricted by the current limited knowledge of the underlying mechanisms, statistical correlation methods are often used rather than physically based models to characterize these connections. Nevertheless, available correlation methods are generally unable to reveal the interactions among a wide range of climate oscillations and their associated effects on precipitation, especially on extreme precipitation. In this work, a probabilistic analysis approach based on a state-of-the-art Copula-based joint probability distribution is developed to characterize the aggregated behaviour of large-scale climate patterns and their connections to precipitation. This method is employed to identify the complex connections between climate patterns (the Atlantic Multidecadal Oscillation (AMO), El Niño-Southern Oscillation (ENSO) and Pacific Decadal Oscillation (PDO)) and seasonal precipitation over a typical semi-humid and semi-arid region, the Haihe River Basin in China. Results show that the interactions among multiple climate oscillations are non-uniform in most seasons and phases. Certain joint extreme phases can significantly trigger extreme precipitation (floods and droughts) owing to the amplification effect among climate oscillations.
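
    The notion of joint extreme phases can be illustrated with a toy Gaussian-copula experiment: under positive dependence, two indices exceed their upper quantiles together far more often than independence would predict. The correlation value and quantile threshold below are invented for illustration:

```python
import numpy as np

# Two correlated "climate indices" sampled from a bivariate normal
# (i.e. a Gaussian copula with standard normal margins).
rng = np.random.default_rng(42)
rho = 0.7
x = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                            size=100_000)

# Probability that both indices exceed their own 90th percentiles.
q1, q2 = np.quantile(x[:, 0], 0.9), np.quantile(x[:, 1], 0.9)
joint = float(np.mean((x[:, 0] > q1) & (x[:, 1] > q2)))

independent = 0.1 * 0.1  # 1% baseline if the indices were independent
print(joint > 2 * independent)  # True: dependence amplifies joint extremes
```

Replacing the Gaussian copula with other copula families changes how strongly the tails couple, which is exactly the kind of aggregated behaviour the abstract's approach characterizes.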

  15. System and method for image registration of multiple video streams

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  16. Multiple time scale analysis of sediment and runoff changes in the Lower Yellow River

    K. Chi

    2018-06-01

    Full Text Available Sediment and runoff changes at seven hydrological stations along the Lower Yellow River (LYR) (Huayuankou, Jiahetan, Gaocun, Sunkou, Aishan, Qikou and Lijin stations) from 1980 to 2003 were analyzed at multiple time scales. The maximum monthly, daily and hourly sediment load and runoff were also analyzed together with the annual mean values. The Mann–Kendall non-parametric test and the Hurst coefficient method were adopted in the study. The research results indicate that (1) the runoff at the seven hydrological stations was significantly reduced over the study period at different time scales, whereas the trends of the sediment load at these stations were not obvious; the sediment load at the Huayuankou, Jiahetan and Aishan stations even increased slightly as runoff decreased. (2) The trends of the sediment load at different time scales differed at the Luokou and Lijin stations: although the annual and monthly sediment loads were broadly flat, the maximum hourly sediment load showed a decreasing trend. (3) According to the Hurst coefficients, the trends of sediment and runoff will continue if no measures are taken, which proves the necessity of the runoff–sediment regulation scheme.
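
    The Mann–Kendall test used here reduces to counting concordant and discordant pairs in the time series; a self-contained sketch (without the tie correction, for brevity) is:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and the
    normal-approximation Z score (no tie correction)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A strictly decreasing "runoff" series gives the minimum possible S and
# a large negative Z, i.e. a significant downward trend at |Z| > 1.96.
s, z = mann_kendall([9, 8, 7, 6, 5, 4, 3, 2, 1, 0])
print(s, round(z, 2))  # -45 -3.94
```

Applying the same test to annual, monthly, daily and hourly series is what "multiple time scale" analysis amounts to operationally.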

  17. Ear Detection under Uncontrolled Conditions with Multiple Scale Faster Region-Based Convolutional Neural Networks

    Yi Zhang

    2017-04-01

    Full Text Available Ear detection is an important step in ear recognition approaches. Most existing ear detection techniques are based on manually designed features or shallow learning algorithms. However, researchers have found that pose variation, occlusion, and imaging conditions present a great challenge to traditional ear detection methods under uncontrolled conditions. This paper proposes an efficient technique involving Multiple Scale Faster Region-based Convolutional Neural Networks (Faster R-CNN) to detect ears from 2D profile images in natural images automatically. Firstly, three regions of different scales are detected to infer information about the ear location context within the image. Then an ear region filtering approach is proposed to extract the correct ear region and eliminate false positives automatically. In an experiment with a test set of 200 web images (with variable photographic conditions), 98% of ears were accurately detected. Experiments were likewise conducted on the Collection J2 of the University of Notre Dame Biometrics Database (UND-J2) and the University of Beira Interior Ear dataset (UBEAR), which contain large occlusion, scale, and pose variations. Detection rates of 100% and 98.22%, respectively, demonstrate the effectiveness of the proposed approach.

  18. Statistics of electron multiplication in multiplier phototube: iterative method

    Grau Malonda, A.; Ortiz Sanchez, J.F.

    1985-01-01

    An iterative method is applied to study the variation of the dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5, and for multiplication factors of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (author)
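
    The iterative flavor of such cascade statistics can be sketched by propagating the mean and variance of the electron number stage by stage through the dynode chain, here assuming Poisson secondary emission (an illustrative assumption; the paper's iteration works on full distributions):

```python
def cascade_moments(m, stages, var_single=None):
    """Iterate the mean and variance of the electron number through a
    dynode chain in which each electron produces a random number of
    secondaries with mean m (Poisson emission: per-dynode variance = m)."""
    if var_single is None:
        var_single = m  # Poisson secondary emission
    mean, var = 1.0, 0.0  # exactly one electron hits the first dynode
    for _ in range(stages):
        # law of total variance for a random sum of secondaries
        var = mean * var_single + var * m * m
        mean *= m
    return mean, var

mean, var = cascade_moments(m=2.5, stages=5)
print(mean)  # 2.5**5 = 97.65625
```

Starting the recursion from `mean = r` (or averaging over a Poisson number of incident electrons) reproduces the other two incidence situations the abstract mentions.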

  19. Statistics of electron multiplication in a multiplier phototube; Iterative method

    Ortiz, J. F.; Grau, A.

    1985-01-01

    In the present paper an iterative method is applied to study the variation of the dynode response in the multiplier phototube. Three different situations are considered, corresponding to the following modes of electron incidence on the first dynode: incidence of exactly one electron, incidence of exactly r electrons, and incidence of an average of r̄ electrons. The responses are given for a number of steps between 1 and 5, and for multiplication factors of 2.1, 2.5, 3 and 5. We also study the variance, the skewness and the excess kurtosis for different multiplication factors. (Author) 11 refs

  20. Modeling Group Perceptions Using Stochastic Simulation: Scaling Issues in the Multiplicative AHP

    Barfod, Michael Bruhn; van den Honert, Robin; Salling, Kim Bang

    2016-01-01

    This paper proposes a new decision support approach for applying stochastic simulation to the multiplicative analytic hierarchy process (AHP) in order to deal with issues concerning the scale parameter. The paper suggests a new approach that captures the influence from the scale parameter by maki...

  1. Patterns of disturbance at multiple scales in real and simulated landscapes

    Giovanni Zurlini; Kurt H. Riitters; Nicola Zaccarelli; Irene Petrosoillo

    2007-01-01

    We describe a framework to characterize and interpret the spatial patterns of disturbances at multiple scales in socio-ecological systems. Domains of scale are defined in pattern metric space and mapped in geographic space, which can help to understand how anthropogenic disturbances might impact biodiversity through habitat modification. The approach identifies typical...

  2. Walking path-planning method for multiple radiation areas

    Liu, Yong-kuo; Li, Meng-kun; Peng, Min-jun; Xie, Chun-li; Yuan, Cheng-qian; Wang, Shuang-yu; Chao, Nan

    2016-01-01

    Highlights: • A radiation environment modeling method is designed. • A path-evaluating method and a segmented path-planning method are proposed. • A path-planning simulation platform for radiation environments is built. • The method avoids being misled by the minimum dose path of a single area. - Abstract: Based on a minimum dose path-searching method, a walking path-planning method for multiple radiation areas was designed in this paper to solve the minimum dose path problem in a single area and to find the minimum dose path across the whole space. A path-planning simulation platform was built using the C# programming language and the DirectX engine. The simulation platform was used in simulations dealing with virtual nuclear facilities. Simulation results indicated that the walking path-planning method is effective in providing safety for people walking in nuclear facilities.
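
    The minimum dose path-searching step can be sketched as a shortest-path problem on a grid of dose rates, e.g. with Dijkstra's algorithm; the grid and costs below are invented for illustration:

```python
import heapq

def min_dose_path(dose, start, goal):
    """Dijkstra over a grid where each step accrues the dose rate of the
    cell entered; returns the minimum accumulated dose from start to goal."""
    rows, cols = len(dose), len(dose[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + dose[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# A high-dose block in the middle row forces a detour around it.
grid = [[1, 1, 1, 1],
        [1, 9, 9, 1],
        [1, 1, 1, 1]]
print(min_dose_path(grid, (0, 0), (2, 3)))  # 5.0
```

Segmenting a route through several radiation areas then amounts to chaining such searches between the entry and exit points of each area.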

  3. New weighting methods for phylogenetic tree reconstruction using multiple loci.

    Misawa, Kazuharu; Tajima, Fumio

    2012-08-01

    Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance used for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances and small weights to inappropriate ones. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by both the modified Tajima-Takezaki and the modified least-squares methods.
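
    The general idea of down-weighting unreliable loci can be sketched with plain inverse-variance pooling; this is a generic illustration, not the exact modified Tajima-Takezaki or modified least-squares weights:

```python
def pooled_distance(distances, variances):
    """Pool per-locus distance estimates with inverse-variance weights:
    loci with noisier estimates contribute less to the pooled distance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, distances)) / total

# Three loci estimate the same pairwise distance with different precision;
# the pooled value is pulled toward the most precise locus.
d = pooled_distance([0.10, 0.14, 0.30], [0.01, 0.02, 0.50])
print(round(d, 3))  # 0.116
```

An unweighted pool of the same estimates would give 0.18, dragged upward by the noisy third locus; the weighted pool stays near the well-estimated distances.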

  4. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Moysés Nascimento

    2015-02-01

    Full Text Available This study aimed to evaluate the efficiency of the multiple centroid method for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bi-segmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data on the evaluation of dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two repetitions carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it showed no ambiguous indications, provided that ideotypes were defined according to the researcher's interest, facilitating data interpretation.

  5. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to determine the exact cause and manner of non-natural deaths. In this regard, the use of multiple methods in suicide poses a challenge for investigators, especially when the choice of methods is unplanned. There is an increased likelihood that suspicions of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported in which the victim resorted to multiple methods to end his life, in what appeared to be an unplanned variant based on the death scene investigation. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  6. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to obtain reliable estimates of indices such as species richness. However, most research on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on the optimal seasons to use gears when targeting multiple species. The evaluation of species richness and the number of individuals sampled using multiple gear combinations demonstrated no appreciable benefit over relatively few gears (e.g., up to four) used in optimal seasons. Specifically, over 90% of the species encountered with all gear type and season combinations (N = 19) from the six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative for the monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.
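
    Choosing a small set of gear/season combinations that together capture most species is essentially a set-cover problem; a greedy sketch with hypothetical catch data (the gear names and species lists below are invented):

```python
def greedy_gear_selection(catches, k):
    """Pick the k gear/season combinations that add the most new species,
    one at a time (a simple set-cover heuristic, not the authors' analysis)."""
    chosen, covered = [], set()
    pool = dict(catches)
    for _ in range(k):
        best = max(pool, key=lambda g: len(pool[g] - covered))
        chosen.append(best)
        covered |= pool.pop(best)
    return chosen, covered

# Hypothetical species lists per gear/season combination.
catches = {
    "electrofishing-fall": {"bluegill", "bass", "walleye", "carp"},
    "fyke-summer": {"bluegill", "bullhead", "crappie"},
    "trawl-summer": {"darter", "shiner"},
    "gillnet-spring": {"walleye", "carp"},
}
chosen, covered = greedy_gear_selection(catches, 2)
print(chosen[0], len(covered))  # electrofishing-fall 6
```

The greedy choice mirrors the abstract's finding: a handful of well-timed gears can cover most of the species pool, and adding redundant gears (here, spring gillnetting) contributes nothing new.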

  7. Computational methods for criticality safety analysis within the scale system

    Parks, C.V.; Petrie, L.M.; Landers, N.F.; Bucholz, J.A.

    1986-01-01

    The criticality safety analysis capabilities within the SCALE system are centered around the Monte Carlo codes KENO IV and KENO V.a, which are both included in SCALE as functional modules. The XSDRNPM-S module is also an important tool within SCALE for obtaining multiplication factors for one-dimensional system models. This paper reviews the features and modeling capabilities of these codes along with their implementation within the Criticality Safety Analysis Sequences (CSAS) of SCALE. The CSAS modules provide automated cross-section processing and user-friendly input that allow criticality safety analyses to be done in an efficient and accurate manner. 14 refs., 2 figs., 3 tabs

  8. Multiple plasmonically induced transparency for chip-scale bandpass filters in metallic nanowaveguides

    Lu, Hua; Yue, Zengqi; Zhao, Jianlin

    2018-05-01

    We propose and investigate a new kind of bandpass filter based on the plasmonically induced transparency (PIT) effect in a special metal-insulator-metal (MIM) waveguide system. Finite element method (FEM) simulations illustrate that an obvious PIT response can be generated in the metallic nanostructure with the stub and coupled cavities. The lineshape and position of the PIT peak depend in particular on the lengths of the stub and coupled cavities, the waveguide width, and the coupling distance between the stub and coupled cavities. The numerical simulations are in accordance with the results obtained by temporal coupled-mode theory. A multi-peak PIT effect can be achieved by integrating multiple coupled cavities into the plasmonic waveguide. This PIT response enables the flexible realization of chip-scale multi-channel bandpass filters, which could find crucial applications in highly integrated optical circuits for signal processing.

  9. Operational tools to build a multicriteria territorial risk scale with multiple stakeholders

    Cailloux, Olivier; Mayag, Brice; Meyer, Patrick; Mousseau, Vincent

    2013-01-01

Evaluating and comparing the threats and vulnerabilities associated with territorial zones according to multiple criteria (industrial activity, population, etc.) can be a time-consuming task and often requires the participation of several stakeholders. Rather than evaluating these zones directly, building a risk assessment scale and using it in a formal procedure makes it possible to automate the assessment, to apply it repeatedly and in large-scale contexts and, provided the chosen procedure and scale are accepted, to make it objective. One of the main difficulties of building such a formal evaluation procedure is accounting for the preferences of multiple decision makers. The procedure used in this article, ELECTRE TRI, uses the performances of each territorial zone on multiple criteria, together with preferential parameters from multiple decision makers, to qualitatively assess their associated risk level. We also present operational tools for implementing such a procedure in practice, and show their use on a detailed example
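
    The pessimistic assignment rule at the heart of ELECTRE TRI can be illustrated with a heavily simplified sketch in Python. Only the weighted concordance test is kept; discordance and veto thresholds are omitted, and the criteria, boundary profiles, weights and cutting level below are invented for the example.

    ```python
    # Simplified ELECTRE TRI-style pessimistic assignment (illustrative sketch).
    # Assumptions: higher criterion values mean higher risk; discordance/veto
    # thresholds are omitted; profiles, weights and the cutting level are invented.

    def concordance(zone, profile, weights):
        """Weighted share of criteria on which the zone is at least as risky
        as the boundary profile."""
        agree = sum(w for z, p, w in zip(zone, profile, weights) if z >= p)
        return agree / sum(weights)

    def assign_risk(zone, profiles, weights, cut=0.6):
        """Pessimistic rule: walk boundary profiles from riskiest down and
        assign the first category whose lower profile the zone outranks."""
        for level, profile in enumerate(profiles):   # profiles sorted high -> low
            if concordance(zone, profile, weights) >= cut:
                return len(profiles) - level         # higher number = higher risk
        return 0                                     # lowest risk category

    # Invented criteria: industrial activity, population density, exposure (0-10).
    profiles = [(8, 8, 8), (5, 5, 5), (2, 2, 2)]     # category boundaries, high -> low
    weights = (0.5, 0.3, 0.2)

    print(assign_risk((9, 7, 6), profiles, weights))  # 2
    print(assign_risk((1, 2, 1), profiles, weights))  # 0
    ```

    A real ELECTRE TRI implementation would also test discordance on each criterion and could run the optimistic rule for comparison; the sketch only conveys how the profiles and cutting level sort zones into ordered risk categories.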

  10. Geometric calibration method for multiple head cone beam SPECT systems

    Rizo, Ph.; Grangeat, P.; Guillemaud, R.; Sauze, R.

    1993-01-01

A method is presented for performing geometric calibration on Single Photon Emission Tomography (SPECT) cone beam systems with multiple cone beam collimators, each having its own orientation parameters. This calibration method relies on the fact that, in tomography, for each head, the relative position of the rotation axis and of the collimator does not change during the acquisition. In order to ensure the stability of the method, the parameters to be estimated are separated into intrinsic and extrinsic parameters. The intrinsic parameters describe the acquisition geometry, and the extrinsic parameters describe the position of the detection system with respect to the rotation axis. (authors) 3 refs

  11. A crack growth evaluation method for interacting multiple cracks

    Kamaya, Masayuki

    2003-01-01

When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to Section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis if the specified conditions are satisfied. In crack growth processes, however, no prescription for the interference between multiple cracks is given in this code. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple crack growth behavior for many cases (e.g. different relative position and length) that could not be studied by experiment alone. Based on these analyses, a new crack growth analysis method was suggested for taking into account the interference between multiple cracks. (author)

  12. Galerkin projection methods for solving multiple related linear systems

    Chan, T.F.; Ng, M.; Wan, W.L.

    1996-12-31

We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where the A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
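
    The seed-projection idea for the special case A^(i) = A can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' exact algorithm (which reuses the CG direction vectors directly): it builds an orthonormal Krylov basis from the seed right-hand side and applies a Galerkin projection to a nearby system. All matrix sizes and data are invented.

    ```python
    import numpy as np

    # Seed/Galerkin projection sketch for multiple related right-hand sides
    # sharing an SPD matrix A.  Sizes, basis dimension and data are invented.
    rng = np.random.default_rng(0)
    n, k = 50, 25
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)                       # symmetric positive definite
    b_seed = rng.standard_normal(n)
    b_other = b_seed + 0.01 * rng.standard_normal(n)  # a "close" right-hand side

    # Orthonormal basis of the Krylov subspace K_k(A, b_seed)
    V = np.zeros((n, k))
    V[:, 0] = b_seed / np.linalg.norm(b_seed)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        w -= V[:, :j] @ (V[:, :j].T @ w)              # Gram-Schmidt step
        V[:, j] = w / np.linalg.norm(w)

    def galerkin_solve(b):
        """Galerkin projection: x ~ V y with (V^T A V) y = V^T b."""
        return V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

    res_seed = np.linalg.norm(b_seed - A @ galerkin_solve(b_seed)) / np.linalg.norm(b_seed)
    res_other = np.linalg.norm(b_other - A @ galerkin_solve(b_other)) / np.linalg.norm(b_other)
    print(res_seed < 1e-8, res_other < 0.1)
    ```

    The seed system is solved almost exactly, while the nearby system inherits most of that convergence from the shared subspace — the effect that makes only a few restarts necessary when right-hand sides are close.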

  13. A novel method for producing multiple ionization of noble gas

    Wang Li; Li Haiyang; Dai Dongxu; Bai Jiling; Lu Richang

    1997-01-01

We introduce a novel method for producing multiple ionization of He, Ne, Ar, Kr and Xe. A nanosecond pulsed electron beam with a large number density, whose energy could be controlled, was produced by directing a focused 308 nm laser beam onto a stainless steel grid. Using this electron beam on a Time-of-Flight Mass Spectrometer, we obtained multiple ionization of the noble gases He, Ne, Ar and Xe. Time-of-flight mass spectra of these ions are presented. These ions are assumed to be produced by stepwise ionization of the gas atoms through electron beam impact. This method may be used as an ideal soft ionizing point ion source in a Time-of-Flight Mass Spectrometer

  14. New Models and Methods for the Electroweak Scale

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored as the Large Hadron Collider has disfavored much minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model both in collider production and annihilation in space. Accomplishments include creating new tools for the analysis of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  15. A level set method for multiple sclerosis lesion segmentation.

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions, the normal tissue region (including GM and WM), CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability of the original level set method to precisely locate object boundaries, while simultaneously performing segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Measuring multiple residual-stress components using the contour method and multiple cuts

    Prime, Michael B [Los Alamos National Laboratory; Swenson, Hunter [Los Alamos National Laboratory; Pagliaro, Pierluigi [U. PALERMO; Zuccarello, Bernardo [U. PALERMO

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.

  17. Clutter-free Visualization of Large Point Symbols at Multiple Scales by Offset Quadtrees

    ZHANG Xiang

    2016-08-01

To address the cartographic problems in map mash-up applications in the Web 2.0 context, this paper studies a clutter-free technique for visualizing large symbols on Web maps. Basically, a quadtree is used to select one symbol in each grid cell at each zoom level. To resolve the symbol overlaps between neighboring quad-grids, multiple offsets are applied to the quadtree and a voting strategy is used to compute the significance level of symbols for their selection at multiple scales. The method is able to resolve spatial conflicts without explicit conflict detection, thus enabling highly efficient processing. Also, the resulting map forms a visual hierarchy of semantic importance. We discuss issues such as the relative importance, symbol-to-grid size ratio, and effective offset schemes, and propose two extensions to make better use of the free space available on the map. Experiments were carried out to validate the technique, which demonstrates its robustness and efficiency (a non-optimal implementation achieves sub-second processing for datasets of 10^5 magnitude).
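
    The offset-and-vote selection idea can be sketched as follows. This is an invented simplification: the grid size, the half-cell offsets and the first-come tie-breaking are example choices, whereas the paper ranks symbols by importance and works on a quadtree across zoom levels.

    ```python
    # Grid-based symbol selection with offset voting (illustrative sketch).
    # Assumptions: a flat grid stands in for one quadtree level; the "winner"
    # per cell is simply the first point seen (a real system ranks by importance).

    def select_symbols(points, cell,
                       offsets=((0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5))):
        """For each offset grid, keep one point per cell; a point's vote count
        across the offset grids is its significance level."""
        votes = {p: 0 for p in points}
        for ox, oy in offsets:
            kept = {}
            for p in points:
                key = (int((p[0] + ox * cell) // cell),
                       int((p[1] + oy * cell) // cell))
                kept.setdefault(key, p)     # first point claims the cell
            for p in kept.values():
                votes[p] += 1
        return votes

    pts = [(0.1, 0.1), (0.2, 0.2), (3.0, 3.0)]
    v = select_symbols(pts, cell=1.0)
    print(v[(3.0, 3.0)])  # isolated point wins in every offset grid -> 4
    ```

    Points that win in all offset grids are significant at every alignment and survive to coarser scales; points that never win are suppressed, which resolves overlaps without any pairwise conflict test.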

  18. Measurement of subcritical multiplication by the interval distribution method

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data is analyzed by least-squares fitting to a theoretical distribution function derived from a point reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors are smaller the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method
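
    As a minimal illustration of interval analysis, the sketch below recovers a count rate from simulated inter-event times. For an uncorrelated (Poisson) source the intervals are exponential; a real subcritical measurement would instead fit the correlated interval distribution derived from the point reactor probability model, and the rate used here is an invented example value.

    ```python
    import random

    # Interval-distribution toy example: exponential inter-event times from an
    # uncorrelated source, with the rate recovered from the interval data.
    random.seed(42)
    rate = 100.0  # invented count rate, counts per second
    intervals = [random.expovariate(rate) for _ in range(100_000)]

    # Maximum-likelihood estimate of the rate is the reciprocal mean interval.
    rate_hat = 1.0 / (sum(intervals) / len(intervals))
    print(abs(rate_hat - rate) / rate < 0.02)
    ```

    The deviation of the measured interval distribution from this uncorrelated baseline is precisely what carries the multiplication information in the method described above.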

  19. The Tunneling Method for Global Optimization in Multidimensional Scaling.

    Groenen, Patrick J. F.; Heiser, Willem J.

    1996-01-01

A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)

  20. Physical modelling of granular flows at multiple-scales and stress levels

    Take, Andy; Bowman, Elisabeth; Bryant, Sarah

    2015-04-01

    The rheology of dry granular flows is an area of significant focus within the granular physics, geoscience, and geotechnical engineering research communities. Studies performed to better understand granular flows in manufacturing, materials processing or bulk handling applications have typically focused on the behavior of steady, continuous flows. As a result, much of the research on relating the fundamental interaction of particles to the rheological or constitutive behaviour of granular flows has been performed under (usually) steady-state conditions and low stress levels. However, landslides, which are the primary focus of the geoscience and geotechnical engineering communities, are by nature unsteady flows defined by a finite source volume and at flow depths much larger than typically possible in laboratory experiments. The objective of this paper is to report initial findings of experimental studies currently being conducted using a new large-scale landslide flume (8 m long, 2 m wide slope inclined at 30° with a 35 m long horizontal base section) and at elevated particle self-weight in a 10 m diameter geotechnical centrifuge to investigate the granular flow behavior at multiple-scales and stress levels. The transparent sidewalls of the two flumes used in the experimental investigation permit the combination of observations of particle-scale interaction (using high-speed imaging through transparent vertical sidewalls at over 1000 frames per second) with observations of the distal reach of the landslide debris. These observations are used to investigate the applicability of rheological models developed for steady state flows (e.g. the dimensionless inertial number) in landslide applications and the robustness of depth-averaged approaches to modelling dry granular flow at multiple scales. These observations indicate that the dimensionless inertial number calculated for the flow may be of limited utility except perhaps to define a general state (e.g. liquid

  1. Scaling in multiplicity distributions of heavy, black and grey prongs in nuclear emulsions

    Nieminen, M.; Torsti, J.J.; Valtonen, E.

    1979-01-01

The validity of the Koba-Nielsen-Olesen scaling hypothesis was examined in the case of heavy, black, and grey prongs in proton-emulsion collisions ('heavy' means 'either black or grey'). The average multiplicities of these prongs were computed in the region 0.1-400 GeV for the nuclei C, N, O, S, Br, Ag, and I. After the inclusion of the energy-dependent excitation probability of the nuclei of the form P* = b_0 + b_1 ln E_0 into the model, experimental multiplicity distributions in the energy region 6-300 GeV agreed satisfactorily with the scaling hypothesis. The ratio of the dispersion D (D = √(⟨n²⟩ − ⟨n⟩²)) to the average multiplicity in the scaling functions of heavy, black, and grey prongs was estimated to be 0.86, 0.84, and 1.04, respectively, in the high energy region. (Auth.)
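
    The dispersion definition used above, D = √(⟨n²⟩ − ⟨n⟩²), can be checked numerically on synthetic multiplicity data. This is a generic check, not the emulsion data: the sample is Poisson with an invented mean, for which D/⟨n⟩ = 1/√⟨n⟩.

    ```python
    import math
    import random

    # Numerical check of the dispersion-to-mean ratio on a synthetic
    # Poisson multiplicity sample with an invented mean of 9 (so D/<n> = 1/3).

    def poisson(lam):
        """Knuth's multiplicative method for sampling Poisson(lam)."""
        limit = math.exp(-lam)
        k, p = 0, 1.0
        while p > limit:
            k += 1
            p *= random.random()
        return k - 1

    random.seed(1)
    ns = [poisson(9.0) for _ in range(50_000)]
    avg = sum(ns) / len(ns)                      # <n>
    avg2 = sum(n * n for n in ns) / len(ns)      # <n^2>
    D = (avg2 - avg * avg) ** 0.5                # dispersion
    print(abs(D / avg - 1 / 3) < 0.02)
    ```

    Measured ratios near 1, as found for grey prongs, indicate fluctuations of roughly Poisson size, while the sub-unit ratios for heavy and black prongs indicate narrower-than-Poisson distributions.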

  2. A global calibration method for multiple vision sensors based on multiple targets

    Liu, Zhen; Zhang, Guangjun; Wei, Zhenzhong; Sun, Junhua

    2011-01-01

The global calibration of multiple vision sensors (MVS) has been widely studied in the last two decades. In this paper, we present a global calibration method for MVS with non-overlapping fields of view (FOVs) using multiple targets (MT). MT is constructed by fixing several targets, called sub-targets, together. The mutual coordinate transformations between sub-targets need not be known. The main procedures of the proposed method are as follows: one vision sensor is selected from the MVS to establish the global coordinate frame (GCF). MT is placed in front of the vision sensors several times (at least four). Using the constraint that the relative positions of all sub-targets are invariant, the transformation matrix from the coordinate frame of each vision sensor to the GCF can be solved. Both synthetic and real experiments are carried out and good results are obtained. The proposed method has been applied to several real measurement systems and shown to be both flexible and accurate. It can serve as an attractive alternative to existing global calibration methods
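
    The transform-composition step underlying such target-based global calibration can be sketched with homogeneous matrices: if each sensor knows the pose of a shared rigid target in its own frame, the sensor-to-sensor transform follows by composition. The poses below are invented example values, and a single shared target frame stands in for the multiple-target construction used in the paper.

    ```python
    import numpy as np

    # Composing homogeneous transforms to relate two sensors with
    # non-overlapping views through a common target (invented poses).

    def pose(rz, t):
        """Homogeneous transform: rotation rz (radians, about z) + translation t."""
        c, s = np.cos(rz), np.sin(rz)
        T = np.eye(4)
        T[:2, :2] = [[c, -s], [s, c]]
        T[:3, 3] = t
        return T

    T_target_in_s1 = pose(0.3, [1.0, 0.2, 0.0])    # target pose seen by sensor 1
    T_target_in_s2 = pose(-0.7, [0.5, -1.0, 0.3])  # target pose seen by sensor 2

    # Sensor 2 expressed in sensor 1's (global) coordinate frame:
    T_s2_in_s1 = T_target_in_s1 @ np.linalg.inv(T_target_in_s2)

    # Check: mapping a point through the target frame agrees with the
    # composed sensor-to-sensor transform.
    p_target = np.array([0.1, 0.2, 0.3, 1.0])
    p1 = T_target_in_s1 @ p_target
    p2 = T_target_in_s2 @ p_target
    print(np.allclose(T_s2_in_s1 @ p2, p1))  # True
    ```

    With multiple placements and multiple rigidly linked sub-targets, the same composition supplies the invariance constraints from which each sensor-to-GCF matrix is solved.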

  3. Field evaluation of personal sampling methods for multiple bioaerosols.

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  5. Scaling of multiplicity distribution in hadron collisions and diffractive-excitation like models

    Buras, A.J.; Dethlefsen, J.M.; Koba, Z.

    1974-01-01

    Multiplicity distribution of secondary particles in inelastic hadron collision at high energy is studied in the semiclassical impact parameter representation. The scaling function is shown to consist of two factors: one geometrical and the other dynamical. We propose a specific choice of these factors, which describe satisfactorily the elastic scattering, the ratio of elastic to total cross-section and the simple scaling behaviour of multiplicity distribution in p-p collisions. Two versions of diffractive-excitation like models (global and local excitation) are presented as interpretation of our choice of dynamical factor. (author)

  6. Modelling across bioreactor scales: methods, challenges and limitations

    Gernaey, Krist

Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial-scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial-scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...

  7. Multiple Sclerosis Walking Scale-12, translation, adaptation and validation for the Persian language population.

    Nakhostin Ansari, Noureddin; Naghdi, Soofia; Mohammadi, Roghaye; Hasson, Scott

    2015-02-01

    The Multiple Sclerosis Walking Scale-12 (MSWS-12) is a multi-item rating scale used to assess the perspectives of patients about the impact of MS on their walking ability. The aim of this study was to examine the reliability and validity of the MSWS-12 in Persian speaking patients with MS. The MSWS-12 questionnaire was translated into Persian language according to internationally adopted standards involving forward-backward translation, reviewed by an expert committee and tested on the pre-final version. In this cross-sectional study, 100 participants (50 patients with MS and 50 healthy subjects) were included. The MSWS-12 was administered twice 7 days apart to 30 patients with MS for test and retest reliability. Internal consistency reliability was Cronbach's α 0.96 for test and 0.97 for retest. There were no significant floor or ceiling effects. Test-retest reliability was excellent (intraclass correlation coefficient [ICC] agreement of 0.98, 95% CI, 0.95-0.99) confirming the reproducibility of the Persian MSWS-12. Construct validity using known group methods was demonstrated through a significant difference in the Persian MSWS-12 total score between the patients with MS and healthy subjects. Factor analysis extracted 2 latent factors (79.24% of the total variance). A second factor analysis suggested the 9-item Persian MSWS as a unidimensional scale for patients with MS. The Persian MSWS-12 was found to be valid and reliable for assessing walking ability in Persian speaking patients with MS. Copyright © 2014 Elsevier B.V. All rights reserved.
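
    The internal-consistency statistic reported above, Cronbach's α, is computed from an item-response matrix as α = k/(k−1) · (1 − Σ var(itemᵢ)/var(total score)). A minimal sketch on an invented 5-subject, 3-item matrix (not the study's data):

    ```python
    import numpy as np

    # Cronbach's alpha on a small invented item-response matrix
    # (rows = subjects, columns = scale items).
    scores = np.array([[2, 3, 3],
                       [4, 4, 5],
                       [1, 2, 1],
                       [5, 4, 5],
                       [3, 3, 4]], dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    alpha = k / (k - 1) * (1 - item_var_sum / total_var)
    print(round(alpha, 3))  # 0.944
    ```

    Values near the 0.96-0.97 reported for the Persian MSWS-12 indicate that the items covary strongly, i.e. they measure one underlying construct.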

  8. Hesitant fuzzy methods for multiple criteria decision analysis

    Zhang, Xiaolu

    2017-01-01

    The book offers a comprehensive introduction to methods for solving multiple criteria decision making and group decision making problems with hesitant fuzzy information. It reports on the authors’ latest research, as well as on others’ research, providing readers with a complete set of decision making tools, such as hesitant fuzzy TOPSIS, hesitant fuzzy TODIM, hesitant fuzzy LINMAP, hesitant fuzzy QUALIFEX, and the deviation modeling approach with heterogeneous fuzzy information. The main focus is on decision making problems in which the criteria values and/or the weights of criteria are not expressed in crisp numbers but are more suitable to be denoted as hesitant fuzzy elements. The largest part of the book is devoted to new methods recently developed by the authors to solve decision making problems in situations where the available information is vague or hesitant. These methods are presented in detail, together with their application to different type of decision-making problems. All in all, the book ...

  9. Correlation expansion: a powerful alternative multiple scattering calculation method

    Zhao Haifeng; Wu Ziyu; Sebilleau, Didier

    2008-01-01

    We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to standard MS series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion

  10. Linking Fine-Scale Observations and Model Output with Imagery at Multiple Scales

    Sadler, J.; Walthall, C. L.

    2014-12-01

The development and implementation of a system for seasonal worldwide agricultural yield estimates is underway with the international Group on Earth Observations GeoGLAM project. GeoGLAM includes a research component to continually improve and validate its algorithms. There is a history of field measurement campaigns going back decades to draw upon for ways of linking surface measurements and model results with satellite observations. Ground-based, in-situ measurements collected by interdisciplinary teams include yields, model inputs and factors affecting scene radiation. Data that is comparable across space and time, with careful attention to calibration, is essential for the development and validation of agricultural applications of remote sensing. Data management to ensure stewardship, availability and accessibility of the data is best accomplished when considered an integral part of the research. The expense and logistical challenges of field measurement campaigns can be cost-prohibitive, and because of short funding cycles for research, access to consistent, stable study sites can be lost. The use of dedicated staff for baseline data needed by multiple investigators, and the conduct of measurement campaigns using existing measurement networks such as the USDA Long Term Agroecosystem Research network, can fulfill these needs and ensure long-term access to study sites.

  11. Small-scale fluctuations in the microwave background radiation and multiple gravitational lensing

    Kashlinsky, A.

    1988-01-01

    It is shown that multiple gravitational lensing of the microwave background radiation (MBR) by static compact objects significantly attenuates small-scale fluctuations in the MBR. Gravitational lensing, by altering trajectories of MBR photons reaching an observer, leads to (phase) mixing of photons from regions with different initial fluctuations. As a result of this diffusion process the original fluctuations are damped on scales up to several arcmin. An equation that describes this process and its general solution are given. It is concluded that the present upper limits on the amplitude of the MBR fluctuations on small scales cannot constrain theories of galaxy formation. 25 references

  12. Multiple scales and singular limits for compressible rotating fluids with general initial data

    Feireisl, Eduard; Novotný, A.

    2014-01-01

Vol. 39, No. 6 (2014), pp. 1104-1127 ISSN 0360-5302 Keywords: compressible Navier-Stokes equations * multiple scales * oscillatory integrals Subject RIV: BA - General Mathematics Impact factor: 1.013, year: 2014 http://www.tandfonline.com/doi/full/10.1080/03605302.2013.856917

  13. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
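
    As a flavor of the first technique in the list, here is a minimal locally weighted (LOESS-style) regression sketch. The tricube weighting is standard, but the nearest-neighbor bandwidth rule and the test data are invented for the demo; it is not the stepwise multi-predictor procedure of the paper.

    ```python
    import numpy as np

    # Minimal LOESS-style smoother: a tricube-weighted linear fit around each
    # evaluation point, using the nearest `span` fraction of the data.

    def loess_point(x0, x, y, span=0.15):
        """Locally weighted linear prediction at x0."""
        n = int(np.ceil(span * len(x)))
        d = np.abs(x - x0)
        idx = np.argsort(d)[:n]                       # nearest neighbors
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        X = np.column_stack([np.ones(n), x[idx]])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])
        return beta[0] + beta[1] * x0

    x = np.linspace(0, 2 * np.pi, 200)
    y = np.sin(x)                                     # noise-free for the check
    yhat = np.array([loess_point(x0, x, y) for x0 in x])
    print(np.max(np.abs(yhat - y)) < 0.1)
    ```

    Because the fit is local, the smoother tracks the nonlinear sine shape that a single global linear regression would miss, which is exactly why such smoothers can expose input-output relationships that linear or rank regression sensitivity measures cannot.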

  14. Multiple predictor smoothing methods for sensitivity analysis: Example results

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  15. Integrating Multiple Teaching Methods into a General Chemistry Classroom

    Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella

    1998-02-01

    In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman-level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as of each method's utility, were used to assess how well the integrated teaching strategies were received. Results suggest that each strategy served a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received and that all of them are necessary for students to get the most out of the course.

  16. Fuzzy multiple objective decision making methods and applications

    Lai, Young-Jou

    1994-01-01

    In the last 25 years, fuzzy set theory has been applied in many disciplines, such as operations research, management science, control theory, artificial intelligence/expert systems, etc. In this volume, methods and applications of crisp, fuzzy and possibilistic multiple objective decision making are first systematically and thoroughly reviewed and classified. This state-of-the-art survey provides readers with a capsule look into the existing methods and their characteristics and applicability to the analysis of fuzzy and possibilistic programming problems. To realize practical fuzzy modelling, it presents solutions for real-world problems including production/manufacturing, location, logistics, environmental management, banking/finance, personnel, marketing, accounting, agricultural economics and data analysis. This book is a guided tour through the literature in the rapidly growing fields of operations research and decision making and includes the most up-to-date bibliographical listing of literature on the topi...

  17. Large-scale synthesis of YSZ nanopowder by Pechini method

    Administrator

    structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords: sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method.

  18. BIOFEEDBACK: A NEW METHOD FOR CORRECTION OF MOTOR DISORDERS IN PATIENTS WITH MULTIPLE SCLEROSIS

    Ya. S. Pekker

    2014-01-01

    Full Text Available Motor disorders are among the major disabling factors in multiple sclerosis (MS), and their rehabilitation is one of the most important medical and social problems. Much current work focuses on correction methods that draw on the natural resources of the human body; one such method is adaptive control with biofeedback (BFB). The aim of our study was the correction of motor disorders in MS patients using biofeedback training. We developed training scenarios for a computer-based EMG biofeedback rehabilitation program aimed at correcting motor disorders in patients with MS. The method was tested in the neurological clinic of SSMU. The study included 9 patients with a definite diagnosis of MS and a clinical picture of combined pyramidal and cerebellar symptoms. The effectiveness of the rehabilitation procedures was assessed with specialized scales: the Kurtzke Functional Systems scale, the SF-36 quality-of-life questionnaire, the Sickness Impact Profile (SIP) and the Fatigue Severity Scale (FSS). In the studied group of patients, fatigue scores (FSS) decreased, while motor control (SIP2) and the physical and mental components of health (SF-36) improved. There was a tendency toward reduced neurological deficit, reflected in lower scores on the Kurtzke pyramidal subscale. Analysis of the course dynamics of EMG biofeedback training indicates an increase in the recorded EMG signal of the trained muscles from session to session, and a tendency toward increased strength and coordination of the trained muscles. These positive results suggest that biofeedback therapy can be recommended as part of comprehensive rehabilitation measures to correct motor and psycho-emotional disorders in patients with MS.

  19. Measuring floodplain spatial patterns using continuous surface metrics at multiple scales

    Scown, Murray W.; Thoms, Martin C.; DeJager, Nathan R.

    2015-01-01

    Interactions between fluvial processes and floodplain ecosystems occur upon a floodplain surface that is often physically complex. Spatial patterns in floodplain topography have only recently been quantified over multiple scales, and discrepancies exist in how floodplain surfaces are perceived to be spatially organised. We measured spatial patterns in floodplain topography for pool 9 of the Upper Mississippi River, USA, using moving window analyses of eight surface metrics applied to a 1 × 1 m DEM over multiple scales. The metrics used were Range, SD, Skewness, Kurtosis, CV, SDCURV, Rugosity, and Vol:Area, and window sizes ranged from 10 to 1000 m in radius. Surface metric values were highly variable across the floodplain and revealed a high degree of spatial organisation in floodplain topography. Moran's I correlograms fit to the landscape of each metric at each window size revealed that patchiness existed at nearly all window sizes, but the strength and scale of patchiness changed with window size, suggesting that multiple scales of patchiness and patch structure exist in the topography of this floodplain. Scale thresholds in the spatial patterns were observed, particularly between the 50 and 100 m window sizes for all surface metrics and between the 500 and 750 m window sizes for most metrics. These threshold scales are ~ 15–20% and 150% of the main channel width (1–2% and 10–15% of the floodplain width), respectively. These thresholds may be related to structuring processes operating across distinct scale ranges. By coupling surface metrics, multi-scale analyses, and correlograms, quantifying floodplain topographic complexity is possible in ways that should assist in clarifying how floodplain ecosystems are structured.
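One of the listed metrics (SD) computed in a moving window can be sketched in a few lines. The square window below is an assumed simplification of the circular windows used in the study:

```python
import numpy as np

def windowed_sd(dem, radius):
    """Standard deviation of elevation in a square moving window of the
    given radius (in cells), clipped at the DEM edges."""
    n, m = dem.shape
    out = np.full((n, m), np.nan)
    for i in range(n):
        for j in range(m):
            win = dem[max(0, i - radius):i + radius + 1,
                      max(0, j - radius):j + radius + 1]
            out[i, j] = win.std()
    return out
```

Running the same function at several radii (e.g. 10, 100, 1000 cells) gives the multi-scale metric landscapes to which correlograms can then be fitted.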

  20. Does the Assessment of Recovery Capital scale reflect a single or multiple domains?

    Arndt S

    2017-07-01

    Full Text Available Stephan Arndt,1–3 Ethan Sahker,1,4 Suzy Hedden1 1Iowa Consortium for Substance Abuse Research and Evaluation, 2Department of Psychiatry, Carver College of Medicine, 3Department of Biostatistics, College of Public Health, 4Department of Psychological and Quantitative Foundations, Counseling Psychology Program College of Education, University of Iowa, Iowa City, IA, USA Objective: The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Methods: Data are from a cross-sectional de-identified existing program evaluation information data set with 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. Results: The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker’s congruence coefficients between the factor structure and defining weights (0.41–0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10 domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components, suggesting that a few of the domains may warrant enrichment. Conclusion: Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains. Keywords: social support, psychometrics, quality of life
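The single-general-factor check described here (one dominant eigenvalue of the domain-score correlation matrix) can be sketched as follows; the synthetic data in the example are illustrative, not the program-evaluation data set:

```python
import numpy as np

def first_factor_share(scores):
    """Eigen-decompose the correlation matrix of the domain scores and
    return (share of variance on the first principal component,
    sorted eigenvalues).  One eigenvalue > 1 with a large share is the
    informal 'single general factor' signature."""
    R = np.corrcoef(scores, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    return eigvals[0] / eigvals.sum(), eigvals
```

With strongly correlated domain scores (e.g. a shared latent factor plus noise), the first component captures most of the variance and only one eigenvalue exceeds the Kaiser threshold of 1.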

  1. Methods of scaling threshold color difference using printed samples

    Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier

    2012-01-01

    A series of printed samples on a semi-gloss paper substrate, with color differences near the perceptibility threshold, were prepared for scaling visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the color differences were scaled against these Z-scores. The resulting visual color-difference scale was obtained and checked with the STRESS factor. The results indicated that only the absolute scales changed; the relative scales between sample pairs in the data were preserved.
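The probit step described here (mapping an observed proportion of "perceptible" judgements to a Z-score) can be sketched as follows; the 1/(2n) clipping is a common convention assumed here, not necessarily the authors' exact procedure:

```python
from statistics import NormalDist

def probit_scale(p, n):
    """Map an observed proportion p of 'perceptible' judgements from n
    observers to a Z-score via the inverse normal CDF, clipping p to
    [1/(2n), 1 - 1/(2n)] so that 0 and 1 stay finite."""
    p = min(max(p, 1 / (2 * n)), 1 - 1 / (2 * n))
    return NormalDist().inv_cdf(p)
```

A proportion of 0.5 maps to Z = 0 (the threshold), and larger proportions map monotonically to larger Z-scores, which is what makes the Z-scores usable as a visual difference scale.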

  2. Cross-cultural adaptation and validation of the 12-item Multiple Sclerosis Walking Scale (MSWS-12) for the Brazilian population

    Bruna E. M. Marangoni

    2012-12-01

    Full Text Available Gait impairment is reported by 85% of patients with multiple sclerosis (MS) as their main complaint. In 2003, Hobart et al. developed a walking scale known as the 12-item Multiple Sclerosis Walking Scale (MSWS-12), which combines the perspectives of patients with psychometric methods. OBJECTIVE: This study aimed to cross-culturally adapt and validate the MSWS-12 for the Brazilian population with MS. METHODS: This study included 116 individuals diagnosed with MS in accordance with McDonald's criteria. The steps of the adaptation process included translation, back-translation, review by an expert committee and pretesting. A test and retest of the MSWS-12/BR was performed for validation, with comparison against another scale (MSIS-29/BR) and another test (T25FW). RESULTS: The Brazilian version, MSWS-12/BR, was shown to be similar to the original. The results indicate that the MSWS-12/BR is a reliable and reproducible scale. CONCLUSIONS: The MSWS-12/BR has been adapted and validated, and it is a reliable tool for the Brazilian population.

  3. The importance of neurophysiological-Bobath method in multiple sclerosis

    Adrian Miler

    2018-02-01

    Full Text Available Rehabilitation treatment in multiple sclerosis should be carried out continuously and can take place in hospital, outpatient and home settings. In the traditional approach, it focuses on reducing the symptoms of the disease, such as paresis, spasticity, ataxia, pain, sensory disturbances, speech disorders, blurred vision, fatigue, neurogenic bladder dysfunction and cognitive impairment. In kinesiotherapy for people with paresis, the Bobath method is among the most commonly used. Improvement can be achieved by developing the ability to maintain a correct posture in various positions (so-called postural alignment) and movement patterns based on righting and equilibrium reactions. During therapy, various techniques are used to inhibit pathological motor patterns and to stimulate correct reactions. The creators of the method hold that each movement pattern has its own postural system from which it can be initiated, carried out and effectively controlled; correct movement cannot take place from an incorrect body position. The physiotherapist discusses with the patient how to perform individual movement patterns, which protects the patient against spontaneous pathological compensation. The aim of this work is to determine the role and application of the Bobath method in the therapy of people with MS.

  4. A NDVI assisted remote sensing image adaptive scale segmentation method

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation can effectively delineate the boundaries of objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale remain difficult to determine accurately, which severely restricts rapid information extraction from such images. Extensive experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method that segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the NDVI-based adaptive segmentation method can effectively create object boundaries for different ground objects in remote sensing images.
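The NDVI itself is simple to compute, and a similarity threshold between segments can be expressed as a comparison of mean NDVI. The threshold value and the mean-based similarity rule below are assumed simplifications of the paper's iterative scheme:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index per pixel:
    (NIR - Red) / (NIR + Red), guarded against division by zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def similar_ndvi(region_a, region_b, threshold=0.1):
    """Assumed similarity rule: two segments are 'similar' when their
    mean NDVI values differ by no more than the threshold."""
    return abs(region_a.mean() - region_b.mean()) <= threshold
```

Vegetated pixels (high NIR, low Red) yield NDVI near +1, bare or built-up pixels near 0 or below, so a similarity threshold on NDVI separates segments belonging to different ground-object classes.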

  5. Deposit and scale prevention methods in thermal sea water desalination

    Froehner, K.R.

    1977-01-01

    Introductory remarks deal with the 'fouling factor' and its influence on the overall heat-transfer coefficient of MSF evaporators. The composition of the matter dissolved in sea water and its thermal and chemical properties lead to the formation of alkaline scale, or even hard sulphate scale, on the heat-exchanger tube walls, which can seriously hamper plant operation and economics. The scale prevention methods are: (1) pH control by acid dosing (decarbonation); (2) 'threshold treatment' by dosing inhibitors of different kinds; (3) mechanical cleaning by sponge rubber balls guided through the heat-exchanger tubes, in general combined with method 1 or 2; and (4) application of a slurry of scale crystal germs (seeding). Mention is made of several other scale prevention proposals, and the problems encountered with marine life (suspension, deposits, growth) in desalination plants are touched upon.

  6. Elements of a method to scale ignition reactor Tokamak

    Cotsaftis, M.

    1984-08-01

    Due to the unavoidable uncertainties in present scaling laws when projected to the thermonuclear regime, a method is proposed to minimize these uncertainties in determining the main parameters of an ignited tokamak. The method consists in searching for a domain, if any exists, in an adapted parameter space which allows ignition but is least sensitive to possible changes in the scaling laws. In other words, an ignition domain is sought which is the intersection of all possible ignition domains corresponding to all possible scaling laws produced by all possible transports.

  7. Method of producing nano-scaled inorganic platelets

    Zhamu, Aruna; Jang, Bor Z.

    2012-11-13

    The present invention provides a method of exfoliating a layered material (e.g., transition metal dichalcogenide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites.

  8. Variational Multi-Scale method with spectral approximation of the sub-scales.

    Dia, Ben Mansour; Chacón-Rebollo, Tomás

    2015-01-01

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base

  9. Cross-scale Efficient Tensor Contractions for Coupled Cluster Computations Through Multiple Programming Model Backends

    Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Epifanovsky, Evgeny [Q-Chem, Inc., Pleasanton, CA (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Krylov, Anna I. [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Chemistry

    2016-07-26

    Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy efficient manner. We achieve up to a 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
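A single contraction of the kind such libraries optimize can be written with einsum and checked against the equivalent reshape-and-matrix-multiply (the DGEMM it reduces to). The index pattern below is a generic illustration, not Libtensor's API:

```python
import numpy as np

def ccd_style_contraction(t2, v):
    """One representative coupled-cluster-like tensor contraction,
    r[a,b,i,j] = sum_{c,d} t2[a,b,c,d] * v[c,d,i,j].
    Libraries such as Libtensor reduce contractions of this shape to
    batched matrix-matrix multiplications over symmetry-packed blocks."""
    return np.einsum("abcd,cdij->abij", t2, v)
```

Flattening (a,b) and (c,d) into composite row/column indices turns the contraction into a single matrix product, which is why DGEMM performance dominates until communication takes over at scale.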

  10. Fault detection and isolation for a full-scale railway vehicle suspension with multiple Kalman filters

    Jesussek, Mathias; Ellermann, Katrin

    2014-12-01

    Reliability and dependability in complex mechanical systems can be improved by fault detection and isolation (FDI) methods. These techniques are key elements for maintenance on demand, which could decrease service cost and time significantly. This paper addresses FDI for a railway vehicle: the mechanical model is described as a multibody system, which is excited randomly due to track irregularities. Various parameters, like masses, spring- and damper-characteristics, influence the dynamics of the vehicle. Often, the exact values of the parameters are unknown and might even change over time. Some of these changes are considered critical with respect to the operation of the system and they require immediate maintenance. The aim of this work is to detect faults in the suspension system of the vehicle. A Kalman filter is used in order to estimate the states. To detect and isolate faults the detection error is minimised with multiple Kalman filters. A full-scale train model with nonlinear wheel/rail contact serves as an example for the described techniques. Numerical results for different test cases are presented. The analysis shows that for the given system it is possible not only to detect a failure of the suspension system from the system's dynamic response, but also to distinguish clearly between different possible causes for the changes in the dynamical behaviour.
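The multiple-model idea in this abstract can be reduced to a toy sketch: run one Kalman filter per fault hypothesis and pick the hypothesis whose filter best explains the measurements. The scalar dynamics, noise levels, and hypothesis names below are illustrative assumptions, not the paper's full-scale railway model:

```python
import numpy as np

def scalar_kalman_residuals(y, a, q=1e-3, r=1e-2):
    """Mean normalized squared innovation of a scalar Kalman filter that
    assumes dynamics x[k+1] = a*x[k] + w,  y[k] = x[k] + v."""
    xhat, p = 0.0, 1.0
    res = []
    for yk in y:
        xhat, p = a * xhat, a * a * p + q      # predict
        e = yk - xhat                          # innovation
        s = p + r
        gain = p / s
        xhat += gain * e                       # update
        p *= (1 - gain)
        res.append(e * e / s)
    return float(np.mean(res))

def isolate_fault(y, hypotheses):
    """Multiple-model FDI, reduced to a scalar toy system: each entry of
    `hypotheses` maps a fault name to an assumed dynamics coefficient;
    return the name whose filter has the smallest residual score."""
    scores = {name: scalar_kalman_residuals(y, a)
              for name, a in hypotheses.items()}
    return min(scores, key=scores.get)
```

The filter matched to the true dynamics produces innovations at the noise level, while mismatched filters accumulate large residuals, which is what allows the different fault causes to be distinguished.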

  11. Variational Multi-Scale method with spectral approximation of the sub-scales.

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  12. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-01-01

    Highlights: ► We propose a hybrid model that combines seasonal SARIMA model and grey system theory. ► The model is robust at multiple time scales with the anticipated accuracy. ► At month-scale, the SARIMA model shows good representation for monthly MSW generation. ► At medium-term time scale, grey relational analysis could yield the MSW generation. ► At long-term time scale, GM (1, 1) provides a basic scenario of MSW generation. - Abstract: Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 – 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 – 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to
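The grey-system half of such a hybrid can be sketched compactly. Below is a minimal GM(1, 1) forecaster (the classic accumulate-fit-difference construction); the test series is synthetic, and the SARIMA component of the hybrid is omitted:

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Grey GM(1,1): fit dx1/dt + a*x1 = b to the accumulated series
    x1 = cumsum(x0) via least squares on the background values
    z1[k] = (x1[k] + x1[k-1]) / 2, then difference the fitted
    exponential to forecast the original series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]          # forecast beyond the sample
```

GM(1, 1) needs only a short history and no covariates, which is why it suits the medium- and long-term scales where demographic and socioeconomic drivers are hard to specify.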

  13. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China

    Xu, Lilai, E-mail: llxu@iue.ac.cn [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, 1799 Jimei Road, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Gao, Peiqing, E-mail: peiqing15@yahoo.com.cn [Xiamen City Appearance and Environmental Sanitation Management Office, 51 Hexiangxi Road, Xiamen 361004 (China); Cui, Shenghui, E-mail: shcui@iue.ac.cn [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, 1799 Jimei Road, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Liu, Chun, E-mail: xmhwlc@yahoo.com.cn [Xiamen City Appearance and Environmental Sanitation Management Office, 51 Hexiangxi Road, Xiamen 361004 (China)

    2013-06-15

    Highlights: ► We propose a hybrid model that combines seasonal SARIMA model and grey system theory. ► The model is robust at multiple time scales with the anticipated accuracy. ► At month-scale, the SARIMA model shows good representation for monthly MSW generation. ► At medium-term time scale, grey relational analysis could yield the MSW generation. ► At long-term time scale, GM (1, 1) provides a basic scenario of MSW generation. - Abstract: Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 – 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 – 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to

  14. Dual-scale Galerkin methods for Darcy flow

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  15. Acoustic scattering by multiple elliptical cylinders using collocation multipole method

    Lee, Wei-Ming

    2012-01-01

    This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical–cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.

  16. Spectral algorithms for multiple scale localized eigenfunctions in infinitely long, slightly bent quantum waveguides

    Boyd, John P.; Amore, Paolo; Fernández, Francisco M.

    2018-03-01

    A "bent waveguide" in the sense used here is a small perturbation of a two-dimensional rectangular strip which is infinitely long in the down-channel direction and has a finite, constant width in the cross-channel coordinate. The goal is to calculate the smallest ("ground state") eigenvalue of the stationary Schrödinger equation which here is a two-dimensional Helmholtz equation, ψxx +ψyy + Eψ = 0 where E is the eigenvalue and homogeneous Dirichlet boundary conditions are imposed on the walls of the waveguide. Perturbation theory gives a good description when the "bending strength" parameter ɛ is small as described in our previous article (Amore et al., 2017) and other works cited therein. However, such series are asymptotic, and it is often impractical to calculate more than a handful of terms. It is therefore useful to develop numerical methods for the perturbed strip to cover intermediate ɛ where the perturbation series may be inaccurate and also to check the pertubation expansion when ɛ is small. The perturbation-induced change-in-eigenvalue, δ ≡ E(ɛ) - E(0) , is O(ɛ2) . We show that the computation becomes very challenging as ɛ → 0 because (i) the ground state eigenfunction varies on both O(1) and O(1 / ɛ) length scales and (ii) high accuracy is needed to compute several correct digits in δ, which is itself small compared to the eigenvalue E. The multiple length scales are not geographically separate, but rather are inextricably commingled in the neighborhood of the boundary deformation. We show that coordinate mapping and immersed boundary strategies both reduce the computational domain to the uniform strip, allowing application of pseudospectral methods on tensor product grids with tensor product basis functions. We compared different basis sets; Chebyshev polynomials are best in the cross-channel direction. However, sine functions generate rather accurate analytical approximations with just a single basis function. In the down

  17. Combining MCDA and risk analysis: dealing with scaling issues in the multiplicative AHP

    Barfod, Michael Bruhn; van den Honert, Rob; Salling, Kim Bang

    This paper proposes a new decision support system (DSS) for applying risk analysis and stochastic simulation to the multiplicative AHP in order to deal with issues concerning the progression factors. The multiplicative AHP makes use of direct rating on a logarithmic scale, and for this purpose the progression factor 2 is used for calculating scores of alternatives and √2 for calculating criteria weights when transforming the verbal judgments stemming from pairwise comparisons. However, depending on the decision context, the decision-makers' aversion towards risk, etc., it is most likely...
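For readers unfamiliar with the multiplicative AHP, the role of the progression factors can be sketched in a few lines: verbal gradations δ become ratios γ^δ, with γ = 2 for alternative scores and γ = √2 for criteria weights, and final scores are the normalized geometric row means. This is a minimal sketch of the scoring step only, not the proposed DSS or its stochastic simulation:

```python
import numpy as np

def scores_from_judgments(delta, factor=2.0):
    """Multiplicative-AHP scores: delta[i][j] is the integer verbal
    gradation (positive means i is preferred over j); entries become
    ratios factor**delta, and scores are normalized geometric row means."""
    r = factor ** np.asarray(delta, dtype=float)
    g = r.prod(axis=1) ** (1.0 / r.shape[1])
    return g / g.sum()
```

Calling the same function with factor=2**0.5 reproduces the milder progression used for criteria weights; the risk-analysis extension in the paper then treats these factors as uncertain quantities.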

  18. Flood statistics of simple and multiple scaling [Scale invariance of the flood regime]

    Rosso, Renzo; Mancini, Marco; Burlando, Paolo; De Michele, Carlo [Milan, Politecnico Univ. (Italy). DIIAR; Brath, Armando [Bologna, Univ. (Italy). DISTART

    1996-09-01

    The variability of flood probabilities throughout the river network is investigated by introducing the concepts of simple and multiple scaling. Flood statistics and quantiles parametrized by drainage area are considered, and a distributed geomorphoclimatic model is used to analyze their scaling properties in detail for two river basins in Tyrrhenian Liguria (north-western Italy). Although temporal storm precipitation and spatial runoff production are not scaling, the resulting flood flows do not display substantial deviations from statistical self-similarity, or simple scaling. This result has wide potential for assessing the concept of hydrological homogeneity, and it indicates a new route towards establishing physically based procedures for flood frequency regionalization.
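Simple scaling here means that a flood quantile grows as a power law of drainage area, Q(A) ~ A^θ, with the same exponent θ for every return period; under multiple scaling θ drifts with the quantile. A minimal sketch of estimating θ (the data in the test are synthetic):

```python
import numpy as np

def scaling_exponent(areas, quantiles):
    """Log-log least-squares slope theta in Q(A) ~ A**theta.
    Under simple scaling, theta is the same for every flood quantile;
    under multiple scaling it changes with return period."""
    slope, _ = np.polyfit(np.log(areas), np.log(quantiles), 1)
    return slope
```

Comparing the fitted exponents across several return periods is the practical test distinguishing simple from multiple scaling in a regionalization study.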

  19. Multiple mechanisms generate a universal scaling with dissipation for the air-water gas transfer velocity

    Katul, Gabriel; Liu, Heping

    2017-02-01

    A large corpus of field and laboratory experiments supports the finding that the water-side transfer velocity k_L of sparingly soluble gases near air-water interfaces scales as k_L ~ (νɛ)^(1/4), where ν is the kinematic water viscosity and ɛ is the mean turbulent kinetic energy dissipation rate. Originally predicted from surface renewal theory, this scaling appears to hold for marine and coastal systems and across many environmental conditions. It is shown that multiple approaches to representing the effects of turbulence on k_L lead to this expression when the Kolmogorov microscale is assumed to be the most efficient transporting eddy near the interface. The approaches considered range from simplified surface renewal schemes with distinct models for renewal durations, scaling and dimensional considerations, and a new structure function approach derived using analogies between scalar and momentum transfer. The work offers a new perspective as to why the aforementioned 1/4 scaling is robust.

  20. VLSI scaling methods and low power CMOS buffer circuit

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

    Device scaling is an important part of very large scale integration (VLSI) design and has driven the success of the VLSI industry, resulting in denser and faster integration of devices. As the technology node moves into the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance when scaling devices. In this paper, different scaling methods are studied first. These methods are used to identify their effects on the power dissipation and propagation delay of the CMOS buffer circuit. To mitigate power dissipation in scaled devices, we propose a reliable leakage-reduction low-power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are taken on the HSPICE tool with Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Process, voltage, and temperature variations are analyzed to prove the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which improves circuit reliability.

  1. Multiple scales and phases in discrete chains with application to folded proteins

    Sinelnikova, A.; Niemi, A. J.; Nilsson, Johan; Ulybyshev, M.

    2018-05-01

    Chiral heteropolymers such as large globular proteins can simultaneously support multiple length scales. The interplay between the different scales brings about conformational diversity, determines the phase properties of the polymer chain, and governs the structure of the energy landscape. Most importantly, multiple scales produce complex dynamics that enable proteins to sustain living matter. However, at the moment there is incomplete understanding of how to identify and distinguish the various scales that determine the structure and dynamics of a complex protein. Here we address this pressing problem. We develop a methodology with the potential to systematically identify different length scales, in the general case of a linear polymer chain. For this we introduce and analyze the properties of an order parameter that can both reveal the presence of different length scales and can also probe the phase structure. We first develop our concepts in the case of chiral homopolymers. We introduce a variant of Kadanoff's block-spin transformation to coarse grain piecewise linear chains, such as the C α backbone of a protein. We derive analytically, and then verify numerically, a number of properties that the order parameter can display, in the case of a chiral polymer chain. In particular, we propose that in the case of a chiral heteropolymer the order parameter can reveal traits of several different phases, contingent on the length scale at which it is scrutinized. We confirm that this is the case with crystallographic protein structures in the Protein Data Bank. Thus our results suggest relations between the scales, the phases, and the complexity of folding pathways.

  2. Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.

    Tomas, Jose M.; Oliver, Amparo

    1999-01-01

    Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)

  3. A multi-scale method of mapping urban influence

    Timothy G. Wade; James D. Wickham; Nicola Zacarelli; Kurt H. Riitters

    2009-01-01

    Urban development can impact environmental quality and ecosystem services well beyond urban extent. Many methods to map urban areas have been developed and used in the past, but most have simply tried to map existing extent of urban development, and all have been single-scale techniques. The method presented here uses a clustering approach to look beyond the extant...

  4. Subscales correlations between MSSS-88 and PRISM scales in evaluation of spasticity for patients with multiple sclerosis

    Knežević Tatjana

    2017-01-01

    Introduction/Objective. Patient-reported outcomes have been recognized as an important way of assessing the health and well-being of patients with multiple sclerosis (MS). The aim of the study is to determine the correlation between different subscales of the Patient-Reported Impact of Spasticity Measure (PRISM) and the Multiple Sclerosis Spasticity Scale (MSSS-88) in the estimation of spasticity influence on different domains. Methods. The study is a cross-sectional observational study. The MSSS-88 and PRISM scales were analyzed in five domains (body-function domain, activity domain, participation domain, personal factors/wellbeing domain, and hypothesis). For statistical interpretation of the correlation we performed Spearman's ρ-test, concurrent validity, divergent validity, and the linear regression model. Results. We found a significant correlation between subscales of the evaluated MSSS-88 and PRISM scales for body domains; the highest correlation was between the need for assistance/positioning (NA/P) and walking (W). Spasticity has the weakest correlation with the need for intervention (NI). The presence of pain has a negative impact, and there is a significant positive correlation between pain discomfort and NI. In the domain of body function for males, there was a non-significant correlation between muscle spasms and NI. The same applies to the social functioning and social embarrassment domains, as well as to emotional health and psychological agitation in the personal factors/wellbeing domain. The differences between genders of MS patients persist in different domains; muscle spasms are strong predictors for NI, and body movement is a strong predictor versus W for NA/P. Conclusion. The MSSS-88 and PRISM scales can be considered reliable in measuring different domains of disability for MS patients with spasticity. Because it is shorter, quicker, and simpler to use, it is concluded that the PRISM scale can successfully compete with and replace the MSSS-88 scale in
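    Spearman's ρ, used for the subscale correlations above, can be computed directly from paired scores. The values below are hypothetical illustrations, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical walking (W) and need-for-assistance/positioning (NA/P)
# subscale scores for 8 patients -- illustrative only, not study data.
walking = np.array([10, 14, 9, 20, 13, 18, 11, 16])
need_assist = np.array([13, 16, 11, 22, 15, 19, 12, 17])

rho, p_value = spearmanr(walking, need_assist)  # rank correlation and p-value
```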

  5. Local scale multiple quantitative risk assessment and uncertainty evaluation in a densely urbanised area (Brescia, Italy)

    S. Lari

    2012-11-01

    The study of the interactions between natural and anthropogenic risks is necessary for quantitative risk assessment in areas affected by active natural processes, high population density and strong economic activities.

    We present a multiple quantitative risk assessment on a 420 km² high-risk area (Brescia and surroundings, Lombardy, Northern Italy) for flood, seismic and industrial accident scenarios. Expected economic annual losses are quantified for each scenario and annual exceedance probability-loss curves are calculated. Uncertainty on the input variables is propagated by means of three different methodologies: Monte Carlo simulation, first-order second-moment, and point estimate.

    Expected losses calculated by means of the three approaches show similar values for the whole study area, about 64 000 000 € for earthquakes, about 10 000 000 € for floods, and about 3000 € for industrial accidents. Locally, expected losses assume quite different values if calculated with the three different approaches, with differences up to 19%.

    The uncertainties on the expected losses and their propagation, performed with the three methods, are compared and discussed in the paper. In some cases, uncertainty reaches significant values (up to almost 50% of the expected loss). This underlines the necessity of including uncertainty in quantitative risk assessment, especially when it is used as a support for territorial planning and decision making. The method is developed with a view to possible application at a regional-national scale, on the basis of data available across the Italian national territory.
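    A minimal sketch of Monte Carlo propagation of input uncertainty to an expected annual loss. The distributions and parameters below are invented for illustration and are unrelated to the Brescia inputs.

```python
import numpy as np

# Monte Carlo propagation: sample uncertain inputs, push each sample
# through the loss model, then summarize the output distribution.
rng = np.random.default_rng(0)
n = 100_000
p_event = rng.normal(0.01, 0.002, n).clip(min=0.0)       # annual event probability
loss_given_event = rng.lognormal(np.log(5e6), 0.3, n)    # loss per event (euros)

annual_loss = p_event * loss_given_event
mean_loss = annual_loss.mean()                           # expected annual loss
lo, hi = np.percentile(annual_loss, [5, 95])             # 90% uncertainty band
```

    First-order second-moment and point-estimate methods approximate the same output moments from a handful of model evaluations instead of full sampling, which is why the paper can compare all three on the same loss model.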

  6. Lead Selenide Nanostructures Self-Assembled across Multiple Length Scales and Dimensions

    Evan K. Wujcik

    2016-01-01

    A self-assembly approach to lead selenide (PbSe) structures that are organized across multiple length scales and multiple dimensions has been achieved. These structures consist of angstrom-scale 0D PbSe crystals, synthesized via a hot solution process, which have stacked into 1D nanorods via aligned dipoles. These 1D nanorods have arranged into nanoscale 2D sheets via directional short-ranged attraction. The nanoscale 2D sheets then further aligned into larger 2D microscale planes. In this study, the authors have characterized the PbSe structures via conventional and cryo-TEM and EDX, showing that this multiscale, multidimensional self-assembled alignment is not due to drying effects. These PbSe structures hold promise for applications in advanced materials, particularly electronic technologies, where alignment can aid device performance.

  7. Multiple instance learning tracking method with local sparse representation

    Xie, Chengjun

    2013-10-01

    When objects undergo large pose changes, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail to track them. To address this issue, in this study the authors propose an online algorithm combining multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift caused by accumulated errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamic MIL classifier is proposed. Experiments on some publicly available benchmarks of video sequences show that our proposed tracker is more robust and effective than others.

  8. Scaling laws governing the multiple scattering of diatomic molecules under Coulomb explosion

    Sigmund, P.

    1992-01-01

    The trajectories of fast molecules during and after penetration through foils are governed by Coulomb explosion and distorted by multiple scattering and other penetration phenomena. A scattering event may cause the energy available for Coulomb explosion to increase or decrease, and angular momentum may be transferred to the molecule. Because of continuing Coulomb explosion inside and outside the target foil, the transmission pattern recorded at a detector far away from the target is not just a linear superposition of Coulomb explosion and multiple scattering. The velocity distribution of an initially monochromatic and well-collimated, but randomly oriented, beam of molecular ions is governed by a generalization of the standard Bothe-Landau integral that governs the multiple scattering of atomic ions. Emphasis has been laid on the distribution in relative velocity and, in particular, relative energy. The statistical distributions governing the longitudinal motion (i.e., the relative motion along the molecular axis) and the rotational motion can be scaled into standard multiple-scattering distributions of atomic ions. The two scaling laws are very different. For thin target foils, the significance of rotational energy transfer is enhanced by an order of magnitude compared to switched-off Coulomb explosion. A distribution for the total relative energy (i.e., longitudinal plus rotational motion) has also been found, but its scaling behavior is more complex. Explicit examples given for all three distributions refer to power-law scattering. As a first approximation, scattering events undergone by the two atoms in the molecule were assumed uncorrelated. A separate section has been devoted to an estimate of the effect of impact-parameter correlation on the multiple scattering of penetrating molecules

  9. Seed harvesting by a generalist consumer is context-dependent: Interactive effects across multiple spatial scales

    Ostoja, Steven M.; Schupp, Eugene W.; Klinger, Rob

    2013-01-01

    multiple scales. Associational effects provide a useful theoretical basis for better understanding harvester ant foraging decisions. These results demonstrate the importance of ecological context for seed removal, which has implications for seed pools, plant populations and communities.

  10. Development of a patient reported outcome scale for fatigue in multiple sclerosis: The Neurological Fatigue Index (NFI-MS)

    Tennant Alan

    2010-02-01

    Background. Fatigue is a common and debilitating symptom in multiple sclerosis (MS). Best-practice guidelines suggest that health services should repeatedly assess fatigue in persons with MS. Several fatigue scales are available, but concern has been expressed about their validity. The objective of this study was to examine the reliability and validity of a new scale for MS fatigue, the Neurological Fatigue Index (NFI-MS). Methods. Qualitative analysis of 40 MS patient interviews had previously contributed to a coherent definition of fatigue, and a potential 52-item set representing the salient themes. A draft questionnaire was mailed out to 1223 people with MS, and the resulting data subjected to both factor and Rasch analysis. Results. Data from 635 respondents (51.9% response) were split randomly into an 'evaluation' and a 'validation' sample. Exploratory factor analysis identified four potential subscales: 'physical', 'cognitive', 'relief by diurnal sleep or rest' and 'abnormal nocturnal sleep and sleepiness'. Rasch analysis led to further item reduction and the generation of a Summary scale comprising items from the Physical and Cognitive subscales. The scales were shown to fit Rasch model expectations across both the evaluation and validation samples. Conclusion. A simple 10-item Summary scale, together with scales measuring the physical and cognitive components of fatigue, was validated for MS fatigue.

  11. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Tang Xiaofeng

    2014-01-01

    The paper presents three warning-time distances for the safe driving of multiple groups of vehicles in a highway tunnel environment, based on a distributed model predictive control approach for large-scale systems. The system includes two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. The optimization of each group considers both local optimization and the optimization characteristics of neighboring subgroups, which ensures global optimization performance. Second, the three warning-time distances are studied based on the basic principles of highway intelligent space (HIS), and the information framework concept is proposed for the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles in fog, rain, or snow.

  12. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-06-01

    Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term.
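    The grey-system half of such a hybrid is commonly a GM(1,1) model applied to the annual (long-term) series. A minimal sketch of the standard GM(1,1) formulation follows; the function and test series are illustrative, not the paper's exact implementation or the Xiamen data.

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """GM(1,1) grey forecast (standard formulation): fit the whitened
    equation x(k) + a*z(k) = b on the accumulated series, then forecast
    from the continuous-time solution."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                       # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])            # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.diff(x1_hat, prepend=x1_hat[0])
    x_hat[0] = x[0]
    return x_hat[-steps:]
```

    On a near-exponential annual series (the regime GM(1,1) is built for), the one-step forecast tracks the underlying growth closely, which is why the grey component handles the medium- and long-term trend while SARIMA handles the monthly seasonality.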

  13. Community males show multiple-perpetrator rape proclivity: development and preliminary validation of an interest scale.

    Alleyne, Emma; Gannon, Theresa A; Ó Ciardha, Caoilte; Wood, Jane L

    2014-02-01

    The literature on Multiple Perpetrator Rape (MPR) is scant; however, a significant proportion of sexual offending involves multiple perpetrators. In addition to the need for research with apprehended offenders of MPR, there is also a need to conduct research with members of the general public. Recent advances in the forensic literature have led to the development of self-report proclivity scales. These scales have enabled researchers to conduct evaluative studies sampling from members of the general public who may be perpetrators of sexual offenses and have remained undetected, or at highest risk of engaging in sexual offending. The current study describes the development and preliminary validation of the Multiple-Perpetrator Rape Interest Scale (M-PRIS), a vignette-based measure assessing community males' sexual arousal to MPR, behavioral propensity toward MPR and enjoyment of MPR. The findings show that the M-PRIS is a reliable measure of community males' sexual interest in MPR with high internal reliability and temporal stability. In a sample of university males we found that a large proportion (66%) did not emphatically reject an interest in MPR. We also found that rape-supportive cognitive distortions, antisocial attitudes, and high-risk sexual fantasies were predictors of sexual interest in MPR. We discuss these findings and the implications for further research employing proclivity measures referencing theory development and clinical practice.

  14. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon

    2014-01-01

    The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible
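    The Langevin description of a single collective solvent coordinate can be illustrated with an overdamped (single-relaxation-time) sketch; the parameters below are illustrative reduced units, not the paper's fitted solvent models.

```python
import numpy as np

# Overdamped Langevin dynamics for one collective solvent coordinate x on a
# harmonic free-energy surface F(x) = k*x**2/2 (Euler-Maruyama integration).
k, gamma, kT, dt = 1.0, 1.0, 1.0, 0.01   # reduced units, illustrative
nsteps = 200_000
rng = np.random.default_rng(1)
noise_amp = np.sqrt(2.0 * kT * dt / gamma)

x = 0.0
traj = np.empty(nsteps)
for i in range(nsteps):
    x += -k * x * dt / gamma + noise_amp * rng.standard_normal()
    traj[i] = x

var_x = traj[2000:].var()   # equilibrium variance should approach kT/k
```

    Recovering the equipartition variance kT/k is a standard sanity check; multiple relaxation time scales and inertia, as discussed above, require generalized (memory-kernel) Langevin equations rather than this single-time-scale form.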

  15. Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods

    Werner, Arelia T.; Cannon, Alex J.

    2016-04-01

    Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. 
Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale
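    Several of the compared methods (e.g. BCCAQ, BCSD) rely on quantile mapping for bias correction. A minimal empirical quantile-mapping sketch on synthetic data follows; it is not the study's BCCAQ implementation.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_target):
    """Empirical quantile mapping: map model values onto the observed
    distribution via matched empirical quantiles (minimal sketch)."""
    q = np.linspace(0.0, 1.0, 101)
    return np.interp(model_target,
                     np.quantile(model_hist, q),
                     np.quantile(obs_hist, q))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)     # synthetic "observed" series
model = obs * 1.3 + 1.0             # same shape, linearly biased "model"
corrected = quantile_map(model, obs, model)
```

    Because the synthetic bias here is purely linear, the mapping removes it essentially exactly; for real climate series the correction holds only distribution-wise, which is why temporal sequencing and event timing must be validated separately, as the study does.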

  16. Charged Particles Multiplicity and Scaling Violation of Fragmentation Functions in Electron-Positron Annihilation

    Ghaffary, Tooraj

    2016-01-01

    Using data from the electron-positron annihilation process in the AMY detector at 60 GeV center-of-mass energy, the charged-particle multiplicity distribution is obtained and fitted with KNO scaling. Then, momentum spectra of charged particles and the momentum distribution with respect to the jet axis are obtained, and the results are compared to different QCD models; the distribution of fragmentation functions and scaling violations are also studied. The scaling violations of the fragmentation functions of gluon jets are expected to be stronger than those of quark jets; one reason is that the gluon splitting function is larger than the quark splitting function.

  17. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    Hasanov, Khalid

    2014-03-04

    Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to the optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix-matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
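    SUMMA forms C as a sequence of rank-(n/p) outer-product updates, one per broadcast step, and it is this communication pattern that the hierarchical variant reorganizes. A serial sketch of the pattern (with the broadcasts omitted):

```python
import numpy as np

def summa_outer_product(A, B, p=4):
    """Serial sketch of SUMMA's algorithmic core: C accumulates as a sum
    of p rank-(n/p) updates; in the parallel algorithm each column/row
    panel pair is broadcast along processor rows/columns before the update."""
    n = A.shape[1]
    assert n % p == 0, "panel count must divide the inner dimension"
    blk = n // p
    C = np.zeros((A.shape[0], B.shape[1]))
    for kk in range(p):
        s = slice(kk * blk, (kk + 1) * blk)
        C += A[:, s] @ B[s, :]      # one outer-product (panel) update
    return C

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
```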

  18. Decomposing Multi‐Level Ethnic Segregation in Auckland, New Zealand, 2001–2013 : Segregation Intensity for Multiple Groups at Multiple Scales

    Manley, D.J.; Johnston, Ron; Jones, Kelvyn

    2018-01-01

    There has been a growing appreciation that the processes generating urban residential segregation operate at multiple scales, stimulating innovations into the measurement of their outcomes. This paper applies a multi‐level modelling approach to that issue to the situation in Auckland, where multiple

  19. Genetic structuring of northern myotis (Myotis septentrionalis) at multiple spatial scales

    Johnson, Joshua B.; Roberts, James H.; King, Timothy L.; Edwards, John W.; Ford, W. Mark; Ray, David A.

    2014-01-01

    Although groups of bats may be genetically distinguishable at large spatial scales, the effects of forest disturbances, particularly permanent land use conversions on fine-scale population structure and gene flow of summer aggregations of philopatric bat species are less clear. We genotyped and analyzed variation at 10 nuclear DNA microsatellite markers in 182 individuals of the forest-dwelling northern myotis (Myotis septentrionalis) at multiple spatial scales, from within first-order watersheds scaling up to larger regional areas in West Virginia and New York. Our results indicate that groups of northern myotis were genetically indistinguishable at any spatial scale we considered, and the collective population maintained high genetic diversity. It is likely that the ability to migrate, exploit small forest patches, and use networks of mating sites located throughout the Appalachian Mountains, Interior Highlands, and elsewhere in the hibernation range have allowed northern myotis to maintain high genetic diversity and gene flow regardless of forest disturbances at local and regional spatial scales. A consequence of maintaining high gene flow might be the potential to minimize genetic founder effects following population declines caused currently by the enzootic White-nose Syndrome.

  20. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline.

    Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C

    2013-12-21

    As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain, and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. We evaluated 12 microarray meta-analysis methods for combining multiple simulated expression profiles; these methods can be categorized by hypothesis-setting purpose: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE genes with non-zero effect in a "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation on real data and results from the MDS and entropy analyses provided an insightful and practical guideline for choosing the most suitable method in a given application.
All source files for simulation and real data are available on the author's publication website.

  1. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods for estimating basin-scale ET and its applications in understanding basin water-balance dynamics. The review focuses on two aspects of ET: (i) how the basin-scale water-balance approach is used to estimate ET; and (ii) how 'direct' measurement and modelling approaches are used to estimate basin-scale ET. The basin water-balance approach requires good precipitation and discharge data to calculate ET as a residual on longer (annual) time scales, where net storage changes are assumed to be negligible. ET estimated from the basin water-balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET, with basin-wide averaging used to estimate basin-scale ET. The direct methods can be grouped into soil-moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large-area applications, and identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
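The water-balance residual described in point (i) reduces to a one-line computation. The sketch below uses hypothetical annual totals (all terms in mm/yr) and assumes net storage change is negligible, as the review does for annual time scales:

```python
def basin_et_residual(precip_mm, discharge_mm, storage_change_mm=0.0):
    """Annual basin-scale ET as the water-balance residual (mm/yr).

    ET = P - Q - dS, with dS assumed ~0 on annual time scales,
    following the basin water-balance approach described above."""
    return precip_mm - discharge_mm - storage_change_mm

# Hypothetical basin: 900 mm precipitation, 320 mm runoff depth per year
print(basin_et_residual(900.0, 320.0))  # 580.0 mm/yr
```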

  2. Gamma Ray Tomographic Scan Method for Large Scale Industrial Plants

    Moon, Jin Ho; Jung, Sung Hee; Kim, Jong Bum; Park, Jang Geun

    2011-01-01

    Gamma-ray tomography systems have been used to investigate chemical processes for the last decade. There have been many cases of gamma-ray tomography at laboratory scale but few at industrial scale. Non-tomographic equipment with gamma-ray sources is often used in process diagnosis: gamma radiography, gamma column scanning and the radioisotope tracer technique are examples of gamma-ray applications in industry. Although much non-tomographic gamma-ray equipment is used outdoors, most gamma-ray tomographic systems have remained indoor, laboratory equipment. As gamma tomography has developed, however, demand for gamma tomography of real-scale plants has also increased. To develop an industrial-scale system, we introduce a gamma-ray tomographic system with fixed detectors and a rotating source. The general system configuration is similar to 4th-generation CT geometry, but the main effort has been made to enable rapid installation of the system at a real-scale industrial plant. This work is a first experimental attempt to apply 4th-generation industrial gamma tomographic scanning. Individual 0.5-inch NaI detectors were used for gamma-ray detection, arranged in a circle around the industrial plant. This tomographic scan method reduces mechanical complexity and requires a much smaller space than a conventional CT; these properties make it easy to acquire measurement data for a real-scale plant.

  3. Scale factor measure method without turntable for angular rate gyroscope

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

    In this paper, a scale factor test method without a turntable is designed for angular rate gyroscopes. A test system consisting of a test device, a data acquisition circuit and data processing software based on the LabVIEW platform is designed. Taking advantage of a gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the measured gyroscope. By shaking the test device around its edge parallel to the input axis of the gyroscopes, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. This test method is fast, and it makes the test system miniaturized and easy to carry or move. Measuring a quartz MEMS gyroscope's scale factor multiple times by this method, the difference is less than 0.2%; compared with turntable testing, the scale factor difference is less than 1%. The accuracy and repeatability of the test system are good.
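A minimal sketch of the transfer principle behind the method: both gyroscopes on the shaken device sense the same angular rate, so the unknown scale factor follows from the known one and the ratio of the two outputs. The least-squares ratio estimate and the synthetic shaking record below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def transfer_scale_factor(std_output, meas_output, std_scale_factor):
    """Estimate the measured gyro's scale factor from a standard gyro.

    Both gyros see the same angular rate, so meas/SF_meas == std/SF_std.
    A least-squares ratio over the whole shaking record averages out noise.
    (Sketch inferred from the method description; not the paper's code.)"""
    std_output = np.asarray(std_output, dtype=float)
    meas_output = np.asarray(meas_output, dtype=float)
    ratio = np.dot(meas_output, std_output) / np.dot(std_output, std_output)
    return ratio * std_scale_factor

# Synthetic shaking record: measured gyro outputs 1.25x the standard's signal
t = np.linspace(0.0, 1.0, 200)
std = np.sin(8 * np.pi * t)
meas = 1.25 * std
print(transfer_scale_factor(std, meas, std_scale_factor=0.02))  # ~0.025
```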

  4. Reduced α-stable dynamics for multiple time scale systems forced with correlated additive and multiplicative Gaussian white noise

    Thompson, William F.; Kuske, Rachel A.; Monahan, Adam H.

    2017-11-01

    Stochastic averaging problems with Gaussian forcing have been the subject of numerous studies, but far less attention has been paid to problems with infinite-variance stochastic forcing, such as an α-stable noise process. It has been shown that simple linear systems driven by correlated additive and multiplicative (CAM) Gaussian noise, which emerge in the context of reduced atmosphere and ocean dynamics, have infinite variance in certain parameter regimes. In this study, we consider the stochastic averaging of systems where a linear CAM noise process in the infinite variance parameter regime drives a comparatively slow process. We use (semi)-analytical approximations combined with numerical illustrations to compare the averaged process to one that is forced by a white α-stable process, demonstrating consistent properties in the case of large time-scale separation. We identify the conditions required for the fast linear CAM process to have such an influence in driving a slower process and then derive an (effectively) equivalent fast, infinite-variance process for which an existing stochastic averaging approximation is readily applied. The results are illustrated using numerical simulations of a set of example systems.
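A hedged sketch of the kind of forcing considered here: symmetric α-stable increments can be sampled with the standard Chambers-Mallows-Stuck formula and used to drive a slow linear process with a simple Euler scheme. This toy model is not the paper's CAM reduction or its averaging approximation; all parameter values are hypothetical.

```python
import numpy as np

def alpha_stable(alpha, size, rng):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck formula."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
# Slow linear process forced by alpha-stable increments (illustrative only):
# dX = -gamma * X dt + eps * dL_alpha
alpha, gamma, eps, dt, n = 1.5, 1.0, 0.1, 1e-3, 50_000
x = np.zeros(n)
xi = alpha_stable(alpha, n - 1, rng)
for i in range(n - 1):
    x[i + 1] = x[i] - gamma * x[i] * dt + eps * dt ** (1.0 / alpha) * xi[i]
print(np.max(np.abs(x)))  # occasional large excursions reflect the heavy tails
```

For alpha = 2 the formula reduces to a Gaussian (variance 2), which gives a quick sanity check on the sampler.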

  5. Analysis of Resource and Emission Impacts: An Emergy-Based Multiple Spatial Scale Framework for Urban Ecological and Economic Evaluation

    Lixiao Zhang

    2011-03-01

    The development of the complex and multi-dimensional urban socio-economic system creates impacts on natural capital and human capital which range from the local to the global scale. An emergy-based multiple spatial scale analysis framework and a rigorous accounting method that can quantify the value of human-made and natural capital losses are proposed in this study. With the intent of comparing the trajectory of Beijing over time, the characteristics of the interfaces between different scales are considered to explain resource trade and the impacts of emissions. In addition, our improved emergy analysis and acceptable management options that are in agreement with Beijing's overall sustainability strategy are examined. The results showed that Beijing's economy was closely correlated with the consumption of nonrenewable resources and exerted rising pressure on the environment. Of the total emergy used by the economic system, nonrenewable resources imported from other provinces contribute the most, and the multi-scale environmental impacts of waterborne and airborne pollution continued to increase from 1999 to 2006. Given the input structure, Beijing was chiefly profiting by drawing resources from other provinces in China and transferring the emissions outside the city. The results of our study should enable urban policy planners to better understand the multi-scale policy planning and development design of an urban ecological economic system.

  6. The Language Teaching Methods Scale: Reliability and Validity Studies

    Okmen, Burcu; Kilic, Abdurrahman

    2016-01-01

    The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…

  7. A comparison of multidimensional scaling methods for perceptual mapping

    Bijmolt, T.H.A.; Wedel, M.

    Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare
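As a point of reference for the likelihood-based variants the article compares, classical (Torgerson) metric MDS can be written in a few lines: double-center the squared dissimilarities and take the top eigenvectors. The toy "product" configuration below is hypothetical.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed points from a distance matrix d.

    Double-center the squared distances to get a Gram matrix, then use
    the top-k eigenvectors scaled by sqrt(eigenvalue)."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                   # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)                # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]            # pick the top k
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Hypothetical dissimilarities among 4 products on a known 2-D map
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
x = classical_mds(d, k=2)
d_hat = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
print(np.allclose(d, d_hat))  # exact Euclidean distances are recovered
```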

  8. Correlates of the Rosenberg Self-Esteem Scale Method Effects

    Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan

    2006-01-01

    Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…

  9. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    The study has shown that the use of multiple methods facilitates the calibration and validation of models and may provide a more accurate measure of soil erosion rates in ungauged catchments. Moreover, the approach could be used to identify the most appropriate working and operational scales for soil erosion modelling.

  10. Single or multiple synchronization transitions in scale-free neuronal networks with electrical or chemical coupling

    Hao Yinghang; Gong, Yubing; Wang Li; Ma Xiaoguang; Yang Chuanlu

    2011-01-01

    Research highlights: → A single synchronization transition for gap-junctional coupling. → Multiple synchronization transitions for chemical synaptic coupling. → Gap junctions and chemical synapses have different impacts on synchronization transitions. → Chemical synapses may play a dominant role in neurons' information processing. - Abstract: In this paper, we have studied time delay- and coupling strength-induced synchronization transitions in scale-free modified Hodgkin-Huxley (MHH) neuron networks with gap-junctional and chemical synaptic coupling. It is shown that the synchronization transitions differ markedly between the two coupling types. For gap junctions, the neurons exhibit a single synchronization transition with time delay and coupling strength, while for chemical synapses there are multiple synchronization transitions with time delay, and the synchronization transition with coupling strength depends on the time delay length. For short delays we observe a single synchronization transition, whereas for long delays the neurons exhibit multiple synchronization transitions as the coupling strength is varied. These results show that gap junctions and chemical synapses have different impacts on the pattern formation and synchronization transitions of scale-free MHH neuronal networks, and that chemical synapses, compared to gap junctions, may play a dominant and more active role in the firing activity of the networks. These findings should be helpful for further understanding the roles of gap junctions and chemical synapses in the firing dynamics of neuronal networks.

  11. Single or multiple synchronization transitions in scale-free neuronal networks with electrical or chemical coupling

    Hao Yinghang [School of Physics, Ludong University, Yantai 264025 (China); Gong, Yubing, E-mail: gongyubing09@hotmail.co [School of Physics, Ludong University, Yantai 264025 (China); Wang Li; Ma Xiaoguang; Yang Chuanlu [School of Physics, Ludong University, Yantai 264025 (China)

    2011-04-15

    Research highlights: A single synchronization transition for gap-junctional coupling. Multiple synchronization transitions for chemical synaptic coupling. Gap junctions and chemical synapses have different impacts on synchronization transitions. Chemical synapses may play a dominant role in neurons' information processing. - Abstract: In this paper, we have studied time delay- and coupling strength-induced synchronization transitions in scale-free modified Hodgkin-Huxley (MHH) neuron networks with gap-junctional and chemical synaptic coupling. It is shown that the synchronization transitions differ markedly between the two coupling types. For gap junctions, the neurons exhibit a single synchronization transition with time delay and coupling strength, while for chemical synapses there are multiple synchronization transitions with time delay, and the synchronization transition with coupling strength depends on the time delay length. For short delays we observe a single synchronization transition, whereas for long delays the neurons exhibit multiple synchronization transitions as the coupling strength is varied. These results show that gap junctions and chemical synapses have different impacts on the pattern formation and synchronization transitions of scale-free MHH neuronal networks, and that chemical synapses, compared to gap junctions, may play a dominant and more active role in the firing activity of the networks. These findings should be helpful for further understanding the roles of gap junctions and chemical synapses in the firing dynamics of neuronal networks.

  12. The function of communities in protein interaction networks at multiple scales

    Jones Nick S

    2010-07-01

    Abstract. Background: If biology is modular, then clusters, or communities, of proteins derived using only protein interaction network structure should define protein modules with similar biological roles. We investigate the link between biological modules and network communities in yeast and its relationship to the scale at which we probe the network. Results: Our results demonstrate that the functional homogeneity of communities depends on the scale selected, and that almost all proteins lie in a functionally homogeneous community at some scale. We judge functional homogeneity using a novel test and three independent characterizations of protein function, and find a high degree of overlap between these measures. We show that a high mean clustering coefficient of a community can be used to identify those that are functionally homogeneous. By tracing the community membership of a protein through multiple scales we demonstrate how our approach could be useful to biologists focusing on a particular protein. Conclusions: We show that there is no one scale of interest in the community structure of the yeast protein interaction network, but we can identify the range of resolution parameters that yield the most functionally coherent communities, and predict which communities are most likely to be functionally homogeneous.
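The mean clustering coefficient used above to flag functionally homogeneous communities is the standard graph-theoretic quantity; here is a minimal sketch on a toy "community" adjacency matrix (hypothetical data, not the yeast network):

```python
import numpy as np

def clustering_coefficients(a):
    """Per-node clustering coefficient from a 0/1 adjacency matrix.

    c_i = (triangles through i) / (possible pairs among i's neighbors);
    the diagonal of A^3 counts closed walks of length 3, i.e. 2x triangles."""
    a = np.asarray(a, dtype=float)
    deg = a.sum(axis=1)
    triangles = np.diag(a @ a @ a) / 2.0
    possible = deg * (deg - 1) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        c = np.where(possible > 0, triangles / possible, 0.0)
    return c

# Toy "community": a triangle (proteins 0-1-2) plus a pendant protein 3
a = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
c = clustering_coefficients(a)
print(c)          # approximately [1.0, 1.0, 0.333, 0.0]
print(c.mean())   # community-level mean clustering coefficient
```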

  13. Skin and scales of teleost fish: Simple structure but high performance and multiple functions

    Vernerey, Franck J.; Barthelat, Francois

    2014-08-01

    Natural and man-made structural materials perform similar functions, such as structural support or protection, and therefore rely on the same types of properties: strength, robustness and light weight. Nature can therefore provide a significant source of inspiration for new and alternative engineering designs. We report here some results regarding a very common, yet largely unknown, type of biological material: fish skin. Within a thin, flexible and lightweight layer, fish skins display a variety of strain-stiffening and stabilizing mechanisms which promote multiple functions such as protection, robustness and swimming efficiency. We particularly discuss four important features of scaled skins: (a) a strongly elastic tensile behavior that is independent of the presence of rigid scales, (b) a compressive response that prevents the buckling and wrinkling instabilities usually predominant in thin membranes, (c) a bending response that displays nonlinear stiffening mechanisms arising from geometric constraints between neighboring scales and (d) a robust structure that preserves the above characteristics upon the loss or damage of structural elements. These properties make fish skin an attractive model for the development of very thin and flexible armors and protective layers, especially when combined with the high penetration resistance of individual scales. Scaled structures inspired by fish skin could find applications in ultra-light and flexible armor systems, flexible electronics and the design of smart, adaptive morphing structures for aerospace vehicles.

  14. Linear-scaling quantum mechanical methods for excited states.

    Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua

    2012-05-21

    The poor scaling of many existing quantum mechanical methods with respect to system size hinders their application to large systems. In this tutorial review, we focus on the latest research on linear-scaling, or O(N), quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states fall into two categories: time-domain and frequency-domain methods. The former solve the dynamics of the electronic system in real time, while the latter directly evaluate the electronic response in the frequency domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states; it has been implemented in both the time and frequency domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of convergence problems. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on- and off-resonance, the time-domain methods can be used; however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice.
Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and

  15. Neutron multiplicities as a measure for scission time scales and reaction violences

    Knoche, K.; Scobel, W.; Sprute, L.

    1991-01-01

    We discuss the temporal evolution of the fusion-fission reactions 32S + 197Au and 32S + 232Th, measured for 838 MeV projectiles by means of the neutron clock method. The results confirm existing precision lifetime versus fissility data. The total neutron multiplicity, as a measure of the initial excitation energy E*, is compared with the folding angle method. (author). 13 refs, 8 figs

  16. Multiple synchronization transitions in scale-free neuronal networks with electrical and chemical hybrid synapses

    Liu, Chen; Wang, Jiang; Wang, Lin; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok

    2014-01-01

    Highlights: • Synchronization transitions in hybrid scale-free neuronal networks are investigated. • Multiple synchronization transitions can be induced by the time delay. • The effect of synchronization transitions depends on the ratio of electrical to chemical synapses. • Coupling strength and the density of inter-neuronal links can enhance synchronization. -- Abstract: The impact of information transmission delay on synchronization transitions in scale-free neuronal networks with electrical and chemical hybrid synapses is investigated. Numerical results show that multiple synchronization transitions can be induced by different information transmission delays. As the time delay increases, the synchronization of neuronal activities can be enhanced or destroyed, irrespective of the probability of chemical synapses in the hybrid neuronal network. In particular, for a larger probability of electrical synapses, the regions of synchronous activity appear broader, with electrical synapses showing a stronger synchronizing ability than chemical ones. Moreover, increasing the coupling strength promotes synchronization monotonously, playing a role similar to increasing the probability of electrical synapses. Interestingly, the structure and parameters of the scale-free neuronal networks, and especially their structural evolution, play a more subtle role in the synchronization transitions: in the network formation process, the more strongly each new vertex attaches to the older vertices already present in the network, the more synchronous activity emerges.
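Synchronization transitions in studies like this are typically quantified by an order parameter. One common variance-based choice compares fluctuations of the population-mean voltage with the mean single-neuron fluctuation: near 1 for synchronous firing, near 1/N for N independent neurons. The measure and the traces below are a generic sketch, not the paper's exact definition:

```python
import numpy as np

def sync_parameter(v):
    """Variance-based synchronization measure for voltage traces v of shape (N, T).

    Returns Var_t(population mean) / mean_i Var_t(v_i):
    ~1 for fully synchronous traces, ~1/N for independent ones."""
    v = np.asarray(v, dtype=float)
    vm = v.mean(axis=0)                      # population-mean trace
    return vm.var() / v.var(axis=1).mean()

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
sync = np.tile(np.sin(t), (20, 1))           # 20 identical "neurons"
async_ = rng.standard_normal((20, 1000))     # 20 independent "neurons"
print(sync_parameter(sync))    # ~1.0
print(sync_parameter(async_))  # ~1/20
```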

  17. Human-Robot Teaming for Hydrologic Data Gathering at Multiple Scales

    Peschel, J.; Young, S. N.

    2017-12-01

    The use of personal robot-assistive technology by researchers and practitioners for hydrologic data gathering has grown in recent years as barriers to platform capability, cost, and human-robot interaction have been overcome. One consequence of this growth is a broad availability of unmanned platforms that may or may not be suitable for a specific hydrologic investigation. Through multiple field studies, a set of recommendations has been developed to help guide novice through experienced users in choosing appropriate unmanned platforms for a given application. This talk will present a series of hydrologic data sets gathered using a human-robot teaming approach that leverages unmanned aerial, ground, and surface vehicles over multiple scales. The field case studies discussed will be connected to the best practices, which are also provided in the presentation. This talk will be of interest to geoscience researchers and practitioners in general, as well as those working in fields related to emerging technologies.

  18. A Multiple-Item Scale for Assessing E-Government Service Quality

    Papadomichelaki, Xenia; Mentzas, Gregoris

    A critical element in the evolution of e-governmental services is the development of sites that better serve the citizens’ needs. To deliver superior service quality, we must first understand how citizens perceive and evaluate online citizen service. This involves defining what e-government service quality is, identifying its underlying dimensions, and determining how it can be conceptualized and measured. In this article we conceptualise an e-government service quality model (e-GovQual) and then we develop, refine, validate, confirm and test a multiple-item scale for measuring e-government service quality for public administration sites where citizens seek either information or services.

  19. Fission time-scale in experiments and in multiple initiation model

    Karamian, S. A., E-mail: karamian@nrmail.jinr.ru [Joint Institute for Nuclear Research (Russian Federation)

    2011-12-15

    The rate of fission for highly excited nuclei is affected by the viscous character of the system's motion in deformation coordinates, as was reported for very heavy nuclei with Z_C > 90. The long time scale of fission can be described in a model of 'fission by diffusion' that includes an assumption of overdamped diabatic motion. The fission-to-spallation ratio at intermediate proton energy could be influenced by the viscosity as well. Within the novel approach of the present work, cross-examination of the fission probability, time scales, and pre-fission neutron multiplicities results in a consistent interpretation of the whole set of observables. Earlier, the different aspects could only be reproduced in partial simulations without careful coordination.

  20. Studying time of flight imaging through scattering media across multiple size scales (Conference Presentation)

    Velten, Andreas

    2017-05-01

    Light scattering is a primary obstacle to optical imaging in a variety of environments and across many size and time scales. Scattering complicates imaging on large scales when imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models, defined by the same small set of parameters, including the scattering and absorption lengths and the phase function. We attempt a study of scattering, and of methods for imaging through scattering, across different scales and media, particularly with respect to the use of time-of-flight information. We show that using time-of-flight information in addition to spatial information provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid laboratory research, and to transfer knowledge and methodology between different fields.

  1. Temporal scale dependent interactions between multiple environmental disturbances in microcosm ecosystems.

    Garnier, Aurélie; Pennekamp, Frank; Lemoine, Mélissa; Petchey, Owen L

    2017-12-01

    Global environmental change has negative impacts on ecological systems, impacting the stable provision of functions, goods, and services. Whereas the effects of individual environmental changes (e.g. temperature change or change in resource availability) are reasonably well understood, we lack information about whether and how multiple changes interact. We examined interactions among four types of environmental disturbance (temperature, nutrient ratio, carbon enrichment, and light) in a fully factorial design using a microbial aquatic ecosystem and observed responses of dissolved oxygen saturation at three temporal scales (resistance, resilience, and return time). We tested whether multiple disturbances combine in a dominant, additive, or interactive fashion, and compared the predictability of dissolved oxygen across scales. Carbon enrichment and shading reduced oxygen concentration in the short term (i.e. resistance); although no other effects or interactions were statistically significant, resistance decreased as the number of disturbances increased. In the medium term, only enrichment accelerated recovery, but none of the other effects (including interactions) were significant. In the long term, enrichment and shading lengthened return times, and we found significant two-way synergistic interactions between disturbances. The best-performing model (dominant, additive, or interactive) depended on the temporal scale of the response. In the short term (i.e. for resistance), the dominance model predicted resistance of dissolved oxygen best, due to a large effect of carbon enrichment, whereas none of the models could predict the medium term (i.e. resilience). The long-term response was best predicted by models including interactions among disturbances. Our results indicate the importance of accounting for the temporal scale of responses when researching the effects of environmental disturbances on ecosystems. © 2017 The Authors. Global Change Biology Published by John Wiley

  2. A multiple-time-scale approach to the control of ITBs on JET

    Laborde, L.; Mazon, D.; Moreau, D. [EURATOM-CEA Association (DSM-DRFC), CEA Cadarache, 13 - Saint Paul lez Durance (France); Moreau, D. [Culham Science Centre, EFDA-JET, Abingdon, OX (United Kingdom); Ariola, M. [EURATOM/ENEA/CREATE Association, Univ. Napoli Federico II, Napoli (Italy); Cordoliani, V. [Ecole Polytechnique, 91 - Palaiseau (France); Tala, T. [EURATOM-Tekes Association, VTT Processes (Finland)

    2005-07-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady state sustainment of an internal transport barrier (ITB) and so to a stationary optimized plasma regime. Recent experiments in JET have demonstrated significant progress in achieving such a control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It's worth noting that the inverse safety factor profile evolves on a slow time scale (resistive time) while the normalized electron temperature gradient reacts on a faster one (confinement time). Moreover these experiments have shown that the controller was sensitive to rapid plasma events such as transient ITBs during the safety factor profile evolution or MHD instabilities which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to better react to rapid plasma events the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on two-time-scale state-space model. These closed-loop simulations using the full dynamic but linear model used for the controller design to simulate the plasma response have demonstrated that this new controller allows the normalized electron temperature gradient target profile to be reached faster than the one used in previous experiments. (A.C.)

  3. A multiple-time-scale approach to the control of ITBs on JET

    Laborde, L.; Mazon, D.; Moreau, D.; Moreau, D.; Ariola, M.; Cordoliani, V.; Tala, T.

    2005-01-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady state sustainment of an internal transport barrier (ITB) and so to a stationary optimized plasma regime. Recent experiments in JET have demonstrated significant progress in achieving such a control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It's worth noting that the inverse safety factor profile evolves on a slow time scale (resistive time) while the normalized electron temperature gradient reacts on a faster one (confinement time). Moreover these experiments have shown that the controller was sensitive to rapid plasma events such as transient ITBs during the safety factor profile evolution or MHD instabilities which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to better react to rapid plasma events the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on two-time-scale state-space model. These closed-loop simulations using the full dynamic but linear model used for the controller design to simulate the plasma response have demonstrated that this new controller allows the normalized electron temperature gradient target profile to be reached faster than the one used in previous experiments. (A.C.)

  4. Population genetics of the Eastern Hellbender (Cryptobranchus alleganiensis alleganiensis) across multiple spatial scales.

    Shem D Unger

    Conservation genetics is a powerful tool to assess the population structure of species and provides a framework for informing management of freshwater ecosystems. As lotic habitats become fragmented, the need to assess gene flow for species of conservation concern becomes a priority. The eastern hellbender (Cryptobranchus alleganiensis alleganiensis) is a large, fully aquatic paedomorphic salamander. Many populations are experiencing declines throughout their geographic range, yet the genetic ramifications of these declines are currently unknown. To this end, we examined levels of genetic variation and genetic structure at both range-wide and drainage (hierarchical) scales. We collected 1,203 individuals from 77 rivers throughout nine states from June 2007 to August 2011. Levels of genetic diversity were relatively high among all sampling locations. We detected significant genetic structure across populations (Fst values ranged from 0.001 between rivers within a single watershed to 0.218 between states). We identified two genetically differentiated groups at the range-wide scale: (1) the Ohio River drainage and (2) the Tennessee River drainage. An analysis of molecular variance (AMOVA) based on landscape-scale sampling of basins within the Tennessee River drainage revealed that the majority of genetic variation (~94-98%) occurs within rivers. Eastern hellbenders show a strong pattern of isolation by stream distance (IBSD) at the drainage level. Understanding levels of genetic variation and differentiation at multiple spatial and biological scales will enable natural resource managers to make more informed decisions and plan effective conservation strategies for cryptic, lotic species.
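The Fst values reported above have a compact textbook form for a single biallelic locus: the deficit of within-population heterozygosity relative to the pooled total. The sketch below is illustrative only (the study itself used multilocus microsatellite data and AMOVA), with hypothetical allele frequencies:

```python
import numpy as np

def fst_biallelic(freqs):
    """Wright's Fst for one biallelic locus across subpopulations.

    freqs: frequency of one allele in each river/subpopulation.
    Fst = (Ht - Hs) / Ht, where Hs is the mean within-population expected
    heterozygosity 2p(1-p) and Ht uses the pooled mean frequency.
    (Illustrative textbook formula, not the study's AMOVA estimator.)"""
    p = np.asarray(freqs, dtype=float)
    hs = np.mean(2 * p * (1 - p))        # mean within-population heterozygosity
    p_bar = p.mean()                     # pooled allele frequency
    ht = 2 * p_bar * (1 - p_bar)         # total expected heterozygosity
    return (ht - hs) / ht

print(fst_biallelic([0.5, 0.5]))   # identical rivers -> 0.0
print(fst_biallelic([0.1, 0.9]))   # strongly diverged rivers -> ~0.64
```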

  5. Seeing the forest through the trees: Considering roost-site selection at multiple spatial scales

    Jachowski, David S.; Rota, Christopher T.; Dobony, Christopher A.; Ford, W. Mark; Edwards, John W.

    2016-01-01

    Conservation of bat species is one of the most daunting wildlife conservation challenges in North America, requiring detailed knowledge about their ecology to guide conservation efforts. Outside of the hibernating season, bats in temperate forest environments spend their diurnal time in day-roosts. Beyond providing simple shelter, summer roosts are critical as maternity sites and for maintaining social group contact. To date, a major focus of bat conservation has concentrated on conserving individual roost sites, with comparatively less focus on the role that broader habitat conditions contribute towards roost-site selection. We evaluated roost-site selection by a northern population of federally-endangered Indiana bats (Myotis sodalis) at Fort Drum Military Installation in New York, USA at three different spatial scales: landscape, forest stand, and individual tree level. During 2007–2011, we radiotracked 33 Indiana bats (10 males, 23 females) and located 348 roosting events in 116 unique roost trees. At the landscape scale, bat roost-site selection was positively associated with northern mixed forest, increased slope, and greater distance from human development. At the stand scale, we observed subtle differences in roost-site selection based on sex and season, but roost selection was generally positively associated with larger stands with a higher basal area, larger tree diameter, and a greater sugar maple (Acer saccharum) component. We observed no distinct trends of roosts being near high-quality foraging areas of water and forest edges. At the tree scale, roosts were typically in American elm (Ulmus americana) or sugar maple of large diameter (>30 cm) and moderate decay with loose bark. Collectively, our results highlight the importance of considering day-roost needs simultaneously across multiple spatial scales. Size and decay class of individual roosts are key ecological attributes for the Indiana bat; however, larger-scale stand structural

  6. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem that is equivalent to a linear program is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are obtained simultaneously by solving a sequence of linear relaxation problems. Global convergence is proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is both feasible and efficient.
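
    As a minimal illustration of the branch-and-bound idea (not the authors' two-phase relaxation), the sketch below globally minimizes a product of two affine functions over an interval, using interval-arithmetic lower bounds for pruning and bisection for branching; the function, tolerances, and names are illustrative:

```python
# Branch-and-bound sketch for  min (a1*x+b1)*(a2*x+b2)  on [lo, hi].
# A lower bound on each subinterval comes from the interval product of the
# two affine ranges; subintervals whose bound cannot beat the incumbent
# are pruned, the rest are bisected.

def affine_range(a, b, lo, hi):
    """Exact range of a*x + b over [lo, hi]."""
    v1, v2 = a * lo + b, a * hi + b
    return min(v1, v2), max(v1, v2)

def product_lower_bound(r1, r2):
    """Lower bound of u*v for u in r1, v in r2 (interval multiplication)."""
    cands = [r1[0] * r2[0], r1[0] * r2[1], r1[1] * r2[0], r1[1] * r2[1]]
    return min(cands)

def minimize_product(a1, b1, a2, b2, lo, hi, tol=1e-5):
    f = lambda x: (a1 * x + b1) * (a2 * x + b2)
    best = min(f(lo), f(hi))            # incumbent upper bound
    stack = [(lo, hi)]
    while stack:
        l, h = stack.pop()
        lb = product_lower_bound(affine_range(a1, b1, l, h),
                                 affine_range(a2, b2, l, h))
        if lb >= best - 1e-12:          # prune: bound cannot improve incumbent
            continue
        m = 0.5 * (l + h)
        best = min(best, f(m))          # update incumbent with a feasible point
        if h - l > tol:                 # branch: bisect the interval
            stack.append((l, m))
            stack.append((m, h))
    return best

# Example: (x-1)*(x-3) on [0, 4] has global minimum -1 at x = 2.
```

    The linear relaxation in the paper plays the same role as `product_lower_bound` here: it supplies a cheap, guaranteed lower bound on each region.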

  7. Review of Monte Carlo methods for particle multiplicity evaluation

    Armesto-Pérez, Nestor

    2005-01-01

    I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided.

  8. Review of Monte Carlo methods for particle multiplicity evaluation

    Armesto, Nestor

    2005-01-01

    I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided.

  9. A multiple length scale description of the mechanism of elastomer stretching

    Neuefeind, J.; Skov, Anne Ladegaard; Daniels, J. E.

    2016-01-01

    Conventionally, the stretching of rubber is modeled exclusively by rotations of segments of the embedded polymer chains; i.e. changes in entropy. However models have not been tested on all relevant length scales due to a lack of appropriate probes. Here we present a universal X-ray based method f...

  10. A comparison of confirmatory factor analysis methods : Oblique multiple group method versus confirmatory common factor method

    Stuive, Ilse

    2007-01-01

    Confirmatory Factor Analysis (CFA) is a frequently used method when researchers have a particular hypothesis about the assignment of items to one or more subtests and want to investigate whether this assignment is also supported by the collected research data. The most commonly used

  11. Maxwell iteration for the lattice Boltzmann method with diffusive scaling

    Zhao, Weifeng; Yong, Wen-An

    2017-03-01

    In this work, we present an alternative derivation of the Navier-Stokes equations from Bhatnagar-Gross-Krook models of the lattice Boltzmann method with diffusive scaling. This derivation is based on the Maxwell iteration and can expose certain important features of the lattice Boltzmann solutions. Moreover, it will be seen to be much more straightforward and logically clearer than the existing approaches including the Chapman-Enskog expansion.

  12. Variable scaling method and Stark effect in hydrogen atom

    Choudhury, R.K.R.; Ghosh, B.

    1983-09-01

    By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator, we have found simple formulas for the energy eigenvalues of the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique of Armstrong, and the variable scaling method has then been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)

  13. Validation of patient determined disease steps (PDDS) scale scores in persons with multiple sclerosis.

    Learmonth, Yvonne C; Motl, Robert W; Sandroff, Brian M; Pula, John H; Cadavid, Diego

    2013-04-25

    The Patient Determined Disease Steps (PDDS) is a promising patient-reported outcome (PRO) of disability in multiple sclerosis (MS). To date, there is limited evidence regarding the validity of PDDS scores, despite its sound conceptual development and broad inclusion in MS research. This study examined the validity of the PDDS based on (1) its association with Expanded Disability Status Scale (EDSS) scores and (2) the pattern of associations between PDDS and EDSS scores with Functional System (FS) scores as well as ambulatory and other outcomes. 96 persons with MS provided demographic/clinical information, completed the PDDS and other PROs including the Multiple Sclerosis Walking Scale-12 (MSWS-12), and underwent a neurological examination for generating FS and EDSS scores. Participants completed assessments of cognition and ambulation, including the 6-minute walk (6MW), and wore an accelerometer during waking hours over seven days. There was a strong correlation between EDSS and PDDS scores (ρ = .783). PDDS and EDSS scores were strongly correlated with Pyramidal (ρ = .578 and ρ = .647, respectively) and Cerebellar (ρ = .501 and ρ = .528, respectively) FS scores, as well as with 6MW distance (ρ = .704 and ρ = .805, respectively), MSWS-12 scores (ρ = .801 and ρ = .729, respectively), and accelerometer steps/day (ρ = -.740 and ρ = -.717, respectively). This study provides novel evidence supporting the PDDS as a valid PRO of disability in MS.
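
    The validity coefficients above are Spearman rank correlations; a minimal pure-Python version (using midranks for ties, as standard statistical packages do) might look like this:

```python
# Spearman's rho: Pearson correlation computed on the ranks of the data,
# with tied values assigned the average (mid) rank.
import math

def midranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend the block of tied values
        avg = (i + j) / 2 + 1           # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    rx, ry = midranks(x), midranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# A perfectly monotone relation gives rho = 1 even when it is nonlinear,
# which is why rank correlation suits ordinal scales like the PDDS and EDSS.
```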

  14. Scaling of charged particle multiplicity in Pb-Pb collisions at SPS energies

    Abreu, M C; Alexa, C; Arnaldi, R; Ataian, M R; Baglin, C; Baldit, A; Bedjidian, Marc; Beolè, S; Boldea, V; Bordalo, P; Borges, G; Bussière, A; Capelli, L; Castanier, C; Castor, J I; Chaurand, B; Chevrot, I; Cheynis, B; Chiavassa, E; Cicalò, C; Claudino, T; Comets, M P; Constans, N; Constantinescu, S; Cortese, P; De Falco, A; De Marco, N; Dellacasa, G; Devaux, A; Dita, S; Drapier, O; Ducroux, L; Espagnon, B; Fargeix, J; Force, P; Gallio, M; Gavrilov, Yu K; Gerschel, C; Giubellino, P; Golubeva, M B; Gonin, M; Grigorian, A A; Grigorian, S; Grossiord, J Y; Guber, F F; Guichard, A; Gulkanian, H R; Hakobyan, R S; Haroutunian, R; Idzik, M; Jouan, D; Karavitcheva, T L; Kluberg, L; Kurepin, A B; Le Bornec, Y; Lourenço, C; Macciotta, P; MacCormick, M; Marzari-Chiesa, A; Masera, M; Masoni, A; Monteno, M; Musso, A; Petiau, P; Piccotti, A; Pizzi, J R; Prado da Silva, W L; Prino, F; Puddu, G; Quintans, C; Ramello, L; Ramos, S; Rato-Mendes, P; Riccati, L; Romana, A; Santos, H; Saturnini, P; Scalas, E; Scomparin, E; Serci, S; Shahoyan, R; Sigaudo, F; Silva, S; Sitta, M; Sonderegger, P; Tarrago, X; Topilskaya, N S; Usai, G L; Vercellin, Ermanno; Villatte, L; Willis, N

    2002-01-01

    The charged particle multiplicity distribution $dN_{ch}/d\eta$ has been measured by the NA50 experiment in Pb--Pb collisions at the CERN SPS. Measurements were done at incident energies of 40 and 158 GeV per nucleon over a broad impact parameter range. The multiplicity distributions are studied as a function of centrality using the number of participating nucleons ($N_{part}$), or the number of binary nucleon--nucleon collisions ($N_{coll}$). Their values at midrapidity exhibit a power law scaling behaviour given by $N_{part}^{1.00}$ and $N_{coll}^{0.75}$ at 158 GeV. Compatible results are found for the scaling behaviour at 40 GeV. The width of the $dN_{ch}/d\eta$ distributions is larger at 158 than at 40 GeV/nucleon and decreases slightly with centrality at both energies. Our results are compared to similar studies performed by other experiments both at the CERN SPS and at RHIC.
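
    Exponents such as $N_{part}^{1.00}$ and $N_{coll}^{0.75}$ come from power-law fits; a least-squares fit of $\log(dN_{ch}/d\eta)$ against $\log N$ recovers the exponent as the slope. The data below are synthetic, for illustration only, not NA50 measurements:

```python
# Power-law fit y = c * x^alpha via least squares in log-log space:
# log y = log c + alpha * log x, so alpha is the slope of the regression.
import math

def power_law_exponent(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    sxx = sum((a - mx) ** 2 for a in lx)
    return sxy / sxx

# Synthetic centrality classes: multiplicity growing as N_coll^0.75.
n_coll = [10, 50, 100, 400, 800]
dn_deta = [3.0 * n ** 0.75 for n in n_coll]
alpha = power_law_exponent(n_coll, dn_deta)   # recovers ~0.75
```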

  15. Managing multiple roles: development of the Work-Family Conciliation Strategies Scale.

    Matias, Marisa; Fontaine, Anne Marie

    2014-07-17

    Juggling the demands of work and family is becoming increasingly difficult in today's world. As dual-earners are now a majority and men's and women's roles both in the workplace and at home have changed, questions have been raised regarding how individuals and couples can balance family and work. Nevertheless, research addressing work-family conciliation strategies is limited to a conflict-driven approach, and context-specific instruments are scarce. This study develops an instrument for assessing how dual-earners manage their multiple roles, detached from a conflict point of view and highlighting the work-family conciliation strategies put forward by these couples. Through qualitative and quantitative procedures, the Work-Family Conciliation Strategies Scale was developed; it comprises five factors (Couple Coping; Positive Attitude Towards Multiple Roles; Planning and Management Skills; Professional Adjustments; and Institutional Support), with good model fit (χ2/df = 1.22; CFI = .90; RMSEA = .04; SRMR = .08) and good reliability coefficients (from .67 to .87). The developed scale contributes to research because of its specificity to the work-family framework and its focus on the proactive nature of balancing work and family roles. The results support further use of this instrument.

  16. Optimizing multiple reliable forward contracts for reservoir allocation using multitime scale streamflow forecasts

    Lu, Mengqian; Lall, Upmanu; Robertson, Andrew W.; Cook, Edward

    2017-03-01

    Streamflow forecasts at multiple time scales provide a new opportunity for reservoir management to address competing objectives. Market instruments such as forward contracts with specified reliability are considered as a tool that may help address the perceived risk associated with the use of such forecasts in lieu of traditional operation and allocation strategies. A water allocation process that enables multiple contracts for water supply and hydropower production with different durations, while maintaining a prescribed level of flood risk reduction, is presented. The allocation process is supported by an optimization model that considers multitime scale ensemble forecasts of monthly streamflow and flood volume over the upcoming season and year, the desired reliability and pricing of proposed contracts for hydropower and water supply. It solves for the size of contracts at each reliability level that can be allocated for each future period, while meeting target end of period reservoir storage with a prescribed reliability. The contracts may be insurable, given that their reliability is verified through retrospective modeling. The process can allow reservoir operators to overcome their concerns as to the appropriate skill of probabilistic forecasts, while providing water users with short-term and long-term guarantees as to how much water or energy they may be allocated. An application of the optimization model to the Bhakra Dam, India, provides an illustration of the process. The issues of forecast skill and contract performance are examined. A field engagement of the idea is useful to develop a real-world perspective and needs a suitable institutional environment.

  17. An Exact Method for the Double TSP with Multiple Stacks

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2010-01-01

    The double travelling salesman problem with multiple stacks (DTSPMS) is a pickup and delivery problem in which all pickups must be completed before any deliveries can be made. The problem originates from a real-life application where a 40 foot container (configured as 3 columns of 11 rows) is used...

  18. An Exact Method for the Double TSP with Multiple Stacks

    Larsen, Jesper; Lusby, Richard Martin; Ehrgott, Matthias

    The double travelling salesman problem with multiple stacks (DTSPMS) is a pickup and delivery problem in which all pickups must be completed before any deliveries can be made. The problem originates from a real-life application where a 40 foot container (configured as 3 columns of 11 rows) is used...

  19. A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations

    Omelchenko, Yuri

    2007-04-01

    A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant-Friedrichs-Lewy (CFL) restriction on a time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.
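
    The core of an event-driven integrator of this kind is a priority queue of per-element update times, so each element advances with its own local time step. A bare-bones scheduler sketch (illustrative names, no physics) is:

```python
# Discrete-event update loop: each cell schedules its own next update at
# t + dt_local; events are processed strictly in time order, so a fast cell
# is updated many times while a slow one is touched rarely -- no global
# CFL-limited step is needed.
import heapq

def run_des(local_dt, t_end):
    """local_dt: dict cell_id -> local time step. Returns update counts."""
    counts = {cell: 0 for cell in local_dt}
    # seed the event queue with each cell's first update time
    queue = [(dt, cell) for cell, dt in local_dt.items()]
    heapq.heapify(queue)
    while queue:
        t, cell = heapq.heappop(queue)
        if t > t_end + 1e-9:
            break                      # all remaining events are later still
        counts[cell] += 1              # <- here the cell's state would advance
        k = counts[cell]
        heapq.heappush(queue, ((k + 1) * local_dt[cell], cell))
    return counts

counts = run_des({"fast": 0.1, "slow": 1.0}, t_end=10.0)
# the fast cell is updated ~10x more often than the slow one
```

    A real DES code would, at each event, also enforce the local causality constraint described above before scheduling the next update.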

  20. Preliminary validation study of the Spanish version of the satisfaction with life scale in persons with multiple sclerosis.

    Lucas-Carrasco, Ramona; Sastre-Garriga, Jaume; Galán, Ingrid; Den Oudsten, Brenda L; Power, Michael J

    2014-01-01

    To assess life satisfaction, using the Satisfaction with Life Scale (SWLS), and to analyze its psychometric properties in multiple sclerosis (MS). Persons with MS (n = 84) recruited at the MS Centre of Catalonia (Spain) completed a battery of subjective assessments including the SWLS, the World Health Organization Quality of Life instrument and disability module (WHOQOL-BREF, WHOQOL-DIS) and the Hospital Anxiety Depression Scale-Depression (HADS-D); sociodemographic and disability status data were also gathered. Psychometric properties of the SWLS were investigated using standard psychometric methods. Internal consistency (Cronbach's alpha coefficient) was 0.84. A factor analysis using a principal components method showed a one-factor structure accounting for 62.6% of the variance. Statistically significant correlations were confirmed between the SWLS and the WHOQOL-BREF, WHOQOL-DIS and HADS-D. SWLS scores were significantly different between a priori defined groups: probable depressed versus nondepressed, and participants perceiving a mild versus severe impact of disability on their lives. To the best of our knowledge, this study is the first to report on the psychometric properties of the SWLS in persons with MS. It might be a valuable tool for appraising persons with MS through the continuum of care. The Spanish version of the Satisfaction with Life Scale (SWLS) is a reliable and valid instrument in Multiple Sclerosis (MS). The SWLS is able to discriminate between participants with low or high scores on depressive symptoms or disability impact on life. The SWLS might be useful through the continuum of care in persons with MS, including rehabilitation services.
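
    The internal-consistency figure reported here is Cronbach's alpha, which can be computed directly from item scores. A small sketch (illustrative data; sample variances, as in most statistics packages):

```python
# Cronbach's alpha:  alpha = k/(k-1) * (1 - sum of item variances / variance
# of total scores),  for k items scored over the same respondents.
from statistics import variance

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var_sum = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Three items that always agree -> perfectly consistent scale (alpha = 1).
alpha = cronbach_alpha([[3, 4, 5, 2], [3, 4, 5, 2], [3, 4, 5, 2]])
```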

  1. Community functional responses to soil and climate at multiple spatial scales: when does intraspecific variation matter?

    Andrew Siefert

    Despite increasing evidence of the importance of intraspecific trait variation in plant communities, its role in community trait responses to environmental variation, particularly along broad-scale climatic gradients, is poorly understood. We analyzed functional trait variation among early-successional herbaceous plant communities (old fields) across a 1200-km latitudinal extent in eastern North America, focusing on four traits: vegetative height, leaf area, specific leaf area (SLA), and leaf dry matter content (LDMC). We determined the contributions of species turnover and intraspecific variation to between-site functional dissimilarity at multiple spatial scales and to community trait responses to edaphic and climatic factors. Among-site variation in community mean trait values and community trait responses to the environment were generated by a combination of species turnover and intraspecific variation, with species turnover making a greater contribution for all traits. The relative importance of intraspecific variation decreased with increasing geographic and environmental distance between sites for SLA and leaf area. Intraspecific variation was most important for responses of vegetative height and for responses to edaphic compared to climatic factors. Individual species displayed strong trait responses to environmental factors in many cases, but these responses were highly variable among species and did not usually scale up to the community level. These findings provide new insights into the role of intraspecific trait variation in plant communities and the factors controlling its relative importance. The contribution of intraspecific variation to community trait responses was greatest at fine spatial scales and along edaphic gradients, while species turnover dominated at broad spatial scales and along climatic gradients.

  2. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales

    Jihoon Oh

    2017-09-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC: 1-month, 0.75; 1-year, 0.85; lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.

  3. Large-scale recovery of an endangered amphibian despite ongoing exposure to multiple stressors

    Knapp, Roland A.; Fellers, Gary M.; Kleeman, Patrick M.; Miller, David A. W.; Vrendenburg, Vance T.; Rosenblum, Erica Bree; Briggs, Cheryl J.

    2016-01-01

    Amphibians are one of the most threatened animal groups, with 32% of species at risk for extinction. Given this imperiled status, is the disappearance of a large fraction of the Earth’s amphibians inevitable, or are some declining species more resilient than is generally assumed? We address this question in a species that is emblematic of many declining amphibians, the endangered Sierra Nevada yellow-legged frog (Rana sierrae). Based on >7,000 frog surveys conducted across Yosemite National Park over a 20-y period, we show that, after decades of decline and despite ongoing exposure to multiple stressors, including introduced fish, the recently emerged disease chytridiomycosis, and pesticides, R. sierrae abundance increased sevenfold during the study and at a rate of 11% per year. These increases occurred in hundreds of populations throughout Yosemite, providing a rare example of amphibian recovery at an ecologically relevant spatial scale. Results from a laboratory experiment indicate that these increases may be in part because of reduced frog susceptibility to chytridiomycosis. The disappearance of nonnative fish from numerous water bodies after cessation of stocking also contributed to the recovery. The large-scale increases in R. sierrae abundance that we document suggest that, when habitats are relatively intact and stressors are reduced in their importance by active management or species’ adaptive responses, declines of some amphibians may be partially reversible, at least at a regional scale. Other studies conducted over similarly large temporal and spatial scales are critically needed to provide insight and generality about the reversibility of amphibian declines at a global scale.

  4. Termites Are Resistant to the Effects of Fire at Multiple Spatial Scales.

    Sarah C Avitabile

    Termites play an important ecological role in many ecosystems, particularly in nutrient-poor arid and semi-arid environments. We examined the distribution and occurrence of termites in the fire-prone, semi-arid mallee region of south-eastern Australia. In addition to periodic large wildfires, land managers use fire as a tool to achieve both asset protection and ecological outcomes in this region. Twelve taxa of termites were detected by using systematic searches and grids of cellulose baits at 560 sites, clustered in 28 landscapes selected to represent different fire mosaic patterns. There was no evidence of a significant relationship between the occurrence of termite species and time-since-fire at the site scale. Rather, the occurrence of species was related to habitat features such as the density of mallee trees and large logs (>10 cm diameter). Species richness was greater in chenopod mallee vegetation on heavier soils in swales than in Triodia mallee vegetation of the sandy dune slopes. At the landscape scale, there was little evidence that the frequency of occurrence of termite species was related to fire, and no evidence that habitat heterogeneity generated by fire influenced termite species richness. The most influential factor at the landscape scale was the environmental gradient represented by average annual rainfall. Although termites may be associated with flammable habitat components (e.g. dead wood), they appear to be buffered from the effects of fire by behavioural traits, including nesting underground, and by the continued availability of dead wood after fire. There is no evidence to support the hypothesis that a fine-scale, diverse mosaic of post-fire age-classes will enhance the diversity of termites. Rather, termites appear to be resistant to the effects of fire at multiple spatial scales.

  5. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales.

    Oh, Jihoon; Yun, Kyongsik; Hwang, Ji-Hyun; Chae, Jeong-Ho

    2017-01-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders ( N  = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable for the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with top-ranked predictors. Our model had an overall accuracy of 93.7% in 1-month, 90.8% in 1-year, and 87.4% in lifetime suicide attempts detection. The area under the receiver operating characteristic curve (AUROC) was the highest for 1-month suicide attempts detection (0.93), followed by lifetime (0.89), and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales similarly contributed to classification performance. Performance on suicide attempts classification was largely maintained when we only used the top five ranked variables for training (AUROC; 1-month, 0.75, 1-year, 0.85, lifetime suicide attempts detection, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.

  6. Analyzing the Impacts of Alternated Number of Iterations in Multiple Imputation Method on Explanatory Factor Analysis

    Duygu KOÇAK

    2017-11-01

    The study aims to identify the effects of the number of iterations used in the multiple imputation method, one of the methods used to cope with missing values, on the results of factor analysis. With this aim, artificial datasets of different sample sizes were created. Values missing at random and values missing completely at random were created in various ratios by deleting data. For the data with values missing at random, a second variable was generated at the ordinal scale level, and datasets with different ratios of missing values were obtained based on the levels of this variable. The data were generated using the “psych” package in R, while the “dplyr” package was used to create code that deleted values according to the predetermined missing value mechanism. Different datasets were generated by applying different numbers of iterations. Explanatory factor analysis was conducted on the completed datasets, and the factors and total explained variances are presented. These values were first evaluated against the number of factors and total explained variance of the complete datasets. The results indicate that the multiple imputation method performs better in the case of values missing at random than on datasets with values missing completely at random. It was also found that increasing the number of iterations for both missing value mechanisms decreases the difference from the results obtained on complete datasets.
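
    The iterative character of such imputation can be sketched in a few lines: missing values are filled, a model is refit on the completed data, and the fill-ins are refined until they stabilize. This is a toy single-variable regression imputation to show why iteration count matters, not the procedure used in the study:

```python
# Iterative regression imputation (toy version): initialize a missing y with
# the observed mean, then alternate (1) fit y ~ x on the completed data and
# (2) re-impute the missing y from the fit. More iterations move the imputed
# value toward the regression implied by the observed cases.

def ols(xs, ys):
    """Ordinary least squares slope and intercept for y ~ x."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
            sum((a - mx) ** 2 for a in xs)
    return slope, my - slope * mx

def impute(x_obs, y_obs, x_miss, n_iter):
    y_fill = sum(y_obs) / len(y_obs)          # starting value: observed mean
    for _ in range(n_iter):
        slope, icpt = ols(x_obs + [x_miss], y_obs + [y_fill])
        y_fill = slope * x_miss + icpt        # refine the imputation
    return y_fill

x_obs = [1.0, 2.0, 3.0, 4.0, 5.0]
y_obs = [2.0, 4.0, 6.0, 8.0, 10.0]            # exactly y = 2x
# with enough iterations the imputed value at x = 10 approaches 20
```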

  7. Evaluation of the treatment efficacy of patients with multiple sclerosis using Barthel index and Expanded Disability Status Scale

    Edina Tanovic

    2014-09-01

    Introduction: Multiple sclerosis (MS) is a chronic, autoimmune and progressive multifocal demyelinating disease of the central nervous system. The aim of this study was to evaluate the rehabilitation of patients with multiple sclerosis using the Barthel index (BI) and the Expanded Disability Status Scale (EDSS). Methods: A clinical observational study was conducted at the clinic for physical medicine and rehabilitation in Sarajevo. We analyzed 49 patients with MS with respect to gender, age and level of disability at admission and discharge; patient disability was estimated using the EDSS scale. The ability of patients to perform activities of daily living was also analyzed according to the BI at admission and discharge. Results: Of the total number of patients (n=49) there were 15 men and 34 women. The average age of female patients was 42.38±13.48 years and of male patients 46.06±9.56 years. EDSS values differed significantly between the beginning and the end of therapy (p=0.001), as did the values of the BI (p=0.001). Conclusion: After inpatient rehabilitation, MS patients show significant recovery and a reduced level of disability; they show greater independence in activities, but rehabilitation demands an individual approach and adjustment to what patients are currently capable of achieving.

  8. Multilevel method for modeling large-scale networks.

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability on real (artificial) data of algorithms that were tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
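
    Of the generators mentioned, R-MAT is easy to sketch: each edge is placed by recursively descending into one of the four quadrants of the adjacency matrix with fixed probabilities. The quadrant probabilities and sizes below are the commonly cited illustrative defaults, not values from this project:

```python
# R-MAT edge generator: for a 2^levels x 2^levels adjacency matrix, each edge
# descends 'levels' times into one of four quadrants chosen with probabilities
# (a, b, c, d); skewed probabilities yield heavy-tailed degree distributions.
import random

def rmat_edges(levels, n_edges, probs=(0.57, 0.19, 0.19, 0.05), seed=42):
    rng = random.Random(seed)
    edges = []
    for _ in range(n_edges):
        row = col = 0
        for _ in range(levels):
            r = rng.random()
            # pick a quadrant: (0,0), (0,1), (1,0) or (1,1)
            if r < probs[0]:
                q = (0, 0)
            elif r < probs[0] + probs[1]:
                q = (0, 1)
            elif r < probs[0] + probs[1] + probs[2]:
                q = (1, 0)
            else:
                q = (1, 1)
            row = 2 * row + q[0]        # halve the matrix along rows ...
            col = 2 * col + q[1]        # ... and along columns
        edges.append((row, col))
    return edges

edges = rmat_edges(levels=10, n_edges=1000)   # nodes 0 .. 1023
```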

  9. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
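The boundary-cancellation idea behind divide-and-conquer schemes like LS3DF can be illustrated with a toy one-dimensional model (a sketch with invented energies, not the actual LS3DF formalism): give every fragment a bulk energy per atom plus a fixed penalty for each artificial cut, and the alternating sum over two-cell and one-cell fragments cancels the boundary penalties exactly.

```python
def fragment_energy(n_atoms, e_bulk=1.7, b_boundary=0.4):
    """Toy fragment energy: a bulk term per atom plus a penalty
    for each of the fragment's two artificial boundaries."""
    return n_atoms * e_bulk + 2.0 * b_boundary

def patched_total(n_cells, atoms_per_cell=4, e_bulk=1.7, b_boundary=0.4):
    """1-D analogue of the patching scheme: sum E(two-cell fragment)
    minus E(one-cell fragment) over all cells (periodic boundary).
    Each cell is then counted net once, and the artificial boundary
    penalties cancel term by term."""
    total = 0.0
    for _ in range(n_cells):
        total += fragment_energy(2 * atoms_per_cell, e_bulk, b_boundary)
        total -= fragment_energy(atoms_per_cell, e_bulk, b_boundary)
    return total

# the direct (unfragmented, periodic) system has no cuts at all
exact = 16 * 4 * 1.7
approx = patched_total(16)
```

In this toy the patched total reproduces the cut-free energy exactly, which is the 1-D analogue of the cancellation of artificial boundary effects described above.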

  10. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor kdet, is introduced for the neutron source multiplication method (NSM). Using kdet, a strategy for finding an appropriate detector position for NSM is also proposed. The NSM is one of the practical subcritical measurement techniques: it requires no special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM is based on steady-state analysis, which makes it well suited for quasi-real-time measurement. It is noted that correction factors play an important role in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by placing the neutron detector at an appropriate position.
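The point-model relation underlying the NSM can be sketched in a few lines (all symbols and numbers are illustrative; the correction factors the paper discusses are lumped into a single factor f):

```python
def count_rate(k_eff, source, efficiency, correction=1.0):
    """Steady-state count rate under source multiplication (point model):
    C = eps * f * S / (1 - k_eff), where f lumps the spatial/spectral
    correction factors."""
    assert k_eff < 1.0, "NSM applies to subcritical systems only"
    return efficiency * correction * source / (1.0 - k_eff)

def k_from_count_rate(C, source, efficiency, correction=1.0):
    """Invert the relation above to estimate k_eff from a measured rate."""
    return 1.0 - efficiency * correction * source / C

# round-trip with illustrative numbers
k = 0.95
C = count_rate(k, source=1.0e6, efficiency=1.0e-3, correction=0.9)
k_back = k_from_count_rate(C, source=1.0e6, efficiency=1.0e-3, correction=0.9)
```

An inaccurate correction factor f propagates directly into the inferred k_eff, which is why the detector position (which determines f) matters.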

  11. Wechsler Adult Intelligence Scale-Fourth Edition performance in relapsing-remitting multiple sclerosis.

    Ryan, Joseph J; Gontkovsky, Samuel T; Kreiner, David S; Tree, Heather A

    2012-01-01

    Forty patients with relapsing-remitting multiple sclerosis (MS) completed the 10 core Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests. Means for age and education were 42.05 years (SD = 9.94) and 14.33 years (SD = 2.40). For all participants, the native language was English. The mean duration of MS diagnosis was 8.17 years (SD = 7.75), and the mean Expanded Disability Status Scale (EDSS; Kurtzke, 1983) score was 3.73 (SD = 1.41), with a range from 2.0 to 6.5. Comparison data from a control group of healthy individuals with similar demographic characteristics, who also completed the WAIS-IV, were provided by the test publisher. Compared to controls, patients with MS earned significantly lower subtest and composite scores. The patients' mean scores were consistently in the low-average to average range, and the patterns of performance across groups did not differ significantly, although there was a trend towards higher scores on the Verbal Comprehension Index (VCI) and lower scores on the Processing Speed Index (PSI). Approximately 78% of patients had actual Full Scale IQs that were significantly lower than preillness, demographically based IQ estimates.

  12. Quantitative evidence for the effects of multiple drivers on continental-scale amphibian declines

    Grant, Evan H. Campbell; Miller, David A. W.; Schmidt, Benedikt R.; Adams, Michael J.; Amburgey, Staci M.; Chambert, Thierry A.; Cruickshank, Sam S.; Fisher, Robert N.; Green, David M.; Hossack, Blake R.; Johnson, Pieter T.J.; Joseph, Maxwell B.; Rittenhouse, Tracy A. G.; Ryan, Maureen E.; Waddle, J. Hardin; Walls, Susan C.; Bailey, Larissa L.; Fellers, Gary M.; Gorman, Thomas A.; Ray, Andrew M.; Pilliod, David S.; Price, Steven J.; Saenz, Daniel; Sadinski, Walt; Muths, Erin L.

    2016-01-01

    Since amphibian declines were first proposed as a global phenomenon over a quarter century ago, the conservation community has made little progress in halting or reversing these trends. The early search for a “smoking gun” was replaced with the expectation that declines are caused by multiple drivers. While field observations and experiments have identified factors leading to increased local extinction risk, evidence for effects of these drivers is lacking at large spatial scales. Here, we use observations of 389 time-series of 83 species and complexes from 61 study areas across North America to test the effects of 4 of the major hypothesized drivers of declines. While we find that local amphibian populations are being lost from metapopulations at an average rate of 3.79% per year, these declines are not related to any particular threat at the continental scale; likewise the effect of each stressor is variable at regional scales. This result - that exposure to threats varies spatially, and populations vary in their response - provides little generality in the development of conservation strategies. Greater emphasis on local solutions to this globally shared phenomenon is needed.
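The reported average loss rate compounds quickly; a two-line check using only the 3.79% figure from the abstract shows roughly a third of local populations disappearing within a decade:

```python
def fraction_remaining(rate_per_year, years):
    """Fraction of local populations remaining after compounding
    an average annual loss rate."""
    return (1.0 - rate_per_year) ** years

# 3.79% per year compounds to roughly a 32% loss over ten years
remaining_10yr = fraction_remaining(0.0379, 10)
```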

  13. Towards a More Biologically-meaningful Climate Characterization: Variability in Space and Time at Multiple Scales

    Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.

    2013-12-01

    Sampling limitations and current modeling capacity justify the common use of mean temperature values in summaries of historical climate and future projections. However, a monthly mean temperature representing a 1-km2 area on the landscape is often unable to capture the climate complexity driving organismal and ecological processes. Estimates of variability in addition to mean values are more biologically meaningful and have been shown to improve projections of range shifts for certain species. Historical analyses of variance and extreme events at coarse spatial scales, as well as coarse-scale projections, show increasing temporal variability in temperature with warmer means. Few studies have considered how spatial variance changes with warming, and analysis for both temporal and spatial variability across scales is lacking. It is unclear how the spatial variability of fine-scale conditions relevant to plant and animal individuals may change given warmer coarse-scale mean values. A change in spatial variability will affect the availability of suitable habitat on the landscape and thus, will influence future species ranges. By characterizing variability across both temporal and spatial scales, we can account for potential bias in species range projections that use coarse climate data and enable improvements to current models. In this study, we use temperature data at multiple spatial and temporal scales to characterize spatial and temporal variability under a warmer climate, i.e., increased mean temperatures. Observational data from the Sierra Nevada (California, USA), experimental climate manipulation data from the eastern and western slopes of the Rocky Mountains (Colorado, USA), projected CMIP5 data for California (USA) and observed PRISM data (USA) allow us to compare characteristics of a mean-variance relationship across spatial scales ranging from sub-meter2 to 10,000 km2 and across temporal scales ranging from hours to decades. 
Preliminary spatial analysis at

  14. Determination of the QCD color factor ratio CA/CF from the scale dependence of multiplicity in three jet events

    Gary, J W

    2000-01-01

    I examine the determination of the QCD color factor ratio CA/CF from the scale evolution of particle multiplicity in e+e- three jet events. I fit an analytic expression for the multiplicity in three jet events to event samples generated with QCD multihadronic event generators. I demonstrate that a one-parameter fit of CA/CF yields the expected result CA/CF = 2.25 in the limit of asymptotically large energies if energy conservation is included in the calculation. In contrast, a two-parameter fit of CA/CF and a constant offset to the gluon jet multiplicity, proposed in a recent study, does not yield CA/CF = 2.25 in this limit. I apply the one-parameter fit method to recently published data of the DELPHI experiment at LEP and determine the effective value of CA/CF from this technique, at the finite energy of the Z0 boson, to be 1.74 ± 0.03 ± 0.10, where the first uncertainty is statistical and the second is systematic.
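The expected value 2.25 is fixed by group theory: for an SU(Nc) gauge theory the adjoint and fundamental Casimirs are CA = Nc and CF = (Nc^2 - 1)/(2 Nc), so for QCD (Nc = 3) the ratio is 3 / (4/3) = 9/4.

```python
def color_factors(n_c):
    """Casimirs of SU(N_c): adjoint C_A and fundamental C_F."""
    c_a = float(n_c)
    c_f = (n_c**2 - 1) / (2.0 * n_c)
    return c_a, c_f

c_a, c_f = color_factors(3)  # QCD: C_A = 3, C_F = 4/3
ratio = c_a / c_f            # 2.25
```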

  15. Multiple atomic scale solid surface interconnects for atom circuits and molecule logic gates

    Joachim, C; Martrou, D; Gauthier, S; Rezeq, M; Troadec, C; Jie Deng; Chandrasekhar, N

    2010-01-01

    The scientific and technical challenges involved in building the planar electrical connection of an atomic scale circuit to N electrodes (N > 2) are discussed. The practical, laboratory scale approach explored today to assemble a multi-access atomic scale precision interconnection machine is presented. Depending on the surface electronic properties of the targeted substrates, two types of machines are considered: on moderate surface band gap materials, scanning tunneling microscopy can be combined with scanning electron microscopy to provide an efficient navigation system, while on wide surface band gap materials, atomic force microscopy can be used in conjunction with optical microscopy. The size of the planar part of the circuit should be minimized on moderate band gap surfaces to avoid current leakage, while this requirement does not apply to wide band gap surfaces. These constraints impose different methods of connection, which are thoroughly discussed, in particular regarding the recent progress in single atom and molecule manipulations on a surface.

  16. Incipient multiple fault diagnosis in real time with applications to large-scale systems

    Chung, H.Y.; Bien, Z.; Park, J.H.; Seon, P.H.

    1994-01-01

    By using a modified signed directed graph (SDG) together with distributed artificial neural networks and a knowledge-based system, a method of incipient multi-fault diagnosis is presented for large-scale physical systems with complex pipes and instrumentation such as valves, actuators, sensors, and controllers. The proposed method is designed to (1) make real-time incipient fault diagnosis possible for large-scale systems, (2) perform the fault diagnosis not only in the steady-state case but also in the transient case by using the concept of fault propagation time, which is newly adopted in the SDG model, (3) provide highly reliable diagnosis results and an explanation capability for diagnosed faults, as in an expert system, and (4) diagnose pipe damage such as leaks, breaks, or throttling. The method is applied to the diagnosis of a pressurizer in the Kori Nuclear Power Plant (NPP) unit 2 in Korea under a transient condition, and the results show satisfactory performance for incipient multi-fault diagnosis of such a large-scale system in a real-time manner.

  17. DL-sQUAL: A Multiple-Item Scale for Measuring Service Quality of Online Distance Learning Programs

    Shaik, Naj; Lowe, Sue; Pinegar, Kem

    2006-01-01

    Education is a service involving a multiplicity of student interactions over time and across multiple touch points. Quality teaching needs to be supplemented by consistently high-quality supporting services for programs to succeed in the competitive distance learning landscape. ServQual and e-SQ scales have been proposed for measuring quality of traditional…

  18. Methods of Scientific Research: Teaching Scientific Creativity at Scale

    Robbins, Dennis; Ford, K. E. Saavik

    2016-01-01

    We present a scaling-up plan for AstroComNYC's Methods of Scientific Research (MSR), a course designed to improve undergraduate students' understanding of science practices. The course format and goals, notably the open-ended, hands-on, investigative nature of the curriculum, are reviewed. We discuss how the course's interactive pedagogical techniques empower students to learn creativity within the context of experimental design and control-of-variables thinking. To date the course has been offered to a limited number of students in specific programs. The goal of broadly implementing MSR is to reach more students, earlier in their education, with the specific purpose of supporting and improving retention of students pursuing STEM careers. However, we also discuss challenges in preserving the effectiveness of the teaching and learning experience at scale.

  19. Measuring the black hole mass in ultraluminous X-ray sources with the X-ray scaling method

    Jang, I.; Gliozzi, M.; Satyapal, S.; Titarchuk, L.

    2018-01-01

    In our recent work, we demonstrated that a novel X-ray scaling method, originally introduced for Galactic black holes (BH), could be reliably extended to estimate the mass of supermassive black holes accreting at moderate to high level. Here, we apply this X-ray scaling method to ultraluminous X-ray sources (ULXs) to constrain their MBH. Using 49 ULXs with multiple XMM-Newton observations, we infer that ULXs host both stellar mass BHs and intermediate mass BHs. The majority of the sources of our sample seem to be consistent with the hypothesis of highly accreting massive stellar BHs with MBH ∼ 100 M⊙. Our results are in general agreement with the MBH values obtained with alternative methods, including model-independent variability methods. This suggests that the X-ray scaling method is an actual scale-independent method that can be applied to all BH systems accreting at moderate-high rate.

  20. BOX-COX REGRESSION METHOD IN TIME SCALING

    ATİLLA GÖKTAŞ

    2013-06-01

    The Box-Cox regression method, with power transformations λj for j = 1, 2, ..., k, can be used when the dependent variable and the error term of a linear regression model do not satisfy the continuity and normality assumptions. We discuss how the smallest mean square error is obtained by choosing the optimum power transformation λj of Y. Box-Cox regression is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms in a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of the Box-Cox regression method are discussed in the differentiation and differential analysis of the time scale concept.
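The transformation at the core of the method is compact; a minimal sketch of the Box-Cox family (pure Python, with the log limit at λ = 0):

```python
import math

def box_cox(y, lam):
    """Box-Cox power transformation of a positive value y.
    The lam -> 0 limit is log(y); lam = 1 is just an affine shift."""
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y**lam - 1.0) / lam

# sanity checks: lam = 1 shifts by -1, lam -> 0 approaches log(y)
shifted = box_cox(5.0, 1.0)   # 4.0
logged = box_cox(5.0, 0.0)    # log(5)
```

In practice λ is chosen to minimize the regression's mean square error (or maximize the log-likelihood), as discussed in the abstract.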

  1. Change Analysis and Decision Tree Based Detection Model for Residential Objects across Multiple Scales

    CHEN Liyan

    2018-03-01

    Change analysis and detection play an important role in updating multi-scale databases. When an updated larger-scale dataset is overlaid on a to-be-updated smaller-scale dataset, attention usually focuses on temporal changes caused by the evolution of spatial entities; little attention is paid to representation changes introduced by map generalization. Using polygonal building data as an example, this study examines the changes from different perspectives, such as the reasons for their occurrence and the forms they take. Based on this knowledge, we employ a decision tree, a machine learning technique, to establish a change detection model. The aim of the proposed model is to distinguish temporal changes, which need to be applied as updates to the smaller-scale dataset, from representation changes. The proposed method is validated through tests using real-world building data from Guangzhou city. The experimental results show an overall change detection precision of more than 90%, which indicates that our method is effective at identifying changed objects.
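The classifier's role can be sketched with a hand-rolled decision stump (the feature names and thresholds below are invented for illustration; the paper's tree is learned from the Guangzhou data):

```python
def classify_change(area_ratio, shape_similarity, offset_m):
    """Toy decision rules separating temporal changes (real updates to
    apply) from representation changes caused by map generalization.
    Features: new/old footprint area ratio, a shape similarity score
    in [0, 1], and centroid offset in meters. Thresholds illustrative."""
    if shape_similarity > 0.9 and offset_m < 2.0:
        return "representation"   # same building, slightly generalized
    if area_ratio < 0.5 or area_ratio > 2.0:
        return "temporal"         # demolished, rebuilt, or extended
    return "temporal" if offset_m > 10.0 else "representation"
```

A learned tree replaces these hand-set thresholds with splits chosen to maximize purity on labeled examples.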

  2. Large-scale diversity of slope fishes: pattern inconsistency between multiple diversity indices.

    Gaertner, Jean-Claude; Maiorano, Porzia; Mérigot, Bastien; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro

    2013-01-01

    Large-scale studies of the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3'-45°7' N; 5°3'W-28°E). We analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of the spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contradict both the traditional view based on the hump-shaped theory for bathymetric patterns and the commonly accepted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial area, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness.
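The complementarity/redundancy question is easy to make concrete: different indices can disagree on the same abundance data. A minimal sketch computing three classic indices on two toy hauls (illustrative numbers):

```python
import math

def richness(abundances):
    """Number of species present."""
    return sum(1 for n in abundances if n > 0)

def shannon(abundances):
    """Shannon index H = -sum p_i ln p_i."""
    total = sum(abundances)
    ps = [n / total for n in abundances if n > 0]
    return -sum(p * math.log(p) for p in ps)

def simpson(abundances):
    """Simpson diversity 1 - sum p_i^2."""
    total = sum(abundances)
    return 1.0 - sum((n / total) ** 2 for n in abundances if n > 0)

even = [10, 10, 10, 10]   # four equally abundant species
skewed = [37, 1, 1, 1]    # same richness, one dominant species
```

Richness is identical for the two hauls while Shannon and Simpson differ sharply, which is exactly the kind of complementarity the index-comparison analysis above quantifies.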

  3. Data-Driven Approach for Analyzing Hydrogeology and Groundwater Quality Across Multiple Scales.

    Curtis, Zachary K; Li, Shu-Guang; Liao, Hua-Sheng; Lusch, David

    2017-08-29

    Recent trends of assimilating water well records into statewide databases provide a new opportunity for evaluating the spatial dynamics of groundwater quality and quantity. However, these datasets are rarely analyzed rigorously to address larger scientific problems, because they are massive and of relatively low quality. We develop an approach for utilizing well databases to analyze physical and geochemical aspects of groundwater systems, and apply it to a multiscale investigation of the sources and dynamics of chloride (Cl-) in the near-surface groundwater of the Lower Peninsula of Michigan. Nearly 500,000 static water levels (SWLs) were critically evaluated, extracted, and analyzed to delineate long-term, average groundwater flow patterns using a nonstationary kriging technique at the basin scale (i.e., across the entire peninsula). Two regions identified as major basin-scale discharge zones, the Michigan and Saginaw Lowlands, were further analyzed with regional- and local-scale SWL models. Groundwater valleys ("discharge" zones) and mounds ("recharge" zones) were identified for all models, and the proportions of wells with elevated Cl- concentrations in each zone were calculated, visualized, and compared. Concentrations in discharge zones, where groundwater is expected to flow primarily upwards, are consistently and significantly higher than those in recharge zones. A synoptic sampling campaign in the Michigan Lowlands revealed that concentrations generally increase with depth, a trend noted in previous studies of the Saginaw Lowlands. These strong, consistent SWL and Cl- distribution patterns across multiple scales suggest that a deep source (i.e., Michigan brines) is the primary cause of the elevated chloride concentrations observed in discharge areas across the peninsula. © 2017, National Ground Water Association.

  4. Color, Scale, and Rotation Independent Multiple License Plates Detection in Videos and Still Images

    Narasimha Reddy Soora

    2016-01-01

    Most existing license plate (LP) detection systems have shown significant development in the processing of images, with restrictions related to environmental conditions and plate variations. With increased mobility and internationalization, there is a need for a universal LP detection system that can handle multiple LPs from many countries and any vehicle, in an open environment and all weather conditions, with different plate variations. This paper presents a novel LP detection method using different clustering techniques based on the geometrical properties of the LP characters, and proposes a new character extraction method for noisy or missed character components of the LP caused by noise between LP characters and the LP border. The proposed method detects multiple LPs with different plate variations from an input image or video, under different environmental and weather conditions, by exploiting the geometrical properties of the set of characters in the LP. The proposed method is tested using the standard media-lab and Application Oriented License Plate (AOLP) benchmark LP recognition databases and achieved success rates of 97.3% and 93.7%, respectively. The results clearly indicate that the proposed approach is comparable to previously published methods, which evaluated their performance on publicly available benchmark LP databases.

  5. A Divide and Conquer Strategy for Scaling Weather Simulations with Multiple Regions of Interest

    Preeti Malakar

    2013-01-01

    Accurate and timely prediction of weather phenomena, such as hurricanes and flash floods, requires high-fidelity, compute-intensive simulations of multiple finer regions of interest within a coarse simulation domain. Current weather applications execute these nested simulations sequentially using all the available processors, which is sub-optimal due to their sub-linear scalability. In this work, we present a strategy for parallel execution of multiple nested domain simulations based on partitioning the 2-D processor grid into disjoint rectangular regions associated with each domain. We propose a novel combination of performance prediction, processor allocation methods, and topology-aware mapping of the regions on torus interconnects. Experiments on IBM Blue Gene systems using WRF show that the proposed strategies result in performance improvements of up to 33% with topology-oblivious mapping and up to an additional 7% with topology-aware mapping over the default sequential strategy.
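The allocation step can be sketched as splitting the processor grid into disjoint strips with widths proportional to each nested domain's predicted workload (a simplification of the paper's rectangular partitioning; all numbers are illustrative):

```python
def partition_grid(p_cols, workloads):
    """Assign disjoint column strips of a 2-D processor grid to domains,
    with widths proportional to predicted workload (at least one column
    each). Returns (start, end) column ranges that exactly tile the grid."""
    total = sum(workloads)
    widths = [max(1, round(p_cols * w / total)) for w in workloads]
    # repair rounding drift so the strips exactly tile the grid
    while sum(widths) > p_cols:
        widths[widths.index(max(widths))] -= 1
    while sum(widths) < p_cols:
        widths[widths.index(min(widths))] += 1
    strips, start = [], 0
    for w in widths:
        strips.append((start, start + w))
        start += w
    return strips

# a domain with 3x the predicted work gets 3/4 of the columns
strips = partition_grid(16, [3.0, 1.0])
```

The full scheme also splits along the second grid dimension and maps each rectangle onto the torus topology, which this strip sketch omits.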

  6. Ozone flux of an urban orange grove: multiple scaled measurements and model comparisons

    Alstad, K. P.; Grulke, N. E.; Jenerette, D. G.; Schilling, S.; Marrett, K.

    2009-12-01

    There is significant uncertainty about the ozone sink properties of the phytosphere due to a complexity of interactions and feedbacks with biotic and abiotic factors. Improved understanding of the controls on ozone fluxes is critical to estimating and regulating the total ozone budget. Ozone exchanges of an orange orchard within the city of Riverside, CA were examined using a multiple-scaled approach. We assess the carbon, water, and energy budgets at the stand to leaf level to elucidate the mechanisms controlling the variability in ozone fluxes of this agro-ecosystem. The two initial goals of the study were (1) to consider variations in, and controls on, the ozone fluxes within the canopy; and (2) to examine different modeling and scaling approaches for totaling the ozone fluxes of this orchard. Current understanding of the total ozone flux between the near-ground atmosphere and the phytosphere (F-total) includes a fraction absorbed by vegetation through stomatal uptake (F-absorb) and fractional components deposited on the external, non-stomatal surfaces of the vegetation (F-external) and soil (F-soil). Multiplicative stomatal-conductance models have been commonly used to estimate F-absorb, since this flux cannot be measured directly. We approach F-absorb estimates for this orange orchard using chamber measurements of leaf stomatal conductance, as well as non-chamber sap-conductance measurements collected on branches of varied aspect and sun/shade conditions within the canopy. We use two approaches to measure the F-total of this stand. Gradient flux profiles were measured using slow-response ozone sensors collecting within and above the canopy (4.6 m) and at the top of the tower (8.5 m). In addition, an eddy-covariance system fitted with a high-frequency chemiluminescence ozone sensor will be deployed (8.5 m). 
Preliminary ozone gradient flux profiles demonstrate a substantial ozone sink strength of this orchard, with diurnal concentration differentials

  7. Measuring the impact of multiple sclerosis on psychosocial functioning: the development of a new self-efficacy scale.

    Airlie, J; Baker, G A; Smith, S J; Young, C A

    2001-06-01

    To develop a scale to measure self-efficacy in neurologically impaired patients with multiple sclerosis and to assess the scale's psychometric properties. Cross-sectional questionnaire study in a clinical setting, with the retest questionnaire returned by mail after completion at home. Regional multiple sclerosis (MS) outpatient clinic or the Clinical Trials Unit (CTU) at a large neuroscience centre in the UK. One hundred persons with MS attending the Walton Centre for Neurology and Neurosurgery and Clatterbridge Hospital, Wirral, as outpatients. Cognitively impaired patients were excluded at an initial clinic assessment. Patients were asked to provide demographic data and complete the self-efficacy scale along with the following validated scales: Hospital Anxiety and Depression Scale, Rosenberg Self-Esteem Scale, Impact, Stigma and Mastery Scales, and Rankin Scale. The Rankin Scale and Barthel Index were also assessed by the physician. A new 11-item self-efficacy scale was constructed, consisting of the two domains of control and personal agency. The internal consistency of the scale was confirmed using Cronbach's alpha (alpha = 0.81). The test-retest reliability of the scale over two weeks was acceptable, with an intraclass correlation coefficient of 0.79. Construct validity was investigated using Pearson's product moment correlation coefficient, resulting in significant correlations with depression (r = -0.52), anxiety (r = -0.50), and mastery (r = 0.73). Multiple regression analysis demonstrated that these factors accounted for 70% of the variance in scores on the self-efficacy scale, with scores on mastery, anxiety, and perceived disability being independently significant. Assessment of the psychometric properties of this new self-efficacy scale suggests that it possesses good validity and reliability in patients with multiple sclerosis.
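Cronbach's alpha is computed directly from the item-score matrix; a minimal pure-Python sketch on toy responses (not the study's data):

```python
def cronbach_alpha(items):
    """items: list of per-item score lists, all of equal length.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    using sample (n-1) variances throughout."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1.0 - item_var / variance(totals))

# perfectly parallel items give alpha = 1; weakly related items give less
alpha_perfect = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```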

  8. Understanding the Patterns and Drivers of Air Pollution on Multiple Time Scales: The Case of Northern China

    Liu, Yupeng; Wu, Jianguo; Yu, Deyong; Hao, Ruifang

    2018-06-01

    China's rapid economic growth during the past three decades has resulted in a number of environmental problems, including the deterioration of air quality. It is necessary to better understand how the spatial pattern of air pollutants varies with time scale and what drives these changes. To address these questions, this study focused on one of the most heavily air-polluted areas in North China. We first quantified the spatial pattern of air pollution, and then systematically examined the relationships of air pollution to several socioeconomic and climatic factors using the constraint line method, correlation analysis, and stepwise regression on decadal, annual, and seasonal scales. Our results indicate that PM2.5 was the dominant air pollutant in the Beijing-Tianjin-Hebei region, while PM2.5 and PM10 were both important pollutants in the Agro-pastoral Transitional Zone (APTZ) region. Our statistical analyses suggest that energy consumption and industrial gross domestic product (GDP) were the most important factors for air pollution on the decadal scale, but the impacts of climatic factors could also be significant. On the annual and seasonal scales, high wind speed, low relative humidity, and long sunshine duration constrained PM2.5 accumulation; low wind speed and high relative humidity constrained PM10 accumulation; and short sunshine duration and high wind speed constrained O3 accumulation. Our study shows that analyses on multiple temporal scales are not only necessary to determine the key drivers of air pollution, but also insightful for understanding its spatial patterns, which is important for urban planning and air pollution control.
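The constraint line method mentioned above amounts to tracing an upper envelope: bin the driver variable and take a high quantile of the response within each bin. A minimal sketch on illustrative data:

```python
def constraint_line(xs, ys, n_bins=4, quantile=0.9):
    """Upper-envelope ('constraint line') estimate: for each x-bin,
    report the given quantile of y, approximating the ceiling that the
    driver places on the response. Returns (bin_center, envelope_y)."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0
    line = []
    for b in range(n_bins):
        in_bin = sorted(y for x, y in zip(xs, ys)
                        if lo + b * width <= x < lo + (b + 1) * width
                        or (b == n_bins - 1 and x == hi))
        if in_bin:
            idx = min(len(in_bin) - 1, int(quantile * len(in_bin)))
            line.append((lo + (b + 0.5) * width, in_bin[idx]))
    return line
```

Scatter below the envelope reflects other limiting factors, which is why the envelope (rather than a mean regression) is used to isolate a single constraint such as wind speed.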

  9. Surface temperature and evapotranspiration: application of local scale methods to regional scales using satellite data

    Seguin, B.; Courault, D.; Guerif, M.

    1994-01-01

    Remotely sensed surface temperatures have proven useful for monitoring evapotranspiration (ET) rates and crop water use because of their direct relationship with sensible and latent energy exchange processes. Procedures for using the thermal infrared (IR) obtained with hand-held radiometers deployed at ground level are now well established and even routine for many agricultural research and management purposes. The availability of IR from meteorological satellites at scales from 1 km (NOAA-AVHRR) to 5 km (METEOSAT) permits extension of local, ground-based approaches to larger scale crop monitoring programs. Regional observations of surface minus air temperature (i.e., the stress degree day) and remote estimates of daily ET were derived from satellite data over sites in France, the Sahel, and North Africa and summarized here. Results confirm that similar approaches can be applied at local and regional scales despite differences in pixel size and heterogeneity. This article analyzes methods for obtaining these data and outlines the potential utility of satellite data for operational use at the regional scale. (author)
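The stress-degree-day approach reduces, in its simplest daily form, to a Seguin-Itier style regression, ETd = Rn - A - B(Ts - Ta); the coefficients below are illustrative, not values from the article:

```python
def daily_et(net_radiation_mm, ts_minus_ta, a=0.0, b=0.25):
    """Simplified Seguin-Itier style estimate of daily ET (mm/day) from
    daily net radiation (in mm of water equivalent) and the midday
    surface-minus-air temperature difference (K). The coefficients a
    and b must be calibrated empirically; values here are illustrative."""
    return net_radiation_mm - a - b * ts_minus_ta

# a hotter canopy (larger Ts - Ta) signals water stress: less water is
# evaporating, so estimated ET falls below the radiation-limited rate
et_stressed = daily_et(6.0, 8.0)
et_unstressed = daily_et(6.0, 0.0)
```

The satellite extension described above replaces the hand-held radiometer's Ts with AVHRR or METEOSAT thermal-infrared retrievals at 1-5 km pixels.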

  10. A multiple-group measurement scale for interprofessional collaboration: Adaptation and validation into Italian and German languages.

    Vittadello, Fabio; Mischo-Kelling, Maria; Wieser, Heike; Cavada, Luisa; Lochner, Lukas; Naletto, Carla; Fink, Verena; Reeves, Scott

    2018-05-01

    This article presents a study that aimed to validate a translation of a multiple-group measurement scale for interprofessional collaboration (IPC). We used survey data gathered over a three-month period as part of a mixed methods study that explored the nature of IPC in Northern Italy. Following a translation from English into Italian and German, the survey was distributed online to over 5,000 health professionals (dieticians, nurses, occupational therapists, physicians, physiotherapists, speech therapists, and psychologists) based in one regional health trust. In total, 2,238 health professionals completed the survey. Based on the original scale, three principal components were extracted and confirmed as relevant factors for IPC (communication, accommodation, and isolation). A confirmatory analysis (3-factor model) was applied to the data of physicians and nurses by language group. In conclusion, the validation of the German and Italian IPC scales has provided an instrument of acceptable reliability and validity for the assessment of IPC involving physicians and nurses.

  11. Multiple-Time-Scales Hierarchical Frequency Stability Control Strategy of Medium-Voltage Isolated Microgrid

    Zhao, Zhuoli; Yang, Ping; Guerrero, Josep M.

    2016-01-01

    In this paper, an islanded medium-voltage (MV) microgrid located on Dongao Island is presented, which integrates renewable-energy-based distributed generations (DGs), an energy storage system (ESS), and local loads. An isolated microgrid, with no connection to a main grid to support the frequency......, is more complex to control and manage. Thus, in order to maintain frequency stability on multiple time scales, a hierarchical control strategy is proposed. The proposed control architecture divides the system frequency into three zones: (A) stable zone, (B) precautionary zone and (C) emergency zone...... of Zone B. Theoretical analysis, time-domain simulation and field test results under various conditions and scenarios in the Dongao Island microgrid are presented to prove the validity of the introduced control strategy....

  12. Measurement with multiple indicators and psychophysical scaling in the context of Fishbein and Ajzen's theory of reasoned action

    van den Putte, B.; Saris, W.E.; Hoogstraten, J.

    1995-01-01

    Two experiments were carried out to test the theory of reasoned action of Fishbein and Ajzen. The measurements were done using two category scales and two psychophysical scales. No consistent difference in results was found between the four modalities. However, if the latter were used as multiple

  13. Estimation of subcriticality by neutron source multiplication method

    Sakurai, Kiyoshi; Suzaki, Takenori; Arakawa, Takuya; Naito, Yoshitaka

    1995-03-01

    Subcritical cores were constructed in a core tank of the TCA by arraying 2.6% enriched UO₂ fuel rods into n×n square lattices of 1.956 cm pitch. Vertical distributions of the neutron count rates for the fifteen subcritical cores (n=17, 16, 14, 11, 8) with different water levels were measured at 5 cm intervals with ²³⁵U micro-fission counters at the in-core and out-of-core positions, with a ²⁵²Cf neutron source arranged near the core center. The continuous-energy Monte Carlo code MCNP-4A was used for the calculation of neutron multiplication factors and neutron count rates. In this study, the important conclusions are as follows: (1) Differences between the neutron multiplication factors resulting from the exponential experiment and from MCNP-4A are below 1% in most cases. (2) Standard deviations of neutron count rates calculated from MCNP-4A with 500,000 histories are 5-8%. The calculated neutron count rates are consistent with the measured ones. (author)
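The principle behind the source multiplication measurement above can be sketched simply: for a fixed source in a subcritical core, the detector count rate scales as C ∝ S/(1 − k_eff), so a state's k_eff can be inferred from its count rate relative to a reference state of known k. The numbers below are illustrative, not TCA data:

```python
# Source multiplication: C ∝ S / (1 - k_eff) for a subcritical core.
# Given a reference state (c_ref, k_ref), infer k for another configuration
# from the ratio of count rates.
def keff_from_counts(c, c_ref, k_ref):
    return 1.0 - (c_ref / c) * (1.0 - k_ref)

k_ref = 0.95      # assumed known reference multiplication factor
c_ref = 1000.0    # count rate in the reference configuration
c_new = 2500.0    # higher count rate -> core closer to critical
k_new = keff_from_counts(c_new, c_ref, k_ref)  # 1 - 0.4*0.05 = 0.98
```

This one-detector picture ignores spatial effects, which is why the article compares measured count-rate distributions against MCNP-4A rather than relying on the point-kinetics ratio alone.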

  14. Experimental methods for laboratory-scale ensilage of lignocellulosic biomass

    Tanjore, Deepti; Richard, Tom L.; Marshall, Megan N.

    2012-01-01

    Anaerobic fermentation is a potential storage method for lignocellulosic biomass in biofuel production processes. Since biomass is seasonally harvested, stocks are often dried or frozen at laboratory scale prior to fermentation experiments. Such treatments cause irreversible changes in the plant cells, influencing the initial state of the biomass and thereby the progression of the fermentation process itself. This study investigated the effects of drying, refrigeration, and freezing relative to freshly harvested corn stover in lab-scale ensilage studies. Particle sizes, as well as post-ensilage drying temperatures for compositional analysis, were tested to identify the appropriate sample processing methods. After 21 days of ensilage, the lowest pH value (3.73 ± 0.03), lowest dry matter loss (4.28 ± 0.26 g·100 g⁻¹ DM), and highest water-soluble carbohydrate (WSC) concentration (7.73 ± 0.26 g·100 g⁻¹ DM) were observed in the control biomass (stover ensiled within 12 h of harvest without any treatment). WSC concentration was significantly reduced in samples refrigerated for 7 days prior to ensilage (3.86 ± 0.49 g·100 g⁻¹ DM). However, biomass frozen prior to ensilage produced statistically similar results to the fresh biomass control, especially in treatments with cell-wall-degrading enzymes. Grinding to decrease particle size reduced the variance amongst replicates for pH values of individual reactors to a minor extent. Drying biomass prior to extraction of WSCs resulted in degradation of the carbohydrates and a reduced estimate of their concentrations. The methods developed in this study can be used to improve ensilage experiments and thereby help in developing ensilage as a storage method for biofuel production. -- Highlights: ► Laboratory-scale methods to assess the influence of ensilage on biofuel production. ► Drying, freezing, and refrigeration of biomass influenced microbial fermentation. ► Freshly ensiled stover exhibited

  15. Distributed cerebellar plasticity implements generalized multiple-scale memory components in real-robot sensorimotor tasks

    Claudia eCasellato

    2015-02-01

    The cerebellum plays a crucial role in motor learning and it acts as a predictive controller. Modeling it and embedding it into sensorimotor tasks allows us to create functional links between plasticity mechanisms, neural circuits and behavioral learning. Moreover, if applied to real-time control of a neurorobot, the cerebellar model has to deal with a real, noisy and changing environment, thus showing its robustness and effectiveness in learning. A biologically inspired cerebellar model with distributed plasticity, both at cortical and nuclear sites, has been used. Two cerebellum-mediated paradigms have been designed: an associative Pavlovian task and a vestibulo-ocular reflex, with multiple sessions of acquisition and extinction and with different stimuli and perturbation patterns. The cerebellar controller succeeded in generating conditioned responses and finely tuned eye movement compensation, thus reproducing human-like behaviors. Through a productive plasticity transfer from cortical to nuclear sites, the distributed cerebellar controller showed in both tasks the capability to optimize learning on multiple time scales, to store motor memory and to effectively adapt to dynamic ranges of stimuli.

  16. Does the Assessment of Recovery Capital scale reflect a single or multiple domains?

    Arndt, Stephan; Sahker, Ethan; Hedden, Suzy

    2017-01-01

    The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Data are from a cross-sectional de-identified existing program evaluation information data set with 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and defining weights (0.41-0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10-domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components suggesting that a few of the domains may warrant enrichment. Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains.
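The "one general factor" finding above rests on a standard eigenvalue check: if a single latent factor drives all ten domain scores, only the first eigenvalue of their correlation matrix should exceed one. A minimal sketch on synthetic domain scores (the loadings and sample size are illustrative assumptions, not the study's data):

```python
import numpy as np

# Generate ten synthetic domain scores driven by one latent factor plus noise,
# then inspect the eigenvalues of their correlation matrix.
rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)
domains = np.column_stack([0.8 * latent + 0.4 * rng.normal(size=n)
                           for _ in range(10)])

corr = np.corrcoef(domains, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending
share = eigvals[0] / eigvals.sum()   # variance share of the first component
```

With these loadings the first eigenvalue dominates and the remaining nine fall below one, mirroring the single-eigenvalue-greater-than-one pattern reported for the ARC domains.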

  17. A feature point identification method for positron emission particle tracking with multiple tracers

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.
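The inverse-square-root error scaling with the number of LORs has a simple geometric analogue: locating one point from many noisy lines of response. This sketch is a generic least-squares line-intersection estimate, not the FPI algorithm from the article; all values are illustrative:

```python
import numpy as np

# Each LOR is a line (point a, unit direction d). The least-squares position
# p minimizing summed squared distance to the lines solves
#   [sum(I - d d^T)] p = sum(I - d d^T) a.
rng = np.random.default_rng(1)
true_pos = np.array([1.0, -2.0, 0.5])

def locate(n_lors, jitter=0.05):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for _ in range(n_lors):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                       # random LOR direction
        a = true_pos + jitter * rng.normal(size=3)   # noisy point on the LOR
        P = np.eye(3) - np.outer(d, d)               # projector off the line
        A += P
        b += P @ a
    return np.linalg.solve(A, b)

err_small = np.linalg.norm(locate(50) - true_pos)
err_large = np.linalg.norm(locate(5000) - true_pos)  # 100x more LORs
```

Averaging over 100× more LORs shrinks the position error by roughly a factor of ten, the 1/√(number of LORs) behavior the abstract reports.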

  18. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo lead to improved stability of MTS and allow for achieving larger step sizes in the simulation of complex systems.
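The force-splitting idea underlying MTS can be sketched with a toy integrator in the r-RESPA spirit: slow forces are kicked with a long step while fast forces are sub-cycled with a short inner step. This omits the shadow Hamiltonian, momentum update, and Monte Carlo machinery of MTS-GSHMC entirely; system and constants are illustrative:

```python
# One outer MTS step: half slow kick, n_sub velocity-Verlet sub-steps with
# the fast force, closing half slow kick. Toy 1-D system, unit mass.
def respa_step(x, v, dt, n_sub, f_fast, f_slow, m=1.0):
    v += 0.5 * dt * f_slow(x) / m
    h = dt / n_sub
    for _ in range(n_sub):                 # sub-cycle the fast force
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m
    return x, v

k_fast, k_slow = 100.0, 1.0                # stiff + soft springs (assumed)
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x

x, v = 1.0, 0.0
e0 = 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.05, n_sub=10, f_fast=f_fast, f_slow=f_slow)
e1 = 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x
drift = abs(e1 - e0) / e0                  # relative energy drift
```

Because the splitting is symplectic, the energy error stays bounded over long runs even though the slow force is evaluated ten times less often than the fast one; this is the efficiency gain MTS trades on.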

  19. Application of multiple timestep integration method in SSC

    Guppy, J.G.

    1979-01-01

    The thermohydraulic transient simulation of an entire LMFBR system is, by its very nature, complex. Physically, the entire plant consists of many subsystems which are coupled by various processes and/or components. The characteristic integration timesteps for these processes/components can vary over a wide range. To improve computing efficiency, a multiple timestep scheme (MTS) approach has been used in the development of the Super System Code (SSC). In this paper: (1) the partitioning of the system and the timestep control are described, and (2) results are presented showing a savings in computer running time using the MTS of as much as five times the time required using a single timestep scheme

  20. Multiple Beta Spectrum Analysis Method Based on Spectrum Fitting

    Lee, Uk Jae; Jung, Yun Song; Kim, Hee Reyoung [UNIST, Ulsan (Korea, Republic of)

    2016-05-15

    When a sample containing several mixed radioactive nuclides is measured, it is difficult to separate the individual nuclides because their spectra overlap. For this reason, a simple mathematical analysis method for the spectrum of a mixed beta-ray source has been studied; however, the existing approach required a more accurate spectral analysis method because of its limited accuracy. This study describes methods for separating the components of a mixed beta-ray source through analysis of the beta spectrum slope, based on curve fitting, to resolve the existing problem. Among the fitting methods considered (Fourier, polynomial, Gaussian, and sum of sines), the sum-of-sines fit was found to be the best for obtaining an equation for the distribution of a mixed beta spectrum, and was shown to be the most appropriate for analyzing spectra with various ratios of mixed nuclides. This method could be applied to rapid spectrum analysis of mixed beta-ray sources.
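Once smooth fitted shapes for the pure-nuclide spectra are in hand, unmixing reduces to solving for the component weights. A minimal sketch: the two templates below stand in for fitted sum-of-sines shapes (the shapes and weights are illustrative assumptions, not real beta spectra), and the mixture weights are recovered by linear least squares:

```python
import numpy as np

e = np.linspace(0.01, 1.0, 200)          # energy axis (arbitrary units)
s1 = np.sin(np.pi * e) ** 2              # template "spectrum" for nuclide 1
s2 = np.sin(np.pi * e / 2.0)             # template "spectrum" for nuclide 2

mix = 0.7 * s1 + 0.3 * s2                # measured mixed spectrum (noise-free)

# Solve for the two mixing weights by linear least squares.
weights, *_ = np.linalg.lstsq(np.column_stack([s1, s2]), mix, rcond=None)
```

With noise-free data and linearly independent templates the weights are recovered exactly; with counting noise the same solve returns the least-squares estimate of the nuclide ratio.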

  1. Multiple spatial scaling and the weak-coupling approximation. I. General formulation and equilibrium theory

    Kleinsmith, P E [Carnegie-Mellon Univ., Pittsburgh, Pa. (USA)

    1976-04-01

    Multiple spatial scaling is incorporated in a modified form of the Bogoliubov plasma cluster expansion; then this proposed reformulation of the plasma weak-coupling approximation is used to derive, from the BBGKY Hierarchy, a decoupled set of equations for the one-and two-particle distribution functions in the limit as the plasma parameter goes to zero. Because the reformulated cluster expansion permits retention of essential two-particle collisional information in the limiting equations, while simultaneously retaining the well-established Debye-scale relative ordering of the correlation functions, decoupling of the Hierarchy is accomplished without introduction of the divergence problems encountered in the Bogoliubov theory, as is indicated by an exact solution of the limiting equations for the equilibrium case. To establish additional links with existing plasma equilibrium theories, the two-particle equilibrium correlation function is used to calculate the interaction energy and the equation of state. The limiting equation for the equilibrium three-particle correlation function is then developed, and a formal solution is obtained.

  2. Urban land use decouples plant-herbivore-parasitoid interactions at multiple spatial scales.

    Amanda E Nelson

    Intense urban and agricultural development alters habitats, increases fragmentation, and may decouple trophic interactions if plants or animals cannot disperse to needed resources. Specialist insects represent a substantial proportion of global biodiversity and their fidelity to discrete microhabitats provides a powerful framework for investigating organismal responses to human land use. We sampled site occupancy and densities for two plant-herbivore-parasitoid systems from 250 sites across a 360 km² urban/agricultural landscape to ask whether and how human development decouples interactions between trophic levels. We compared patterns of site occupancy, host plant density, herbivory and parasitism rates of insects at two trophic levels with respect to landcover at multiple spatial scales. Geospatial analyses were used to identify landcover characters predictive of insect distributions. We found that herbivorous insect densities were decoupled from host tree densities in urban landcover types at several spatial scales. This effect was amplified for the third trophic level in one of the two insect systems: despite being abundant regionally, a parasitoid species was absent from all urban/suburban landcover even where its herbivore host was common. Our results indicate that human land use patterns limit distributions of specialist insects. Dispersal constraints associated with urban built development are specifically implicated as a limiting factor.

  3. Quantifying Contributions to Transport in Ionic Polymers Across Multiple Length Scales

    Madsen, Louis

    Self-organized polymer membranes conduct mobile species (ions, water, alcohols, etc.) according to a hierarchy of structural motifs that span sub-nm to >10 μm in length scale. In order to comprehensively understand such materials, our group combines multiple types of NMR dynamics and transport measurements (spectroscopy, diffusometry, relaxometry, imaging) with structural information from scattering and microscopy as well as with theories of porous media [1], electrolytic transport, and oriented matter [2]. In this presentation, I will discuss quantitative separation of the phenomena that govern transport in polymer membranes, from intermolecular interactions (≤ 2 nm) [3], to locally ordered polymer nanochannels (a few to 10s of nm) [2], to larger polymer domain structures (10s of nm and larger) [1]. Using this multi-scale information, we seek to give informed feedback on the design of polymer membranes for use in, e.g., efficient batteries, fuel cells, and mechanical actuators. References: [1] J. Hou, J. Li, D. Mountz, M. Hull, and L. A. Madsen. Journal of Membrane Science 448, 292-298 (2013). [2] J. Li, J. K. Park, R. B. Moore, and L. A. Madsen. Nature Materials 10, 507-511 (2011). [3] M. D. Lingwood, Z. Zhang, B. E. Kidd, K. B. McCreary, J. Hou, and L. A. Madsen. Chemical Communications 49, 4283-4285 (2013).

  4. Statistical Genetics Methods for Localizing Multiple Breast Cancer Genes

    Ott, Jurg

    1998-01-01

    .... For a number of variables measured on a trait, a method, principal components of heritability, was developed that combines these variables in such a way that the resulting linear combination has highest heritability...

  5. Optimization of large-scale industrial systems: an emerging method

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.

  6. The multiple sclerosis rating scale, revised (MSRS-R): Development, refinement, and psychometric validation using an online community

    Wicks Paul

    2012-06-01

    Background In developing the PatientsLikeMe online platform for patients with Multiple Sclerosis (MS), we required a patient-reported assessment of functional status that was easy to complete and identified disability in domains other than walking. Existing measures of functional status were inadequate, clinician-reported, focused on walking, and burdensome to complete. In response, we developed the Multiple Sclerosis Rating Scale (MSRS). Methods We adapted a clinician-rated measure, the Guy's Neurological Disability Scale, to a self-report scale and deployed it to an online community. As part of our validation process we reviewed discussions between patients, conducted patient cognitive debriefing, and made minor improvements to form a revised scale (MSRS-R) before deploying a cross-sectional survey to patients with relapsing-remitting MS (RRMS) on the PatientsLikeMe platform. The survey included the MSRS-R and comparator measures: MSIS-29, PDDS, NARCOMS Performance Scales, PRIMUS, and MSWS-12. Results In total, 816 RRMS patients responded (19% response rate). The MSRS-R exhibited high internal consistency (Cronbach's alpha = .86). The MSRS-R walking item was highly correlated with alternative walking measures (PDDS, ρ = .84; MSWS-12, ρ = .83; NARCOMS mobility question, ρ = .86). The MSRS-R correlated well with comparison instruments and differentiated between known groups by PDDS disease stage and relapse burden in the past two years. Factor analysis suggested a single factor accounting for 51.5% of variance. Conclusions The MSRS-R is a concise measure of MS-related functional disability, and may have advantages for disease measurement over longer and more burdensome instruments that are restricted to a smaller number of domains or measure quality of life. Studies are underway describing the use of the instrument in contexts outside our online platform such as clinical practice or trials. The MSRS-R is released for use under
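The internal-consistency figure quoted above (Cronbach's alpha = .86) comes from the standard formula alpha = k/(k−1) · (1 − Σ item variances / variance of total score). A minimal sketch on synthetic responses; the item count, sample size, and data here are illustrative, not the MSRS-R's:

```python
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, k_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
    return k / (k - 1) * (1.0 - item_var / total_var)

# Synthetic scale: 8 items all driven by one underlying trait plus noise,
# so the items should cohere and alpha should come out high.
rng = np.random.default_rng(2)
trait = rng.normal(size=500)
responses = np.column_stack([trait + 0.5 * rng.normal(size=500)
                             for _ in range(8)])
alpha = cronbach_alpha(responses)
```

Items sharing a strong common signal push alpha toward 1; uncorrelated items push it toward 0, which is why alpha is read as a coherence check on a scale's items.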

  7. THE METHOD OF MULTIPLE SPATIAL PLANNING BASIC MAP

    C. Zhang

    2018-04-01

    The “Provincial Space Plan Pilot Program” issued in December 2016 called for integrating the existing space management and control information platforms of the various departments into a spatial planning information management platform that unifies basic data, target indicators, spatial coordinates, and technical specifications. Such a platform would provide decision support for planning and preparation, digital monitoring and evaluation of plan implementation, parallel examination and approval of investment projects and of military construction projects by the space management and control departments involved, and improved efficiency of administrative approval. The space planning system should delimit control boundaries for the development of production, living, and ecological space, and implement use control. On the one hand, it is necessary to clarify the functional orientation of the various kinds of planning space; on the other hand, it is necessary to achieve “multi-compliance” across the various space plans. Integrating multiple spatial plans requires a unified, standardized basic map (geographic database and technical specification) to divide territory into urban, agricultural, and ecological space and to provide technical support for refining the space control zoning of the relevant plans. The article analyzes the main technical problems of the spatial datum, the land-use classification standards, the planning base map, and the basic planning platform. Based on geographic conditions census results used in preparing spatial planning maps, pilot applications in Heilongjiang and Hainan are combined with the “multiple rules” integration.

  9. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    Min-Kyu Kim

    2015-12-01

    This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
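The one-over-square-root-of-N noise reduction claimed above is the standard behavior of averaging independent samples, and is easy to illustrate. A sketch with illustrative numbers (not the sensor's actual signal levels or read noise):

```python
import numpy as np

# Simulate many pixels read once vs. read n_samples times and averaged.
rng = np.random.default_rng(3)
true_level, sigma = 500.0, 0.8        # assumed pixel value and read noise
n_samples, n_pixels = 16, 20000

single = true_level + sigma * rng.normal(size=n_pixels)
multi = true_level + sigma * rng.normal(size=(n_pixels, n_samples))

noise_single = single.std(ddof=1)
noise_multi = multi.mean(axis=1).std(ddof=1)   # noise after averaging
ratio = noise_single / noise_multi             # approaches sqrt(16) = 4
```

Sixteen samplings cut the noise by about 4×; the measured 848.3 μV → 270.4 μV improvement quoted above is of the same order as such a √N prediction.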

  10. Non-native salmonids affect amphibian occupancy at multiple spatial scales

    Pilliod, David S.; Hossack, Blake R.; Bahls, Peter F.; Bull, Evelyn L.; Corn, Paul Stephen; Hokit, Grant; Maxell, Bryce A.; Munger, James C.; Wyrick, Aimee

    2010-01-01

    Aim The introduction of non-native species into aquatic environments has been linked with local extinctions and altered distributions of native species. We investigated the effect of non-native salmonids on the occupancy of two native amphibians, the long-toed salamander (Ambystoma macrodactylum) and Columbia spotted frog (Rana luteiventris), across three spatial scales: water bodies, small catchments and large catchments. Location Mountain lakes at ≥ 1500 m elevation were surveyed across the northern Rocky Mountains, USA. Methods We surveyed 2267 water bodies for amphibian occupancy (based on evidence of reproduction) and fish presence between 1986 and 2002 and modelled the probability of amphibian occupancy at each spatial scale in relation to habitat availability and quality and fish presence. Results After accounting for habitat features, we estimated that A. macrodactylum was 2.3 times more likely to breed in fishless water bodies than in water bodies with fish. Ambystoma macrodactylum also was more likely to occupy small catchments where none of the water bodies contained fish than in catchments where at least one water body contained fish. However, the probability of salamander occupancy in small catchments was also influenced by habitat availability (i.e. the number of water bodies within a catchment) and suitability of remaining fishless water bodies. We found no relationship between fish presence and salamander occupancy at the large-catchment scale, probably because of increased habitat availability. In contrast to A. macrodactylum, we found no relationship between fish presence and R. luteiventris occupancy at any scale. Main conclusions Our results suggest that the negative effects of non-native salmonids can extend beyond the boundaries of individual water bodies and increase A. macrodactylum extinction risk at landscape scales. We suspect that niche overlap between non-native fish and A. macrodactylum at higher elevations in the northern Rocky

  11. Using Multiple Regression in Estimating (semi) VOC Emissions and Concentrations at the European Scale

    Fauser, Patrik; Thomsen, Marianne; Pistocchi, Alberto

    2010-01-01

    This paper proposes a simple method for estimating emissions and predicted environmental concentrations (PECs) in water and air for organic chemicals that are used in household products and industrial processes. The method has been tested on existing data for 63 organic high-production-volume chemicals available in the European Chemicals Bureau risk assessment reports (RARs). The method suggests a simple linear relationship between Henry's Law constant, octanol-water coefficient, use and production volumes, and emissions and PECs on a regional scale in the European Union. Emissions and PECs are a result of a complex interaction between chemical properties, production and use patterns, and geographical characteristics. A linear relationship cannot capture these complexities; however, it may be applied at a cost-efficient screening level for suggesting critical chemicals that are candidates
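A screening-level regression of the kind described can be sketched as follows. The coefficients and data below are synthetic assumptions, generated purely to show the least-squares mechanics of fitting log-emissions against log-properties:

```python
import numpy as np

# Synthetic "chemical property" predictors for 63 substances.
rng = np.random.default_rng(4)
n = 63
log_h = rng.normal(size=n)          # log Henry's Law constant
log_kow = rng.normal(size=n)        # log octanol-water coefficient
log_tonnage = rng.normal(size=n)    # log production/use volume

# Assumed "true" coefficients (last entry is the intercept), used only to
# generate the synthetic response.
true_beta = np.array([0.4, -0.3, 0.9, 1.5])
X = np.column_stack([log_h, log_kow, log_tonnage, np.ones(n)])
log_emission = X @ true_beta + 0.05 * rng.normal(size=n)

# Multiple regression: recover the coefficients by least squares.
beta_hat, *_ = np.linalg.lstsq(X, log_emission, rcond=None)
```

In a real screening application the fitted coefficients, not assumed ones, would be read off from the RAR data, and the residual scatter indicates how far such a linear surrogate can be trusted.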

  12. Comparison of multiple gene assembly methods for metabolic engineering

    Chenfeng Lu; Karen Mansoorabadi; Thomas Jeffries

    2007-01-01

    A universal, rapid DNA assembly method for efficient multigene plasmid construction is important for biological research and for optimizing gene expression in industrial microbes. Three different approaches to achieve this goal were evaluated. These included creating long complementary extensions using a uracil-DNA glycosylase technique, overlap extension polymerase...

  13. Comparison of two methods of surface profile extraction from multiple ultrasonic range measurements

    Barshan, B; Baskent, D

    Two novel methods for surface profile extraction based on multiple ultrasonic range measurements are described and compared. One of the methods employs morphological processing techniques, whereas the other employs a spatial voting scheme followed by simple thresholding. Morphological processing

  14. Multiple HEPA filter test methods, January--December 1976

    Schuster, B.; Kyle, T.; Osetek, D.

    1977-06-01

    The testing of tandem high-efficiency particulate air (HEPA) filter systems is of prime importance for the measurement of accurate overall system protection factors. A procedure, based on the use of an intra-cavity laser particle spectrometer, has been developed for measuring protection factors in the 10⁸ range. A laboratory-scale model of a filter system was constructed and initially tested to determine individual HEPA filter characteristics with regard to size and state (liquid or solid) of several test aerosols. Based on these laboratory measurements, in-situ testing has been successfully conducted on a number of single and tandem filter installations within the Los Alamos Scientific Laboratory as well as on extraordinarily large single systems at Rocky Flats. For the purpose of recovery, for simplified solid waste disposal, or for prefiltering purposes, two versions of an inhomogeneous electric field air cleaner have been devised and are undergoing testing. Initial experience with one of the systems, which relies on an electrostatic spraying phenomenon, indicates performance efficiency of greater than 99.9% for flow velocities commonly used in air cleaning systems. Among the effluents associated with nuclear fuel reprocessing is ¹²⁹I. An intra-cavity laser detection system is under development which shows promise of being able to detect mixing ratios of one part in 10⁷ of I₂ in air
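Why tandem systems require measuring protection factors in the 1e8 range follows from stages multiplying in series: ideally, the overall protection factor (upstream concentration over downstream) is the product of the per-stage factors. A minimal sketch with illustrative per-stage values (not measured data):

```python
# Overall protection factor of filter stages in series, assuming ideal,
# independent stages: PF_total = PF_1 * PF_2 * ... * PF_n.
def overall_pf(stage_pfs):
    total = 1.0
    for pf in stage_pfs:
        total *= pf
    return total

single_stage = 1.0e4                   # illustrative single-HEPA factor
tandem = overall_pf([1.0e4, 1.0e4])    # two stages in series
```

Two nominal 1e4 stages give a nominal 1e8 overall factor, which is exactly the measurement range the intra-cavity laser spectrometer procedure above was developed to reach; real tandem systems can fall short of the ideal product because of leaks and correlated penetration.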

  15. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  16. Support Operators Method for the Diffusion Equation in Multiple Materials

    Winters, Andrew R. [Los Alamos National Laboratory; Shashkov, Mikhail J. [Los Alamos National Laboratory

    2012-08-14

    A second-order finite difference scheme for the solution of the diffusion equation on non-uniform meshes is implemented. The method allows the heat conductivity to be discontinuous. The algorithm is formulated on a one dimensional mesh and is derived using the support operators method. A key component of the derivation is that the discrete analog of the flux operator is constructed to be the negative adjoint of the discrete divergence, in an inner product that is a discrete analog of the continuum inner product. The resultant discrete operators in the fully discretized diffusion equation are symmetric and positive definite. The algorithm is generalized to operate on meshes with cells which have mixed material properties. A mechanism to recover intermediate temperature values in mixed cells using a limited linear reconstruction is introduced. The implementation of the algorithm is verified and the linear reconstruction mechanism is compared to previous results for obtaining new material temperatures.
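
The adjoint construction described above can be illustrated in one dimension. The sketch below is not the authors' code: it is a minimal cell-centered scheme on a non-uniform mesh with harmonic-mean face conductivities (a standard way to handle discontinuous k), whose assembled operator is symmetric positive definite, as the abstract requires. Function names and the two-material test problem are illustrative.

```python
import numpy as np

def diffusion_1d(x_faces, k_cells, f_cells, u_left, u_right):
    """Cell-centered diffusion solve on a non-uniform 1D mesh.

    Mimics the support-operators property in 1D: the discrete flux
    operator is the negative adjoint (here, transpose) of the discrete
    divergence, so the assembled matrix is symmetric positive definite.
    Discontinuous conductivity is handled by harmonic averaging at faces.
    """
    n = len(k_cells)
    h = np.diff(x_faces)                 # cell widths
    A = np.zeros((n, n))
    b = f_cells * h                      # source integrated over cells

    # Interior faces: transmissibility from the harmonic mean of k.
    for i in range(n - 1):
        t = 1.0 / (0.5 * h[i] / k_cells[i] + 0.5 * h[i + 1] / k_cells[i + 1])
        A[i, i] += t
        A[i + 1, i + 1] += t
        A[i, i + 1] -= t
        A[i + 1, i] -= t

    # Dirichlet boundaries via half-cell transmissibilities.
    tl = k_cells[0] / (0.5 * h[0])
    tr = k_cells[-1] / (0.5 * h[-1])
    A[0, 0] += tl
    b[0] += tl * u_left
    A[-1, -1] += tr
    b[-1] += tr * u_right
    return np.linalg.solve(A, b), A

# Two materials with a conductivity jump at x = 0.5:
x = np.linspace(0.0, 1.0, 41)
k = np.where(0.5 * (x[:-1] + x[1:]) < 0.5, 1.0, 10.0)
u, A = diffusion_1d(x, k, np.zeros(40), 1.0, 0.0)
assert np.allclose(A, A.T)               # symmetric
assert np.all(np.linalg.eigvalsh(A) > 0) # positive definite
```

With zero source the discrete solution is monotone between the boundary values, and the matrix properties asserted above hold by construction, not by luck: the flux operator is built as the transpose of the divergence.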

  17. Computing multiple zeros using a class of quartically convergent methods

    F. Soleymani

    2013-09-01

    For functions with finitely many real roots in an interval, relatively little literature exists, although in applications users often wish to find all the real zeros at once. Hence, the second aim of this paper is to design a fourth-order algorithm, based on the developed methods, to find all the real solutions of a nonlinear equation in an interval using the programming package Mathematica 8.
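
The "find all zeros in an interval" strategy can be sketched generically. This is not the paper's fourth-order iteration: bracketing plus bisection stands in for the refinement step, and all names are illustrative.

```python
import numpy as np

def all_real_roots(f, a, b, n_scan=2000, tol=1e-12):
    """Find all simple real roots of f in [a, b].

    Sample f on a fine grid, bracket every sign change, then refine
    each bracket by bisection.  (A higher-order iteration, as in the
    paper, would replace the bisection refinement.)
    """
    xs = np.linspace(a, b, n_scan)
    fs = f(xs)
    roots = []
    for i in np.nonzero(np.sign(fs[:-1]) * np.sign(fs[1:]) < 0)[0]:
        lo, hi = xs[i], xs[i + 1]
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

roots = all_real_roots(np.cos, 0.0, 10.0)  # cos has 3 roots in [0, 10]
```

The grid spacing controls which roots are detected: roots closer together than one grid cell (or of even multiplicity, where f does not change sign) need a finer scan or a derivative-based test.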

  18. Quantifying submarine groundwater discharge in the coastal zone via multiple methods

    Burnett, W.C.; Aggarwal, P.K.; Aureli, A.; Bokuniewicz, H.; Cable, J.E.; Charette, M.A.; Kontar, E.; Krupa, S.; Kulkarni, K.M.; Loveless, A.; Moore, W.S.; Oberdorfer, J.A.; Oliveira, J.; Ozyurt, N.; Povinec, P.; Privitera, A.M.G.; Rajar, R.; Ramessur, R.T.; Scholten, J.; Stieglitz, T.; Taniguchi, M.; Turner, J.V.

    2006-01-01

    Submarine groundwater discharge (SGD) is now recognized as an important pathway between land and sea. As such, this flow may contribute to the biogeochemical and other marine budgets of near-shore waters. These discharges typically display significant spatial and temporal variability making assessments difficult. Groundwater seepage is patchy, diffuse, temporally variable, and may involve multiple aquifers. Thus, the measurement of its magnitude and associated chemical fluxes is a challenging enterprise. A joint project of UNESCO and the International Atomic Energy Agency (IAEA) has examined several methods of SGD assessment and carried out a series of five intercomparison experiments in different hydrogeologic environments (coastal plain, karst, glacial till, fractured crystalline rock, and volcanic terrains). This report reviews the scientific and management significance of SGD, measurement approaches, and the results of the intercomparison experiments. We conclude that while the process is essentially ubiquitous in coastal areas, the assessment of its magnitude at any one location is subject to enough variability that measurements should be made by a variety of techniques and over large enough spatial and temporal scales to capture the majority of these changing conditions. We feel that all the measurement techniques described here are valid although they each have their own advantages and disadvantages. It is recommended that multiple approaches be applied whenever possible. In addition, a continuing effort is required in order to capture long-period tidal fluctuations, storm effects, and seasonal variations

  19. Quantifying submarine groundwater discharge in the coastal zone via multiple methods

    Burnett, W.C. [Department of Oceanography, Florida State University, Tallahassee, FL 32306 (United States); Aggarwal, P.K.; Kulkarni, K.M. [Isotope Hydrology Section, International Atomic Energy Agency (Austria); Aureli, A. [Department Water Resources Management, University of Palermo, Catania (Italy); Bokuniewicz, H. [Marine Science Research Center, Stony Brook University (United States); Cable, J.E. [Department Oceanography, Louisiana State University (United States); Charette, M.A. [Department Marine Chemistry, Woods Hole Oceanographic Institution (United States); Kontar, E. [Shirshov Institute of Oceanology (Russian Federation); Krupa, S. [South Florida Water Management District (United States); Loveless, A. [University of Western Australia (Australia); Moore, W.S. [Department Geological Sciences, University of South Carolina (United States); Oberdorfer, J.A. [Department Geology, San Jose State University (United States); Oliveira, J. [Instituto de Pesquisas Energeticas e Nucleares (Brazil); Ozyurt, N. [Department Geological Engineering, Hacettepe (Turkey); Povinec, P.; Scholten, J. [Marine Environment Laboratory, International Atomic Energy Agency (Monaco); Privitera, A.M.G. [U.O. 4.17 of the G.N.D.C.I., National Research Council (Italy); Rajar, R. [Faculty of Civil and Geodetic Engineering, University of Ljubljana (Slovenia); Ramessur, R.T. [Department Chemistry, University of Mauritius (Mauritius); Stieglitz, T. [Mathematical and Physical Sciences, James Cook University (Australia); Taniguchi, M. [Research Institute for Humanity and Nature (Japan); Turner, J.V. [CSIRO, Land and Water, Perth (Australia)

    2006-08-31

    Submarine groundwater discharge (SGD) is now recognized as an important pathway between land and sea. As such, this flow may contribute to the biogeochemical and other marine budgets of near-shore waters. These discharges typically display significant spatial and temporal variability making assessments difficult. Groundwater seepage is patchy, diffuse, temporally variable, and may involve multiple aquifers. Thus, the measurement of its magnitude and associated chemical fluxes is a challenging enterprise. A joint project of UNESCO and the International Atomic Energy Agency (IAEA) has examined several methods of SGD assessment and carried out a series of five intercomparison experiments in different hydrogeologic environments (coastal plain, karst, glacial till, fractured crystalline rock, and volcanic terrains). This report reviews the scientific and management significance of SGD, measurement approaches, and the results of the intercomparison experiments. We conclude that while the process is essentially ubiquitous in coastal areas, the assessment of its magnitude at any one location is subject to enough variability that measurements should be made by a variety of techniques and over large enough spatial and temporal scales to capture the majority of these changing conditions. We feel that all the measurement techniques described here are valid although they each have their own advantages and disadvantages. It is recommended that multiple approaches be applied whenever possible. In addition, a continuing effort is required in order to capture long-period tidal fluctuations, storm effects, and seasonal variations. (author)

  20. Some problems of neutron source multiplication method for site measurement technology in nuclear critical safety

    Shi Yongqian; Zhu Qingfu; Hu Dingsheng; He Tao; Yao Shigui; Lin Shenghuo

    2004-01-01

    The paper gives the experimental theory and procedure of the neutron source multiplication method for site measurement technology in nuclear criticality safety. The parameter actually measured by the source multiplication method is the sub-critical effective multiplication factor with source neutrons, k_s, not the effective neutron multiplication factor k_eff. The experimental research was carried out on the uranium solution nuclear criticality safety experiment assembly. The k_s at different sub-criticalities was measured by the neutron source multiplication method. The k_eff at different sub-criticalities was obtained by first measuring the reactivity coefficient per unit solution level with the period method, then multiplying it by the difference between the critical and sub-critical solution levels to obtain the reactivity at the sub-critical solution level, and finally extracting k_eff from the reactivity formula. The effect on nuclear criticality safety and the difference between k_eff and k_s are discussed
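
The extraction step described above rests on two standard point-kinetics relations: the reactivity definition rho = (k_eff - 1)/k_eff and the source multiplication M = 1/(1 - k_s). A minimal sketch, with a hypothetical level coefficient and solution levels (not values from the experiment):

```python
def keff_from_reactivity(rho):
    """Invert the standard reactivity relation rho = (k_eff - 1)/k_eff."""
    return 1.0 / (1.0 - rho)

def ks_from_multiplication(m):
    """Sub-critical multiplication with a source: M = 1/(1 - k_s)."""
    return 1.0 - 1.0 / m

# Hypothetical numbers: a reactivity coefficient of 0.004 (delta-k per cm)
# and a sub-critical solution level 10 cm below the critical level.
rho = 0.004 * (120.0 - 130.0)      # -0.04 (negative: sub-critical)
keff = keff_from_reactivity(rho)   # about 0.962
ks = ks_from_multiplication(25.0)  # 0.96 from a measured multiplication of 25
```

The sketch makes the paper's point visible: k_s (from the count-rate multiplication) and k_eff (from the reactivity) are distinct quantities that need not coincide in a sub-critical system.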

  1. Spatial heterogeneity regulates plant-pollinator networks across multiple landscape scales.

    Eduardo Freitas Moreira

    Mutualistic plant-pollinator interactions play a key role in biodiversity conservation and ecosystem functioning. In a community, the combination of these interactions can generate emergent properties, e.g., robustness and resilience to disturbances such as fluctuations in populations and extinctions. Given that these systems are hierarchical and complex, environmental changes must have multiple levels of influence. In addition, changes in habitat quality and in the landscape structure are important threats to plants, pollinators and their interactions. However, despite the importance of these phenomena for the understanding of biological systems, as well as for conservation and management strategies, few studies have empirically evaluated these effects at the network level. Therefore, the objective of this study was to investigate the influence of local conditions and landscape structure at multiple scales on the characteristics of plant-pollinator networks. This study was conducted in agri-natural lands in Chapada Diamantina, Bahia, Brazil. Pollinators were collected in 27 sampling units distributed orthogonally along a gradient of proportion of agriculture and landscape diversity. The Akaike information criterion was used to select models that best fit the metrics for network characteristics, comparing four hypotheses represented by a set of a priori candidate models with specific combinations of the proportion of agriculture, the average shape of the landscape elements, the diversity of the landscape and the structure of local vegetation. The results indicate that a reduction of habitat quality and landscape heterogeneity can cause species loss and a decrease in network nestedness. These structural changes can reduce the robustness and resilience of plant-pollinator networks, which compromises the reproductive success of plants, the maintenance of biodiversity and the stability of the pollination service. We also discuss the possible explanations for

  2. Spatial heterogeneity regulates plant-pollinator networks across multiple landscape scales.

    Moreira, Eduardo Freitas; Boscolo, Danilo; Viana, Blandina Felipe

    2015-01-01

    Mutualistic plant-pollinator interactions play a key role in biodiversity conservation and ecosystem functioning. In a community, the combination of these interactions can generate emergent properties, e.g., robustness and resilience to disturbances such as fluctuations in populations and extinctions. Given that these systems are hierarchical and complex, environmental changes must have multiple levels of influence. In addition, changes in habitat quality and in the landscape structure are important threats to plants, pollinators and their interactions. However, despite the importance of these phenomena for the understanding of biological systems, as well as for conservation and management strategies, few studies have empirically evaluated these effects at the network level. Therefore, the objective of this study was to investigate the influence of local conditions and landscape structure at multiple scales on the characteristics of plant-pollinator networks. This study was conducted in agri-natural lands in Chapada Diamantina, Bahia, Brazil. Pollinators were collected in 27 sampling units distributed orthogonally along a gradient of proportion of agriculture and landscape diversity. The Akaike information criterion was used to select models that best fit the metrics for network characteristics, comparing four hypotheses represented by a set of a priori candidate models with specific combinations of the proportion of agriculture, the average shape of the landscape elements, the diversity of the landscape and the structure of local vegetation. The results indicate that a reduction of habitat quality and landscape heterogeneity can cause species loss and a decrease in network nestedness. These structural changes can reduce the robustness and resilience of plant-pollinator networks, which compromises the reproductive success of plants, the maintenance of biodiversity and the stability of the pollination service. We also discuss the possible explanations for these relationships and

  3. The e-MSWS-12: improving the multiple sclerosis walking scale using item response theory.

    Engelhard, Matthew M; Schmidt, Karen M; Engel, Casey E; Brenton, J Nicholas; Patek, Stephen D; Goldman, Myla D

    2016-12-01

    The Multiple Sclerosis Walking Scale (MSWS-12) is the predominant patient-reported measure of multiple sclerosis (MS)-related walking ability, yet it had not been analyzed using item response theory (IRT), the emerging standard for patient-reported outcome (PRO) validation. This study aims to reduce MSWS-12 measurement error and facilitate computerized adaptive testing by creating an IRT model of the MSWS-12 and distributing it online. MSWS-12 responses from 284 subjects with MS were collected by mail and used to fit and compare several IRT models. Following model selection and assessment, subpopulations based on age and sex were tested for differential item functioning (DIF). Model comparison favored a one-dimensional graded response model (GRM). This model met fit criteria and explained 87% of response variance. The performance of each MSWS-12 item was characterized using category response curves (CRCs) and item information. IRT-based MSWS-12 scores correlated with traditional MSWS-12 scores (r = 0.99) and timed 25-foot walk (T25FW) speed (r = -0.70). Item 2 showed DIF based on age (χ² = 19.02, df = 5), and Item 11 showed DIF based on sex (χ² = 13.76, df = 5, p = 0.02). MSWS-12 measurement error depends on walking ability, but could be lowered by improving or replacing items with low information or DIF. The e-MSWS-12 includes IRT-based scoring, error checking, and an estimated T25FW derived from MSWS-12 responses. It is available at https://ms-irt.shinyapps.io/e-MSWS-12.
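
A graded response model of the kind selected in this study has a simple closed form. The toy sketch below is not the fitted e-MSWS-12 model; the discrimination a and thresholds b_k are made-up values, and it only shows how category response curves arise as differences of adjacent cumulative logistic curves:

```python
import math

def grm_category_probs(theta, a, bs):
    """Graded response model: P(X >= k | theta) = logistic(a*(theta - b_k)).

    The probability of responding exactly in category k is the
    difference of adjacent cumulative curves, which is what a
    category response curve (CRC) plots against theta.
    """
    cum = ([1.0]
           + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in bs]
           + [0.0])
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Made-up item parameters: discrimination 1.5, thresholds -1, 0, 1.
probs = grm_category_probs(0.0, 1.5, [-1.0, 0.0, 1.0])
```

The probabilities telescope to 1 by construction, and with ordered thresholds every category probability is positive, which is the well-formedness check usually applied to fitted GRM items.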

  4. Improved exact method for the double TSP with multiple stacks

    Lusby, Richard Martin; Larsen, Jesper

    2011-01-01

    ... the first delivery, and the container cannot be repacked once packed. In this paper we improve the previously proposed exact method of Lusby et al. (Int Trans Oper Res 17 (2010), 637–652) through an additional preprocessing technique that uses the longest common subsequence between the respective pickup and delivery problems. The results suggest an impressive improvement, and we report, for the first time, optimal solutions to several unsolved instances from the literature containing 18 customers. Instances with 28 customers are also shown to be solvable within a few percent of optimality. © 2011 Wiley...

  5. Effective multiplication factor measurement by feynman-α method. 3

    Mouri, Tomoaki; Ohtani, Nobuo

    1998-06-01

    The sub-criticality monitoring system has been developed for criticality safety control in nuclear fuel handling plants. Past experiments performed with the Deuterium Critical Assembly (DCA) confirmed that sub-criticality could be detected down to k_eff = 0.3. To investigate the applicability of the method to a more general system, experiments were performed in the light-water-moderated system of the modified DCA core. These experiments confirmed that the prompt decay constant (α), which is an index of the sub-criticality, could be detected between k_eff = 0.623 and k_eff = 0.870, and that differences of 0.05 - 0.1 Δk could be distinguished. The α values were numerically calculated with the 2D transport code TWODANT and the Monte Carlo code KENO V.a, and the results were compared with the measured values. The differences between calculated and measured values proved to be less than 13%, which is sufficient accuracy for the sub-criticality monitoring system. It was confirmed that the Feynman-α method is applicable to sub-criticality measurement of light-water-moderated systems. (author)
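
The Feynman-α statistic behind this measurement is the variance-to-mean ratio of gated neutron counts, and the prompt decay constant α is extracted by fitting the standard point-kinetics gate-width formula. A minimal sketch (helper names are illustrative, not from the paper):

```python
import numpy as np

def feynman_y(counts):
    """Feynman Y statistic: variance-to-mean ratio minus one for
    neutron counts collected in equal gate widths.  Y = 0 for a
    purely Poisson (uncorrelated) source."""
    c = np.asarray(counts, dtype=float)
    return c.var(ddof=1) / c.mean() - 1.0

def y_curve(T, y_inf, alpha):
    """Standard point-kinetics model of the gate-width dependence:
    Y(T) = Y_inf * (1 - (1 - exp(-alpha*T)) / (alpha*T)).
    Fitting measured Y over gate widths T yields the prompt decay
    constant alpha, the sub-criticality index in the abstract."""
    aT = alpha * T
    return y_inf * (1.0 - (1.0 - np.exp(-aT)) / aT)
```

In practice one computes feynman_y for a range of gate widths and least-squares fits y_curve to the resulting points; Y(T) vanishes as T goes to zero and saturates at Y_inf for long gates.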

  6. A Hierarchical Approach for Measuring the Consistency of Water Areas between Multiple Representations of Tile Maps with Different Scales

    Yilang Shen

    2017-08-01

    In geographic information systems, the reliability of querying, analysing, or reasoning results depends on the data quality. One central criterion of data quality is consistency, and identifying inconsistencies is crucial for maintaining the integrity of spatial data from multiple sources or at multiple resolutions. In traditional methods of consistency assessment, vector data are used as the primary experimental data. In this manuscript, we describe the use of a new type of raster data, tile maps, to assess the consistency of information from multiscale representations of the water bodies that make up drainage systems. We describe a hierarchical methodology to determine the spatial consistency of tile-map datasets that display water areas in a raster format. Three characteristic indices, the degree of global feature consistency, the degree of local feature consistency, and the degree of overlap, are proposed to measure the consistency of multiscale representations of water areas. The perceptual hash algorithm and the scale-invariant feature transform (SIFT descriptor are applied to extract and measure the global and local features of water areas. By performing combined calculations using these three characteristic indices, the degrees of consistency of multiscale representations of water areas can be divided into five grades: exactly consistent, highly consistent, moderately consistent, less consistent, and inconsistent. For evaluation purposes, the proposed method is applied to several test areas from the Tiandi map of China. In addition, we identify key technologies that are related to the process of extracting water areas from a tile map. The accuracy of the consistency assessment method is evaluated, and our experimental results confirm that the proposed methodology is efficient and accurate.
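
The perceptual-hash step of such a pipeline can be illustrated with a toy average hash over raster tiles. This stands in for whichever perceptual hash the paper uses; the SIFT local-feature step is omitted, and all names are illustrative:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Toy perceptual hash: block-average a grayscale raster down to
    hash_size x hash_size, then threshold at the mean.  Returns a
    flat boolean array that is robust to small local changes."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hash_similarity(a, b):
    """1 minus the normalized Hamming distance between two hashes:
    a crude global-feature consistency score in [0, 1]."""
    return 1.0 - np.mean(a != b)

# A vertical gradient tile versus its upside-down counterpart:
tile = np.outer(np.arange(64.0), np.ones(64))
h1 = average_hash(tile)
h2 = average_hash(np.flipud(tile))
```

Two renderings of the same water area at different scales would yield nearly identical hashes (similarity close to 1), while structurally different tiles score low, which is the intuition behind the global feature consistency index.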

  7. Improving seasonal forecasts of hydroclimatic variables through the state of multiple large-scale climate signals

    Castelletti, A.; Giuliani, M.; Block, P. J.

    2017-12-01

    Increasingly uncertain hydrologic regimes combined with more frequent and intense extreme events are challenging water systems management worldwide, emphasizing the need for accurate medium- to long-term predictions to prompt timely anticipatory operations. Although modern forecasts are skillful over short lead times (from hours to days), predictability generally decreases at longer lead times. Global climate teleconnections, such as the El Niño Southern Oscillation (ENSO), may contribute to extending forecast lead times. However, the ENSO teleconnection is well defined in some locations, such as the Western USA and Australia, while there is no consensus on how it can be detected and used in other regions, particularly in Europe, Africa, and Asia. In this work, we generalize the Niño Index Phase Analysis (NIPA) framework by contributing the Multi Variate Niño Index Phase Analysis (MV-NIPA), which allows capturing the state of multiple large-scale climate signals (i.e. ENSO, North Atlantic Oscillation, Pacific Decadal Oscillation, Atlantic Multi-decadal Oscillation, Indian Ocean Dipole) to forecast hydroclimatic variables on a seasonal time scale. Specifically, our approach distinguishes the different phases of the considered climate signals and, for each phase, identifies relevant anomalies in Sea Surface Temperature (SST) that influence the local hydrologic conditions. The potential of the MV-NIPA framework is demonstrated through an application to the Lake Como system, a regulated lake in northern Italy which is mainly operated for flood control and irrigation supply. Numerical results show high correlations between seasonal SST values and one season-ahead precipitation in the Lake Como basin. The skill of the resulting MV-NIPA forecast outperforms that of ECMWF products. This information represents a valuable contribution to partially anticipate the summer water availability, especially during drought events, ultimately supporting the improvement of the Lake Como
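
The phase-conditioned correlation idea at the heart of NIPA-style analysis can be sketched as follows. This is a toy illustration, not the MV-NIPA implementation: splitting years on the sign of a single index stands in for the multi-signal phase classification, and all names and data are made up:

```python
import numpy as np

def phase_conditioned_corr(phase_index, sst_anom, precip):
    """Sketch of the NIPA idea: partition years by the phase of a
    climate index (here, simply its sign), then correlate an SST
    anomaly predictor with next-season precipitation within each
    phase separately."""
    out = {}
    for name, mask in (("positive", phase_index >= 0),
                       ("negative", phase_index < 0)):
        if mask.sum() > 2:  # need enough years for a correlation
            out[name] = np.corrcoef(sst_anom[mask], precip[mask])[0, 1]
    return out

# Synthetic record: 8 years, 4 in each phase (values are made up).
idx = np.array([1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0])
sst = np.arange(8.0)
pre = np.array([0.0, 2.0, 4.0, 6.0, 1.0, 0.0, 3.0, 2.0])
corr = phase_conditioned_corr(idx, sst, pre)
```

Conditioning on phase is the whole point: a predictor can correlate strongly with precipitation in one phase and weakly in the other, a signal that a single pooled correlation would wash out.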

  8. Proteotyping of laboratory-scale biogas plants reveals multiple steady-states in community composition.

    Kohrs, F; Heyer, R; Bissinger, T; Kottler, R; Schallert, K; Püttker, S; Behne, A; Rapp, E; Benndorf, D; Reichl, U

    2017-08-01

    Complex microbial communities are the functional core of anaerobic digestion processes taking place in biogas plants (BGP). So far, however, a comprehensive characterization of the microbiomes involved in methane formation is technically challenging. As an alternative, enriched communities from laboratory-scale experiments can be investigated that have a reduced number of organisms and are easier to characterize by state-of-the-art mass spectrometric-based (MS) metaproteomic workflows. Six parallel laboratory digesters were inoculated with sludge from a full-scale BGP to study the development of enriched microbial communities under defined conditions. During the first three months of cultivation, all reactors (R1-R6) were functionally comparable regarding biogas production (375-625 NL L_reactor^-1 d^-1), methane yields (50-60%), pH values (7.1-7.3), and volatile fatty acids (VFA, 1 g NH3 L^-1) showed an increase to pH 7.5-8.0, accumulation of acetate (>10 mM), and decreasing biogas production (<125 NL L_reactor^-1 d^-1). Tandem MS (MS/MS)-based proteotyping allowed the identification of taxonomic abundances and biological processes. Although all reactors showed similar performances, proteotyping and terminal restriction fragment length polymorphism (T-RFLP) fingerprinting revealed significant differences in the composition of individual microbial communities, indicating multiple steady-states. Furthermore, cellulolytic enzymes and cellulosomal proteins of Clostridium thermocellum were identified to be specific markers for the thermophilic reactors (R3, R4). Metaproteins found in R3 indicated hydrogenotrophic methanogenesis, whereas metaproteins of acetoclastic methanogenesis were identified in R4. This suggests not only an individual evolution of microbial communities even for the case that BGPs are started at the same initial conditions under well controlled environmental conditions, but also a high compositional variance of microbiomes under

  9. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    Daily, Jeffrey A.

    2015-01-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore's law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or 'homologous') on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment for large-scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores

  10. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    Daily, Jeffrey A. [Washington State Univ., Pullman, WA (United States)

    2015-05-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore’s law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment for large-scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores
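
The "exact-matching filters" mentioned in the abstract can be illustrated with a toy shared k-mer prefilter. This is a sketch of the general idea, not the dissertation's implementation; all names and the tiny example sequences are illustrative:

```python
from collections import defaultdict
from itertools import combinations

def kmer_filter(seqs, k=4):
    """Exact-matching prefilter: only sequence pairs sharing at least
    one k-mer become candidates for (expensive) optimal alignment.
    seqs maps a sequence id to its string."""
    index = defaultdict(set)          # k-mer -> ids containing it
    for sid, s in seqs.items():
        for i in range(len(s) - k + 1):
            index[s[i:i + k]].add(sid)
    pairs = set()
    for ids in index.values():        # ids sharing this k-mer
        for a, b in combinations(sorted(ids), 2):
            pairs.add((a, b))
    return pairs

pairs = kmer_filter({"a": "ACGTACGT", "b": "TTTTACGT", "c": "GGGGCCCC"})
```

Only the pair ("a", "b") survives here, because they share the 4-mer ACGT while "c" shares none; in a large-scale pipeline this pruning is what makes running an optimal aligner on the remaining pairs affordable.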

  11. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  12. Collective Influence of Multiple Spreaders Evaluated by Tracing Real Information Flow in Large-Scale Social Networks.

    Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A

    2016-10-26

    Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called "Collective Influence (CI)" has been put forward through collective influence maximization. In contrast to heuristic methods evaluating nodes' significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct "virtual" information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce a larger scale of information propagation. Moreover, local measures such as the number of connections or citations are not necessarily the deterministic factors of nodes' importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community.
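
The CI score itself has a compact definition: CI_l(i) = (k_i - 1) * sum over nodes j on the frontier of the ball of radius l around i of (k_j - 1), following Morone and Makse. A minimal sketch (the adjacency-list representation and the tiny example graph are illustrative, not the authors' scalable implementation):

```python
from collections import deque

def collective_influence(adj, i, ell=2):
    """CI_l(i) = (k_i - 1) * sum_{j in frontier at distance l} (k_j - 1).

    adj maps node -> list of neighbors; the frontier is found by a
    breadth-first search truncated at radius ell."""
    dist = {i: 0}
    queue = deque([i])
    frontier = []
    while queue:
        u = queue.popleft()
        if dist[u] == ell:       # on the ball's surface: keep, don't expand
            frontier.append(u)
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    k_i = len(adj[i])
    return (k_i - 1) * sum(len(adj[j]) - 1 for j in frontier)

# Tiny illustrative graph: a hub (0) with a short chain attached.
adj = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1, 5], 5: [4]}
ci_hub = collective_influence(adj, 0, ell=2)
```

Note how leaves contribute nothing (their degree-minus-one factor is zero), which is why CI can rank a moderately connected node embedded in a dense neighborhood above a high-degree node surrounded by leaves.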

  14. Practice-oriented optical thin film growth simulation via multiple scale approach

    Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)

    2015-10-01

    Simulation of the coating process is a very promising approach to understanding thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined into a multi-scale model. The simulation techniques have been selected to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. The simulation is performed, as an example, for an existing IBS coating plant in order to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data fit very well.

  15. Autonomous management of a recursive area hierarchy for large scale wireless sensor networks using multiple parents

    Cree, Johnathan Vee [Washington State Univ., Pullman, WA (United States); Delgado-Frias, Jose [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-03-01

    Large scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three- to five-year network lifetimes. In order to support these requirements, large scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration/maintenance. This method should gracefully handle the synchronization tasks of duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis present significant benefits to wireless sensor networks, and should configure the network so that such higher-level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher-level functions with significant inherent benefits, including: removing the network divisions created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as in-network data aggregation/analysis/storage.

  16. Baseflow physical characteristics differ at multiple spatial scales in stream networks across diverse biomes

    Janine Ruegg; Walter K. Dodds; Melinda D. Daniels; Ken R. Sheehan; Christina L. Baker; William B. Bowden; Kaitlin J. Farrell; Michael B. Flinn; Tamara K. Harms; Jeremy B. Jones; Lauren E. Koenig; John S. Kominoski; William H. McDowell; Samuel P. Parker; Amy D. Rosemond; Matt T. Trentman; Matt Whiles; Wilfred M. Wollheim

    2016-01-01

    Context: Spatial scaling of ecological processes is facilitated by quantifying underlying habitat attributes. Physical and ecological patterns are often measured at disparate spatial scales, limiting our ability to quantify ecological processes at broader spatial scales using physical attributes.

  17. Quantitative Analysis of Complex Multiple-Choice Items in Science Technology and Society: Item Scaling

    Ángel Vázquez Alonso

    2005-05-01

    The scarce attention to assessment and evaluation in science education research has been especially harmful for Science-Technology-Society (STS) education, due to the dialectic, tentative, value-laden, and controversial nature of most STS topics. To overcome the methodological pitfalls of the STS assessment instruments used in the past, an empirically developed instrument (VOSTS, Views on Science-Technology-Society) has been suggested. Some methodological proposals, namely multiple response models and the computing of a global attitudinal index, were suggested to improve the item implementation. The final step of these methodological proposals requires the categorization of STS statements. This paper describes the process of categorization through a scaling procedure carried out by a panel of experts, acting as judges, according to the body of knowledge from the history, epistemology, and sociology of science. The statement categorization allows for a sound foundation of STS items, which is useful in educational assessment and science education research, and may also increase teachers' self-confidence in the development of the STS curriculum for science classrooms.

  18. Content Validity and Reliability of Multiple Intelligences Developmental Assessment Scales (MIDAS Translated into Persian

    Mahnaz Saeidi

    2012-11-01

    This study aimed to translate the MIDAS questionnaire from English into Persian and determine its content validity and reliability. MIDAS was translated and validated on a sample (N = 110) of the Iranian adult population. The participants were both male and female, with an age range of 17-57. They were at different educational levels and from different ethnic groups in Iran. A translating team, consisting of five members bilingual in English and Persian and familiar with multiple intelligences (MI) theory and practice, was involved in translating and determining content validity, which included the processes of forward translation, back-translation, review, final proof-reading, and testing. The statistical analyses of inter-scale correlation were performed using Cronbach's alpha coefficient. In an intra-class correlation, Cronbach's alpha was high for all of the questions. Translation and content validation of the MIDAS questionnaire were completed through a sound process, leading to high reliability and validity. The results suggest that the Persian MIDAS (P-MIDAS) could serve as a valid and reliable instrument for measuring Iranian adults' MIs.

  19. Buried interfaces - A systematic study to characterize an adhesive interface at multiple scales

    Haubrich, Jan; Löbbecke, Miriam; Watermeyer, Philipp; Wilde, Fabian; Requena, Guillermo; da Silva, Julio

    2018-03-01

    A comparative study of a model adhesive interface formed between laser-pretreated Ti15-3-3-3 and the thermoplastic polymer PEEK has been carried out in order to characterize the interfaces' structural details and the infiltration of the surface nano-oxide by the polymer at multiple scales. Destructive approaches such as scanning and transmission electron microscopy of microsections prepared by focused ion beam, and non-destructive imaging approaches including laser scanning and scanning electron microscopy of pretreated surfaces as well as synchrotron computed tomography techniques (micro- and ptychographic tomographies) were employed for resolving the large, μm-sized melt-structures and the fine nano-oxide substructure within the buried interface. Scanning electron microscopy showed that the fine, open-porous nano-oxide homogeneously covers the larger macrostructure features which in turn cover the joint surface. The open-porous nano-oxide forming the interface itself appears to be fully infiltrated and wetted by the polymer. No voids or even channels were detected down to the respective resolution limits of scanning and transmission electron microscopy.

  20. Examining the Psychometric Quality of Multiple-Choice Assessment Items using Mokken Scale Analysis.

    Wind, Stefanie A

    The concept of invariant measurement is typically associated with Rasch measurement theory (Engelhard, 2013). Concerned with the appropriateness of the parametric transformation upon which the Rasch model is based, Mokken (1971) proposed a nonparametric procedure for evaluating the quality of social science measurement that is theoretically and empirically related to the Rasch model. Mokken's nonparametric procedure can be used to evaluate the quality of dichotomous and polytomous items in terms of the requirements for invariant measurement. Despite these potential benefits, the use of Mokken scaling to examine the properties of multiple-choice (MC) items in education has not yet been fully explored. A nonparametric approach to evaluating MC items is promising in that this approach facilitates the evaluation of assessments in terms of invariant measurement without imposing potentially inappropriate transformations. Using Rasch-based indices of measurement quality as a frame of reference, data from an eighth-grade physical science assessment are used to illustrate and explore Mokken-based techniques for evaluating the quality of MC items. Implications for research and practice are discussed.

  1. Karhunen-Loève (PCA) based detection of multiple oscillations in multiple measurement signals from large-scale process plants

    Odgaard, Peter Fogh; Wickerhauser, M.V.

    2007-01-01

    In the perspective of optimizing the control and operation of large-scale process plants, it is important to detect and locate oscillations in the plants. This paper presents a scheme for detecting and localizing multiple oscillations in multiple measurements from such a large-scale power plant. The scheme is based on a Karhunen-Loève analysis of the data from the plant. The proposed scheme is subsequently tested on two sets of data: a set of synthetic data and a set of data from a coal-fired power plant. In both cases the scheme detects the beginning of the oscillation within only a few samples. In addition, the oscillation localization has also shown its potential by localizing the oscillations in both data sets.
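A minimal sketch of the underlying idea on synthetic signals (this is illustrative only, not the authors' scheme): stack the measurement channels, extract the leading Karhunen-Loève (SVD) mode, and look for a spectral peak shared across sensors. The sample rate, frequency, and gains are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10.0                                    # sample rate [Hz], illustrative
t = np.arange(0, 100, 1 / fs)
osc = np.sin(2 * np.pi * 0.5 * t)            # shared 0.5 Hz oscillation

# Three plant measurements: the oscillation leaks into each channel
# with a different gain, on top of independent measurement noise.
X = np.stack([g * osc + 0.1 * rng.standard_normal(t.size)
              for g in (1.0, 0.6, 0.3)])

X = X - X.mean(axis=1, keepdims=True)        # centre each channel
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = Vt[0]                                  # leading Karhunen-Loeve mode

# A strong narrow peak in the leading mode's spectrum flags an
# oscillation common to the measurements.
spec = np.abs(np.fft.rfft(pc1))
freqs = np.fft.rfftfreq(pc1.size, d=1 / fs)
f_peak = freqs[np.argmax(spec)]
```

Localization then follows from the mixing weights in `U`: channels with large loadings on the leading mode carry the oscillation.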

  2. Managing multiple diffuse pressures on water quality and ecological habitat: Spatially targeting effective mitigation actions at the landscape scale.

    Joyce, Hannah; Reaney, Sim

    2015-04-01

    Catchment systems provide multiple benefits for society, including land for agriculture, climate regulation and recreational space. Yet these systems also have undesirable externalities, such as flooding, and the benefits they create can be compromised through societal use. For example, agriculture, forestry and urban land use practices can increase the export of fine sediment and faecal indicator organisms (FIOs) delivered to river systems. These diffuse landscape pressures are coupled with pressures on the in-stream temperature environment from projected climate change. Such pressures can have detrimental impacts on water quality and ecological habitat and consequently on the benefits they provide for society. These diffuse and in-stream pressures can be reduced through actions at the landscape scale but are commonly tackled individually. Any intervention may have benefits for other pressures, and hence the challenge is to consider all of the different pressures simultaneously to find solutions with high levels of cross-pressure benefits. This research presents (1) a simple but spatially distributed model to predict the pattern of multiple pressures at the landscape scale, and (2) a method for spatially targeting the optimum locations for riparian woodland planting as a mitigation action against these pressures. The model follows a minimal information requirement approach along the lines of SCIMAP (www.scimap.org.uk). This approach defines the critical source areas of fine sediment diffuse pollution, rapid overland flow and FIOs, based on the analysis of the pattern of each pressure in the landscape and the connectivity from source areas to rivers. River temperature was modelled using a simple energy balance equation, focusing on the temperature of inflowing and outflowing water across a catchment. The model has been calibrated using a long-term observed temperature record. 
The modelling outcomes enabled the identification of the severity of each pressure in relative rather
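The in-stream temperature component described above can be sketched as a discharge-weighted mixing model with a bulk heat-exchange term; the function, variable names, and coefficient below are illustrative assumptions, not the authors' exact energy balance:

```python
def reach_temperature(q_up, t_up, q_lat, t_lat, t_air, k_exchange=0.1):
    """Water temperature at the downstream end of a river reach.

    Mixes upstream and lateral inflow by discharge (q_* in m^3/s,
    t_* in degrees C), then relaxes the mixed temperature toward air
    temperature with a bulk exchange coefficient -- a toy stand-in
    for a full energy balance.
    """
    t_mix = (q_up * t_up + q_lat * t_lat) / (q_up + q_lat)
    return t_mix + k_exchange * (t_air - t_mix)
```

Chaining this function down a river network, reach by reach, gives the catchment-scale temperature pattern that the mitigation targeting works against.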

  3. Hydrologic test plans for large-scale, multiple-well tests in support of site characterization at Hanford, Washington

    Rogers, P.M.; Stone, R.; Lu, A.H.

    1985-01-01

    The Basalt Waste Isolation Project is preparing plans for, and has begun work on, tests that will provide the data necessary for the hydrogeologic characterization of a site located on a United States government reservation at Hanford, Washington. This site is being considered for the Nation's first geologic repository of high-level nuclear waste. Hydrogeologic characterization of this site requires several lines of investigation, which include: surface-based small-scale tests, testing performed at depth from an exploratory shaft, geochemistry investigations, regional studies, and site-specific investigations using large-scale, multiple-well hydraulic tests. The large-scale multiple-well tests are planned for several locations in and around the site. These tests are being designed to provide estimates of hydraulic parameter values of the geologic media, chemical properties of the groundwater, and hydrogeologic boundary conditions at a scale appropriate for evaluating repository performance with respect to potential radionuclide transport

  4. Curvelet-domain multiple matching method combined with cubic B-spline function

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Because the large number of surface-related multiples present in marine data seriously degrades the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, its elimination effect is unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, we select a small number of unknowns as the basis points of the matching coefficients; second, we apply the cubic B-spline function to these basis points to reconstruct the matching array; third, we build the constrained solving equations based on the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, we use the BFGS algorithm to iterate and realize a fast solution of the sparsely constrained multiple matching algorithm. Moreover, the soft-threshold method is used to further improve performance. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
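The four steps can be sketched in Python on synthetic 1-D data. This is a toy illustration of the idea only: the curvelet transform and the soft-threshold step are omitted, and all signals and knot counts are invented.

```python
import numpy as np
from scipy.interpolate import make_interp_spline
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 200)
m_pred = np.sin(40.0 * t)                # predicted multiple (synthetic)
a_true = 1.0 + 0.5 * t                   # true amplitude-matching curve
d = a_true * m_pred                      # "actual data": matched multiple

# Step 1: a small number of unknowns at the basis points.
t_basis = np.linspace(0.0, 1.0, 5)

def misfit(c):
    # Step 2: a cubic B-spline through the basis-point coefficients
    # reconstructs the dense matching array.
    a = make_interp_spline(t_basis, c, k=3)(t)
    # Step 3: residual between the matched prediction and the data.
    return np.sum((d - a * m_pred) ** 2)

# Step 4: iterate with BFGS to solve for the few coefficients.
res = minimize(misfit, x0=np.ones_like(t_basis), method="BFGS")
```

Because only five coefficients are solved for instead of one per sample, the optimization stays small while the spline keeps the matching curve smooth.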

  5. Multiple sclerosis: Left advantage for auditory laterality in dichotic tests of central auditory processing and relationship of psychoacoustic tests with the Multiple Sclerosis Disability Scale-EDSS.

    Peñaloza López, Yolanda Rebeca; Orozco Peña, Xóchitl Daisy; Pérez Ruiz, Santiago Jesús

    2018-04-03

    To evaluate central auditory processing disorders (CAPD) in patients with multiple sclerosis, emphasizing auditory laterality, by applying psychoacoustic tests, and to identify their relationship with the Multiple Sclerosis Disability Scale (EDSS) functions. Depression scales (HADS), the EDSS, and 9 psychoacoustic tests to study CAPD were applied to 26 individuals with multiple sclerosis and 26 controls. Correlation tests were performed between the EDSS and the psychoacoustic tests. Seven of the 9 psychoacoustic tests differed significantly from controls (P<.05), in the right or left ear (14/19 explorations). In dichotic digits there was a left-ear advantage, in contrast to the usual right-ear predominance. There was a significant correlation between five psychoacoustic tests and specific functions of the EDSS. The left-ear advantage, detected and interpreted as an expression of deficient influences of the corpus callosum and attention in multiple sclerosis, should be investigated further. There was a correlation between psychoacoustic tests and specific EDSS functions. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.

  6. Measuring conditions and trends in ecosystem services at multiple scales: the Southern African Millennium Ecosystem Assessment (SAfMA) experience

    van Jaarsveld, A.S; Biggs, R; Scholes, R.J; Bohensky, E; Reyers, B; Lynam, T; Musvoto, C; Fabricius, C

    2005-01-01

    The Southern African Millennium Ecosystem Assessment (SAfMA) evaluated the relationships between ecosystem services and human well-being at multiple scales, ranging from local through to sub-continental. Trends in ecosystem services (fresh water, food, fuel-wood, cultural and biodiversity) over the period 1990-2000 were mixed across scales. Freshwater resources appear strained across the continent with large numbers of people not securing adequate supplies, especially of good quality water. T...

  7. Scale breaking parton fragmentation functions, analytical parametrizations and comparison with charged multiplicities in e+e- annihilation

    Perlt, H.

    1980-01-01

    Scale breaking quark and gluon fragmentation functions obtained by numerically solving Altarelli-Parisi type equations are presented. Analytical parametrizations are given for the fragmentation of u and d quarks into pions. The calculated Q²-dependent fragmentation functions are compared with experimental data. With these scale breaking fragmentation functions the average charged multiplicity is calculated in e+e- annihilation, which rises with energy more than logarithmically and is in good agreement with experiment. (author)

  8. Design, development and integration of a large scale multiple source X-ray computed tomography system

    Malcolm, Andrew A.; Liu, Tong; Ng, Ivan Kee Beng; Teng, Wei Yuen; Yap, Tsi Tung; Wan, Siew Ping; Kong, Chun Jeng

    2013-01-01

    X-ray Computed Tomography (CT) allows visualisation of the physical structures in the interior of an object without physically opening or cutting it. This technology supports a wide range of applications in the non-destructive testing, failure analysis or performance evaluation of industrial products and components. Of the numerous factors that influence the performance characteristics of an X-ray CT system, the energy level in the X-ray spectrum to be used is one of the most significant. The ability of the X-ray beam to penetrate a given thickness of a specific material is directly related to the maximum available energy level in the beam. Higher energy levels allow penetration of thicker components made of denser materials. In response to local industry demand, and in support of on-going research activity in the area of 3D X-ray imaging for industrial inspection, the Singapore Institute of Manufacturing Technology (SIMTech) engaged in the design, development and integration of a large-scale multiple-source X-ray computed tomography system based on X-ray sources operating at higher energies than previously available in the Institute. The system consists of a large-area direct digital X-ray detector (410 x 410 mm), a multiple-axis manipulator system, a 225 kV open tube microfocus X-ray source and a 450 kV closed tube millifocus X-ray source. The 225 kV X-ray source can be operated in either transmission or reflection mode. The body of the 6-axis manipulator system is fabricated from heavy-duty steel, onto which high precision linear and rotary motors have been mounted in order to achieve high accuracy, stability and repeatability. A source-detector distance of up to 2.5 m can be achieved. The system is controlled by a proprietary X-ray CT operating system developed by SIMTech. The system currently can accommodate samples up to 0.5 x 0.5 x 0.5 m in size with weight up to 50 kg. These specifications will be increased to 1.0 x 1.0 x 1.0 m and 100 kg in the future

  9. Exploring the brain on multiple scales with correlative two-photon and light sheet microscopy

    Silvestri, Ludovico; Allegra Mascaro, Anna Letizia; Costantini, Irene; Sacconi, Leonardo; Pavone, Francesco S.

    2014-02-01

    One of the unique features of the brain is that its activity cannot be framed in a single spatio-temporal scale, but rather spans many orders of magnitude both in space and time. A single imaging technique can reveal only a small part of this complex machinery. To obtain a more comprehensive view of brain functionality, complementary approaches should be combined into a correlative framework. Here, we describe a method to integrate data from in vivo two-photon fluorescence imaging and ex vivo light sheet microscopy, taking advantage of blood vessels as a reference chart. We show how the apical dendritic arbor of a single cortical pyramidal neuron imaged in living thy1-GFP-M mice can be found in the large-scale brain reconstruction obtained with light sheet microscopy. Starting from the apical portion, the whole pyramidal neuron can then be segmented. The correlative approach presented here allows contextualizing, within a three-dimensional anatomic framework, the neurons whose dynamics have been observed in detail in vivo.

  10. Measuring Multiple Minority Stress: The LGBT People of Color Microaggressions Scale

    Balsam, Kimberly F.; Molina, Yamile; Beadnell, Blair; Simoni, Jane; Walters, Karina

    2014-01-01

    Lesbian, gay, and bisexual individuals who are also racial/ethnic minorities (LGBT-POC) are a multiply marginalized population subject to microaggressions associated with both racism and heterosexism. To date, research on this population has been hampered by the lack of a measurement tool to assess the unique experiences associated with the intersection of these oppressions. To address this gap in the literature, we conducted a three-phase, mixed method empirical study to assess microaggressions among LGBT-POC. The LGBT People of Color Microaggressions Scale is an 18-item self-report scale assessing the unique types of microaggressions experienced by ethnic minority LGBT adults. The measure includes three subscales: (a) Racism in LGBT communities, (b) Heterosexism in Racial/Ethnic Minority Communities, and (c) Racism in Dating and Close Relationships, that are theoretically consistent with prior literature on racial/ethnic minority LGBTs and have strong psychometric properties including internal consistency and construct validity in terms of correlations with measures of psychological distress and LGBT-identity variables. Men scored higher on the LGBT-PCMS than women, lesbians and gay men scored higher than bisexual women and men, and Asian Americans scored higher than African Americans and Latina/os. PMID:21604840

  11. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses, which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…
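A minimal sketch of the flavor of such an analysis (not the published MAMCR procedure, which uses community detection on a weighted response network): link answer options that the same respondents choose together, threshold the co-occurrence graph, and read off its connected components as candidate modules. The responses and threshold here are invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

def response_modules(responses, min_count=2):
    """Group answer options that co-occur across answer sheets.

    responses: list of sets of chosen options, one set per respondent.
    Option pairs co-occurring fewer than min_count times are dropped;
    the connected components of the remaining graph are the candidate
    modules.
    """
    cooc = defaultdict(int)
    for sheet in responses:
        for a, b in combinations(sorted(sheet), 2):
            cooc[(a, b)] += 1

    adj = defaultdict(set)
    for (a, b), n in cooc.items():
        if n >= min_count:
            adj[a].add(b)
            adj[b].add(a)

    seen, modules = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:                      # depth-first component sweep
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        modules.append(comp)
    return modules
```

Each module groups distractors that students select together, which is what invites interpretation as a coherent (mis)conception.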

  12. 29 CFR 4010.12 - Alternative method of compliance for certain sponsors of multiple employer plans.

    2010-07-01

    ... BENEFIT GUARANTY CORPORATION CERTAIN REPORTING AND DISCLOSURE REQUIREMENTS ANNUAL FINANCIAL AND ACTUARIAL INFORMATION REPORTING § 4010.12 Alternative method of compliance for certain sponsors of multiple employer... part for an information year if any contributing sponsor of the multiple employer plan provides a...

  13. Patterns in foliar nutrient resorption stoichiometry at multiple scales: controlling factors and ecosystem consequences (Invited)

    Reed, S.; Cleveland, C. C.; Davidson, E. A.; Townsend, A. R.

    2013-12-01

    During leaf senescence, nutrient rich compounds are transported to other parts of the plant, and this 'resorption' recycles nutrients for future growth, reducing losses of potentially limiting nutrients. Variations in leaf chemistry resulting from nutrient resorption also directly affect litter quality, in turn regulating decomposition rates and soil nutrient availability. Here we investigated stoichiometric patterns of nitrogen (N) and phosphorus (P) resorption efficiency at multiple spatial scales. First, we assembled a global database to explore nutrient resorption among and within biomes and to examine potential relationships between resorption stoichiometry and ecosystem nutrient status. Next, we used a forest regeneration chronosequence in Brazil to assess how resorption stoichiometry linked with a suite of other nutrient cycling measures and with ideas of how nutrient limitation may change over secondary forest regrowth. Finally, we measured N:P resorption ratios of six canopy tree species in a Costa Rican tropical forest. We calculated species-specific resorption ratios and compared them with patterns in leaf litter and topsoil nutrient concentrations. At the global scale, N:P resorption ratios increased with latitude and decreased with mean annual temperature (MAT) and precipitation (MAP), exceeding 1 in latitudes >23°. Focusing on tropical sites in our global dataset we found that, despite fewer data and a restricted latitudinal range, a significant relationship between latitude and N:P resorption ratios persisted. In the Amazon Basin chronosequence of regenerating forests, where previous work reported a transition from apparent N limitation in younger forests to P limitation in mature forests, we found N resorption was highest in the youngest forest, whereas P resorption was greatest in the mature forest. Over the course of succession, N resorption efficiency leveled off but P resorption continued to increase with forest age. In Costa Rica, though we found species

  14. Quantifying the heterogeneity of soil compaction, physical soil properties and soil moisture across multiple spatial scales

    Coates, Victoria; Pattison, Ian; Sander, Graham

    2016-04-01

    England's rural landscape is dominated by pastoral agriculture, with 40% of land cover classified as either improved or semi-natural grassland according to the Land Cover Map 2007. Since the Second World War the intensification of agriculture has resulted in greater levels of soil compaction, associated with higher stocking densities in fields. Locally, compaction has led to loss of soil storage and increased levels of ponding in fields. At the catchment scale, soil compaction has been hypothesised to contribute to increased flood risk. Previous research (Pattison, 2011) on a 40km2 catchment (Dacre Beck, Lake District, UK) has shown that when soil characteristics are homogeneously parameterised in a hydrological model, downstream peak discharges can be 65% higher for a heavily compacted soil than for a lightly compacted soil. However, at the catchment scale there is likely to be a significant amount of variability in compaction levels within and between fields, due to multiple controlling factors. This research focuses on one specific type of land use (permanent pasture with cattle grazing) and areas of activity within the field (feeding area, field gate, tree shelter, open field area). The aim was to determine whether the soil characteristics and soil compaction levels are homogeneous in the four areas of the field, and whether these levels stayed the same over the course of the year or differed at the end of the dry (October) and wet (April) periods. Field experiments were conducted in the River Skell catchment, in Yorkshire, UK, which has an area of 120km2. The dynamic cone penetrometer was used to determine the structural properties of the soil, soil samples were collected to assess the bulk density, organic matter content and permeability in the laboratory, and the Hydrosense II was used to determine the soil moisture content in the topsoil. 
Penetration results show that the tree shelter is the most compacted and the open field area

  15. Mapping compound cosmic telescopes containing multiple projected cluster-scale halos

    Ammons, S. Mark [Lawrence Livermore National Laboratory, Physics Division L-210, 7000 East Ave., Livermore, CA 94550 (United States); Wong, Kenneth C. [EACOA Fellow, Institute of Astronomy and Astrophysics, Academia Sinica (ASIAA), Taipei 10641, Taiwan (China); Zabludoff, Ann I. [Steward Observatory, University of Arizona, 933 Cherry Ave., Tucson, AZ 85721 (United States); Keeton, Charles R., E-mail: ammons1@llnl.gov, E-mail: kwong@as.arizona.edu, E-mail: aiz@email.arizona.edu, E-mail: keeton@physics.rutgers.edu [Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States)

    2014-01-20

    Lines of sight with multiple projected cluster-scale gravitational lenses have high total masses and complex lens plane interactions that can boost the area of magnification, or étendue, making detection of faint background sources more likely than elsewhere. To identify these new 'compound' cosmic telescopes, we have found directions in the sky with the highest integrated mass densities, as traced by the projected concentrations of luminous red galaxies (LRGs). We use new galaxy spectroscopy to derive preliminary magnification maps for two such lines of sight with total mass exceeding ∼3 × 10^15 M_☉. From 1151 MMT Hectospec spectra of galaxies down to i_AB = 21.2, we identify two to three group- and cluster-scale halos in each beam. These are well traced by LRGs. The majority of the mass in beam J085007.6+360428 (0850) is contributed by Zwicky 1953, a massive cluster at z = 0.3774, whereas beam J130657.5+463219 (1306) is composed of three halos with virial masses of 6 × 10^14-2 × 10^15 M_☉, one of which is A1682. The magnification maps derived from our mass models based on spectroscopy and Sloan Digital Sky Survey photometry alone display substantial étendue: the 68% confidence bands on the lens plane area with magnification exceeding 10 for a source plane of z_s = 10 are [1.2, 3.8] arcmin^2 for 0850 and [2.3, 6.7] arcmin^2 for 1306. In deep Subaru Suprime-Cam imaging of beam 0850, we serendipitously discover a candidate multiply imaged V-dropout source at z_phot = 5.03. The location of the candidate multiply imaged arcs is consistent with the critical curves for a source plane of z = 5.03 predicted by our mass model. Incorporating the position of the candidate multiply imaged galaxy as a constraint on the critical curve location in 0850 narrows the 68% confidence band on the lens plane area with μ > 10 and z_s = 10 to [1.8, 4.2] arcmin^2, an étendue range comparable to that of

  16. The scale-dependent market trend: Empirical evidences using the lagged DFA method

    Li, Daye; Kou, Zhun; Sun, Qiankun

    2015-09-01

    In this paper we conduct an empirical study and test the efficiency of 44 important market indexes at multiple scales. A modified method based on lagged detrended fluctuation analysis is utilized to maximize the information about long-term correlations contained in the non-zero lags while keeping the margin of error small when measuring the local Hurst exponent. Our empirical results illustrate that a common pattern can be found in the majority of the measured market indexes: they tend to be persistent (local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with economic cycles, it can be concluded that economic cycles can cause anti-persistence at large time scales but that other factors are also at work. The empirical results support the view that financial markets are multi-fractal, and indicate that deviations from efficiency and the choice of model to describe the trend of market prices depend on the forecasting horizon.
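The underlying estimator can be sketched with standard (unlagged) DFA; the paper's lagged modification is not reproduced here, and the input is synthetic white noise rather than index returns:

```python
import numpy as np

def dfa_hurst(x, scales):
    """Estimate the Hurst exponent by standard DFA with linear detrending.
    (The paper's lagged variant is not reproduced.)"""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear fit
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # F(s) ~ s^H, so H is the slope in log-log coordinates
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)              # synthetic uncorrelated "returns"
H = dfa_hurst(white, [16, 32, 64, 128, 256])
print(round(H, 2))                             # close to 0.5: no persistence
```

A persistent series would yield H > 0.5 and an anti-persistent one H < 0.5, which is the distinction the abstract draws between small and large time scales.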

  17. Relevance of multiple spatial scales in habitat models: A case study with amphibians and grasshoppers

    Altmoos, Michael; Henle, Klaus

    2010-11-01

    Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study of three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only, whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference for single habitat variables changed across nested scales. Most environmental variables were significant for a habitat model on only one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant that cannot be recognized on smaller scales. Similar preferences across scales occurred in only 13 out of 79 cases, and in 3 out of 79 cases the preference for and avoidance of the same variable were even reversed among scales. Second, we developed habitat models by using logistic regression on every scale and for all combinations of scales, and analysed how the quality of the habitat models changed with the scales considered. To achieve sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog (Hyla arborea) were at least three scales required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scaled habitat models, we suggest that 2 or 3 scales should be a suitable trade-off when attempting to define a suitable microscale.
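The scale-combination comparison can be sketched with simulated presence/absence data (hypothetical predictors, not the Leipzig data): fit logistic models using one versus two scales and compare their deviances.

```python
import numpy as np

# Sketch (simulated data): does adding a second spatial scale improve a
# logistic habitat model? Predictor names are illustrative assumptions.
rng = np.random.default_rng(6)
n = 500
micro = rng.normal(size=n)     # e.g. micro-scale vegetation cover
macro = rng.normal(size=n)     # e.g. landscape-scale water availability
logit = -0.5 + 1.2 * micro + 0.8 * macro
presence = rng.random(n) < 1 / (1 + np.exp(-logit))

def fit_logistic(X, y, iters=25):
    """Newton/IRLS fit; returns coefficients and the model deviance."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    p = 1 / (1 + np.exp(-X @ beta))
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return beta, -2 * ll

_, dev_micro = fit_logistic(micro[:, None], presence)
_, dev_both = fit_logistic(np.column_stack([micro, macro]), presence)
print(dev_both < dev_micro)    # the two-scale model fits better
```

In practice the deviance gain is weighed against the extra survey cost per scale, which is exactly the trade-off the abstract describes.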

  18. Multi-Scale Entropy Analysis as a Method for Time-Series Analysis of Climate Data

    Heiko Balzter

    2015-03-01

    Evidence is mounting that the temporal dynamics of the climate system are changing at the same time as the average global temperature is increasing due to multiple climate forcings. A large number of extreme weather events such as prolonged cold spells, heatwaves, droughts and floods have been recorded around the world in the past 10 years. Such changes in the temporal scaling behaviour of climate time-series data can be difficult to detect. While there are easy and direct ways of analysing climate data by calculating the means and variances for different levels of temporal aggregation, these methods can miss more subtle changes in their dynamics. This paper describes multi-scale entropy (MSE) analysis as a tool to study climate time-series data and to identify temporal scales of variability and their change over time in climate time-series. MSE estimates the sample entropy of the time-series after coarse-graining at different temporal scales. An application of MSE to Central European, variance-adjusted, mean monthly air temperature anomalies (CRUTEM4v) is provided. The results show that the temporal scales of the current climate (1960–2014) are different from the long-term average (1850–1960). For temporal scale factors longer than 12 months, the sample entropy increased markedly compared to the long-term record. Such an increase can be interpreted, in systems-theoretic terms, as greater complexity in the regional temperature data. From 1961 the patterns of monthly air temperatures are less regular at time-scales greater than 12 months than in the earlier time period. This finding suggests that, at these inter-annual time scales, the temperature variability has become less predictable than in the past. It is possible that climate system feedbacks are expressed in altered temporal scales of the European temperature time-series data. A comparison with the variance and Shannon entropy shows that MSE analysis can provide additional information on the
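The two ingredients of MSE, coarse-graining and sample entropy, can be sketched directly (tolerance fixed from the original series, following the usual convention; white-noise input for illustration rather than the CRUTEM4v anomalies):

```python
import numpy as np

def coarse_grain(x, tau):
    """Average consecutive non-overlapping windows of length tau."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m, r):
    """SampEn(m, r): -log of the conditional probability that template
    vectors matching for m points (Chebyshev distance <= r) also match
    for m + 1 points."""
    def pairs(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return ((d <= r).sum() - len(t)) / 2      # exclude self-matches
    return -np.log(pairs(m + 1) / pairs(m))

rng = np.random.default_rng(1)
x = rng.standard_normal(1500)
r = 0.15 * np.std(x)      # tolerance fixed from the original series
mse = [sample_entropy(coarse_grain(x, tau), 2, r) for tau in (1, 2, 4, 8)]
print([round(v, 2) for v in mse])   # white-noise entropy falls with scale
```

For white noise the MSE curve decreases monotonically with the scale factor; structured signals hold their entropy (or increase it) at larger scales, which is what makes the curve shape diagnostic.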

  19. Strong and nonlinear effects of fragmentation on ecosystem service provision at multiple scales

    Mitchell, Matthew G. E.; Bennett, Elena M.; Gonzalez, Andrew

    2015-09-01

    Human actions, such as converting natural land cover to agricultural or urban land, result in the loss and fragmentation of natural habitat, with important consequences for the provision of ecosystem services. Such habitat loss is especially important for services that are supplied by fragments of natural land cover and that depend on flows of organisms, matter, or people across the landscape to produce benefits, such as pollination, pest regulation, recreation and cultural services. However, our quantitative knowledge about precisely how different patterns of landscape fragmentation might affect the provision of these types of services is limited. We used a simple, spatially explicit model to evaluate the potential impact of natural land cover loss and fragmentation on the provision of hypothetical ecosystem services. Based on current literature, we assumed that fragments of natural land cover provide ecosystem services to the area surrounding them in a distance-dependent manner such that ecosystem service flow depended on proximity to fragments. We modeled seven different patterns of natural land cover loss across landscapes that varied in the overall level of landscape fragmentation. Our model predicts that natural land cover loss will have strong and unimodal effects on ecosystem service provision, with clear thresholds indicating rapid loss of service provision beyond critical levels of natural land cover loss. It also predicts the presence of a tradeoff between maximizing ecosystem service provision and conserving natural land cover, and a mismatch between ecosystem service provision at landscape versus finer spatial scales. Importantly, the pattern of landscape fragmentation mitigated or intensified these tradeoffs and mismatches. Our model suggests that managing patterns of natural land cover loss and fragmentation could help influence the provision of multiple ecosystem services and manage tradeoffs and synergies between services across different human
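The distance-dependent provision idea can be sketched as a toy grid model (the exponential decay and the particular layouts are assumptions for illustration, not the authors' parameterisation):

```python
import numpy as np

def service_provision(natural, decay=2.0):
    """Mean service level over converted cells, where provision decays
    exponentially with distance to the nearest natural-cover cell."""
    ys, xs = np.nonzero(natural)
    frag = np.stack([ys, xs], axis=1)          # fragment cell coordinates
    levels = []
    for y in range(natural.shape[0]):
        for x in range(natural.shape[1]):
            if natural[y, x]:
                continue
            d = np.sqrt(((frag - [y, x]) ** 2).sum(axis=1)).min()
            levels.append(np.exp(-d / decay))  # distance-dependent flow
    return float(np.mean(levels))

rng = np.random.default_rng(2)
# identical 25% natural cover, arranged as one block vs. scattered cells
clumped = np.zeros((20, 20), bool)
clumped[:10, :10] = True
scattered = np.zeros((20, 20), bool)
scattered.flat[rng.choice(400, 100, replace=False)] = True

print(round(service_provision(clumped), 2), round(service_provision(scattered), 2))
```

With the same amount of natural cover, the fragmented layout delivers service to far more of the landscape, illustrating why the pattern of loss, not just its amount, matters in the authors' results.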

  20. Analysis of streamflow variability in Alpine catchments at multiple spatial and temporal scales

    Pérez Ciria, T.; Chiogna, G.

    2017-12-01

    Alpine watersheds play a pivotal role in Europe for water provisioning and for hydropower production. In these catchments, temporal fluctuations of river discharge occur at multiple temporal scales due to natural as well as anthropogenic driving forces. In recent decades, modifications of the flow regime have been observed, and their origin lies in the complex interplay between the construction of dams for hydropower production, changes in water management policies and climatic changes. The alteration of the natural flow has negative impacts on freshwater biodiversity and threatens the ecosystem integrity of the Alpine region. Therefore, understanding the temporal and spatial variability of river discharge has recently become a particular concern for environmental protection and represents a crucial contribution to achieving sustainable water resources management in the Alps. In this work, time series analysis is conducted for selected gauging stations in the Inn and the Adige catchments, which cover a large part of the central and eastern region of the Alps. We analyze the available time series using the continuous wavelet transform and change-point analyses for determining how and where changes have taken place. Although the two catchments belong to different climatic zones of the Greater Alpine Region, their streamflow properties share some similar characteristics. The comparison of the collected streamflow time series in the two catchments permits detecting gradients in the hydrological system dynamics that depend on station elevation, longitudinal location in the Alps and catchment area. This work provides evidence that human activities (e.g., water management practices and flood protection measures, changes in legislation and market regulation) have major impacts on streamflow and should be rigorously considered in hydrological models.
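A minimal change-point step of the kind used in such analyses can be sketched as follows (a single mean-shift search on synthetic discharge data; the study's actual wavelet and change-point machinery is more elaborate):

```python
import numpy as np

def change_point(x):
    """Location of a single mean shift: the split point that minimises the
    summed squared deviations of the two segments (one binary-segmentation step)."""
    best_k, best_cost = None, np.inf
    for k in range(2, len(x) - 1):
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

rng = np.random.default_rng(3)
# synthetic discharge record with a regime shift after observation 60
flow = np.concatenate([rng.normal(100, 5, 60),   # pre-regulation regime
                       rng.normal(80, 5, 40)])   # post-dam regime
k = change_point(flow)
print(k)   # close to the true break at 60
```

Applied recursively to the resulting segments, the same step yields multiple change points, which can then be compared against the dates of dam construction or policy changes.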

  1. A catchment scale evaluation of multiple stressor effects in headwater streams.

    Rasmussen, Jes J; McKnight, Ursula S; Loinaz, Maria C; Thomsen, Nanna I; Olsson, Mikael E; Bjerg, Poul L; Binning, Philip J; Kronvang, Brian

    2013-01-01

    Mitigation activities to improve water quality and quantity in streams, as well as stream management and restoration efforts, are conducted in the European Union with the aim of improving the chemical, physical and ecological status of streams. Headwater streams are often characterised by impairment of hydromorphological, chemical, and ecological conditions due to multiple anthropogenic impacts. However, they are generally disregarded as water bodies for mitigation activities in the European Water Framework Directive despite their importance for supporting a higher ecological quality in higher order streams. We studied 11 headwater streams in the Hove catchment in the Copenhagen region. All sites had substantial physical habitat and water quality impairments due to anthropogenic influence (intensive agriculture, urban settlements, contaminated sites and low base-flow due to water abstraction activities in the catchment). We aimed to identify the dominant anthropogenic stressors at the catchment scale causing ecological impairment of benthic macroinvertebrate communities and to provide a rank-order of importance that could help in prioritising mitigation activities. We identified numerous chemical and hydromorphological impacts, of which several were probably causing major ecological impairments, but we were unable to provide a robust rank-ordering of importance, suggesting that mitigation efforts targeting single anthropogenic stressors in the catchment are unlikely to have substantial effects on the ecological quality of these streams. The SPEcies At Risk (SPEAR) index explained most of the variability in the macroinvertebrate community structure, and notably, SPEAR index scores were often very low (<10% SPEAR abundance). An extensive re-sampling of a subset of the streams provided evidence that insecticides in particular were probably essential contributors to the overall ecological impairment of these streams. 
Our results suggest that headwater streams should be considered in

  2. Statistical theory and transition in multiple-scale-lengths turbulence in plasmas

    Itoh, Sanae-I. [Research Institute for Applied Mechanics, Kyushu Univ., Kasuga, Fukuoka (Japan); Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2001-06-01

    The statistical theory of strong turbulence in inhomogeneous plasmas is developed for cases where fluctuations with different scale-lengths coexist. Nonlinear interactions within the same class of fluctuations as well as nonlinear interplay between different classes of fluctuations are kept in the analysis. Nonlinear interactions are modelled as turbulent drag, nonlinear noise and nonlinear drive, and a set of Langevin equations is formulated. With the help of an Ansatz of a large number of degrees of freedom with positive Lyapunov exponents, the Langevin equations are solved and the fluctuation-dissipation theorem in the presence of strong plasma turbulence is derived. A case where two driving mechanisms (one for the micro mode and the other for the semi-micro mode) coexist is investigated. It is found that there are several states of fluctuations: in one state the micro mode is excited and the semi-micro mode is quenched; in the other state the semi-micro mode is excited and the micro mode remains at a finite but suppressed level. A new type of turbulence transition is obtained, and a cusp-type catastrophe is revealed. A phase diagram is drawn for turbulence composed of multiple classes of fluctuations. The influence of the inhomogeneous global radial electric field is discussed, and new insight is given into the physics of the internal transport barrier. Finally, the nonlocal heat transport due to long-wavelength fluctuations, which are noise-pumped by shorter-wavelength ones, is analyzed and the impact on transient transport problems is discussed. (author)
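The Langevin modelling step can be illustrated on a single damped mode (an Ornstein-Uhlenbeck sketch with assumed coefficients, not the paper's full multi-scale system): integrate dX = −γX dt + √(2D) dW and check the stationary variance against the fluctuation-dissipation value D/γ.

```python
import numpy as np

# Euler-Maruyama integration of one Langevin mode with turbulent drag gamma
# and noise amplitude D. The fluctuation-dissipation relation fixes the
# stationary variance at D/gamma. Coefficients are illustrative assumptions.
rng = np.random.default_rng(4)
gamma, D, dt, n = 2.0, 0.5, 1e-3, 400_000
sig = np.sqrt(2.0 * D * dt)
xi = rng.standard_normal(n)

x, acc = 0.0, 0.0
for i in range(n):
    x += -gamma * x * dt + sig * xi[i]   # drag + noise
    acc += x * x
var_est = acc / n
print(round(var_est, 2), D / gamma)      # sampled variance vs. the FDT value
```

In the paper's setting each class of fluctuations carries its own drag, noise and drive terms, and the interesting physics lies in their coupling; this sketch only shows the balance that underlies the fluctuation-dissipation relation.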

  3. Quantitative atom probe analysis of nanostructure containing clusters and precipitates with multiple length scales

    Marceau, R.K.W.; Stephenson, L.T.; Hutchinson, C.R.; Ringer, S.P.

    2011-01-01

    A model Al-3Cu-(0.05 Sn) (wt%) alloy containing a bimodal distribution of relatively shear-resistant θ' precipitates and shearable GP zones is considered in this study. It has recently been shown that the addition of GP zones to such microstructures can lead to significant increases in strength without a decrease in the uniform elongation. In this study, atom probe tomography (APT) has been used to quantitatively characterise the evolution of the GP zones and the solute distribution in the bimodal microstructure as a function of applied plastic strain. Recent nuclear magnetic resonance (NMR) analysis has clearly shown strain-induced dissolution of the GP zones, which is supported by the current APT data with additional spatial information. There is significant repartitioning of Cu from the GP zones into the solid solution during deformation. A new approach to cluster finding in APT data has been used to quantitatively characterise the evolution of the sizes and shapes of the Cu-containing features in the solid solution as a function of applied strain.
    Research highlights:
    → A new approach to cluster finding in atom probe tomography (APT) data has been used to quantitatively characterise the evolution of the sizes and shapes of Cu-containing features with multiple length scales.
    → A model Al-3Cu-(0.05 Sn) (wt%) alloy containing a bimodal distribution of relatively shear-resistant θ' precipitates and shearable GP zones is considered.
    → APT has been used to quantitatively characterise the evolution of the GP zones and the solute distribution in the bimodal microstructure as a function of applied plastic strain.
    → It is clearly shown that there is strain-induced dissolution of the GP zones, with significant repartitioning of Cu from the GP zones into the solid solution during deformation.

  4. Phenology Data Products to Support Assessment and Forecasting of Phenology on Multiple Spatiotemporal Scales

    Gerst, K.; Enquist, C.; Rosemartin, A.; Denny, E. G.; Marsh, L.; Moore, D. J.; Weltzin, J. F.

    2014-12-01

    The USA National Phenology Network (USA-NPN; www.usanpn.org) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and environmental change. The National Phenology Database maintained by USA-NPN now has over 3.7 million records for plants and animals for the period 1954-2014, with the majority of these observations collected since 2008 as part of a broad, national contributory science strategy. These data have been used in a number of science, conservation and resource management applications, including national assessments of historical and potential future trends in phenology, regional assessments of spatio-temporal variation in organismal activity, and local monitoring for invasive species detection. Customizable data downloads are freely available, and data are accompanied by FGDC-compliant metadata, data-use and data-attribution policies, vetted and documented methodologies and protocols, and version control. While users are free to develop custom algorithms for data cleaning, winnowing and summarization prior to analysis, the National Coordinating Office of USA-NPN is developing a suite of standard data products to facilitate use and application by a diverse set of data users. This presentation provides a progress report on data product development, including: (1) Quality controlled raw phenophase status data; (2) Derived phenometrics (e.g. onset, duration) at multiple scales; (3) Data visualization tools; (4) Tools to support assessment of species interactions and overlap; (5) Species responsiveness to environmental drivers; (6) Spatially gridded phenoclimatological products; and (7) Algorithms for modeling and forecasting future phenological responses. The prioritization of these data products is a direct response to stakeholder needs related to informing management and policy decisions. We anticipate that these products will contribute to broad understanding of plant

  5. Dealing with missing data in a multi-question depression scale: a comparison of imputation methods

    Stuart Heather

    2006-12-01

    Background: Missing data present a challenge to many research projects. The problem is often pronounced in studies utilizing self-report scales, and literature addressing different strategies for dealing with missing data in such circumstances is scarce. The objective of this study was to compare six different imputation techniques for dealing with missing data in the Zung Self-reported Depression Scale (SDS). Methods: 1580 participants from a surgical outcomes study completed the SDS. The SDS is a 20-question scale that respondents complete by circling a value of 1 to 4 for each question. The sum of the responses is calculated and respondents are classified as exhibiting depressive symptoms when their total score is over 40. Missing values were simulated by randomly selecting questions whose values were then deleted (a missing completely at random simulation). Additionally, missing at random and missing not at random simulations were completed. Six imputation methods were then considered: (1) multiple imputation, (2) single regression, (3) individual mean, (4) overall mean, (5) participant's preceding response, and (6) random selection of a value from 1 to 4. For each method, the imputed mean SDS score and standard deviation were compared to the population statistics. The Spearman correlation coefficient, percent misclassified and the Kappa statistic were also calculated. Results: When 10% of values are missing, all the imputation methods except random selection produce Kappa statistics greater than 0.80, indicating 'near perfect' agreement. MI produces the most valid imputed values with a high Kappa statistic (0.89), although both single regression and individual mean imputation also produced favorable results. As the percent of missing information increased to 30%, or when unbalanced missing data were introduced, MI maintained a high Kappa statistic. 
The individual mean and single regression methods produced Kappas in the 'substantial agreement' range
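The mechanics of the simpler strategies can be illustrated on simulated data (hypothetical 20-item responses, not the study's surgical-outcomes dataset): delete values completely at random, fill them by two of the methods, and check agreement with the true classification at the cutoff of 40.

```python
import numpy as np

rng = np.random.default_rng(5)
n, items = 1000, 20
# SDS-style items scored 1-4, skewed low so the cutoff of 40 is informative
scores = rng.choice([1, 2, 3, 4], p=[0.4, 0.3, 0.2, 0.1],
                    size=(n, items)).astype(float)
true_flag = scores.sum(axis=1) > 40          # "depressive symptoms" class

miss = rng.random((n, items)) < 0.10         # 10% missing completely at random
obs = np.where(miss, np.nan, scores)

# individual mean: each respondent's own mean over their answered items
ind = obs.copy()
ind[miss] = np.nanmean(obs, axis=1)[np.nonzero(miss)[0]]
# overall mean: grand mean of every observed response
ovr = np.where(miss, np.nanmean(obs), obs)

agree_ind = np.mean((ind.sum(axis=1) > 40) == true_flag)
agree_ovr = np.mean((ovr.sum(axis=1) > 40) == true_flag)
print(round(agree_ind, 3), round(agree_ovr, 3))
```

The sketch only shows raw classification agreement; the study additionally reports Spearman correlations and chance-corrected Kappa statistics, and compares the full set of six methods including multiple imputation.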

  6. A numerical comparison between the multiple-scales and finite-element solution for sound propagation in lined flow ducts

    Rienstra, S.W.; Eversman, W.

    2001-01-01

    An explicit, analytical, multiple-scales solution for modal sound transmission through slowly varying ducts with mean flow and acoustic lining is tested against a numerical finite-element solution solving the same potential flow equations. The test geometry taken is representative of a high-bypass

  7. Modeling hydrologic responses to deforestation/forestation and climate change at multiple scales in the Southern US and China

    Ge Sun; Steven McNulty; Jianbiao Lu; James Vose; Devendra Amayta; Guoyi Zhou; Zhiqiang Zhang

    2006-01-01

    Watershed management and restoration practices require a clear understanding of the basic eco-hydrologic processes and ecosystem responses to disturbances at multiple scales (Bruijnzeel, 2004; Scott et al., 2005). Worldwide century-long forest hydrologic research has documented that deforestation and forestation (i.e. reforestation and afforestation) can have variable...

  8. Performed and perceived walking ability in relation to the Expanded Disability Status Scale in persons with multiple sclerosis

    Langeskov-Christensen, D; Feys, P; Baert, I

    2017-01-01

    BACKGROUND: The severity of walking impairment in persons with multiple sclerosis (pwMS) at different levels on the expanded disability status scale (EDSS) is unclear. Furthermore, it is unclear if the EDSS is differently related to performed- and perceived walking capacity tests. AIMS: To quantify...

  9. Trace element analysis of environmental samples by multiple prompt gamma-ray analysis method

    Oshima, Masumi; Matsuo, Motoyuki; Shozugawa, Katsumi

    2011-01-01

    The multiple γ-ray detection method has proved to be a high-resolution, high-sensitivity method for nuclide quantification. The neutron prompt γ-ray analysis method is successfully extended by combining it with multiple γ-ray detection, a technique called multiple prompt γ-ray analysis (MPGA). In this review we present the principle of this method and its characteristics. Several examples of its application to environmental samples, in particular river sediments from urban areas and sea sediment samples, are also described. (author)

  10. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    Sabourin, David; Snakenborg, Detlef; Dufva, Hans Martin

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observ...

  11. Analysis and performance estimation of the Conjugate Gradient method on multiple GPUs

    Verschoor, M.; Jalba, A.C.

    2012-01-01

    The Conjugate Gradient (CG) method is a widely-used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large amount of Sparse-Matrix Vector (SpMV) multiplications, vector reductions and other vector operations to be performed. We present a number of
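For reference, the serial CG iteration being parallelised is short. A minimal dense-matrix sketch follows; a GPU implementation replaces the matrix-vector product with an SpMV kernel and the dot products with device-wide reductions, but the structure is identical.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A. Each iteration costs
    one matrix-vector product (the SpMV on a GPU) plus a few vector
    operations and reductions."""
    x = np.zeros_like(b)
    r = b - A @ x              # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                         # dominant SpMV cost
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r                     # vector reduction
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD test matrix: 1-D Poisson (tridiagonal)
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))    # True
```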

  12. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
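One simple instance of the spreadsheet-style Monte Carlo idea is bootstrap resampling of a class statistic (the grades below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
grades = rng.normal(75, 10, size=30)     # hypothetical class of 30 scores

# Monte Carlo (bootstrap) resampling: redraw the class with replacement many
# times to estimate the sampling uncertainty of the class mean.
boot_means = np.array([rng.choice(grades, size=grades.size, replace=True).mean()
                       for _ in range(5000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(round(grades.mean(), 1), round(lo, 1), round(hi, 1))
```

The same resampling loop is what a spreadsheet implementation automates with random-draw formulas; multiple imputation extends it by drawing plausible values for missing cells before each re-estimation.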

  13. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.
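The multiple-shooting structure itself can be sketched on a toy problem. The sketch below (plain RK4 and a finite-difference Newton step, not the paper's ESDIRK/inexact-Newton machinery) solves x' = -x, x(0) = 1 on [0, 3] with three shooting intervals, treating the interval start states as unknowns constrained by continuity:

```python
import numpy as np

def integrate(s, t0, t1, steps=50):
    """Classical RK4 for x' = -x from state s over [t0, t1] (a stand-in for
    the implicit ESDIRK integrator used in the paper)."""
    h = (t1 - t0) / steps
    x = float(s)
    for _ in range(steps):
        k1 = -x
        k2 = -(x + h / 2 * k1)
        k3 = -(x + h / 2 * k2)
        k4 = -(x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

ts = [0.0, 1.0, 2.0, 3.0]                  # three shooting intervals

def residual(u):
    """Continuity defects: each interval endpoint must equal the next
    interval's initial state; s0 = 1 is the fixed initial condition."""
    s = np.concatenate(([1.0], u))
    return np.array([integrate(s[k], ts[k], ts[k + 1]) - s[k + 1]
                     for k in range(2)])

u = np.array([0.5, 0.5])                   # guesses for the interval start states
for _ in range(10):                        # Newton with a finite-difference Jacobian
    F = residual(u)
    if np.max(np.abs(F)) < 1e-12:
        break
    J = np.zeros((2, 2))
    for j in range(2):
        du = u.copy()
        du[j] += 1e-6
        J[:, j] = (residual(du) - F) / 1e-6
    u -= np.linalg.solve(J, F)

x_end = integrate(u[1], 2.0, 3.0)
print(round(u[0], 4), round(u[1], 4), round(x_end, 4))   # ≈ e^-1, e^-2, e^-3
```

In the optimal-control setting each interval additionally carries control parameters, and the sensitivities of `integrate` with respect to its initial state and controls supply the Jacobian blocks, which is where the paper's factorization reuse pays off.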

  14. Simple and effective method of determining multiplicity distribution law of neutrons emitted by fissionable material with significant self -multiplication effect

    Yanjushkin, V.A.

    1991-01-01

    In developing new methods for the non-destructive determination of the full plutonium mass in nuclear materials and products involved in the uranium-plutonium fuel cycle from their intrinsic neutron radiation, it may be useful to know not only individual moments but the full multiplicity distribution law of the neutrons leaving the material surface, using the following as parameters: first, the unconditional multiplicity distribution laws of neutrons produced in spontaneous and induced fission of the corresponding nuclei of the given fissionable material, and the unconditional multiplicity distribution law of neutrons from (α,n) reactions on light nuclei of the elements making up the material's chemical composition; and second, the probability of induced fission of the material's nuclei by an incident neutron of any origin produced in previous fissions or (α,n) reactions. An attempt to develop such a theory has been undertaken, and here the author proposes his approach to the problem. The main advantage of this approach, in our view, is its mathematical simplicity and ease of implementation on a computer. In principle, the model guarantees arbitrarily good accuracy at any real value of the induced fission probability, without limitations related to the physico-chemical composition of the nuclear material.
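A branching-process Monte Carlo makes the self-multiplication effect concrete. The sketch below uses illustrative multiplicity laws and an assumed induced-fission probability (not the author's analytic model) and checks the simulated mean against the standard point-model leakage multiplication:

```python
import numpy as np

rng = np.random.default_rng(9)
# Illustrative (not evaluated-data) multiplicity laws for spontaneous and
# induced fission, and an assumed induced-fission probability per neutron.
P_sf = {0: 0.06, 1: 0.23, 2: 0.33, 3: 0.25, 4: 0.10, 5: 0.03}
P_if = {0: 0.03, 1: 0.16, 2: 0.32, 3: 0.30, 4: 0.15, 5: 0.04}
p_fis = 0.1

def sample(dist):
    ks = list(dist)
    return int(rng.choice(ks, p=[dist[k] for k in ks]))

def leaked_multiplicity():
    """Neutrons leaving the surface per spontaneous fission: each neutron
    either leaks or induces another fission that adds more neutrons."""
    pending = sample(P_sf)
    leaked = 0
    while pending:
        pending -= 1
        if rng.random() < p_fis:
            pending += sample(P_if)   # induced fission: self-multiplication
        else:
            leaked += 1
    return leaked

counts = np.array([leaked_multiplicity() for _ in range(20000)])
nu_sf = sum(k * p for k, p in P_sf.items())
nu_if = sum(k * p for k, p in P_if.items())
M = (1 - p_fis) / (1 - p_fis * nu_if)   # point-model leakage multiplication
print(round(counts.mean(), 2), round(nu_sf * M, 2))  # simulated vs. analytic mean
```

The histogram of `counts` is exactly the leaked-multiplicity distribution law the abstract discusses; the analytic check covers only its first moment.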

  15. The multiple imputation method: a case study involving secondary data analysis.

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate with the example of a secondary data analysis study the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiple imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiple imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
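With only one incomplete variable the chained-equation idea reduces to repeated stochastic regression imputation, but the basic cycle of regress, draw noise, refill, and pool across several completed datasets can still be sketched (synthetic wage data, not the nurse-survey dataset):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2000
wage = rng.normal(30, 5, n)                 # synthetic hourly wages
exper = 0.8 * wage + rng.normal(0, 3, n)    # correlated, fully observed covariate

miss = rng.random(n) < 0.2                  # 20% of wages missing at random
wage_obs = wage.copy()
wage_obs[miss] = np.nan

def impute_once():
    """One completed dataset: regress wage on exper, refill missing values
    with predictions plus a fresh residual draw, and iterate."""
    w = wage_obs.copy()
    w[miss] = np.nanmean(wage_obs)          # crude starting fill
    for _ in range(5):                      # chained updates
        b, a = np.polyfit(exper[~miss], w[~miss], 1)
        resid_sd = np.std(w[~miss] - (a + b * exper[~miss]))
        w[miss] = a + b * exper[miss] + rng.normal(0, resid_sd, miss.sum())
    return w

completed = [impute_once() for _ in range(5)]   # five imputed datasets
pooled_mean = np.mean([w.mean() for w in completed])
print(round(pooled_mean, 1))    # close to the full-data mean of about 30
```

Real chained-equation software cycles over many incomplete variables and pools full regression results (Rubin's rules), not just means; this sketch shows only the noise-preserving refill step.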

  16. Features of the method of large-scale paleolandscape reconstructions

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstruction was tested in a key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main periods of the Holocene and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of the factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, the edaphic properties of the main types of natural boundaries were analysed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of restored paleolakes were determined based on the thickness and spatial confinement of decay ooze. 5. Actual paleolandscape reconstructions for the main periods of the Holocene, performed on the basis of the landscape-edaphic method; in reconstructing the original, indigenous flora we relied on data from palynological studies conducted in the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. 
The reconstruction of the dynamics of the

  17. Application of discrete scale invariance method on pipe rupture

    Rajkovic, M.; Mihailovic, Z.; Riznic, J.

    2007-01-01

    A process of material failure of a mechanical system in the form of cracks and microcracks, a catastrophic phenomenon of considerable technological and scientific importance, may be forecast according to recent advances in the theory of critical phenomena in statistical physics. The critical-rupture scenario states that, in many concrete and composite heterogeneous materials under compression, and in materials with large distributed residual stresses, rupture is a genuine critical point, i.e., the culmination of a self-organization of damage and cracking characterized by power-law signatures. The concept of discrete scale invariance leads to a complex critical exponent (or dimension) and may occur spontaneously in systems and materials developing rupture. It establishes, theoretically, the power-law dependence of a measurable observable, such as the rate of acoustic emissions radiated during loading or the rate of heat released during the process, upon the time to failure. The problem, however, is that the power law can be distinguished from other parametric functional forms, such as an exponential, only close to the critical time. In this paper we modify the functional renormalization method to include a noise-elimination procedure and dimension reduction. The aim is to predict the critical rupture time from knowledge of the power-law parameters at early times prior to rupture alone, based on the assumption that the dynamics close to rupture is governed by the power-law dependence of the temperature measured along the perimeter of the tube upon the time-to-failure. Such an analysis would not only enhance the precision of predictions related to the rupture mechanism but also significantly help in determining and predicting leak rates. The predictions will be compared to experimental data on tubes made of Zr-2.5%Nb. Note: The views expressed in the paper are those of the authors and do not necessarily represent those of the commission. 
(author)
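The early-time power-law fit that underlies this kind of prediction can be sketched as follows. All numerical values are illustrative assumptions, not parameters from the paper; `rate` stands in for the acoustic-emission or heat-release observable, and the critical time is taken as known for the fit:

```python
import numpy as np

# Synthetic observable following a power law in time-to-failure:
# rate(t) = A * (t_c - t)**(-m)   (illustrative values only)
t_c, A, m = 100.0, 5.0, 0.7
t = np.linspace(0.0, 95.0, 200)
rate = A * (t_c - t) ** (-m)

# With t_c assumed known, the exponent follows from a log-log linear fit:
# log(rate) = log(A) - m * log(t_c - t)
slope, intercept = np.polyfit(np.log(t_c - t), np.log(rate), 1)
m_est = -slope           # recovered power-law exponent
A_est = np.exp(intercept)
```

In practice the difficulty noted in the abstract is exactly here: far from `t_c` an exponential fits the same data nearly as well, so the fit is only discriminating close to the critical time.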

  18. Network Events on Multiple Space and Time Scales in Cultured Neural Networks and in a Stochastic Rate Model.

    Guido Gigante

    2015-11-01

    Cortical networks, in-vitro as well as in-vivo, can spontaneously generate a variety of collective dynamical events such as network spikes, UP and DOWN states, global oscillations, and avalanches. Though each of them has been variously recognized in previous works as an expression of the excitability of the cortical tissue and the associated nonlinear dynamics, a unified picture of the determinant factors (dynamical and architectural) is desirable and not yet available. Progress has also been partially hindered by the use of a variety of statistical measures to define the network events of interest. We propose here a common probabilistic definition of network events that, applied to the firing activity of cultured neural networks, highlights the co-occurrence of network spikes, power-law distributed avalanches, and exponentially distributed 'quasi-orbits', which offer a third type of collective behavior. A rate model, including synaptic excitation and inhibition with no imposed topology, synaptic short-term depression, and finite-size noise, accounts for all these different, coexisting phenomena. We find that their emergence is largely regulated by the proximity to an oscillatory instability of the dynamics, where the non-linear excitable behavior leads to a self-amplification of activity fluctuations over a wide range of scales in space and time. In this sense, the cultured network dynamics is compatible with an excitation-inhibition balance corresponding to a slightly sub-critical regime. Finally, we propose and test a method to infer the characteristic time of the fatigue process from the observed time course of the network's firing rate. Unlike the model, which possesses a single fatigue mechanism, the cultured network appears to show multiple time scales, signalling the possible coexistence of different fatigue mechanisms.

  19. Homogeneity analysis with k sets of variables: An alternating least squares method with optimal scaling features

    van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée

    1988-01-01

    Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple

  20. Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics

    Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.

    2018-01-01

    This article describes the application of the Taguchi method to the selective laser melting of a combustion-chamber sector, using numerical and physical experiments, with the goal of achieving minimal thermal deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity, and two factors compensating for the effect of residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with factor variation at three levels (L9). As quality criteria, the values of distortion for 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, grey relational analysis was used to solve the optimization problem for multiple quality parameters. Based on the parameters obtained, the combustion chamber segments of the gas turbine engine were manufactured.

  1. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, each of which co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is demonstrated here to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available at http://researcher.watson.ibm.com/project/5669.

  2. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
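A minimal sketch of the counting step behind such a construction: given an already-partitioned (discretized) trajectory and a chosen mesoscopic time step τ, transition counts with a uniform Dirichlet prior yield a posterior-mean transition matrix, whose diagonal can then be checked against the CMMM criterion (diagonal elements ≥ 0.5). The toy trajectory and the 3-state partition are assumptions for illustration; the paper's full Bayesian method also infers the partition itself:

```python
import numpy as np

# Hypothetical mesostate trajectory (already partitioned); states 0..N-1.
traj = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 2, 2, 2, 2, 2, 0, 0])
n_states, tau = 3, 1

# Transition counts at lag tau, with a uniform Dirichlet(alpha=1) prior;
# the posterior-mean transition matrix is (counts + 1) / (row sums).
counts = np.zeros((n_states, n_states))
for a, b in zip(traj[:-tau], traj[tau:]):
    counts[a, b] += 1
T = (counts + 1.0) / (counts + 1.0).sum(axis=1, keepdims=True)

# CMMM-style check: self-transition probabilities on the diagonal.
diag = np.diag(T)
```

Larger τ coarsens the dynamics and raises the diagonal entries, which is how a consistent mesoscopic model is obtained for a chosen time scale.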

  3. Uncertainty analysis of multiple canister repository model by large-scale calculation

    Tsujimoto, K.; Okuda, H.; Ahn, J.

    2007-01-01

    A prototype uncertainty analysis has been made by using the multiple-canister radionuclide transport code, VR, for performance assessment of a high-level radioactive waste repository. Fractures in the host rock determine the main conduits of groundwater flow, and thus significantly affect the magnitude of radionuclide release rates from the repository. In this study, the probability distribution function (PDF) for the number of connected canisters in the same water-bearing fracture cluster has been determined in Monte-Carlo fashion by running the FFDF code with assumed PDFs for the fracture geometry. The uncertainty in the release rate of 237Np from a hypothetical repository containing 100 canisters has been quantitatively evaluated by using the VR code with PDFs for the number of connected canisters and the near-field rock porosity. The calculation results show that the mass transport is greatly affected by (1) the magnitude of the radionuclide source, determined by the number of canisters connected by the fracture cluster, and (2) the canister concentration effect in the same fracture network. The results also show two conflicting tendencies: the more fractures in the repository model space, the greater the average value, but the smaller the uncertainty, of the peak fractional release rate. To perform this vast amount of calculation, we have utilized the Earth Simulator and SR8000. A multi-level hybrid programming method is applied in the optimization to exploit the high performance of the Earth Simulator. Latin Hypercube Sampling has been utilized to reduce the number of samplings in the Monte-Carlo calculation. (authors)
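Latin Hypercube Sampling, used here to cut down the number of Monte-Carlo samplings, can be sketched in a few lines. This is a generic textbook construction (one stratified sample per equal-probability bin in each dimension, then per-dimension shuffling), not the code used on the Earth Simulator:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, n_dims, rng):
    """One stratified uniform sample per equal-probability bin, per dimension."""
    # Stratified draws: sample i lies in bin [i/n, (i+1)/n).
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])   # shuffle each dimension to decorrelate them
    return u

samples = latin_hypercube(10, 2, rng)
```

Each column is then mapped through the inverse CDF of the corresponding input PDF (fracture geometry, porosity, etc.), guaranteeing full coverage of every marginal with far fewer samples than plain Monte Carlo.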

  4. The extended Beer-Lambert theory for ray tracing modeling of LED chip-scaled packaging application with multiple luminescence materials

    Yuan, Cadmus C. A.

    2015-12-01

    Optical ray-tracing models have applied the Beer-Lambert method in single-luminescence-material systems to model the white-light pattern from a blue LED source. This paper extends the algorithm to a mixed system of multiple luminescence materials by introducing the equivalent excitation and emission spectra of the individual luminescence materials. The quantum efficiencies of the individual materials and the self-absorption of the multiple-luminescence-material system are considered as well. With this combination, researchers are able to model the luminescence characteristics of LED chip-scaled packaging (CSP), which provides simple process steps and freedom in the geometrical dimensions of the luminescence material. The method is first validated against experimental results; a further parametric investigation is then conducted.
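The Beer-Lambert core of such a ray-tracing model attenuates each ray exponentially with absorption coefficient and path length; for a stack of luminescence layers the exponents add. A minimal sketch with made-up coefficients (real values would come from the measured equivalent excitation spectra):

```python
import numpy as np

# Beer-Lambert attenuation through a stack of luminescent layers.
# mu[i]: absorption coefficient of layer i (1/mm); d[i]: thickness (mm).
# Illustrative numbers only, not measured material data.
I0 = 1.0
mu = np.array([0.8, 1.5])   # e.g. two phosphor layers in a CSP
d = np.array([0.10, 0.05])

I_transmitted = I0 * np.exp(-(mu * d).sum())
absorbed_fraction = 1.0 - I_transmitted / I0   # available for re-emission
```

The absorbed fraction, scaled by each material's quantum efficiency and emission spectrum, is what gets re-emitted (and possibly re-absorbed) in the multi-material extension.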

  5. Multiple and mixed methods in formative evaluation: Is more better? Reflections from a South African study

    Willem Odendaal

    2016-12-01

    Abstract Background Formative programme evaluations assess intervention implementation processes, and are widely seen as a way of unlocking the 'black box' of any programme in order to explore and understand why a programme functions as it does. However, few critical assessments of the methods used in such evaluations are available, and especially few that reflect on how well the evaluation achieved its objectives. This paper describes a formative evaluation of a community-based lay health worker programme for TB and HIV/AIDS clients across three low-income communities in South Africa. It assesses each of the methods used in relation to the evaluation objectives, and offers suggestions on ways of optimising the use of multiple, mixed methods within formative evaluations of complex health system interventions. Methods The evaluation's qualitative methods comprised interviews, focus groups, observations and diary keeping. Quantitative methods included a time-and-motion study of the lay health workers' scope of practice and a client survey. The authors conceptualised and conducted the evaluation and, through iterative discussions, assessed the methods used and their results. Results Overall, the evaluation highlighted programme issues and insights beyond the reach of traditional single-method evaluations. The strengths of the multiple, mixed methods in this evaluation included a detailed description and nuanced understanding of the programme and its implementation, and triangulation of the perspectives and experiences of clients, lay health workers, and programme managers. However, the use of multiple methods needs to be carefully planned and implemented, as this approach can overstretch the logistic and analytic resources of an evaluation. Conclusions For complex interventions, formative evaluation designs including multiple qualitative and quantitative methods hold distinct advantages over single-method evaluations. However

  6. Multiple Site-Directed and Saturation Mutagenesis by the Patch Cloning Method.

    Taniguchi, Naohiro; Murakami, Hiroshi

    2017-01-01

    Constructing protein-coding genes with desired mutations is a basic step in protein engineering. Herein, we describe a multiple site-directed and saturation mutagenesis method, termed MUPAC. This method has been used to introduce multiple site-directed mutations into the green fluorescent protein gene and the Moloney murine leukemia virus reverse transcriptase gene. Moreover, the method was also successfully used to introduce randomized codons at five desired positions in the green fluorescent protein gene, and for simple DNA assembly for cloning.

  7. Forward-weighted CADIS method for variance reduction of Monte Carlo calculations of distributions and multiple localized quantities

    Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.

    2009-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of the application of the method to enabling the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)

  8. A Scale Development for Teacher Competencies on Cooperative Learning Method

    Kocabas, Ayfer; Erbil, Deniz Gokce

    2017-01-01

    Cooperative learning is an active learning method that has been studied for many years both in Turkey and worldwide. Although cooperative learning appears in teacher training programs, it is often not implemented fully in accordance with its principles. Research results indicate that teachers have problems with…

  9. The Initial Rise Method in the case of multiple trapping levels

    Furetta, C.; Guzman, S.; Cruz Z, E.

    2009-10-01

    The aim of this paper is to extend the well-known Initial Rise (IR) method to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal herb and Oregano spice, because the shape of their thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)
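The core fit of the IR method can be sketched as follows: on the low-temperature rising edge of a glow peak the intensity follows I(T) ∝ exp(-E/kT), so ln I plotted against 1/T is a straight line whose slope gives the trap depth E. A minimal sketch on synthetic single-trap data (all values illustrative; the multiple-level extension applies this fit segment by segment):

```python
import numpy as np

k_B = 8.617e-5                       # Boltzmann constant, eV/K

# Synthetic initial-rise data for one trap of depth E = 1.0 eV (illustrative).
E_true = 1.0
T = np.linspace(350.0, 400.0, 30)    # temperatures on the rising edge, K
I = 1e12 * np.exp(-E_true / (k_B * T))

# IR method: ln(I) vs 1/T is linear with slope -E/k_B.
slope, _ = np.polyfit(1.0 / T, np.log(I), 1)
E_est = -slope * k_B                 # recovered trap depth, eV
```

With a distribution of traps, the measured slope varies along the rising edge, which is exactly the signature that motivates extending IR beyond the single-level case.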

  10. Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method

    Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui

    1991-01-01

    A multiple linear regression method was used to compute γ spectra of uranium ore samples and to calculate contents of U, Ra, Th, and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed for obtaining response coefficients
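The regression step described above can be sketched as follows: the measured spectrum in a set of energy windows is modeled as a linear mix of per-nuclide response spectra, and the contents are recovered by least squares. The response matrix below is purely illustrative, not measured response coefficients:

```python
import numpy as np

# Rows: energy windows; columns: response spectra (counts per unit content)
# of U, Ra, Th, K.  Values are illustrative stand-ins only.
R = np.array([[0.9, 0.2, 0.1, 0.0],
              [0.3, 0.8, 0.2, 0.1],
              [0.1, 0.3, 0.9, 0.1],
              [0.0, 0.1, 0.2, 0.7],
              [0.2, 0.1, 0.1, 0.9]])

contents_true = np.array([4.0, 2.0, 1.0, 3.0])   # U, Ra, Th, K contents
spectrum = R @ contents_true                      # noiseless "measurement"

# Multiple linear regression: least-squares estimate of the contents.
contents_est, *_ = np.linalg.lstsq(R, spectrum, rcond=None)
```

The advantage noted in the abstract shows up here: the columns of R can be calibrated from ore samples directly, so no pure U, Ra, Th and K standards are required.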

  11. The Initial Rise Method in the case of multiple trapping levels

    Furetta, C. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F. (Mexico); Guzman, S.; Cruz Z, E. [Instituto de Ciencias Nucleares, UNAM, A. P. 70-543, 04510 Mexico D. F. (Mexico)

    2009-10-15

    The aim of this paper is to extend the well-known Initial Rise (IR) method to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal herb and Oregano spice, because the shape of their thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. (Author)

  12. A method for the generation of random multiple Coulomb scattering angles

    Campbell, J.R.

    1995-06-01

    A method for the random generation of spatial angles drawn from non-Gaussian multiple Coulomb scattering distributions is presented. The method employs direct numerical inversion of cumulative probability distributions computed from the universal non-Gaussian angular distributions of Marion and Zimmerman. (author). 12 refs., 3 figs
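Direct numerical inversion of a tabulated cumulative distribution can be sketched as follows; the angular distribution used here is an illustrative stand-in, not the Marion-Zimmerman form:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tabulated (non-Gaussian) scattering-angle distribution on a grid; the
# cumulative distribution is built numerically and inverted by interpolation.
theta = np.linspace(0.0, 0.2, 500)            # angle grid, rad (illustrative)
pdf = theta * np.exp(-(theta / 0.03) ** 2)    # stand-in angular distribution
cdf = np.cumsum(pdf)
cdf /= cdf[-1]                                # normalize to [0, 1]

# Direct numerical inversion: map uniform deviates through the inverse CDF.
u = rng.random(10_000)
angles = np.interp(u, cdf, theta)             # random multiple-scattering angles
```

Because the inversion is a table lookup, any tabulated angular distribution (Gaussian core plus non-Gaussian tail) can be sampled at the same cost.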

  13. Hadronic multiplicity and total cross-section: a new scaling in wide energy range

    Kobylinsky, N.A.; Martynov, E.S.; Shelest, V.P.

    1983-01-01

    The ratio of mean multiplicity to total cross-section is shown to be the same for all Regge models and to rise with energy as ln s, which is confirmed by experimental data. Hence, the power of multiplicity growth is unambiguously connected with that of the total cross-section. The observed growth, approximately ln²s, indicates a dipole character of the pomeron singularity

  14. Ecosystem assessment methods for cumulative effects at the regional scale

    Hunsaker, C.T.

    1989-01-01

    Environmental issues such as nonpoint-source pollution, acid rain, reduced biodiversity, land use change, and climate change have widespread ecological impacts and require an integrated assessment approach. Since 1978, the implementing regulations for the National Environmental Policy Act (NEPA) have required assessment of potential cumulative environmental impacts. Current environmental issues have encouraged ecologists to improve their understanding of ecosystem process and function at several spatial scales. However, management activities usually occur at the local scale, and there is little consideration of the potential impacts to the environmental quality of a region. This paper proposes that regional ecological risk assessment provides a useful approach for assisting scientists in accomplishing the task of assessing cumulative impacts. Critical issues such as spatial heterogeneity, boundary definition, and data aggregation are discussed. Examples from an assessment of acidic deposition effects on fish in Adirondack lakes illustrate the importance of integrated data bases, associated modeling efforts, and boundary definition at the regional scale

  15. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    Lee, Sang Il

    1992-02-01

    A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend: a depressurization phase and a pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase, the phenomena with a strong influence on the critical parameters are identified, and the scaling parameters governing those phenomena are generated by the present method. To validate the model used, the Marviken CFT and the 336-rod-bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but show at least qualitative agreement with the experimental results. In order to validate whether the scaled-down model adequately represents the important phenomena, we simulate the nondimensional pressure response of a cold-leg 4-inch-break transient for AP-600 and for the scaled-down model. The results of the present method are in excellent agreement with those for AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR

  16. An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery

    Haiyan Gu

    2018-04-01

    Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive "meaningful objects". While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm from graph theory is combined with the minimum heterogeneity rule (MHR) algorithm used in FNEA. The MST algorithm is used for the initial segmentation, while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partitioning and a "reverse searching-forward processing" chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne, SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites demonstrated its accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, hyperspectral), while the accuracy is comparable with that of the FNEA method.
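The MST-based initial segmentation rests on Kruskal's algorithm: edges (neighboring-pixel pairs weighted by spectral heterogeneity) are added in increasing weight order whenever they join two different segments. A toy sketch on a 4-pixel graph with illustrative weights, using a union-find structure:

```python
# Minimal Kruskal MST on a 4-pixel graph; edge weights are toy stand-ins
# for the spectral heterogeneity between neighboring pixels.
edges = [(0.5, 0, 1), (2.0, 1, 2), (0.3, 2, 3), (1.5, 0, 2)]

parent = list(range(4))          # union-find forest over pixels

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

mst = []
for w, a, b in sorted(edges):    # cheapest (most homogeneous) edges first
    ra, rb = find(a), find(b)
    if ra != rb:                 # edge joins two distinct segments: merge
        parent[ra] = rb
        mst.append((w, a, b))
```

Cutting the heaviest MST edges then yields the initial segments, which the MHR criterion subsequently merges at coarser scales.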

  17. Measures of spike train synchrony for data with multiple time scales

    Satuvuori, Eero; Mulansky, Mario; Bozanic, Nebojsa; Malvestio, Irene; Zeldenrust, Fleur; Lenk, Kerstin; Kreuz, Thomas

    2017-01-01

    Background Measures of spike train synchrony are widely used in both experimental and computational neuroscience. Time-scale independent and parameter-free measures, such as the ISI-distance, the SPIKE-distance and SPIKE-synchronization, are preferable to time scale parametric measures, since by

  18. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models

    Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans

    2012-01-01

    Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state-of-the-art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits their applicability. The method proposed here is designed to provide a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. The proposed methodology is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on development of childhood asthma.

  19. Study of the multiple scattering effect in TEBENE using the Monte Carlo method

    Singkarat, Somsorn.

    1990-01-01

    The neutron time-of-flight and energy spectra, from the TEBENE set-up, have been calculated by a computer program using the Monte Carlo method. The neutron multiple scattering within the polyethylene scatterer ring is closely investigated. The results show that multiple scattering has a significant effect on the detected neutron yield. They also indicate that the thickness of the scatterer ring has to be carefully chosen. (author)

  20. Personality factors in recently diagnosed multiple sclerosis patients: a preliminary investigation with the NEO-FFI scale

    Aline Braz de Lima

    2015-03-01

    This article describes some prevalent personality dimensions of recently diagnosed multiple sclerosis patients. A sample of 33 women recently diagnosed with relapsing-remitting multiple sclerosis (RRMS) was assessed with the NEO-FFI personality scale. The Beck depression (BDI) and anxiety (BAI) scales were also used. No significant levels of anxiety or depression were identified in this group. As for personality factors, conscientiousness was the most common factor found, whereas openness to experience was the least observed. Literature on the relationship between personality and MS is scarce, and there are no Brazilian studies on this subject. Some personality traits might complicate or facilitate the experience of living with a chronic, disabling and uncertain neurological condition such as MS.

  1. A linear multiple balance method for discrete ordinates neutron transport equations

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

    A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux to the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ⁴), whereas that of diamond differencing is O(Δ²). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme, which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the linear multiple balance solution method is effective and sufficiently efficient
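The Simpson's-rule relation between edge, center, and cell-average values that such balance equations exploit is exact whenever the flux is quadratic within a cell: avg = (f_L + 4·f_C + f_R) / 6. A quick numerical check on an arbitrary (illustrative) quadratic:

```python
# For any quadratic f on [x_L, x_R], Simpson's rule recovers the cell
# average exactly from the two edge values and the center value.
def f(x):                       # arbitrary quadratic, illustrative only
    return 2.0 * x * x - 3.0 * x + 1.0

xL, xR = 0.0, 1.0
xC = 0.5 * (xL + xR)
avg_simpson = (f(xL) + 4.0 * f(xC) + f(xR)) / 6.0

# Exact average over [0, 1]: integral of 2x^2 - 3x + 1 is 2/3 - 3/2 + 1 = 1/6.
avg_exact = 1.0 / 6.0
```

This exactness for quadratics is what lifts the scheme's truncation error to fourth order, versus second order for diamond differencing, which assumes only a linear in-cell profile.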

  2. Scaling Methods to Measure Psychopathology in Persons with Intellectual Disabilities

    Matson, Johnny L.; Belva, Brian C.; Hattier, Megan A.; Matson, Michael L.

    2012-01-01

    Psychopathology prior to the last four decades was generally viewed as a set of problems and disorders that did not occur in persons with intellectual disabilities (ID). That notion now seems very antiquated. In no small part, a revolutionary development of scales worldwide has occurred for the assessment of emotional problems in persons with ID.…

  3. The Large-Scale Structure of Scientific Method

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  4. Newton Methods for Large Scale Problems in Machine Learning

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  5. Determination of 226Ra contamination depth in soil using the multiple photopeaks method

    Haddad, Kh.; Al-Masri, M.S.; Doubal, A.W.

    2014-01-01

    Radioactive contamination presents a diverse range of challenges in many industries. Determination of the depth of radioactive contamination plays a vital role in the assessment of contaminated sites, because it can be used to estimate the activity content. It is traditionally determined by measuring the activity distribution along the depth. This approach gives accurate results, but it is time-consuming, lengthy and costly. In this work, the multiple-photopeaks method was developed for determining the depth of 226Ra contamination in NORM-contaminated soil using in-situ gamma spectrometry. The method is based on a linear correlation between the attenuation ratio of different gamma lines emitted by 214Bi and the 226Ra contamination depth. Although approximate, this method is much simpler, faster and cheaper than the traditional one, and it can be applied to any multiple-gamma-emitter contaminant. -- Highlights: • The multiple-photopeaks method was developed for 226Ra contamination depth determination using in-situ gamma spectrometry. • The method is based on a linear correlation between the attenuation ratio of 214Bi gamma lines and the 226Ra contamination depth. • The method is simpler, faster and cheaper than the traditional one and can be applied to any multiple-gamma contaminant
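The depth extraction behind such a method can be sketched from the attenuation ratio of two lines: if both lines originate at depth d, their intensity ratio decays as exp(-(μ₁-μ₂)d) relative to the unattenuated emission ratio, so d follows from the measured ratio. The attenuation coefficients and ratios below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Two gamma lines attenuated by a soil overburden of depth d:
#   R(d) = I1/I2 = R0 * exp(-(mu1 - mu2) * d)
# hence  d = ln(R0 / R) / (mu1 - mu2).   All numbers are illustrative.
mu1, mu2 = 0.020, 0.012    # linear attenuation coefficients in soil, 1/cm
R0 = 1.8                   # unattenuated (surface) emission ratio

d_true = 15.0              # cm, "true" burial depth for this synthetic case
R_measured = R0 * np.exp(-(mu1 - mu2) * d_true)

d_est = np.log(R0 / R_measured) / (mu1 - mu2)   # recovered depth, cm
```

Using a ratio of lines from the same nuclide (here 214Bi) cancels the unknown source activity, which is why only the attenuation contrast between the two energies is needed.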

  6. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Hossein Karimi

    2011-04-01

    The permutation method for multiple attribute decision making has two significant deficiencies: high computational time and incorrect priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions in reasonable computational time for large problem instances. The proposed method is examined on some numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better for solving APM.

  7. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs

    Yahyanejad, Saeed; Rinner, Bernhard

    2015-06-01

    The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectrums, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in case of multiple image pairs.

  8. Dual worth trade-off method and its application for solving multiple criteria decision making problems

    Feng Junwen

    2006-01-01

    To overcome the limitations of the traditional surrogate worth trade-off (SWT) method and solve the multiple criteria decision making problem more efficiently and interactively, a new method labeled dual worth trade-off (DWT) method is proposed. The DWT method dynamically uses the duality theory related to the multiple criteria decision making problem and analytic hierarchy process technique to obtain the decision maker's solution preference information and finally find the satisfactory compromise solution of the decision maker. Through the interactive process between the analyst and the decision maker, trade-off information is solicited and treated properly, the representative subset of efficient solutions and the satisfactory solution to the problem are found. The implementation procedure for the DWT method is presented. The effectiveness and applicability of the DWT method are shown by a practical case study in the field of production scheduling.

  9. A Method to Construct Plasma with Nonlinear Density Enhancement Effect in Multiple Internal Inductively Coupled Plasmas

    Chen Zhipeng; Li Hong; Liu Qiuyan; Luo Chen; Xie Jinlin; Liu Wandong

    2011-01-01

    A method is proposed to build up a plasma based on a nonlinear enhancement phenomenon of the plasma density when discharging with multiple internal antennas simultaneously. It turns out that the plasma density under multiple sources is higher than the linear summation of the densities under each source. This effect helps to offset the fast exponential decay of plasma density in a single internal inductively coupled plasma source and to generate a larger-area plasma with multiple internal inductively coupled plasma sources. After a careful study of the balance between the enhancement and the decay of plasma density in experiments, a plasma was built up with four sources, which proves the feasibility of the method. Following this approach, more sources and a more intensive enhancement effect can be employed to further build up a high-density, large-area plasma for different applications. (low temperature plasma)

  10. Multiple Positive Symmetric Solutions to p-Laplacian Dynamic Equations on Time Scales

    You-Hui Su

    2009-01-01

    Two examples are given to illustrate the main results and their differences. These results are even new for the special cases of continuous and discrete equations, as well as in the general time-scale setting.

  11. The multiple time scales of sleep dynamics as a challenge for modelling the sleeping brain.

    Olbrich, Eckehard; Claussen, Jens Christian; Achermann, Peter

    2011-10-13

    A particular property of the sleeping brain is that it exhibits dynamics on very different time scales: the typical sleep oscillations, such as sleep spindles and slow waves, that can be observed in electroencephalogram (EEG) segments of several seconds duration; the transitions between the different sleep stages on a time scale of minutes; and the dynamical processes involved in sleep regulation, with typical time constants in the range of hours. There is an increasing body of work on mathematical and computational models addressing these different dynamics; however, they usually consider only processes on a single time scale. In this paper, we review and present a new analysis of the dynamics of human sleep EEG at the different time scales and relate the findings to recent modelling efforts, pointing out both the achievements and the remaining challenges.

  12. Scale Development and Initial Tests of the Multidimensional Complex Adaptive Leadership Scale for School Principals: An Exploratory Mixed Method Study

    Özen, Hamit; Turan, Selahattin

    2017-01-01

    This study was designed to develop the scale of the Complex Adaptive Leadership for School Principals (CAL-SP) and examine its psychometric properties. This was an exploratory mixed method research design (ES-MMD). Both qualitative and quantitative methods were used to develop and assess psychometric properties of the questionnaire. This study…

  13. Investigations of grain size dependent sediment transport phenomena on multiple scales

    Thaxton, Christopher S.

    Sediment transport processes in coastal and fluvial environments resulting from disturbances such as urbanization, mining, agriculture, military operations, and climatic change have significant impact on local, regional, and global environments. Primarily, these impacts include the erosion and deposition of sediment, channel network modification, reduction in downstream water quality, and the delivery of chemical contaminants. The scale and spatial distribution of these effects are largely attributable to the size distribution of the sediment grains that become eligible for transport. An improved understanding of advective and diffusive grain-size dependent sediment transport phenomena will lead to the development of more accurate predictive models and more effective control measures. To this end, three studies were performed that investigated grain-size dependent sediment transport on three different scales. Discrete particle computer simulations of sheet flow bedload transport on the scale of 0.1--100 millimeters were performed on a heterogeneous population of grains of various grain sizes. The relative transport rates and diffusivities of grains under both oscillatory and uniform, steady flow conditions were quantified. These findings suggest that boundary layer formalisms should describe surface roughness through a representative grain size that is functionally dependent on the applied flow parameters. On the scale of 1--10m, experiments were performed to quantify the hydrodynamics and sediment capture efficiency of various baffles installed in a sediment retention pond, a commonly used sedimentation control measure in watershed applications. Analysis indicates that an optimum sediment capture effectiveness may be achieved based on baffle permeability, pond geometry and flow rate. Finally, on the scale of 10--1,000m, a distributed, bivariate watershed terrain evolution module was developed within GRASS GIS. Simulation results for variable grain sizes and for

  14. Multiple drivers, scales, and interactions influence southern Appalachian stream salamander occupancy

    Cecala, Kristen K.; Maerz, John C.; Halstead, Brian J.; Frisch, John R.; Gragson, Ted L.; Hepinstall-Cymerman, Jeffrey; Leigh, David S.; Jackson, C. Rhett; Peterson, James T.; Pringle, Catherine M.

    2018-01-01

    Understanding how factors that vary in spatial scale relate to population abundance is vital to forecasting species responses to environmental change. Stream and river ecosystems are inherently hierarchical, potentially resulting in organismal responses to fine‐scale changes in patch characteristics that are conditional on the watershed context. Here, we address how populations of two salamander species are affected by interactions among hierarchical processes operating at different scales within a rapidly changing landscape of the southern Appalachian Mountains. We modeled reach‐level occupancy of larval and adult black‐bellied salamanders (Desmognathus quadramaculatus) and larval Blue Ridge two‐lined salamanders (Eurycea wilderae) as a function of 17 different terrestrial and aquatic predictor variables that varied in spatial extent. We found that salamander occurrence varied widely among streams within fully forested catchments, but also exhibited species‐specific responses to changes in local conditions. While D. quadramaculatus declined predictably in relation to losses in forest cover, larval occupancy exhibited the strongest negative response to forest loss as well as decreases in elevation. Conversely, occupancy of E. wilderae was unassociated with watershed conditions, only responding negatively to higher proportions of fast‐flowing stream habitat types. Evaluation of hierarchical relationships demonstrated that most fine‐scale variables were closely correlated with broad watershed‐scale variables, suggesting that local reach‐scale factors have relatively smaller effects within the context of the larger landscape. Our results imply that effective management of southern Appalachian stream salamanders must first focus on the larger scale condition of watersheds before management of local‐scale conditions should proceed. Our findings confirm the results of some studies while refuting the results of others, which may indicate that

  15. An eigenfunction method for reconstruction of large-scale and high-contrast objects.

    Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P

    2007-07-01

    A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.

  16. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The method of classification of multiple signals is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both a 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.

  17. Scale Sensitivity and Question Order in the Contingent Valuation Method

    Andersson, Henrik; Svensson, Mikael

    2010-01-01

    This study examines the effect on respondents' willingness to pay to reduce mortality risk by the order of the questions in a stated preference study. Using answers from an experiment conducted on a Swedish sample where respondents' cognitive ability was measured and where they participated in a contingent valuation survey, it was found that scale sensitivity is strongest when respondents are asked about a smaller risk reduction first ('bottom-up' approach). This contradicts some previous evi...

  18. Managing Small-Scale Fisheries : Alternative Directions and Methods

    Managing Small-scale Fisheries goes beyond the scope of conventional fisheries management to address other concepts, tools, methods and ... Fisheries managers, from both the public and private sectors, lecturers and students in fisheries management, organizations and ...

  19. Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration

    Gasda, S. E.; Nordbotten, J. M.; Celia, M. A.

    2009-01-01

    The vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid

  20. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order of monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use types (such as farmland and forest) were the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and during several storm flood events on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  1. Evidence of self-affine multiplicity scaling of charged-particle ...

    In the past few years many workers reported on large density fluctuations in different interacting systems [6–12]. Several theoretical interpretations of the origin of large .... of effects with this parameter, already observed for the case of shower multiplicity. ... properties may be different for different regions of the system.

  2. Qualification of new design of flexible pipe against singing: testing at multiple scales

    Golliard, J.; Lunde, K.; Vijlbrief, O.

    2016-01-01

    Flexible pipes for production of oil and gas typically present a corrugated inner surface. This has been identified as the cause of "singing risers": Flow-Induced Pulsations due to the interaction of sound waves with the shear layers at the small cavities present at each of the multiple

  3. Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms

    Quintin, Jean-Noel

    2013-10-01

    Matrix multiplication is a very important computation kernel both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm which dates back to 1969 was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm as it can be used on a nonsquare number of processors as well. Since then the number of processors in HPC platforms has increased by two orders of magnitude making the contribution of communication in the overall execution time more significant. Therefore, the state of the art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate the reduction of communication cost up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.
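The algebra behind SUMMA is simply that C = AB decomposes into a sum of panel products, one per broadcast step. A single-process numpy sketch of that structure (matrix size and block width chosen arbitrarily):

```python
import numpy as np

# Minimal single-process sketch of SUMMA's structure: C is accumulated as a
# sum of rank-b panel products. On a p x p process grid, step k would have
# the owners of the A column panel broadcast it along process rows and the
# owners of the B row panel broadcast it along process columns; HSUMMA
# arranges those broadcasts over a two-level hierarchy of process groups
# to reduce the communication cost.
n, b = 8, 2                          # matrix size and panel width
rng = np.random.default_rng(0)
A, B = rng.random((n, n)), rng.random((n, n))

C = np.zeros((n, n))
for k in range(0, n, b):             # one iteration per broadcast step
    C += A[:, k:k+b] @ B[k:k+b, :]   # local multiply of the two panels

assert np.allclose(C, A @ B)         # the panel sum reproduces the product
```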

  4. Application of Item Response Theory to Modeling of Expanded Disability Status Scale in Multiple Sclerosis.

    Novakovic, A.M.; Krekels, E.H.; Munafo, A.; Ueckert, S.; Karlsson, M.O.

    2016-01-01

    In this study, we report the development of the first item response theory (IRT) model within a pharmacometrics framework to characterize the disease progression in multiple sclerosis (MS), as measured by Expanded Disability Status Score (EDSS). Data were collected quarterly from a 96-week phase III

  5. Feasibility of large-scale deployment of multiple wearable sensors in Parkinson's disease

    Silva de Lima, A.L.; Hahn, T.; Evers, L.J.W.; Vries, N.M. de; Cohen, E.; Afek, M.; Bataille, L.; Daeschler, M.; Claes, K.; Boroojerdi, B.; Terricabras, D.; Little, M.A.; Baldus, H.; Bloem, B.R.; Faber, M.J.

    2017-01-01

    Wearable devices can capture objective day-to-day data about Parkinson's Disease (PD). This study aims to assess the feasibility of implementing wearable technology to collect data from multiple sensors during the daily lives of PD patients. The Parkinson@home study is an observational, two-cohort

  6. Hierarchical Parallel Matrix Multiplication on Large-Scale Distributed Memory Platforms

    Quintin, Jean-Noel; Hasanov, Khalid; Lastovetsky, Alexey

    2013-01-01

    Matrix multiplication is a very important computation kernel both in its own right as a building block of many scientific applications and as a popular representative for other scientific applications. Cannon's algorithm which dates back to 1969 was the first efficient algorithm for parallel matrix multiplication providing theoretically optimal communication cost. However this algorithm requires a square number of processors. In the mid-1990s, the SUMMA algorithm was introduced. SUMMA overcomes the shortcomings of Cannon's algorithm as it can be used on a nonsquare number of processors as well. Since then the number of processors in HPC platforms has increased by two orders of magnitude making the contribution of communication in the overall execution time more significant. Therefore, the state of the art parallel matrix multiplication algorithms should be revisited to reduce the communication cost further. This paper introduces a new parallel matrix multiplication algorithm, Hierarchical SUMMA (HSUMMA), which is a redesign of SUMMA. Our algorithm reduces the communication cost of SUMMA by introducing a two-level virtual hierarchy into the two-dimensional arrangement of processors. Experiments on an IBM BlueGene/P demonstrate the reduction of communication cost up to 2.08 times on 2048 cores and up to 5.89 times on 16384 cores. © 2013 IEEE.

  7. Study of fission time scale from measurement of pre-scission light particle and γ-ray multiplicities

    Ramachandran, K.; Chatterjee, A.; Navin, A.

    2014-01-01

    This work presents the results of a simultaneous measurement of pre-scission multiplicities and an analysis using the statistical model code JOANNE2, which includes deformation effects. Evaporation residue cross-sections have also been measured for the same system and analyzed in a consistent manner. The neutron, charged particle, GDR γ-ray and ER data could be explained consistently. The emission of neutrons seems to be favored at larger deformation as compared to charged particles. The pre-scission time scale is deduced as (0-2) x 10^-21 s, whereas the saddle-to-scission time scale is (36-39) x 10^-21 s. The total fission time scale is deduced as (36-41) x 10^-21 s

  8. Genetic differentiation across multiple spatial scales of the Red Sea of the corals Stylophora pistillata and Pocillopora verrucosa

    Monroe, Alison

    2015-12-01

    Observing populations at different spatial scales gives greater insight into the specific processes driving genetic differentiation and population structure. Here we determined population connectivity across multiple spatial scales in the Red Sea to determine the population structures of two reef-building corals, Stylophora pistillata and Pocillopora verrucosa. The Red Sea is a 2,250 km long body of water with extremely variable latitudinal environmental gradients. Mitochondrial and microsatellite markers were used to determine distinct lineages and to look for genetic differentiation among sampling sites. No distinctive population structure across the latitudinal gradient was discovered within this study, suggesting phenotypic plasticity of both these species in various environments. Stylophora pistillata displayed a heterogeneous distribution of three distinct genetic populations on both a fine and a large scale. Fst, Gst, and Dest were all significant (p-value < 0.05) and showed moderate genetic differentiation between all sampling sites. However, this seems to be a byproduct of the heterogeneous distribution, as no distinct genetic population breaks were found. Stylophora pistillata showed greater population structure on a fine scale, suggesting genetic selection based on fine-scale environmental variations. However, further environmental and oceanographic data are needed to make more inferences on this structure at small spatial scales. This study highlights the deficits in knowledge of both the Red Sea and coral plasticity in regard to local environmental conditions.

  9. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Zhenxiang Jiang

    2016-01-01

    Full Text Available The traditional methods of diagnosing dam service status are generally suited to a single measuring point. They also reflect only the local status of a dam without merging multisource data effectively, which makes them unsuitable for diagnosing overall service status. This study proposes a new multiple-point method to diagnose dam service status based on a joint distribution function. The function, incorporating monitoring data from multiple points, can be established with the t-copula function. The possibility, which is an important fused value for different measuring combinations, can then be calculated, and the corresponding diagnostic criterion is established with classical small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early-warning method for engineering safety.
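A rough sense of the joint small-probability idea can be had from a Monte Carlo sketch under a Student-t dependence model. The correlation matrix, the 5 degrees of freedom, and the 2.0 threshold are all assumed values for illustration; the paper's actual t-copula estimation from monitoring data is not reproduced here.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): estimate how unlikely
# a simultaneous deviation at several dam measuring points is under a
# Student-t dependence model, by Monte Carlo. All parameters are invented.
rng = np.random.default_rng(1)
corr = np.array([[1.0, 0.8, 0.6],
                 [0.8, 1.0, 0.7],
                 [0.6, 0.7, 1.0]])   # assumed dependence between 3 points
nu = 5                               # assumed degrees of freedom

# Draw multivariate-t samples: scale correlated normals by a chi-square.
n = 200_000
z = rng.multivariate_normal(np.zeros(3), corr, size=n)
u = rng.chisquare(nu, size=n)
t = z * np.sqrt(nu / u)[:, None]

# Joint probability that every point deviates beyond 2 standardized units
# at once: the kind of fused value a small-probability criterion thresholds.
joint_p = np.mean((t > 2.0).all(axis=1))
print(joint_p)
```

Because the three points are strongly correlated, the joint exceedance probability is far larger than the product of the marginal ones, which is why a joint model rather than per-point thresholds is needed.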

  10. An implementation of multiple multipole method in the analyse of elliptical objects to enhance backscattering light

    Jalali, T.

    2015-07-01

    In this paper, we present modelling of dielectric elliptical shapes with respect to a highly confined power distribution in the resulting nanojet, which has been parameterized according to the beam waist and its beam divergence. The method uses spherical Bessel functions as basis functions, adapted to the standard multiple multipole method. It can handle elliptically shaped particles over a range of sizes and refractive indices, which have been studied under plane-wave illumination in the two- and three-dimensional multiple multipole method. Because of its fast and good convergence, the results obtained from simulation are highly accurate and reliable. The simulation time is less than a minute in both two and three dimensions. Therefore, the proposed method is found to be computationally efficient, fast and accurate.

  11. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    Furetta, C. [CICATA-Legaria, Instituto Politecnico Nacional, 11500 Mexico D.F. (Mexico); Guzman, S. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Ruiz, B. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico); Departamento de Agricultura y Ganaderia, Universidad de Sonora, A.P. 305, 83190 Hermosillo, Sonora (Mexico); Cruz-Zaragoza, E., E-mail: ecruz@nucleares.unam.m [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 Mexico D.F. (Mexico)

    2011-02-15

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor material. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure. In such cases the direct application of the Initial Rise Method is not valid because multiple trapping levels are involved, and the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level.
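For a single trapping level, the IR calculation is a straight-line fit of ln I against 1/T over the low-temperature tail of the peak, where the trapped-charge population is still nearly constant. A minimal numpy sketch on synthetic data (the 1 eV activation energy and the temperature window are assumed for the demonstration):

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

# Synthetic initial-rise data: I(T) ~ exp(-E / (k_B * T)) holds on the
# low-temperature tail of a glow peak. E_true and the temperature window
# are assumed values for this demonstration only.
E_true = 1.0                        # activation energy (eV)
T = np.linspace(300.0, 330.0, 16)   # temperatures in the initial-rise region (K)
I = 1e12 * np.exp(-E_true / (k_B * T))

# ln I = const - (E / k_B) * (1/T): a straight-line fit in 1/T recovers E
# from the slope.
slope, _ = np.polyfit(1.0 / T, np.log(I), 1)
E_est = -slope * k_B
print(E_est)
```

The multi-level extension described in the abstract amounts to applying this same fit to the initial-rise region of each resolvable component of the complex glow curve.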

  12. The initial rise method extended to multiple trapping levels in thermoluminescent materials.

    Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E

    2011-02-01

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor material. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure. In such cases the direct application of the Initial Rise Method is not valid because multiple trapping levels are involved, and the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. The initial rise method extended to multiple trapping levels in thermoluminescent materials

    Furetta, C.; Guzman, S.; Ruiz, B.; Cruz-Zaragoza, E.

    2011-01-01

    The well known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor material. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure. In such cases the direct application of the Initial Rise Method is not valid because multiple trapping levels are involved, and the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level.

  14. [A factor analysis method for contingency table data with unlimited multiple choice questions].

    Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie

    2016-02-01

    The purpose of this study is to propose a method of factor analysis for contingency tables developed from the data of unlimited multiple-choice questions. The method assumes that each cell of the contingency table follows a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual, the standardized difference between the sample proportion and the predicted proportion, is used to select items. The proposed method was applied to real product-impression research data on advertised chips and energy drinks. The results of the analysis showed that this method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in psychological studies using unlimited multiple-choice questions.

  15. VIKOR Method for Interval Neutrosophic Multiple Attribute Group Decision-Making

    Yu-Han Huang

    2017-11-01

    Full Text Available In this paper, we will extend the VIKOR (VIsekriterijumska optimizacija i KOmpromisno Resenje) method to multiple attribute group decision-making (MAGDM) with interval neutrosophic numbers (INNs). Firstly, the basic concepts of INNs are briefly presented. The method first aggregates all individual decision-makers' assessment information based on an interval neutrosophic weighted averaging (INWA) operator, and then employs the extended classical VIKOR method to solve MAGDM problems with INNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by a comparison with the existing methods.
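For readers unfamiliar with the classical VIKOR step that the paper extends to INNs, here is a crisp-number sketch; the decision matrix, weights, and v = 0.5 are invented, and all criteria are treated as benefit-type.

```python
import numpy as np

# Crisp-number sketch of the classical VIKOR ranking step (not the interval
# neutrosophic extension). Matrix, weights, and v are made up for the demo.
F = np.array([[250.0, 16.0, 12.0],      # alternatives x criteria (benefit-type)
              [200.0, 20.0, 18.0],
              [300.0, 18.0, 14.0]])
w = np.array([0.4, 0.35, 0.25])         # criterion weights
v = 0.5                                 # weight of the "majority" strategy

f_best, f_worst = F.max(axis=0), F.min(axis=0)
norm = (f_best - F) / (f_best - f_worst)       # normalized regret per criterion

S = (w * norm).sum(axis=1)              # group utility of each alternative
R = (w * norm).max(axis=1)              # individual regret of each alternative
Q = v * (S - S.min()) / (S.max() - S.min()) \
    + (1 - v) * (R - R.min()) / (R.max() - R.min())

ranking = np.argsort(Q)                 # smaller Q = better compromise
print(ranking)
```

The paper's extension replaces the crisp entries of F with interval neutrosophic numbers and the column-wise max/min with INN aggregation operators, but the S, R, Q compromise structure is the same.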

  16. Methods of fast, multiple-point in vivo T1 determination

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel with different Mn²⁺ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second used a variable-tip-angle (α) progressive saturation with two subsequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached.
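
    The exponential-fit step of the inversion-recovery approach can be sketched in a few lines of Python (synthetic data, not the phantom measurements): inversion-recovery magnetization follows M(t) = M0·(1 − 2·exp(−t/T1)), so ln(M0 − M(t)) is linear in t with slope −1/T1.

```python
import numpy as np

# Linearized inversion-recovery fit: M(t) = M0 * (1 - 2*exp(-t/T1)),
# hence ln(M0 - M(t)) = ln(2*M0) - t/T1, a line with slope -1/T1.
def fit_t1(t, M, M0):
    slope, _ = np.polyfit(t, np.log(M0 - M), 1)
    return -1.0 / slope

T1_true, M0 = 0.9, 1.0          # seconds, arbitrary units (assumed values)
t = np.linspace(0.05, 3.0, 20)  # inversion times
M = M0 * (1.0 - 2.0 * np.exp(-t / T1_true))
print(round(fit_t1(t, M, M0), 3))  # → 0.9
```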

  17. Interior Point Methods for Large-Scale Nonlinear Programming

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2005-01-01

    Vol. 20, No. 4-5 (2005), pp. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords : nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005

  18. Methods for testing of geometrical down-scaled rotor blades

    Branner, Kim; Berring, Peter

    further developed since then. Structures in composite materials are generally difficult and time consuming to test for fatigue resistance. Therefore, several methods for testing of blades have been developed and exist today. Those methods are presented in [1]. Current experimental test performed on full...

  19. Age-related changes in the plasticity and toughness of human cortical bone at multiple length-scales

    Zimmermann, Elizabeth A.; Schaible, Eric; Bale, Hrishikesh; Barth, Holly D.; Tang, Simon Y.; Reichert, Peter; Busse, Bjoern; Alliston, Tamara; Ager III, Joel W.; Ritchie, Robert O.

    2011-08-10

    The structure of human cortical bone evolves over multiple length-scales from its basic constituents of collagen and hydroxyapatite at the nanoscale to osteonal structures at near-millimeter dimensions, which all provide the basis for its mechanical properties. To resist fracture, bone’s toughness is derived intrinsically through plasticity (e.g., fibrillar sliding) at structural-scales typically below a micron and extrinsically (i.e., during crack growth) through mechanisms (e.g., crack deflection/bridging) generated at larger structural-scales. Biological factors such as aging lead to a markedly increased fracture risk, which is often associated with an age-related loss in bone mass (bone quantity). However, we find that age-related structural changes can significantly degrade the fracture resistance (bone quality) over multiple length-scales. Using in situ small-/wide-angle x-ray scattering/diffraction to characterize sub-micron structural changes and synchrotron x-ray computed tomography and in situ fracture-toughness measurements in the scanning electron microscope to characterize effects at micron-scales, we show how these age-related structural changes at differing size-scales degrade both the intrinsic and extrinsic toughness of bone. Specifically, we attribute the loss in toughness to increased non-enzymatic collagen cross-linking, which suppresses plasticity at nanoscale dimensions, and to an increased osteonal density, which limits the potency of crack-bridging mechanisms at micron-scales. The link between these processes is that the increased stiffness of the cross-linked collagen requires energy to be absorbed by “plastic” deformation at higher structural levels, which occurs by the process of microcracking.

  20. The impact of secure messaging on workflow in primary care: Results of a multiple-case, multiple-method study.

    Hoonakker, Peter L T; Carayon, Pascale; Cartmill, Randi S

    2017-04-01

    Secure messaging is a relatively new addition to health information technology (IT). Several studies have examined the impact of secure messaging on (clinical) outcomes, but very few studies have examined the impact on workflow in primary care clinics. In this study we examined the impact of secure messaging on the workflow of clinicians, staff and patients. We used a multiple case study design with multiple data collection methods (observation, interviews and survey). Results show that secure messaging has the potential to improve communication and information flow and the organization of work in primary care clinics, partly due to the possibility of asynchronous communication. However, secure messaging can also have a negative effect on communication and increase workload, especially if patients send messages that are not appropriate for the secure messaging medium (for example, messages that are too long, complex, ambiguous, or inappropriate). Results show that clinicians are ambivalent about secure messaging. Secure messaging can add to their workload, especially if there is high message volume, and currently they are not compensated for these activities. Staff members are, especially compared to clinicians, relatively positive about secure messaging, and patients are overall very satisfied with it. Finally, clinicians, staff and patients think that secure messaging can have a positive effect on quality of care and patient safety. Secure messaging is a tool that has the potential to improve communication and information flow. However, the potential of secure messaging to improve workflow is dependent on the way it is implemented and used. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Research on performance evaluation and anti-scaling mechanism of green scale inhibitors by static and dynamic methods

    Liu, D.

    2011-01-01

    Increasing environmental concerns and discharge limitations have imposed additional challenges in treating process waters. Thus, the concept of 'Green Chemistry' was proposed and green scale inhibitors became a focus of water treatment technology. Finding economical and environmentally friendly inhibitors is one of the major research focuses nowadays. In this dissertation, the inhibition performance of different phosphonates as CaCO₃ scale inhibitors in simulated cooling water was evaluated. Homo-, co-, and ter-polymers were also investigated for their performance as Ca-phosphonate inhibitors. Adding polymers as inhibitors together with phosphonates could reduce Ca-phosphonate precipitation and enhance the inhibition efficiency for CaCO₃ scale. The synergistic effect of poly-aspartic acid (PASP) and poly-epoxy-succinic acid (PESA) on scale inhibition has been studied using both static and dynamic methods. Results showed that the anti-scaling performance of PASP combined with PESA was superior to that of PASP or PESA alone for CaCO₃, CaSO₄ and BaSO₄ scale. The influence of dosage, temperature and Ca²⁺ concentration was also investigated in a simulated cooling water circuit. Moreover, SEM analysis demonstrated the modification of crystalline morphology in the presence of PASP and PESA. In this work, we also investigated the respective inhibition effectiveness of copper and zinc ions for scaling in drinking water by the method of Rapid Controlled Precipitation (RCP). The results indicated that zinc and copper ions were highly efficient inhibitors at low concentrations, and SEM and IR analyses showed that copper and zinc ions could affect calcium carbonate germination and change the crystal morphology. Moreover, the influence of temperature and dissolved CO₂ on the scaling potential of a mineral water (Salvetat) in the presence of copper and zinc ions was studied by laboratory experiments. An ideal scale inhibitor should be a solid form

  2. Comparison between Two Assessment Methods; Modified Essay Questions and Multiple Choice Questions

    Assadi S.N.* MD

    2015-09-01

    Aims Using the best assessment methods is an important factor in the educational development of health students. Modified essay questions and multiple choice questions are two prevalent methods of assessing students. The aim of this study was to compare the modified essay questions and multiple choice questions methods in occupational health engineering and work laws courses. Materials & Methods This semi-experimental study was performed during 2013 to 2014 on occupational health students of Mashhad University of Medical Sciences. The class of the occupational health and work laws course in 2013 was considered as group A and the class of 2014 as group B. Each group had 50 students. The group A students were assessed by the modified essay questions method and the group B students by the multiple choice questions method. Data were analyzed in SPSS 16 software by paired t-test and odds ratio. Findings The mean grade of the occupational health and work laws course was 18.68±0.91 in group A (modified essay questions) and 18.78±0.86 in group B (multiple choice questions), which was not significantly different (t=-0.41; p=0.684). The mean grades of the chemical chapter (p<0.001) in occupational health engineering, and of the harmful work law (p<0.001) and other (p=0.015) chapters in work laws, were significantly different between the two groups. Conclusion The modified essay questions and multiple choice questions methods have nearly the same value for assessing students in the occupational health engineering and work laws courses.

  3. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    Chen, Li; He, Ya-Ling [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)]; Kang, Qinjun [Computational Earth Science Group (EES-16), Los Alamos National Laboratory, Los Alamos, NM (United States)]; Tao, Wen-Quan, E-mail: wqtao@mail.xjtu.edu.cn [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)]

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.

  4. The MIMIC Method with Scale Purification for Detecting Differential Item Functioning

    Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien

    2009-01-01

    This study implements a scale purification procedure onto the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…

  5. A Spatial Framework to Map Heat Health Risks at Multiple Scales.

    Ho, Hung Chak; Knudby, Anders; Huang, Wei

    2015-12-18

    In the last few decades extreme heat events have led to substantial excess mortality, most dramatically in Central Europe in 2003, in Russia in 2010, and even in typically cool locations such as Vancouver, Canada, in 2009. Heat-related morbidity and mortality is expected to increase over the coming centuries as the result of climate-driven global increases in the severity and frequency of extreme heat events. Spatial information on heat exposure and population vulnerability may be combined to map the areas of highest risk and focus mitigation efforts there. However, a mismatch in spatial resolution between heat exposure and vulnerability data can cause spatial scale issues such as the Modifiable Areal Unit Problem (MAUP). We used a raster-based model to integrate heat exposure and vulnerability data in a multi-criteria decision analysis, and compared it to the traditional vector-based model. We then used the Getis-Ord G(i) index to generate spatially smoothed heat risk hotspot maps from fine to coarse spatial scales. The raster-based model allowed production of maps at finer spatial resolution, better description of local-scale heat risk variability, and identification of heat-risk areas not identified with the vector-based approach. Spatial smoothing with the Getis-Ord G(i) index produced heat risk hotspots from local to regional spatial scale. The approach is a framework for reducing spatial scale issues in future heat risk mapping, and for identifying heat risk hotspots at spatial scales ranging from the block level to the municipality level.
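
    The hotspot statistic at the heart of this approach can be sketched directly. Below is a simplified Getis-Ord Gi*-style computation on a toy raster with binary 3×3 neighbourhood weights (a stand-in for GIS implementations, not the authors' exact pipeline): large positive z-scores mark cells whose local weighted sum is unusually high relative to the global mean.

```python
import numpy as np

# Simplified Getis-Ord Gi* on a raster: compare each cell's local sum
# (cell + 8 neighbours, binary weights) with the global mean; large
# positive z-scores mark heat-risk hotspots.
def getis_ord_gi_star(grid):
    g = grid.astype(float)
    n = g.size
    xbar = g.mean()
    S = np.sqrt((g ** 2).mean() - xbar ** 2)
    pad_x = np.pad(g, 1)                  # zero padding: outside cells add 0
    pad_1 = np.pad(np.ones_like(g), 1)    # counts in-grid neighbours only
    z = np.empty_like(g)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            local_sum = pad_x[i:i + 3, j:j + 3].sum()
            W = pad_1[i:i + 3, j:j + 3].sum()   # number of in-grid weights
            num = local_sum - xbar * W
            den = S * np.sqrt((n * W - W ** 2) / (n - 1))
            z[i, j] = num / den
    return z

grid = np.zeros((6, 6))
grid[1:3, 1:3] = 10.0           # a small cluster of high heat-risk cells
z = getis_ord_gi_star(grid)
print(z[1, 1] > 2.0, z[5, 5] < 0.0)  # → True True
```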

  6. MULTIPLE CRITERA METHODS WITH FOCUS ON ANALYTIC HIERARCHY PROCESS AND GROUP DECISION MAKING

    Lidija Zadnik-Stirn

    2010-12-01

    Managing natural resources is a group multiple criteria decision-making problem. In this paper, the analytic hierarchy process is the chosen method for handling natural resource problems. The single decision maker problem is discussed, and three methods for the derivation of the priority vector are presented: the eigenvector method, the data envelopment analysis method, and the logarithmic least squares method. Further, the group analytic hierarchy process is discussed, and six methods for the aggregation of individual judgments or priorities are compared: the weighted arithmetic mean method, the weighted geometric mean method, and four methods based on data envelopment analysis. A case study on land use in Slovenia is presented. The conclusions review consistency, sensitivity analyses, and some future directions of research.
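
    Two of the building blocks named here, the eigenvector method for the priority vector and geometric-mean aggregation of individual judgments, can be sketched in a few lines of Python (toy judgment matrices, not the Slovenian land-use data):

```python
import numpy as np

# AHP sketch: derive the priority vector of a pairwise-comparison
# matrix from its principal eigenvector, and aggregate two decision
# makers' judgments by the element-wise (weighted) geometric mean.
def priority_vector(A):
    vals, vecs = np.linalg.eig(A)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()     # normalize (Perron vector has one sign)

# Two consistent toy judgment matrices over three criteria.
A1 = np.array([[1.0, 3.0, 5.0],
               [1/3, 1.0, 5/3],
               [1/5, 3/5, 1.0]])
A2 = np.array([[1.0, 2.0, 4.0],
               [1/2, 1.0, 2.0],
               [1/4, 1/2, 1.0]])

G = (A1 * A2) ** 0.5       # geometric mean of individual judgments
w = priority_vector(G)     # group priority vector
print(np.round(w, 3))
```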

  7. Large-scale assessment of benthic communities across multiple marine protected areas using an autonomous underwater vehicle.

    Ferrari, Renata; Marzinelli, Ezequiel M; Ayroza, Camila Rezende; Jordan, Alan; Figueira, Will F; Byrne, Maria; Malcolm, Hamish A; Williams, Stefan B; Steinberg, Peter D

    2018-01-01

    Marine protected areas (MPAs) are designed to reduce threats to biodiversity and ecosystem functioning from anthropogenic activities. Assessment of MPA effectiveness requires synchronous sampling of protected and non-protected areas at multiple spatial and temporal scales. We used an autonomous underwater vehicle to map benthic communities in replicate 'no-take' and 'general-use' (fishing allowed) zones within three MPAs along 7° of latitude. We recorded 92 taxa and 38 morpho-groups across three large MPAs. We found that important habitat-forming biota (e.g. massive sponges) were more prevalent and abundant in no-take zones, while short ephemeral algae were more abundant in general-use zones, suggesting potential short-term effects of zoning (5-10 years). Yet, short-term effects of zoning were not detected at the community level (community structure or composition), while community structure varied significantly among MPAs. We conclude that by allowing rapid, simultaneous assessments at multiple spatial scales, autonomous underwater vehicles are useful to document changes in marine communities and identify adequate scales to manage them. This study advanced knowledge of marine benthic communities and their conservation in three ways. First, we quantified benthic biodiversity and abundance, generating the first baseline of these benthic communities against which the effectiveness of three large MPAs can be assessed. Second, we identified the taxonomic resolution necessary to assess both short- and long-term effects of MPAs, concluding that coarse taxonomic resolution is sufficient given that analyses of community structure at different taxonomic levels were generally consistent. Yet, observed differences were taxa-specific and may not have been evident using our broader taxonomic classifications; a classification of mid to high taxonomic resolution may be necessary to determine zoning effects on key taxa. Third, we provide an example of statistical analyses and

  8. Mathematical programming methods for large-scale topology optimization problems

    Rojas Labanda, Susana

    for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs......, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have almost not been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming method (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...

  9. Laboratory-scale evaluations of alternative plutonium precipitation methods

    Martella, L.L.; Saba, M.T.; Campbell, G.K.

    1984-01-01

    Plutonium(III), (IV), and (VI) carbonate; plutonium(III) fluoride; plutonium(III) and (IV) oxalate; and plutonium(IV) and (VI) hydroxide precipitation methods were evaluated for conversion of plutonium nitrate anion-exchange eluate to a solid, and compared with the current plutonium peroxide precipitation method used at Rocky Flats. Plutonium(III) and (IV) oxalate, plutonium(III) fluoride, and plutonium(IV) hydroxide precipitations were the most effective of the alternative conversion methods tested because of the larger particle-size formation, faster filtration rates, and the low plutonium loss to the filtrate. These were found to be as efficient as, and in some cases more efficient than, the peroxide method. 18 references, 14 figures, 3 tables

  10. SCALE--A Conceptual and Transactional Method of Legal Study.

    Johnson, Darrell B.

    1985-01-01

    Southwestern University School of Law's two-year, intensive, year-round program, the Southwestern Conceptual Approach to Legal Education, which emphasizes hypothetical problems as teaching tools rather than the case-book method, is described. (MSE)

  11. Quantitative evidence for the effects of multiple drivers on continental-scale amphibian declines

    Evan H. Campbell Grant; David A. W. Miller; Benedikt R. Schmidt; Michael J. Adams; Staci M. Amburgey; Thierry Chambert; Sam S. Cruickshank; Robert N. Fisher; David M. Green; Blake R. Hossack; Pieter T. J. Johnson; Maxwell B. Joseph; Tracy A. G. Rittenhouse; Maureen E. Ryan; J. Hardin Waddle; Susan C. Walls; Larissa L. Bailey; Gary M. Fellers; Thomas A. Gorman; Andrew M. Ray; David S. Pilliod; Steven J. Price; Daniel Saenz; Walt Sadinski; Erin Muths

    2016-01-01

    Since amphibian declines were first proposed as a global phenomenon over a quarter century ago, the conservation community has made little progress in halting or reversing these trends. The early search for a “smoking gun” was replaced with the expectation that declines are caused by multiple drivers. While field observations and experiments have identified factors...

  12. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

    -scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel
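
    The core idea of the hierarchical scheme, grouping the computation into blocks whose products are then combined, can be illustrated with a serial two-level blocked matrix multiplication in Python (an illustration only; in the paper each block product would itself be distributed over a group of processors):

```python
import numpy as np

# Serial sketch of blocked matrix multiplication: partition the
# matrices into bs x bs blocks, multiply block-by-block, accumulate.
def blocked_matmul(A, B, bs=2):
    n, k = A.shape
    m = B.shape[1]
    C = np.zeros((n, m))
    for i0 in range(0, n, bs):
        for j0 in range(0, m, bs):
            for k0 in range(0, k, bs):
                C[i0:i0+bs, j0:j0+bs] += (
                    A[i0:i0+bs, k0:k0+bs] @ B[k0:k0+bs, j0:j0+bs]
                )
    return C

rng = np.random.default_rng(1)
A, B = rng.random((4, 4)), rng.random((4, 4))
print(np.allclose(blocked_matmul(A, B), A @ B))  # → True
```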

  13. An Integrative Bioinformatics Framework for Genome-scale Multiple Level Network Reconstruction of Rice

    Liu Lili

    2013-06-01

    Understanding how metabolic reactions translate the genome of an organism into its phenotype is a grand challenge in biology. Genome-wide association studies (GWAS) statistically connect genotypes to phenotypes, without any recourse to known molecular interactions, whereas a molecular mechanistic description ties gene function to phenotype through gene regulatory networks (GRNs), protein-protein interactions (PPIs) and molecular pathways. Integration of the different regulatory information levels of an organism is expected to provide a good way of mapping genotypes to phenotypes. However, the lack of a curated metabolic model of rice is blocking the exploration of genome-scale multi-level network reconstruction. Here, we have merged GRN, PPI and genome-scale metabolic network (GSMN) approaches into a single framework for rice via reconstruction and integration of omics regulatory information. Firstly, we reconstructed a genome-scale metabolic model, containing 4,462 function genes and 2,986 metabolites involved in 3,316 reactions, compartmentalized into ten subcellular locations. Furthermore, 90,358 pairs of protein-protein interactions, 662,936 pairs of gene regulations and 1,763 microRNA-target interactions were integrated into the metabolic model. Eventually, a database was developed for systematically storing and retrieving the genome-scale multi-level network of rice. This provides a reference for understanding the genotype-phenotype relationship of rice, and for analysis of its molecular regulatory network.

  14. Neocortical dynamics at multiple scales: EEG standing waves, statistical mechanics, and physical analogs.

    Ingber, Lester; Nunez, Paul L

    2011-02-01

    The dynamic behavior of scalp potentials (EEG) is apparently due to some combination of global and local processes with important top-down and bottom-up interactions across spatial scales. In treating global mechanisms, we stress the importance of myelinated axon propagation delays and periodic boundary conditions in the cortical-white matter system, which is topologically close to a spherical shell. By contrast, the proposed local mechanisms are multiscale interactions between cortical columns via short-ranged non-myelinated fibers. A mechanical model consisting of a stretched string with attached nonlinear springs demonstrates the general idea. The string produces standing waves analogous to large-scale coherent EEG observed in some brain states. The attached springs are analogous to the smaller (mesoscopic) scale columnar dynamics. Generally, we expect string displacement and EEG at all scales to result from both global and local phenomena. A statistical mechanics of neocortical interactions (SMNI) calculates oscillatory behavior consistent with typical EEG, within columns, between neighboring columns via short-ranged non-myelinated fibers, across cortical regions via myelinated fibers, and also derives a string equation consistent with the global EEG model. Copyright © 2010 Elsevier Inc. All rights reserved.

  15. Environmental variables measured at multiple spatial scales exert uneven influence on fish assemblages of floodplain lakes

    Dembkowski, Daniel J.; Miranda, Leandro E.

    2014-01-01

    We examined the interaction between environmental variables measured at three different scales (i.e., landscape, lake, and in-lake) and fish assemblage descriptors across a range of over 50 floodplain lakes in the Mississippi Alluvial Valley of Mississippi and Arkansas. Our goal was to identify important local- and landscape-level determinants of fish assemblage structure. Relationships between fish assemblage structure and variables measured at broader scales (i.e., landscape-level and lake-level) were hypothesized to be stronger than relationships with variables measured at finer scales (i.e., in-lake variables). Results suggest that fish assemblage structure in floodplain lakes was influenced by variables operating on three different scales. However, and contrary to expectations, canonical correlations between in-lake environmental characteristics and fish assemblage structure were generally stronger than correlations between landscape-level and lake-level variables and fish assemblage structure, suggesting a hierarchy of influence. From a resource management perspective, our study suggests that landscape-level and lake-level variables may be manipulated for conservation or restoration purposes, and in-lake variables and fish assemblage structure may be used to monitor the success of such efforts.

  16. Broadband Structural Dynamics: Understanding the Impulse-Response of Structures Across Multiple Length and Time Scales

    2010-08-18

    [Slide/figure residue; only fragments recoverable:] The spectral-domain response is calculated, and the time-domain response is obtained through an inverse transform. Approach 4: WASABI (Wavelet Analysis of Structural Anomalies). Processing flow: time function → transform → apply spectral-domain transfer function → inverse transform → time function.

  17. Early College for All: Efforts to Scale up Early Colleges in Multiple Settings

    Edmunds, Julie A.

    2016-01-01

    Given the positive impacts of the small, stand-alone early college model and the desire to provide those benefits to more students, organizations have begun efforts to scale up the early college model in a variety of settings. These efforts have been supported by the federal government, particularly by the Investing in Innovation (i3) program.…

  18. An Extended TOPSIS Method for the Multiple Attribute Decision Making Problems Based on Interval Neutrosophic Set

    Pingping Chi

    2013-03-01

    The interval neutrosophic set (INS) can more easily express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; in general, however, it can only process attribute values given as crisp numbers. In this paper, we extend TOPSIS to INSs and, for multiple attribute decision-making problems in which the attribute weights are unknown and the attribute values take the form of INSs, we propose an extended TOPSIS method. Firstly, the definition of an INS and its operational laws are given, and the distance between INSs is defined. Then, the attribute weights are determined based on the maximizing deviation method, and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
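
    For reference, the classical crisp-number TOPSIS skeleton that the paper extends looks as follows in Python (toy data and given weights; the paper instead uses interval neutrosophic values and derives weights by maximizing deviation):

```python
import numpy as np

# Classical crisp TOPSIS over benefit criteria: rank alternatives by
# relative closeness to the ideal solution.
def topsis(X, w):
    """X: alternatives x criteria matrix, w: criteria weights."""
    R = X / np.sqrt((X ** 2).sum(axis=0))   # vector-normalize columns
    V = R * w                               # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))  # distance to ideal
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)          # closeness coefficient

X = np.array([[250.0, 16.0, 12.0],
              [200.0, 16.0,  8.0],
              [300.0, 32.0, 16.0]])
w = np.array([0.3, 0.4, 0.3])
cc = topsis(X, w)
print(int(cc.argmax()))  # → 2 (best-ranked alternative)
```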

  19. Novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs

    Muhammad, Akram; Musavarah, Sarwar

    2016-01-01

    In this research study, we introduce the concept of bipolar neutrosophic graphs. We present the dominating and independent sets of bipolar neutrosophic graphs. We describe novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs. We also develop an algorithm for computing domination in bipolar neutrosophic graphs.

  20. Magic Finger Teaching Method in Learning Multiplication Facts among Deaf Students

    Thai, Liong; Yasin, Mohd. Hanafi Mohd

    2016-01-01

    Deaf students face problems in mastering multiplication facts. This study aims to identify the effectiveness of Magic Finger Teaching Method (MFTM) and students' perception towards MFTM. The research employs a quasi experimental with non-equivalent pre-test and post-test control group design. Pre-test, post-test and questionnaires were used. As…

  1. Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?

    Xu, Yanbo; Mostow, Jack

    2012-01-01

    A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…

  2. A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants

    Cooper, Paul D.

    2010-01-01

    A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
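
    The regression idea is easy to reproduce outside Excel. The Python sketch below uses NumPy least squares in place of the spreadsheet function: vibronic band positions are modelled as ν(v') = T + ωe(v'+½) − ωexe(v'+½)², and regressing on the columns [1, (v'+½), (v'+½)²] recovers the constants. The constants below are synthetic round numbers, not iodine's actual values.

```python
import numpy as np

# Multiple linear regression for vibronic constants:
# nu(v') = T + we*(v'+1/2) - wexe*(v'+1/2)^2.
T_true, we_true, wexe_true = 15000.0, 130.0, 1.0   # assumed, not I2 values
vp = np.arange(10, 41)                             # upper-state v' numbers
x = vp + 0.5
nu = T_true + we_true * x - wexe_true * x ** 2     # synthetic band positions

A = np.column_stack([np.ones_like(x), x, x ** 2])  # design matrix
(T, we, neg_wexe), *_ = np.linalg.lstsq(A, nu, rcond=None)
print(round(we, 3), round(-neg_wexe, 3))  # → 130.0 1.0
```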

  3. Creep compliance and percent recovery of Oklahoma certified binder using the multiple stress creep recovery (MSCR) method.

    2015-04-01

    A laboratory study was conducted to develop guidelines for the Multiple Stress Creep Recovery (MSCR) test method for local conditions prevailing in Oklahoma. The study consisted of commonly used binders in Oklahoma, namely PG 64-22, PG 70-28, and...

  4. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    Sabourin, D; Snakenborg, D; Dufva, M

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observation. The interconnection block method is scalable, flexible and supports high interconnection density. The average pressure limit of the interconnection block was near 5.5 bar and all individual results were well above the 2 bar threshold considered applicable to most microfluidic applications

  5. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method for detecting chronic nervous system diseases such as multiple sclerosis (MS). In this paper, multifractal methods based on 2D Brownian motion Hölder regularity functions (polynomial, periodic (sine), and exponential) were applied to MR brain images, aiming to easily identify distressed regions in MS patients. Based on these regions, we propose an MS classification built on the multifractal method using the Self-Organizing Map (SOM) algorithm. We thus obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.

  6. A Multiple Criteria Decision Making Method Based on Relative Value Distances

    Shyur Huan-jyh

    2015-12-01

    This paper proposes a new multiple criteria decision-making method called ERVD (election based on relative value distances). An s-shaped value function is adopted to replace the expected utility function in order to describe the risk-averse and risk-seeking behavior of decision makers. Comparisons and experiments contrasting the proposal with the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) method are carried out to verify the feasibility of using the proposed method to represent decision makers’ preferences in the decision-making process. Our experimental results show that the proposed approach is an appropriate and effective MCDM method.
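
    An s-shaped value function of the prospect-theory form can be sketched as follows (the parameter values are the commonly cited Tversky-Kahneman estimates, assumed here, not taken from the paper):

```python
import numpy as np

# S-shaped (prospect-theory style) value function: concave for gains,
# convex and steeper for losses (loss aversion via lam > 1).
# Parameters alpha, beta, lam are assumed illustrative values.
def s_value(d, alpha=0.88, beta=0.88, lam=2.25):
    d = np.asarray(d, dtype=float)
    out = np.empty_like(d)
    pos = d >= 0
    out[pos] = d[pos] ** alpha               # gains: v(d) = d^alpha
    out[~pos] = -lam * (-d[~pos]) ** beta    # losses: v(d) = -lam*(-d)^beta
    return out

v = s_value(np.array([0.5, -0.5]))
print(v[1] < -v[0])  # losses loom larger than equal gains → True
```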

  7. The plasma transport equations derived by multiple time-scale expansions and turbulent transport. I. General theory

    Edenstrasser, J.W.

    1995-01-01

    A multiple time-scale derivative expansion scheme is applied to the dimensionless Fokker-Planck equation and to Maxwell's equations, where the parameter range of a typical fusion plasma was assumed. Within kinetic theory, the four time scales considered are those of Larmor gyration, particle transit, collisions, and classical transport. The corresponding magnetohydrodynamic (MHD) time scales are those of ion Larmor gyration, Alfvén waves, MHD collisions, and resistive diffusion. The solution of the zeroth-order equations results in the force-free equilibria and ideal Ohm's law. The solution of the first-order equations leads, under the assumption of a weakly collisional plasma, to the ideal MHD equations. On the MHD-collision time scale, not only is the full set of MHD transport equations obtained, but also turbulent terms, whose related transport quantities are one order in the expansion parameter larger than those of classical transport. Finally, at the resistive diffusion time scale the known transport equations are arrived at, including, however, also turbulent contributions. copyright 1995 American Institute of Physics
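
    The derivative expansion underlying such multiple time-scale schemes can be sketched as follows (a standard multiple-scales ansatz, shown for orientation; the paper's specific orderings are not reproduced here):

```latex
% time derivative split over ordered scales t_0, ..., t_3
% (gyration, transit/Alfvén, collision, transport/resistive)
\frac{\partial}{\partial t}
  = \frac{\partial}{\partial t_0}
  + \varepsilon \frac{\partial}{\partial t_1}
  + \varepsilon^{2} \frac{\partial}{\partial t_2}
  + \varepsilon^{3} \frac{\partial}{\partial t_3},
\qquad
f = f_0 + \varepsilon f_1 + \varepsilon^{2} f_2 + \cdots
```

    Substituting both expansions into the kinetic and Maxwell equations and collecting equal powers of ε yields the zeroth-, first-, and higher-order hierarchies described above.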

  8. Kernel methods for large-scale genomic data analysis

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, and kernel methods in particular, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, helping to reveal the complexity of the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743
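
    As a concrete instance of the kernel approach, a minimal kernel ridge regression with an RBF kernel; the toy "variants to phenotype" data and all parameters below are illustrative assumptions, not from the review:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-3):
    """Solve (K + lam*I) alpha = y for the dual coefficients alpha."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# toy data: phenotype depends nonlinearly on two variant features
rng = np.random.default_rng(0)
X = rng.random((80, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
alpha = kernel_ridge_fit(X, y, gamma=2.0, lam=1e-3)
pred = kernel_ridge_predict(X, alpha, X, gamma=2.0)
```

    Different kernels encode different notions of variant-set similarity, and data fusion can proceed by combining (e.g. summing) kernels from multiple data sources.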

  9. Evaluation of the Multiple Sclerosis Walking Scale-12 (MSWS-12) in a Dutch sample: Application of item response theory.

    Mokkink, Lidwine Brigitta; Galindo-Garre, Francisca; Uitdehaag, Bernard Mj

    2016-12-01

    The Multiple Sclerosis Walking Scale-12 (MSWS-12) measures walking ability from the patients' perspective. We examined the quality of the MSWS-12 using an item response theory model, the graded response model (GRM). A total of 625 unique Dutch multiple sclerosis (MS) patients were included. After testing for unidimensionality, monotonicity, and absence of local dependence, a GRM was fit and item characteristics were assessed. Differential item functioning (DIF) for the variables gender, age, duration of MS, type of MS, and severity of MS was investigated, along with reliability, total test information, and the standard error of the trait level (θ). Confirmatory factor analysis showed a unidimensional structure of the 12 items of the scale, explaining 88% of the variance. Item 2 did not fit into the GRM. Reliability was 0.93. Items 8 and 9 (of the 11- and 12-item versions, respectively) showed DIF on the variable severity, based on the Expanded Disability Status Scale (EDSS). However, the EDSS is strongly related to the content of both items. Our results confirm the good quality of the MSWS-12. The trait level (θ) scores and item parameters of the 12- and 11-item versions were highly comparable, although we do not suggest changing the content of the MSWS-12. © The Author(s), 2016.
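
    The graded response model used here can be sketched directly: the cumulative probability of responding in category k or higher follows a logistic curve in the trait level θ, and category probabilities are successive differences of the cumulatives. The item parameters below are invented for illustration, not estimates from the study:

```python
import math

def grm_category_probs(theta, a, b):
    """Graded response model for one item: cumulative
    P(Y >= k | theta) = logistic(a * (theta - b_k)) for ordered
    thresholds b; category probabilities are successive differences."""
    cum = [1.0] + [1 / (1 + math.exp(-a * (theta - bk))) for bk in b] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(b) + 1)]

# a hypothetical 4-category walking-ability item with discrimination a = 1.5
probs = grm_category_probs(theta=0.3, a=1.5, b=[-1.0, 0.0, 1.2])
```

    Summing item information over all such items gives the total test information, whose inverse square root is the standard error of θ reported above.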

  10. Searching for intermediate-mass black holes in galaxies with low-luminosity AGN: a multiple-method approach

    Koliopanos, F.; Ciambur, B.; Graham, A.; Webb, N.; Coriat, M.; Mutlu-Pakdil, B.; Davis, B.; Godet, O.; Barret, D.; Seigar, M.

    2017-10-01

    Intermediate-mass black holes (IMBHs) are predicted by a variety of models and are the likely seeds for supermassive BHs (SMBHs). However, we have yet to establish their existence. One method by which we can discover IMBHs is by measuring the mass of an accreting BH using X-ray and radio observations, drawing on the correlation between radio luminosity, X-ray luminosity and BH mass known as the fundamental plane of BH activity (FP-BH). Furthermore, the mass of BHs in the centers of galaxies can be estimated using scaling relations between BH mass and galactic properties. We are initiating a campaign to search for IMBH candidates in dwarf galaxies with low-luminosity AGN, using - for the first time - three different scaling relations and the FP-BH simultaneously. In this first stage of our campaign, we measure the mass of seven LLAGN that have been previously suggested to host central IMBHs, investigate the consistency between the predictions of the BH scaling relations and the FP-BH in the low-mass regime, and demonstrate that this multiple-method approach provides a robust average mass prediction. In my talk, I will discuss our methodology, results and next steps of this campaign.

  11. Investigating salt frost scaling by using statistical methods

    Hasholt, Marianne Tange; Clemmensen, Line Katrine Harder

    2010-01-01

    A large data set comprising data for 118 concrete mixes on mix design, air void structure, and the outcome of freeze/thaw testing according to SS 13 72 44 has been analysed by use of statistical methods. The results show that with regard to mix composition, the most important parameter...

  12. Gully Erosion Mapping and Monitoring at Multiple Scales Based on Multi-Source Remote Sensing Data of the Sancha River Catchment, Northeast China

    Ranghu Wang

    2016-11-01

    This research is focused on gully erosion mapping and monitoring at multiple spatial scales using multi-source remote sensing data of the Sancha River catchment in Northeast China, where gullies extend over a vast area. A high-resolution satellite image (Pleiades 1A, 0.7 m) was used to obtain the spatial distribution of the gullies over the whole basin. Image visual interpretation with field verification was employed to map the geometric gully features and evaluate gully erosion as well as the topographic differentiation characteristics. Unmanned Aerial Vehicle (UAV) remote sensing data and the 3D photo-reconstruction method were employed for detailed gully mapping at the site scale. The results showed that: (1) the sub-meter image showed a strong ability in the recognition of various gully types and obtained satisfactory results, and the topographic factors of elevation, slope and slope aspect exerted significant influence on the gully spatial distribution at the catchment scale; and (2) at the more detailed site scale, UAV imagery combined with 3D photo-reconstruction provided a Digital Surface Model (DSM) and ortho-image at the centimeter level as well as a detailed 3D model. The resulting products revealed the area of agricultural utilization and its shaping by human agricultural activities and water erosion in detail, and also provided the gully volume. The present study indicates that using multi-source remote sensing data, including satellite and UAV imagery simultaneously, results in an effective assessment of gully erosion over multiple spatial scales. The combined approach should be continued to regularly monitor gully erosion to understand the erosion process and its relationship with the environment from a comprehensive perspective.

  13. Large-scale data analysis of power grid resilience across multiple US service regions

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionately large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.

  14. Neocortical Dynamics at Multiple Scales: EEG Standing Waves, Statistical Mechanics, and Physical Analogs

    Ingber, Lester; Nunez, Paul L.

    2010-01-01

    The dynamic behavior of scalp potentials (EEG) is apparently due to some combination of global and local processes with important top-down and bottom-up interactions across spatial scales. In treating global mechanisms, we stress the importance of myelinated axon propagation delays and periodic boundary conditions in the cortical-white matter system, which is topologically close to a spherical shell. By contrast, the proposed local mechanisms are multiscale interactions between cortical columns...

  15. Use of multiple methods to determine factors affecting quality of care of patients with diabetes.

    Khunti, K

    1999-10-01

    The process of care of patients with diabetes is complex; however, GPs are playing a greater role in its management. Despite the research evidence, the quality of care of patients with diabetes is variable. In order to improve care, information is required on the obstacles faced by practices in improving care. Qualitative and quantitative methods can be used for the formation of hypotheses and the development of survey procedures. However, to date few examples exist in general practice research of the use of multiple methods, combining quantitative and qualitative techniques, for hypothesis generation. We aimed to determine information on all factors that may be associated with delivery of care to patients with diabetes. Factors for consideration in the delivery of diabetes care were generated by multiple qualitative methods, including brainstorming with health professionals and patients, a focus group, and interviews with key informants (GPs and practice nurses). Audit data showing variations in care of patients with diabetes were used to stimulate the brainstorming session. A systematic literature search focusing on quality of care of patients with diabetes in primary care was also conducted. Fifty-four potential factors were identified by the multiple methods. Twenty (37.0%) were practice-related factors, 14 (25.9%) were patient-related factors and 20 (37.0%) were organizational factors. A combination of brainstorming and the literature review identified 51 (94.4%) factors. Patients did not identify factors in addition to those identified by other methods. The complexity of delivery of care to patients with diabetes is reflected in the large number of potential factors identified in this study. This study shows the feasibility of using multiple methods for hypothesis generation. Each evaluation method provided unique data which could not otherwise be easily obtained. 
This study highlights a way of combining various traditional methods in an attempt to overcome the

  16. Application of the 2-D discrete-ordinates method to multiple scattering of laser radiation

    Zardecki, A.; Gerstl, S.A.W.; Embury, J.F.

    1983-01-01

    The discrete-ordinates finite-element radiation transport code TWOTRAN is applied to describe the multiple scattering of a laser beam from a reflecting target. For a model scenario involving a 99% relative humidity rural aerosol we compute the average intensity of the scattered radiation and correction factors to the Beer-Lambert law arising from multiple scattering. As our results indicate, 2-D x-y and r-z geometry modeling can reliably describe a realistic 3-D scenario. Specific results are presented for the two visual ranges of 1.52 and 0.76 km, which show that, for sufficiently high aerosol concentrations (e.g., equivalent to V = 0.76 km), the target signature in a distant detector becomes dominated by multiply scattered radiation from interactions of the laser light with the aerosol environment. The merits of the scaling group and the delta-M approximation for the transfer equation are also explored
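
    The Beer-Lambert baseline that such multiple-scattering correction factors modify can be sketched as follows. The Koschmieder relation converts a visual range to an extinction coefficient; the correction factor c_ms stands in for a transport-code result and is purely illustrative, not a value from the paper:

```python
import math

def optical_depth(visual_range_km, path_km):
    """Koschmieder relation: extinction coefficient = 3.912 / V
    (2% contrast threshold), times the propagation path."""
    return 3.912 / visual_range_km * path_km

def beer_lambert(i0, tau):
    """Unscattered (direct-beam) intensity after optical depth tau."""
    return i0 * math.exp(-tau)

def received(i0, tau, c_ms):
    """Total received intensity = direct beam times a multiple-scattering
    correction factor c_ms >= 1 (here a placeholder for a transport-code
    result such as TWOTRAN's)."""
    return beer_lambert(i0, tau) * c_ms

tau = optical_depth(visual_range_km=0.76, path_km=1.0)
direct = beer_lambert(1.0, tau)
```

    At V = 0.76 km the direct beam over a 1 km path is attenuated below 1% of its initial intensity, which is why the multiply scattered component can dominate the detected signature.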

  17. Disentangling multiple drivers of pollination in a landscape-scale experiment.

    Schüepp, Christof; Herzog, Felix; Entling, Martin H

    2014-01-07

    Animal pollination is essential for the reproductive success of many wild and crop plants. Loss and isolation of (semi-)natural habitats in agricultural landscapes can cause declines of plants and pollinators and endanger pollination services. We investigated the independent effects of these drivers on pollination of young cherry trees in a landscape-scale experiment. We included (i) isolation of study trees from other cherry trees (up to 350 m), (ii) the amount of cherry trees in the landscape, (iii) the isolation from other woody habitats (up to 200 m) and (iv) the amount of woody habitats providing nesting and floral resources for pollinators. At the local scale, we considered effects of (v) cherry flower density and (vi) heterospecific flower density. Pollinators visited flowers more often in landscapes with high amount of woody habitat and at sites with lower isolation from the next cherry tree. Fruit set was reduced by isolation from the next cherry tree and by a high local density of heterospecific flowers but did not directly depend on pollinator visitation. These results reveal the importance of considering the plant's need for conspecific pollen and its pollen competition with co-flowering species rather than focusing only on pollinators' habitat requirements and flower visitation. It proved to be important to disentangle habitat isolation from habitat loss, local from landscape-scale effects, and direct effects of pollen availability on fruit set from indirect effects via pollinator visitation to understand the delivery of an agriculturally important ecosystem service.

  18. Consumer preference for seeds and seedlings of rare species impacts tree diversity at multiple scales.

    Young, Hillary S; McCauley, Douglas J; Guevara, Roger; Dirzo, Rodolfo

    2013-07-01

    Positive density-dependent seed and seedling predation, where herbivores selectively eat seeds or seedlings of common species, is thought to play a major role in creating and maintaining plant community diversity. However, many herbivores and seed predators are known to exhibit preferences for rare foods, which could lead to negative density-dependent predation. In this study, we first demonstrate the occurrence of increased predation of locally rare tree species by a widespread group of insular seed and seedling predators, land crabs. We then build computer simulations based on these empirical data to examine the effects of such predation on diversity patterns. Simulations show that herbivore preferences for locally rare species are likely to drive scale-dependent effects on plant community diversity: at small scales these foraging patterns decrease plant community diversity via the selective consumption of rare plant species, while at the landscape level they should increase diversity, at least for short periods, by promoting clustered local dominance of a variety of species. Finally, we compared observed patterns of plant diversity at the site to those obtained via computer simulations, and found that diversity patterns generated under simulations were highly consistent with observed diversity patterns. We posit that preference for rare species by herbivores may be prevalent in low- or moderate-diversity systems, and that these effects may help explain diversity patterns across different spatial scales in such ecosystems.

  19. LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM

    Higgins, G.H.; Crane, W.W.T.

    1959-05-19

    A large-scale process for production and purification of Cm-242 is described. Aluminum slugs containing Am are irradiated and declad in a NaOH-NaNO3 solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH4OH, and H2O. Recovery of Cm from filtrate and washings is effected by an Fe(OH)3 precipitation. The precipitates are then combined and dissolved in HCl and refractory oxides centrifuged out. These oxides are then fused with Na2CO3 and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl-HCl solution; rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as fluoride and used in this form or further purified and processed. (T.R.H.)

  20. From fuel cells to batteries: Synergies, scales and simulation methods

    Bessler, Wolfgang G.

    2011-01-01

    The recent years have shown a dynamic growth of battery research and development activities both in academia and industry, supported by large governmental funding initiatives throughout the world. A particular focus is being put on lithium-based battery technologies. This situation provides a stimulating environment for the fuel cell modeling community, as there are considerable synergies in the modeling and simulation methods for fuel cells and batteries. At the same time, batter...

  1. Regional management units for marine turtles: a novel framework for prioritizing conservation and research across multiple scales.

    Wallace, Bryan P; DiMatteo, Andrew D; Hurley, Brendan J; Finkbeiner, Elena M; Bolten, Alan B; Chaloupka, Milani Y; Hutchinson, Brian J; Abreu-Grobois, F Alberto; Amorocho, Diego; Bjorndal, Karen A; Bourjea, Jerome; Bowen, Brian W; Dueñas, Raquel Briseño; Casale, Paolo; Choudhury, B C; Costa, Alice; Dutton, Peter H; Fallabrino, Alejandro; Girard, Alexandre; Girondot, Marc; Godfrey, Matthew H; Hamann, Mark; López-Mendilaharsu, Milagros; Marcovaldi, Maria Angela; Mortimer, Jeanne A; Musick, John A; Nel, Ronel; Pilcher, Nicolas J; Seminoff, Jeffrey A; Troëng, Sebastian; Witherington, Blair; Mast, Roderic B

    2010-12-17

    Resolving threats to widely distributed marine megafauna requires definition of the geographic distributions of both the threats as well as the population unit(s) of interest. In turn, because individual threats can operate on varying spatial scales, their impacts can affect different segments of a population of the same species. Therefore, integration of multiple tools and techniques--including site-based monitoring, genetic analyses, mark-recapture studies and telemetry--can facilitate robust definitions of population segments at multiple biological and spatial scales to address different management and research challenges. To address these issues for marine turtles, we collated all available studies on marine turtle biogeography, including nesting sites, population abundances and trends, population genetics, and satellite telemetry. We georeferenced this information to generate separate layers for nesting sites, genetic stocks, and core distributions of population segments of all marine turtle species. We then spatially integrated this information from fine- to coarse-spatial scales to develop nested envelope models, or Regional Management Units (RMUs), for marine turtles globally. The RMU framework is a solution to the challenge of how to organize marine turtles into units of protection above the level of nesting populations, but below the level of species, within regional entities that might be on independent evolutionary trajectories. Among many potential applications, RMUs provide a framework for identifying data gaps, assessing high diversity areas for multiple species and genetic stocks, and evaluating conservation status of marine turtles. Furthermore, RMUs allow for identification of geographic barriers to gene flow, and can provide valuable guidance to marine spatial planning initiatives that integrate spatial distributions of protected species and human activities. In addition, the RMU framework--including maps and supporting metadata--will be an

  3. Detection-Discrimination Method for Multiple Repeater False Targets Based on Radar Polarization Echoes

    Z. W. ZONG

    2014-04-01

    Multiple repeater false targets (RFTs), created by the digital radio frequency memory (DRFM) system of a jammer, are widely used in practice to effectively exhaust the limited tracking and discrimination resources of defence radar. In this paper, a common characteristic of the radar polarization echoes of multiple RFTs is used for target recognition. Based on the echoes from two receiving polarization channels, the instantaneous polarization ratio (IPR) is defined and its variance is derived by employing a Taylor series expansion. A detection-discrimination method is designed based on probability grids. Using data from a microwave anechoic chamber, the detection threshold of the method is confirmed. Theoretical analysis and simulations indicate that the method is valid and feasible. Furthermore, the estimation performance of the IPRs of RFTs under the influence of signal-to-noise ratio (SNR) is also covered.

  4. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
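
    The Complex Mode Indicator Function itself is simple to compute: it is the set of singular values of the multi-reference FRF matrix at each spectral line, with peaks in the leading singular value indicating modes. A sketch on a synthetic two-output, two-reference system (the modal parameters are invented for illustration):

```python
import numpy as np

def cmif(frf):
    """CMIF: singular values of the FRF matrix (outputs x references)
    at each spectral line; frf has shape (n_freq, n_out, n_ref)."""
    return np.array([np.linalg.svd(h, compute_uv=False) for h in frf])

# synthetic receptance FRFs of a 2-mode system (modal superposition)
freqs = np.linspace(1, 100, 500)          # Hz
w = 2 * np.pi * freqs
modes = [(2 * np.pi * 30, 0.02, np.array([1.0, 0.6])),    # (wn, zeta, shape)
         (2 * np.pi * 70, 0.02, np.array([0.5, -1.0]))]
H = np.zeros((len(w), 2, 2), dtype=complex)
for wn, zeta, phi in modes:
    H += phi[:, None] * phi[None, :] / (wn ** 2 - w[:, None, None] ** 2
                                        + 2j * zeta * wn * w[:, None, None])

s = cmif(H)
peak_hz = freqs[np.argmax(s[:, 0])]       # strongest mode
```

    The secondary singular-value curve is what exposes repeated or closely spaced modes in the high-modal-density situations the method targets.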

  5. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Tabitha A Graves

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce the cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of the multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance, using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of two detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. 
The benefits of increased precision should be weighed
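
    The N-mixture likelihood underlying these models can be sketched for a single site: repeated counts are binomial thinnings of a shared latent abundance N, which is summed out against a Poisson prior. The counts and parameters below are invented for illustration; the study's covariate structure is not reproduced:

```python
import math

def nmix_site_loglik(counts, lam, p, n_max=100):
    """Marginal log-likelihood of repeated counts at one site under an
    N-mixture model: N ~ Poisson(lam), each count ~ Binomial(N, p),
    with the latent abundance N summed out up to n_max."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        prior = math.exp(-lam) * lam ** n / math.factorial(n)
        lik = 1.0
        for y in counts:
            lik *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
        total += prior * lik
    return math.log(total)

# three repeated counts at one hypothetical site, shared latent abundance
ll = nmix_site_loglik([3, 5, 4], lam=10.0, p=0.4)
```

    Separate detection methods enter such a model as distinct detection probabilities p (one per method) sharing the same latent N, which is what allows heterogeneity between methods to be tested.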

  6. Multiple time-scale optimization scheduling for islanded microgrids including PV, wind turbine, diesel generator and batteries

    Xiao, Zhao xia; Nan, Jiakai; Guerrero, Josep M.

    2017-01-01

    A multiple time-scale optimization scheduling, covering day-ahead and short-term horizons, for an islanded microgrid is presented. In this paper, the microgrid under study includes photovoltaics (PV), wind turbine (WT), diesel generator (DG), batteries, and shiftable loads. The study considers the maximum efficiency operation area for the diesel engine and the cost of the battery charge/discharge cycle losses. The day-ahead generation scheduling takes into account the minimum operational cost and the maximum load satisfaction as the objective function. Short-term optimal dispatch is based on minimizing...

  7. Research of the effectiveness of parallel multithreaded realizations of interpolation methods for scaling raster images

    Vnukov, A. A.; Shershnev, M. B.

    2018-01-01

    The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of the algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted for image scaling. The result of the work is a program for scaling images by the different methods, and a comparison of the scaling quality of the methods is given.
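
    Bilinear interpolation, one of the usual candidates among such scaling methods, can be sketched as follows (pure NumPy; a parallel version of the kind studied in the paper would split, e.g., output rows across threads):

```python
import numpy as np

def scale_bilinear(img, new_h, new_w):
    """Scale a 2-D grayscale image with bilinear interpolation,
    mapping output pixel centers back into the source grid."""
    h, w = img.shape
    ys = np.clip((np.arange(new_h) + 0.5) * h / new_h - 0.5, 0, h - 1)
    xs = np.clip((np.arange(new_w) + 0.5) * w / new_w - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # weighted sum of the four neighbours of each back-projected point
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])

src = np.array([[0.0, 1.0],
                [1.0, 0.0]])
big = scale_bilinear(src, 4, 4)
```

    Nearest-neighbour and bicubic variants differ only in how many source neighbours are weighted per output pixel, which is what drives the quality/runtime trade-off measured in such studies.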

  8. Modelling of multiple short-length-scale stall cells in an axial compressor using evolved GMDH neural networks

    Amanifard, N.; Nariman-Zadeh, N.; Farahani, M.H.; Khalkhali, A.

    2008-01-01

    Over the past 15 years there have been several research efforts to capture the stall inception nature in axial flow compressors. However previous analytical models could not explain the formation of short-length-scale stall cells. This paper provides a new model based on evolved GMDH neural network for transient evolution of multiple short-length-scale stall cells in an axial compressor. Genetic Algorithms (GAs) are also employed for optimal design of connectivity configuration of such GMDH-type neural networks. In this way, low-pass filter (LPF) pressure trace near the rotor leading edge is modelled with respect to the variation of pressure coefficient, flow rate coefficient, and number of rotor rotations which are defined as inputs

  9. Multiple linear regression to develop strength scaled equations for knee and elbow joints based on age, gender and segment mass

    D'Souza, Sonia; Rasmussen, John; Schwirtz, Ansgar

    2012-01-01

    and valuable ergonomic tool. Objective: To investigate age and gender effects on the torque-producing ability of the knee and elbow in older adults. To create strength-scaled equations based on age, gender, upper/lower limb lengths and masses using multiple linear regression. To reduce the number of dependent... flexors. Results: Males were significantly stronger than females across all age groups. Elbow peak torque (EPT) was better preserved from the 60s to the 70s, whereas knee peak torque (KPT) reduced significantly (P...). Gender, thigh mass and age best predicted KPT (R2=0.60). Gender, forearm mass and age best predicted EPT (R2=0.75). Good cross-validation was established for both elbow and knee models. Conclusion: This cross-sectional study of muscle strength created and validated strength-scaled equations of EPT and KPT using only gender, segment mass...
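
    The regression step can be sketched with ordinary least squares on synthetic data that mimics the qualitative pattern reported; the predictor coefficients below are invented for illustration and are not the study's estimates:

```python
import numpy as np

# hypothetical predictors: gender (0 = female, 1 = male), segment mass (kg), age (yr)
rng = np.random.default_rng(2)
n = 120
gender = rng.integers(0, 2, n)
mass = rng.normal(8.0, 1.5, n)          # thigh mass, illustrative
age = rng.uniform(60, 80, n)
# synthetic peak torque: males stronger, torque rises with mass, falls with age
torque = 40 + 35 * gender + 6 * mass - 0.8 * age + rng.normal(0, 5, n)

# ordinary least squares: beta = argmin ||X beta - y||^2
X = np.column_stack([np.ones(n), gender, mass, age])
beta, *_ = np.linalg.lstsq(X, torque, rcond=None)
pred = X @ beta
r2 = 1 - ((torque - pred) ** 2).sum() / ((torque - torque.mean()) ** 2).sum()
```

    The fitted beta is the strength-scaled equation: plugging in a new subject's gender, segment mass and age yields the predicted peak torque, and r2 quantifies how much variance those three predictors explain.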

  10. Precipitation-productivity Relation in Grassland in Northern China: Investigations at Multiple Spatiotemporal Scales

    Hu, Z.

    2017-12-01

    Climate change is predicted to cause dramatic variability in the precipitation regime, not only in terms of change in annual precipitation amount, but also in precipitation seasonal distribution and precipitation event characteristics (high-frequency extreme precipitation; larger but fewer precipitation events), which combine to influence the productivity of grassland in arid and semiarid regions. In this study, combining remote sensing products with in-situ measurements of aboveground net primary productivity (ANPP) and gross primary productivity (GPP) data from eddy covariance systems in grassland of northern China, we quantified the effects of spatio-temporal variation in precipitation on productivity from local sites to the region scale. We found that, for an individual precipitation event, the duration of the GPP response to the event and the maximum absolute GPP response induced by the event increased linearly with the size of precipitation events. Comparison of the productivity-precipitation relationships between multiple sites determined that the predominant characteristics of precipitation events (PEC) that affected GPP differed remarkably between the water-limited temperate steppe and the temperature-limited alpine meadow. The number of heavy precipitation events (>10 mm d-1) was the most important PEC impacting GPP in the temperate steppe, through affecting soil moisture at different soil profiles, while the precipitation interval was the factor that affected GPP most in the alpine meadow, via its effects on temperature. At the region scale, the shape of the ANPP-precipitation relationship varies with distinct spatial scales, and besides annual precipitation, precipitation seasonal distribution also has comparable impacts on spatial variation in ANPP. Temporal variability in ANPP was lower at both the dry and wet ends, and peaked at a precipitation of 243.1 ± 3.5 mm, which is the transition region between typical steppe and desert steppe

  11. Investigation of colistin sensitivity via three different methods in Acinetobacter baumannii isolates with multiple antibiotic resistance.

    Sinirtaş, Melda; Akalin, Halis; Gedikoğlu, Suna

    2009-09-01

    In recent years there has been an increase in life-threatening infections caused by Acinetobacter baumannii with multiple antibiotic resistance, which has led to reconsideration of the use of polymyxins, especially colistin. The aim of this study was to investigate the colistin sensitivity of A. baumannii isolates with multiple antibiotic resistance via different methods, and to evaluate the disk diffusion method for colistin against multi-resistant Acinetobacter isolates in comparison to the E-test and the Phoenix system. The study was carried out on 100 strains of A. baumannii (colonization or infection) isolated from the microbiological samples of different patients followed in the clinics and intensive care units of Uludağ University Medical School between 2004 and 2005. Strains were identified and characterized for antibiotic sensitivity with the Phoenix system (Becton Dickinson, Sparks, MD, USA). In all A. baumannii strains studied, susceptibility to colistin was determined to be 100% by the disk diffusion, E-test, and broth microdilution methods. Results of the E-test and broth microdilution methods, which are accepted as reference methods, were 100% consistent with those of the disk diffusion tests; no very major or major errors were identified upon comparison of the tests. The sensitivity and positive predictive value of the disk diffusion method were both 100%. Colistin resistance in A. baumannii was not detected in our region, and disk diffusion results are in accordance with those of the E-test and broth microdilution methods.
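The agreement statistics reported above (sensitivity and positive predictive value of disk diffusion against the reference methods) can be sketched as follows; the function and data are illustrative, with True meaning "susceptible", mirroring the study's all-susceptible outcome:

```python
def sensitivity_and_ppv(reference, test):
    """Sensitivity and positive predictive value of `test` against `reference`.

    Both arguments are sequences of booleans (True = susceptible).
    """
    tp = sum(r and t for r, t in zip(reference, test))          # true positives
    fn = sum(r and not t for r, t in zip(reference, test))      # false negatives
    fp = sum((not r) and t for r, t in zip(reference, test))    # false positives
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return sens, ppv

# Hypothetical data mirroring the study's outcome: 100 isolates, all
# susceptible by both the reference method and disk diffusion.
reference = [True] * 100
disk_diffusion = [True] * 100
print(sensitivity_and_ppv(reference, disk_diffusion))  # (1.0, 1.0)
```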

  12. The scaling of stress distribution under small scale yielding by T-scaling method and application to prediction of the temperature dependence on fracture toughness

    Ishihara, Kenichi; Hamada, Takeshi; Meshii, Toshiyuki

    2017-01-01

    In this paper, a new method for scaling the crack-tip stress distribution under small scale yielding conditions was proposed and named the T-scaling method. The method makes it possible to identify the different stress distributions of materials that have different tensile properties but carry identical loads in terms of K or J. Then, by assuming that the temperature dependence of a material is represented by the temperature dependence of its stress-strain relationship, a method was proposed to predict the fracture load at an arbitrary temperature from a known fracture load at a reference temperature. This method combines the T-scaling method with the knowledge that “the fracture stress for slip-induced cleavage fracture is temperature independent.” Once the fracture load is predicted, the fracture toughness Jc at the temperature under consideration can be evaluated by running an elastic-plastic finite element analysis. Finally, the above framework for predicting the Jc temperature dependence of a material in the ductile-to-brittle transition temperature region was validated for the 0.55% carbon steel JIS S55C. The proposed framework appears to offer a way to resolve the problem the master curve approach faces in the relatively high temperature region, since it requires only tensile tests. (author)

  13. Multiple metals exposure in a small-scale artisanal gold mining community.

    Basu, Niladri; Nam, Dong-Ha; Kwansaa-Ansah, Edward; Renne, Elisha P; Nriagu, Jerome O

    2011-04-01

    Urinary metals were characterized in 57 male residents of a small-scale gold mining community in Ghana. Chromium and arsenic exceeded health guideline values in 52% and 34% of all participants, respectively. About 10-40% of the participants had urinary levels of aluminum, copper, manganese, nickel, selenium, and zinc that fell outside the U.S. reference range. Exposure appears ubiquitous across the community, as none of the elements was associated with occupation, age, or diet. Copyright © 2011 Elsevier Inc. All rights reserved.
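The exceedance percentages quoted above are simple proportions; a minimal sketch with hypothetical values (not the study's measurements):

```python
def fraction_exceeding(levels, guideline):
    """Share of measurements strictly above a health guideline value."""
    return sum(v > guideline for v in levels) / len(levels)

# Hypothetical urinary chromium values and guideline threshold,
# for illustration only.
chromium_ug_per_L = [0.4, 1.2, 2.5, 0.9, 3.1, 0.2, 1.8, 0.6]
guideline = 1.0
print(f"{fraction_exceeding(chromium_ug_per_L, guideline):.0%} exceed the guideline")
```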

  14. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis of the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without any parametric model function, which makes the results more reliable and reproducible and enables automatic, faster parameter estimation. On top of this method, the bootstrap method is introduced and a numerical criterion for discriminating the transition type is proposed.
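A rough, simplified stand-in for the scaling-collapse idea behind the kernel method: here a low-order polynomial fit replaces the Gaussian-process regression of the actual method, and a grid search picks the exponent that best collapses synthetic data onto a single curve:

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse_residual(z, times, xs, ys, deg=3):
    """Mean squared residual of one polynomial fitted to all curves
    after rescaling the abscissa of each curve by t**(-z)."""
    u = np.concatenate([x * t ** (-z) for t, x in zip(times, xs)])
    v = np.concatenate(ys)
    coeffs = np.polyfit(u, v, deg)
    return float(np.mean((np.polyval(coeffs, u) - v) ** 2))

# Synthetic data obeying y = f(x * t**(-z)) with z = 0.5 and f(u) = exp(-u).
true_z = 0.5
times = [10.0, 100.0, 1000.0]
xs = [np.linspace(1.0, 5.0, 40) * t ** true_z for t in times]
ys = [np.exp(-x * t ** (-true_z)) + 0.001 * rng.standard_normal(40)
      for t, x in zip(times, xs)]

# Grid search for the exponent giving the best data collapse.
grid = np.linspace(0.1, 1.0, 91)
best = min(grid, key=lambda z: collapse_residual(z, times, xs, ys))
print(f"estimated exponent: {best:.2f}")
```

The kernel method replaces the polynomial with a nonparametric GP fit and, with Bayesian statistics, yields the exponent estimate and its uncertainty without choosing a model function by hand.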

  15. Comparison of multiple-criteria decision-making methods - results of simulation study

    Michał Adamczak

    2016-12-01

    Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Given the conditions in which supply chains operate, multi-criteria methods are the most interesting. The use of sophisticated methods for supporting decisions requires parameterization and calculations that are often complex, so is it efficient to use such methods? Methods: The authors compared two popular multiple-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study recreated both decision-making methods. Input data for the study were a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each alternative according to the WSM and AHP methods; the alternative producing the higher numerical value was considered preferred by the selected method. In the analysis of the results, the relationship between the parameter values and the difference in the results produced by the two methods was investigated using statistical methods, including hypothesis testing. Conclusions: The simulation study findings show that the results obtained with the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently with lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
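A minimal sketch of the two methods being compared, under illustrative weights and alternative values (not the study's inputs); in AHP, the criterion weights come from the principal eigenvector of a pairwise-comparison matrix:

```python
import numpy as np

# Three alternatives scored on three criteria (rows = alternatives);
# values are illustrative.
values = np.array([[0.8, 0.6, 0.9],
                   [0.7, 0.9, 0.6],
                   [0.9, 0.5, 0.6]])

# --- WSM: criterion weights given directly ---
wsm_weights = np.array([0.5, 0.3, 0.2])
wsm_scores = values @ wsm_weights

# --- AHP: weights derived from a reciprocal pairwise-comparison matrix ---
# This matrix encodes the same 0.5 : 0.3 : 0.2 ratios, so it is perfectly
# consistent (a_ij = w_i / w_j).
pairwise = np.array([[1.0, 5/3, 5/2],
                     [3/5, 1.0, 3/2],
                     [2/5, 2/3, 1.0]])
eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
ahp_weights = principal / principal.sum()   # normalize to sum to 1
ahp_scores = values @ ahp_weights

print("WSM ranking:", np.argsort(-wsm_scores))
print("AHP ranking:", np.argsort(-ahp_scores))
```

Because the pairwise matrix here is perfectly consistent, AHP recovers exactly the WSM weights and both methods rank the alternatives identically; the study's finding is that even with realistic, imperfect inputs the two rankings rarely diverge.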

  16. Traffic Management by Using Admission Control Methods in Multiple Node IMS Network

    Filip Chamraz

    2016-01-01

    The paper deals with Admission Control (AC) methods as a possible solution for traffic management in IMS (IP Multimedia Subsystem) networks, from the point of view of efficiently redistributing the available network resources while keeping the Quality of Service (QoS) parameters. The paper specifically aims at selecting the most appropriate method for each specific type of traffic, and at a traffic management concept using AC methods on multiple nodes. The potential benefits and disadvantages of each solution are evaluated.
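The simplest family of AC rules, parameter-based admission on reserved bandwidth, can be sketched as follows; the class and numbers are illustrative, and the paper's compared methods are more sophisticated:

```python
class AdmissionController:
    """Parameter-based AC: admit a session only while total reserved
    bandwidth stays below a configured fraction of link capacity."""

    def __init__(self, capacity_kbps, utilization_limit=0.9):
        self.capacity_kbps = capacity_kbps
        self.utilization_limit = utilization_limit
        self.reserved_kbps = 0.0

    def admit(self, requested_kbps):
        """Admit the session and reserve its bandwidth, or reject it."""
        budget = self.capacity_kbps * self.utilization_limit
        if self.reserved_kbps + requested_kbps <= budget:
            self.reserved_kbps += requested_kbps
            return True
        return False

ac = AdmissionController(capacity_kbps=1000)
print([ac.admit(300) for _ in range(4)])  # [True, True, True, False]
```

Measurement-based AC variants replace the static reservation counter with observed load, trading guaranteed QoS for higher utilization.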

  17. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

    Flood is a serious challenge that increasingly affects residents as well as policymakers, and flood vulnerability assessment is becoming ever more relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of an index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed to establish the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship between the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial pattern of flood vulnerability in the eastern area of Hainan, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for the rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment evaluates vulnerability more completely and helps decision makers make more comprehensively informed decisions. In summary, this study provides a new way to approach flood vulnerability assessment and disaster-prevention decision-making. Copyright © 2018 Elsevier Ltd. All rights reserved.
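The FCEM step can be sketched as a weight vector composed with a membership matrix under the common weighted-average operator; the weights and memberships below are illustrative, not the study's values:

```python
import numpy as np

# Importance weights of 3 indicators (illustrative).
weights = np.array([0.4, 0.35, 0.25])

# Membership of each indicator in 4 vulnerability levels (rows sum to 1).
membership = np.array([[0.1, 0.3, 0.4, 0.2],
                       [0.0, 0.2, 0.5, 0.3],
                       [0.3, 0.4, 0.2, 0.1]])

evaluation = weights @ membership        # fuzzy evaluation vector over levels
level = int(np.argmax(evaluation))       # maximum-membership principle
print(evaluation, "-> level", level + 1)
```

The study evaluates exposure, sensitivity and adaptive capacity each this way, then combines the three with the CDDM to classify the vulnerability type.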

  18. Ward identities and consistency relations for the large scale structure with multiple species

    Peloso, Marco; Pietroni, Massimo

    2014-01-01

    We present fully nonlinear consistency relations for the squeezed bispectrum of Large Scale Structure. These relations hold when the matter component of the Universe is composed of one or more species, and generalize those obtained in [1,2] for the single species case. The multi-species relations apply to the standard dark matter + baryons scenario, as well as to the case in which some of the fields are auxiliary quantities describing a particular population, such as dark matter halos or a specific galaxy class. If a large scale velocity bias exists between the different populations, new terms appear in the consistency relations with respect to the single species case. As an illustration, we discuss two physical cases in which such a velocity bias can exist: (1) a new long range scalar force in the dark matter sector (resulting in a violation of the equivalence principle in the dark matter-baryon system), and (2) the distribution of dark matter halos relative to that of the underlying dark matter field

  19. Habitat selection by Forster's Terns (Sterna forsteri) at multiple spatial scales in an urbanized estuary: The importance of salt ponds

    Bluso-Demers, Jill; Ackerman, Joshua T.; Takekawa, John Y.; Peterson, Sarah

    2016-01-01

    The highly urbanized San Francisco Bay Estuary, California, USA, is currently undergoing large-scale habitat restoration, and several thousand hectares of former salt evaporation ponds are being converted to tidal marsh. To identify potential effects of this habitat restoration on breeding waterbirds, habitat selection of radiotagged Forster's Terns (Sterna forsteri) was examined at multiple spatial scales during the pre-breeding and breeding seasons of 2005 and 2006. At each spatial scale, habitat selection ratios were calculated by season, year, and sex. Forster's Terns selected salt pond habitats at most spatial scales, demonstrating the importance of salt ponds for foraging and roosting. Salinity influenced the types of salt pond habitats that were selected. Specifically, Forster's Terns strongly selected lower salinity salt ponds (0.5–30 g/L) and generally avoided higher salinity salt ponds (≥31 g/L). Forster's Terns typically used tidal marsh and managed marsh habitats in proportion to their availability, avoided upland and tidal flat habitats, and strongly avoided open bay habitats. Salt ponds provide important habitat for breeding waterbirds, and restoration efforts to convert former salt ponds to tidal marsh may reduce the availability of preferred breeding and foraging areas.
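Habitat selection ratios of the kind calculated in the study are use/availability quotients, with values above 1 indicating selection and below 1 indicating avoidance. The counts below are hypothetical, chosen only to mirror the reported pattern (salt ponds selected, tidal marsh used in proportion to availability, open bay and upland avoided):

```python
def selection_ratios(used_counts, available_props):
    """Selection ratio per habitat: proportion used / proportion available."""
    total = sum(used_counts.values())
    return {h: (used_counts[h] / total) / available_props[h]
            for h in used_counts}

# Hypothetical telemetry locations per habitat and habitat availability.
used = {"salt_pond": 68, "tidal_marsh": 25, "open_bay": 5, "upland": 2}
available = {"salt_pond": 0.30, "tidal_marsh": 0.25,
             "open_bay": 0.35, "upland": 0.10}

ratios = selection_ratios(used, available)
print({h: round(r, 2) for h, r in ratios.items()})
```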

  20. Biosensors in the small scale: methods and technology trends.

    Senveli, Sukru U; Tigli, Onur

    2013-03-01

    This study presents a review of biosensors with an emphasis on recent developments in the field. A brief history accompanied by a detailed description of biosensor concepts is followed by the rising trends observed in contemporary micro- and nanoscale biosensors. Performance metrics to quantify and compare different detection mechanisms are presented. A comprehensive analysis of various types and subtypes of biosensors is given. The fields of interest within the scope of this review are label-free electrical, mechanical and optical biosensors, as well as other emerging and popular technologies. In particular, the latter half of the last decade is reviewed for the types, methods and results of the most prominently researched detection mechanisms. Tables are provided comparing the various competing technologies in the literature. The conclusion summarises the noteworthy advantages and disadvantages of all biosensors reviewed in this study. Furthermore, the future directions that micro- and nanoscale biosensing technologies are expected to take are provided, along with the immediate outlook.