A hybrid method for provincial scale energy-related carbon emission allocation in China.
Bai, Hongtao; Zhang, Yingxuan; Wang, Huizhi; Huang, Yanying; Xu, He
2014-01-01
Achievement of carbon emission reduction targets proposed by national governments relies on provincial/state allocations. In this study, a hybrid method for provincial energy-related carbon emission allocation in China was developed to provide a good balance between production- and consumption-based approaches. In this method, provincial energy-related carbon emissions are decomposed into direct emissions from local activities other than thermal power generation and indirect emissions resulting from electricity consumption. Based on the carbon reduction efficiency principle, the responsibility for embodied emissions of provincial product transactions is assigned entirely to the production area. The responsibility for carbon generated during the production of thermal power is borne by the electricity consumption area, which ensures that regions with different resource endowments have rational development space. Empirical studies were conducted to examine the hybrid method, and three indices (per capita GDP, a resource endowment index and the proportion of energy-intensive industries) were screened to preliminarily interpret the differences among China's regional carbon emissions. Uncertainty analysis and a discussion of this method are also provided herein.
Directory of Open Access Journals (Sweden)
Huan-Yu Bi
2015-09-01
The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high-order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the Rδ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R_{e+e−} and the Higgs partial width Γ(H→bb̄). Both methods lead to the same resummed ('conformal') series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {βi}-terms in the pQCD expansion are taken into account. We also show that special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.
International Nuclear Information System (INIS)
Braendas, E.
1986-01-01
The method of complex scaling is taken to include bound states, resonances, the remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented.
Relating system-to-CFD coupled code analyses to theoretical framework of a multi-scale method
International Nuclear Information System (INIS)
Cadinu, F.; Kozlowski, T.; Dinh, T.N.
2007-01-01
Over the past decades, analyses of transient processes and accidents in nuclear power plants have been performed, to a significant extent and with great success, by means of so-called system codes, e.g. the RELAP5, CATHARE and ATHLET codes. These computer codes, based on a multi-fluid model of two-phase flow, provide an effective, one-dimensional description of the coolant thermal-hydraulics in the reactor system. For some components in the system, wherever needed, the effect of multi-dimensional flow is accounted for through approximate models. The latter are derived from scaled experiments conducted for selected accident scenarios. Increasingly, however, we have to deal with newer and ever more complex accident scenarios. In some such cases the system codes fail to serve as a simulation vehicle, largely due to their deficient treatment of multi-dimensional flow (in e.g. the downcomer or lower plenum). A possible way of improvement is to use the techniques of Computational Fluid Dynamics (CFD). Based on solving the Navier-Stokes equations, CFD codes have been developed and used broadly to perform analysis of multi-dimensional flow, predominantly in non-nuclear industry and for single-phase flow applications. It is clear that CFD simulations cannot substitute for system codes but only complement them. Given the intrinsic multi-scale nature of this problem, we propose to relate it to the more general field of research on multi-scale simulations. Even though multi-scale methods are developed on a case-by-case basis, the need for a unified framework has led to the development of the heterogeneous multi-scale method (HMM)
Horion, Stephanie; Ivits, Eva; Verzandvoort, Simone; Fensholt, Rasmus
2017-04-01
Ongoing pressures on European land are manifold, with extreme climate events and non-sustainable use of land resources being amongst the most important drivers altering the functioning of ecosystems. The protection and conservation of European natural capital is one of the key objectives of the 7th Environmental Action Plan (EAP). The EAP stipulates that European land must be managed in a sustainable way by 2020, and the UN Sustainable Development Goals define a land-degradation-neutral world as one of the targets. This implies that land degradation (LD) assessment of European ecosystems must be performed repeatedly, allowing for the assessment of the current state of LD as well as of changes compared to a baseline adopted by the UNCCD for the objective of land degradation neutrality. However, scientifically robust methods are still lacking for large-scale assessment of LD and repeated consistent mapping of the state of terrestrial ecosystems. Historical land degradation assessments based on various methods exist, but these methods are generally non-replicable or difficult to apply at continental scale (Allan et al. 2007). The current lack of research methods applicable at large spatial scales is notably caused by the non-robust definition of LD, the scarcity of field data on LD, as well as the complex interplay of the processes driving LD (Vogt et al., 2011). Moreover, the link between LD and changes in land use (how land-use changes relate to changes in vegetation productivity and ecosystem functioning) is not straightforward. In this study we used the segmented trend method developed by Horion et al. (2016) for large-scale systematic assessment of hotspots of change in ecosystem functioning in relation to LD. This method alleviates the shortcomings of the widely used linear trend model, which neither accounts for abrupt changes nor adequately captures the actual changes in ecosystem functioning (de Jong et al. 2013; Horion et al. 2016). Here we present a new methodology for
Optimal renormalization scales and commensurate scale relations
International Nuclear Information System (INIS)
Brodsky, S.J.; Lu, H.J.
1996-01-01
Commensurate scale relations relate observables to observables and thus are independent of theoretical conventions, such as the choice of intermediate renormalization scheme. The physical quantities are related at commensurate scales which satisfy a transitivity rule, ensuring that predictions are independent of the choice of an intermediate renormalization scheme. QCD can thus be tested in a new and precise way by checking that the observables track both in their relative normalization and in their commensurate scale dependence. For example, the radiative corrections to the Bjorken sum rule at a given momentum transfer Q can be predicted from measurements of the e+e− annihilation cross section at a corresponding commensurate energy scale √s ∝ Q, thus generalizing Crewther's relation to non-conformal QCD. The coefficients that appear in this perturbative expansion take the form of a simple geometric series and thus have no renormalon divergent behavior. The authors also discuss scale-fixed relations between the threshold corrections to the heavy quark production cross section in e+e− annihilation and the heavy quark coupling α_V, which is measurable in lattice gauge theory.
Multiple time scale methods in tokamak magnetohydrodynamics
International Nuclear Information System (INIS)
Jardin, S.C.
1984-01-01
Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.
Methods of numerical relativity
International Nuclear Information System (INIS)
Piran, T.
1983-01-01
Numerical Relativity is an alternative to analytical methods for obtaining solutions for Einstein equations. Numerical methods are particularly useful for studying generation of gravitational radiation by potential strong sources. The author reviews the analytical background, the numerical analysis aspects and techniques and some of the difficulties involved in numerical relativity. (Auth.)
A New Class of Scaling Correction Methods
International Nuclear Information System (INIS)
Mei Li-Jie; Wu Xin; Liu Fu-Yao
2012-01-01
When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods in which scale factors act only on the related components of the integrated momenta. They can exactly preserve some first integrals of motion in discrete or continuous dynamical systems, so that the rapid growth of roundoff or truncation errors is suppressed significantly.
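As an illustration of the idea (a minimal sketch on a toy problem, not the authors' algorithm), the following applies a velocity scaling correction after each Runge-Kutta step of a harmonic oscillator, rescaling only the momentum-like component so that the energy integral is preserved:

```python
import math

def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(y)."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Harmonic oscillator y = [x, v]; first integral E = (x^2 + v^2)/2.
f = lambda y: [y[1], -y[0]]
energy = lambda y: 0.5 * (y[0] ** 2 + y[1] ** 2)

y = [1.0, 0.0]
E0 = energy(y)
for _ in range(10_000):
    y = rk4_step(f, y, 0.1)
    # Velocity scaling correction: rescale only the momentum-like
    # component so the energy integral is restored after each step.
    v2_target = 2.0 * E0 - y[0] ** 2
    if v2_target > 0.0 and y[1] != 0.0:
        y[1] = math.copysign(math.sqrt(v2_target), y[1])
```

Without the correction, plain RK4 lets the energy drift monotonically; with it, the orbit is pinned to the constraint hypersurface.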
Time Scale in Least Square Method
Directory of Open Access Journals (Sweden)
Özgür Yeniay
2014-01-01
The study of dynamic equations on time scales is a new area in mathematics. Time scale calculus builds a bridge between the real numbers and the integers. Two derivatives have been introduced on time scales, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the problem of obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scale calculus and obtained the coefficients of the model. Here, two distinct sets of coefficients arise for the same model, originating from the forward and backward jump operators. The occurrence of such a situation corresponds to the total of the vertical deviations between the regression equations of the forward and backward jump operators and the observed values, divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We believe that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.
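For reference, the ordinary least squares baseline that the abstract compares against has a closed form; the sketch below computes it on illustrative data (the delta/nabla time-scale variants themselves are not reproduced here):

```python
def ols(xs, ys):
    """Closed-form ordinary least squares fit for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx          # slope
    return my - b * mx, b  # intercept, slope

# Illustrative integer-valued regressor, as in the time-scale setting.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
a, b = ols(xs, ys)
```

The time-scale formulation replaces the classical derivative in the normal equations with the delta or nabla derivative, which is why forward and backward jump operators yield two different coefficient sets for the same data.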
Scaling as an Organizational Method
DEFF Research Database (Denmark)
Papazu, Irina Maria Clara Hansen; Nelund, Mette
2018-01-01
Organization studies have shown limited interest in the part that scaling plays in organizational responses to climate change and sustainability. Moreover, while scales are viewed as central to the diagnosis of the organizational challenges posed by climate change and sustainability, the role...... turn something as immense as the climate into a small and manageable problem, thus making abstract concepts part of concrete, organizational practice....
Preface: Introductory Remarks: Linear Scaling Methods
Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.
2008-07-01
It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up
Kim, Jinhyun; Jung, Yoomi
2009-08-01
This paper analyzed alternative methods of calculating the conversion factor for nurse-midwives' delivery services in the national health insurance and estimated the optimal reimbursement level for the services. A cost accounting model and a Sustainable Growth Rate (SGR) model were developed to estimate the conversion factor of the Resource-Based Relative Value Scale (RBRVS) for nurse-midwives' services, depending on the scope of revenue considered in the financial analysis. Data and sources from the government and the financial statements of nurse-midwife clinics were used in the analysis. The cost accounting model and the SGR model showed a 17.6-37.9% increase and a 19.0-23.6% increase, respectively, in nurse-midwife fees for delivery services in the national health insurance. The SGR model measured an overall trend in medical expenditures rather than the individual financial status of nurse-midwife clinics, and the cost analysis properly estimated the level of reimbursement for nurse-midwives' services. Normal vaginal delivery in nurse-midwife clinics is considered cost-effective in terms of insurance financing. Given a declining share of health expenditures on midwife clinics, designing a reimbursement strategy for midwives' services could be an opportunity as well as a challenge when it comes to efficient resource allocation.
Level density in the complex scaling method
International Nuclear Information System (INIS)
Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki
2005-01-01
It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)
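In standard notation (a summary in commonly used formulas, not taken verbatim from this abstract), the continuum level density and its link to the scattering phase shift read:

```latex
\Delta(E) \;=\; -\frac{1}{\pi}\,\operatorname{Im}\operatorname{Tr}
\left[\frac{1}{E - H + i\varepsilon} \;-\; \frac{1}{E - H_{0} + i\varepsilon}\right],
\qquad
\Delta(E) \;=\; \frac{1}{\pi}\,\frac{d\delta(E)}{dE}
```

In the CSM the trace is evaluated over the discretized complex-scaled spectrum, which is what makes the inverse calculation of δ(E) from the discretized CLD possible.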
Modified dispersion relations, inflation, and scale invariance
Bianco, Stefano; Friedhoff, Victor Nicolai; Wilson-Ewing, Edward
2018-02-01
For a certain type of modified dispersion relations, the vacuum quantum state for very short wavelength cosmological perturbations is scale-invariant, and it has been suggested that this may be the source of the scale-invariance observed in the temperature anisotropies in the cosmic microwave background. We point out that for this scenario to be possible, it is necessary to redshift these short wavelength modes to cosmological scales in such a way that the scale-invariance is not lost. This requires nontrivial background dynamics before the onset of standard radiation-dominated cosmology; we demonstrate that one possible solution is inflation with a sufficiently large Hubble rate, since for this slow roll is not necessary. In addition, we also show that if the slow-roll condition is added to inflation with a large Hubble rate, then for any power-law modified dispersion relation, quantum vacuum fluctuations become nearly scale-invariant when they exit the Hubble radius.
On the evolution of cluster scaling relations
International Nuclear Information System (INIS)
Diemer, Benedikt; Kravtsov, Andrey V.; More, Surhud
2013-01-01
Understanding the evolution of scaling relations between the observable properties of clusters and their total mass is key to realizing their potential as cosmological probes. In this study, we investigate whether the evolution of cluster scaling relations is affected by the spurious evolution of mass caused by the evolving reference density with respect to which halo masses are defined (pseudo-evolution). We use the relation between mass, M, and velocity dispersion, σ, as a test case, and show that the deviation from the M-σ relation of cluster-sized halos caused by pseudo-evolution is smaller than 10% for a wide range of mass definitions. The reason for this small impact is a tight relation between the velocity dispersion and mass profiles, σ(
Origins of scaling relations in nonequilibrium growth
International Nuclear Information System (INIS)
Escudero, Carlos; Korutcheva, Elka
2012-01-01
Scaling and hyperscaling laws provide exact relations among critical exponents describing the behavior of a system at criticality. For nonequilibrium growth models with a conserved drift, there exist few of them. One such relation is α + z = 4, found to be inexact in a renormalization group calculation for several classical models in this field. Herein, we focus on the two-dimensional case and show that it is possible to construct conserved surface growth equations for which the relation α + z = 4 is exact in the renormalization group sense. We explain the presence of this scaling law in terms of the existence of geometric principles dominating the dynamics. (paper)
Scaling relations for eddy current phenomena
International Nuclear Information System (INIS)
Dodd, C.V.; Deeds, W.E.
1975-11-01
Formulas are given for various electromagnetic quantities for coils in the presence of conductors, with the scaling parameters factored out so that small-scale model experiments can be related to large-scale apparatus. Particular emphasis is given to such quantities as eddy current heating, forces, power, and induced magnetic fields. For axially symmetric problems, closed-form integrals are available for the vector potential and all the other quantities obtainable from it. For unsymmetrical problems, a three-dimensional relaxation program can be used to obtain the vector potential and then the derivable quantities. Data on experimental measurements are given to verify the validity of the scaling laws for forces, inductances, and impedances. Indirectly these also support the validity of the scaling of the vector potential and all of the other quantities obtained from it
Cosmology and cluster halo scaling relations
Araya-Melo, Pablo A.; van de Weygaert, Rien; Jones, Bernard J. T.
2009-01-01
We explore the effects of dark matter and dark energy on the dynamical scaling properties of galaxy clusters. We investigate the cluster Faber-Jackson (FJ), Kormendy and Fundamental Plane (FP) relations between the mass, radius and velocity dispersion of cluster-sized haloes in cosmological N-body
Continuum Level Density in Complex Scaling Method
International Nuclear Information System (INIS)
Suzuki, R.; Myo, T.; Kato, K.
2005-01-01
A new calculational method of the continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique.
Universal Scaling Relations in Scale-Free Structure Formation
Guszejnov, Dávid; Hopkins, Philip F.; Grudić, Michael Y.
2018-04-01
A large number of astronomical phenomena exhibit remarkably similar scaling relations. The most well-known of these is the mass distribution dN/dM ∝ M^-2, which (to first order) describes stars, protostellar cores, clumps, giant molecular clouds, star clusters and even dark matter halos. In this paper we propose that this ubiquity is not a coincidence and that it is the generic result of scale-free structure formation where the different scales are uncorrelated. We show that all such systems produce a mass function proportional to M^-2 and a column density distribution with a power-law tail of dA/d ln Σ ∝ Σ^-1. In the case where structure formation is controlled by gravity, the two-point correlation becomes ξ_2D ∝ R^-1. Furthermore, structures formed by such processes (e.g. young star clusters, DM halos) tend to a ρ ∝ R^-3 density profile. We compare these predictions with observations, analytical fragmentation cascade models, semi-analytical models of gravito-turbulent fragmentation and detailed "full physics" hydrodynamical simulations. We find that these power laws are good first-order descriptions in all cases.
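As a quick numerical illustration (not from the paper), a dN/dM ∝ M^-2 population can be drawn by inverse-transform sampling and its slope recovered with the standard maximum-likelihood estimator for power laws:

```python
import math
import random

random.seed(42)
M_MIN, M_MAX = 1.0, 1e6  # illustrative mass range in arbitrary units

def sample_mass():
    """Inverse-transform sample from dN/dM ∝ M^-2 on [M_MIN, M_MAX]."""
    u = random.random()
    return M_MIN / (1.0 - u * (1.0 - M_MIN / M_MAX))

masses = [sample_mass() for _ in range(100_000)]

# Maximum-likelihood slope for a pure power law (the truncation at
# M_MAX introduces only a tiny bias for this range):
#   alpha_hat = 1 + n / sum(ln(M / M_MIN))
n = len(masses)
alpha_hat = 1.0 + n / sum(math.log(m / M_MIN) for m in masses)
```

With 10^5 samples the estimator recovers alpha ≈ 2 to within about a percent, which is the kind of consistency check applied to simulated mass functions.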
Heritage and scale: settings, boundaries and relations
DEFF Research Database (Denmark)
Harvey, David
2015-01-01
of individuals and communities, towns and cities, regions, nations, continents or globally – becomes ever more important. Partly reflecting this crisis of the national container, researchers have sought opportunities both through processes of ‘downscaling’, towards community, family and even personal forms...... relations. This paper examines how heritage is produced and practised, consumed and experienced, managed and deployed at a variety of scales, exploring how notions of scale, territory and boundedness have a profound effect on the heritage process. Drawing on the work of Doreen Massey and others, the paper...
Scale relativity: from quantum mechanics to chaotic dynamics.
Nottale, L.
Scale relativity is a new approach to the problem of the origin of fundamental scales and of scaling laws in physics, which consists in generalizing Einstein's principle of relativity to the case of scale transformations of resolutions. We recall here how it leads to the concept of fractal space-time and to the introduction of a new complex time-derivative operator which allows one to recover the Schrödinger equation and then to generalize it. In high energy quantum physics, it leads to the introduction of a Lorentzian renormalization group, in which the Planck length is reinterpreted as a lowest, unpassable scale, invariant under dilatations. These methods are successively applied to two problems: in quantum mechanics, that of the mass spectrum of elementary particles; in chaotic dynamics, that of the distribution of planets in the Solar System.
Methods of scaling threshold color difference using printed samples
Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier
2012-01-01
A series of printed samples on a semi-gloss paper substrate, with color differences near the perceptibility threshold, were prepared for scaling the visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color-difference formulas were scaled against these Z-scores. The resulting visual color differences were then checked using the STRESS factor. The results indicate that only the scales changed; the relative scales between pairs in the data are preserved.
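The probability-to-Z-score step can be sketched as follows (hypothetical data; the probit transform shown is standard, though the study's exact procedure may differ):

```python
from statistics import NormalDist

# Hypothetical fractions of observers who judged each printed pair's
# color difference to be perceptible (illustrative data only).
perceptibility = {"pair_A": 0.16, "pair_B": 0.50, "pair_C": 0.84}

probit = NormalDist().inv_cdf  # maps a probability to a Z-score
z_scores = {pair: probit(p) for pair, p in perceptibility.items()}
# A 50% "perceptible" rate maps to Z = 0, i.e. the threshold itself;
# rates above/below 50% map to positive/negative Z symmetrically.
```

Placing all pairs on this common Z-score axis is what allows different color-difference formulas to be compared on one visual scale.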
Methods for Large-Scale Nonlinear Optimization.
1980-05-01
STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ...typical iteration can be partitioned so that where B is an m × m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library.
Temperature scaling method for Markov chains.
Crosby, Lonnie D; Windus, Theresa L
2009-01-22
The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
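A minimal sketch of the underlying idea, assuming the familiar Boltzmann-reweighting form (the paper's TeS procedure is more elaborate than this): a Metropolis chain sampled at one temperature is reweighted to estimate an average at a nearby temperature without re-running the simulation.

```python
import math
import random

random.seed(0)

def energy(x):
    """Simple 1-D potential, E(x) = x^2 (reduced units, k_B = 1)."""
    return x * x

def metropolis_chain(temp, steps):
    """Metropolis sampling of exp(-E(x)/T) for the 1-D potential."""
    x, chain = 0.0, []
    for _ in range(steps):
        trial = x + random.uniform(-0.5, 0.5)
        # Accept with probability min(1, exp(-dE/T)).
        if random.random() < math.exp(-(energy(trial) - energy(x)) / temp):
            x = trial
        chain.append(x)
    return chain

# Reweight a chain sampled at T1 to estimate <E> at T2; for E = x^2
# the exact thermal average is T/2, so the target value is T2/2.
T1, T2 = 1.0, 0.9
chain = metropolis_chain(T1, 200_000)
weights = [math.exp(-energy(x) * (1.0 / T2 - 1.0 / T1)) for x in chain]
e_t2 = sum(w * energy(x) for w, x in zip(weights, chain)) / sum(weights)
```

The reweighting is cheap because it reuses the stored energies; its accuracy degrades as T2 moves far from T1 and the weight distribution becomes dominated by a few configurations.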
Work related injuries and associated factors among small scale ...
African Journals Online (AJOL)
Objective: This study aims to assess the magnitude of work related injury and associated factors among small scale industrial workers in Mizan-Aman town, Bench Maji Zone, Southwest Ethiopia. Method: A cross-sectional study design was conducted from February to May, 2016. Data was collected using a structured face to ...
Examining Similarity Structure: Multidimensional Scaling and Related Approaches in Neuroimaging
Directory of Open Access Journals (Sweden)
Svetlana V. Shinkareva
2013-01-01
This paper covers similarity analyses, a subset of multivariate pattern analysis techniques that are based on similarity spaces defined by multivariate patterns. These techniques offer several advantages and complement other methods for brain data analyses, as they allow for comparison of representational structure across individuals, brain regions, and data acquisition methods. Particular attention is paid to multidimensional scaling and related approaches that yield spatial representations or provide methods for characterizing individual differences. We highlight unique contributions of these methods by reviewing recent applications to functional magnetic resonance imaging data and emphasize areas of caution in applying and interpreting similarity analysis methods.
Polarized atomic orbitals for linear scaling methods
Berghold, Gerd; Parrinello, Michele; Hutter, Jürg
2002-02-01
We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent from the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
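To see why overlap-matrix conditioning matters here, consider a toy 2×2 overlap matrix (an illustration of the general phenomenon, not the PAO construction itself):

```python
# Toy overlap matrix S = [[1, s], [s, 1]] for two normalized basis
# functions with mutual overlap s. Its eigenvalues are 1 ± s, so
# cond(S) = (1 + s) / (1 - s), which diverges as the functions become
# linearly dependent (s -> 1). Large extended basis sets accumulate
# such near-dependencies; the PAO overlap matrix avoids them.
def cond_2x2_overlap(s):
    return (1.0 + s) / (1.0 - s)

conds = {s: cond_2x2_overlap(s) for s in (0.5, 0.9, 0.99)}
```

The rapid growth of the condition number with s is the source of the numerical instabilities that the PAO method sidesteps, since its overlap conditioning is independent of the underlying extended basis.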
Large-scale tides in general relativity
Energy Technology Data Exchange (ETDEWEB)
Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)
2017-02-01
Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.
Conformal methods in general relativity
Valiente Kroon, Juan A
2016-01-01
This book offers a systematic exposition of conformal methods and how they can be used to study the global properties of solutions to the equations of Einstein's theory of gravity. It shows that combining these ideas with differential geometry can elucidate the existence and stability of the basic solutions of the theory. Introducing the differential geometric, spinorial and PDE background required to gain a deep understanding of conformal methods, this text provides an accessible account of key results in mathematical relativity over the last thirty years, including the stability of de Sitter and Minkowski spacetimes. For graduate students and researchers, this self-contained account includes useful visual models to help the reader grasp abstract concepts and a list of further reading, making this the perfect reference companion on the topic.
Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations
Mantz, A.; Allen, S. W.
2011-01-01
Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.
Test equating, scaling, and linking methods and practices
Kolen, Michael J
2014-01-01
This book provides an introduction to test equating, scaling, and linking, including those concepts and practical issues that are critical for developers and all other testing professionals. In addition to statistical procedures, successful equating, scaling, and linking involves many aspects of testing, including procedures to develop tests, to administer and score tests, and to interpret scores earned on tests. Test equating methods are used with many standardized tests in education and psychology to ensure that scores from multiple test forms can be used interchangeably. Test scaling is the process of developing score scales that are used when scores on standardized tests are reported. In test linking, scores from two or more tests are related to one another. Linking has received much recent attention, due largely to investigations of linking similarly named tests from different test publishers or tests constructed for different purposes. In recent years, researchers from the education, psychology, and...
Testing general relativity at cosmological scales: Implementation and parameter correlations
International Nuclear Information System (INIS)
Dossett, Jason N.; Ishak, Mustapha; Moldenhauer, Jacob
2011-01-01
The testing of general relativity at cosmological scales has become a possible and timely endeavor that is not only motivated by the pressing question of cosmic acceleration but also by the proposals of some extensions to general relativity that would manifest themselves at large scales of distance. We analyze here correlations between modified gravity growth parameters and some core cosmological parameters using the latest cosmological data sets including the refined Cosmic Evolution Survey 3D weak lensing. We provide the parametrized modified growth equations and their evolution. We implement known functional and binning approaches, and propose a new hybrid approach to evolve the modified gravity parameters in redshift (time) and scale. The hybrid parametrization combines a binned redshift dependence and a smooth evolution in scale avoiding a jump in the matter power spectrum. The formalism developed to test the consistency of current and future data with general relativity is implemented in a package that we make publicly available and call ISiTGR (Integrated Software in Testing General Relativity), an integrated set of modified modules for the publicly available packages CosmoMC and CAMB, including a modified version of the integrated Sachs-Wolfe-galaxy cross correlation module of Ho et al. and a new weak-lensing likelihood module for the refined Hubble Space Telescope Cosmic Evolution Survey weak gravitational lensing tomography data. We obtain parameter constraints and correlation coefficients finding that modified gravity parameters are significantly correlated with σ8 and mildly correlated with Ωm, for all evolution methods. The degeneracies between σ8 and modified gravity parameters are found to be substantial for the functional form and also for some specific bins in the hybrid and binned methods indicating that these degeneracies will need to be taken into consideration when using future high precision data.
Variable scaling method and Stark effect in hydrogen atom
International Nuclear Information System (INIS)
Choudhury, R.K.R.; Ghosh, B.
1983-09-01
By relating the Stark effect problem in hydrogen-like atoms to that of the spherical anharmonic oscillator we have found simple formulas for energy eigenvalues for the Stark effect. Matrix elements have been calculated using the O(2,1) algebra technique after Armstrong, and then the variable scaling method has been used to find optimal solutions. Our numerical results are compared with those of Hioe and Yoo and also with the results obtained by Lanczos. (author)
Scale-Independent Relational Query Processing
2013-10-04
Microfluidic device, and related methods
Wong, Eric W. (Inventor)
2010-01-01
A method of making a microfluidic device is provided. The method features patterning a permeable wall on a substrate, and surrounding the permeable wall with a solid, non-permeable boundary structure to establish a microfluidic channel having a cross-sectional dimension less than 5,000 microns and a cross-sectional area at least partially filled with the permeable wall so that fluid flowing through the microfluidic channel at least partially passes through the permeable wall.
Special relativity at the quantum scale.
Directory of Open Access Journals (Sweden)
Pui K Lam
Full Text Available It has been suggested that the space-time structure as described by the theory of special relativity is a macroscopic manifestation of a more fundamental quantum structure (pre-geometry). Efforts to quantify this idea have come mainly from the area of abstract quantum logic theory. Here we present a preliminary attempt to develop a quantum formulation of special relativity based on a model that retains some geometric attributes. Our model is Feynman's "checker-board" trajectory for a 1-D relativistic free particle. We use this model to guide us in identifying (1) the quantum version of the postulates of special relativity and (2) the appropriate quantum "coordinates". This model possesses a useful feature that it admits an interpretation both in terms of paths in space-time and in terms of quantum states. Based on the quantum version of the postulates, we derive a transformation rule for velocity. This rule reduces to Einstein's velocity-addition formula in the macroscopic limit and reveals an interesting aspect of time. The 3-D case, time-dilation effect, and invariant interval are also discussed in terms of this new formulation. This is a preliminary investigation; some results are derived, while others are interesting observations at this point.
Special relativity at the quantum scale.
Lam, Pui K
2014-01-01
It has been suggested that the space-time structure as described by the theory of special relativity is a macroscopic manifestation of a more fundamental quantum structure (pre-geometry). Efforts to quantify this idea have come mainly from the area of abstract quantum logic theory. Here we present a preliminary attempt to develop a quantum formulation of special relativity based on a model that retains some geometric attributes. Our model is Feynman's "checker-board" trajectory for a 1-D relativistic free particle. We use this model to guide us in identifying (1) the quantum version of the postulates of special relativity and (2) the appropriate quantum "coordinates". This model possesses a useful feature that it admits an interpretation both in terms of paths in space-time and in terms of quantum states. Based on the quantum version of the postulates, we derive a transformation rule for velocity. This rule reduces to Einstein's velocity-addition formula in the macroscopic limit and reveals an interesting aspect of time. The 3-D case, time-dilation effect, and invariant interval are also discussed in terms of this new formulation. This is a preliminary investigation; some results are derived, while others are interesting observations at this point.
Spectral properties and scaling relations in off diagonally disordered chains
International Nuclear Information System (INIS)
Ure, J.E.; Majlis, N.
1987-07-01
We obtain the localization length L as a function of the energy E and the disorder width W for an off-diagonally disordered chain. This is done by performing numerical simulations involving the continued fraction representation of the transfer matrix. The scaling relation L = W^s is obtained with values of the exponent s in agreement with calculations of other authors. We also obtain the relation L ∼ |E|^v for E → 0, and use it in the Herbert-Spencer-Thouless formula for L to describe the singularity of the density of states near E=0. We show that the slightest diagonal disorder obliterates this singularity. A practical method is presented to calculate the Green function by exploiting its continued fraction expansion. (author). 20 refs, 4 figs
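The transfer-matrix estimate of a localization length can be sketched as follows. This is a generic tight-binding model with uniformly distributed random hoppings, not the authors' continued-fraction implementation; the energy and disorder values are arbitrary:

```python
import math, random

def localization_length(E, W, N=500000, seed=1):
    """Estimate the localization length 1/gamma of a 1-D tight-binding chain
    with off-diagonal (hopping) disorder via the transfer-matrix method.
    Hoppings t_n are drawn uniformly from [1 - W/2, 1 + W/2]; on-site
    energies are zero.  gamma is the Lyapunov exponent of the recursion
    t_n psi_{n+1} = E psi_n - t_{n-1} psi_{n-1}."""
    rng = random.Random(seed)
    psi_prev, psi = 1.0, 1.0
    t_prev = 1.0
    log_growth = 0.0
    for _ in range(N):
        t = 1.0 + W * (rng.random() - 0.5)
        psi_next = (E * psi - t_prev * psi_prev) / t
        # renormalize to avoid overflow, accumulating the log of the norm
        norm = abs(psi_next) + abs(psi)
        log_growth += math.log(norm)
        psi_prev, psi = psi / norm, psi_next / norm
        t_prev = t
    gamma = log_growth / N  # Lyapunov exponent
    return 1.0 / gamma

# Stronger hopping disorder localizes more strongly (shorter L):
L_weak = localization_length(E=0.5, W=0.5)
L_strong = localization_length(E=0.5, W=1.2)
```

The renormalization at each step is what makes the long-chain product of transfer matrices numerically stable.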
Dissipative structures and related methods
Langhorst, Benjamin R; Chu, Henry S
2013-11-05
Dissipative structures include at least one panel and a cell structure disposed adjacent to the at least one panel having interconnected cells. A deformable material, which may comprise at least one hydrogel, is disposed within at least one interconnected cell proximate to the at least one panel. Dissipative structures may also include a cell structure having interconnected cells formed by wall elements. The wall elements may include a mesh formed by overlapping fibers having apertures formed therebetween. The apertures may form passageways between the interconnected cells. Methods of dissipating a force include disposing at least one hydrogel in a cell structure proximate to at least one panel, applying a force to the at least one panel, and forcing at least a portion of the at least one hydrogel through apertures formed in the cell structure.
Multilevel method for modeling large-scale networks.
Energy Technology Data Exchange (ETDEWEB)
Safro, I. M. (Mathematics and Computer Science)
2012-02-24
Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
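As a toy illustration of the "Kronecker product modeling" mentioned above, here is a deterministic variant; real generators such as stochastic Kronecker graphs instead sample edges from the analogous probability matrix:

```python
import numpy as np

def kronecker_adjacency(seed, k):
    """k-fold Kronecker power of a small seed adjacency matrix: a toy,
    deterministic version of Kronecker product graph modeling.  The result
    replicates the seed's connectivity pattern self-similarly at k scales."""
    A = np.asarray(seed)
    out = A.copy()
    for _ in range(k - 1):
        out = np.kron(out, A)
    return out

seed = [[1, 1],
        [1, 0]]                      # 2-node seed graph with a self-loop
A3 = kronecker_adjacency(seed, 3)    # 8x8 adjacency, self-similar structure
```

The edge count grows as (edges of seed)^k, one simple way such models reproduce power-law-like densification.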
Commensurate scale relations and the Abelian correspondence principle
International Nuclear Information System (INIS)
Brodsky, S.J.
1998-06-01
Commensurate scale relations are perturbative QCD predictions which relate observable to observable at fixed relative scales, independent of the choice of intermediate renormalization scheme or other theoretical conventions. A prominent example is the generalized Crewther relation which connects the Bjorken and Gross-Llewellyn Smith deep inelastic scattering sum rules to measurements of the e+e− annihilation cross section. Commensurate scale relations also provide an extension of the standard minimal subtraction scheme which is analytic in the quark masses, has non-ambiguous scale-setting properties, and inherits the physical properties of the effective charge α_V(Q²) defined from the heavy quark potential. The author also discusses a property of perturbation theory, the Abelian correspondence principle, which provides an analytic constraint on non-Abelian gauge theory for N_C → 0.
The Tunneling Method for Global Optimization in Multidimensional Scaling.
Groenen, Patrick J. F.; Heiser, Willem J.
1996-01-01
A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
Ambiguous tests of general relativity on cosmological scales
International Nuclear Information System (INIS)
Zuntz, Joe; Baker, Tessa; Ferreira, Pedro G.; Skordis, Constantinos
2012-01-01
There are a number of approaches to testing General Relativity (GR) on linear scales using parameterized frameworks for modifying cosmological perturbation theory. It is sometimes assumed that the details of any given parameterization are unimportant if one uses it as a diagnostic for deviations from GR. In this brief report we argue that this is not necessarily so. First we show that adopting alternative combinations of modifications to the field equations significantly changes the constraints that one obtains. In addition, we show that using a parameterization with insufficient freedom significantly tightens the apparent theoretical constraints. Fundamentally we argue that it is almost never appropriate to consider modifications to the perturbed Einstein equations as being constraints on the effective gravitational constant, for example, in the same sense that solar system constraints are. The only consistent modifications are either those that grant near-total freedom, as in decomposition methods, or ones which map directly to a particular part of theory space.
The Relation between Cosmological Redshift and Scale Factor for Photons
Energy Technology Data Exchange (ETDEWEB)
Tian, Shuxun, E-mail: tshuxun@mail.bnu.edu.cn [Department of Astronomy, Beijing Normal University, Beijing 100875 (China); Department of Physics, Wuhan University, Wuhan 430072 (China)
2017-09-10
The cosmological constant problem has become one of the most important ones in modern cosmology. In this paper, we try to construct a model that can avoid the cosmological constant problem and has the potential to explain the apparent late-time accelerating expansion of the universe in both the luminosity distance and angular diameter distance measurement channels. The core of our model is a modified relation between cosmological redshift and scale factor for photons. We point out three ways to test our hypothesis: supernova time dilation; gravitational waves and their electromagnetic counterparts emitted by binary neutron star systems; and the Sandage–Loeb effect. All of these methods are feasible now or in the near future.
International Nuclear Information System (INIS)
Brodsky, S.J.; Lu, H.J.
1994-10-01
We derive commensurate scale relations which relate perturbatively calculable QCD observables to each other, including the annihilation ratio Re+e−, the heavy quark potential, τ decay, and radiative corrections to structure function sum rules. For each such observable one can define an effective charge, such as α_R(√s)/π ≡ Re+e−(√s)/(3Σe_q²) − 1. The commensurate scale relation connecting the effective charges for observables A and B has the form α_A(Q_A) = α_B(Q_B)(1 + r_A/B α_B/π + …), where the coefficient r_A/B is independent of the number of flavors f contributing to coupling renormalization, as in BLM scale-fixing. The ratio of scales Q_A/Q_B is unique at leading order and guarantees that the observables A and B pass through new quark thresholds at the same physical scale. In higher orders a different renormalization scale Q_n* is assigned for each order n in the perturbative series such that the coefficients of the series are identical to those of a conformally invariant theory. The commensurate scale relations and scales satisfy the renormalization group transitivity rule which ensures that predictions in PQCD are independent of the choice of an intermediate renormalization scheme C. In particular, scale-fixed predictions can be made without reference to theoretically constructed singular renormalization schemes such as MS. QCD can thus be tested in a new and precise way by checking that the effective charges of observables track both in their relative normalization and in their commensurate scale dependence. The commensurate scale relations which relate the radiative corrections to the annihilation ratio Re+e− to the radiative corrections for the Bjorken and Gross-Llewellyn Smith sum rules are particularly elegant and interesting
Balancing related methods for minimal realization of periodic systems
Varga, A.
1999-01-01
We propose balancing-related, numerically reliable methods to compute minimal realizations of linear periodic systems with time-varying dimensions. The first method belongs to the family of square-root methods with guaranteed enhanced computational accuracy and can be used to compute balanced minimal order realizations. An alternative balancing-free square-root method has the advantage of a potentially better numerical accuracy in the case of poorly scaled original systems. The key numerical co...
Non-Abelian gauge field theory in scale relativity
International Nuclear Information System (INIS)
Nottale, Laurent; Celerier, Marie-Noëlle; Lehner, Thierry
2006-01-01
Gauge field theory is developed in the framework of scale relativity. In this theory, space-time is described as a nondifferentiable continuum, which implies it is fractal, i.e., explicitly dependent on internal scale variables. Owing to the principle of relativity that has been extended to scales, these scale variables can themselves become functions of the space-time coordinates. Therefore, a coupling is expected between displacements in the fractal space-time and the transformations of these scale variables. In previous works, an Abelian gauge theory (electromagnetism) has been derived as a consequence of this coupling for global dilations and/or contractions. We consider here more general transformations of the scale variables by taking into account separate dilations for each of them, which yield non-Abelian gauge theories. We identify these transformations with the usual gauge transformations. The gauge fields naturally appear as a new geometric contribution to the total variation of the action involving these scale variables, while the gauge charges emerge as the generators of the scale transformation group. A generalized action is identified with the scale-relativistic invariant. The gauge charges are the conservative quantities, conjugates of the scale variables through the action, which find their origin in the symmetries of the "scale-space." We thus recover, in a geometric way, the expression for the covariant derivative of gauge theory. Adding the requirement that under the scale transformations the fermion multiplets and the boson fields transform such that the derived Lagrangian remains invariant, we obtain gauge theories as a consequence of scale symmetries arising from a geometric space-time description.
Modelling across bioreactor scales: methods, challenges and limitations
DEFF Research Database (Denmark)
Gernaey, Krist
Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...
Relating quality of life to Glasgow outcome scale health states.
Kosty, Jennifer; Macyszyn, Luke; Lai, Kevin; McCroskery, James; Park, Hae-Ran; Stein, Sherman C
2012-05-01
There has recently been a call for the adoption of comparative effectiveness research (CER) and related research approaches for studying traumatic brain injury (TBI). These methods allow researchers to compare the effectiveness of different therapies in producing patient-oriented outcomes of interest. Heretofore, the only measures by which to compare such therapies have been mortality and rate of poor outcome. Better comparisons can be made if parametric, preference-based quality-of-life (QOL) values are available for intermediate outcomes, such as those described by the Glasgow Outcome Scale Extended (GOSE). Our objective was therefore to determine QOL for the health states described by the GOSE. We interviewed community members at least 18 years of age using the standard gamble method to assess QOL for descriptions of GOSE scores of 2-7 derived from the structured interview. Linear regression analysis was also performed to assess the effect of age, gender, and years of education on QOL. One hundred and one participants between the ages of 18 and 83 were interviewed (mean age 40 ± 19 years), including 55 men and 46 women. Functional impairment and QOL showed a strong inverse relationship, as assessed by both linear regression and the Spearman rank order coefficient. No consistent effect of age, gender, or years of education was seen. As expected, QOL decreased with functional impairment as described by the GOSE. The results of this study will provide the groundwork for future groups seeking to apply CER methods to clinical studies of TBI.
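The two analysis ingredients named above, standard-gamble utilities and the Spearman rank-order coefficient, can be sketched as follows. The GOSE/QOL numbers below are hypothetical placeholders, not the study's measured data:

```python
def standard_gamble_utility(p_indifference):
    """Standard gamble: the respondent is indifferent between living in the
    health state for certain and a gamble giving full health with
    probability p and death with probability 1 - p.  The state's utility
    is the indifference probability p itself."""
    if not 0.0 <= p_indifference <= 1.0:
        raise ValueError("indifference probability must lie in [0, 1]")
    return p_indifference

def spearman_rho(xs, ys):
    """Spearman rank-order coefficient (no ties assumed, for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical mean indifference probabilities for GOSE states 2..7:
gose = [2, 3, 4, 5, 6, 7]
qol = [standard_gamble_utility(p) for p in [0.1, 0.25, 0.4, 0.6, 0.75, 0.9]]
rho = spearman_rho(gose, qol)  # a perfectly monotone increase gives rho = 1.0
```

Higher GOSE scores mean less impairment, so a monotone rise of utility with GOSE is the same "strong inverse relationship" between impairment and QOL reported above.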
Experimental methods for laboratory-scale ensilage of lignocellulosic biomass
International Nuclear Information System (INIS)
Tanjore, Deepti; Richard, Tom L.; Marshall, Megan N.
2012-01-01
Anaerobic fermentation is a potential storage method for lignocellulosic biomass in biofuel production processes. Since biomass is seasonally harvested, stocks are often dried or frozen at laboratory scale prior to fermentation experiments. Such treatments prior to fermentation studies cause irreversible changes in the plant cells, influencing the initial state of biomass and thereby the progression of the fermentation process itself. This study investigated the effects of drying, refrigeration, and freezing relative to freshly harvested corn stover in lab-scale ensilage studies. Particle sizes, as well as post-ensilage drying temperatures for compositional analysis, were tested to identify the appropriate sample processing methods. After 21 days of ensilage the lowest pH value (3.73 ± 0.03), lowest dry matter loss (4.28 ± 0.26 g 100 g⁻¹ DM), and highest water soluble carbohydrate (WSC) concentrations (7.73 ± 0.26 g 100 g⁻¹ DM) were observed in control biomass (stover ensiled within 12 h of harvest without any treatments). WSC concentration was significantly reduced in samples refrigerated for 7 days prior to ensilage (3.86 ± 0.49 g 100 g⁻¹ DM). However, biomass frozen prior to ensilage produced statistically similar results to the fresh biomass control, especially in treatments with cell wall degrading enzymes. Grinding to decrease particle size reduced the variance amongst replicates for pH values of individual reactors to a minor extent. Drying biomass prior to extraction of WSCs resulted in degradation of the carbohydrates and a reduced estimate of their concentrations. The methods developed in this study can be used to improve ensilage experiments and thereby help in developing ensilage as a storage method for biofuel production. -- Highlights: ► Laboratory-scale methods to assess the influence of ensilage biofuel production. ► Drying, freezing, and refrigeration of biomass influenced microbial fermentation. ► Freshly ensiled stover exhibited
Large-scale synthesis of YSZ nanopowder by Pechini method
Indian Academy of Sciences (India)
Administrator
structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords. Sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. 1. Introduction. Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...
Planck-scale-modified dispersion relations in FRW spacetime
Rosati, Giacomo; Amelino-Camelia, Giovanni; Marcianò, Antonino; Matassa, Marco
2015-12-01
In recent years, Planck-scale modifications of the dispersion relation have been attracting increasing interest also from the viewpoint of possible applications in astrophysics and cosmology, where spacetime curvature cannot be neglected. Nonetheless, the interplay between Planck-scale effects and spacetime curvature is still poorly understood, particularly in cases where curvature is not constant. These challenges have been so far postponed by relying on an ansatz, first introduced by Jacob and Piran. We propose here a general strategy of analysis of the effects of modifications of the dispersion relation in Friedmann-Robertson-Walker spacetimes, applicable both to cases where the relativistic equivalence of frames is spoiled ("preferred-frame scenarios") and to the alternative possibility of "DSR-relativistic theories," theories that are fully relativistic but with relativistic laws deformed so that the modified dispersion relation is observer independent. We show that the Jacob-Piran ansatz implicitly assumes that spacetime translations are not affected by the Planck scale, while under rather general conditions, the same Planck-scale quantum-spacetime structures producing modifications of the dispersion relation also affect translations. Through the explicit analysis of one of the effects produced by modifications of the dispersion relation, an effect amounting to Planck-scale corrections to travel times, we show that our concerns are not merely conceptual but rather can have significant quantitative implications.
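For orientation, the Jacob-Piran ansatz discussed above can be stated explicitly. The following are the standard leading-order expressions, quoted from memory rather than from this abstract, so signs and conventions should be checked against the original works: a dispersion relation modified at the Planck scale M_Pl, and the travel-time correction it implies for a particle of energy E arriving from redshift z:

```latex
% modified dispersion relation, leading order in E/M_Pl
E^2 \simeq p^2\left[1 - s_\pm \left(\frac{E}{M_{\rm Pl}}\right)^{n}\right],
% Jacob--Piran travel-time correction relative to a low-energy particle
\Delta t \simeq s_\pm\,\frac{1+n}{2}\left(\frac{E}{M_{\rm Pl}}\right)^{n}
\int_0^z \frac{(1+z')^n}{H(z')}\,dz' .
```

The abstract's point is that this ansatz implicitly leaves spacetime translations unmodified; more general quantum-spacetime structures would correct the integrand as well.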
An allometric scaling relation based on logistic growth of cities
International Nuclear Information System (INIS)
Chen, Yanguang
2014-01-01
Highlights: • An allometric scaling based on logistic process can be used to model urban growth. • The traditional allometry is based on exponential growth instead of logistic growth. • The exponential allometry represents a local scaling of urban growth. • The logistic allometry represents a global scaling of urban growth. • The exponential allometry is an approximation relation of the logistic allometry. - Abstract: The relationships between urban area and population size have been empirically demonstrated to follow the scaling law of allometric growth. This allometric scaling is based on exponential growth of city size and can be termed “exponential allometry”, which is associated with the concepts of fractals. However, both city population and urban area comply with the course of logistic growth rather than exponential growth. In this paper, I will present a new allometric scaling based on logistic growth to solve the above mentioned problem. The logistic growth is a process of replacement dynamics. Defining a pair of replacement quotients as new measurements, which are functions of urban area and population, we can derive an allometric scaling relation from the logistic processes of urban growth, which can be termed “logistic allometry”. The exponential allometric relation between urban area and population is the approximate expression of the logistic allometric equation when the city size is not large enough. The proper range of the allometric scaling exponent value is reconsidered through the logistic process. Then, a medium-sized city of Henan Province, China, is employed as an example to validate the new allometric relation. The logistic allometry is helpful for further understanding the fractal property and self-organized process of urban evolution in the right perspective
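The claim that the replacement quotients obey an exact power law under logistic growth can be checked numerically. The parameters below are illustrative placeholders, not estimates for any real city:

```python
import math

def logistic(t, x_max, x0, rate):
    """Closed-form logistic growth curve x(t)."""
    return x_max / (1.0 + (x_max / x0 - 1.0) * math.exp(-rate * t))

def replacement_quotient(x, x_max):
    """Replacement quotient x / (x_max - x): grown share over remaining share."""
    return x / (x_max - x)

# Hypothetical parameters: population and urban area grow logistically
P_max, P0, k = 1000.0, 10.0, 0.05   # capacity, initial size, rate (population)
A_max, A0, h = 500.0, 2.0, 0.04     # capacity, initial size, rate (urban area)

ts = list(range(0, 200, 10))
pts = [(replacement_quotient(logistic(t, P_max, P0, k), P_max),
        replacement_quotient(logistic(t, A_max, A0, h), A_max)) for t in ts]
# The log-log slope between consecutive points is the allometric exponent h/k:
slopes = [(math.log(y2) - math.log(y1)) / (math.log(x2) - math.log(x1))
          for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
```

Because each quotient grows exactly exponentially under logistic dynamics, every local slope equals h/k = 0.8: the power law holds globally in the quotients even though the raw area-population allometry only holds approximately for small sizes.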
A NDVI assisted remote sensing image adaptive scale segmentation method
Zhang, Hong; Shen, Jinxiang; Ma, Yanmei
2018-03-01
Multiscale segmentation of images can effectively form boundaries of different objects at different scales. However, for remote sensing images that cover wide areas with complicated ground objects, the number of suitable segmentation scales, and the size of each scale, are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing images. A large number of experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents a method using NDVI-assisted adaptive segmentation of remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the adaptive segmentation method based on NDVI can effectively create object boundaries for different ground objects in remote sensing images.
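A minimal sketch of NDVI-similarity region growing follows. It is a toy stand-in for the paper's iterative scale-selection procedure; the band values, seed, and threshold are made up:

```python
def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index for one pixel:
    (NIR - Red) / (NIR + Red), in [-1, 1]."""
    return (nir - red) / (nir + red + eps)

def ndvi_region_grow(nir_band, red_band, seed, threshold):
    """Toy region-growing segmentation: starting from a seed pixel, absorb
    4-connected neighbours whose NDVI differs from the seed's NDVI by less
    than `threshold`."""
    rows, cols = len(nir_band), len(nir_band[0])
    seed_val = ndvi(nir_band[seed[0]][seed[1]], red_band[seed[0]][seed[1]])
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(ndvi(nir_band[r][c], red_band[r][c]) - seed_val) < threshold:
            region.add((r, c))
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# Toy 2x4 scene: vegetated left half (high NIR), bare right half:
nir = [[0.8, 0.8, 0.2, 0.2], [0.8, 0.8, 0.2, 0.2]]
red = [[0.1, 0.1, 0.3, 0.3], [0.1, 0.1, 0.3, 0.3]]
region = ndvi_region_grow(nir, red, seed=(0, 0), threshold=0.2)
```

Varying the threshold per region is one simple way such a method can adapt the effective segmentation scale to local land cover.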
Deposit and scale prevention methods in thermal sea water desalination
International Nuclear Information System (INIS)
Froehner, K.R.
1977-01-01
Introductory remarks deal with the 'fouling factor' and its influence on the overall heat transfer coefficient of MSF evaporators. The composition of the matter dissolved in sea water and its thermal and chemical properties lead to the formation of alkaline scale or even hard sulphate scale on the heat exchanger tube walls, which can seriously hamper plant operation and economics. Among the scale prevention methods are 1) pH control by acid dosing (decarbonation), 2) 'threshold treatment' by dosing of inhibitors of different kinds, 3) mechanical cleaning by sponge rubber balls guided through the heat exchanger tubes, in general combined with methods no. 1 or 2, and 4) application of a scale-crystal germ slurry (seeding). Mention is made of several other scale prevention proposals. The problems encountered with marine life (suspension, deposits, growth) in desalination plants are touched upon. (orig.) [de]
Elements of a method to scale ignition reactor Tokamak
International Nuclear Information System (INIS)
Cotsaftis, M.
1984-08-01
Due to unavoidable uncertainties in present scaling laws when projected to the thermonuclear regime, a method is proposed to minimize these uncertainties in order to determine the main parameters of an ignited tokamak. The method mainly consists in searching for a domain, if any exists, in an adapted parameter space which allows ignition but is least sensitive to possible changes in the scaling laws. In other words, an ignition domain is sought which is the intersection of all possible ignition domains corresponding to all possible scaling laws produced by all possible transport models.
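The intersection idea can be illustrated numerically: evaluate an ignition criterion over a parameter grid under several candidate scaling laws and keep only the points that ignite under all of them. The grid, the toy scaling laws and the Lawson-type threshold below are purely illustrative stand-ins, not actual tokamak physics:

```python
import numpy as np

# Hypothetical parameter grid: density n (1e20 m^-3) and temperature T (keV).
n = np.linspace(0.5, 3.0, 61)[:, None]
T = np.linspace(5.0, 30.0, 61)[None, :]

# Toy stand-ins for confinement scaling laws (not real tokamak scalings):
# each returns an energy confinement time tau_E(n, T) in arbitrary units.
scaling_laws = [
    lambda n, T: 0.8 * n**0.6 * T**-0.2,
    lambda n, T: 1.0 * n**0.4 * T**-0.1,
    lambda n, T: 0.9 * n**0.5 * T**-0.3,
]

# Lawson-type ignition criterion: n * T * tau_E above a threshold (toy units).
threshold = 12.0
domains = [(n * T * law(n, T)) >= threshold for law in scaling_laws]

# Robust ignition domain = intersection over all candidate scaling laws.
robust = np.logical_and.reduce(domains)
```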
Method of producing nano-scaled inorganic platelets
Zhamu, Aruna; Jang, Bor Z.
2012-11-13
The present invention provides a method of exfoliating a layered material (e.g., transition metal dichalcogenide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites.
Variational Multi-Scale method with spectral approximation of the sub-scales.
Dia, Ben Mansour; Chacón-Rebollo, Tomás
2015-01-01
A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces.
Functional Independent Scaling Relation for ORR/OER Catalysts
DEFF Research Database (Denmark)
Christensen, Rune; Hansen, Heine Anton; Dickens, Colin F.
2016-01-01
reactions. Here, we show that the oxygen-oxygen bond in the OOH* intermediate is, however, not well described with the previously used class of exchange-correlation functionals. By quantifying and correcting the systematic error, an improved description of gaseous peroxide species versus experimental data … and a reduction in calculational uncertainty is obtained. For adsorbates, we find that the systematic error largely cancels the vdW interaction missing in the original determination of the scaling relation. An improved scaling relation, which is fully independent of the applied exchange-correlation functional …
Variational Multi-Scale method with spectral approximation of the sub-scales.
Dia, Ben Mansour
2015-01-07
A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.
Dual-scale Galerkin methods for Darcy flow
Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex
2018-02-01
The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach always has similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.
VLSI scaling methods and low power CMOS buffer circuit
International Nuclear Information System (INIS)
Sharma Vijay Kumar; Pattanaik Manisha
2013-01-01
Device scaling is an important part of very large scale integration (VLSI) design, underpinning the success of the VLSI industry by enabling denser and faster integration of devices. As the technology node moves into the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance when scaling devices. In this paper, different scaling methods are studied first, and their effects on the power dissipation and propagation delay of a CMOS buffer circuit are identified. To mitigate power dissipation in scaled devices, we propose a reliable leakage-reduction low-power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are taken with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to prove the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which improves circuit reliability. (semiconductor integrated circuits)
An allometric scaling relation based on logistic growth of cities
Chen, Yanguang
2014-08-01
The relationships between urban area and population size have been empirically demonstrated to follow the scaling law of allometric growth. This allometric scaling is based on exponential growth of city size and can be termed "exponential allometry", which is associated with the concepts of fractals. However, both city population and urban area comply with the course of logistic growth rather than exponential growth. In this paper, I will present a new allometric scaling based on logistic growth to solve the above-mentioned problem. The logistic growth is a process of replacement dynamics. Defining a pair of replacement quotients as new measurements, which are functions of urban area and population, we can derive an allometric scaling relation from the logistic processes of urban growth, which can be termed "logistic allometry". The exponential allometric relation between urban area and population is the approximate expression of the logistic allometric equation when the city size is not large enough. The proper range of the allometric scaling exponent value is reconsidered through the logistic process. Then, a medium-sized city of Henan Province, China, is employed as an example to validate the new allometric relation. The logistic allometry is helpful for further understanding the fractal property and self-organized process of urban evolution in the right perspective.
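The conventional allometric relation A = aP^b is typically estimated by a linear fit in log-log space; a sketch on synthetic data (the exponent 0.85, the prefactor and the noise level are illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic city data obeying A = a * P^b (illustrative parameter values).
a_true, b_true = 0.01, 0.85
P = rng.uniform(1e4, 5e6, size=200)                         # population
A = a_true * P**b_true * rng.lognormal(0.0, 0.05, size=200)  # urban area + noise

# Allometric exponent from a log-log linear fit: ln A = ln a + b ln P.
b_hat, ln_a_hat = np.polyfit(np.log(P), np.log(A), 1)
```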
Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.
Tomas, Jose M.; Oliver, Amparo
1999-01-01
Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)
A multi-scale method of mapping urban influence
Timothy G. Wade; James D. Wickham; Nicola Zacarelli; Kurt H. Riitters
2009-01-01
Urban development can impact environmental quality and ecosystem services well beyond urban extent. Many methods to map urban areas have been developed and used in the past, but most have simply tried to map existing extent of urban development, and all have been single-scale techniques. The method presented here uses a clustering approach to look beyond the extant...
Violence-Related Attitudes and Beliefs: Scale Construction and Psychometrics
Brand, Pamela A.; Anastasio, Phyllis A.
2006-01-01
The 50-item Violence-Related Attitudes and Beliefs Scale (V-RABS) includes three subscales measuring possible causes of violent behavior (environmental influences, biological influences, and mental illness) and four subscales assessing possible controls of violent behavior (death penalty, punishment, prevention, and catharsis). Each subscale…
SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data
International Nuclear Information System (INIS)
Williams, Mark L.; Rearden, Bradley T.
2008-01-01
Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
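The first-order "sandwich rule" that underlies this kind of sensitivity/uncertainty propagation, var(R)/R^2 = S^T C S, can be sketched with illustrative numbers (the sensitivity vector and covariance matrix below are hypothetical, not SCALE/TSUNAMI output):

```python
import numpy as np

# Illustrative relative sensitivity coefficients (dR/R per dσ/σ) for three
# nuclear data parameters (hypothetical values).
S = np.array([0.5, -0.2, 0.1])

# Illustrative relative covariance matrix of the nuclear data (symmetric).
C = np.array([
    [4.0e-4, 1.0e-4, 0.0],
    [1.0e-4, 9.0e-4, 0.0],
    [0.0,    0.0,    1.0e-4],
])

# First-order uncertainty propagation (sandwich rule): var(R)/R^2 = S^T C S.
rel_var = S @ C @ S
rel_std = np.sqrt(rel_var)   # relative standard deviation of the response
```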
Relative scale and the strength and deformability of rock masses
Schultz, Richard A.
1996-09-01
The strength and deformation of rocks depend strongly on the degree of fracturing, which can be assessed in the field and related systematically to these properties. Appropriate Mohr envelopes obtained from the Rock Mass Rating (RMR) classification system and the Hoek-Brown criterion for outcrops and other large-scale exposures of fractured rocks show that rock-mass cohesive strength, tensile strength, and unconfined compressive strength can be reduced by as much as a factor of ten relative to values for the unfractured material. The rock-mass deformation modulus is also reduced relative to Young's modulus. A "cook-book" example illustrates the use of RMR in field applications. The smaller values of rock-mass strength and deformability imply that there is a particular scale of observation whose identification is critical to applying laboratory measurements and associated failure criteria to geologic structures.
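The Hoek-Brown criterion mentioned above can be sketched in its generalized (2002) form; the input values are illustrative, and the example reproduces the roughly order-of-magnitude strength reduction of a heavily fractured mass relative to intact rock:

```python
import numpy as np

def hoek_brown_sigma1(sigma3, sigma_ci, mi, gsi, d=0.0):
    """Major principal stress at failure, generalized Hoek-Brown criterion
    (2002 edition): sigma1 = sigma3 + sigma_ci*(mb*sigma3/sigma_ci + s)**a.
    Stresses in MPa; gsi is the Geological Strength Index, d the disturbance
    factor. (The 1996 paper predates this exact parameterization.)"""
    mb = mi * np.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = np.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (np.exp(-gsi / 15.0) - np.exp(-20.0 / 3.0)) / 6.0
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Intact rock (GSI = 100) versus a heavily fractured rock mass (GSI = 40),
# at 5 MPa confinement with illustrative sigma_ci = 100 MPa, mi = 10.
intact = hoek_brown_sigma1(sigma3=5.0, sigma_ci=100.0, mi=10.0, gsi=100)
mass = hoek_brown_sigma1(sigma3=5.0, sigma_ci=100.0, mi=10.0, gsi=40)
```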
Lagrangian space consistency relation for large scale structure
International Nuclear Information System (INIS)
Horn, Bart; Hui, Lam; Xiao, Xiao
2015-01-01
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
The initial development of the Pregnancy-related Anxiety Scale.
Brunton, Robyn J; Dryer, Rachel; Saliba, Anthony; Kohlhoff, Jane
2018-05-30
Pregnancy-related anxiety is a distinct anxiety characterised by pregnancy-specific concerns. This anxiety is consistently associated with adverse birth outcomes, and obstetric and paediatric risk factors, associations generally not seen with other anxieties. The need exists for a psychometrically sound scale for this anxiety type. This study, therefore, reports on the initial development of the Pregnancy-related Anxiety Scale. The item pool was developed following a literature review and the formulation of a definition for pregnancy-related anxiety. An Expert Review Panel reviewed the definition, item pool and test specifications. Pregnant women were recruited online (N=671). Using a subsample (N=262, M=27.94, SD=4.99), fourteen factors were extracted using Principal Components Analysis accounting for 63.18% of the variance. Further refinement resulted in 11 distinct factors. Confirmatory Factor Analysis further tested the model with a second subsample (N=369, M=26.59, SD=4.76). After additional refinement, the resulting model was a good fit with nine factors (childbirth, appearance, attitudes towards childbirth, motherhood, acceptance, anxiety, medical, avoidance, and baby concerns). Internal consistency reliability was good with the majority of subscales exceeding α=.80. The Pregnancy-related Anxiety Scale is easy to administer with higher scores indicative of greater pregnancy-related anxiety. The inclusion of reverse-scored items is a potential limitation with poorer reliability evident for these factors. Although still in its development stage, the Pregnancy-related Anxiety Scale will eventually be useful both clinically (affording early intervention) and in research settings.
A Modified Conjugacy Condition and Related Nonlinear Conjugate Gradient Method
Directory of Open Access Journals (Sweden)
Shengwei Yao
2014-01-01
Full Text Available The conjugate gradient (CG) method has played a special role in solving large-scale nonlinear optimization problems due to its simplicity and very low memory requirements. In this paper, we propose a new conjugacy condition which is similar to that of Dai and Liao (2001). Based on this condition, the related nonlinear conjugate gradient method is given. Under some mild conditions, the given method is globally convergent under the strong Wolfe-Powell line search for general functions. Numerical experiments show that the proposed method is very robust and efficient.
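A generic nonlinear CG loop can be sketched as follows; this uses the textbook Polak-Ribiere-plus update with simple Armijo backtracking, not the paper's Dai-Liao-type conjugacy condition or the strong Wolfe-Powell search:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, max_iter=500, tol=1e-8):
    """Polak-Ribiere-plus nonlinear CG with Armijo backtracking (a textbook
    sketch, not the method proposed in the paper)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0.0:            # safeguard: restart if not a descent direction
            d = -g
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5                # Armijo backtracking line search
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
        x, g, d = x_new, g_new, -g_new + beta * d
    return x

# Minimize a smooth convex test function: f(x) = (x0 - 1)^2 + 5*(x1 + 2)^2.
f = lambda x: (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 10.0 * (x[1] + 2.0)])
xstar = nonlinear_cg(f, grad, x0=[0.0, 0.0])
```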
Surface Rupture Effects on Earthquake Moment-Area Scaling Relations
Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro
2017-09-01
Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ∝ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
Gamma Ray Tomographic Scan Method for Large Scale Industrial Plants
International Nuclear Information System (INIS)
Moon, Jin Ho; Jung, Sung Hee; Kim, Jong Bum; Park, Jang Geun
2011-01-01
Gamma-ray tomography systems have been used to investigate chemical processes for the last decade. There have been many cases of gamma-ray tomography for laboratory-scale work, but not many for industrial-scale work. Non-tomographic equipment with gamma-ray sources is often used in process diagnosis: gamma radiography, gamma column scanning and the radioisotope tracer technique are examples of gamma-ray applications in industry. Despite much non-tomographic gamma-ray equipment being used outdoors, most gamma-ray tomographic systems have remained indoor equipment. As gamma tomography has developed, however, the demand for gamma tomography of real-scale plants has also increased. To develop an industrial-scale system, we introduced a gamma-ray tomographic system with fixed detectors and a rotating source. The general system configuration is similar to fourth-generation geometry, but the main effort has been made to enable instant installation of the system at a real-scale industrial plant. This work is a first attempt to apply fourth-generation industrial gamma tomographic scanning experimentally. Individual 0.5-inch NaI detectors were used for gamma-ray detection, configured in a circular shape around the industrial plant. This tomographic scan method can reduce mechanical complexity and requires a much smaller space than a conventional CT. These properties make it easy to obtain measurement data for a real-scale plant.
Scaling Relations between Gas and Star Formation in Nearby Galaxies
Bigiel, Frank; Leroy, Adam; Walter, Fabian
2011-04-01
High resolution, multi-wavelength maps of a sizeable set of nearby galaxies have made it possible to study how the surface densities of H i, H2 and star formation rate (ΣHI, ΣH2, ΣSFR) relate on scales of a few hundred parsecs. At these scales, individual galaxy disks are comfortably resolved, making it possible to assess gas-SFR relations with respect to environment within galaxies. ΣH2, traced by CO intensity, shows a strong correlation with ΣSFR, and the ratio between these two quantities, the molecular gas depletion time, appears to be constant at about 2 Gyr in large spiral galaxies. Within the star-forming disks of galaxies, ΣSFR shows almost no correlation with ΣHI. In the outer parts of galaxies, however, ΣSFR does scale with ΣHI, though with large scatter. Combining data from these different environments yields a distribution with multiple regimes in Σgas - ΣSFR space. If the underlying assumptions to convert observables to physical quantities are matched, even combined datasets based on different SFR tracers, methodologies and spatial scales occupy a well-defined locus in Σgas - ΣSFR space.
Scale factor measure method without turntable for angular rate gyroscope
Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua
2018-03-01
In this paper, a scale factor test method without a turntable is designed for the angular rate gyroscope. A test system is designed which consists of a test device, a data acquisition circuit and data processing software based on the Labview platform. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the gyroscope under test. By shaking the test device around its edge, which is parallel to the input axis of the gyroscopes, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. This test method is fast, and it keeps the test system miniaturized and easy to carry or move. Measuring a quartz MEMS gyroscope's scale factor multiple times by this method, the difference is less than 0.2%; compared with testing on a turntable, the scale factor difference is less than 1%. The accuracy and repeatability of the test system are good.
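The principle of the method — recover the shared angular rate from the standard gyroscope, then fit the unknown scale factor by least squares — can be sketched with synthetic data (all numbers are illustrative, not from the authors' Labview system):

```python
import numpy as np

SF_STD = 20.0   # known scale factor of the reference gyro (mV per deg/s, illustrative)

rng = np.random.default_rng(1)
omega = rng.uniform(-50.0, 50.0, size=1000)           # shared angular rate, deg/s

# Both gyros sense the same rate; outputs carry small measurement noise.
v_std = SF_STD * omega + rng.normal(0.0, 0.5, 1000)    # reference gyro output, mV
v_meas = 13.7 * omega + rng.normal(0.0, 0.5, 1000)     # gyro under test (true SF 13.7)

# Recover the rate from the reference gyro, then fit the unknown scale
# factor by least squares: v_meas ≈ SF * omega_est.
omega_est = v_std / SF_STD
sf_hat = (omega_est @ v_meas) / (omega_est @ omega_est)
```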
Boundary layers and scaling relations in natural thermal convection
Shishkina, Olga; Lohse, Detlef; Grossmann, Siegfried
2017-11-01
We analyse the boundary layer (BL) equations in natural thermal convection, which includes vertical convection (VC), where the fluid is confined between two differently heated vertical walls, horizontal convection (HC), where the fluid is heated at one part of the bottom plate and cooled at some other part, and Rayleigh-Benard convection (RBC). For BL-dominated regimes we derive the scaling relations of the Nusselt and Reynolds numbers (Nu, Re) with the Rayleigh and Prandtl numbers (Ra, Pr). For VC the scaling relations are obtained directly from the BL equations, while for HC they are derived by applying the Grossmann-Lohse theory to the case of VC. In particular, for RBC with large Pr we derive Nu ~ Pr^0 Ra^(1/3) and Re ~ Pr^(-1) Ra^(2/3). The work is supported by the Deutsche Forschungsgemeinschaft (DFG) under the Grant Sh 405/4 - Heisenberg fellowship.
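The quoted large-Pr Rayleigh-Benard relations, Nu ~ Pr^0 Ra^(1/3) and Re ~ Pr^(-1) Ra^(2/3), fix the ratios of Nu and Re under changes of Ra and Pr; a prefactor-free sketch of that exponent logic (prefactors are omitted, so only ratios are meaningful):

```python
# Exponent logic of the large-Pr RBC scaling laws quoted in the abstract.
def nu_scaling(ra, pr):
    return pr ** 0.0 * ra ** (1.0 / 3.0)    # Nu ~ Pr^0 Ra^(1/3)

def re_scaling(ra, pr):
    return pr ** -1.0 * ra ** (2.0 / 3.0)   # Re ~ Pr^(-1) Ra^(2/3)

# Raising Ra tenfold multiplies Nu by 10^(1/3); Nu is independent of Pr.
ratio_nu = nu_scaling(1e10, 100.0) / nu_scaling(1e9, 100.0)
# Doubling Pr at fixed Ra halves Re.
ratio_re = re_scaling(1e9, 200.0) / re_scaling(1e9, 100.0)
```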
Multiple time-scale methods in particle simulations of plasmas
International Nuclear Information System (INIS)
Cohen, B.I.
1985-01-01
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
The Language Teaching Methods Scale: Reliability and Validity Studies
Okmen, Burcu; Kilic, Abdurrahman
2016-01-01
The aim of this research is to develop a scale to determine the language teaching methods used by English teachers. The research sample consisted of 300 English teachers who taught at Duzce University and in primary schools, secondary schools and high schools in the Provincial Management of National Education in the city of Duzce in 2013-2014…
A comparison of multidimensional scaling methods for perceptual mapping
Bijmolt, T.H.A.; Wedel, M.
Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare
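For reference, the classical (Torgerson) metric MDS baseline — not the maximum-likelihood methods compared in the article — can be sketched via double centering and eigendecomposition:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) metric MDS: embed n points in k dimensions from
    an n x n dissimilarity matrix D via double centering and the top-k
    eigenpairs. A textbook baseline, not a maximum-likelihood method."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double-centered squared dissimilarities
    w, V = np.linalg.eigh(B)               # eigh returns ascending eigenvalues
    idx = np.argsort(w)[::-1][:k]          # take the k largest eigenpairs
    L = np.sqrt(np.maximum(w[idx], 0.0))
    return V[:, idx] * L                   # n x k configuration

# Recover a planar configuration from its exact Euclidean distances.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, k=2)
D_rec = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```

For exact Euclidean input, the recovered configuration reproduces the distances up to rotation and translation.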
Correlates of the Rosenberg Self-Esteem Scale Method Effects
Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan
2006-01-01
Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…
Scaling Relations of Starburst-driven Galactic Winds
Energy Technology Data Exchange (ETDEWEB)
Tanner, Ryan [Department of Chemistry and Physics, Augusta University, Augusta, GA 30912 (United States); Cecil, Gerald; Heitsch, Fabian, E-mail: rytanner@augusta.edu [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3255 (United States)
2017-07-10
Using synthetic absorption lines generated from 3D hydrodynamical simulations, we explore how the velocity of a starburst-driven galactic wind correlates with the star formation rate (SFR) and SFR density. We find strong correlations for neutral and low ionized gas, but no correlation for highly ionized gas. The correlations for neutral and low ionized gas only hold for SFRs below a critical limit set by the mass loading of the starburst, above which point the scaling relations flatten abruptly. Below this point the scaling relations depend on the temperature regime being probed by the absorption line, not on the mass loading. The exact scaling relation depends on whether the maximum or mean velocity of the absorption line is used. We find that the outflow velocity of neutral gas can be up to five times lower than the average velocity of ionized gas, with the velocity difference increasing for higher ionization states. Furthermore, the velocity difference depends on both the SFR and mass loading of the starburst. Thus, absorption lines of neutral or low ionized gas cannot easily be used as a proxy for the outflow velocity of the hot gas.
Universal Dark Halo Scaling Relation for the Dwarf Spheroidal Satellites
Hayashi, Kohei; Ishiyama, Tomoaki; Ogiya, Go; Chiba, Masashi; Inoue, Shigeki; Mori, Masao
2017-07-01
Motivated by a recently found interesting property of the dark halo surface density within the radius r_max giving the maximum circular velocity V_max, we investigate it for dark halos of the Milky Way's and Andromeda's dwarf satellites based on cosmological simulations. We select and analyze the simulated subhalos associated with Milky-Way-sized dark halos and find that the values of their surface densities, Σ_Vmax, are in good agreement with those for the observed dwarf spheroidal satellites even without employing any fitting procedures. Moreover, all subhalos on the small scales of dwarf satellites are expected to obey the universal relation, irrespective of differences in their orbital evolutions, host halo properties, and observed redshifts. Therefore, we find that the universal scaling relation for dark halos on dwarf galaxy mass scales surely exists and provides us with important clues for understanding fundamental properties of dark halos. We also investigate orbital and dynamical evolutions of subhalos to understand the origin of this universal dark halo relation and find that most subhalos evolve generally along the r_max ∝ V_max sequence, even though these subhalos have undergone different histories of mass assembly and tidal stripping. This sequence, therefore, should be the key feature for understanding the nature of the universality of Σ_Vmax.
Scaling Relations of Starburst-driven Galactic Winds
International Nuclear Information System (INIS)
Tanner, Ryan; Cecil, Gerald; Heitsch, Fabian
2017-01-01
Using synthetic absorption lines generated from 3D hydrodynamical simulations, we explore how the velocity of a starburst-driven galactic wind correlates with the star formation rate (SFR) and SFR density. We find strong correlations for neutral and low ionized gas, but no correlation for highly ionized gas. The correlations for neutral and low ionized gas only hold for SFRs below a critical limit set by the mass loading of the starburst, above which point the scaling relations flatten abruptly. Below this point the scaling relations depend on the temperature regime being probed by the absorption line, not on the mass loading. The exact scaling relation depends on whether the maximum or mean velocity of the absorption line is used. We find that the outflow velocity of neutral gas can be up to five times lower than the average velocity of ionized gas, with the velocity difference increasing for higher ionization states. Furthermore, the velocity difference depends on both the SFR and mass loading of the starburst. Thus, absorption lines of neutral or low ionized gas cannot easily be used as a proxy for the outflow velocity of the hot gas.
Large scale obscuration and related climate effects open literature bibliography
International Nuclear Information System (INIS)
Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.
1994-05-01
Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.
Large scale obscuration and related climate effects open literature bibliography
Energy Technology Data Exchange (ETDEWEB)
Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.
1994-05-01
Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.
Stepwise integral scaling method and its application to severe accident phenomena
International Nuclear Information System (INIS)
Ishii, M.; Zhang, G.
1993-10-01
Severe accidents in light water reactors are characterized by the occurrence of multiphase flow with complicated phase changes, chemical reactions and various bifurcation phenomena. Because of the inherent difficulties associated with full-scale testing, scaled-down and simulation experiments are an essential part of severe accident analyses. However, one of the most significant shortcomings in the area is the lack of a well-established and reliable scaling method and scaling criteria. In view of this, the stepwise integral scaling method is developed for severe accident analyses. This new scaling method is quite different from the conventional approach; however, its focus on dominant transport mechanisms and its use of the integral response of the system make it relatively simple to apply to very complicated multiphase flow problems. To demonstrate its applicability and usefulness, three case studies have been made. The phenomena considered are (1) corium dispersion in DCH, (2) corium spreading in the BWR MARK-I containment, and (3) the in-core boil-off and heating process. The results of these studies clearly indicate the effectiveness of the stepwise integral scaling method. Such a simple and systematic scaling method has not previously been available for severe accident analyses.
Linear-scaling quantum mechanical methods for excited states.
Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua
2012-05-21
The poor scaling of many existing quantum mechanical methods with respect to system size hinders their application to large systems. In this tutorial review, we focus on the latest research on linear-scaling or O(N) quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states comprise two categories: time-domain and frequency-domain methods. The former solve the dynamics of the electronic systems in real time, while the latter involve direct evaluation of the electronic response in the frequency domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states. It has been implemented in both the time and frequency domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of convergence problems. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used; however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable.
Regularization methods for ill-posed problems in multiple Hilbert scales
International Nuclear Information System (INIS)
Mazzieri, Gisela L; Spies, Ruben D
2012-01-01
Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)
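The abstract's central object, regularization in a (Hilbert) scale, can be illustrated concretely. Below is a minimal sketch, not taken from the paper: Tikhonov regularization of an ill-posed linear system Ax = y with a scale-generating penalty ||B^s x||^2, where B is a discrete second-difference operator and the exponent s selects the rung of the scale. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def tikhonov_hilbert_scale(A, y, alpha, s=1):
    """Tikhonov solution with Hilbert-scale penalty ||B^s x||^2 (illustrative)."""
    n = A.shape[1]
    # Discrete negative Laplacian as the scale-generating operator B
    B = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    Bs = np.linalg.matrix_power(B, s)
    # Normal equations: (A^T A + alpha * Bs^T Bs) x = A^T y
    return np.linalg.solve(A.T @ A + alpha * Bs.T @ Bs, A.T @ y)

# Smoothing (ill-conditioned) forward operator and noisy observation
n = 50
x_true = np.sin(np.linspace(0, np.pi, n))
A = np.array([[np.exp(-0.1 * (i - j) ** 2) for j in range(n)] for i in range(n)])
y = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(n)

x_reg = tikhonov_hilbert_scale(A, y, alpha=1e-4, s=1)
err = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
```

The multiple-observation setting of the paper would replace the single pair (A, y) by a family of operators and data, with a vector-valued regularization parameter; the single-observation sketch above only shows the basic structure.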
Scaling Relations for Adsorption Energies on Doped Molybdenum Phosphide Surfaces
International Nuclear Information System (INIS)
Fields, Meredith; Tsai, Charlie; Chen, Leanne D.; Abild-Pedersen, Frank; Nørskov, Jens K.; Chan, Karen
2017-01-01
Molybdenum phosphide (MoP), a well-documented catalyst for applications ranging from hydrotreating reactions to electrochemical hydrogen evolution, has yet to be mapped from a more fundamental perspective, particularly in the context of transition-metal scaling relations. In this work, we use periodic density functional theory to extend linear scaling arguments to doped MoP surfaces and understand the behavior of the phosphorus active site. The derived linear relationships for hydrogenated C, N, and O species on a variety of doped surfaces suggest that phosphorus experiences a shift in preferred bond order depending on the degree of hydrogen substitution on the adsorbate molecule. This shift in phosphorus hybridization, dependent on the bond order of the adsorbate to the surface, can result in selective bond weakening or strengthening of chemically similar species. As a result, we discuss how this behavior deviates from transition-metal, sulfide, carbide, and nitride scaling relations, and we discuss potential applications in the context of electrochemical reduction reactions.
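In practice, a linear scaling relation of the kind discussed above is established by least-squares fitting of adsorption energies across surfaces. A hedged sketch with synthetic numbers (not DFT data from the paper) follows; the slope 0.5 encodes the usual bond-counting heuristic gamma = (x_max - x)/x_max for OH versus O (x_max = 2, x = 1).

```python
import numpy as np

# Synthetic illustration of a transition-metal-style scaling relation:
#   dE(AH_x) ~= gamma * dE(A) + xi
rng = np.random.default_rng(1)
dE_O = np.linspace(-6.0, -3.0, 8)                         # bare-O adsorption energies (eV)
dE_OH = 0.5 * dE_O + 0.2 + 0.05 * rng.standard_normal(8)  # noisy scaling line

gamma, xi = np.polyfit(dE_O, dE_OH, 1)  # least-squares slope and intercept
```

The paper's point is precisely that on doped MoP the effective slope shifts with the adsorbate's degree of hydrogenation, i.e., a single fixed gamma of this form no longer captures the phosphorus site's behavior.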
International Nuclear Information System (INIS)
Ishihara, Kenichi; Hamada, Takeshi; Meshii, Toshiyuki
2017-01-01
In this paper, a new method for scaling the crack tip stress distribution under small-scale yielding conditions was proposed and named the T-scaling method. This method makes it possible to identify the different stress distributions for materials with different tensile properties but identical loads in terms of K or J. Then, by assuming that the temperature dependence of a material is represented by the temperature dependence of its stress-strain relationship, a method was proposed to predict the fracture load at an arbitrary temperature from the already known fracture load at a reference temperature. This method combines the T-scaling method and the knowledge that the fracture stress for slip-induced cleavage fracture is temperature independent. Once the fracture load is predicted, the fracture toughness Jc at the temperature under consideration can be evaluated by running an elastic-plastic finite element analysis. Finally, the above-mentioned framework for predicting the Jc temperature dependence of a material in the ductile-to-brittle transition temperature region was validated for the 0.55% carbon steel JIS S55C. The proposed framework appears to offer a way of resolving the difficulty the master curve approach faces in the relatively higher temperature region, while requiring only tensile tests. (author)
Fractional Nottale's Scale Relativity and emergence of complexified gravity
International Nuclear Information System (INIS)
EL-Nabulsi, Ahmad Rami
2009-01-01
Fractional calculus of variations has recently gained significance in the study of weakly dissipative and nonconservative dynamical systems, ranging from classical mechanics to quantum field theories. In this paper, fractional Nottale's Scale Relativity (NSR) for an arbitrary fractal dimension is introduced within the framework of the fractional action-like variational approach recently introduced by the author. The formalism is based on fractional differential operators that generalize the differential operators of conventional NSR but reduce to the standard formalism in the integer limit. Our main aim is to build the fractional setting for the NSR dynamical equations. Many interesting consequences arise, in particular the emergence of complexified gravity and complex time.
Application of discrete scale invariance method on pipe rupture
International Nuclear Information System (INIS)
Rajkovic, M.; Mihailovic, Z.; Riznic, J.
2007-01-01
A process of material failure of a mechanical system in the form of cracks and microcracks, a catastrophic phenomenon of considerable technological and scientific importance, may be forecasted according to recent advances in the theory of critical phenomena in statistical physics. The critical rupture scenario states that, in many concrete and composite heterogeneous materials under compression and in materials with large distributed residual stresses, rupture is a genuine critical point, i.e., the culmination of a self-organization of damage and cracking characterized by power law signatures. The concept of discrete scale invariance leads to a complex critical exponent (or dimension) and may occur spontaneously in systems and materials developing rupture. It establishes, theoretically, the power law dependence of a measurable observable, such as the rate of acoustic emissions radiated during loading or the rate of heat released during the process, upon the time to failure. However, the problem is that the power law can be distinguished from other parametric functional forms, such as an exponential, only close to the critical time. In this paper we modify the functional renormalization method to include a noise elimination procedure and dimension reduction. The aim is to obtain a prediction of the critical rupture time from knowledge of the power law parameters at early times prior to rupture alone, based on the assumption that the dynamics close to rupture is governed by the power law dependence of the temperature measured along the perimeter of the tube upon the time-to-failure. Such an analysis would not only enhance the precision of prediction related to the rupture mechanism but also significantly help in determining and predicting leak rates. The prediction will be compared to experimental data on tubes made of Zr-2.5%Nb. Note: The views expressed in the paper are those of the authors and do not necessarily represent those of the Commission. (author)
Maxwell iteration for the lattice Boltzmann method with diffusive scaling
Zhao, Weifeng; Yong, Wen-An
2017-03-01
In this work, we present an alternative derivation of the Navier-Stokes equations from Bhatnagar-Gross-Krook models of the lattice Boltzmann method with diffusive scaling. This derivation is based on the Maxwell iteration and can expose certain important features of the lattice Boltzmann solutions. Moreover, it will be seen to be much more straightforward and logically clearer than the existing approaches including the Chapman-Enskog expansion.
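The BGK collide-and-stream structure that the derivation above analyzes can be shown in its simplest form. The following is an illustrative sketch only, a D1Q2 BGK lattice Boltzmann scheme for the 1D diffusion equation rather than the authors' Navier-Stokes setting; in lattice units the diffusivity of this stencil is D = cs^2 (tau - 1/2) with cs^2 = 1.

```python
import numpy as np

def lbm_diffusion(rho0, tau=1.0, steps=100):
    """Minimal D1Q2 BGK lattice Boltzmann sketch for 1D diffusion (periodic)."""
    f = np.stack([rho0 / 2, rho0 / 2])      # populations moving right/left
    for _ in range(steps):
        rho = f.sum(axis=0)
        feq = np.stack([rho / 2, rho / 2])  # diffusive equilibrium (no advection)
        f += (feq - f) / tau                # BGK collision
        f[0] = np.roll(f[0], 1)             # stream right-movers
        f[1] = np.roll(f[1], -1)            # stream left-movers
    return f.sum(axis=0)

rho0 = np.zeros(64)
rho0[32] = 1.0                              # point pulse on a periodic domain
rho = lbm_diffusion(rho0)
```

Collision and periodic streaming both conserve total density exactly, while the pulse spreads diffusively; the Maxwell iteration of the paper is a way of extracting such macroscopic behavior from the discrete kinetic update.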
ALGORITHM FOR DYNAMIC SCALING RELATIONAL DATABASE IN CLOUDS
Directory of Open Access Journals (Sweden)
Alexander V. Boichenko
2014-01-01
Full Text Available This article analyzes the main methods of scaling databases (replication, sharding) and their support in popular relational databases and in NoSQL solutions with different data models: document-oriented, key-value, column-oriented and graph. The article presents an algorithm for the dynamic scaling of a relational database (DB) that takes into account the specifics of the different types of logical database model. This article was prepared with the support of RFBR (grant № 13-07-00749).
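Of the two scaling techniques the article surveys, sharding is the easier to illustrate in a few lines. The toy sketch below (not the article's algorithm) shows deterministic hash-based shard assignment, the basic building block that dynamic-scaling schemes must then rebalance when the shard count changes.

```python
import hashlib

def shard_for(key: str, n_shards: int) -> int:
    """Map a row key to a shard index via a stable hash (toy illustration)."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n_shards

# Rows with the same key always land on the same shard
assert shard_for("user:42", 4) == shard_for("user:42", 4)
```

Note the well-known drawback motivating consistent hashing in real systems: changing `n_shards` remaps most keys, which is exactly the data-movement cost a dynamic scaling algorithm tries to control.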
X-Ray Scaling Relations of Early-type Galaxies
Babyk, Iu. V.; McNamara, B. R.; Nulsen, P. E. J.; Hogan, M. T.; Vantyghem, A. N.; Russell, H. R.; Pulido, F. A.; Edge, A. C.
2018-04-01
X-ray luminosity, temperature, gas mass, total mass, and their scaling relations are derived for 94 early-type galaxies (ETGs) using archival Chandra X-ray Observatory observations. Consistent with earlier studies, the scaling relations L_X ∝ T^(4.5±0.2), M ∝ T^(2.4±0.2), and L_X ∝ M^(2.8±0.3) are significantly steeper than expected from self-similarity. This steepening indicates that their atmospheres are heated above the level expected from gravitational infall alone. Energetic feedback from nuclear black holes and supernova explosions are likely heating agents. The tight L_X–T correlation for low-luminosity systems (i.e., below 10^40 erg s^-1) is at variance with hydrodynamical simulations, which generally predict higher temperatures for low-luminosity galaxies. We also investigate the relationship between total mass and the pressure proxy Y_X = M_g × T, finding M ∝ Y_X^(0.45±0.04). We explore the gas mass to total mass fraction in ETGs and find a range of 0.1%–1.0%. We find no correlation of the gas-to-total mass fraction with temperature or total mass. Higher stellar velocity dispersions and higher metallicities are found in hotter, brighter, and more massive atmospheres. X-ray core radii derived from β-model fitting are used to characterize the degree of core and cuspiness of the hot atmospheres.
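Power-law scaling relations such as L_X ∝ T^a are typically measured by linear regression in log space, log L_X = a log T + b. A minimal synthetic sketch (invented data with the slope set near the quoted 4.5, not the Chandra sample):

```python
import numpy as np

# Synthetic "galaxies": temperatures in keV, log-luminosities with scatter
rng = np.random.default_rng(2)
T = np.linspace(0.3, 2.0, 40)                          # keV
logL = 4.5 * np.log10(T) + 40.0 + 0.1 * rng.standard_normal(40)

a, b = np.polyfit(np.log10(T), logL, 1)                # slope = power-law index
```

Published analyses use more careful regressions (intrinsic scatter, errors in both variables, e.g. BCES or Bayesian fits), but the log-log linearization is the common core.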
The linearly scaling 3D fragment method for large scale electronic structure calculations
Energy Technology Data Exchange (ETDEWEB)
Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)
2009-07-01
The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nanomaterial simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects that exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we present recent parallel performance results of this code, and apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
Methods of Scientific Research: Teaching Scientific Creativity at Scale
Robbins, Dennis; Ford, K. E. Saavik
2016-01-01
We present a scaling-up plan for AstroComNYC's Methods of Scientific Research (MSR), a course designed to improve undergraduate students' understanding of science practices. The course format and goals, notably the open-ended, hands-on, investigative nature of the curriculum, are reviewed. We discuss how the course's interactive pedagogical techniques empower students to learn creativity within the context of experimental design and control-of-variables thinking. To date the course has been offered to a limited number of students in specific programs. The goal of broadly implementing MSR is to reach more students, and earlier in their education, with the specific purpose of supporting and improving retention of students pursuing STEM careers. However, we also discuss challenges in preserving the effectiveness of the teaching and learning experience at scale.
Psychometric properties of the satisfaction with food-related Life Scale
DEFF Research Database (Denmark)
Schnettler, Berta; Miranda, Horacio; Sepúlveda, José
2013-01-01
Objective: To evaluate the psychometric properties of the Satisfaction with Food-related Life (SWFL) scale and its relation to the Satisfaction with Life Scale (SWLS) in southern Chile. Methods: A survey was applied to a sample of 316 persons in the principal cities of southern Chile, distributed with proportional attachment per city. Results: The results of the confirmatory factor analysis showed an adequate level of internal consistency and a good fit (root mean square error of approximation = 0.071, goodness-of-fit index = 0.95, adjusted goodness-of-fit index = 0.92) to the SWFL data (1-dimensional) …
BOX-COX REGRESSION METHOD IN TIME SCALING
Directory of Open Access Journals (Sweden)
ATİLLA GÖKTAŞ
2013-06-01
Full Text Available The Box-Cox regression method with power transformation λj, for j = 1, 2, ..., k, can be used when the dependent variable and the error term of the linear regression model do not satisfy the continuity and normality assumptions. The choice of the optimum power transformation λj of Y, for j = 1, 2, ..., k, that yields the smallest mean square error is discussed. The Box-Cox regression method is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms when there is a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of using the Box-Cox regression method are discussed in the context of differentiation and differential analysis of the time-scale concept.
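The core Box-Cox step, choosing the power λ that best normalizes a skewed positive variable, can be sketched with SciPy's maximum-likelihood implementation (an illustration of the transformation itself, not the authors' time-scaling application; the data below are synthetic).

```python
import numpy as np
from scipy import stats

# Right-skewed positive data: for lognormal samples the log transform
# (lambda near 0) should be close to optimal, which makes a good sanity check.
rng = np.random.default_rng(3)
y = rng.lognormal(mean=0.0, sigma=0.7, size=500)

y_transformed, lam = stats.boxcox(y)   # lambda chosen by maximum likelihood
```

In the regression setting of the article, λ is instead chosen to minimize the mean square error of the fitted model; the transformation family y^(λ) = (y^λ - 1)/λ (with the λ → 0 limit log y) is the same.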
Precision Scaling Relations for Disk Galaxies in the Local Universe
Lapi, A.; Salucci, P.; Danese, L.
2018-05-01
We build templates of rotation curves as a function of the I-band luminosity via the mass modeling (by the sum of a thin exponential disk and a cored halo profile) of suitably normalized, stacked data from wide samples of local spiral galaxies. We then exploit such templates to determine fundamental stellar and halo properties for a sample of about 550 local disk-dominated galaxies with high-quality measurements of the optical radius R opt and of the corresponding rotation velocity V opt. Specifically, we determine the stellar M ⋆ and halo M H masses, the halo size R H and velocity scale V H, and the specific angular momenta of the stellar j ⋆ and dark matter j H components. We derive global scaling relationships involving such stellar and halo properties both for the individual galaxies in our sample and for their mean within bins; the latter are found to be in pleasing agreement with previous determinations by independent methods (e.g., abundance matching techniques, weak-lensing observations, and individual rotation curve modeling). Remarkably, the size of our sample and the robustness of our statistical approach allow us to attain an unprecedented level of precision over an extended range of mass and velocity scales, with 1σ dispersion around the mean relationships of less than 0.1 dex. We thus set new standard local relationships that must be reproduced by detailed physical models, which offer a basis for improving the subgrid recipes in numerical simulations, that provide a benchmark to gauge independent observations and check for systematics, and that constitute a basic step toward the future exploitation of the spiral galaxy population as a cosmological probe.
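A hedged sketch of the kind of two-component mass model named above: the rotation velocity as the quadrature sum of a thin exponential (Freeman) disk and a cored, pseudo-isothermal halo. All parameter values here are invented for illustration, not the paper's fitted templates.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

def v_disk2(R, Rd=3.0, A=2.0e4):
    """Freeman thin-exponential-disk rotation curve squared.
    A plays the role of 4*pi*G*Sigma0*Rd in (km/s)^2 (illustrative value)."""
    y = R / (2.0 * Rd)
    return A * y**2 * (i0(y) * k0(y) - i1(y) * k1(y))

def v_halo2(R, Rc=5.0, Vinf=180.0):
    """Cored (pseudo-isothermal) halo rotation curve squared."""
    return Vinf**2 * (1.0 - (Rc / R) * np.arctan(R / Rc))

R = np.linspace(0.5, 30.0, 60)          # radii in kpc
V = np.sqrt(v_disk2(R) + v_halo2(R))    # total circular velocity, km/s
```

Fitting such a model to a stacked, normalized rotation-curve template yields the stellar and halo masses and scales (M*, M_H, R_H, V_H) that the paper's scaling relations are built from.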
Global hydrobelts: improved reporting scale for water-related issues?
Meybeck, M.; Kummu, M.; Dürr, H. H.
2012-08-01
Questions related to water such as its availability, water needs or stress, or management, are mapped at various resolutions at the global scale. They are reported at many scales, mostly along political or continental boundaries. As such, they ignore the fundamental heterogeneity of the hydroclimate and the natural boundaries of the river basins. Here, we describe the continental landmasses according to eight global-scale hydrobelts strictly limited by river basins, defined at a 30' (0.5°) resolution. The belts were defined and delineated, based primarily on the annual average temperature (T) and runoff (q), to maximise interbelt differences and minimise intrabelt variability. The belts were further divided into 29 hydroregions based on continental limits. This new global puzzle defines homogeneous and near-contiguous entities with similar hydrological and thermal regimes, glacial and postglacial basin histories, endorheism distribution and sensitivity to climate variations. The Mid-Latitude, Dry and Subtropical belts have northern and southern analogues and a general symmetry can be observed for T and q between them. The Boreal and Equatorial belts are unique. The hydroregions (median size 4.7 Mkm^2) contrast strongly, with the average q ranging between 6 and 1393 mm yr^-1 and the average T between -9.7 and +26.3 °C. Unlike the hydroclimate, the population density between the North and South belts and between the continents varies greatly, resulting in pronounced differences between the belts with analogues in both hemispheres. The population density ranges from 0.7 to 0.8 p km^-2 for the North American Boreal and some Australian hydroregions to 280 p km^-2 for the Asian part of the Northern Mid-Latitude belt. The combination of population densities and hydroclimate features results in very specific expressions of water-related characteristics in each of the 29 hydroregions. Our initial tests suggest that hydrobelt and hydroregion divisions are often more
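The belt-assignment logic can be sketched as a simple classification of grid cells by annual temperature T and runoff q. The thresholds below are invented for illustration; the paper's boundaries are calibrated to maximize interbelt differences, not fixed cutoffs like these.

```python
def belt(T: float, q: float) -> str:
    """Toy hydrobelt classifier: T in deg C, q in mm/yr (thresholds invented)."""
    if T < 0:
        return "Boreal"
    if q < 100:
        return "Dry"
    if T > 22:
        return "Equatorial"
    if T > 14:
        return "Subtropical"
    return "Mid-Latitude"

cells = [(-5.0, 300.0), (25.0, 1200.0), (10.0, 450.0)]  # (T, q) per grid cell
belts = [belt(T, q) for T, q in cells]
```

In the actual construction, cells are additionally constrained to respect river-basin boundaries, so a basin is never split between belts.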
International Nuclear Information System (INIS)
Seguin, B.; Courault, D.; Guerif, M.
1994-01-01
Remotely sensed surface temperatures have proven useful for monitoring evapotranspiration (ET) rates and crop water use because of their direct relationship with sensible and latent energy exchange processes. Procedures for using the thermal infrared (IR) obtained with hand-held radiometers deployed at ground level are now well established and even routine for many agricultural research and management purposes. The availability of IR from meteorological satellites at scales from 1 km (NOAA-AVHRR) to 5 km (METEOSAT) permits extension of local, ground-based approaches to larger scale crop monitoring programs. Regional observations of surface minus air temperature (i.e., the stress degree day) and remote estimates of daily ET were derived from satellite data over sites in France, the Sahel, and North Africa and summarized here. Results confirm that similar approaches can be applied at local and regional scales despite differences in pixel size and heterogeneity. This article analyzes methods for obtaining these data and outlines the potential utility of satellite data for operational use at the regional scale. (author)
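A simplified stress-degree-day-style estimate of daily ET of the Seguin-Itier type can be written in one line: daily net radiation minus a linear function of the midday surface-air temperature difference. The coefficients A and B are empirical; the values below are illustrative assumptions, not those calibrated in the studies summarized.

```python
def daily_et(Rn_mm: float, Ts: float, Ta: float, A: float = 1.0, B: float = 0.25) -> float:
    """Daily ET (mm/day) from daily net radiation Rn_mm (mm water equivalent)
    and midday surface (Ts) minus air (Ta) temperature, both in deg C.
    Illustrative coefficients only."""
    return Rn_mm - A - B * (Ts - Ta)

# Example: Rn = 6 mm/day, surface 4 deg C warmer than air
et = daily_et(Rn_mm=6.0, Ts=32.0, Ta=28.0)
```

With satellite IR supplying Ts at 1-5 km pixels and meteorological networks supplying Ta and Rn, the same relation extends the local hand-held-radiometer procedure to regional monitoring.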
Schmengler, A. C.; Vlek, P. L. G.
2012-04-01
Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha^-1 yr^-1 depending on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be due either to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha^-1 yr^-1 from the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha^-1 yr^-1 are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model.
Single-field consistency relations of large scale structure
International Nuclear Information System (INIS)
Creminelli, Paolo; Noreña, Jorge; Simonović, Marko; Vernizzi, Filippo
2013-01-01
We derive consistency relations for the late universe (CDM and ΛCDM): relations between an n-point function of the density contrast δ and an (n+1)-point function in the limit in which one of the (n+1) momenta becomes much smaller than the others. These are based on the observation that a long mode, in single-field models of inflation, reduces to a diffeomorphism from its freezing during inflation all the way to the late universe, even when the long mode is inside the horizon (but outside the sound horizon). These results are derived in Newtonian gauge, at first and second order in the small momentum q of the long mode, and they are valid non-perturbatively in the short-scale δ. In the non-relativistic limit our results match those of [1]. These relations are a consequence of diffeomorphism invariance; they are not satisfied in the presence of extra degrees of freedom during inflation or a violation of the Equivalence Principle (extra forces) in the late universe
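At lowest order in the soft momentum q, such a consistency relation takes the following schematic form (notation ours; see the paper for the precise first- and second-order statements):

```latex
\lim_{q \to 0}
\langle \delta_{\vec q}(\eta)\,\delta_{\vec k_1}(\eta_1)\cdots\delta_{\vec k_n}(\eta_n) \rangle'
\;=\;
- P_\delta(q,\eta)\,
\sum_{a=1}^{n} \frac{D(\eta_a)}{D(\eta)}\,
\frac{\vec k_a \cdot \vec q}{q^{2}}\,
\langle \delta_{\vec k_1}(\eta_1)\cdots\delta_{\vec k_n}(\eta_n) \rangle'
```

Here primes denote correlators with the momentum-conserving delta function stripped, P_δ is the power spectrum of the long mode, and D is the linear growth factor; the relation expresses that the soft mode merely transports the short modes, so any deviation signals extra degrees of freedom or Equivalence Principle violation.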
Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems
Razzak, M. A.; Alam, M. Z.; Sharif, M. N.
2018-03-01
In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are very easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare method (MSLP) do not give the desired results for strongly damped forced vibration systems. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and also improve on other existing results. For weak nonlinearities with weak damping, the absolute relative error of the first-order approximate external frequency obtained in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with strong damping, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping.
Large Scale Obscuration and Related Climate Effects Workshop: Proceedings
Energy Technology Data Exchange (ETDEWEB)
Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K. [eds.
1994-05-01
A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January 1992 in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).
Optimization of large-scale industrial systems : an emerging method
Energy Technology Data Exchange (ETDEWEB)
Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre
2006-07-01
This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.
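The EA core that such methodologies build on can be shown in bare-bones form. The sketch below (tournament selection, blend crossover, Gaussian mutation, elitist survival, minimizing a toy objective) is an illustration only; the industrial machinery described above (decomposition, constraint handling, self-adaptation, parallelization) is far beyond it.

```python
import random

def evolve(f, dim=2, pop_size=30, gens=60, seed=4):
    """Minimal elitist evolutionary algorithm minimizing f (toy sketch)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            return min(rng.sample(pop, 3), key=f)   # pick best of 3 random parents
        children = []
        for _ in range(pop_size):
            a, b = tournament(), tournament()
            w = rng.random()
            # Blend crossover plus small Gaussian mutation
            children.append([w * x + (1 - w) * y + rng.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = sorted(pop + children, key=f)[:pop_size]  # elitist survival
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)  # toy objective
best = evolve(sphere)
```

A multi-objective variant would replace the single-objective ranking with Pareto dominance sorting, which is where the paper's decomposition and constraint-handling enhancements come in.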
New Models and Methods for the Electroweak Scale
Energy Technology Data Exchange (ETDEWEB)
Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and in annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac
International Nuclear Information System (INIS)
Woo, M.K.; Cunningham, J.R.
1990-01-01
In the convolution/superposition method of photon beam dose calculations, inhomogeneities are usually handled by using some form of scaling involving the relative electron densities of the inhomogeneities. In this paper the accuracy of density scaling as applied to primary electrons generated in photon interactions is examined. Monte Carlo calculations are compared with density scaling calculations for air and cork slab inhomogeneities. For individual primary photon kernels as well as for photon interactions restricted to a thin layer, the results can differ significantly, by up to 50%, between the two calculations. However, for realistic photon beams where interactions occur throughout the whole irradiated volume, the discrepancies are much less severe. The discrepancies for the kernel calculation are attributed to the scattering characteristics of the electrons and the consequent oversimplified modeling used in the density scaling method. A technique called the kernel integration technique is developed to analyze the general effects of air and cork inhomogeneities. It is shown that the discrepancies become significant only under rather extreme conditions, such as immediately beyond the surface after a large air gap. In electron beams all the primary electrons originate from the surface of the phantom and the errors caused by simple density scaling can be much more significant. Various aspects relating to the accuracy of density scaling for air and cork slab inhomogeneities are discussed
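The density-scaling idea analyzed above reduces, in its simplest form, to replacing geometric depth with radiological (effective) depth: each slab's thickness is weighted by its electron density relative to water. The toy sketch below uses invented slab values purely for illustration.

```python
def radiological_depth(slabs):
    """Effective (water-equivalent) depth for a stack of slabs.
    slabs: list of (thickness_cm, relative_electron_density)."""
    return sum(t * rho for t, rho in slabs)

# 2 cm water-equivalent tissue, a 3 cm air-like gap, then 2 cm tissue:
# the air gap contributes almost nothing to the effective depth.
d_eff = radiological_depth([(2.0, 1.0), (3.0, 0.001), (2.0, 1.0)])
```

The abstract's point is that this simple scaling misrepresents electron scattering near interfaces (e.g., immediately beyond a large air gap), which is where the kernel-integration analysis and Monte Carlo comparisons show the largest discrepancies.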
Development and Validation of a PTSD-Related Impairment Scale
2012-06-01
Instruments referenced include the Social Adjustment Scale (SAS-SR) [58], the Dyadic Adjustment Scale (DAS) [59], and the Life Stressors and Social Resources Inventory (LISRES) [60], a measure that gauges ongoing life stressors and social resources across domains such as spouse/partner and finances. Lengthy measures (e.g., the ICF checklist; LISRES, Moos, Penn, & Billings, 1988) may not be practical or desirable in many healthcare settings or in large-scale
Multi-scale calculation based on dual domain material point method combined with molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-02-27
This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore, MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method, where we calculate stress at each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the
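The parallelism argument above (material points exchange data only with grid nodes, never with each other, so per-point closure calculations are independent) can be sketched as follows; `md_stress` is a toy stand-in closure, not a real MD simulation:

```python
# Sketch of the parallelism noted above: because material points do not
# communicate among themselves, each point's closure (stress) can be
# evaluated independently. md_stress is a toy stand-in, not a real MD run.
from concurrent.futures import ThreadPoolExecutor

def md_stress(strain):
    # placeholder closure relation; a real implementation would launch an MD
    # simulation of the atoms surrounding this material point
    return 2.0 * strain

def compute_stresses(strains):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(md_stress, strains))

print(compute_stresses([0.1, 0.2]))  # [0.2, 0.4]
```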
Accuracy of a digital weight scale relative to the nintendo wii in measuring limb load asymmetry.
Kumar, Ns Senthil; Omar, Baharudin; Joseph, Leonard H; Hamdan, Nor; Htwe, Ohnmar; Hamidun, Nursalbiyah
2014-08-01
[Purpose] The aim of the present study was to investigate the accuracy of a digital weight scale relative to the Wii in limb loading measurement during static standing. [Methods] This was a cross-sectional study conducted at a public university teaching hospital. The sample consisted of 24 participants (12 with osteoarthritis and 12 healthy) recruited through convenience sampling. Limb loading measurements were obtained using a digital weight scale and the Nintendo Wii in static standing with three trials under an eyes-open condition. The limb load asymmetry was computed as the symmetry index. [Results] The accuracy of measurement with the digital weight scale relative to the Nintendo Wii was analyzed using the receiver operating characteristic (ROC) curve and Kolmogorov-Smirnov test (K-S test). The area under the ROC curve was found to be 0.67. Logistic regression confirmed the validity of the digital weight scale relative to the Nintendo Wii. The D statistic from the K-S test was found to be 0.16, which confirmed that there was no significant difference in measurement between the equipment. [Conclusion] The digital weight scale is an accurate tool for measuring limb load asymmetry. The low price, easy availability, and maneuverability make it a good potential tool in clinical settings for measuring limb load asymmetry.
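For readers unfamiliar with the symmetry index, one common definition divides the limb-load difference by the mean load; the study's exact formula is not given here, so treat this as an assumption:

```python
# One common definition of the symmetry index: the limb-load difference as a
# percentage of the mean load. This is an assumption; the study's exact
# formula may differ.

def symmetry_index(left_kg, right_kg):
    """0 means perfectly symmetric loading; larger values mean more asymmetry."""
    return abs(left_kg - right_kg) / (0.5 * (left_kg + right_kg)) * 100.0

print(round(symmetry_index(36.0, 34.0), 2))  # 5.71
```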
Laboratory-scale measurements of effective relative permeability for layered sands
Energy Technology Data Exchange (ETDEWEB)
Butts, M.G.; Korsgaard, S.
1996-12-31
Predictions of the impact of remediation or the extent of contamination resulting from spills of gasoline, solvents and other petroleum products, must often be made in complex geological environments. Such problems can be treated by introducing the concept of effective parameters that incorporate the effects of soil layering or other heterogeneities into a large-scale flow description. Studies that derive effective multiphase parameters are few, and approximations are required to treat the non-linear multiphase flow equations. The purpose of this study is to measure effective relative permeabilities for well-defined multi-layered soils at the laboratory scale. Relative permeabilities were determined for homogeneous and layered, unconsolidated sands using the method of Jones and Roszelle (1978). The experimental data show that endpoint relative permeabilities are important in defining the shape of the relative permeability curves, but these cannot be predicted by estimation methods based on capillary pressure data. The most significant feature of the measured effective relative permeability curves is that the entrapped (residual) oil saturation is significantly larger than the residual saturation of the individual layers. This observation agrees with previous theoretical predictions of large-scale entrapment (Butts, 1993, 1995). Enhanced entrapment in heterogeneous soils has several important implications for spill remediation, for example, the reduced efficiency of direct recovery. (au) 17 refs.
Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale
International Nuclear Information System (INIS)
Daily, Jeffrey A.
2015-01-01
The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore's law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm shift, individual projects are now capable of generating billions of raw sequences that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequence homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or 'homologous') on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores.
International Nuclear Information System (INIS)
Park, Jin Beak
1995-02-01
Low-level radioactive waste management requires knowledge of the natures and quantities of radionuclides in the immobilized or packaged waste. U.S. NRC rules require programs that measure the concentrations of all relevant nuclides either directly or indirectly, by relating difficult-to-measure radionuclides to other easy-to-measure radionuclides through the application of scaling factors. Scaling factors previously developed through statistical approaches are only generic and pose many difficulties in sampling procedures. Generic scaling factors cannot take plant operation history into account. In this study, a method to predict plant-specific, operational-history-dependent scaling factors is developed. A realistic and detailed approach is taken to find scaling factors for the reactor coolant. This approach begins with fission product release mechanisms and the fundamental release properties of fuel-source nuclides such as fission products and transuranic nuclides. Scaling factors for various waste streams are derived from the predicted reactor coolant scaling factors with the aid of a radionuclide retention and build-up model. This model makes use of radioactive material balances within the radioactive waste processing systems. Scaling factors for the reactor coolant and waste streams that include the effects of plant operation history have been developed according to input parameters describing plant operation history.
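The scaling-factor idea can be sketched as follows: the activity of a difficult-to-measure (DTM) nuclide is inferred by multiplying a measured easy-to-measure (ETM) key nuclide by a scaling factor. The factor value below is illustrative only, not a plant-specific result:

```python
# Sketch of scaling-factor use: a difficult-to-measure (DTM) nuclide activity
# is inferred from a measured easy-to-measure (ETM) key nuclide. The factor
# value here is a hypothetical placeholder, not a plant-specific result.

SCALING_FACTORS = {("Ni-63", "Co-60"): 18.0}  # illustrative SF for one pair

def infer_activity(dtm, etm, etm_activity_bq_per_g):
    return SCALING_FACTORS[(dtm, etm)] * etm_activity_bq_per_g

print(infer_activity("Ni-63", "Co-60", 2.0))  # 36.0
```

A plant-specific method like the one described would replace the static table with factors predicted from release mechanisms and operating history.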
Electron beam absorption in solid and in water phantoms: depth scaling and energy-range relations
International Nuclear Information System (INIS)
Grosswendt, B.; Roos, M.
1989-01-01
In electron dosimetry energy parameters are used with values evaluated from ranges in water. The electron ranges in water may be deduced from ranges measured in solid phantoms. Several procedures recommended by national and international organisations differ both in the scaling of the ranges and in the energy-range relations for water. Using the Monte Carlo method the application of different procedures for electron energies below 10 MeV is studied for different phantom materials. It is shown that deviations in the range scaling and in the energy-range relations for water may accumulate to give energy errors of several per cent. In consequence energy-range relations are deduced for several solid phantom materials which enable a single-step energy determination. (author)
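A single-step energy determination of the kind described might look like the sketch below: a range measured in a solid phantom is scaled to water, then converted to energy with an energy-range relation. Both the scaling factor 0.949 and the 2.33 MeV/cm coefficient are illustrative assumptions, not values taken from the paper:

```python
# Sketch of a single-step energy determination: scale a half-value depth R50
# measured in a solid phantom to water, then apply an energy-range relation.
# The 0.949 scaling factor and 2.33 MeV/cm coefficient are assumptions for
# illustration only.

def mean_energy_mev(r50_solid_cm, depth_scaling_factor):
    r50_water = r50_solid_cm * depth_scaling_factor  # scale range to water
    return 2.33 * r50_water                          # energy-range relation

print(round(mean_energy_mev(2.5, 0.949), 2))  # 5.53
```

The paper's point is that errors in the scaling factor and in the energy-range relation can accumulate, so the two steps should be calibrated together per phantom material.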
New SCALE-4 features related to cross-section processing
International Nuclear Information System (INIS)
Petrie, L.M.; Landers, N.F.; Greene, N.M.; Parks, C.V.
1991-01-01
The SCALE code system has a standardized scheme for processing problem-dependent cross sections from problem-independent master libraries. Some improvements and new capabilities in the processing scheme have been incorporated into the new Version 4 release of the SCALE system. The new features include the capability to consider annular cylindrical and spherical unit cells, an improved Dancoff factor formulation, and changes to the NITAWL-II module to perform resonance self-shielding with reference to infinite-dilute values. A review of these major changes in the cross-section processing scheme for SCALE-4 is presented in this paper.
Plants with useful traits and related methods
Mackenzie, Sally Ann; De la Rosa Santamaria, Roberto
2016-10-25
The present invention provides methods for obtaining plants that exhibit useful traits by transient suppression of the MSH1 gene of the plants. Methods for identifying genetic loci that provide for useful traits in plants and plants produced with those loci are also provided. In addition, plants that exhibit the useful traits, parts of the plants including seeds, and products of the plants are provided as well as methods of using the plants.
Method-related estimates of sperm vitality.
Cooper, Trevor G; Hellenkemper, Barbara
2009-01-01
Comparison of methods that estimate viability of human spermatozoa by monitoring head membrane permeability revealed that wet preparations (whether using positive or negative phase-contrast microscopy) generated significantly higher percentages of nonviable cells than did air-dried eosin-nigrosin smears. Only with the latter method did the sum of motile (presumed live) and stained (presumed dead) preparations never exceed 100%, making this the method of choice for sperm viability estimates.
Regional scale ecological risk assessment: using the relative risk model
National Research Council Canada - National Science Library
Landis, Wayne G
2005-01-01
...) in the performance of regional-scale ecological risk assessments. The initial chapters present the methodology and the critical nature of the interaction between risk assessors and decision makers...
Relating Actor Analysis Methods to Policy Problems
Van der Lei, T.E.
2009-01-01
For a policy analyst the policy problem is the starting point for the policy analysis process. During this process the policy analyst structures the policy problem and makes a choice for an appropriate set of methods or techniques to analyze the problem (Goeller 1984). The methods of the policy
A Method of Vector Map Multi-scale Representation Considering User Interest on a Subdivision Grid
Directory of Open Access Journals (Sweden)
YU Tong
2016-12-01
Full Text Available Compared with traditional spatial data models and methods, global subdivision grids show great advantages in the organization and expression of massive spatial data. In view of this, a method of vector map multi-scale representation considering user interest on a subdivision grid is proposed. First, a spatial interest field is built using a large number of POI data points to describe the spatial distribution of user interest in geographic information. Second, spatial factors are classified and graded, and their representation scale ranges are determined. Finally, different levels of subdivision surfaces are divided based on GeoSOT subdivision theory, and the corresponding relation between subdivision level and scale is established. According to the user interest of the subdivision surfaces, spatial features can be expressed at different degrees of detail, realizing multi-scale representation of spatial data based on user interest. The experimental results show that this method not only satisfies users' general-to-detail and important-to-secondary spatial cognition demands, but also achieves a better multi-scale representation effect.
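The level-of-detail selection step could be sketched as a simple mapping from an interest score to a subdivision level; the thresholds and levels below are hypothetical placeholders, not the paper's parameters:

```python
# Hypothetical sketch: picking a level of detail for a subdivision cell from a
# normalized user-interest score. Thresholds and levels are placeholders, not
# values from the paper.

def detail_level(interest, thresholds=(0.2, 0.5, 0.8), levels=(9, 12, 15, 18)):
    for t, lvl in zip(thresholds, levels):
        if interest < t:
            return lvl
    return levels[-1]  # highest interest -> finest subdivision level

print(detail_level(0.6))  # 15
```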
Using scaling relations to understand trends in the catalytic activity of transition metals
International Nuclear Information System (INIS)
Jones, G; Bligaard, T; Abild-Pedersen, F; Noerskov, J K
2008-01-01
A method is developed to estimate the potential energy diagram for a full catalytic reaction for a range of late transition metals on the basis of a calculation (or an experimental determination) for a single metal. The method, which employs scaling relations between adsorption energies, is illustrated by calculating the potential energy diagram for the methanation reaction and ammonia synthesis for 11 different metals on the basis of results calculated for Ru. It is also shown that considering the free energy diagram for the reactions, under typical industrial conditions, provides additional insight into reactivity trends
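The scaling-relation transfer described above can be illustrated with the linear form dE_AHx = gamma * dE_A + xi, where gamma depends only on the valency left after hydrogen bonding; the numbers below are illustrative, not calculated adsorption energies:

```python
# Sketch of an adsorption-energy scaling relation, dE_AHx = gamma * dE_A + xi,
# with gamma = (x_max - x) / x_max depending only on the hydrogen count x.
# Input energies and xi below are illustrative, not calculated values.

def scaled_energy(dE_A, x, x_max, xi):
    gamma = (x_max - x) / x_max
    return gamma * dE_A + xi

# CH3* (x = 3; x_max = 4 for carbon), given an atomic-C adsorption energy:
print(scaled_energy(-6.0, 3, 4, 0.5))  # -1.0
```

Relations of this kind let one estimate a full potential energy diagram on a new metal from a single reference calculation, which is the strategy the abstract describes.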
Large-scale structure observables in general relativity
International Nuclear Information System (INIS)
Jeong, Donghui; Schmidt, Fabian
2015-01-01
We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)
Computational methods for criticality safety analysis within the scale system
International Nuclear Information System (INIS)
Parks, C.V.; Petrie, L.M.; Landers, N.F.; Bucholz, J.A.
1986-01-01
The criticality safety analysis capabilities within the SCALE system are centered around the Monte Carlo codes KENO IV and KENO V.a, which are both included in SCALE as functional modules. The XSDRNPM-S module is also an important tool within SCALE for obtaining multiplication factors for one-dimensional system models. This paper reviews the features and modeling capabilities of these codes along with their implementation within the Criticality Safety Analysis Sequences (CSAS) of SCALE. The CSAS modules provide automated cross-section processing and user-friendly input that allow criticality safety analyses to be done in an efficient and accurate manner. 14 refs., 2 figs., 3 tabs
Test methods of total dose effects in very large scale integrated circuits
International Nuclear Information System (INIS)
He Chaohui; Geng Bin; He Baoping; Yao Yujuan; Li Yonghong; Peng Honglun; Lin Dongsheng; Zhou Hui; Chen Yusheng
2004-01-01
A test method for total dose effects (TDEs) in very large scale integrated (VLSI) circuits is presented. The consumption current of the devices is measured while the functional parameters of the devices (or circuits) are measured. The relation between data errors and consumption current can then be analyzed, and the mechanism of TDEs in VLSI circuits can be proposed. Experimental results of 60Co γ-ray TDE tests are given for SRAMs, EEPROMs, flash ROMs, and a CPU.
Energy Technology Data Exchange (ETDEWEB)
Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: jerome.gleyzes@cea.fr, E-mail: lhui@astro.columbia.edu, E-mail: msimonov@sissa.it, E-mail: filippo.vernizzi@cea.fr [SISSA, via Bonomea 265, Trieste, 34136 (Italy)
2014-06-01
The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show this explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation by looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can probe EP violations of order 10^-3 to 10^-4 on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System, and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.
Relating Lagrangian passive scalar scaling exponents to Eulerian scaling exponents in turbulence
Schmitt , François G
2005-01-01
Intermittency is a basic feature of fully developed turbulence, for both velocity and passive scalars. Intermittency is classically characterized by the Eulerian scaling exponents of structure functions. The same approach can be used in a Lagrangian framework to characterize the temporal intermittency of the velocity and passive scalar concentration of an element of fluid advected by a turbulent intermittent field. Here we focus on Lagrangian passive scalar scaling exponents, and discuss their p...
Features of the method of large-scale paleolandscape reconstructions
Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina
2017-04-01
The method of paleolandscape reconstructions was tested in the key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created which shows paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main time periods of the Holocene and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales, aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of restored paleolakes were determined based on the thickness and territorial confinement of decay-ooze deposits. 5. On the basis of the landscape-edaphic method, paleolandscape reconstructions for the main periods of the Holocene were performed; for reconstructions of the original, indigenous flora we relied on data from palynological studies conducted in the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development, and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the
The 'Falling Box' method in general relativity
International Nuclear Information System (INIS)
Gladush, V.D.
1998-01-01
The problems of justification, generalization, and applicability of the 'falling box' method to obtain some exact solutions of the vacuum Einstein equations are investigated. A 'physical' derivation of the Reissner-Nordström-de Sitter and Kerr metrics is presented. An explanation is given for the well-known relativistic phenomenon whereby gravity is created by twice the energy density of the electric field.
Mbow, C.; Brandt, M.; Fensholt, R.; Ouedraogo, I.; Tagesson, T.
2015-12-01
Thematic gaps in land degradation trends in the Sahel. Trends in land degradation have been the most contested issue for arid and semi-arid regions. In the Sahel, depending on the scale of analysis and the methods and data used, the documented trends have not been consistent across authors and scientific disciplines. The assessment of land degradation and the quantification of its effects on land productivity have been studied for many decades, but little agreement has been reached on their magnitude and direction in the Sahel. This lack of consistency among scientific outputs can be related to methodological underpinnings and the data used at various scales of analysis. Assessing biophysical trends on the ground requires long-term ground-based data collection to evaluate and better understand the mechanisms behind land dynamics. The Sahel is seen as greening by many authors, but is that greening geographically consistent? These questions underline the importance of the scale of analysis and its related drivers. The questions addressed concern not only the factors explaining loss of tree cover but also the regeneration of degraded land. The framework used is the heuristic cycle model to assess losses and damages versus gains and improvements of various land use practices. The presentation will address the following aspects: - How much do we know from satellite data after 40 years of remote sensing analysis over the Sahel? This section discusses agreements and divergences of evidence and differentiated interpretations of land degradation in the Sahel. - The biophysical factors that are relevant for tracking land degradation in the Sahel; aspects such as disentangling human from climate factors and the biophysical factors behind land dynamics will be presented. - Some specific cases of drivers of land architecture transition under the combined influence of climate and human factors. - Based on the above, we conclude with some key recommendations on how to improve land degradation assessment in the arid regions of the Sahel.
Relations between overturning length scales at the Spanish planetary boundary layer
López, Pilar; Cano, José L.
2016-04-01
We analyze the behavior of the maximum Thorpe displacement (dT)max and the Thorpe scale LT at the atmospheric boundary layer (ABL), extending previous research with new data and improving our studies related to the novel use of the Thorpe method applied to the ABL. The maximum Thorpe displacements vary between -900 m and 950 m for the different field campaigns. The maximum Thorpe displacement is always greater under convective conditions than under stable ones, independently of its sign. The Thorpe scale LT ranges between 0.2 m and 680 m for the different data sets, which cover different stratified mixing conditions (shear-driven and convective regions). The Thorpe scale does not exceed several tens of meters under stable and neutral stratification conditions related to instantaneous density gradients. In contrast, under convective conditions, Thorpe scales are relatively large, exceeding hundreds of meters, which may be related to convective bursts. We analyze the relation between (dT)max and the Thorpe scale LT and deduce that they satisfy a power law. We also deduce that the exponents of the power laws differ between convective and shear-driven conditions. These different power laws could identify overturns created by different mechanisms. References: Cuxart, J., Yagüe, C., Morales, G., Terradellas, E., Orbe, J., Calvo, J., Fernández, A., Soler, M., Infante, C., Buenestado, P., Espinalt, Joergensen, H., Rees, J., Vilà, J., Redondo, J., Cantalapiedra, I. and Conangla, L.: Stable atmospheric boundary-layer experiment in Spain (SABLES 98). A report, Boundary-Layer Meteorology, 96, 337-370, 2000. Dillon, T. M.: Vertical Overturns: A Comparison of Thorpe and Ozmidov Length Scales, J. Geophys. Res., 87(C12), 9601-9613, 1982. Itsweire, E. C.: Measurements of vertical overturns in stably stratified turbulent flow, Phys. Fluids, 27(4), 764-766, 1984. Kitade, Y., Matsuyama, M. and Yoshida, J.: Distribution of overturn induced by internal
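The Thorpe method used above can be sketched in a few lines: sort the measured profile into its statically stable order, record how far each sample moved (the Thorpe displacement), and take the RMS as the Thorpe scale. This sketch sorts density, as in the oceanographic case; for the ABL one would sort potential temperature instead:

```python
# Sketch of the Thorpe method: sort a profile into its statically stable
# (monotonic) order; the distance each sample moves is the Thorpe
# displacement d_T, and the Thorpe scale L_T is the RMS of those
# displacements. Here density is sorted (oceanographic convention); for the
# atmosphere one would sort potential temperature.
import math

def thorpe_scale(z, density):
    n = len(z)
    order = sorted(range(n), key=lambda i: density[i])  # stable sorted profile
    d = [z[order[j]] - z[j] for j in range(n)]          # Thorpe displacements
    return math.sqrt(sum(x * x for x in d) / n)

# A single two-sample overturn between z = 1 and z = 2:
print(round(thorpe_scale([0.0, 1.0, 2.0, 3.0], [1.0, 1.2, 1.1, 1.3]), 4))  # 0.7071
```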
THE EVOLUTION OF BLACK HOLE SCALING RELATIONS IN GALAXY MERGERS
International Nuclear Information System (INIS)
Johansson, Peter H.; Burkert, Andreas; Naab, Thorsten
2009-01-01
We study the evolution of black holes (BHs) on the M_BH-σ and M_BH-M_bulge planes as a function of time in disk galaxies undergoing mergers. We begin the simulations with the progenitor BH masses being initially below (Δlog M_BH,i ∼ -2), on (Δlog M_BH,i ∼ 0), and above (Δlog M_BH,i ∼ 0.5) the observed local relations. The final relations are rapidly established after the final coalescence of the galaxies and their BHs. Progenitors with low initial gas fractions (f_gas = 0.2) starting below the relations evolve onto the relations (Δlog M_BH,f ∼ -0.18), progenitors on the relations stay there (Δlog M_BH,f ∼ 0), and finally progenitors above the relations evolve toward the relations, but still remain above them (Δlog M_BH,f ∼ 0.35). Mergers in which the progenitors have high initial gas fractions (f_gas = 0.8) evolve above the relations in all cases (Δlog M_BH,f ∼ 0.5). We find that the initial gas fraction is the prime source of scatter in the observed relations, dominating over the scatter arising from the evolutionary stage of the merger remnants. The fact that BHs starting above the relations do not evolve onto the relations indicates that our simulations rule out the scenario in which overmassive BHs evolve onto the relations through gas-rich mergers. By implication our simulations thus disfavor the picture in which supermassive BHs develop significantly before their parent bulges.
Urban energy consumption and related carbon emission estimation: a study at the sector scale
Lu, Weiwei; Chen, Chen; Su, Meirong; Chen, Bin; Cai, Yanpeng; Xing, Tao
2013-12-01
With rapid economic development and energy consumption growth, China has become the largest energy consumer in the world. Impelled by extensive international concern, there is an urgent need to analyze the characteristics of energy consumption and related carbon emission, with the objective of saving energy, reducing carbon emission, and lessening environmental impact. Focusing on urban ecosystems, the biggest energy consumer, a method for estimating energy consumption and related carbon emission was established at the urban sector scale in this paper. Based on data for 1996-2010, the proposed method was applied to Beijing in a case study to analyze the consumption of different energy resources (i.e., coal, oil, gas, and electricity) and related carbon emission in different sectors (i.e., agriculture, industry, construction, transportation, household, and service sectors). The results showed that coal and oil contributed most to energy consumption and carbon emission among different energy resources during the study period, while the industrial sector consumed the most energy and emitted the most carbon among different sectors. Suggestions were put forward for energy conservation and emission reduction in Beijing. The analysis of energy consumption and related carbon emission at the sector scale is helpful for practical energy saving and emission reduction in urban ecosystems.
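At its core, such a sector-scale estimate is an accounting identity: emissions are the sum, over sectors and fuels, of energy consumption times a fuel-specific emission factor. A minimal sketch of that identity (the factor values below are illustrative placeholders, not the coefficients used in the study):

```python
# Hypothetical emission factors (t CO2 per unit of energy) for illustration;
# real inventories use fuel- and region-specific coefficients.
EMISSION_FACTORS = {"coal": 2.66, "oil": 2.08, "gas": 1.63, "electricity": 0.90}

def sector_emissions(consumption):
    """consumption: {sector: {fuel: energy use}} -> {sector: t CO2}."""
    return {
        sector: sum(EMISSION_FACTORS[fuel] * amount
                    for fuel, amount in fuels.items())
        for sector, fuels in consumption.items()
    }

demand = {
    "industry":  {"coal": 100.0, "electricity": 40.0},
    "transport": {"oil": 60.0},
}
totals = sector_emissions(demand)
```

Summing `totals` over sectors gives the city-wide estimate; time series of such totals support the 1996-2010 trend analysis described above.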
Giant molecular cloud scaling relations: the role of the cloud definition
Khoperskov, S. A.; Vasiliev, E. O.; Ladeyschikov, D. A.; Sobolev, A. M.; Khoperskov, A. V.
2016-01-01
We investigate the physical properties of molecular clouds in disc galaxies with different morphologies: a galaxy without prominent structure, a spiral barred galaxy and a galaxy with flocculent structure. Our N-body/hydrodynamical simulations take into account non-equilibrium H2 and CO chemical kinetics, self-gravity, star formation and feedback processes. For the simulated galaxies, the scaling relations of giant molecular clouds, or so-called Larson's relations, are studied for two types of cloud definition (or extraction method): the first is based on total column density position-position (PP) data sets and the second is indicated by the CO (1-0) line emission used in position-position-velocity (PPV) data. We find that the cloud populations obtained using both cloud extraction methods generally have similar physical parameters, except that for the CO data the mass spectrum of clouds has a tail with low-mass objects M ∼ 10^3-10^4 M⊙. Owing to a varying column density threshold, the power-law indices in the scaling relations are significantly changed. In contrast, the relations are invariant to the CO brightness temperature threshold. Finally, we find that the mass spectra of clouds for PPV data are almost insensitive to the galactic morphology, whereas the spectra for PP data demonstrate significant variation.
Capacitor assembly and related method of forming
Zhang, Lili; Tan, Daniel Qi; Sullivan, Jeffrey S.
2017-12-19
A capacitor assembly is disclosed. The capacitor assembly includes a housing. The capacitor assembly further includes a plurality of capacitors disposed within the housing. Furthermore, the capacitor assembly includes a thermally conductive article disposed about at least a portion of a capacitor body of the capacitors, and in thermal contact with the capacitor body. Moreover, the capacitor assembly also includes a heat sink disposed within the housing and in thermal contact with at least a portion of the housing and the thermally conductive article such that the heat sink is configured to remove heat from the capacitor in a radial direction of the capacitor assembly. Further, a method of forming the capacitor assembly is also presented.
Flexible energetic materials and related methods
Energy Technology Data Exchange (ETDEWEB)
Heaps, Ronald J.
2018-03-06
Energetic compositions and methods of forming components from the compositions are provided. In one embodiment, a composition includes aluminum, molybdenum trioxide, potassium perchlorate, and a binder. In one embodiment, the binder may include a silicone material. The materials may be mixed with a solvent, such as xylene, de-aired, shaped and cured to provide a self-supporting structure. In one embodiment, one or more reinforcement members may be added to provide additional strength to the structure. For example, a weave or mat of carbon fiber material may be added to the mixture prior to curing. In one embodiment, blade casting techniques may be used to form a structure. In another embodiment, a structure may be formed using 3-dimensional printing techniques.
Short scales to assess cannabis-related problems: a review of psychometric properties
Directory of Open Access Journals (Sweden)
Klempova Danica
2008-12-01
Aims The purpose of this paper is to summarize the psychometric properties of four short screening scales to assess problematic forms of cannabis use: Severity of Dependence Scale (SDS), Cannabis Use Disorders Identification Test (CUDIT), Cannabis Abuse Screening Test (CAST) and Problematic Use of Marijuana (PUM). Methods A systematic computer-based literature search was conducted within the databases of PubMed, PsycINFO and Addiction Abstracts. A total of 12 publications reporting measures of reliability or validity were identified: 8 concerning SDS, 2 concerning CUDIT and one each concerning CAST and PUM. Studies spanned adult and adolescent samples from general and specific user populations in a number of countries worldwide. Results All screening scales tended to have moderate to high internal consistency (Cronbach's α ranging from .72 to .92). Test-retest reliability and item-total correlation have been reported for SDS with acceptable results. Results of validation studies varied depending on study population and standards used for validity assessment, but generally sensitivity, specificity and predictive power are satisfactory. Standard diagnostic cut-off points that can be generalized to different populations do not exist for any scale. Conclusion Short screening scales to assess dependence and other problems related to the use of cannabis seem to be a time- and cost-saving opportunity to estimate overall prevalences of cannabis-related negative consequences and to identify at-risk persons prior to using more extensive diagnostic instruments. Nevertheless, further research is needed to assess the performance of the tests in different populations and in comparison to broader criteria of cannabis-related problems other than dependence.
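The internal-consistency figures quoted above are Cronbach's α, computed from the item variances and the variance of the summed scale as α = k/(k-1) · (1 - Σ var_i / var_total). A minimal sketch (function name illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Perfectly correlated items give α = 1; values in the .72-.92 range reported above indicate moderate to high internal consistency.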
International Nuclear Information System (INIS)
Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro
2015-01-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to "address uncertainties and increase confidence in the projected, full-scale mixing performance and operations" in the Waste Treatment and Immobilization Plant (WTP).
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kuhn, William L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rector, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).
Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration
Gasda, S. E.
2009-04-23
Large-scale implementation of geological CO2 sequestration requires quantification of risk and leakage potential. One potentially important leakage pathway for the injected CO2 involves existing oil and gas wells. Wells are particularly important in North America, where more than a century of drilling has created millions of oil and gas wells. Models of CO2 injection and leakage will involve large uncertainties in parameters associated with wells, and therefore a probabilistic framework is required. These models must be able to capture both the large-scale CO2 plume associated with the injection and the small-scale leakage problem associated with localized flow along wells. Within a typical simulation domain, many hundreds of wells may exist. One effective modeling strategy combines both numerical and analytical models with a specific set of simplifying assumptions to produce an efficient numerical-analytical hybrid model. The model solves a set of governing equations derived by vertical averaging with assumptions of a macroscopic sharp interface and vertical equilibrium. These equations are solved numerically on a relatively coarse grid, with an analytical model embedded to solve for wellbore flow occurring at the sub-gridblock scale. This vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid refinement for sub-scale features. Through a series of benchmark problems, we show that VESA compares well with traditional numerical simulations and to a semi-analytical model which applies to appropriately simple systems. We believe that the VESA model provides the necessary accuracy and efficiency for applications of risk analysis in many CO2 sequestration problems. © 2009 Springer Science+Business Media B.V.
Iwata, Masaki; Otaki, Joji M
2016-02-01
Complex butterfly wing color patterns are coordinated throughout a wing by unknown mechanisms that provide undifferentiated immature scale cells with positional information for scale color. Because there is a reasonable level of correspondence between the color pattern element and scale size at least in Junonia orithya and Junonia oenone, a single morphogenic signal may contain positional information for both color and size. However, this color-size relationship has not been demonstrated in other species of the family Nymphalidae. Here, we investigated the distribution patterns of scale size in relation to color pattern elements on the hindwings of the peacock pansy butterfly Junonia almana, together with other nymphalid butterflies, Vanessa indica and Danaus chrysippus. In these species, we observed a general decrease in scale size from the basal to the distal areas, although the size gradient was small in D. chrysippus. Scales of dark color in color pattern elements, including eyespot black rings, parafocal elements, and submarginal bands, were larger than those of their surroundings. Within an eyespot, the largest scales were found at the focal white area, although there were exceptional cases. Similarly, ectopic eyespots that were induced by physical damage on the J. almana background area had larger scales than in the surrounding area. These results are consistent with the previous finding that scale color and size coordinate to form color pattern elements. We propose a ploidy hypothesis to explain the color-size relationship in which the putative morphogenic signal induces the polyploidization (genome amplification) of immature scale cells and that the degrees of ploidy (gene dosage) determine scale color and scale size simultaneously in butterfly wings. Copyright © 2015 Elsevier Ltd. All rights reserved.
A new method for estimating carbon dioxide emissions from transportation at fine spatial scales
Energy Technology Data Exchange (ETDEWEB)
Shu Yuqin [School of Geographical Science, South China Normal University, Guangzhou 510631 (China); Lam, Nina S N; Reams, Margaret, E-mail: gis_syq@126.com, E-mail: nlam@lsu.edu, E-mail: mreams@lsu.edu [Department of Environmental Sciences, Louisiana State University, Baton Rouge, 70803 (United States)
2010-10-15
Detailed estimates of carbon dioxide (CO2) emissions at fine spatial scales are useful to both modelers and decision makers who are faced with the problem of global warming and climate change. Globally, transport-related emissions of carbon dioxide are growing. This letter presents a new method based on the volume-preserving principle in the areal interpolation literature to disaggregate transportation-related CO2 emission estimates from the county-level scale to a 1 km^2 grid scale. The proposed volume-preserving interpolation (VPI) method, together with the distance-decay principle, was used to derive emission weights for each grid cell based on its proximity to highways, roads, railroads, waterways, and airports. The total CO2 emission value summed from the grid cells within a county is made equal to the original county-level estimate, thus enforcing the volume-preserving property. The method was applied to downscale the transportation-related CO2 emission values by county (i.e. parish) for the state of Louisiana into 1 km^2 grid cells. The results reveal a more realistic spatial pattern of CO2 emission from transportation, which can be used to identify the emission 'hot spots'. Of the four highest transportation-related CO2 emission hotspots in Louisiana, high-emission grid cells covered virtually the entire East Baton Rouge Parish and Orleans Parish, whereas CO2 emissions in Jefferson Parish (New Orleans suburb) and Caddo Parish (city of Shreveport) were more unevenly distributed. We argue that the new method is sound in principle, flexible in practice, and the resultant estimates are more accurate than previous gridding approaches.
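The volume-preserving idea reduces to two steps: assign each grid cell a distance-decay weight, then rescale the weights so the cells sum back to the county total. A simplified single-proxy sketch (the weight form and decay exponent are illustrative assumptions, not the paper's calibrated weights, which combine several transport features):

```python
import numpy as np

def downscale_county_emission(county_total, dist_to_road, decay=2.0):
    """Volume-preserving disaggregation of a county emission total to a grid.

    dist_to_road : 2-D array, each cell's distance (km) to the nearest
                   transport feature (a single proxy here for simplicity).
    Weights follow an inverse-power distance decay; normalizing them so
    they sum to one guarantees the grid sums back to the county total.
    """
    w = 1.0 / (1.0 + dist_to_road) ** decay  # distance-decay weight per cell
    return county_total * w / w.sum()        # normalization preserves volume
```

Cells on or near transport features receive the largest shares, while the county-level inventory value is reproduced exactly when the grid is summed.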
A Scale Development for Teacher Competencies on Cooperative Learning Method
Kocabas, Ayfer; Erbil, Deniz Gokce
2017-01-01
Cooperative learning method is a learning method studied both in Turkey and in the world for long years as an active learning method. Although cooperative learning method takes place in training programs, it cannot be implemented completely in the direction of its principles. The results of the researches point out that teachers have problems with…
REVISITING SCALING RELATIONS FOR GIANT RADIO HALOS IN GALAXY CLUSTERS
Energy Technology Data Exchange (ETDEWEB)
Cassano, R.; Brunetti, G.; Venturi, T.; Kale, R. [INAF/IRA, via Gobetti 101, I-40129 Bologna (Italy); Ettori, S. [INAF/Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Giacintucci, S. [Department of Astronomy, University of Maryland, College Park, MD 20742-2421 (United States); Pratt, G. W. [Laboratoire AIM, IRFU/Service d'Astrophysique-CEA/DSM-CNRS-Université Paris Diderot, Bât. 709, CEA-Saclay, F-91191 Gif-sur-Yvette Cedex (France); Dolag, K. [University Observatory Munich, Scheinerstr. 1, D-81679 Munich (Germany); Markevitch, M. [Astrophysics Science Division, NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States)
2013-11-10
Many galaxy clusters host megaparsec-scale radio halos, generated by ultrarelativistic electrons in the magnetized intracluster medium. Correlations between the synchrotron power of radio halos and the thermal properties of the hosting clusters were established in the last decade, including the connection between the presence of a halo and cluster mergers. The X-ray luminosity and redshift-limited Extended GMRT Radio Halo Survey provides a rich and unique dataset for statistical studies of the halos. We uniformly analyze the radio and X-ray data for the GMRT cluster sample, and use the new Planck Sunyaev-Zel'dovich (SZ) catalog to revisit the correlations between the power of radio halos and the thermal properties of galaxy clusters. We find that the radio power at 1.4 GHz scales with the cluster X-ray (0.1-2.4 keV) luminosity computed within R_500 as P_1.4 ∼ L_500^(2.1±0.2). Our bigger and more homogeneous sample confirms that the X-ray luminous (L_500 > 5 × 10^44 erg s^-1) clusters branch into two populations: radio halos lie on the correlation, while clusters without radio halos have their radio upper limits well below that correlation. This bimodality remains if we excise cool cores from the X-ray luminosities. We also find that P_1.4 scales with the cluster integrated SZ signal within R_500, measured by Planck, as P_1.4 ∼ Y_500^(2.05±0.28), in line with previous findings. However, contrary to previous studies that were limited by incompleteness and small sample size, we find that 'SZ-luminous' (Y_500 > 6 × 10^-5 Mpc^2) clusters show a bimodal behavior for the presence of radio halos, similar to that in the radio-X-ray diagram. Bimodality of both correlations can be traced to cluster dynamics, with radio halos found exclusively in merging clusters. These results confirm the key role of mergers for the origin of giant radio halos, suggesting that they trigger the
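Scalings such as P_1.4 ∼ L_500^(2.1±0.2) are typically estimated by linear regression in log-log space. Published analyses use fitting methods that also handle errors in both variables and upper limits; the sketch below shows only the basic ordinary-least-squares version (names illustrative):

```python
import numpy as np

def powerlaw_fit(x, y):
    """Least-squares fit of y = a * x**b via linear regression in log space.

    Taking logs turns the power law into log y = log a + b * log x,
    so an ordinary degree-1 polynomial fit recovers (a, b).
    """
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b
```

The slope b is the scaling exponent (2.1 for the radio-X-ray relation above) and exp(log a) the normalization.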
The Debye light scattering equation’s scaling relation reveals the purity of synthetic dendrimers
Energy Technology Data Exchange (ETDEWEB)
Tseng, Hui-Yu; Chen, Hsiao-Ping [National Chung Cheng University, Department of Chemistry and Biochemistry (China); Tang, Yi-Hsuan [Kaohsiung Medical University, Department of Medicinal and Applied Chemistry (China); Chen, Hui-Ting [Kaohsiung Medical University, Department of Fragrance and Cosmetic Science (China); Kao, Chai-Lin, E-mail: clkao@kmu.edu.tw [Kaohsiung Medical University, Department of Medicinal and Applied Chemistry (China); Wang, Shau-Chun, E-mail: chescw@ccu.edu.tw [National Chung Cheng University, Department of Chemistry and Biochemistry (China)
2016-03-15
Spherical dendrimer structures cannot be structurally modeled using conventional polymer models of random coil or rod-like configurations during the calibration of the static light scattering (LS) detectors used to determine the molecular weight (M.W.) of a dendrimer or directly assess the purity of a synthetic compound. In this paper, we used the Debye equation-based scaling relation, which predicts that the static LS intensity per unit concentration is linearly proportional to the M.W. of a synthetic dendrimer in a dilute solution, as a tool to examine the purity of high-generational compounds and to monitor the progress of dendrimer preparations. Without using expensive equipment, such as nuclear magnetic resonance or mass spectrometry, this method only required an affordable flow injection set-up with an LS detector. Solutions of the purified dendrimers, including the poly(amidoamine) (PAMAM) dendrimer and its fourth to seventh generation pyridine derivatives with size range of 5–9 nm, were used to establish the scaling relation with high linearity. The use of artificially impure mixtures of six or seven generations revealed significant deviations from linearity. The raw synthesized products of the pyridine-modified PAMAM dendrimer, which included incompletely reacted dendrimers, were also examined to gauge the reaction progress. As a reaction toward a particular generational derivative of the PAMAM dendrimers proceeded over time, deviations from the linear scaling relation decreased. The difference between the polydispersity index of the incompletely converted products and that of the pure compounds was only about 0.01. The use of the Debye equation-based scaling relation, therefore, is much more useful than the polydispersity index for monitoring conversion processes toward an indicated functionality number in a given preparation.
The Debye light scattering equation’s scaling relation reveals the purity of synthetic dendrimers
International Nuclear Information System (INIS)
Tseng, Hui-Yu; Chen, Hsiao-Ping; Tang, Yi-Hsuan; Chen, Hui-Ting; Kao, Chai-Lin; Wang, Shau-Chun
2016-01-01
Spherical dendrimer structures cannot be structurally modeled using conventional polymer models of random coil or rod-like configurations during the calibration of the static light scattering (LS) detectors used to determine the molecular weight (M.W.) of a dendrimer or directly assess the purity of a synthetic compound. In this paper, we used the Debye equation-based scaling relation, which predicts that the static LS intensity per unit concentration is linearly proportional to the M.W. of a synthetic dendrimer in a dilute solution, as a tool to examine the purity of high-generational compounds and to monitor the progress of dendrimer preparations. Without using expensive equipment, such as nuclear magnetic resonance or mass spectrometry, this method only required an affordable flow injection set-up with an LS detector. Solutions of the purified dendrimers, including the poly(amidoamine) (PAMAM) dendrimer and its fourth to seventh generation pyridine derivatives with size range of 5–9 nm, were used to establish the scaling relation with high linearity. The use of artificially impure mixtures of six or seven generations revealed significant deviations from linearity. The raw synthesized products of the pyridine-modified PAMAM dendrimer, which included incompletely reacted dendrimers, were also examined to gauge the reaction progress. As a reaction toward a particular generational derivative of the PAMAM dendrimers proceeded over time, deviations from the linear scaling relation decreased. The difference between the polydispersity index of the incompletely converted products and that of the pure compounds was only about 0.01. The use of the Debye equation-based scaling relation, therefore, is much more useful than the polydispersity index for monitoring conversion processes toward an indicated functionality number in a given preparation.
Ecosystem assessment methods for cumulative effects at the regional scale
International Nuclear Information System (INIS)
Hunsaker, C.T.
1989-01-01
Environmental issues such as nonpoint-source pollution, acid rain, reduced biodiversity, land use change, and climate change have widespread ecological impacts and require an integrated assessment approach. Since 1978, the implementing regulations for the National Environmental Policy Act (NEPA) have required assessment of potential cumulative environmental impacts. Current environmental issues have encouraged ecologists to improve their understanding of ecosystem process and function at several spatial scales. However, management activities usually occur at the local scale, and there is little consideration of the potential impacts to the environmental quality of a region. This paper proposes that regional ecological risk assessment provides a useful approach for assisting scientists in accomplishing the task of assessing cumulative impacts. Critical issues such as spatial heterogeneity, boundary definition, and data aggregation are discussed. Examples from an assessment of acidic deposition effects on fish in Adirondack lakes illustrate the importance of integrated data bases, associated modeling efforts, and boundary definition at the regional scale
International Nuclear Information System (INIS)
Lee, Sang Il
1992-02-01
A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend: a depressurization phase and a pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase the highly important phenomena influencing the critical parameters are identified, and the scaling parameters governing these phenomena are generated by the present method. To validate the models used, the Marviken CFT and the 336-rod-bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but show at least qualitative agreement with the experimental results. To examine whether the scaled-down model represents the important phenomena well, we simulate the nondimensional pressure response of a cold-leg 4-inch break transient for AP-600 and for the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR.
Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang
2017-12-01
Research in various fields generally investigates systems and involves latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured with questionnaires using an attitude-scale model yield data in the form of scores, which must be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained using the method of successive intervals (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient; the transformation method that produces scale data yielding path coefficients with smaller variances is therefore considered better. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using MSI- and SRS-transformed data are equally efficient. For simulation data with high correlation between items (0.7-0.9), on the other hand, the MSI method is about 1.3 times more efficient than the SRS method.
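For reference, one common formulation of the method of successive intervals converts ordinal response frequencies into interval-scale category values: cumulative proportions give normal-quantile thresholds, and each category's scale value is the mean of a standard normal variable truncated between its thresholds. A minimal single-item sketch using only the Python standard library (names illustrative; full MSI implementations aggregate over many items):

```python
import math
from statistics import NormalDist

def msi_scale_values(freq):
    """Method of successive intervals (MSI) for one ordinal item.

    freq : frequency of each response category (e.g. a 1..5 Likert item).
    Returns an interval-scale value per category: the mean of a standard
    normal variable truncated between the category's estimated thresholds,
    (phi(z_lo) - phi(z_hi)) / p, where phi is the standard normal pdf.
    """
    N = NormalDist()
    total = sum(freq)
    p = [f / total for f in freq]
    cum, thresholds = 0.0, []
    for pk in p[:-1]:
        cum += pk
        thresholds.append(N.inv_cdf(cum))   # category boundaries z_1..z_{k-1}
    lo = [-math.inf] + thresholds
    hi = thresholds + [math.inf]
    pdf = lambda t: 0.0 if math.isinf(t) else N.pdf(t)
    return [(pdf(a) - pdf(b)) / pk for a, b, pk in zip(lo, hi, p)]
```

For a symmetric response distribution the resulting scale values are symmetric about zero, as expected for an interval scale anchored to the standard normal.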
Scaling Methods to Measure Psychopathology in Persons with Intellectual Disabilities
Matson, Johnny L.; Belva, Brian C.; Hattier, Megan A.; Matson, Michael L.
2012-01-01
Psychopathology prior to the last four decades was generally viewed as a set of problems and disorders that did not occur in persons with intellectual disabilities (ID). That notion now seems very antiquated. In no small part, a revolutionary development of scales worldwide has occurred for the assessment of emotional problems in persons with ID.…
The Large-Scale Structure of Scientific Method
Kosso, Peter
2009-01-01
The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…
Newton Methods for Large Scale Problems in Machine Learning
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
Moortgat, J.; Amooie, M. A.; Soltanian, M. R.
2016-12-01
Problems in hydrogeology and hydrocarbon reservoirs generally involve the transport of solutes in a single solvent phase (e.g., contaminants or dissolved injection gas), or the flow of multiple phases that may or may not exchange mass (e.g., brine, NAPL, oil, gas). Often, flow is viscously and gravitationally unstable due to mobility and density contrasts within a phase or between phases. Such instabilities have been studied in detail for single-phase incompressible fluids and for two-phase immiscible flow, but to a lesser extent for multiphase multicomponent compressible flow. The latter is the subject of this presentation. Robust phase stability analyses and phase split calculations, based on equations of state, determine the mass exchange between phases and the resulting phase behavior, i.e., phase densities, viscosities, and volumes. Higher-order finite element methods and fine grids are used to capture the small-scale onset of flow instabilities. A full matrix of composition dependent coefficients is considered for each Fickian diffusive phase flux. Formation heterogeneity can have a profound impact and is represented by realistic geostatistical models. Qualitatively, fingering in multiphase compositional flow is different from single-phase problems because 1) phase mobilities depend on rock wettability through relative permeabilities, and 2) the initial density and viscosity ratios between phases may change due to species transfer. To quantify mixing rates in different flow regimes and for varying degrees of miscibility and medium heterogeneities, we define the spatial variance, scalar dissipation rate, dilution index, skewness, and kurtosis of the molar density of introduced species. Molar densities, unlike compositions, include compressibility effects. The temporal evolution of these measures shows that, while transport at the small-scale (cm) is described by the classical advection-diffusion-dispersion relations, scaling at the macro-scale (> 10 m) shows
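Several of the mixing measures named above are straightforward to compute from a discretized molar-density field; for example, the Kitanidis dilution index is the exponential of the entropy of the normalized concentration. A minimal grid-based sketch (unit cell volume by default; function and variable names are illustrative):

```python
import numpy as np

def mixing_measures(c, cell_volume=1.0):
    """Mixing diagnostics for a solute field c (array of any shape).

    Returns spatial variance, dilution index (exponential of the entropy
    of the normalized field), skewness, and excess kurtosis. A larger
    dilution index indicates better mixing; for a uniform field it equals
    the occupied volume, while a single-cell spike gives cell_volume.
    """
    c = np.asarray(c, dtype=float).ravel()
    p = c / c.sum()                     # normalized concentration
    nz = p > 0                          # 0 * log 0 is taken as 0
    dilution = np.exp(-np.sum(p[nz] * np.log(p[nz]))) * cell_volume
    m, m2 = c.mean(), np.mean((c - c.mean()) ** 2)
    skew = np.mean((c - m) ** 3) / m2 ** 1.5 if m2 > 0 else 0.0
    kurt = np.mean((c - m) ** 4) / m2 ** 2 - 3.0 if m2 > 0 else 0.0
    return m2, dilution, skew, kurt
```

Tracking these quantities in time distinguishes the flow regimes discussed above: slow entropy growth signals fingering-dominated transport, rapid growth signals efficient dilution.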
Maher, G.D.; Hulshoff, S.J.
2014-01-01
The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain
Directory of Open Access Journals (Sweden)
Džudović Radmila M.
2010-01-01
The autoprotolysis constant and relative acidity scale of water were determined by applying a coulometric-potentiometric method and a hydrogen/palladium (H2/Pd) generator anode. In the described procedure for evaluating the autoprotolysis constant, a strong base generated coulometrically at the platinum cathode in situ in the electrolytic cell, in the presence of sodium perchlorate as the supporting electrolyte, is titrated with hydrogen ions obtained by the anodic oxidation of hydrogen dissolved in the palladium electrode. The titration was carried out with a glass-SCE electrode pair at 25.0±0.1°C. The value obtained, pKw = 13.91 ± 0.06, is in agreement with literature data. The range of the acidity scale of water is determined from the difference between the half-neutralization potentials of electrogenerated perchloric acid and of sodium hydroxide in a sodium perchlorate medium. The half-neutralization potentials were measured using both a glass-SCE and a (H2/Pd)ind-SCE electrode pair. A wider range of the relative acidity scale of water was obtained with the glass-SCE electrode pair.
Özen, Hamit; Turan, Selahattin
2017-01-01
This study was designed to develop the scale of the Complex Adaptive Leadership for School Principals (CAL-SP) and examine its psychometric properties. This was an exploratory mixed method research design (ES-MMD). Both qualitative and quantitative methods were used to develop and assess psychometric properties of the questionnaire. This study…
Scale dependency of American marten (Martes americana) habitat relations [Chapter 12
Andrew J. Shirk; Tzeidle N. Wasserman; Samuel A. Cushman; Martin G. Raphael
2012-01-01
Animals select habitat resources at multiple spatial scales; therefore, explicit attention to scale-dependency when modeling habitat relations is critical to understanding how organisms select habitat in complex landscapes. Models that evaluate habitat variables calculated at a single spatial scale (e.g., patch, home range) fail to account for the effects of...
Scale Sensitivity and Question Order in the Contingent Valuation Method
Andersson, Henrik; Svensson, Mikael
2010-01-01
This study examines the effect on respondents' willingness to pay to reduce mortality risk by the order of the questions in a stated preference study. Using answers from an experiment conducted on a Swedish sample where respondents' cognitive ability was measured and where they participated in a contingent valuation survey, it was found that scale sensitivity is strongest when respondents are asked about a smaller risk reduction first ('bottom-up' approach). This contradicts some previous evi...
Managing Small-Scale Fisheries : Alternative Directions and Methods
International Development Research Centre (IDRC) Digital Library (Canada)
Managing Small-Scale Fisheries goes beyond the scope of conventional fisheries management to address other concepts, tools, methods and ... Fisheries managers in both the public and private sectors, instructors and students in fisheries management, organizations and ...
Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration
Gasda, S. E.; Nordbotten, J. M.; Celia, M. A.
2009-01-01
The vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid
Energy Technology Data Exchange (ETDEWEB)
Lenormand, R.; Thiele, M.R. [Institut Francais du Petrole, Rueil Malmaison (France)
1997-08-01
The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities
Social network extraction based on Web: 1. Related superficial methods
Khairuddin Matyuso Nasution, Mahyuddin
2018-01-01
Often the nature of a thing shapes the methods used to resolve issues related to it. The same holds for methods of extracting social networks from the Web, which involve structured data types in different ways. This paper reviews several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and the related superficial methods. We derive complexity inequalities between the methods and their computations. In this case, we find that different results from the same tools separate the more complex methods from the simpler ones: extraction of a social network based on co-occurrences is more complex than extraction based on single occurrences.
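The co-occurrence signal that distinguishes the more complex extraction methods can be sketched as a toy pipeline: count, per document (e.g. a web snippet), which actor names appear, and weight an edge by the number of documents in which both names co-occur. The actor names and documents below are invented for illustration; a real system would query a search engine rather than a list of strings.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_network(documents, actors):
    """Build a simple co-occurrence network: node weight = number of
    documents mentioning an actor; edge weight = number of documents
    mentioning both actors of a pair."""
    edges = Counter()
    occurrences = Counter()
    for doc in documents:
        present = [a for a in actors if a in doc]
        for a in present:
            occurrences[a] += 1
        for pair in combinations(sorted(present), 2):
            edges[pair] += 1
    return occurrences, edges

docs = [
    "Alice and Bob co-authored a paper",
    "Bob gave a talk",
    "Alice met Carol",
    "Alice, Bob and Carol attended the workshop",
]
occ, edges = cooccurrence_network(docs, ["Alice", "Bob", "Carol"])
```

The extra cost relative to occurrence-only extraction is visible in the nested pair loop: co-occurrence is quadratic in the number of actors present per document, which matches the complexity ordering the abstract describes.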
Mancillas, Brisa; Ávila-Reese, Vladimir; Rodríguez-Puebla, Aldo; Valls-Gabaud, David
2017-06-01
Several pieces of evidence suggest that disk formation is the generic process of assembly of galaxies, while the spheroidal component arises from the merging/interactions of disks as well as from their secular evolution. To understand galaxy formation and evolution, a cosmological framework is required. The current cosmological paradigm is summarized in the so-called Λ-cold dark matter model (ΛCDM). The statistical connection between the masses of the observed galaxies and those of the simulated CDM halos in large volumes leads us to the galaxy-halo mass relation, which summarizes the main astrophysical processes of galaxy formation and evolution (gas heating and cooling, SF, SN- and AGN-driven feedback, etc.). An important question is how this relation, constrained by semi-empirical methods (e.g., Rodriguez-Puebla et al. 2014), is "projected" into the disk galaxy scaling relations and other galaxy correlations. To explore this question, we generate a synthetic catalog of thousands of disk/halo systems by means of an extended Mo, Mao & White (1998) model, and by using as input the baryonic-to-halo mass relation, fbar(Mh), of local disk galaxies as recently constrained by Calette et al. (2015).
Self-assembling membranes and related methods thereof
Capito, Ramille M; Azevedo, Helena S; Stupp, Samuel L
2013-08-20
The present invention relates to self-assembling membranes. In particular, the present invention provides self-assembling membranes configured for securing and/or delivering bioactive agents. In some embodiments, the self-assembling membranes are used in the treatment of diseases, and related methods (e.g., diagnostic methods, research methods, drug screening).
Interior Point Methods for Large-Scale Nonlinear Programming
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2005-01-01
Roč. 20, č. 4-5 (2005), s. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords : nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005
Methods for testing of geometrical down-scaled rotor blades
DEFF Research Database (Denmark)
Branner, Kim; Berring, Peter
further developed since then. Structures in composite materials are generally difficult and time consuming to test for fatigue resistance. Therefore, several methods for testing of blades have been developed and exist today. Those methods are presented in [1]. Current experimental tests performed on full...
A novel fruit shape classification method based on multi-scale analysis
Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin
2005-11-01
Shape is one of the major concerns in automated inspection and sorting of fruits and remains a difficult problem. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and its boundary energy distribution across scales is explored for shape extraction. MSED captures not only the main energy, which represents primary shape information at the lower scales, but also the subordinate energy, which represents local shape information at the higher differential scales. It thus provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelet transform and classification by a BP neural network. In the resampling step, 256 boundary pixels are resampled from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function was defined and an effective method to select a start point through maximal expectation was given, which overcomes the inconvenience of traditional methods and provides rotation invariance. The classification rate for clearly normal citrus versus seriously abnormal fruit exceeds 91.2%, and the global correct classification rate is 89.77%; our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
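The resampling and per-scale energy steps can be sketched on a synthetic boundary signature. This is a rough illustration only: linear interpolation stands in for the paper's cubic-spline resampling, a Haar decomposition stands in for the unspecified wavelet, and the "fruits" are synthetic radius curves, not citrus images.

```python
import numpy as np

def resample_boundary(r, n=256):
    """Resample a closed-boundary radius signature to n uniform samples.
    (Linear interpolation here; the paper uses a cubic spline.)"""
    t = np.linspace(0.0, 1.0, len(r), endpoint=False)
    xp = np.concatenate([t, [1.0]])          # close the curve
    fp = np.concatenate([r, [r[0]]])
    tn = np.linspace(0.0, 1.0, n, endpoint=False)
    return np.interp(tn, xp, fp)

def haar_energy_distribution(signal, levels=4):
    """Per-scale detail energy of a Haar decomposition: coarse levels carry
    the gross shape, fine levels carry local boundary detail."""
    s = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
        detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
        energies.append(float(np.sum(detail ** 2)))
        s = approx
    return energies

theta = np.linspace(0, 2 * np.pi, 300, endpoint=False)
round_fruit = 1.0 + 0.01 * np.sin(3 * theta)   # nearly circular boundary
lumpy_fruit = 1.0 + 0.25 * np.sin(8 * theta)   # strongly deformed boundary
e_round = haar_energy_distribution(resample_boundary(round_fruit))
e_lumpy = haar_energy_distribution(resample_boundary(lumpy_fruit))
```

The energy vector (here 4 numbers per shape) is the kind of multi-scale feature that would then be fed to a classifier such as the BP neural network mentioned in the abstract.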
International Nuclear Information System (INIS)
Liu, D.
2011-01-01
Increasing environmental concerns and discharge limitations have imposed additional challenges in treating process waters. Thus, the concept of 'Green Chemistry' was proposed and green scale inhibitors became a focus of water treatment technology. Finding economical and environmentally friendly inhibitors is one of the major research focuses nowadays. In this dissertation, the inhibition performance of different phosphonates as CaCO3 scale inhibitors in simulated cooling water was evaluated. Homo-, co-, and ter-polymers were also investigated for their performance as Ca-phosphonate inhibitors. Addition of polymers as inhibitors together with phosphonates could reduce Ca-phosphonate precipitation and enhance the inhibition efficiency for CaCO3 scale. The synergistic effect of poly-aspartic acid (PASP) and poly-epoxy-succinic acid (PESA) on inhibition of scaling has been studied using both static and dynamic methods. Results showed that the anti-scaling performance of PASP combined with PESA was superior to that of PASP or PESA alone for CaCO3, CaSO4 and BaSO4 scale. The influence of dosage, temperature and Ca2+ concentration was also investigated in a simulated cooling water circuit. Moreover, SEM analysis demonstrated the modification of crystalline morphology in the presence of PASP and PESA. In this work, we also investigated the respective inhibition effectiveness of copper and zinc ions for scaling in drinking water by the method of Rapid Controlled Precipitation (RCP). The results indicated that zinc and copper ions were highly efficient inhibitors at low concentrations, and SEM and IR analysis showed that copper and zinc ions could affect calcium carbonate germination and change the crystal morphology. Moreover, the influence of temperature and dissolved CO2 on the scaling potential of a mineral water (Salvetat) in the presence of copper and zinc ions was studied by laboratory experiments. An ideal scale inhibitor should be a solid form
The MIMIC Method with Scale Purification for Detecting Differential Item Functioning
Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien
2009-01-01
This study implements a scale purification procedure onto the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…
Mathematical programming methods for large-scale topology optimization problems
DEFF Research Database (Denmark)
Rojas Labanda, Susana
for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods has been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs......, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have almost not been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming method (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...
Laboratory-scale evaluations of alternative plutonium precipitation methods
International Nuclear Information System (INIS)
Martella, L.L.; Saba, M.T.; Campbell, G.K.
1984-01-01
Plutonium(III), (IV), and (VI) carbonate; plutonium(III) fluoride; plutonium(III) and (IV) oxalate; and plutonium(IV) and (VI) hydroxide precipitation methods were evaluated for conversion of plutonium nitrate anion-exchange eluate to a solid, and compared with the current plutonium peroxide precipitation method used at Rocky Flats. Plutonium(III) and (IV) oxalate, plutonium(III) fluoride, and plutonium(IV) hydroxide precipitations were the most effective of the alternative conversion methods tested because of the larger particle-size formation, faster filtration rates, and the low plutonium loss to the filtrate. These were found to be as efficient as, and in some cases more efficient than, the peroxide method. 18 references, 14 figures, 3 tables
SCALE--A Conceptual and Transactional Method of Legal Study.
Johnson, Darrell B.
1985-01-01
Southwestern University School of Law's two-year, intensive, year-round program, the Southwestern Conceptual Approach to Legal Education, which emphasizes hypothetical problems as teaching tools rather than the case-book method, is described. (MSE)
PHIBSS: Unified Scaling Relations of Gas Depletion Time and Molecular Gas Fractions
Tacconi, L. J.; Genzel, R.; Saintonge, A.; Combes, F.; García-Burillo, S.; Neri, R.; Bolatto, A.; Contini, T.; Förster Schreiber, N. M.; Lilly, S.; Lutz, D.; Wuyts, S.; Accurso, G.; Boissier, J.; Boone, F.; Bouché, N.; Bournaud, F.; Burkert, A.; Carollo, M.; Cooper, M.; Cox, P.; Feruglio, C.; Freundlich, J.; Herrera-Camus, R.; Juneau, S.; Lippa, M.; Naab, T.; Renzini, A.; Salome, P.; Sternberg, A.; Tadaki, K.; Übler, H.; Walter, F.; Weiner, B.; Weiss, A.
2018-02-01
This paper provides an update of our previous scaling relations between galaxy-integrated molecular gas masses, stellar masses, and star formation rates (SFRs), in the framework of the star formation main sequence (MS), with the main goal of testing for possible systematic effects. For this purpose our new study combines three independent methods of determining molecular gas masses from CO line fluxes, far-infrared dust spectral energy distributions, and ∼1 mm dust photometry, in a large sample of 1444 star-forming galaxies between z = 0 and 4. The sample covers the stellar mass range log(M*/M⊙) = 9.0–11.8, and SFRs relative to that on the MS, δMS = SFR/SFR(MS), from 10^-1.3 to 10^2.2. Our most important finding is that all data sets, despite the different techniques and analysis methods used, follow the same scaling trends, once method-to-method zero-point offsets are minimized and uncertainties are properly taken into account. The molecular gas depletion time t_depl, defined as the ratio of molecular gas mass to SFR, scales as (1 + z)^-0.6 × (δMS)^-0.44 and is only weakly dependent on stellar mass. The ratio of molecular to stellar mass μ_gas depends on (1 + z)^2.5 × (δMS)^0.52 × (M*)^-0.36, which tracks the evolution of the specific SFR. The redshift dependence of μ_gas requires a curvature term, as may the mass dependences of t_depl and μ_gas. We find no or only weak correlations of t_depl and μ_gas with optical size R or surface density once one removes the above scalings, but we caution that optical sizes may not be appropriate for the high gas and dust columns at high z. Based on observations of an IRAM Legacy Program carried out with NOEMA, operated by the Institute for Radio Astronomy in the Millimetre Range (IRAM), which is funded by a partnership of INSU/CNRS (France), MPG (Germany), and IGN (Spain).
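The two power-law scalings quoted in the abstract can be evaluated directly. Since the abstract gives exponents but no zero points, the sketch below returns values relative to a fiducial z = 0 main-sequence galaxy; the fiducial stellar mass is an arbitrary assumption, not a number from the paper.

```python
def depletion_time_ratio(z, delta_ms):
    """t_depl scaling from the abstract: t_depl ∝ (1+z)^-0.6 * (δMS)^-0.44,
    expressed relative to a z = 0 main-sequence galaxy (δMS = 1)."""
    return (1.0 + z) ** -0.6 * delta_ms ** -0.44

def gas_fraction_ratio(z, delta_ms, mstar, mstar0=10 ** 10.7):
    """μ_gas scaling: μ_gas ∝ (1+z)^2.5 * (δMS)^0.52 * (M*)^-0.36,
    relative to a z = 0 MS galaxy of (assumed) fiducial mass mstar0."""
    return (1.0 + z) ** 2.5 * delta_ms ** 0.52 * (mstar / mstar0) ** -0.36

# A main-sequence galaxy at z = 2 versus its z = 0 counterpart:
t_z2 = depletion_time_ratio(z=2.0, delta_ms=1.0)
mu_z2 = gas_fraction_ratio(z=2.0, delta_ms=1.0, mstar=10 ** 10.7)
```

The qualitative content of the relations is immediate from these two numbers: gas depletes faster at high redshift (t_z2 < 1) while gas fractions are much higher (mu_z2 ≫ 1), tracking the rise of the specific SFR.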
Continuum level density of a coupled-channel system in the complex scaling method
International Nuclear Information System (INIS)
Suzuki, Ryusuke; Kato, Kiyoshi; Kruppa, Andras; Giraud, Bertrand G.
2008-01-01
We study the continuum level density (CLD) in the formalism of the complex scaling method (CSM) for coupled-channel systems. We apply the formalism to the ⁴He = [³H+p] + [³He+n] coupled-channel cluster model where there are resonances at low energy. Numerical calculations of the CLD in the CSM with a finite number of L² basis functions are consistent with the exact result calculated from the S-matrix by solving the coupled-channel equations. We also study channel densities. In this framework, the extended completeness relation (ECR) plays an important role. (author)
Jiang, Lijian; Efendiev, Yalchin; Ginting, Victor
2010-01-01
In this paper, we discuss a numerical multiscale approach for solving wave equations with heterogeneous coefficients. Our interest comes from geophysics applications and we assume that there is no scale separation with respect to spatial variables. To obtain the solution of these multiscale problems on a coarse grid, we compute global fields such that the solution smoothly depends on these fields. We present a Galerkin multiscale finite element method using the global information and provide a convergence analysis when applied to solve the wave equations. We investigate the relation between the smoothness of the global fields and convergence rates of the global Galerkin multiscale finite element method for the wave equations. Numerical examples demonstrate that the use of global information renders better accuracy for wave equations with heterogeneous coefficients than the local multiscale finite element method. © 2010 IMACS.
Themeßl, N.; Hekker, S.; Southworth, J.; Beck, P. G.; Pavlovski, K.; Tkachenko, A.; Angelou, G. C.; Ball, W. H.; Barban, C.; Corsaro, E.; Elsworth, Y.; Handberg, R.; Kallinger, T.
2018-05-01
The internal structures and properties of oscillating red-giant stars can be accurately inferred through their global oscillation modes (asteroseismology). Based on 1460 days of Kepler observations we perform a thorough asteroseismic study to probe the stellar parameters and evolutionary stages of three red giants in eclipsing binary systems. We present the first detailed analysis of individual oscillation modes of the red-giant components of KIC 8410637, KIC 5640750 and KIC 9540226. We obtain estimates of their asteroseismic masses, radii, mean densities and logarithmic surface gravities by using the asteroseismic scaling relations as well as grid-based modelling. As these red giants are in double-lined eclipsing binaries, it is possible to derive their independent dynamical masses and radii from the orbital solution and compare them with the seismically inferred values. For KIC 5640750 we compute the first spectroscopic orbit based on both components of this system. We use high-resolution spectroscopic data and light curves of the three systems to determine up-to-date values of the dynamical stellar parameters. With our comprehensive set of stellar parameters we explore consistencies between binary analysis and asteroseismic methods, and test the reliability of the well-known scaling relations. For the three red giants under study, we find agreement between dynamical and asteroseismic stellar parameters in cases where the asteroseismic methods account for metallicity, temperature and mass dependence as well as surface effects. We are able to attain agreement from the scaling laws in all three systems if we use Δν_ref,emp = 130.8 ± 0.9 μHz instead of the usual solar reference value.
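The asteroseismic scaling relations referred to above are commonly written as M/M⊙ = (νmax/νmax,⊙)³ (Δν/Δν⊙)⁻⁴ (Teff/Teff,⊙)^1.5 and R/R⊙ = (νmax/νmax,⊙) (Δν/Δν⊙)⁻² (Teff/Teff,⊙)^0.5. A sketch using commonly adopted solar reference values follows; note the paper's point is precisely that an empirically calibrated Δν_ref can replace the solar reference, so these constants are illustrative choices, not the paper's calibration.

```python
# Commonly used solar reference values (exact choices vary between studies).
NU_MAX_SUN = 3090.0   # μHz
DELTA_NU_SUN = 135.1  # μHz
TEFF_SUN = 5777.0     # K

def scaling_mass(nu_max, delta_nu, teff):
    """Asteroseismic scaling-relation mass in solar units."""
    return (nu_max / NU_MAX_SUN) ** 3 * (delta_nu / DELTA_NU_SUN) ** -4 \
        * (teff / TEFF_SUN) ** 1.5

def scaling_radius(nu_max, delta_nu, teff):
    """Asteroseismic scaling-relation radius in solar units."""
    return (nu_max / NU_MAX_SUN) * (delta_nu / DELTA_NU_SUN) ** -2 \
        * (teff / TEFF_SUN) ** 0.5

# Illustrative red-giant-like inputs (not measurements of the KIC targets):
m = scaling_mass(30.0, 4.0, 4800.0)
r = scaling_radius(30.0, 4.0, 4800.0)
```

With νmax ≈ 30 μHz and Δν ≈ 4 μHz the relations return roughly a solar-mass star of ~10 solar radii, i.e. a red giant, which is the regime probed in the abstract.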
[Construction of a physiological aging scale for healthy people based on a modified Delphi method].
Long, Yao; Zhou, Xuan; Deng, Pengfei; Liao, Xiong; Wu, Lei; Zhou, Jianming; Huang, Helang
2016-04-01
To build a physiological aging scale for healthy people, we collected age-related physiological items through literature screening and expert interviews. Two rounds of Delphi consultation were implemented. The importance, feasibility and degree of authority of the physiological index system were graded. Using the analytic hierarchy process, we determined the weights of dimensions and items. With the Delphi method, 17 physiological and other professional experts offered the following results: the coefficient of expert authority Cr was 0.86±0.03, and the coordination coefficients for the first and second rounds were 0.264 (χ2 = 229.691, P…). The aging scale for healthy people included 3 dimensions, namely physical form, sensation and movement, and functional status. Each dimension had 8 items. The weight coefficients for the 3 dimensions were 0.54, 0.16, and 0.30, respectively. The Cronbach's α coefficient of the scale was 0.893, the reliability was 0.796, and the variance of the common factor was 58.17%. The improved Delphi method yielded a satisfactory physiological aging scale, which can provide a reference for the evaluation of aging.
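The internal-consistency statistic reported above (Cronbach's α) has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of total score). A generic sketch on synthetic respondent data follows; the data are invented and have no connection to the study's survey.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)          # per-item variances
    total_var = x.sum(axis=1).var(ddof=1)      # variance of summed score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=200)
# Items that all track one latent trait -> high internal consistency.
consistent = np.stack(
    [trait + 0.3 * rng.normal(size=200) for _ in range(8)], axis=1)
# Mutually unrelated items -> alpha near zero.
noise = rng.normal(size=(200, 8))
a_hi = cronbach_alpha(consistent)
a_lo = cronbach_alpha(noise)
```

An α of 0.893, as reported for the 24-item scale, would fall in the range conventionally read as good internal consistency.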
Kernel methods for large-scale genomic data analysis
Xing, Eric P.; Schaid, Daniel J.
2015-01-01
Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today’s explosive data growth in genomics. They provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, to help reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role it will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritizing, prediction and data fusion. PMID:25053743
A multi-scale network method for two-phase flow in porous media
Energy Technology Data Exchange (ETDEWEB)
Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick
2017-08-01
Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore-networks, it is crucial that the networks are large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each subnetwork consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the subnetworks are computed. Lastly, using fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.
Investigating salt frost scaling by using statistical methods
DEFF Research Database (Denmark)
Hasholt, Marianne Tange; Clemmensen, Line Katrine Harder
2010-01-01
A large data set comprising data for 118 concrete mixes on mix design, air void structure, and the outcome of freeze/thaw testing according to SS 13 72 44 has been analysed by use of statistical methods. The results show that with regard to mix composition, the most important parameter...
Identifying food-related life style segments by a cross-culturally valid scaling device
DEFF Research Database (Denmark)
Brunsø, Karen; Grunert, Klaus G.
1994-01-01
food-related life style in a cross-culturally valid way. To this end, we have collected a pool of 202 items, collected data in three countries, and constructed scales based on cross-culturally stable patterns. These scales have then been subjected to a number of tests of reliability and validity. We have then applied the set of scales to a fourth country, Germany, based on a representative sample of 1000 respondents. The scales had, with a few exceptions, moderately good reliabilities. A cluster analysis led to the identification of 5 segments, which differed on all 23 scales.
LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM
Higgins, G.H.; Crane, W.W.T.
1959-05-19
A large-scale process for the production and purification of Cm-242 is described. Aluminum slugs containing Am are irradiated and declad in a NaOH–NaNO3 solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH4OH, and H2O. Recovery of Cm from the filtrate and washings is effected by an Fe(OH)3 precipitation. The precipitates are then combined and dissolved in HCl, and refractory oxides are centrifuged out. These oxides are then fused with Na2CO3 and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl–HCl solution; rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as fluoride and used in this form or further purified and processed. (T.R.H.)
International Nuclear Information System (INIS)
Brodsky, Stanley J.
1998-01-01
Commensurate scale relations are perturbative QCD predictions which relate observable to observable at fixed relative scale, such as the ''generalized Crewther relation'', which connects the Bjorken and Gross-Llewellyn Smith deep inelastic scattering sum rules to measurements of the e+e− annihilation cross section. All non-conformal effects are absorbed by fixing the ratio of the respective momentum transfer and energy scales. In the case of fixed-point theories, commensurate scale relations relate both the ratio of couplings and the ratio of scales as the fixed point is approached. The relations between the observables are independent of the choice of intermediate renormalization scheme or other theoretical conventions. Commensurate scale relations also provide an extension of the standard minimal subtraction scheme, which is analytic in the quark masses, has non-ambiguous scale-setting properties, and inherits the physical properties of the effective charge α_V(Q²) defined from the heavy quark potential. The application of the analytic scheme to the calculation of quark-mass-dependent QCD corrections to the Z width is also reviewed.
From fuel cells to batteries: Synergies, scales and simulation methods
Bessler, Wolfgang G.
2011-01-01
The recent years have shown a dynamic growth of battery research and development activities both in academia and industry, supported by large governmental funding initiatives throughout the world. A particular focus is being put on lithium-based battery technologies. This situation provides a stimulating environment for the fuel cell modeling community, as there are considerable synergies in the modeling and simulation methods for fuel cells and batteries. At the same time, batter...
Directory of Open Access Journals (Sweden)
Gilson Morales
2010-12-01
Full Text Available This study aimed to create the EI Scale, an environmental impact assessment scale related to the construction materials used in reinforced concrete structure production. The main motivation was the need to classify environmental impact levels through indicators that assess the degree of damage of a process. The scale allows information to be converted into an estimate of the environmental impact caused. Indicators were defined through the requirements and classification criteria of impact aspects, considering eco-design theory. Moreover, the scale allows the environmental impact of materials and processes to be classified through four score categories, which are combined into a single final impact score. It was concluded that the EI Scale can be a cheap, accessible, and relevant tool for controlling and reducing environmental impact, supporting planning and material specification to minimize the negative effects of construction on the environment.
Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu
2017-06-01
Scale problems are a major source of concern in the field of remote sensing. Since remote sensing is a complex technological system, the connotations of scale and of the scale effect in remote sensing are not yet fully understood. This paper therefore first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measurement for analyzing pixel-based scale. However, traditional fractal dimension calculation does not consider the impact of spatial resolution, so the change of the scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes using spatial resolution as the modified scale parameter of two fractal methods to further analyze the pixel-based scale effect. To verify the results of the two modified methods (MFBM, the Modified Windowed Fractal Brownian Motion method based on the surface area, and MDBM, the Modified Windowed Double Blanket Method), the existing scale effect analysis method (the information entropy method) is used for evaluation. Six sub-regions of building areas and farmland areas were cut out from QuickBird images as experimental data. The results of the experiment show that both the fractal dimension and the information entropy present the same trend with decreasing spatial resolution, and some inflection points appear at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. Therefore, the experimental results indicate that the modified fractal methods effectively reflect the pixel-based scale effect existing in remote sensing
Scaling relations in elastic scattering cross sections between multiply charged ions and hydrogen
International Nuclear Information System (INIS)
Rodriguez, V.D.
1991-01-01
Differential elastic scattering cross sections of bare ions from hydrogen are calculated using the eikonal approximation. The results satisfy a scaling relation involving the scattering angle, the ion charge and a factor related to the ion mass. A semiclassical explanation in terms of a distant collision hypothesis for small scattering angle is proposed. A unified picture of related scaling rules found in direct processes is discussed. (author)
Vnukov, A. A.; Shershnev, M. B.
2018-01-01
The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of the algorithms and to study the relationship between system performance, algorithm execution time, and the degree of parallelization of the computations. Three interpolation methods were studied, formalized, and adapted for image scaling. The result of the work is a program that scales images by the different methods, together with a comparison of the scaling quality they achieve.
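The abstract does not name the three interpolation methods; nearest-neighbour and bilinear interpolation are the most common choices for image scaling, and a minimal serial sketch of each (on plain 2D lists of gray values, no parallelization) looks like this:

```python
def scale_nearest(img, sx, sy):
    """Nearest-neighbour scaling of a 2D list by integer factors sx, sy."""
    return [[img[i // sy][j // sx]
             for j in range(len(img[0]) * sx)]
            for i in range(len(img) * sy)]

def scale_bilinear(img, new_w, new_h):
    """Bilinear scaling of a 2D list of gray values to (new_w, new_h)."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(new_h):
        # Map each output coordinate back into the source image.
        y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0
        y0 = int(y); y1 = min(y0 + 1, h - 1); fy = y - y0
        row = []
        for j in range(new_w):
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0
            x0 = int(x); x1 = min(x0 + 1, w - 1); fx = x - x0
            # Interpolate horizontally on both rows, then vertically.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

img = [[0, 10], [20, 30]]
print(scale_nearest(img, 2, 2))
print(scale_bilinear(img, 3, 3))
```

The row loops are independent, which is what makes such algorithms natural candidates for the parallelization studied in the paper.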
Veltchev, Todor; Donkov, Sava; Stanchev, Orlin
2017-07-01
We present a method to derive the density scaling relation ρ ∝ L^{-α} in regions of star formation or in their turbulent vicinities from straightforward binning of the column-density distribution (N-pdf). The outcome of the method is studied for three types of N-pdf: power law (7/5 ≤ α ≤ 5/3), lognormal (0.7 ≲ α ≲ 1.4) and a combination of lognormals. In the last case, the method of Stanchev et al. (2015) was also applied for comparison and a very weak (or close to zero) correlation was found. We conclude that the considered 'binning approach' reflects rather the local morphology of the N-pdf, with no reference to the physical conditions in the considered region. The rough consistency of the derived slopes with the widely adopted Larson's (1981) value α ≈ 1.1 is suggested to support claims that the density-size relation in molecular clouds is indeed an artifact of the observed N-pdf.
Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.
Echinaka, Yuki; Ozeki, Yukiyasu
2016-10-01
The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. In applying this method, the bootstrap method is introduced, and a numerical criterion for discriminating the transition type is proposed.
Biosensors in the small scale: methods and technology trends.
Senveli, Sukru U; Tigli, Onur
2013-03-01
This study presents a review on biosensors with an emphasis on recent developments in the field. A brief history accompanied by a detailed description of the biosensor concepts is followed by rising trends observed in contemporary micro- and nanoscale biosensors. Performance metrics to quantify and compare different detection mechanisms are presented. A comprehensive analysis on various types and subtypes of biosensors are given. The fields of interest within the scope of this review are label-free electrical, mechanical and optical biosensors as well as other emerging and popular technologies. Especially, the latter half of the last decade is reviewed for the types, methods and results of the most prominently researched detection mechanisms. Tables are provided for comparison of various competing technologies in the literature. The conclusion part summarises the noteworthy advantages and disadvantages of all biosensors reviewed in this study. Furthermore, future directions that the micro- and nanoscale biosensing technologies are expected to take are provided along with the immediate outlook.
Large-scale atomic calculations using variational methods
Energy Technology Data Exchange (ETDEWEB)
Joensson, Per
1995-01-01
Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p ²P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest 2P states in sodium and silver. 77 refs, 2 figs, 14 tabs.
Gilson Morales; Antonio Edésio Jungles; Sheila Elisa Scheidemantel Klein; Juliana Guarda
2010-01-01
This study aimed to create EI Scal, an environmental impact assessment scale related to the construction materials used in reinforced concrete structure production. The main motivation was the need to classify environmental impact levels through indicators that assess the damage level of the process. The scale allowed converting information into an estimate of the environmental impact caused. Indicators were defined through the requirements and classification criteria of the impact aspects consid...
Counting hard-to-count populations: the network scale-up method for public health
Bernard, H Russell; Hallett, Tim; Iovita, Alexandrina; Johnsen, Eugene C; Lyerla, Rob; McCarty, Christopher; Mahy, Mary; Salganik, Matthew J; Saliuk, Tetiana; Scutelniciuc, Otilia; Shelley, Gene A; Sirinirund, Petchsri; Weir, Sharon
2010-01-01
Estimating sizes of hidden or hard-to-reach populations is an important problem in public health. For example, estimates of the sizes of populations at highest risk for HIV and AIDS are needed for designing, evaluating and allocating funding for treatment and prevention programmes. A promising approach to size estimation, relatively new to public health, is the network scale-up method (NSUM), involving two steps: estimating the personal network size of the members of a random sample of a total population and, with this information, estimating the number of members of a hidden subpopulation of the total population. We describe the method, including two approaches to estimating personal network sizes (summation and known population). We discuss the strengths and weaknesses of each approach and provide examples of international applications of the NSUM in public health. We conclude with recommendations for future research and evaluation. PMID:21106509
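The two-step estimator described above can be sketched directly. The known-population approach estimates each respondent's personal network size from how many people they know in subpopulations of known size, and the basic scale-up estimator then scales the reported hidden-population contacts by the total network size. All survey numbers below are hypothetical.

```python
def personal_network_size(known_counts, known_sizes, total_population):
    """Known-population estimate of one respondent's network size:
    d_i = N * (people known in known subpopulations) / (sizes of those
    subpopulations)."""
    return total_population * sum(known_counts) / sum(known_sizes)

def scale_up_estimate(hidden_counts, network_sizes, total_population):
    """Basic NSUM estimator of the hidden-population size:
    s = N * (sum of reported hidden contacts) / (sum of network sizes)."""
    return total_population * sum(hidden_counts) / sum(network_sizes)

# Hypothetical survey: 3 respondents in a population of 1,000,000.
N = 1_000_000
known_sizes = [5_000, 20_000]            # two subpopulations of known size
d = [personal_network_size(c, known_sizes, N)
     for c in [[1, 4], [0, 2], [2, 6]]]  # contacts known per subpopulation
hidden = [2, 0, 1]                       # contacts in the hidden population
print(round(scale_up_estimate(hidden, d, N)))  # → 5000
```

A real application would of course use a proper random sample and adjust for the transmission and barrier effects discussed in the NSUM literature.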
Conformal Symmetry as a Template:Commensurate Scale Relations and Physical Renormalization Schemes
International Nuclear Information System (INIS)
Brodsky, Stanley J.
1999-01-01
Commensurate scale relations are perturbative QCD predictions which relate observable to observable at fixed relative scale, such as the ''generalized Crewther relation'', which connects the Bjorken and Gross-Llewellyn Smith deep inelastic scattering sum rules to measurements of the e⁺e⁻ annihilation cross section. We show how conformal symmetry provides a template for such QCD predictions, providing relations between observables which are present even in theories which are not scale invariant. All non-conformal effects are absorbed by fixing the ratio of the respective momentum transfer and energy scales. In the case of fixed-point theories, commensurate scale relations relate both the ratio of couplings and the ratio of scales as the fixed point is approached. In the case of the α_V scheme defined from heavy quark interactions, virtual corrections due to fermion pairs are analytically incorporated into the Gell-Mann-Low function, thus avoiding the problem of explicitly computing and resumming quark mass corrections related to the running of the coupling. Applications to the decay width of the Z boson, the BFKL pomeron, and virtual photon scattering are discussed.
A large-scale benchmark of gene prioritization methods.
Guala, Dimitri; Sonnhammer, Erik L L
2017-04-21
In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
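Random Walk with Restart, one of the diffusion methods benchmarked above, can be sketched in a few lines: a walker starts at the seed (disease) genes, at each step either follows a random edge or restarts at a seed, and the stationary visiting probabilities rank candidate genes by network proximity to the seeds. The tiny graph and restart value below are illustrative, not from the paper.

```python
def random_walk_with_restart(adj, seeds, restart=0.3, tol=1e-12):
    """RWR on an undirected graph given as {node: [neighbours]}.
    Returns the stationary visiting probabilities; non-seed nodes
    with high probability are the prioritized candidates."""
    nodes = list(adj)
    p0 = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in nodes}
    p = dict(p0)
    while True:
        # Each neighbour u passes on p[u]/deg(u); restart pulls mass
        # back to the seed distribution p0.
        nxt = {v: (1 - restart) * sum(p[u] / len(adj[u]) for u in adj[v])
                  + restart * p0[v]
               for v in nodes}
        if max(abs(nxt[v] - p[v]) for v in nodes) < tol:
            return nxt
        p = nxt

# Toy chain a-b-c-d with seed gene 'a'.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
p = random_walk_with_restart(adj, seeds={'a'})
ranked = sorted((v for v in adj if v != 'a'), key=p.get, reverse=True)
print(ranked)  # → ['b', 'c', 'd']: closest to the seed ranks highest
```

The iteration converges because the walk matrix is damped by the restart probability, so the update is a contraction.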
International Nuclear Information System (INIS)
Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; Nørskov, Jens K.
2017-01-01
Surface reaction networks involving hydrocarbons exhibit enormous complexity, with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by training a Gaussian process on adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete, given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
Weak Lensing Calibrated M-T Scaling Relation of Galaxy Groups in the COSMOS Field
Kettula, K.; Finoguenov, A.; Massey, R.; Rhodes, J.; Hoekstra, H.; Taylor, J.; Spinelli, P.; Tanaka, M.; Ilbert, O.; Capak, P.; McCracken, H.; Koekemoer, A.
2013-01-01
The scaling between X-ray observables and mass for galaxy clusters and groups is instrumental for cluster-based cosmology and an important probe for the thermodynamics of the intracluster gas. We calibrate a scaling relation between the weak lensing mass and X-ray spectroscopic temperature for 10
Suicide-Related Experiences Among Blacks: An Empirical Test of a Suicide Potential Scale
Wenz, Friedrich V.
1978-01-01
Developing a Suicide Potential Scale for a number of socially differentiated, stratified census tract populations in a northern city, this paper argues that scores on this scale are related to actual suicidal behavior. These data support the position that variation in suicide among blacks is mainly determined by economic status. (Author)
Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...
National Research Council Canada - National Science Library
Ormerod, Alayne
2003-01-01
...: Workplace and Gender Relations Survey (2002 WGR). This report describes advances from previous surveys and presents results on scale development as obtained from 19,960 respondents to this survey...
Who is Distressed? Applying the Diabetes-Related Distress Scale in a Diabetes Clinic
2017-06-09
59 MDW/SGVU, Professional Presentation Approval, 7 Apr 2017: the paper "Who is Distressed? Applying the Diabetes-Related Distress Scale in a Diabetes Clinic" was approved for presentation at the American Diabetes Association 2017 Meeting, San Francisco, CA (national conference), 9-16 June 2017.
Mitigation of Power Frequency Magnetic Fields Using Scale Invariant and Shape Optimization Methods
Energy Technology Data Exchange (ETDEWEB)
Salinas, Ener; Yueqiang Liu; Daalder, Jaap; Cruz, Pedro; Antunez de Souza, Paulo Roberto Jr; Atalaya, Juan Carlos; Paula Marciano, Fabianna de; Eskinasy, Alexandre
2006-10-15
The present report describes the development and application of two novel methods for implementing mitigation techniques for magnetic fields at power frequencies. The first method makes use of scaling rules for electromagnetic quantities, while the second applies a 2D shape optimization algorithm based on gradient methods. Before this project, the first method had already been successfully applied (by some of the authors of this report) to electromagnetic designs involving purely conductive material (e.g. copper, aluminium), which implied a linear formulation. Here we went beyond this approach and developed a formulation involving ferromagnetic (i.e. non-linear) materials. Surprisingly, we obtained good equivalent replacements for test transformers by varying the input current. Although this equivalence is only valid in regions not too close to the source, the results can still be considered useful, as most field mitigation techniques are precisely developed for reducing the magnetic field in regions relatively far from the sources. The shape optimization method was applied in this project to calculate the optimal geometry of a purely conductive plate to mitigate the magnetic field originating from underground cables. The objective function was a weighted combination of the magnetic energy in the region of interest and the dissipated heat in the shielding material. To our surprise, the process yielded shapes of complex structure that were difficult to interpret (and probably even harder to anticipate). However, the practical implementation (using approximations of these shapes) gave excellent experimental mitigation factors.
Length scales in glass-forming liquids and related systems: a review
International Nuclear Information System (INIS)
Karmakar, Smarajit; Dasgupta, Chandan; Sastry, Srikanth
2016-01-01
The central problem in the study of glass-forming liquids and other glassy systems is the understanding of the complex structural relaxation and rapid growth of relaxation times seen on approaching the glass transition. A central conceptual question is whether one can identify one or more growing length scale(s) associated with this behavior. Given the diversity of molecular glass-formers and a vast body of experimental, computational and theoretical work addressing glassy behavior, a number of ideas and observations pertaining to growing length scales have been presented over the past few decades, but there is as yet no consensus view on this question. In this review, we will summarize the salient results and the state of our understanding of length scales associated with dynamical slowdown. After a review of slow dynamics and the glass transition, pertinent theories of the glass transition will be summarized and a survey of ideas relating to length scales in glassy systems will be presented. A number of studies have focused on the emergence of preferred packing arrangements and discussed their role in glassy dynamics. More recently, a central object of attention has been the study of spatially correlated, heterogeneous dynamics and the associated length scale, studied in computer simulations and theoretical analysis such as inhomogeneous mode coupling theory. A number of static length scales have been proposed and studied recently, such as the mosaic length scale discussed in the random first-order transition theory and the related point-to-set correlation length. We will discuss these, elaborating on key results, along with a critical appraisal of the state of the art. Finally we will discuss length scales in driven soft matter, granular fluids and amorphous solids, and give a brief description of length scales in aging systems. Possible relations of these length scales with those in glass-forming liquids will be discussed. (review article)
Universal scaling relations for the energies of many-electron Hooke atoms
Odriazola, A.; Solanpää, J.; Kylänpää, I.; González, A.; Räsänen, E.
2017-04-01
A three-dimensional harmonic oscillator consisting of N ≥ 2 Coulomb-interacting charged particles, often called a (many-electron) Hooke atom, is a popular model in computational physics for, e.g., semiconductor quantum dots and ultracold ions. Starting from Thomas-Fermi theory, we show that the ground-state energy of such a system satisfies a nontrivial relation: E_gs = ω N^{4/3} f_gs(β N^{1/2}), where ω is the oscillator strength, β is the ratio between the Coulomb and oscillator characteristic energies, and f_gs is a universal function. We perform extensive numerical calculations to verify the applicability of the relation. In addition, we show that the chemical potentials and addition energies also satisfy approximate scaling relations. In all cases, analytic expressions for the universal functions are provided. The results have predictive power in estimating the key ground-state properties of the system in the large-N limit, and can be used in the development of approximative methods in electronic structure theory.
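The practical content of the relation E_gs = ω N^{4/3} f_gs(β N^{1/2}) is a data collapse: plotting E_gs/(ω N^{4/3}) against β N^{1/2} should place all parameter combinations on one curve. The sketch below illustrates the collapse test with a made-up universal function (the true f_gs must come from the calculations in the paper):

```python
import math

def collapse_coords(E, omega, N, beta):
    """Rescale one energy data point into collapse coordinates:
    x = beta * N**(1/2), y = E / (omega * N**(4/3))."""
    return beta * math.sqrt(N), E / (omega * N ** (4 / 3))

# Made-up universal function, for illustration only.
def f(x):
    return 1 + 0.5 * x

# Generate synthetic energies obeying the scaling relation for several
# oscillator strengths and particle numbers, then check the collapse.
points = []
for omega in (0.5, 1.0, 2.0):
    for N in (2, 8, 32):
        beta = 1.0 / math.sqrt(omega)   # hypothetical parameter choice
        E = omega * N ** (4 / 3) * f(beta * math.sqrt(N))
        points.append(collapse_coords(E, omega, N, beta))

# Every rescaled point lies on the single curve y = f(x).
print(all(abs(y - f(x)) < 1e-9 for x, y in points))  # → True
```

With real computed energies, deviations of the rescaled points from a single curve quantify how well the scaling relation holds.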
The development and psychometric analysis of the Chinese HIV-Related Fatigue Scale.
Li, Su-Yin; Wu, Hua-Shan; Barroso, Julie
2016-04-01
To develop a Chinese version of the human immunodeficiency virus-related Fatigue Scale and examine its reliability and validity. Fatigue is found in more than 70% of people infected with human immunodeficiency virus. However, a scale to assess fatigue in human immunodeficiency virus-positive people has not yet been developed for use in Chinese-speaking countries. A methodologic study involving instrument development and psychometric evaluation was used. The human immunodeficiency virus-related Fatigue Scale was examined through a two-step procedure: (1) translation and back translation and (2) psychometric analysis. A sample of 142 human immunodeficiency virus-positive patients was recruited from the Infectious Disease Outpatient Clinic in central Taiwan. Their fatigue data were analysed with Cronbach's α for internal consistency. Two weeks later, the data of a random sample of 28 patients from the original 142 were analysed for test-retest reliability. The correlation between the World Health Organization Quality of Life Assessment-Human Immunodeficiency Virus and the Chinese version of the human immunodeficiency virus-related Fatigue Scale was analysed for concurrent validity. The Chinese version of the human immunodeficiency virus-related Fatigue Scale scores of human immunodeficiency virus-positive patients with highly active antiretroviral therapy and those without were compared to demonstrate construct validity. The internal consistency and test-retest reliability of the Chinese version of the human immunodeficiency virus-related Fatigue Scale were 0·97 and 0·686, respectively. In regard to concurrent validity, a negative correlation was found between the scores of the Chinese version of the human immunodeficiency virus-related Fatigue Scale and the World Health Organization Quality of Life Assessment-Human Immunodeficiency Virus. Additionally, the Chinese version of the human immunodeficiency virus-related Fatigue Scale could be used to effectively
Wang, Xianbin; Chen, Wei; Wang, Zhihong; Zhang, Xixiang; Yue, Weisheng; Lai, Zhiping
2015-01-01
Embodiments of the present disclosure provide for materials that include a pre-designed patterned, porous membrane (e.g., micro- and/or nano-scale patterned), structures or devices that include a pre-designed patterned, porous membrane, methods of making pre-designed patterned, porous membranes, methods of separation, and the like.
Directory of Open Access Journals (Sweden)
Lei Ma
2016-09-01
Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature spaces, and change detection methods have rarely been assessed. In this study, we tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that the change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods that can take advantage of additional textural or other parameters need to be explored.
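The threshold sensitivity noted above is easiest to see in the simplest unsupervised baseline: threshold the absolute difference between the two dates at mean + k standard deviations. This pixel-based sketch is far simpler than the MAD transform used in the study (MAD is a multivariate, canonical-correlation-based generalization of differencing), but the thresholding step is the same in spirit.

```python
import statistics

def change_map(img1, img2, k=1.5):
    """Unsupervised change detection baseline: mark a pixel as changed
    when the absolute difference exceeds mean + k * std of all
    differences. The choice of k is the critical threshold parameter."""
    diffs = [abs(a - b) for r1, r2 in zip(img1, img2) for a, b in zip(r1, r2)]
    t = statistics.mean(diffs) + k * statistics.pstdev(diffs)
    return [[1 if abs(a - b) > t else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(img1, img2)]

before = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
after  = [[10, 10, 10], [10, 90, 10], [10, 10, 10]]
print(change_map(before, after))  # → [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

An object-based variant would apply the same logic to per-segment mean values rather than individual pixels.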
Michaelides, Michalis P; Koutsogiorgi, Chrystalla; Panayiotou, Georgia
2016-01-01
Rosenberg's Self-Esteem Scale is a balanced, 10-item scale designed to be unidimensional; however, research has repeatedly shown that its factorial structure is contaminated by method effects due to item wording. Beyond the substantive self-esteem factor, 2 additional factors linked to the positive and negative wording of items have been theoretically specified and empirically supported. Initial evidence has revealed systematic relations of the 2 method factors with variables expressing approach and avoidance motivation. This study assessed the fit of competing confirmatory factor analytic models for the Rosenberg Self-Esteem Scale using data from 2 samples of adult participants in Cyprus. Models that accounted for both positive and negative wording effects via 2 latent method factors had better fit compared to alternative models. Measures of experiential avoidance, social anxiety, and private self-consciousness were associated with the method factors in structural equation models. The findings highlight the need to specify models with wording effects for a more accurate representation of the scale's structure and support the hypothesis of method factors as response styles, which are associated with individual characteristics related to avoidance motivation, behavioral inhibition, and anxiety.
An eigenfunction method for reconstruction of large-scale and high-contrast objects.
Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P
2007-07-01
A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.
Modeling aboveground tree woody biomass using national-scale allometric methods and airborne lidar
Chen, Qi
2015-08-01
Estimating tree aboveground biomass (AGB) and carbon (C) stocks using remote sensing is a critical component for understanding the global C cycle and mitigating climate change. However, the importance of allometry for remote sensing of AGB has not been recognized until recently. The overarching goals of this study are to understand the differences and relationships among three national-scale allometric methods (CRM, Jenkins, and the regional models) of the Forest Inventory and Analysis (FIA) program in the U.S. and to examine the impacts of using alternative allometry on the fitting statistics of remote sensing-based woody AGB models. Airborne lidar data from three study sites in the Pacific Northwest, USA were used to predict woody AGB estimated from the different allometric methods. It was found that the CRM and Jenkins estimates of woody AGB are related via the CRM adjustment factor. In terms of lidar-biomass modeling, CRM had the smallest model errors, while the Jenkins method had the largest ones and the regional method was between. The best model fitting from CRM is attributed to its inclusion of tree height in calculating merchantable stem volume and the strong dependence of non-merchantable stem biomass on merchantable stem biomass. This study also argues that it is important to characterize the allometric model errors for gaining a complete understanding of the remotely-sensed AGB prediction errors.
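The Jenkins-style national-scale allometry referred to above predicts whole-tree biomass from diameter alone via a log-linear form, AGB = exp(b0 + b1 ln(dbh)). The sketch below uses hypothetical placeholder coefficients; the actual FIA methods use published, species-group-specific values (and CRM additionally brings in tree height through merchantable stem volume).

```python
import math

def jenkins_type_agb(dbh_cm, b0, b1):
    """Jenkins-type allometric form: AGB (kg) = exp(b0 + b1 * ln(dbh)).
    b0 and b1 here are hypothetical placeholders, not published values."""
    return math.exp(b0 + b1 * math.log(dbh_cm))

# Plot-level AGB as the sum of per-tree predictions, which is the
# quantity a lidar-based model would then be trained to predict.
b0, b1 = -2.5, 2.4
dbhs = [12.0, 30.5, 45.2]   # hypothetical tree diameters, cm
plot_agb = sum(jenkins_type_agb(d, b0, b1) for d in dbhs)
print(round(plot_agb, 1))
```

Because the form is log-linear, the choice of allometry rescales the response variable systematically, which is why the study finds the fitting statistics of the lidar-biomass models depend on the allometric method used.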
On one method of realization of commutation relation algebra
International Nuclear Information System (INIS)
Sveshnikov, K.A.
1983-01-01
A method for constructing representations of the commutation relation algebra is suggested, based on a purely algebraic construction of an adjoined algebraic representation with a specially selected composition law. A purely combinatorial construction realizing the commutation relation representation is obtained, proceeding from the formal equivalence between an operator acting on a vector and appending a symbol to a sequence of symbols. The method has essentially the structure of a computational algorithm that assigns a rule for forming ''words'' from an initial set of ''letters''; in other words, a computer language with definite relations between words (an analogy between quantum mechanics and computer linguistics is applied).
Method of producing carbon coated nano- and micron-scale particles
Perry, W. Lee; Weigle, John C; Phillips, Jonathan
2013-12-17
A method of making carbon-coated nano- or micron-scale particles comprising entraining particles in an aerosol gas, providing a carbon-containing gas, providing a plasma gas, mixing the aerosol gas, the carbon-containing gas, and the plasma gas proximate a torch, bombarding the mixed gases with microwaves, and collecting resulting carbon-coated nano- or micron-scale particles.
Kong, Hyung-Sik; Lee, Kang-Sook; Yim, Eun-Shil; Lee, Seon-Young; Cho, Hyun-Young; Lee, Bin Na; Park, Jee Young
2013-10-21
The purpose of this study was to identify the risk factors of metabolic syndrome (MS) and to analyze the relationship between the risk factors of MS and the medical costs of major diseases related to MS in Korean workers, according to the scale of the enterprise. Data were obtained from annual physical examinations, health insurance qualification and premiums, and health insurance benefits of 4,094,217 male and female workers who underwent medical examinations provided by the National Health Insurance Corporation in 2009. Logistic regression analyses were used to identify the risk factors of MS, and multiple regression was used to find factors associated with medical expenditures due to major diseases related to MS. The study found that low-income workers were more likely to work in small-scale enterprises. The prevalence rate of MS in males and females, respectively, was 17.2% and 9.4% in small-scale enterprises, 15.9% and 8.9% in medium-scale enterprises, and 15.9% and 5.5% in large-scale enterprises. The risks of MS increased with age, lower income status, and smoking in small-scale enterprise workers. The medical costs increased in workers of older age and with a past smoking history. There was also a gender difference in the pattern of medical expenditures related to MS. Health promotion programs to manage metabolic syndrome should be developed to focus on workers who smoke, drink, and do little exercise in small-scale enterprises.
Thogmartin, W.E.; Knutson, M.G.
2007-01-01
Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale, given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which ran counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.
SCALE FACTOR DETERMINATION METHOD OF ELECTRO-OPTICAL MODULATOR IN FIBER-OPTIC GYROSCOPE
Directory of Open Access Journals (Sweden)
A. S. Aleynik
2016-05-01
Full Text Available Subject of Research. We propose a method for dynamic measurement of the half-wave voltage of the electro-optic modulator in a fiber-optic gyroscope. The impact of angular acceleration on the measurement of the electro-optical coefficient is excluded through the use of a homodyne demodulation method that separates, in the frequency domain, the Sagnac phase shift signal from an auxiliary signal used for measuring the electro-optical coefficient. Method. The essence of the method is the decomposition of each step of the digital serrodyne modulation into two parts of equal duration. The first part is used for quadrature modulation signals. The second part comprises samples of the auxiliary signal used to determine the value of the scale factor of the modulator. Modeling was carried out both in a standalone model and as part of a general model of the gyroscope. The applicability of the proposed method is investigated, as well as its qualitative and quantitative characteristics: the absolute and relative accuracy of the electro-optic coefficient, the stability of the method under angular velocities and accelerations, and its resistance to noise in actual devices. Main Results. The simulation has shown the ability to measure an angular velocity changing under the influence of angular acceleration acting on the device, with simultaneous measurement of the electro-optical coefficient of the phase modulator and without interference between these processes. Practical Relevance. The ability demonstrated in this paper to eliminate the influence of angular acceleration on the measurement accuracy of the electro-optical coefficient of the phase modulator makes it possible to implement accurate measurement algorithms for fiber-optic gyroscopes that remain resistant to significant accelerations in real devices.
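A toy numerical sketch of the split-step idea described above (not the authors' implementation; the half-wave voltage, probe voltage, and ideal noise-free detector model are all illustrative assumptions): the first half of each modulation step carries the ±π/2 quadrature bias for the rotation readout, while the second half probes the modulator so its scale factor V_π can be recovered by inverting the fringe.

```python
import math

# Assumed, illustrative values (not from the paper).
TRUE_V_PI = 3.6     # modulator half-wave voltage, volts
I0 = 1.0            # interferometer fringe amplitude

def detector(phase):
    """Ideal interferometer response with zero Sagnac phase (no rotation)."""
    return I0 * (1.0 + math.cos(phase))

def phase_from_voltage(v):
    """Phase shift produced by drive voltage v for the true V_pi."""
    return math.pi * v / TRUE_V_PI

# First half-step: +/- pi/2 quadrature bias -> rotation readout
# (the difference is zero here because the device is not rotating).
i_plus = detector(phase_from_voltage(TRUE_V_PI / 2))
i_minus = detector(-phase_from_voltage(TRUE_V_PI / 2))
sagnac_signal = i_plus - i_minus

# Second half-step: auxiliary probe voltage; invert the fringe to
# estimate the modulator scale factor V_pi.
v_probe = 1.2
i_probe = detector(phase_from_voltage(v_probe))
est_phase = math.acos(i_probe / I0 - 1.0)
v_pi_estimate = math.pi * v_probe / est_phase
```

Because the two readouts use disjoint halves of each step, the rotation signal and the scale-factor estimate do not interfere, which is the point made in the abstract.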
Thomas C. Brown; George L. Peterson
2009-01-01
The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...
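As a minimal illustration of scaling paired-choice data, the sketch below applies Thurstone's Case V scaling to a small hypothetical choice matrix; the program and data described in the paper are not reproduced here, and the win counts are invented.

```python
from statistics import NormalDist

# Hypothetical choice counts: wins[i][j] = number of the 10 respondents
# who preferred item i over item j in a discrete binary choice.
wins = [
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
]
N_JUDGES = 10
nd = NormalDist()

def thurstone_scale(wins, n_judges):
    """Thurstone Case V: the scale value of each item is the mean
    z-score of the proportion of pairwise choices it wins."""
    n = len(wins)
    scores = []
    for i in range(n):
        zs = []
        for j in range(n):
            if i == j:
                continue
            p = wins[i][j] / n_judges
            p = min(max(p, 0.05), 0.95)   # clip 0/1 proportions
            zs.append(nd.inv_cdf(p))
        scores.append(sum(zs) / len(zs))
    return scores

scores = thurstone_scale(wins, N_JUDGES)
ranking = sorted(range(len(wins)), key=lambda i: -scores[i])
```

The resulting scores place the items on an interval scale rather than just an ordering, which is what distinguishes scaling from simple vote counting.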
Constructing sites at a large scale - towards new design (education) methods
DEFF Research Database (Denmark)
Braae, Ellen Marie; Tietjen, Anne
2010-01-01
Since the 1990s the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. The current paradigm of planning by projects reinforces the role of the design disciplines within the development of our urban landscapes. At the same time, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective, the formulation of the design problem or brief becomes an integrated part of the design process. This paper discusses new design (education) methods based on a relational concept of urban sites and design processes, using actor-network theory as the theoretical frame.
New parametrization for the scale dependent growth function in general relativity
International Nuclear Information System (INIS)
Dent, James B.; Dutta, Sourish; Perivolaropoulos, Leandros
2009-01-01
We study the scale-dependent evolution of the growth function δ(a,k) of cosmological perturbations in dark energy models based on general relativity. This scale dependence is more prominent on cosmological scales of 100 h^-1 Mpc or larger. We derive a new scale-dependent parametrization which generalizes the well-known Newtonian approximation result f_0(a) ≡ d ln δ_0/d ln a = Ω(a)^γ (γ = 6/11 for ΛCDM), which is a good approximation on scales less than 50 h^-1 Mpc. Our generalized parametrization is of the form f(a,k) = f_0(a)/(1 + ξ(a,k)), where ξ(a,k) = 3 H_0^2 Ω_0m/(a k^2). We demonstrate that this parametrization fits the exact result of a full general relativistic evaluation of the growth function up to horizon scales for both ΛCDM and dynamical dark energy. In contrast, the scale-independent parametrization does not provide a good fit on scales beyond 5% of the horizon scale (k ≅ 0.01 h Mpc^-1).
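The parametrization can be evaluated directly. The sketch below assumes a flat ΛCDM background with Ω_0m = 0.3 (a choice for illustration, not taken from the paper), takes k in h Mpc^-1, and expresses H_0 through c/H_0 = 2997.9 h^-1 Mpc so that ξ is dimensionless.

```python
import math

# Assumed cosmology for illustration: flat LCDM, Omega_m0 = 0.3.
OMEGA_M0 = 0.3
H0 = 1.0 / 2997.9        # H0 in h/Mpc (units with c = 1)
GAMMA = 6.0 / 11.0       # growth index for LCDM, per the abstract

def omega_m(a):
    """Matter density parameter Omega(a) in flat LCDM."""
    return OMEGA_M0 / (OMEGA_M0 + (1.0 - OMEGA_M0) * a**3)

def f_scale_independent(a):
    """Newtonian-limit growth rate f0(a) = Omega(a)**gamma."""
    return omega_m(a) ** GAMMA

def f_scale_dependent(a, k):
    """f(a,k) = f0(a) / (1 + xi(a,k)),
    with xi(a,k) = 3 H0^2 Omega_m0 / (a k^2); k in h/Mpc."""
    xi = 3.0 * H0**2 * OMEGA_M0 / (a * k**2)
    return f_scale_independent(a) / (1.0 + xi)
```

On sub-horizon scales (k of order 0.1 h/Mpc) ξ is negligible and f reduces to the familiar Ω(a)^γ, while near horizon scales ξ becomes of order unity and suppresses the growth rate, matching the behavior described in the abstract.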
Dual linear structured support vector machine tracking method via scale correlation filter
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
Scaling relations for soliton compression and dispersive-wave generation in tapered optical fibers
DEFF Research Database (Denmark)
Lægsgaard, Jesper
2018-01-01
In this paper, scaling relations for soliton compression in tapered optical fibers are derived and discussed. The relations allow simple and semi-accurate estimates of the compression point and output noise level, which is useful, for example, for tunable dispersive-wave generation with an agile ...
Scaling relations between structure and rheology of ageing casein particle gels
Mellema, M.
2000-01-01
Mellema, M. (Michel), Scaling relations between structure and rheology of ageing casein particle gels , PhD Thesis, Wageningen University, 150 + 10 pages, references by chapter, English and Dutch summaries (2000).
The relation between (colloidal)
Scale relation in logσ - logε diagrams for Zry-4
International Nuclear Information System (INIS)
Cuniberti, A.M.; Picasso, A.C.
1991-01-01
The stress relaxation assay allows access to information about plastic behaviour of the corresponding material. This work describes a stress relaxation test carried out on polycrystalline Zry-4 at 293 K to verify the existence of a scale relation related to the plastic state equation. (Author) [es
Mackey, Aaron J; Pearson, William R
2004-10-01
Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
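A minimal sketch of the subset-library idea using Python's built-in sqlite3. The table and column names here are illustrative only, not the actual seqdb_demo schema: sequences are stored with taxonomy, and a focused library subset (one genus) is pulled out for a subsequent similarity search.

```python
import sqlite3

# In-memory toy database; schema is a hypothetical stand-in.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE protein (
        acc   TEXT PRIMARY KEY,
        genus TEXT NOT NULL,
        seq   TEXT NOT NULL
    )
""")
cur.executemany(
    "INSERT INTO protein VALUES (?, ?, ?)",
    [
        ("P1", "Escherichia", "MKTAYIAKQR"),
        ("P2", "Escherichia", "MSLNFLDFEQ"),
        ("P3", "Homo",        "MEEPQSDPSV"),
    ],
)
conn.commit()

# Subset library: only sequences from the genus of interest,
# i.e. the part of the library most likely to contain homologs.
subset = cur.execute(
    "SELECT acc, seq FROM protein WHERE genus = ? ORDER BY acc",
    ("Escherichia",),
).fetchall()
```

Restricting the search library this way is what the unit means by improving statistical significance: the smaller, targeted library reduces the number of chance high-scoring alignments.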
DISK GALAXY SCALING RELATIONS IN THE SFI++: INTRINSIC SCATTER AND APPLICATIONS
International Nuclear Information System (INIS)
Saintonge, Amelie; Spekkens, Kristine
2011-01-01
We study the scaling relations between the luminosities, sizes, and rotation velocities of disk galaxies in the SFI++, with a focus on the size-luminosity (RL) and size-rotation velocity (RV) relations. Using isophotal radii instead of disk scale lengths as a size indicator, we find relations that are significantly tighter than previously reported: the correlation coefficients of the template RL and RV relations are r = 0.97 and r = 0.85, respectively, which rival that of the more widely studied LV (Tully-Fisher) relation. The scatter in the SFI++ RL relation is 2.5-4 times smaller than previously reported for various samples, which we attribute to the reliability of isophotal radii relative to disk scale lengths. After carefully accounting for all measurement errors, our scaling relation error budgets are consistent with a constant intrinsic scatter in the LV and RV relations for velocity widths log W ≳ 2.4, with evidence for increasing intrinsic scatter below this threshold. The scatter in the RL relation is consistent with constant intrinsic scatter that is biased by incompleteness at the low-L end. Possible applications of the unprecedentedly tight SFI++ RV and RL relations are investigated. Just like the Tully-Fisher relation, the RV relation can be used as a distance indicator: we derive distances to galaxies with primary Cepheid distances that are accurate to 25%, and reverse the problem to measure a Hubble constant H_0 = 72 ± 7 km s^-1 Mpc^-1. Combining the small intrinsic scatter of our RL relation (ε_int = 0.034 ± 0.001 log[h^-1 kpc]) with a simple model for disk galaxy formation, we find an upper limit on the range of disk spin parameters that is a factor of ∼7 smaller than that of the halo spin parameters predicted by cosmological simulations. This likely implies that the halos hosting Sc galaxies have a much narrower distribution of spin parameters than previously thought.
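The use of an RV relation as a distance indicator can be sketched as follows. The template coefficients below are made up for illustration and are not the SFI++ values: a size-velocity template predicts the physical isophotal radius from the velocity width, and comparing it with the measured angular radius yields a distance.

```python
import math

# Hypothetical RV template (illustrative coefficients, not SFI++):
# log10 R_kpc = A * log10 W + B, with W the velocity width.
A, B = 1.1, -1.8

def distance_mpc(log_w, theta_arcsec):
    """Distance from the predicted physical radius and the measured
    isophotal angular radius (small-angle approximation)."""
    r_kpc = 10.0 ** (A * log_w + B)               # predicted radius, kpc
    theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
    return (r_kpc / 1000.0) / theta_rad            # distance in Mpc

# Example: log W = 2.6, isophotal radius 20 arcsec.
d = distance_mpc(2.6, 20.0)
```

With distances in hand for galaxies whose redshifts are known, the Hubble constant follows from H_0 = v/d, which is the "reverse the problem" step mentioned in the abstract.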
International Nuclear Information System (INIS)
Efrosinin, V.P.; Zaikin, D.A.
1983-01-01
We study the possible reasons for the disagreement between the estimates of the pion-nucleon sigma term obtained by the method of dispersion relations with extrapolation to the Cheng-Dashen point and by other methods which do not involve this extrapolation. One reason for the disagreement may be the nonanalyticity of the πN amplitude in the variable t for ν = 0. We propose a method for estimating the sigma term using the threshold data for the πN amplitude, in which the effect of this nonanalyticity is minimized. We discuss the relation between scale invariance violation and chiral symmetry breaking and give the corresponding estimate of the sigma term. The two estimates are similar (42 and 34 MeV) and are in agreement when the uncertainties of the two methods are taken into consideration
A low Fermi scale from a simple gaugino-scalar mass relation
Energy Technology Data Exchange (ETDEWEB)
Bruemmer, F. [International School for Advanced Studies, Trieste (Italy); Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Buchmueller, W. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2013-11-15
In supersymmetric extensions of the Standard Model, the Fermi scale of electroweak symmetry breaking is determined by the pattern of supersymmetry breaking. We present an example, motivated by a higher-dimensional GUT model, where a particular mass relation between the gauginos, third-generation squarks and Higgs fields of the MSSM leads to a Fermi scale smaller than the soft mass scale. This is in agreement with the measured Higgs boson mass. The μ parameter is generated independently of supersymmetry breaking; however, as we argue, the μ problem becomes less acute due to the little hierarchy between the soft mass scale and the Fermi scale. The resulting superparticle mass spectra depend on the localization of quark and lepton fields in higher dimensions. In one case, the squarks of the first two generations as well as the gauginos and higgsinos can be in the range of the LHC. Alternatively, only the higgsinos may be accessible at colliders. The lightest superparticle is the gravitino.
Energy Technology Data Exchange (ETDEWEB)
Saliwanchik, B. R.; et al.
2015-01-22
We describe a method for measuring the integrated Comptonization (Y_SZ) of clusters of galaxies from measurements of the Sunyaev-Zel'dovich (SZ) effect in multiple frequency bands, and use this method to characterize a sample of galaxy clusters detected in the South Pole Telescope (SPT) data. We use a Markov chain Monte Carlo method to fit a β-model source profile and integrate Y_SZ within an angular aperture on the sky. In simulated observations of an SPT-like survey that include cosmic microwave background anisotropy, point sources, and atmospheric and instrumental noise at typical SPT-SZ survey levels, we show that we can accurately recover β-model parameters for input clusters. We measure Y_SZ for simulated semi-analytic clusters and find that Y_SZ is most accurately determined in an angular aperture comparable to the SPT beam size. We demonstrate the utility of this method to measure Y_SZ and to constrain mass scaling relations using X-ray mass estimates for a sample of 18 galaxy clusters from the SPT-SZ survey. Measuring Y_SZ within a 0.75′ radius aperture, we find an intrinsic log-normal scatter of 21% ± 11% in Y_SZ at fixed mass. Measuring Y_SZ within a 0.3 Mpc projected radius (equivalent to 0.75′ at the survey median redshift z = 0.6), we find a scatter of 26% ± 9%. Prior to this study, the SPT observable found to have the lowest scatter with mass was cluster detection significance. We demonstrate, from both simulations and SPT observed clusters, that Y_SZ measured within an aperture comparable to the SPT beam size is equivalent, in terms of scatter with cluster mass, to SPT cluster detection significance.
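The aperture integration of a β-model Comptonization profile can be sketched numerically; the profile parameters below are illustrative, not fitted values from the SPT sample. For β = 1 the cylindrical integral has a closed form, which serves as a check on the quadrature.

```python
import math

def y_beta(theta, y0, theta_c, beta):
    """Beta-model Comptonization profile y(theta)."""
    return y0 * (1.0 + (theta / theta_c) ** 2) ** ((1.0 - 3.0 * beta) / 2.0)

def integrated_y(theta_ap, y0, theta_c, beta, n=2000):
    """Cylindrically integrated Y within an aperture, trapezoid rule:
    Y = int_0^theta_ap 2*pi*theta*y(theta) dtheta."""
    h = theta_ap / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * 2.0 * math.pi * t * y_beta(t, y0, theta_c, beta)
    return total * h

# Illustrative parameters: y0 = 1e-4, core radius 0.5 arcmin,
# beta = 1, and the 0.75 arcmin aperture quoted in the abstract.
Y = integrated_y(0.75, 1e-4, 0.5, 1.0)
```

In a full analysis the (y0, θ_c, β) triple would be the output of the MCMC fit at each step, and Y would be accumulated into a posterior; the integration step itself is as simple as above.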
Working memory performance inversely predicts spontaneous delta and theta-band scaling relations.
Euler, Matthew J; Wiltshire, Travis J; Niermeyer, Madison A; Butner, Jonathan E
2016-04-15
Electrophysiological studies have strongly implicated theta-band activity in human working memory processes. Concurrently, work on spontaneous, non-task-related oscillations has revealed the presence of long-range temporal correlations (LRTCs) within sub-bands of the ongoing EEG, and has begun to demonstrate their functional significance. However, few studies have yet assessed the relation of LRTCs (also called scaling relations) to individual differences in cognitive abilities. The present study addressed the intersection of these two literatures by investigating the relation of narrow-band EEG scaling relations to individual differences in working memory ability, with a particular focus on the theta band. Fifty-four healthy adults completed standardized assessments of working memory and separate recordings of their spontaneous, non-task-related EEG. Scaling relations were quantified in each of the five classical EEG frequency bands via the estimation of the Hurst exponent obtained from detrended fluctuation analysis. A multilevel modeling framework was used to characterize the relation of working memory performance to scaling relations as a function of general scalp location in Cartesian space. Overall, results indicated an inverse relationship between both delta and theta scaling relations and working memory ability, which was most prominent at posterior sensors, and was independent of either spatial or individual variability in band-specific power. These findings add to the growing literature demonstrating the relevance of neural LRTCs for understanding brain functioning, and support a construct- and state-dependent view of their functional implications. Copyright © 2016 Elsevier B.V. All rights reserved.
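The scaling relations (LRTCs) in question are estimated by detrended fluctuation analysis. A compact pure-Python DFA (first-order detrending; the scale choices and test signal are illustrative, not the study's EEG pipeline) is sketched below; white noise should yield an exponent near 0.5, while long-range-correlated signals yield larger values.

```python
import math
import random

def linfit_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def dfa_exponent(signal, scales=(16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis: slope of log F(s) vs log s."""
    n = len(signal)
    mean = sum(signal) / n
    # Profile: cumulative sum of the mean-subtracted signal.
    profile, acc = [], 0.0
    for x in signal:
        acc += x - mean
        profile.append(acc)
    log_s, log_f = [], []
    for s in scales:
        rms_sq, count = 0.0, 0
        for start in range(0, n - s + 1, s):
            seg = profile[start:start + s]
            xs = list(range(s))
            slope = linfit_slope(xs, seg)
            inter = sum(seg) / s - slope * (s - 1) / 2.0
            rms_sq += sum((y - (slope * x + inter)) ** 2
                          for x, y in zip(xs, seg))
            count += s
        log_s.append(math.log(s))
        log_f.append(0.5 * math.log(rms_sq / count))
    return linfit_slope(log_s, log_f)

# Sanity check on synthetic white noise (alpha should be near 0.5).
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(4096)]
alpha = dfa_exponent(noise)
```

In the study this exponent would be computed per sensor and per narrow frequency band (on the band-passed amplitude envelope) and then related to working memory scores.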
Voeykov, S. V.; Afraimovich, E. L.; Kosogorov, E. A.; Perevalova, N. P.; Zhivetiev, I. V.
We worked out a new method for estimating the relative amplitude dI/I of total electron content (TEC) variations corresponding to medium-scale (30-300 km) traveling ionospheric disturbances (MS TIDs). Daily and latitudinal dependences of dI/I and dI/I probability distributions are obtained for 52 days of 1999-2005 with different levels of geomagnetic activity. Statistical estimates were obtained from the analysis of 10^6 series of TEC of 2.3-hour duration. To obtain statistically significant results, three latitudinal regions were chosen: a North American high-latitude region (50-80° N, 200-300° E; 59 GPS receivers), a North American mid-latitude region (20-50° N, 200-300° E; 817 receivers), and the equatorial belt (-20 to 20° N, 0-360° E; 76 receivers). We found that the average daily value of the relative amplitude of TEC variations dI/I changes from 0.3 to 10 in proportion to the geomagnetic index Kp. This dependence is strong at high latitudes (dI/I ≈ 0.37·Kp^1.5), somewhat weaker at mid latitudes (dI/I ≈ 0.2·Kp^0.35), and weakest at the equatorial belt (dI/I ≈ 0.1·Kp^0.6). The most important and most interesting result of our work is that during geomagnetically quiet conditions the relative amplitude of TEC variations at night considerably exceeds daytime values, by 3-5 times at equatorial and high latitudes and by 2 times at mid latitudes. But during strong magnetic storms the relative amplitude dI/I at high
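Reading the latitudinal fits quoted in the abstract as power laws dI/I ≈ a·Kp^b (an interpretation on our part; the extraction lost the decimal points and exponents), they can be evaluated directly:

```python
# Latitudinal fits as reconstructed from the garbled abstract
# (assumption: dI/I = a * Kp**b, amplitudes in percent).
FITS = {
    "high":    (0.37, 1.5),
    "mid":     (0.20, 0.35),
    "equator": (0.10, 0.6),
}

def rel_amplitude(region, kp):
    """Relative TEC variation amplitude dI/I for a given Kp."""
    a, b = FITS[region]
    return a * kp ** b

# Strong storm (Kp = 9) versus quiet time (Kp = 1):
storm = {r: rel_amplitude(r, 9) for r in FITS}
quiet = {r: rel_amplitude(r, 1) for r in FITS}
```

Under this reading the high-latitude fit spans roughly 0.37 at Kp = 1 up to about 10 at Kp = 9, consistent with the 0.3-10 range stated in the abstract, with a much flatter Kp dependence at mid and equatorial latitudes.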
Workshop report on large-scale matrix diagonalization methods in chemistry theory institute
Energy Technology Data Exchange (ETDEWEB)
Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S. [eds.
1996-10-01
The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint on the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of
Coronado, Pluvio J; Borrego, Rafael Sánchez; Palacios, Santiago; Ruiz, Miguel A; Rejas, Javier
2015-03-01
The Cervantes Scale is a specific health-related quality of life questionnaire that was originally developed in Spanish to be used in Spain for women through and beyond menopause. It contains 31 items and is time-consuming. The aim of this study was to produce an abridged version with the same dimensional structure and with similar psychometric properties. A representative sample of 516 postmenopausal women (mean [SD] age, 57 [4.31] y) seen in outpatient gynecology clinics and extracted from an observational cross-sectional study was used. Item analysis, internal consistency reliability, item-total and item-dimension correlations, and item correlation with the 12-item Medical Outcomes Study Short Form Health Survey Version 2.0 were studied. Dimensional and full-model confirmatory factor analyses were used to check structure stability. A threefold cross-validation method was used to obtain stable estimates by means of multigroup analysis. The scale was reduced to a 16-item version, the Cervantes Short-Form Scale, containing four main dimensions (Menopause and Health, Psychological, Sexuality, and Couple Relations), with the first dimension composed of three subdimensions (Vasomotor Symptoms, Health, and Aging). Goodness-of-fit statistics were better than those of the extended version (χ²/df = 2.493; adjusted goodness-of-fit index, 0.802; parsimony comparative fit index, 0.749; root mean standard error of approximation, 0.054). Internal consistency was good (Cronbach's α = 0.880). Correlations between the extended and the reduced dimensions were high and significant in all cases (P < 0.001; r values ranged from 0.90 for Sexuality to 0.969 for Vasomotor Symptoms). The Cervantes Scale can be reduced to a 16-item abridged version (Cervantes Short-Form Scale) that maintains the original dimensional structure and psychometric properties. At 51% of the original length, this version can be administered faster, making it especially suitable for routine medical practice.
Rapid high temperature field test method for evaluation of geothermal calcite scale inhibitors
Energy Technology Data Exchange (ETDEWEB)
Asperger, R.G.
1982-08-01
A test method is described which allows the rapid field testing of calcite scale inhibitors in high-temperature geothermal brines. Five commercial formulations, chosen on the basis of laboratory screening tests, were tested in brines with low total dissolved solids at ca. 500°F. Four were found to be effective; of these, 2 were found to be capable of removing recently deposited scale. One chemical was tested in the full-flow brine line for 6 weeks. It was shown to stop a severe surface scaling problem at the well's control valve, thus proving the viability of the rapid test method. (12 refs.)
Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography
DEFF Research Database (Denmark)
Müller, P.; Hiller, Jochen; Dai, Y.
2015-01-01
X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have a negative impact on the accuracy of measurement results, among them scaling errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball
International Nuclear Information System (INIS)
Botelho, David A.; Faccini, Jose L.H.
2002-01-01
The main topic of this paper is a new device being considered to improve nuclear reactor safety by employing natural circulation. A scaled experiment used to demonstrate the performance of the device is also described. We also applied a similarity analysis method for single- and two-phase natural convection loop flow to the IEN CCN experiment and to an APEX-like experiment to verify the degree of similarity relative to a full-scale prototype like the AP600. Most of the CCN similarity numbers that represent important single- and two-phase similarity conditions are comparable to the APEX-like loop non-dimensional numbers calculated employing the same methodology. Despite the much smaller geometric, pressure, and power scales, we conclude that the IEN CCN has single- and two-phase natural circulation similarity numbers that represent the full-scale prototype fairly well. Even lacking most complementary primary and safety systems, this IEN circuit provided valuable experience for developing human, experimental, and analytical resources, besides its utilization as a training tool. (author)
The Pore-scale modeling of multiphase flows in reservoir rocks using the lattice Boltzmann method
Mu, Y.; Baldwin, C. H.; Toelke, J.; Grader, A.
2011-12-01
Digital rock physics (DRP) is a new technology to compute the physical and fluid flow properties of reservoir rocks. In this approach, pore-scale images of the porous rock are obtained and processed to create a highly accurate 3D digital rock sample, and then the rock properties are evaluated by advanced numerical methods at the pore scale. Ingrain's DRP technology is a breakthrough for oil and gas companies that need large volumes of accurate results faster than the current special core analysis (SCAL) laboratories can normally deliver. In this work, we compute the multiphase fluid flow properties of 3D digital rocks using a D3Q19 immiscible LBM with two relaxation times (TRT). For efficient implementation on GPU, we improved and reformulated the color-gradient model proposed by Gunstensen and Rothman. Furthermore, we use only one lattice, with a sparse data structure: memory is allocated only for pore nodes on the GPU. We achieved more than 100 million fluid lattice updates per second (MFLUPS) for two-phase LBM on a single Fermi GPU and high parallel efficiency on multiple GPUs. We present and discuss our simulation results for important two-phase fluid flow properties, such as capillary pressure and relative permeabilities. We also investigate the effects of resolution and wettability on multiphase flows. Comparison of direct measurement results with the LBM-based simulations shows the practical ability of DRP to predict two-phase flow properties of reservoir rock.
Directory of Open Access Journals (Sweden)
Dibble Clare J
2009-11-01
Full Text Available Abstract Background Screening new lignocellulosic biomass pretreatments and advanced enzyme systems at process relevant conditions is a key factor in the development of economically viable lignocellulosic ethanol. Shake flasks, the reaction vessel commonly used for screening enzymatic saccharifications of cellulosic biomass, do not provide adequate mixing at high-solids concentrations when shaking is not supplemented with hand mixing. Results We identified roller bottle reactors (RBRs as laboratory-scale reaction vessels that can provide adequate mixing for enzymatic saccharifications at high-solids biomass loadings without any additional hand mixing. Using the RBRs, we developed a method for screening both pretreated biomass and enzyme systems at process-relevant conditions. RBRs were shown to be scalable between 125 mL and 2 L. Results from enzymatic saccharifications of five biomass pretreatments of different severities and two enzyme preparations suggest that this system will work well for a variety of biomass substrates and enzyme systems. A study of intermittent mixing regimes suggests that mass transfer limitations of enzymatic saccharifications at high-solids loadings are significant but can be mitigated with a relatively low amount of mixing input. Conclusion Effective initial mixing to promote good enzyme distribution and continued, but not necessarily continuous, mixing is necessary in order to facilitate high biomass conversion rates. The simplicity and robustness of the bench-scale RBR system, combined with its ability to accommodate numerous reaction vessels, will be useful in screening new biomass pretreatments and advanced enzyme systems at high-solids loadings.
Landslide susceptibility mapping on a global scale using the method of logistic regression
Directory of Open Access Journals (Sweden)
L. Lin
2017-08-01
Full Text Available This paper proposes a statistical model for mapping global landslide susceptibility based on logistic regression. After investigating explanatory factors for landslides in the existing literature, five factors were selected to model landslide susceptibility: relative relief, extreme precipitation, lithology, ground motion and soil moisture. When building the model, 70 % of landslide and non-landslide points were randomly selected for logistic regression, and the others were used for model validation. To evaluate the accuracy of predictive models, this paper adopts several criteria including the receiver operating characteristic (ROC) curve method. Logistic regression experiments found all five factors to be significant in explaining landslide occurrence on a global scale. During the modeling process, the percentage correct in the confusion matrix of landslide classification was approximately 80 % and the area under the curve (AUC) was nearly 0.87. During the validation process, the above statistics were about 81 % and 0.88, respectively. Such a result indicates that the model has strong robustness and stable performance. This model found that at a global scale, soil moisture can be dominant in the occurrence of landslides and the topographic factor may be secondary.
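A self-contained sketch of the workflow (logistic regression fit, hold-out validation, ROC AUC) on synthetic stand-in data, not the paper's global landslide inventory; the two standardized factors and their coefficients are invented for illustration.

```python
import math
import random

random.seed(1)

# Synthetic stand-in data: two standardized factors (think relative
# relief and extreme precipitation); steeper/wetter cells are more
# likely to contain a landslide point.
def make_point():
    relief = random.gauss(0, 1)
    precip = random.gauss(0, 1)
    p = 1.0 / (1.0 + math.exp(-(1.5 * relief + 1.0 * precip)))
    return (relief, precip), 1 if random.random() < p else 0

data = [make_point() for _ in range(2000)]
train, test = data[:1400], data[1400:]     # ~70 % / 30 % split

def fit_logistic(rows, lr=0.1, epochs=200):
    """Batch gradient descent on the logistic log-likelihood."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        gw, gb = [0.0, 0.0], 0.0
        for (x1, x2), y in rows:
            p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y
            gw[0] += err * x1
            gw[1] += err * x2
            gb += err
        n = len(rows)
        w[0] -= lr * gw[0] / n
        w[1] -= lr * gw[1] / n
        b -= lr * gb / n
    return w, b

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    pairs = sorted(zip(scores, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(pairs) if y == 1)
    return (rank_sum - pos * (pos + 1) / 2.0) / (pos * neg)

w, b = fit_logistic(train)
scores = [w[0] * x1 + w[1] * x2 + b for (x1, x2), _ in test]
labels = [y for _, y in test]
test_auc = auc(scores, labels)
```

The validation AUC on the held-out 30 % plays the role of the ~0.88 figure reported in the abstract, and the fitted coefficients indicate which factors dominate.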
Yapuncich, Gabriel S; Boyer, Doug M
2014-01-01
The articular facets of interosseous joints must transmit forces while maintaining relatively low stresses. To prevent overloading, joints that transmit higher forces should therefore have larger facet areas. The relative contributions of body mass and muscle-induced forces to joint stress are unclear, but generate opposing hypotheses. If mass-induced forces dominate, facet area should scale with positive allometry to body mass. Alternatively, muscle-induced forces should cause facets to scale isometrically with body mass. Within primates, both scaling patterns have been reported for articular surfaces of the femoral and humeral heads, but more distal elements are less well studied. Additionally, examination of complex articular surfaces has largely been limited to linear measurements, so that 'true area' remains poorly assessed. To re-assess these scaling relationships, we examine the relationship between body size and articular surface areas of the talus. Area measurements were taken from microCT scan-generated surfaces of all talar facets from a comprehensive sample of extant euarchontan taxa (primates, treeshrews, and colugos). Log-transformed data were regressed on literature-derived log-body mass using reduced major axis and phylogenetic least squares regressions. We examine the scaling patterns of muscle mass and physiological cross-sectional area (PCSA) to body mass, as these relationships may complicate each model. Finally, we examine the scaling pattern of hindlimb muscle PCSA to talar articular surface area, a direct test of the effect of mass-induced forces on joint surfaces. Among most groups, there is an overall trend toward positive allometry for articular surfaces. The ectal (= posterior calcaneal) facet scales with positive allometry among all groups except 'sundatherians', strepsirrhines, galagids, and lorisids. The medial tibial facet scales isometrically among all groups except lemuroids. Scaling coefficients are not correlated with sample
Rotarius, Timothy; Wan, Thomas T H; Liberman, Aaron
2007-01-01
Research plays a critical role throughout virtually every conduit of the health services industry. The key terms of research, public relations, and organizational interests are discussed. Combining public relations as a strategic methodology with the organizational concern as a factor, a typology of four different research methods emerges. These four health marketing research methods are: investigative, strategic, informative, and verification. The implications of these distinct and contrasting research methods are examined.
A method of orbital analysis for large-scale first-principles simulations
International Nuclear Information System (INIS)
Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke
2014-01-01
An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).
Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.
Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin
2018-03-02
Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the alternative schemes. The effectiveness and practicability of the decision-making method are further verified by an example: evaluating the sustainable development ability of a circular-economy industry chain. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and with the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weights based on grey correlation.
Cone beam CT dose reduction in prostate radiotherapy using Likert scale methods.
Langmack, Keith A; Newton, Louise A; Jordan, Suzanne; Smith, Ruth
2016-01-01
To use a Likert scale method to optimize image quality (IQ) for cone beam CT (CBCT) soft-tissue matching for image-guided radiotherapy of the prostate. Twenty-three males with local/locally advanced prostate cancer had the CBCT IQ assessed using a 4-point Likert scale (4 = excellent, no artefacts; 3 = good, few artefacts; 2 = poor, just able to match; 1 = unsatisfactory, not able to match) at three levels of exposure. The lateral separations of the subjects were also measured. The Friedman test and Wilcoxon signed-rank tests were used to determine if the IQ was associated with the exposure level. We used the point-biserial correlation and a χ² test to investigate the relationship between the separation and IQ. The Friedman test showed that the IQ was related to exposure (p = 2 × 10⁻⁷) and the Wilcoxon signed-rank test demonstrated that the IQ decreased as exposure decreased (all p-values <0.005). We did not find a correlation between the IQ and the separation (correlation coefficient 0.045), but for separations <35 cm, it was possible to use the lowest exposure parameters studied. We can reduce exposure factors to 80% of those supplied with the system without hindering the matching process for all patients. For patients with lateral separations <35 cm, the exposure factors can be reduced further to 64% of the original values. Likert scales are a useful tool for measuring IQ in the optimization of CBCT IQ for soft-tissue matching in radiotherapy image guidance applications.
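The nonparametric workflow described above (an omnibus Friedman test across exposure levels, followed by pairwise Wilcoxon signed-rank comparisons) can be sketched with SciPy. The Likert scores below are hypothetical stand-ins, not the study's data.

```python
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical 4-point Likert IQ ratings for ten subjects, each imaged at
# three exposure levels (values are illustrative, not the study's data).
iq_100 = [4, 4, 3, 4, 3, 4, 4, 3, 4, 4]   # 100% exposure
iq_80  = [3, 4, 3, 3, 3, 4, 3, 3, 4, 3]   # 80% exposure
iq_64  = [3, 3, 2, 3, 2, 3, 3, 2, 3, 3]   # 64% exposure

# Omnibus test: is IQ associated with exposure level?
stat, p = friedmanchisquare(iq_100, iq_80, iq_64)
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4f}")

# Pairwise follow-up: does IQ decrease between adjacent exposure levels?
for a, b, label in [(iq_100, iq_80, "100% vs 80%"),
                    (iq_80, iq_64, "80% vs 64%")]:
    w, pw = wilcoxon(a, b)
    print(f"Wilcoxon {label}: p = {pw:.4f}")
```

With ordinal repeated measures like these, the Friedman test avoids the normality assumption that a repeated-measures ANOVA would require.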
Relative valuation of alternative methods of tax avoidance
Inger, Kerry Katharine
2012-01-01
This paper examines the relative valuation of alternative methods of tax avoidance. Prior studies find that firm value is positively associated with overall measures of tax avoidance; I extend this research by providing evidence that investors distinguish between methods of tax reduction in their valuation of tax avoidance. The impact of tax avoidance on firm value is a function of tax risk, permanence of tax savings, tax planning costs, implicit taxes and contrasts in disclosures of tax re...
Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods
Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.
2011-01-01
Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
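At annual time scales with negligible storage change, the basin water-balance approach described above reduces to a residual calculation. A minimal sketch, with illustrative values for a hypothetical basin:

```python
# Annual basin water balance: ET is the residual after subtracting discharge
# (and any net storage change) from precipitation. Values are illustrative.
precipitation_mm = 850.0  # basin-average annual precipitation (mm/yr)
discharge_mm = 230.0      # annual streamflow normalized by basin area (mm/yr)

def water_balance_et(p_mm, q_mm, storage_change_mm=0.0):
    """ET (mm/yr) as the residual of the basin water balance."""
    return p_mm - q_mm - storage_change_mm

et = water_balance_et(precipitation_mm, discharge_mm)
print(f"Basin-scale ET estimate: {et:.0f} mm/yr")  # 620 mm/yr
```

As the review notes, an estimate like this is only as good as the precipitation and discharge data, and the zero-storage-change default is defensible only at annual or longer time scales.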
A spatial method to calculate small-scale fisheries effort in data poor scenarios.
Johnson, Andrew Frederick; Moreno-Báez, Marcia; Giron-Nava, Alfredo; Corominas, Julia; Erisman, Brad; Ezcurra, Exequiel; Aburto-Oropeza, Octavio
2017-01-01
To gauge the collateral impacts of fishing we must know where fishing boats operate and how much they fish. Although small-scale fisheries land approximately the same amount of fish for human consumption as industrial fleets globally, methods of estimating their fishing effort are comparatively poor. We present an accessible, spatial method of calculating the effort of small-scale fisheries based on two simple measures that are available, or at least easily estimated, in even the most data-poor fisheries: the number of boats and the local coastal human population. We illustrate the method using a small-scale fisheries case study from the Gulf of California, Mexico, and show that our measure of Predicted Fishing Effort (PFE), measured as the number of boats operating in a given area per day adjusted by the number of people in local coastal populations, can accurately predict fisheries landings in the Gulf. Comparing our values of PFE to commercial fishery landings throughout the Gulf also indicates that the current number of small-scale fishing boats in the Gulf is approximately double what is required to land theoretical maximum fish biomass. Our method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This new method provides an important first step towards estimating the fishing effort of small-scale fleets globally.
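The abstract defines PFE as boats operating per area per day adjusted by the local coastal population, without giving the exact formula. The sketch below uses one plausible adjustment (weighting by each area's share of the coastal population); the function name, area names, and numbers are all hypothetical.

```python
# Hypothetical PFE-style index: boats/day weighted by the share of the
# coastal population living near each fishing area. This weighting scheme
# is an assumption for illustration, not the paper's actual formula.
def pfe_index(boats_per_day, local_population, total_population):
    return boats_per_day * (local_population / total_population)

areas = {
    "area_A": {"boats_per_day": 120, "local_population": 15_000},
    "area_B": {"boats_per_day": 45,  "local_population": 60_000},
}
total = sum(a["local_population"] for a in areas.values())
for name, a in areas.items():
    effort = pfe_index(a["boats_per_day"], a["local_population"], total)
    print(f"{name}: PFE = {effort:.1f}")
```

Both inputs (boat counts and coastal population) are exactly the data-poor-friendly quantities the method relies on, which is what makes an index of this shape feasible where logbook or VMS data are absent.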
Development of polygon elements based on the scaled boundary finite element method
International Nuclear Information System (INIS)
Chiong, Irene; Song Chongmin
2010-01-01
We aim to extend the scaled boundary finite element method to construct conforming polygon elements. The development of the polygonal finite element is highly anticipated in computational mechanics as greater flexibility and accuracy can be achieved using these elements. The scaled boundary polygonal finite element will enable new developments in mesh generation, better accuracy from a higher order approximation and better transition elements in finite element meshes. Polygon elements of arbitrary number of edges and order have been developed successfully. The edges of an element are discretised with line elements. The displacement solution of the scaled boundary finite element method is used in the development of shape functions. They are shown to be smooth and continuous within the element, and satisfy compatibility and completeness requirements. Furthermore, eigenvalue decomposition has been used to depict element modes and outcomes indicate the ability of the scaled boundary polygonal element to express rigid body and constant strain modes. Numerical tests are presented; the patch test is passed and constant strain modes verified. Accuracy and convergence of the method are also presented and the performance of the scaled boundary polygonal finite element is verified on Cook's swept panel problem. Results show that the scaled boundary polygonal finite element method outperforms a traditional mesh and accuracy and convergence are achieved from fewer nodes. The proposed method is also shown to be truly flexible, and applies to arbitrary n-gons formed of irregular and non-convex polygons.
Santos, Alda; Castanheira, Filipa; Chambel, Maria José; Amarante, Michael Vieira; Costa, Carlos
2017-07-01
This study validates the Portuguese version of the psychological effects of the relational job characteristics scale among hospital nurses in Portugal and Brazil. Increasing attention has been given to the social dimension of work, following the transition to a service economy. Nevertheless, and despite the unquestionable relational characteristics of nursing work, scarce research has been developed among nurses under a relational job design framework. Moreover, it is important to develop instruments that study the effects of relational job characteristics among nurses. We followed Messick's framework for scale validation, comprising the steps regarding the response process and internal structure, as well as relationships with other variables (work engagement and burnout). Statistical analysis included exploratory factor analysis and confirmatory factor analysis. The psychological effects of the relational job characteristics scale provided evidence of good psychometric properties with Portuguese and Brazilian hospital nurses. Also, the psychological effects of the relational job characteristics are associated with nurses' work-related well-being: positively with work engagement and negatively concerning burnout. Hospitals that foster the relational characteristics of nursing work are contributing to their nurses' work-related well-being, which may be reflected in the quality of care and patient safety. © 2017 John Wiley & Sons Ltd.
Scaling relation between earthquake magnitude and the departure time from P wave similar growth
Noda, Shunta; Ellsworth, William L.
2016-01-01
We introduce a new scaling relation between earthquake magnitude (M) and a characteristic of initial P wave displacement. By examining Japanese K-NET data averaged in bins partitioned by Mw and hypocentral distance, we demonstrate that the P wave displacement briefly displays similar growth at the onset of rupture and that the departure time (Tdp), which is defined as the time of departure from similarity of the absolute displacement after applying a band-pass filter, correlates with the final M in a range of 4.5 ≤ Mw ≤ 7. The scaling relation between Mw and Tdp implies that useful information on the final M can be derived while the event is still in progress because Tdp occurs before the completion of rupture. We conclude that the scaling relation is important not only for earthquake early warning but also for the source physics of earthquakes.
Large-scale circulation departures related to wet episodes in north-east Brazil
Sikdar, Dhirendra N.; Elsner, James B.
1987-01-01
Large scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged; and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: lowest sea level easterlies beneath upper level westerlies; weak meridional winds; high relative humidity over the Amazon basin and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical High SLPs have large departures during wet episodes, implying a short period oscillation in the Southern Hemisphere Hadley circulation.
Large-scale circulation departures related to wet episodes in northeast Brazil
Sikdar, D. N.; Elsner, J. B.
1985-01-01
Large scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged; and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: lowest sea level easterlies beneath upper level westerlies; weak meridional winds; high relative humidity over the Amazon basin and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical High SLPs have large departures during wet episodes, implying a short period oscillation in the Southern Hemisphere Hadley circulation.
An improved method to characterise the modulation of small-scale turbulence by large-scale structures
Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta
2015-11-01
A key aspect of turbulent boundary layer dynamics is "modulation," which refers to the degree to which the intensity of coherent large-scale structures (LS) amplifies or attenuates the intensity of the small-scale structures (SS) through inter-scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed defining this envelope by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS signal. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by use of a low-pass filtering step leads to an important loss of information associated with the effects of the local skewness of the PDF of the SS on the modulation process. An improved Hilbert-transform-based method is proposed to characterize the modulation of SS turbulence by LS structures.
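The envelope construction being re-examined above (modulus of the analytic signal from a Hilbert transform, then low-pass filtering) can be sketched on a synthetic amplitude-modulated signal. The signal model, filter order, and cutoff below are arbitrary illustrative choices, not values from the study.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

rng = np.random.default_rng(0)

# Toy small-scale (SS) signal whose amplitude is modulated by a slow
# large-scale (LS) fluctuation, plus weak noise.
t = np.linspace(0.0, 10.0, 4096)
ls = np.sin(2 * np.pi * 0.3 * t)                    # slow LS fluctuation
ss = (1.0 + 0.5 * ls) * np.sin(2 * np.pi * 40 * t)  # modulated SS carrier
ss = ss + 0.05 * rng.standard_normal(t.size)

# Envelope as the modulus of the analytic signal, then low-pass filtered
# so that only large-scale envelope variations remain.
envelope = np.abs(hilbert(ss))
b, a = butter(4, 0.01)            # 4th-order low-pass, assumed cutoff
envelope_lp = filtfilt(b, a, envelope)

# The filtered envelope should track the imposed LS modulation.
corr = np.corrcoef(envelope_lp, 1.0 + 0.5 * ls)[0, 1]
print(f"correlation with imposed modulation: {corr:.2f}")
```

On a symmetric toy signal like this the procedure works well; the abstract's point is that for real wall turbulence, where the SS PDF is skewed differently under sweeps and ejections, the low-pass step discards exactly that asymmetry.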
Krasny-Pacini, A; Pauly, F; Hiebel, J; Godon, S; Isner-Horobeti, M-E; Chevignard, M
2017-07-01
Goal Attainment Scaling (GAS) is a method for writing personalized evaluation scales to quantify progress toward defined goals. It is useful in rehabilitation but is hampered by the experience required to adequately "predict" the possible outcomes relating to a particular goal before treatment and the time needed to describe all 5 levels of the scale. Here we aimed to investigate the feasibility of using GAS in a clinical setting of a pediatric spasticity clinic with a shorter method, the "3-milestones" GAS (goal setting with 3 levels and goal rating with the classical 5 levels). Secondary aims were to (1) analyze the types of goals children's therapists set for botulinum toxin treatment and (2) compare the score distribution (and therefore the ability to predict outcome) by goal type. Therapists were trained in GAS writing and prepared GAS scales in the regional spasticity-management clinic they attended with their patients and families. The study included all GAS scales written during a 2-year period. GAS score distribution across the 5 GAS levels was examined to assess whether the therapist could reliably predict outcome and whether the 3-milestones GAS yielded similar distributions as the original GAS method. In total, 541 GAS scales were written and showed the expected score distribution. Most scales (55%) referred to movement quality goals and fewer (29%) to family goals and activity domains. The 3-milestones GAS method was feasible within the time constraints of the spasticity clinic and could be used by local therapists in cooperation with the hospital team. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Asperger, R.G.
1986-09-01
A new test method is described that allows the rapid field testing of calcium carbonate scale inhibitors at 500°F (260°C). The method evolved from use of a full-flow test loop on a well with a mass flow rate of about 1 × 10⁶ lbm/hr (126 kg/s). It is a simple, effective way to evaluate the effectiveness of inhibitors under field conditions. Five commercial formulations were chosen for field evaluation on the basis of nonflowing, laboratory screening tests at 500°F (260°C). Four of these formulations from different suppliers controlled calcium carbonate scale deposition as measured by the test method. Two of these could dislodge recently deposited scale that had not age-hardened. Performance-profile diagrams, which were measured for these four effective inhibitors, show the concentration interrelationship between brine calcium and inhibitor concentrations at which the formulations will and will not stop scale formation in the test apparatus. With these diagrams, one formulation was chosen for testing on the full-flow brine line. The composition was tested for 6 weeks and showed a dramatic decrease in the scaling occurring at the flow-control valve. This scaling was about to force a shutdown of a major, long-term flow test being done for reservoir economic evaluations. The inhibitor stopped the scaling, and the test was performed without interruption.
Fongkaew, Warunee; Viseskul, Nongkran; Suksatit, Benjamas; Settheekul, Saowaluck; Chontawan, Ratanawadee; Grimes, Richard M; Grimes, Deanna E
2014-01-01
HIV/AIDS-related stigma has been linked to poor adherence resulting in drug resistance and the failure to control HIV. This study used both quantitative and qualitative methods to examine stigma and its relationship to adherence in 30 HIV-infected Thai youth aged 14 to 21 years. Stigma was measured using the HIV stigma scale and its 4 subscales, and adherence was measured using a visual analog scale. Stigma and adherence were also examined by in-depth interviews. The interviews were conducted to determine whether verbal responses would match the scale's results. The mean score of stigma perception from the overall scale and its 4 subscales ranged from 2.14 to 2.45 on a scale of 1 to 4, indicating moderate levels of stigma. The mean adherence score was .74. The stigma scale and its subscales did not correlate with adherence. In total, 17 of the respondents were interviewed. Contrary to the quantitative results, the interviewees reported that stigma led to poor adherence because the fear of disclosure often caused them to miss medication doses. The differences between the quantitative and the qualitative results highlight the importance of validating psychometric scales when they are translated and used in other cultures.
A method for high accuracy determination of equilibrium relative humidity
DEFF Research Database (Denmark)
Jensen, O.M.
2012-01-01
This paper treats a new method for measuring equilibrium relative humidity and equilibrium dew-point temperature of a material sample. The developed measuring device is described – a Dew-point Meter – which by means of so-called Dynamic Dew-point Analysis permits quick and very accurate...
Integrating Expressive Methods in a Relational Psychotherapy
Directory of Open Access Journals (Sweden)
Richard G. Erskine
2011-06-01
Full Text Available Therapeutic Involvement is an integral part of all effective psychotherapy. This article is written to illustrate the concept of Therapeutic Involvement in working within a therapeutic relationship (within the transference) and with active expressive and experiential methods to resolve traumatic experiences, relational disturbances and life-shaping decisions.
Task-Management Method Using R-Tree Spatial Cloaking for Large-Scale Crowdsourcing
Directory of Open Access Journals (Sweden)
Yan Li
2017-12-01
Full Text Available With the development of sensor technology and the popularization of the data-driven service paradigm, spatial crowdsourcing systems have become an important way of collecting map-based location data. However, large-scale task management and location privacy are important factors for participants in spatial crowdsourcing. In this paper, we propose the use of an R-tree spatial cloaking-based task-assignment method for large-scale spatial crowdsourcing. We use an estimated R-tree based on the requested crowdsourcing tasks to reduce the crowdsourcing server-side insertion cost and enable scalability. By using Minimum Bounding Rectangle (MBR)-based spatially anonymized data without exact position data, this method preserves the location privacy of participants in a simple way. In our experiment, we showed that our proposed method is faster than the current method and is very efficient as the scale increases.
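The cloaking primitive underlying the method, replacing exact positions with a Minimum Bounding Rectangle over a group of nearby locations, is simple to express. The grouping and coordinates below are hypothetical, and the R-tree construction itself is omitted.

```python
# Minimal MBR-based cloaking sketch: the server receives only the bounding
# rectangle of a group of locations, never an exact participant position.
def minimum_bounding_rectangle(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

# Hypothetical group of nearby task locations (x, y).
group = [(3.2, 1.5), (3.8, 1.1), (3.5, 2.0), (4.0, 1.7)]
(lo_x, lo_y), (hi_x, hi_y) = minimum_bounding_rectangle(group)
print(f"cloaked region: x in [{lo_x}, {hi_x}], y in [{lo_y}, {hi_y}]")
```

In an R-tree these rectangles double as the index's node keys, which is why MBR-based anonymization composes naturally with the tree-based task assignment the paper describes.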
International Nuclear Information System (INIS)
Comerford, Julia M.; Moustakas, Leonidas A.; Natarajan, Priyamvada
2010-01-01
Scaling relations of observed galaxy cluster properties are useful tools for constraining cosmological parameters as well as cluster formation histories. One of the key cosmological parameters, σ8, is constrained using observed clusters of galaxies, although current estimates of σ8 from the scaling relations of dynamically relaxed galaxy clusters are limited by the large scatter in the observed cluster mass-temperature (M-T) relation. With a sample of eight strong lensing clusters at 0.3 8 , but combining the cluster concentration-mass relation with the M-T relation enables the inclusion of unrelaxed clusters as well. Thus, the resultant gains in the accuracy of σ8 measurements from clusters are twofold: the errors on σ8 are reduced and the cluster sample size is increased. Therefore, the statistics on σ8 determination from clusters are greatly improved by the inclusion of unrelaxed clusters. Exploring cluster scaling relations further, we find that the correlation between brightest cluster galaxy (BCG) luminosity and cluster mass offers insight into the assembly histories of clusters. We find preliminary evidence for a steeper BCG luminosity-cluster mass relation for strong lensing clusters than the general cluster population, hinting that strong lensing clusters may have had more active merging histories.
Developing an Assessment Method of Active Aging: University of Jyvaskyla Active Aging Scale.
Rantanen, Taina; Portegijs, Erja; Kokko, Katja; Rantakokko, Merja; Törmäkangas, Timo; Saajanaho, Milla
2018-01-01
To develop an assessment method of active aging for research on older people. A multiphase process that included drafting by an expert panel, a pilot study for item analysis and scale validity, a feedback study with focus groups and questionnaire respondents, and a test-retest study. Altogether 235 people aged 60 to 94 years provided responses and/or feedback. We developed a 17-item University of Jyvaskyla Active Aging Scale with four aspects in each item (goals, ability, opportunity, and activity; range 0-272). The psychometric and item properties are good and the scale assesses a unidimensional latent construct of active aging. Our scale assesses older people's striving for well-being through activities pertaining to their goals, abilities, and opportunities. The University of Jyvaskyla Active Aging Scale provides a quantifiable measure of active aging that may be used in postal questionnaires or interviews in research and practice.
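The stated 0-272 range is consistent with 17 items scored on four aspects of 0-4 each (17 × 4 × 4 = 272); that per-aspect range is an inference from the abstract, not a documented scoring rule. A hypothetical scoring sketch:

```python
# Hypothetical scoring for a 17-item scale with four aspects per item.
# The 0-4 per-aspect range is inferred from the reported 0-272 total.
ASPECTS = ("goals", "ability", "opportunity", "activity")

def total_score(responses):
    """responses: 17 dicts, each mapping the four aspects to scores 0-4."""
    if len(responses) != 17:
        raise ValueError("expected 17 item responses")
    return sum(item[a] for item in responses for a in ASPECTS)

example = [{"goals": 3, "ability": 2, "opportunity": 4, "activity": 3}] * 17
print(total_score(example))  # (3+2+4+3) * 17 = 204
```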
Method of producing exfoliated graphite, flexible graphite, and nano-scaled graphene platelets
Zhamu, Aruna; Shi, Jinjun; Guo, Jiusheng; Jang, Bor Z.
2010-11-02
The present invention provides a method of exfoliating a layered material (e.g., graphite and graphite oxide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of graphite, graphite oxide, or a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites. Nano-scaled graphene platelets are much lower-cost alternatives to carbon nano-tubes or carbon nano-fibers.
Worldwide FST estimates relative to five continental-scale populations.
Steele, Christopher D; Court, Denise Syndercombe; Balding, David J
2014-11-01
We estimate the population genetics parameter FST (also referred to as the fixation index) from short tandem repeat (STR) allele frequencies, comparing many worldwide human subpopulations at approximately the national level with continental-scale populations. FST is commonly used to measure population differentiation, and is important in forensic DNA analysis to account for remote shared ancestry between a suspect and an alternative source of the DNA. We estimate FST comparing subpopulations with a hypothetical ancestral population, which is the approach most widely used in population genetics, and also compare a subpopulation with a sampled reference population, which is more appropriate for forensic applications. Both estimation methods are likelihood-based, in which FST is related to the variance of the multinomial-Dirichlet distribution for allele counts. Overall, we find low FST values; the posterior 97.5th percentile estimates are about half the magnitude of STR-based estimates from population genetics surveys that focus on distinct ethnic groups rather than a general population. Our findings support the use of FST values up to 3% in forensic calculations, which corresponds to some current practice.
Directory of Open Access Journals (Sweden)
Ni An
2017-04-01
Full Text Available When modeling the soil/atmosphere interaction, it is of paramount importance to determine the net radiation flux. There are two common calculation methods for this purpose. Method 1 relies on the use of air temperature, while Method 2 relies on the use of both air and soil temperatures. To date, there has been no consensus on the application of these two methods. In this study, the half-hourly solar radiation data recorded at an experimental embankment are used to calculate the net radiation and long-wave radiation at different time scales (half-hourly, hourly, and daily) using the two methods. The results show that, compared with Method 2, which has been widely adopted in agronomical, geotechnical and geo-environmental applications, Method 1 is more feasible owing to its simplicity and accuracy at shorter time scales. Moreover, at longer time scales (daily, for instance), smaller variations of net radiation and long-wave radiation are obtained, suggesting that detailed soil temperature variations cannot be captured. In other words, shorter time scales are preferred in determining the net radiation flux.
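The smoothing effect of coarser time scales noted above can be illustrated by aggregating a synthetic half-hourly series to daily means; the diurnal model and noise level below are assumptions, not the embankment measurements.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# One week of synthetic half-hourly net-radiation-like data: a daytime
# sinusoidal bump (W/m^2) plus measurement noise. Parameters are assumed.
idx = pd.date_range("2017-04-01", periods=48 * 7, freq="30min")
hours = np.asarray(idx.hour + idx.minute / 60.0)
rn = np.maximum(0.0, 400.0 * np.sin(np.pi * (hours - 6.0) / 12.0))
rn = rn + rng.normal(0.0, 20.0, len(idx))
series = pd.Series(rn, index=idx)

# Averaging to the daily scale removes most of the short-term variation,
# mirroring the loss of sub-daily detail discussed above.
daily = series.resample("D").mean()
print(f"half-hourly std: {series.std():.1f}  daily std: {daily.std():.1f}")
```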
A new method to determine large scale structure from the luminosity distance
International Nuclear Information System (INIS)
Romano, Antonio Enea; Chiang, Hsu-Wen; Chen, Pisin
2014-01-01
The luminosity distance can be used to determine the properties of large scale structure around the observer. To this purpose we develop a new inversion method to map the luminosity distance to a Lemaitre–Tolman–Bondi (LTB) metric based on the use of the exact analytical solution of the Einstein equations. The main advantages of this approach are an improved numerical accuracy and stability, an exact analytical setting of the initial conditions for the differential equations which need to be solved, and validity for any sign of the functions determining the LTB geometry. Given the fully analytical form of the differential equations, this method also simplifies the calculation of the redshift expansion around the apparent horizon point where the numerical solution becomes unstable. We test the method by inverting the supernovae Ia luminosity distance function corresponding to the best fit ΛCDM model. We find that only a limited range of initial conditions is compatible with observations, or a transition from redshift to blueshift can occur at relatively low redshift. Although LTB solutions without a cosmological constant have been shown not to be compatible with all the different sets of available observational data, those studies normally fit data assuming a special functional ansatz for the inhomogeneity profile, which often depends only on a few parameters. Inversion methods, on the contrary, are able to fully explore the freedom in fixing the functions which determine an LTB solution. Another important possible application concerns LTB solutions not as cosmological models, but rather as tools to study the effects on observations made by a generic observer located in an inhomogeneous region of the Universe, where a fully non-perturbative treatment involving exact solutions of the Einstein equations is required.
On the mass-coupling relation of multi-scale quantum integrable models
Energy Technology Data Exchange (ETDEWEB)
Bajnok, Zoltán; Balog, János [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary); Ito, Katsushi [Department of Physics, Tokyo Institute of Technology,2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 (Japan); Satoh, Yuji [Institute of Physics, University of Tsukuba,1-1-1 Tennodai, Tsukuba, Ibaraki 305-8571 (Japan); Tóth, Gábor Zsolt [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary)
2016-06-13
We determine exactly the mass-coupling relation for the simplest multi-scale quantum integrable model, the homogeneous sine-Gordon model with two independent mass scales. We first reformulate its perturbed coset CFT description in terms of the perturbation of a projected product of minimal models. This representation enables us to identify conserved tensor currents on the UV side. These UV operators are then mapped via form factor perturbation theory to operators on the IR side, which are characterized by their form factors. The relation between the UV and IR operators is given in terms of the sought-for mass-coupling relation. By generalizing the Θ sum rule Ward identity we are able to derive differential equations for the mass-coupling relation, which we solve in terms of hypergeometric functions. We check these results against the data obtained by numerically solving the thermodynamic Bethe Ansatz equations, and find complete agreement.
Tomas, Jose M.; Oliver, Amparo; Galiana, Laura; Sancho, Patricia; Lila, Marisol
2013-01-01
Several investigators have interpreted method effects associated with negatively worded items in a substantive way. This research extends those studies in different ways: (a) it establishes the presence of methods effects in further populations and particular scales, and (b) it examines the possible relations between a method factor associated…
Methods and systems relating to an augmented virtuality environment
Nielsen, Curtis W; Anderson, Matthew O; McKay, Mark D; Wadsworth, Derek C; Boyce, Jodie R; Hruska, Ryan C; Koudelka, John A; Whetten, Jonathan; Bruemmer, David J
2014-05-20
Systems and methods relating to an augmented virtuality system are disclosed. A method of operating an augmented virtuality system may comprise displaying imagery of a real-world environment in an operating picture. The method may further include displaying a plurality of virtual icons in the operating picture representing at least some assets of a plurality of assets positioned in the real-world environment. Additionally, the method may include displaying at least one virtual item in the operating picture representing data sensed by one or more of the assets of the plurality of assets and remotely controlling at least one asset of the plurality of assets by interacting with a virtual icon associated with the at least one asset.
International Nuclear Information System (INIS)
Gulov, A.V.; Skalozub, V.V.
2000-01-01
In the Yukawa model with two different mass scales, the renormalization group equation is used to obtain relations between scattering amplitudes at low energies. Taking fermion-fermion scattering as an example, a basic one-loop renormalization group relation is derived which makes it possible to reduce the problem to the scattering of light particles on an external field substituting for a heavy virtual state. Applications of the results to the problem of searching for new physics beyond the Standard Model are discussed.
The Work-Related Quality of Life Scale for Higher Education Employees
Edwards, Julian A.; Van Laar, Darren; Easton, Simon; Kinman, Gail
2009-01-01
Previous research suggests that higher education employees experience comparatively high levels of job stress. A range of instruments, both generic and job-specific, has been used to measure stressors and strains in this occupational context. The Work-related Quality of Life (WRQoL) scale is a measure designed to capture perceptions of the working…
Planck early results. XII. Cluster Sunyaev-Zeldovich optical scaling relations
DEFF Research Database (Denmark)
Poutanen, T.; Natoli, P.; Polenta, G.
2011-01-01
We present the Sunyaev-Zeldovich (SZ) signal-to-richness scaling relation (Y500 - N200) for the MaxBCG cluster catalogue. Employing a multi-frequency matched filter on the Planck sky maps, we measure the SZ signal for each cluster by adapting the filter according to weak-lensing calibrated mass-r...
The spatial extent of rainfall events and its relation to precipitation scaling
Lochbihler, K.U.; Lenderink, Geert; Siebesma, A.P.
2017-01-01
Observations show that subdaily precipitation extremes increase with dew point temperature at a rate exceeding the Clausius-Clapeyron (CC) relation. The understanding of this so-called super CC scaling is still incomplete, and observations of convective cell properties could provide important
DEFF Research Database (Denmark)
Rapetti Serra, David Angelo
Using a data set of 238 cluster detections drawn from the ROSAT All-Sky Survey and X-ray follow-up observations from the Chandra X-ray Observatory and/or ROSAT for 94 of those clusters we obtain tight constraints on dark energy, both luminosity-mass and temperature-mass scaling relations, neutrin...
Intrinsic symmetry of the scaling laws and generalized relations for critical indices
International Nuclear Information System (INIS)
Plechko, V.N.
1982-01-01
It is shown that the scaling laws for critical indices can be expressed as a consequence of a simple symmetry principle. Heuristic relations for critical indices generalizing the scaling laws to the case of arbitrary order parameters are presented; these relations manifestly have a symmetric form and include the standard scaling laws as a particular case.
How covalence breaks adsorption-energy scaling relations and solvation restores them
DEFF Research Database (Denmark)
Vallejo, Federico Calle; Krabbe, Alexander; García Lastra, Juan Maria
2017-01-01
It is known that breaking the scaling relations between the adsorption energies of *O, *OH, and *OOH is paramount in catalyzing more efficiently the reduction of O2 in fuel cells and its evolution in electrolyzers. Taking metalloporphyrins as a case study, we evaluate here the adsorption energies...
Scaling of lifting forces in relation to object size in whole body lifting
Kingma, I.; van Dieen, J.H.; Toussaint, H.M.
2005-01-01
Subjects prepare for a whole body lifting movement by adjusting their posture and scaling their lifting forces to the expected object weight. The expectancy is based on visual and haptic size cues. This study aimed to find out whether lifting force overshoots related to object size cues disappear or
An empirical velocity scale relation for modelling a design of large mesh pelagic trawl
Ferro, R.S.T.; Marlen, van B.; Hansen, K.E.
1996-01-01
Physical models of fishing nets are used in fishing technology research at scales of 1:40 or smaller. As with all modelling involving fluid flow, a set of rules is required to determine the geometry of the model and its velocity relative to the water. Appropriate rules ensure that the model is
A family of conjugate gradient methods for large-scale nonlinear equations
Directory of Open Access Journals (Sweden)
Dexiang Feng
2017-09-01
In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, it needs low storage and the subproblem can be easily solved. Compared with the existing solution methods for solving the problem, its global convergence is established without the restriction of the Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
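The projection scheme described in the abstract can be sketched in a few lines. The following is a generic derivative-free conjugate gradient projection method for monotone equations (a Solodov–Svaiter-style hyperplane projection with a PRP-type direction); the parameter values and the test mapping F(x) = x + sin(x) are illustrative assumptions, not the specific family proposed by the authors.

```python
import numpy as np

def F(x):
    # A simple monotone mapping with root x = 0 (for demonstration only).
    return x + np.sin(x)

def cg_projection_solve(F, x0, tol=1e-8, max_iter=1000, sigma=1e-4, rho=0.5):
    """Derivative-free CG projection method for monotone equations F(x) = 0.

    Sketch of the general scheme: a line search finds a point z whose
    residual defines a hyperplane separating x from the solution set;
    the next iterate is the projection of x onto that hyperplane.
    """
    x = x0.astype(float)
    Fx = F(x)
    d = -Fx
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        # Backtracking line search: find z = x + alpha*d with
        # -F(z)^T d >= sigma * alpha * ||d||^2.
        alpha = 1.0
        while True:
            z = x + alpha * d
            Fz = F(z)
            if -Fz @ d >= sigma * alpha * (d @ d) or alpha < 1e-12:
                break
            alpha *= rho
        if np.linalg.norm(Fz) <= tol:
            x, Fx = z, Fz
            break
        # Hyperplane projection step.
        x_new = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
        Fx_new = F(x_new)
        # PRP-type conjugate gradient direction with a nonnegativity safeguard.
        beta = max(0.0, Fx_new @ (Fx_new - Fx) / (Fx @ Fx))
        d = -Fx_new + beta * d
        if Fx_new @ d > -1e-10 * (Fx_new @ Fx_new):  # restart if not descent-like
            d = -Fx_new
        x, Fx = x_new, Fx_new
    return x

x = cg_projection_solve(F, np.full(5, 2.0))
print(np.linalg.norm(F(x)))  # small residual near the root x = 0
```

As in the abstract, the scheme stores only a few vectors per iteration and uses no Jacobian, only evaluations of F.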
Galerkin projection methods for solving multiple related linear systems
Energy Technology Data Exchange (ETDEWEB)
Chan, T.F.; Ng, M.; Wan, W.L.
1996-12-31
We consider using Galerkin projection methods for solving multiple related linear systems A{sup (i)}x{sup (i)} = b{sup (i)} for 1 {le} i {le} s, where A{sup (i)} and b{sup (i)} are different in general. We start with the special case where A{sup (i)} = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A{sup (i)}x{sup (i)} = b{sup (i)}, where A{sup (i)} are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
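The seed procedure for the special case A(i) = A with A symmetric positive definite can be illustrated as follows. This is a minimal sketch: plain CG on the seed system records its search directions, and a remaining right-hand side is handled by a Galerkin projection onto the span of those directions. Matrix sizes, tolerances and the test matrix are arbitrary choices for the demonstration.

```python
import numpy as np

def cg_with_directions(A, b, tol=1e-10, max_iter=None):
    """Plain CG on an SPD system, recording the search directions."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    dirs = []
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        dirs.append(p.copy())
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, np.column_stack(dirs)

def galerkin_project(A, P, b):
    """Galerkin approximation from the seed Krylov basis P:
    x = P y with P^T (b - A P y) = 0."""
    y = np.linalg.solve(P.T @ A @ P, P.T @ b)
    return P @ y

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)                       # SPD test matrix
b_seed = rng.standard_normal(50)
b_other = b_seed + 0.01 * rng.standard_normal(50)   # nearby right-hand side

x_seed, P = cg_with_directions(A, b_seed)
x_proj = galerkin_project(A, P, b_other)
print(np.linalg.norm(b_other - A @ x_proj))
```

For right-hand sides close to the seed's, the projected residual is typically already small without running CG again; otherwise the procedure in the abstract restarts with an unsolved system as the new seed.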
Pain point system scale (PPSS): a method for postoperative pain estimation in retrospective studies
Directory of Open Access Journals (Sweden)
Gkotsi A
2012-11-01
Anastasia Gkotsi (1), Dimosthenis Petsas (2), Vasilios Sakalis (3), Asterios Fotas (3), Argyrios Triantafyllidis (3), Ioannis Vouros (3), Evangelos Saridakis (2), Georgios Salpiggidis (3), Athanasios Papathanasiou (3). (1) Department of Experimental Physiology, Aristotle University of Thessaloniki, Thessaloniki, Greece; (2) Department of Anesthesiology and (3) Department of Urology, Hippokration General Hospital, Thessaloniki, Greece.
Purpose: Pain rating scales are widely used for pain assessment. Nevertheless, a new tool is required for pain assessment needs in retrospective studies.
Methods: The postoperative pain episodes, during the first postoperative day, of three patient groups were analyzed. Each pain episode was assessed by a visual analog scale, numerical rating scale, verbal rating scale, and a new tool, the pain point system scale (PPSS), based on the analgesics administered. The type of analgesic was defined based on the authors' clinic protocol, patient comorbidities, pain assessment tool scores, and preadministered medications by an artificial neural network system. At each pain episode, each patient was asked to fill in the three pain scales. Bartlett's test and the Kaiser–Meyer–Olkin criterion were used to evaluate sample sufficiency. The proper scoring system was defined by varimax rotation. Spearman's and Pearson's coefficients assessed the correlation of the PPSS with the known pain scales.
Results: A total of 262 pain episodes were evaluated in 124 patients. The PPSS scored one point for each dose of paracetamol, three points for each nonsteroidal antiinflammatory drug or codeine, and seven points for each dose of opioids. The correlation between the visual analog scale and the PPSS was found to be strong and linear (rho: 0.715, P < 0.001; Pearson: 0.631, P < 0.001).
Conclusion: The PPSS correlated well with the known pain scales and could be used safely in the evaluation of postoperative pain in retrospective studies.
Keywords: pain scale, retrospective studies, pain point system
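The scoring rule reported in the Results can be written down directly. The drug-class names and the dose-list input format below are illustrative assumptions; only the point values (1 for paracetamol, 3 for an NSAID or codeine, 7 for an opioid) come from the abstract.

```python
# Illustrative point values taken from the abstract: 1 point per dose of
# paracetamol, 3 per NSAID or codeine dose, 7 per opioid dose.
PPSS_POINTS = {"paracetamol": 1, "nsaid": 3, "codeine": 3, "opioid": 7}

def ppss_score(doses):
    """Sum PPSS points over the analgesic doses given in one 24 h period.

    `doses` is a list of drug-class names, one entry per administered dose
    (a hypothetical input format for this sketch).
    """
    return sum(PPSS_POINTS[d.lower()] for d in doses)

# A patient given two paracetamol doses, one NSAID dose and one opioid dose:
print(ppss_score(["paracetamol", "paracetamol", "nsaid", "opioid"]))  # 12
```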
Quantification of organ motion based on an adaptive image-based scale invariant feature method
Energy Technology Data Exchange (ETDEWEB)
Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)
2013-11-15
Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method's invariance and robustness to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak-to-peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT.
NGC 1275: An Outlier of the Black Hole-Host Scaling Relations
Directory of Open Access Journals (Sweden)
Eleonora Sani
2018-02-01
The active galaxy NGC 1275 lies at the center of the Perseus cluster of galaxies, an archetypal BH-galaxy system that is supposed to fit well with the MBH-host scaling relations obtained for quiescent galaxies. Because it harbors an obscured AGN, our group was able to estimate its black hole mass only recently. Here our aim is to pinpoint NGC 1275 on the less dispersed scaling relations, namely the MBH-σ⋆ and MBH-Lbul planes. Starting from our previous work (Ricci et al., 2017a), we estimate that NGC 1275 falls well outside the intrinsic dispersion of the MBH-σ⋆ plane, being displaced by 1.2 dex in black hole mass with respect to the scaling relation. We then perform a 2D morphological decomposition analysis on Spitzer/IRAC images at 3.6 μm and find that, beyond the bright compact nucleus that dominates the central emission, NGC 1275 follows a de Vaucouleurs profile with no sign of significant star formation nor clear merger remnants. Nonetheless, its displacement on the MBH-L3.6,bul plane with respect to the scaling relation is as high as that observed in the MBH-σ⋆ plane. We explore various scenarios to interpret these behaviors, of which the most realistic one is an evolutionary pattern followed by NGC 1275 as it approaches the scaling relation. We indeed speculate that NGC 1275 might be a specimen of those galaxies in which the black hole adjusted to its host.
Development and psychometric testing of the Nursing Workplace Relational Environment Scale (NWRES).
Duddle, Maree; Boughton, Maureen
2009-03-01
The aim of this study was to develop and test the psychometric properties of the Nursing Workplace Relational Environment Scale (NWRES). A positive relational environment in the workplace is characterised by a sense of connectedness and belonging, support and cooperation among colleagues, open communication and effectively managed conflict. A poor relational environment in the workplace may contribute to job dissatisfaction and early turnover of staff. Quantitative survey. A three-stage process was used to design and test the NWRES. In Stage 1, an extensive literature review was conducted on professional working relationships and the nursing work environment. Three key concepts, collegiality, workplace conflict and job satisfaction, were identified and defined. In Stage 2, a pool of items was developed from the dimensions of each concept and formulated into a 35-item scale, which was piloted on a convenience sample of 31 nurses. In Stage 3, the newly refined 28-item scale was administered randomly to a convenience sample of 150 nurses. Psychometric testing was conducted to establish the construct validity and reliability of the scale. Exploratory factor analysis resulted in a 22-item scale. The factor analysis indicated a four-factor structure: collegial behaviours, relational atmosphere, outcomes of conflict and job satisfaction, which explained 68.12% of the total variance. Cronbach's alpha coefficient for the NWRES was 0.872, and the subscales ranged from 0.781 to 0.927. The results of the study confirm the reliability and validity of the NWRES. Replication of this study with a larger sample is indicated to determine relationships among the subscales. The results of this study have implications for health managers in terms of understanding the impact of the relational environment of the workplace on job satisfaction and retention.
Mathematical modeling of a method for measuring relative dielectric permittivity
Plotnikova, I. V.; Chicherina, N. V.; Stepanov, A. B.
2018-05-01
A method for measuring relative permittivities and the position of the interface between layers of a liquid medium is considered in the article. An electric capacitor is a system consisting of two conductors separated by a dielectric layer. It is mathematically proven that at any given time it is possible to obtain the values of the relative permittivity in the layers of the liquid medium and to determine the level of the interface between the layers of a two-layer liquid. An estimate of the measurement errors is given.
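As a toy version of such a measurement, the series-capacitor formula for a two-layer dielectric can be inverted for the interface level. The parallel-plate geometry, example permittivities and dimensions below are assumptions for illustration; the paper's actual sensor model and error analysis are not reproduced.

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def two_layer_capacitance(area, d1, eps1, d2, eps2):
    """Capacitance of a parallel-plate capacitor filled with two dielectric
    layers in series (thicknesses d1, d2; relative permittivities eps1, eps2)."""
    return EPS0 * area / (d1 / eps1 + d2 / eps2)

def interface_level(area, d_total, eps1, eps2, c_measured):
    """Invert the series formula for the thickness d1 of the lower layer,
    given the measured capacitance.

    C = EPS0*A / (d1/eps1 + (d_total - d1)/eps2), solved for d1.
    """
    inv = EPS0 * area / c_measured  # = d1/eps1 + (d_total - d1)/eps2
    return (inv - d_total / eps2) / (1.0 / eps1 - 1.0 / eps2)

# Round-trip check: oil (eps ~ 2.2) under water (eps ~ 80), 1 m electrode gap.
c = two_layer_capacitance(0.01, 0.3, 2.2, 0.7, 80.0)
print(interface_level(0.01, 1.0, 2.2, 80.0, c))  # recovers d1 ≈ 0.3
```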
Wave-particle duality through an extended model of the scale relativity theory
International Nuclear Information System (INIS)
Ioannou, P D; Nica, P; Agop, M; Paun, V; Vizureanu, P
2008-01-01
Considering that the chaotic effect of the associated wave packet on the particle itself results in motion on fractal (continuous and non-differentiable) curves of fractal dimension D_F, wave-particle duality is obtained through an extension of the scale relativity theory. From an equation of motion for the complex speed field it results that, in a fractal fluid, convection, dissipation and dispersion are reciprocally compensating at any scale (differentiable or non-differentiable). From here, for an irrotational movement, a generalized Schroedinger equation is obtained. The absence of dispersion implies a generalized Navier-Stokes type equation, whereas, for irrotational movement and fractal dimension D_F = 2, the usual Schroedinger equation results. The absence of dissipation implies a generalized Korteweg-de Vries type equation. In this framework, at the differentiable scale, the duality is achieved through the flowing regimes of the fractal fluid, i.e. the wave character by means of the non-quasi-autonomous flowing regime and the particle character by means of the quasi-autonomous flowing regime. These flowing regimes are separated by the '0.7 structure'. At the non-differentiable scale, a fractal potential acts as an energy accumulator and controls the duality through coherence. The correspondence between the differentiable and non-differentiable scales implies a Cantor space-time. Moreover, the wave-particle duality implies a fractal structure at any scale.
Scale-Dependent Assessment of Relative Disease Resistance to Plant Pathogens
Directory of Open Access Journals (Sweden)
Peter Skelsey
2014-03-01
Phenotyping trials may not take into account sufficient spatial context to infer quantitative disease resistance of recommended varieties in commercial production settings. Recent ecological theory (the dispersal scaling hypothesis) provides evidence that host heterogeneity and the scale of host heterogeneity interact in a predictable and straightforward manner to produce a unimodal ("humpbacked") distribution of epidemic outcomes. This suggests that the intrinsic artificiality (scale and design) of experimental set-ups may lead to spurious conclusions regarding the resistance of selected elite cultivars, due to the failure of experimental efforts to accurately represent disease pressure in real agricultural situations. In this model-based study we investigate the interaction of host heterogeneity and scale as a confounding factor in the inference from ex-situ assessment of quantitative disease resistance to commercial production settings. We use standard modelling approaches in plant disease epidemiology and a number of different agronomic scenarios. Model results revealed that the interaction of heterogeneity and scale is a determinant of relative varietal performance under epidemic conditions. This is a previously unreported phenomenon that could provide a new basis for informing the design of future phenotyping platforms, and optimising the scale at which quantitative disease resistance is assessed.
Truong, N.; Rasia, E.; Mazzotta, P.; Planelles, S.; Biffi, V.; Fabjan, D.; Beck, A. M.; Borgani, S.; Dolag, K.; Gaspari, M.; Granato, G. L.; Murante, G.; Ragone-Figueroa, C.; Steinborn, L. K.
2018-03-01
We analyse cosmological hydrodynamical simulations of galaxy clusters to study the X-ray scaling relations between total masses and observable quantities such as X-ray luminosity, gas mass, X-ray temperature, and YX. Three sets of simulations are performed with an improved version of the smoothed particle hydrodynamics GADGET-3 code. These consider the following: non-radiative gas, star formation and stellar feedback, and the addition of feedback by active galactic nuclei (AGN). We select clusters with M500 > 10^14 M⊙ E(z)^-1, mimicking the typical selection of Sunyaev-Zeldovich samples. This permits a mass range large enough to enable robust fitting of the relations even at z ~ 2. The results of the analysis show a general agreement with observations. The values of the slope of the mass-gas mass and mass-temperature relations at z = 2 are 10 per cent lower with respect to z = 0, due to the applied mass selection in the former case and to the effect of early mergers in the latter. We investigate the impact of the slope variation on the study of the evolution of the normalization. We conclude that cosmological studies through scaling relations should be limited to the redshift range z = 0-1, where we find that the slope, the scatter, and the covariance matrix of the relations are stable. The scaling between mass and YX is confirmed to be the most robust relation, being almost independent of the gas physics. At higher redshifts, the scaling relations are sensitive to the inclusion of AGNs, which influences low-mass systems. The detailed study of these objects will be crucial to evaluate the AGN effect on the ICM.
Bethmann, F.
2011-03-22
Theoretical considerations and empirical regressions show that, in the magnitude range between 3 and 5, local magnitude, ML, and moment magnitude, Mw, scale 1:1. Previous studies suggest that for smaller magnitudes this 1:1 scaling breaks down. However, the scatter between ML and Mw at small magnitudes is usually large and the resulting scaling relations are therefore uncertain. In an attempt to reduce these uncertainties, we first analyze the ML versus Mw relation based on 195 events, induced by the stimulation of a geothermal reservoir below the city of Basel, Switzerland. Values of ML range from 0.7 to 3.4. From these data we derive a scaling of ML ~ 1.5Mw over the given magnitude range. We then compare peak Wood-Anderson amplitudes to the low-frequency plateau of the displacement spectra for six sequences of similar earthquakes in Switzerland in the range of 0.5 ≤ ML ≤ 4.1. Because effects due to the radiation pattern and to the propagation path between source and receiver are nearly identical at a particular station for all events in a given sequence, the scatter in the data is substantially reduced. Again we obtain a scaling equivalent to ML ~ 1.5Mw. Based on simulations using synthetic source time functions for different magnitudes and Q values estimated from spectral ratios between downhole and surface recordings, we conclude that the observed scaling can be explained by attenuation and scattering along the path. Other effects that could explain the observed magnitude scaling, such as a possible systematic increase of stress drop or rupture velocity with moment magnitude, are masked by attenuation along the path.
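The kind of regression behind a statement like ML ~ 1.5 Mw can be reproduced on synthetic data. The catalogue below is simulated (the number of events and the scatter level are arbitrary assumptions), not the Basel data set.

```python
import numpy as np

# Simulated magnitude pairs scattered around ML = 1.5 * Mw, then an
# ordinary-least-squares fit to recover the slope. Illustrative only.
rng = np.random.default_rng(1)
mw = rng.uniform(0.5, 3.0, 200)              # moment magnitudes
ml = 1.5 * mw + rng.normal(0.0, 0.1, 200)    # local magnitudes with scatter

slope, intercept = np.polyfit(mw, ml, 1)
print(round(slope, 2))
```

In real catalogues the scatter is far larger, which is why the abstract works with sequences of similar earthquakes at a common station to suppress path and radiation-pattern effects before fitting.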
International Nuclear Information System (INIS)
Pellarin, D.J.; Bickford, D.F.
1985-01-01
This report describes the test equipment and methods, and documents the results of the first large-scale MCC-1 experiments in the Large Scale Leach Test Facility (LSLTF). Two experiments were performed using 1-ft-long samples sectioned from the middle of canister MS-11. The leachant used in the experiments was ultrapure deionized water - an aggressive and well characterized leachant providing high sensitivity for liquid sample analyses. All the original test plan objectives have been successfully met. Equipment and procedures have been developed for large-sample-size leach testing. The statistical reliability of the method has been determined, and "benchmark" data developed to relate small-scale leach testing to full-size waste forms. The facility is unique, and provides sampling reliability and flexibility not possible in smaller laboratory-scale tests. Future use of this facility should simplify and accelerate the development of leaching models and repository-specific data. The factor of less than 3 for leachability, corresponding to a 200,000:1 increase in sample volume, enhances the credibility of the small-scale test data which preceded this work, and supports the ability of the DWPF waste form to meet repository criteria.
Effect of primordial non-Gaussianities on galaxy clusters scaling relations
Trindade, A. M. M.; da Silva, Antonio
2017-07-01
Galaxy clusters are a valuable source of cosmological information. Their formation and evolution depend on the underlying cosmology and on the statistical nature of the primordial density fluctuations. Here we investigate the impact of primordial non-Gaussianities (PNG) on the scaling properties of galaxy clusters. We performed a series of hydrodynamic N-body simulations featuring adiabatic gas physics and different levels of non-Gaussianity within the Λ cold dark matter framework. We focus on the T-M, S-M, Y-M and YX-M scalings relating the total cluster mass with temperature, entropy and Sunyaev-Zel'dovich integrated pressure, which reflect the thermodynamic state of the intracluster medium. Our results show that PNG have an impact on cluster scaling laws. The mass power-law indices of the scalings are almost unaffected by the existence of PNG, but the amplitude and redshift evolution of their normalizations are clearly affected. Changes in the Y-M and YX-M normalizations are as high as 22 per cent and 16 per cent, respectively, when fNL varies from -500 to 500. Results are consistent with the view that positive/negative fNL affect cluster profiles due to an increase/decrease of cluster concentrations. At low values of fNL, as suggested by present Planck constraints on a scale-invariant fNL, the impact on the scaling normalizations is only a few per cent. However, if fNL varies with scale, PNG may have larger amplitudes at cluster scales; thus, our results suggest that PNG should be taken into account when cluster data are used to infer or forecast cosmological parameters from existing or future cluster surveys.
Stress and adhesion of chromia-rich scales on ferritic stainless steels in relation with spallation
Directory of Open Access Journals (Sweden)
A. Galerie
2004-03-01
Chromia scale spallation during oxidation or cooling of ferritic stainless steels is generally discussed in terms of mechanical stresses induced by volume changes or differential thermal expansion. In the present paper, growth and thermal stress measurements in scales grown on different ferritic steel grades have shown that the main stress accumulation occurs during isothermal scale growth and that thermal stresses are of minor importance. However, when spallation occurs, it is always during cooling. Steel-oxide interface undulation seems to play a major role at this stage, thus relating spallation to the metal's mechanical properties, thickness and surface preparation. A major influence of the minor stabilizing elements of the steels on spallation was observed which could not be related to any difference in stress state. Therefore, an original inverted blister test was developed to derive quantitative values of the metal-oxide adhesion energy. These values clearly confirmed that this parameter is influenced by scale thickness and by minor additions, titanium greatly increasing adhesion whereas niobium decreases it.
International Nuclear Information System (INIS)
Scott, Nicholas; Graham, Alister W.
2013-01-01
We investigate whether or not nuclear star clusters and supermassive black holes (SMBHs) follow a common set of mass scaling relations with their host galaxy's properties, and hence can be considered to form a single class of central massive object (CMO). We have compiled a large sample of galaxies with measured nuclear star cluster masses and host galaxy properties from the literature and fit log-linear scaling relations. We find that nuclear star cluster mass, M_NC, correlates most tightly with the host galaxy's velocity dispersion: log M_NC = (2.11 ± 0.31) log(σ/54) + (6.63 ± 0.09), but has a slope dramatically shallower than the relation defined by SMBHs. We find that the nuclear star cluster mass relations involving host galaxy (and spheroid) luminosity and stellar and dynamical mass intercept with, but are in general shallower than, the corresponding black hole scaling relations. In particular, M_NC ∝ M_Gal,dyn^(0.55±0.15); the nuclear cluster mass is not a constant fraction of its host galaxy or spheroid mass. We conclude that nuclear stellar clusters and SMBHs do not form a single family of CMOs.
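The quoted best-fit relation is straightforward to evaluate numerically. The helper below uses only the central values of the fit (the quoted uncertainties are ignored) and is an illustration, not part of the paper.

```python
import math

def log_nuclear_cluster_mass(sigma_kms):
    """Nuclear star cluster mass (log10, solar masses) from the
    velocity-dispersion relation quoted in the abstract:
    log M_NC = 2.11 * log(sigma / 54) + 6.63 (central values only)."""
    return 2.11 * math.log10(sigma_kms / 54.0) + 6.63

print(round(log_nuclear_cluster_mass(54.0), 2))  # 6.63 at the pivot dispersion
```

The shallow slope (2.11 versus roughly 4-5 for SMBH relations) is the abstract's key point: doubling σ raises M_NC by only about a factor of 4.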
Lu, Feiyu; Yuan, Naiming; Fu, Zuntao; Mao, Jiangyu
2012-10-01
Volatility series (defined as the magnitude of the increments between successive elements) of five different meteorological variables over China are analyzed by means of detrended fluctuation analysis (DFA). Universal scaling behaviors are found in all volatility records, whose scaling exponents follow similar distributions, with similar mean values and standard deviations. To reconfirm the relation between long-range correlations in volatility and nonlinearity in the original series, DFA is also applied to the magnitude records (defined as the absolute values of the original records). The results clearly indicate that the nonlinearity of the original series is more pronounced in the magnitude series.
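A bare-bones DFA-1 implementation illustrates the analysis applied to such records. The window scales, series length and white-noise test input are illustrative choices; uncorrelated noise should give an exponent near 0.5, whereas long-range-correlated volatility yields a larger exponent.

```python
import numpy as np

def dfa(series, scales, order=1):
    """Detrended fluctuation analysis (standard DFA-1 sketch).

    Returns the fluctuation function F(s) for each window scale s;
    the DFA exponent is the slope of log F(s) versus log s.
    """
    profile = np.cumsum(series - np.mean(series))
    fluct = []
    for s in scales:
        n_win = len(profile) // s
        f2 = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.array(fluct)

# White noise has a DFA exponent near 0.5; the volatility of a nonlinear
# series would show an exponent above 0.5 (long-range correlation).
rng = np.random.default_rng(2)
scales = np.array([16, 32, 64, 128, 256])
f = dfa(rng.standard_normal(2 ** 14), scales)
alpha = np.polyfit(np.log(scales), np.log(f), 1)[0]
print(round(alpha, 2))
```

To mirror the paper's procedure, one would feed the volatility record |x(t+1) - x(t)| of a meteorological series into `dfa` instead of raw noise.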
Measuring emotions during epistemic activities: the Epistemically-Related Emotion Scales.
Pekrun, Reinhard; Vogl, Elisabeth; Muis, Krista R; Sinatra, Gale M
2017-09-01
Measurement instruments assessing multiple emotions during epistemic activities are largely lacking. We describe the construction and validation of the Epistemically-Related Emotion Scales, which measure surprise, curiosity, enjoyment, confusion, anxiety, frustration, and boredom occurring during epistemic cognitive activities. The instrument was tested in a multinational study of emotions during learning from conflicting texts (N = 438 university students from the United States, Canada, and Germany). The findings document the reliability, internal validity, and external validity of the instrument. A seven-factor model best fit the data, suggesting that epistemically-related emotions should be conceptualised in terms of discrete emotion categories, and the scales showed metric invariance across the North American and German samples. Furthermore, emotion scores changed over time as a function of conflicting task information and related significantly to perceived task value and use of cognitive and metacognitive learning strategies.
Energy storage cell impedance measuring apparatus, methods and related systems
Morrison, John L.; Morrison, William H.; Christophersen, Jon P.
2017-12-26
Energy storage cell impedance testing devices, circuits, and related methods are disclosed. An energy storage cell impedance measuring device includes a sum of sinusoids (SOS) current excitation circuit including differential current sources configured to isolate a ground terminal of the differential current sources from a positive terminal and a negative terminal of an energy storage cell. A method includes applying an SOS signal comprising a sum of sinusoidal current signals to the energy storage cell with the SOS current excitation circuit, each of the sinusoidal current signals oscillating at a different one of a plurality of different frequencies. The method also includes measuring an electrical signal at a positive terminal and a negative terminal of the energy storage cell, and computing an impedance of the energy storage cell at each of the plurality of different frequencies using the measured electrical signal.
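The measurement idea, excite with a sum of sinusoids and read off the impedance at each excitation frequency, can be sketched numerically. The sampling parameters and circuit values below are hypothetical, and the "cell" is a simple series R-C model rather than a real battery.

```python
import numpy as np

fs = 10_000.0                                  # sample rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)                # 2 s capture window
freqs = np.array([10.0, 25.0, 50.0, 100.0])    # excitation frequencies (Hz)

# Sum-of-sinusoids excitation current (unit amplitude per tone)
i_sos = sum(np.sin(2 * np.pi * f * t) for f in freqs)

# Hypothetical cell: series resistance R (ohm) and capacitance C (farad)
R, C = 0.05, 2.0
Z_true = R + 1.0 / (1j * 2 * np.pi * freqs * C)

# Synthesize the voltage response tone by tone
v = np.zeros_like(t)
for f, z in zip(freqs, Z_true):
    v += np.abs(z) * np.sin(2 * np.pi * f * t + np.angle(z))

# Single-bin DFT at each excitation frequency, then Z(f) = V(f) / I(f)
def single_bin(x, f):
    return (np.exp(-1j * 2 * np.pi * f * t) @ x) * 2.0 / len(t)

Z_est = np.array([single_bin(v, f) / single_bin(i_sos, f) for f in freqs])
print(np.round(np.abs(Z_est), 4))
```

Choosing frequencies that complete an integer number of cycles in the capture window keeps the tones orthogonal, so each impedance estimate is uncontaminated by the other sinusoids.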
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-07-01
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
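One concrete illustration of the timescale-exploitation idea (a toy two-variable system, not a model from the review): when a variable relaxes much faster than the rest, it can be slaved to the slow variables via a quasi-steady-state assumption, shrinking the system.

```python
import numpy as np

eps = 0.01       # timescale separation: y relaxes ~1/eps faster than x

def full_rhs(x, y):
    # slow variable x, fast variable y with quasi-steady state y ≈ 0.5*x
    return -x + y, (0.5 * x - y) / eps

dt, T = 1e-4, 2.0                  # forward Euler, dt << eps for stability
x, y = 1.0, 0.5                    # start on the slow manifold y = x/2
x_red = 1.0                        # reduced model: dx/dt = -x + 0.5*x
xs, xs_red = [], []
for _ in range(int(T / dt)):
    dx, dy = full_rhs(x, y)
    x, y = x + dt * dx, y + dt * dy
    x_red = x_red + dt * (-0.5 * x_red)
    xs.append(x)
    xs_red.append(x_red)

err = max(abs(a - b) for a, b in zip(xs, xs_red))
print(f"max |full - reduced| = {err:.4f}")
```

The one-equation reduced model tracks the slow variable to within O(eps), which is the basic accuracy guarantee behind quasi-steady-state reductions of reaction networks.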
Sidle, R. C.
2013-12-01
Hydrologic, pedologic, and geomorphic processes are strongly interrelated and affected by scale. These interactions exert important controls on runoff generation, preferential flow, contaminant transport, surface erosion, and mass wasting. Measurement of hydraulic conductivity (K) and infiltration capacity at small scales generally underestimates these values for application at larger field, hillslope, or catchment scales. Both vertical and slope-parallel saturated flow and related contaminant transport are often influenced by interconnected networks of preferential flow paths, which are not captured in K measurements derived from soil cores. Using such K values in models may underestimate water and contaminant fluxes and runoff peaks. As shown in small-scale runoff plot studies, infiltration rates are typically lower than integrated infiltration across a hillslope or in headwater catchments. The resultant greater infiltration-excess overland flow in small plots compared to larger landscapes is attributed to the lack of preferential flow continuity; plot border effects; greater homogeneity of rainfall inputs, topography and soil physical properties; and magnified effects of hydrophobicity in small plots. At the hillslope scale, isolated areas with high infiltration capacity can greatly reduce surface runoff and surface erosion. These hydropedologic and hydrogeomorphic processes are also relevant to both occurrence and timing of landslides. The focus of many landslide studies has typically been either on small-scale vadose zone processes and how these affect soil mechanical properties or on larger scale, more descriptive geomorphic studies. One of the issues in translating laboratory-based investigations on geotechnical behavior of soils to field scales where landslides occur is the characterization of large-scale hydrological processes and flow paths that occur in heterogeneous and anisotropic porous media. These processes are not only affected
Scaling Green-Kubo Relation and Application to Three Aging Systems
Directory of Open Access Journals (Sweden)
A. Dechant
2014-02-01
Full Text Available The Green-Kubo formula relates the spatial diffusion coefficient to the stationary velocity autocorrelation function. We derive a generalization of the Green-Kubo formula that is valid for systems with long-range or nonstationary correlations for which the standard approach is no longer valid. For the systems under consideration, the velocity autocorrelation function ⟨v(t+τ)v(t)⟩ asymptotically exhibits a certain scaling behavior and the diffusion is anomalous, ⟨x^{2}(t)⟩≃2D_{ν}t^{ν}. We show how both the anomalous diffusion coefficient D_{ν} and the exponent ν can be extracted from this scaling form. Our scaling Green-Kubo relation thus extends an important relation between transport properties and correlation functions to generic systems with scale-invariant dynamics. This includes stationary systems with slowly decaying power-law correlations, as well as aging systems, systems whose properties depend on the age of the system. Even for systems that are stationary in the long-time limit, we find that the long-time diffusive behavior can strongly depend on the initial preparation of the system. In these cases, the diffusivity D_{ν} is not unique, and we determine its values, respectively, for a stationary or nonstationary initial state. We discuss three applications of the scaling Green-Kubo relation: free diffusion with nonlinear friction corresponding to cold atoms diffusing in optical lattices, the fractional Langevin equation with external noise recently suggested to model active transport in cells, and the Lévy walk with numerous applications, in particular, blinking quantum dots. These examples underline the wide applicability of our approach, which is able to treat very different mechanisms of anomalous diffusion.
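The extraction of D_ν and ν from ⟨x²(t)⟩ ≃ 2D_ν t^ν can be sketched on simulated data. The process below is scaled Brownian motion with assumed parameters, a stand-in for the systems in the abstract rather than the authors' models.

```python
import numpy as np

rng = np.random.default_rng(2)
D_nu, nu = 0.5, 1.4                 # assumed superdiffusive parameters
t = np.linspace(0.01, 10.0, 200)
n_traj = 20_000

# Gaussian increments chosen so that Var[x(t)] = 2*D_nu*t**nu exactly
var = 2 * D_nu * t ** nu
dvar = np.diff(np.concatenate(([0.0], var)))
x = np.cumsum(rng.normal(size=(n_traj, t.size)) * np.sqrt(dvar), axis=1)

# Ensemble MSD, then a log-log fit gives the exponent and coefficient
msd = np.mean(x ** 2, axis=0)
slope, intercept = np.polyfit(np.log(t), np.log(msd), 1)
nu_est, D_est = slope, np.exp(intercept) / 2.0
print(f"nu = {nu_est:.2f}, D_nu = {D_est:.2f}")
```

For aging or nonstationary processes, the fitted D_ν would depend on the initial preparation, which is exactly the point the abstract makes.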
Methods for Dissecting Motivation and Related Psychological Processes in Rodents.
Ward, Ryan D
2016-01-01
Motivational impairments are increasingly recognized as being critical to functional deficits and decreased quality of life in patients diagnosed with psychiatric disease. Accordingly, much preclinical research has focused on identifying psychological and neurobiological processes which underlie motivation. Inferring motivation from changes in overt behavioural responding in animal models, however, is complicated, and care must be taken to ensure that the observed change is accurately characterized as a change in motivation, and not due to some other, task-related process. This chapter discusses current methods for assessing motivation and related psychological processes in rodents. Using an example from work characterizing the motivational impairments in an animal model of the negative symptoms of schizophrenia, we highlight the importance of careful and rigorous experimental dissection of motivation and the related psychological processes when characterizing motivational deficits in rodent models. We suggest that such work is critical to the successful translation of preclinical findings to therapeutic benefits for patients.
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
1997-01-01
This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, and theoretical physicists.
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei
2017-02-01
Studying small-scale geologic discontinuities, such as faults, cavities and fractures, plays a vital role in analyzing the inner conditions of reservoirs, as these geologic structures and elements can provide storage spaces and migration pathways for petroleum. However, these geologic discontinuities have weak energy and are easily contaminated by noise, and therefore effectively extracting them from seismic data becomes a challenging problem. In this paper, a method for detecting small-scale discontinuities using dictionary learning and sparse representation is proposed that can extract high-resolution information by sparse coding. A K-SVD (K-means clustering via Singular Value Decomposition) sparse representation model that contains a two-stage iteration procedure (sparse coding and dictionary updating) is suggested for mathematically expressing these seismic small-scale discontinuities. Generally, the orthogonal matching pursuit (OMP) algorithm is employed for sparse coding. However, that method can only update one dictionary atom at a time. In order to improve calculation efficiency, a regularized version of the OMP algorithm is presented for simultaneously updating a number of atoms at a time. Two numerical experiments demonstrate the validity of the developed method for clarifying and enhancing small-scale discontinuities. A field example of carbonate reservoirs further demonstrates its effectiveness in revealing masked tiny faults and small-scale cavities.
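For the sparse-coding stage, a minimal plain-OMP implementation is sketched below on synthetic data. It updates one atom per iteration; the regularized multi-atom variant proposed in the abstract is not reproduced here.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: solve y ≈ D @ x with n_nonzero-sparse x."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # choose the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # refit coefficients on the active support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Random dictionary with unit-norm atoms; recover a known 3-sparse code
rng = np.random.default_rng(3)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.5, -2.0, 0.8]
x_hat = omp(D, D @ x_true, n_nonzero=3)
print(sorted(np.nonzero(x_hat)[0]))
```

In K-SVD, this coding step alternates with an SVD-based update of each dictionary atom.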
Gauge-independent scales related to the Standard Model vacuum instability
International Nuclear Information System (INIS)
Espinosa, J.R.; Garny, M.; Konstandin, T.; Riotto, A.
2016-08-01
The measured (central) values of the Higgs and top quark masses indicate that the Standard Model (SM) effective potential develops an instability at high field values. The scale of this instability, determined as the Higgs field value at which the potential drops below the electroweak minimum, is about 10^11 GeV. However, such a scale is unphysical as it is not gauge-invariant and suffers from a gauge-fixing uncertainty of up to two orders of magnitude. Subjecting our system, the SM, to several probes of the instability (adding higher order operators to the potential; letting the vacuum decay through critical bubbles; heating up the system to very high temperature; inflating it) and asking in each case physical questions, we are able to provide several gauge-invariant scales related with the Higgs potential instability.
Cultural adaptation of the Tuberculosis-related stigma scale to Brazil.
Crispim, Juliane de Almeida; Touso, Michelle Mosna; Yamamura, Mellina; Popolin, Marcela Paschoal; Garcia, Maria Concebida da Cunha; Santos, Cláudia Benedita Dos; Palha, Pedro Fredemir; Arcêncio, Ricardo Alexandre
2016-06-01
The process of stigmatization associated with TB has been undervalued in national research, even though this social aspect is important in the control of the disease, especially in marginalized populations. This paper introduces the stages of the process of cultural adaptation in Brazil of the Tuberculosis-related stigma scale for TB patients. It is a methodological study in which the items of the scale were translated and back-translated with semantic validation with 15 individuals of the target population. After translation, the reconciled back-translated version was compared with the original version by the project coordinator in Southern Thailand, who approved the final version in Brazilian Portuguese. The semantic validation conducted with TB patients showed that, in general, the scale was well accepted and easily understood by the participants.
Gauge-Independent Scales Related to the Standard Model Vacuum Instability
Espinosa, Jose R.; Konstandin, Thomas; Riotto, Antonio
2017-01-01
The measured (central) values of the Higgs and top quark masses indicate that the Standard Model (SM) effective potential develops an instability at high field values. The scale of this instability, determined as the Higgs field value at which the potential drops below the electroweak minimum, is about $10^{11}$ GeV. However, such a scale is unphysical as it is not gauge-invariant and suffers from a gauge-fixing uncertainty of up to two orders of magnitude. Subjecting our system, the SM, to several probes of the instability (adding higher order operators to the potential; letting the vacuum decay through critical bubbles; heating up the system to very high temperature; inflating it) and asking in each case physical questions, we are able to provide several gauge-invariant scales related with the Higgs potential instability.
The resource-based relative value scale and physician reimbursement policy.
Laugesen, Miriam J
2014-11-01
Most physicians are unfamiliar with the details of the Resource-Based Relative Value Scale (RBRVS) and how changes in the RBRVS influence Medicare and private reimbursement rates. Physicians in a wide variety of settings may benefit from understanding the RBRVS, including physicians who are employees, because many organizations use relative value units as productivity measures. Despite the complexity of the RBRVS, its logic and ideal are simple: In theory, the resource usage (comprising physician work, practice expense, and liability insurance premium costs) for one service is relative to the resource usage of all others. Ensuring relativity when new services are introduced or existing services are changed is, therefore, critical. Since the inception of the RBRVS, the American Medical Association's Relative Value Scale Update Committee (RUC) has made recommendations to the Centers for Medicare & Medicaid Services on changes to relative value units. The RUC's core focus is to develop estimates of physician work, but work estimates also partly determine practice expense payments. Critics have attributed various health-care system problems, including declining and growing gaps between primary care and specialist incomes, to the RUC's role in the RBRVS update process. There are persistent concerns regarding the quality of data used in the process and the potential for services to be overvalued. The Affordable Care Act addresses some of these concerns by increasing payments to primary care physicians, requiring reevaluation of the data underlying work relative value units, and reviewing misvalued codes.
Schnettler, Berta; Grunert, Klaus G; Miranda-Zapata, Edgardo; Orellana, Ligia; Sepúlveda, José; Lobos, Germán; Hueche, Clementina; Höger, Yesli
2017-06-01
The aims of this study were to test the relationships between food neophobia, satisfaction with food-related life and food technology neophobia, distinguishing consumer segments according to these variables and characterizing them according to willingness to purchase food produced with novel technologies. A survey was conducted with 372 university students (mean age = 20.4 years, SD = 2.4). The questionnaire included the Abbreviated version of the Food Technology Neophobia Scale (AFTNS), the Satisfaction with Food-related Life scale (SWFL), and a 6-item version of the Food Neophobia Scale (FNS). Using confirmatory factor analysis, it was confirmed that SWFL correlated inversely with FNS, whereas FNS correlated inversely with AFTNS. No relationship was found between SWFL and AFTNS. Two main segments were identified using cluster analysis; these segments differed according to gender and family size. Group 1 (57.8%) possessed higher AFTNS and FNS scores than Group 2 (28.5%). However, these groups did not differ in their SWFL scores. Group 1 was less willing to purchase foods produced with new technologies than Group 2. The AFTNS and the 6-item version of the FNS are suitable instruments to measure acceptance of foods produced using new technologies in South American developing countries. The AFTNS constitutes a parsimonious alternative for the international study of food technology neophobia. Copyright © 2017 Elsevier Ltd. All rights reserved.
The development and validation of the Relational Self-Esteem Scale.
Du, Hongfei; King, Ronnel B; Chi, Peilian
2012-06-01
According to the tripartite model of the self (Brewer & Gardner, 1996), the self consists of three aspects: personal, relational, and collective. Correspondingly, individuals can achieve a sense of self-worth through their personal attributes (personal self-esteem), relationship with significant others (relational self-esteem), or social group membership (collective self-esteem). Existing measures on personal and collective self-esteem are available in the literature; however, no scale exists that assesses relational self-esteem. The authors developed a scale to measure individual differences in relational self-esteem and tested it with two samples of Chinese university students. Between- and within-network approaches to construct validation were used. The scale showed adequate internal consistency reliability and results of the confirmatory factor analysis showed good fit. It also exhibited meaningful correlations with theoretically relevant constructs in the nomological network. Implications and directions for future research are discussed. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.
Weinstein, Ben G; Graham, Catherine H
2016-08-01
A challenge in community ecology is connecting biogeographic patterns with local scale observations. In Neotropical hummingbirds, closely related species often co-occur less frequently than expected (overdispersion) when compared to a regional species pool. While this pattern has been attributed to interspecific competition, it is important to connect these findings with local scale mechanisms of coexistence. We measured the importance of the presence of competitors and the availability of resources on selectivity at experimental feeders for Andean hummingbirds along a wide elevation gradient. Selectivity was measured as the time a bird fed at a feeder with a high sucrose concentration when presented with feeders of both low and high sucrose concentrations. Resource selection was measured using time-lapse cameras to identify which floral resources were used by each hummingbird species. We found that the increased abundance of preferred resources surrounding the feeder best explained increased species selectivity, and that related hummingbirds with similar morphology chose similar floral resources. We did not find strong support for direct agonism based on differences in body size or phylogenetic relatedness in predicting selectivity. These results suggest closely related hummingbird species have overlapping resource niches, and that the intensity of interspecific competition is related to the abundance of those preferred resources. If these competitive interactions have negative demographic effects, our results could help explain the pattern of phylogenetic overdispersion observed at regional scales. © 2016 by the Ecological Society of America.
A multiple-scale power series method for solving nonlinear ordinary differential equations
Directory of Open Access Journals (Sweden)
Chein-Shan Liu
2016-02-01
Full Text Available The power series solution is a cheap and effective method to solve nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and the nonlinear boundary value problems. A novel power series method is developed by considering the multiple scales $R_k$ in the power term $(t/R_k)^k$, which are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In this method the product of a huge value and a tiny value is avoided, which decreases the numerical instability that is the main cause of failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provide very accurate numerical solutions of the problems considered in this paper.
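The ill-conditioning that scaling is meant to cure shows up directly in the condition number of the collocation matrix. In the sketch below R_k is simply taken as t_max for every k; the paper derives its scales from an integral, so this uniform choice is only a stand-in to illustrate the effect.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 40)     # collocation points
K = 15                             # highest power in the expansion

# Plain power basis t^k: entries span ~15 orders of magnitude
V_plain = np.column_stack([t ** k for k in range(K + 1)])

# Scaled basis (t/R)^k with the crude choice R = t_max: entries stay O(1)
R = t.max()
V_scaled = np.column_stack([(t / R) ** k for k in range(K + 1)])

c_plain = np.linalg.cond(V_plain)
c_scaled = np.linalg.cond(V_scaled)
print(f"cond(plain)  = {c_plain:.2e}")
print(f"cond(scaled) = {c_scaled:.2e}")
```

Any least-squares or interpolation step built on the scaled matrix inherits the much smaller condition number.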
Large Scale Water Vapor Sources Relative to the October 2000 Piedmont Flood
Turato, Barbara; Reale, Oreste; Siccardi, Franco
2003-01-01
Very intense mesoscale or synoptic-scale rainfall events can occasionally be observed in the Mediterranean region without any deep cyclone developing over the areas affected by precipitation. In these perplexing cases the synoptic situation can superficially look similar to cases in which very little precipitation occurs. These situations could possibly baffle the operational weather forecasters. In this article, the major precipitation event that affected Piedmont (Italy) between 13 and 16 October 2000 is investigated. This is one of the cases in which no intense cyclone was observed within the Mediterranean region at any time, only a moderate system was present, and yet exceptional rainfall and flooding occurred. The emphasis of this study is on the moisture origin and transport. Moisture and energy balances are computed on different space- and time-scales, revealing that precipitation exceeds evaporation over an area inclusive of Piedmont and the northwestern Mediterranean region, on a time-scale encompassing the event and about two weeks preceding it. This is suggestive of an important moisture contribution originating from outside the region. A synoptic and dynamic analysis is then performed to outline the potential mechanisms that could have contributed to the large-scale moisture transport. The central part of the work uses a quasi-isentropic water-vapor back trajectory technique. The moisture sources obtained by this technique are compared with the results of the balances and with the synoptic situation, to unveil possible dynamic mechanisms and physical processes involved. It is found that moisture sources on a variety of atmospheric scales contribute to this event. First, an important contribution is caused by the extratropical remnants of former tropical storm Leslie. The large-scale environment related to this system allows a significant amount of moisture to be carried towards Europe. This happens on a time-scale of about 5-15 days preceding the
Two scale damage model and related numerical issues for thermo-mechanical high cycle fatigue
International Nuclear Information System (INIS)
Desmorat, R.; Kane, A.; Seyedi, M.; Sermage, J.P.
2007-01-01
On the idea that fatigue damage is localized at the microscopic scale, a scale smaller than the mesoscopic one of the Representative Volume Element (RVE), a three-dimensional two scale damage model has been proposed for High Cycle Fatigue applications. It is extended here to aniso-thermal cases and then to thermo-mechanical fatigue. The modeling consists of the micro-mechanics analysis of a weak micro-inclusion subjected to plasticity and damage embedded in an elastic meso-element (the RVE of continuum mechanics). The consideration of plasticity coupled with damage equations at the micro-scale, together with the Eshelby-Kröner localization law, allows computation of the value of microscopic damage up to failure for any kind of loading, 1D or 3D, cyclic or random, isothermal or aniso-thermal, mechanical, thermal or thermo-mechanical. A robust numerical scheme is proposed in order to make the computations fast. A post-processor for damage and fatigue (DAMAGE-2005) has been developed. It applies to complex thermo-mechanical loadings. Examples of the representation by the two scale damage model of physical phenomena related to High Cycle Fatigue are given, such as the mean stress effect and the non-linear accumulation of damage. Examples of thermal and thermo-mechanical fatigue as well as complex applications on a real-size test structure subjected to thermo-mechanical fatigue are detailed. (authors)
Development and validation of the Chinese version of dry eye related quality of life scale.
Zheng, Bang; Liu, Xiao-Jing; Sun, Yue-Qian Fiona; Su, Jia-Zeng; Zhao, Yang; Xie, Zheng; Yu, Guang-Yan
2017-07-17
To develop the Chinese version of a quality of life scale for dry eye patients based on the Impact of Dry Eye on Everyday Life (IDEEL) questionnaire and to assess the reliability and validity of the developed scale. The original IDEEL was adapted cross-culturally to the Chinese language and further developed following standard procedures. A total of 100 Chinese patients diagnosed with dry eye syndrome were included to investigate the psychometric properties of the Chinese version of the scale. Psychometric tests included internal consistency (Cronbach's α coefficients), construct validity (exploratory factor analysis), and known-groups validity (the analysis of variance). The Chinese version of Dry Eye Related Quality of Life (CDERQOL) Scale contains 45 items classified into 5 domains. Good to excellent internal consistency reliability was demonstrated for all 5 domains (Cronbach's α coefficients range from 0.716 to 0.913). Construct validity assessment indicated a consistent factorial structure of the CDERQOL scale with the hypothesized construct, with the exception of the "Dry Eye Symptom-Bother" domain. All domain scores differed significantly across the three severity groups of dry eye patients (P < 0.05). The CDERQOL scale is thus a reliable and valid instrument for assessing quality of life in dry eye syndrome among the Chinese population, and could be used as a supplementary diagnostic and treatment-effectiveness measure.
3D large-scale calculations using the method of characteristics
International Nuclear Information System (INIS)
Dahmani, M.; Roy, R.; Koclas, J.
2004-01-01
An overview of the computational requirements and the numerical developments made in order to be able to solve 3D large-scale problems using the characteristics method will be presented. To accelerate the MCI solver, efficient acceleration techniques were implemented and parallelization was performed. However, for the very large problems, the size of the tracking file used to store the tracks can still become prohibitive and exceed the capacity of the machine. The new 3D characteristics solver MCG will now be introduced. This methodology is dedicated to solve very large 3D problems (a part or a whole core) without spatial homogenization. In order to eliminate the input/output problems occurring when solving these large problems, we define a new computing scheme that requires more CPU resources than the usual one, based on sweeps over large tracking files. The huge capacity of storage needed in some problems and the related I/O queries needed by the characteristics solver are replaced by on-the-fly recalculation of tracks at each iteration step. Using this technique, large 3D problems are no longer I/O-bound, and distributed CPU resources can be efficiently used. (author)
Paraskevoulakou, Alexia; Vrettou, Kassiani; Pikouli, Katerina; Triantafillou, Evgenia; Lykou, Anastasia; Economou, Marina
2017-09-01
Since evaluation regarding the impact of mental illness related internalized stigma is scarce, there is a great need for psychometric instruments which could contribute to understanding its adverse effects among Greek patients with severe mental illness. The Brief Internalized Stigma of Mental Illness (ISMI) scale is one of the most widely used measures designed to assess the subjective experience of stigma related to mental illness. The present study aimed to investigate the psychometric properties of the Greek version of the Brief ISMI scale. In addition to presenting psychometric findings, we explored the relationship of the Greek version of the Brief ISMI subscales with indicators of self-esteem and quality of life. 272 outpatients (108 males, 164 females) meeting the DSM-IV TR criteria for severe mental disorder (schizophrenia, bipolar disorder, major depression) completed the Brief ISMI, the RSES and the WHOQOL-BREF scales. Patients reported age and educational level. A retest was conducted with 124 patients. The Cronbach's alpha coefficient was 0.83. The test-retest reliability coefficients varied from 0.81 to 0.91, indicating substantial agreement. The ICC was 0.83 for the total score, and 0.69 and 0.77 for the two factors, respectively. Factor analysis provided strong evidence for a two factor model. Factors 1 and 2 were named respectively "how others view me" and "how I view myself". They were negatively correlated with both RSES and WHOQOL-BREF scales, as well as with educational level. Factor 2 was significantly associated with the type of diagnosis. The Greek version of the Brief ISMI scale can be used as a reliable and valid tool for assessing mental illness related internalized stigma among Greek patients with severe mental illness.
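For reference, the internal-consistency statistic reported here can be computed directly from an item-score matrix; the data below are toy Likert responses, not the study's sample.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 6 respondents answering 4 Likert items
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 3))  # → 0.956
```

Values above roughly 0.7, like the 0.83 reported above, are conventionally read as acceptable internal consistency.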
Selective vulnerability related to aging in large-scale resting brain networks.
Zhang, Hong-Ying; Chen, Wen-Xin; Jiao, Yun; Xu, Yao; Zhang, Xiang-Rong; Wu, Jing-Tao
2014-01-01
Normal aging is associated with cognitive decline. Evidence indicates that large-scale brain networks are affected by aging; however, it has not been established whether aging has equivalent effects on specific large-scale networks. In the present study, 40 healthy subjects including 22 older (aged 60-80 years) and 18 younger (aged 22-33 years) adults underwent resting-state functional MRI scanning. Four canonical resting-state networks, including the default mode network (DMN), executive control network (ECN), dorsal attention network (DAN) and salience network, were extracted, and the functional connectivities in these canonical networks were compared between the younger and older groups. We found distinct, disruptive alterations present in the large-scale aging-related resting brain networks: the ECN was affected the most, followed by the DAN. However, the DMN and salience networks showed limited functional connectivity disruption. The visual network served as a control and was similarly preserved in both groups. Our findings suggest that the aged brain is characterized by selective vulnerability in large-scale brain networks. These results could help improve our understanding of the mechanism of degeneration in the aging brain. Additional work is warranted to determine whether selective alterations in the intrinsic networks are related to impairments in behavioral performance.
Distant Supervision for Relation Extraction with Ranking-Based Methods
Directory of Open Access Journals (Sweden)
Yang Xiang
2016-05-01
Full Text Available Relation extraction has benefited from distant supervision in recent years with the development of natural language processing techniques and data explosion. However, distant supervision is still greatly limited by the quality of training data, due to its natural motivation for greatly reducing the heavy cost of data annotation. In this paper, we construct an architecture called MIML-sort (Multi-instance Multi-label Learning with Sorting Strategies), which is built on the famous MIML framework. Based on MIML-sort, we propose three ranking-based methods for sample selection with which we identify relation extractors from a subset of the training data. Experiments are set up on the KBP (Knowledge Base Population) corpus, one of the benchmark datasets for distant supervision, which is large and noisy. Compared with previous work, the proposed methods produce considerably better results. Furthermore, the three methods together achieve the best F1 on the official testing set, with an optimal enhancement of F1 from 27.3% to 29.98%.
Methods for large-scale international studies on ICT in education
Pelgrum, W.J.; Plomp, T.; Voogt, Joke; Knezek, G.A.
2008-01-01
International comparative assessments are research methods applied for describing and analyzing educational processes and outcomes. They are used to ‘describe the status quo’ in educational systems from an international comparative perspective. This chapter reviews different large scale international
A fast method for large-scale isolation of phages from hospital ...
African Journals Online (AJOL)
This plaque-forming method could be adopted to isolate E. coli phage easily, rapidly and in large quantities. Among the 18 isolated E. coli phages, 10 of them had a broad host range in E. coli and warrant further study. Key words: Escherichia coli phages, large-scale isolation, drug resistance, biological properties.
Non-Abelian Kubo formula and the multiple time-scale method
International Nuclear Information System (INIS)
Zhang, X.; Li, J.
1996-01-01
The non-Abelian Kubo formula is derived from the kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern–Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. copyright 1996 Academic Press, Inc
DEFF Research Database (Denmark)
Schnettler, Berta; Orellana, Ligia; Lobos, Germán
2015-01-01
Aim: to characterize types of university students based on satisfaction with life domains that affect eating habits, satisfaction with food-related life and subjective happiness. Materials and methods: a questionnaire was applied to a nonrandom sample of 305 students of both genders in five universities in Chile. The questionnaire included the abbreviated Multidimensional Student’s Life Satisfaction Scale (MSLSS), the Satisfaction with Food-related Life Scale (SWFL) and the Subjective Happiness Scale (SHS). Eating habits, frequency of food consumption in and outside the place of residence...
[Scale Relativity Theory in living beings morphogenesis: fractal, determinism and chance].
Chaline, J
2012-10-01
The Scale Relativity Theory has many biological applications from linear to non-linear and from classical mechanics to quantum mechanics. Self-similar laws have been used as models for the description of a huge number of biological systems. These laws may explain the origin of basal life structures. Log-periodic behaviors of acceleration or deceleration can be applied to branching macroevolution and to the time sequences of major evolutionary leaps. The existence of such a law does not mean that the role of chance in evolution is reduced, but instead that randomness and contingency may occur within a framework which may itself be structured in a partly statistical way. The scale relativity theory can open new perspectives in evolution. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Aquino-Ortíz, E.; Valenzuela, O.; Sánchez, S. F.; Hernández-Toledo, H.; Ávila-Reese, V.; van de Ven, G.; Rodríguez-Puebla, A.; Zhu, L.; Mancillas, B.; Cano-Díaz, M.; García-Benito, R.
2018-06-01
We used ionized gas and stellar kinematics for 667 spatially resolved galaxies publicly available from the Calar Alto Legacy Integral Field Area survey (CALIFA) 3rd Data Release with the aim of studying kinematic scaling relations: the Tully-Fisher (TF) relation using rotation velocity, V_rot; the Faber-Jackson (FJ) relation using velocity dispersion, σ; and a combination of V_rot and σ through the S_K parameter, defined as S_K^2 = K V_rot^2 + σ^2 with constant K. Late-type and early-type galaxies reproduce the TF and FJ relations. Some early-type galaxies also follow the TF relation and some late-type galaxies the FJ relation, but always with larger scatter. In contrast, when we use the S_K parameter, all galaxies, regardless of morphological type, lie on the same scaling relation, showing a tight correlation with the total stellar mass, M⋆. Indeed, we find that the scatter in this relation is smaller than or equal to that of the TF and FJ relations. We explore different values of the K parameter and find no significant differences in slope or scatter with respect to the case K = 0.5, apart from a small change in the zero point. We calibrate the kinematic S_K^2 dynamical mass proxy so as to make it consistent with sophisticated published dynamical models within 0.15 dex. We show that the S_K proxy is able to reproduce the relation between the dynamical mass and the stellar mass in the inner regions of galaxies. Our result may be useful for producing fast estimates of the central dynamical mass in galaxies and for studying correlations in large galaxy surveys.
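As a rough illustration of why the combined parameter unifies the two relations, consider the definition S_K^2 = K V_rot^2 + σ^2 with K = 0.5. The velocity values below are invented for illustration, not CALIFA measurements:

```python
import math

def s_k(v_rot, sigma, K=0.5):
    """Combined kinematic parameter: S_K^2 = K * V_rot^2 + sigma^2."""
    return math.sqrt(K * v_rot**2 + sigma**2)

# Illustrative values in km/s (not real measurements):
late_type = s_k(v_rot=200.0, sigma=30.0)    # rotation-dominated disc
early_type = s_k(v_rot=40.0, sigma=145.0)   # dispersion-dominated spheroid
```

Despite very different V_rot/σ ratios, both toy galaxies land at a comparable S_K of roughly 145 km/s, which is the sense in which S_K can place both morphological types on a single scaling relation with stellar mass.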
Energy Technology Data Exchange (ETDEWEB)
Gaulme, P.; McKeever, J.; Jackiewicz, J.; Rawls, M. L. [Department of Astronomy, New Mexico State University, P.O. Box 30001, MSC 4500, Las Cruces, NM 88003-8001 (United States); Corsaro, E. [Laboratoire AIM, CEA/DRF-CNRS, Université Paris 7 Diderot, IRFU/SAp, Centre de Saclay, F-91191 Gif-sur-Yvette (France); Mosser, B. [LESIA, Observatoire de Paris, PSL Research University, CNRS, Université Pierre et Marie Curie, Université Denis Diderot, F-92195 Meudon (France); Southworth, J. [Astrophysics Group, Keele University, Staffordshire, ST5 5BG (United Kingdom); Mahadevan, S.; Bender, C.; Deshpande, R., E-mail: gaulme@nmsu.edu [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States)
2016-12-01
Given the potential of ensemble asteroseismology for understanding fundamental properties of large numbers of stars, it is critical to determine the accuracy of the scaling relations on which these measurements are based. From several powerful validation techniques, all indications so far show that stellar radius estimates from the asteroseismic scaling relations are accurate to within a few percent. Eclipsing binary systems hosting at least one star with detectable solar-like oscillations constitute the ideal test objects for validating asteroseismic radius and mass inferences. By combining radial velocity (RV) measurements and photometric time series of eclipses, it is possible to determine the masses and radii of each component of a double-lined spectroscopic binary. We report the results of a four-year RV survey performed with the échelle spectrometer of the Astrophysical Research Consortium’s 3.5 m telescope and the APOGEE spectrometer at Apache Point Observatory. We compare the masses and radii of 10 red giants (RGs) obtained by combining radial velocities and eclipse photometry with the estimates from the asteroseismic scaling relations. We find that the asteroseismic scaling relations overestimate RG radii by about 5% on average and masses by about 15% for stars at various stages of RG evolution. Systematic overestimation of mass leads to underestimation of stellar age, which can have important implications for ensemble asteroseismology used for Galactic studies. As part of a second objective, where asteroseismology is used for understanding binary systems, we confirm that oscillations of RGs in close binaries can be suppressed enough to be undetectable, a hypothesis that was proposed in a previous work.
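The scaling relations under test have the standard form below; the solar reference values are commonly adopted ones and the red-giant observables are illustrative assumptions of this sketch, not measurements from the paper's sample:

```python
# Standard asteroseismic scaling relations:
#   R/Rsun = (nu_max/nu_max_sun) * (dnu/dnu_sun)**-2 * (Teff/Teff_sun)**0.5
#   M/Msun = (nu_max/nu_max_sun)**3 * (dnu/dnu_sun)**-4 * (Teff/Teff_sun)**1.5
# Solar reference values (commonly adopted; an assumption of this sketch):
NU_MAX_SUN, DNU_SUN, TEFF_SUN = 3090.0, 135.1, 5777.0  # muHz, muHz, K

def scaling_radius(nu_max, dnu, teff):
    return (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5

def scaling_mass(nu_max, dnu, teff):
    return (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5

# A typical red-giant set of seismic observables (illustrative numbers):
r_rg = scaling_radius(nu_max=30.0, dnu=4.0, teff=4800.0)
m_rg = scaling_mass(nu_max=30.0, dnu=4.0, teff=4800.0)
```

A 5% bias in the inferred radius or 15% in mass, as reported above, would propagate directly through these power laws, which is why eclipsing-binary radii and masses are such a sharp test.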
Device for collecting chemical compounds and related methods
Scott, Jill R.; Groenewold, Gary S.; Rae, Catherine
2013-01-01
A device for sampling chemical compounds from fixed surfaces and related methods are disclosed. The device may include a vacuum source, a chamber and a sorbent material. The device may utilize vacuum extraction to volatilize the chemical compounds from the fixed surfaces so that they may be sorbed by the sorbent material. The sorbent material may then be analyzed using conventional thermal desorption/gas chromatography/mass spectrometry (TD/GC/MS) instrumentation to determine presence of the chemical compounds. The methods may include detecting release and presence of one or more chemical compounds and determining the efficacy of decontamination. The device may be useful in collection and analysis of a variety of chemical compounds, such as residual chemical warfare agents, chemical attribution signatures and toxic industrial chemicals.
Theoretical and applied aerodynamics and related numerical methods
Chattot, J J
2015-01-01
This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: - The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...
Stewart, S H; Watt, M C
2000-01-01
The Illness Attitudes Scale (IAS) is a self-rated measure that consists of nine subscales designed to assess fears, attitudes and beliefs associated with hypochondriacal concerns and abnormal illness behavior [Kellner, R. (1986). Somatization and hypochondriasis. New York: Praeger; Kellner, R. (1987). Abridged manual of the Illness Attitudes Scale. Department of Psychiatry, School of Medicine, University of New Mexico]. The purposes of the present study were to explore the hierarchical factor structure of the IAS in a nonclinical sample of young adult volunteers and to examine the relations of each illness attitudes dimension to a set of anxiety-related measures. One-hundred and ninety-seven undergraduate university students (156 F, 41 M; mean age = 21.9 years) completed the IAS as well as measures of anxiety sensitivity, trait anxiety and panic attack history. The results of principal components analyses with oblique (Oblimin) rotation suggested that the IAS is best conceptualized as a four-factor measure at the lower order level (with lower-order dimensions tapping illness-related Fears, Behavior, Beliefs and Effects, respectively), and a unifactorial measure at the higher-order level (i.e. higher-order dimension tapping General Hypochondriacal Concerns). The factor structure overlapped to some degree with the scoring of the IAS proposed by Kellner (1986, 1987), as well as with the factor structures identified in previously-tested clinical and nonclinical samples [Ferguson, E. & Daniel, E. (1995). The Illness Attitudes Scale (IAS): a psychometric evaluation on a nonclinical population. Personality and Individual Differences, 18, 463-469; Hadjistavropoulos, H. D. & Asmundson, G. J. G. (1998). Factor analytic investigation of the Illness Attitudes Scale in a chronic pain sample. Behaviour Research and Therapy, 36, 1185-1195; Hadjistavropoulos, H. D., Frombach, I. & Asmundson, G. J. G. (in press). Exploratory and confirmatory factor analytic investigations of the
Finite-size scaling method for the Berezinskii–Kosterlitz–Thouless transition
International Nuclear Information System (INIS)
Hsieh, Yun-Da; Kao, Ying-Jer; Sandvik, Anders W
2013-01-01
We test an improved finite-size scaling method for reliably extracting the critical temperature T_BKT of a Berezinskii–Kosterlitz–Thouless (BKT) transition. Using known single-parameter logarithmic corrections to the spin stiffness ρ_s at T_BKT in combination with the Kosterlitz–Nelson relation between the transition temperature and the stiffness, ρ_s(T_BKT) = 2T_BKT/π, we define a size-dependent transition temperature T_BKT(L_1, L_2) based on a pair of system sizes L_1, L_2, e.g., L_2 = 2L_1. We use Monte Carlo data for the standard two-dimensional classical XY model to demonstrate that this quantity is well behaved and can be reliably extrapolated to the thermodynamic limit using the next expected logarithmic correction beyond the ones included in defining T_BKT(L_1, L_2). For the Monte Carlo calculations we use GPU (graphics processing unit) computing to obtain high-precision data for L up to 512. We find that the sub-leading logarithmic corrections have significant effects on the extrapolation. Our result T_BKT = 0.8935(1) is several error bars above the previously best estimates of the transition temperature, T_BKT ≈ 0.8929. If only the leading log correction is used, the result is, however, consistent with the lower value, suggesting that previous works have underestimated T_BKT because of the neglect of sub-leading logarithms. Our method is easy to implement in practice and should be applicable to generic BKT transitions. (paper)
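The anchor of the method is the Kosterlitz–Nelson criterion: the transition sits where the stiffness curve crosses the line 2T/π. A minimal sketch of that crossing with an invented linear stiffness curve, standing in for Monte Carlo data; the logarithmic finite-size corrections central to the paper are omitted here:

```python
import math

def rho_s(T):
    """Toy spin-stiffness curve (illustrative stand-in for MC data)."""
    return 1.2 - 0.8 * T

def kn_crossing(lo=0.0, hi=1.5, tol=1e-10):
    """Bisect for the Kosterlitz-Nelson point rho_s(T) = 2T/pi."""
    g = lambda T: rho_s(T) - 2.0 * T / math.pi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid  # stiffness still above the 2T/pi line
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_bkt = kn_crossing()
```

In the actual method the crossing is solved simultaneously for a pair of sizes (L_1, L_2) with the log-correction term included, giving the size-dependent estimate T_BKT(L_1, L_2) that is then extrapolated.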
Pilot-Scale Field Validation Of The Long Electrode Electrical Resistivity Tomography Method
International Nuclear Information System (INIS)
Glaser, D.R.; Rucker, D.F.; Crook, N.; Loke, M.H.
2011-01-01
Field validation for the long electrode electrical resistivity tomography (LE-ERT) method was attempted in order to demonstrate the performance of the technique in imaging a simple buried target. The experiment was an approximately 1/17 scale mock-up of a region encompassing a buried nuclear waste tank on the Hanford site. The target of focus was constructed by manually forming a simulated plume within the vadose zone using a tank waste simulant. The LE-ERT results were compared to ERT using conventional point electrodes on the surface and buried within the survey domain. Using a pole-pole array, both point and long electrode imaging techniques identified the lateral extents of the pre-formed plume with reasonable fidelity, but the LE-ERT was handicapped in reconstructing the vertical boundaries. The pole-dipole and dipole-dipole arrays were also tested with the LE-ERT method and were shown to have the least favorable target properties, including the position of the reconstructed plume relative to the known plume and the intensity of false positive targets. The poor performance of the pole-dipole and dipole-dipole arrays was attributed to incomplete and non-optimal data coverage at key electrodes, as well as increased noise for electrode combinations with high geometric factors. However, when comparing the model resolution matrix among the different acquisition strategies, the pole-dipole and dipole-dipole arrays using long electrodes were shown to have significantly higher average and maximum values than any pole-pole array. The model resolution describes how well the inversion model resolves the subsurface. Given the model resolution performance of the pole-dipole and dipole-dipole arrays, it may be worth investing in tools to understand the optimum subset of randomly distributed electrode pairs to produce maximum performance from the inversion model.
Uncertainties related to numerical methods for neutron spectra unfolding
International Nuclear Information System (INIS)
Glodic, S.; Ninkovic, M.; Adarougi, N.A.
1987-10-01
One of the techniques often used for neutron detection in radiation protection is the Bonner multisphere spectrometer. Besides its advantages and universal applicability for evaluating integral parameters of neutron fields in health physics practice, the outstanding problems of the method are data analysis and the accuracy of the results. This paper briefly discusses some numerical problems related to neutron spectra unfolding, such as uncertainty of the response matrix as a source of error, and the possibility of real-time data reduction using spectrometers. (author)
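The unfolding problem discussed above amounts to recovering a spectrum x from detector readings b = A·x, where A is the (uncertain) response matrix. A minimal sketch using a multiplicative, MLEM-style update that keeps the spectrum non-negative; the 2×2 response matrix and readings are toy numbers, not a real Bonner-sphere response:

```python
# Toy unfolding: solve b = A x for x >= 0 with a multiplicative update.
# The response matrix A and true spectrum are invented for illustration.
A = [[0.8, 0.2],
     [0.3, 0.7]]
x_true = [2.0, 1.0]
b = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]

x = [1.0, 1.0]  # positive starting guess
for _ in range(500):
    ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    for j in range(2):
        num = sum(A[i][j] * b[i] / ax[i] for i in range(2))
        den = sum(A[i][j] for i in range(2))
        x[j] *= num / den  # ratio 1 (fixed point) once A x matches b
```

With an uncertain response matrix, as the abstract notes, the recovered spectrum inherits that uncertainty even when the iteration itself converges cleanly.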
Methods of studying oxide scales grown on zirconium alloys in autoclaves and in a PWR
International Nuclear Information System (INIS)
Blank, H.; Bart, G.; Thiele, H.
1992-01-01
The analysis of water-side corrosion of zirconium alloys has been a field of research for more than 25 years, but the details of the mechanisms involved still cannot be put into a coherent picture. Improved methods are required to establish the details of the microstructure of the oxide scales. A new approach has been made for a general analysis of oxide specimens from scales grown on the zirconium-based cladding alloys of PWR rods in order to analyse the morphology of these scales, the topography of the oxide/metal interface and the crystal structures close to this interface: a) Instead of using the conventional pickling solutions, the Zr-alloys are dissolved using a 'softer' solution (Br₂ in an organic solvent) in order to avoid damage to the oxide at the oxide/metal interface to be analysed by SEM (scanning electron microscopy). A second advantage of this method is easy etching of the grain structure of Zr-alloys for SEM analysis; b) By using the particular properties of the oxide scales, the corrosion-rate-determining innermost part of the oxide layer at the oxide/metal interface can be separated from the rest of the oxide scale and then analysed by SEM, STEM (scanning transmission electron microscopy), TEM (transmission electron microscopy) and electron diffraction after dissolution of the alloy. Examples are given from oxides grown on Zr-alloys in a pressurized water reactor and in autoclaves. (author) 8 figs., 3 tabs., 9 refs
Confirmation of general relativity on large scales from weak lensing and galaxy velocities
Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E.; Lombriser, Lucas; Smith, Robert E.
2010-03-01
Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, E_G, that combines measures of large-scale gravitational lensing, galaxy clustering and the structure growth rate. The combination is insensitive to 'galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of E_G different from the general relativistic prediction because, in these theories, the 'gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that E_G = 0.39 ± 0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of E_G ≈ 0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f(R) theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.
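In general relativity the expected value is E_G = Ω_m,0/f(z), where f(z) ≈ Ω_m(z)^0.55 is the linear growth rate. A small sketch of that prediction, assuming flat ΛCDM and the γ = 0.55 growth-index approximation; the cosmological parameter values are illustrative:

```python
def e_g_gr(omega_m0, z, gamma=0.55):
    """GR prediction E_G = Omega_m,0 / f(z), with growth rate
    f(z) ~ Omega_m(z)**gamma (gamma ~= 0.55 in GR), flat LambdaCDM."""
    a3 = (1.0 + z) ** 3
    omega_m_z = omega_m0 * a3 / (omega_m0 * a3 + 1.0 - omega_m0)
    return omega_m0 / omega_m_z ** gamma

# Illustrative parameter choices: Omega_m0 = 0.25 at an effective z ~ 0.3
prediction = e_g_gr(0.25, 0.32)
```

With these inputs the prediction comes out near 0.4, the value the measurement E_G = 0.39 ± 0.06 is compared against; modified-gravity models shift E_G by altering the growth rate and the lensing potentials.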
Gao, Y Nina
2018-04-06
The Resource-Based Relative Value Scale Update Committee (RUC) submits recommended reimbursement values for physician work (wRVUs) under Medicare Part B. The RUC includes rotating representatives from medical specialties. To identify changes in physician reimbursements associated with RUC rotating seat representation. Relative Value Scale Update Committee members 1994-2013; Medicare Part B Relative Value Scale 1994-2013; Physician/Supplier Procedure Summary Master File 2007; Part B National Summary Data File 2000-2011. I match service and procedure codes to specialties using 2007 Medicare billing data. Subsequently, I model wRVUs as a function of RUC rotating committee representation and level of code specialization. An annual RUC rotating seat membership is associated with a statistically significant 3-5 percent increase in Medicare expenditures for codes billed to that specialty. For codes that are performed by a small number of physicians, the association between reimbursement and rotating subspecialty representation is positive, 0.177 (SE = 0.024). For codes that are performed by a large number of physicians, the association is negative, -0.183 (SE = 0.026). Rotating representation on the RUC is correlated with overall reimbursement rates. The resulting differential changes may exacerbate existing reimbursement discrepancies between generalist and specialist practitioners. © Health Research and Educational Trust.
The MUSIC of galaxy clusters - II. X-ray global properties and scaling relations
Biffi, V.; Sembolini, F.; De Petris, M.; Valdarnini, R.; Yepes, G.; Gottlöber, S.
2014-03-01
We present the X-ray properties and scaling relations of a large sample of clusters extracted from the Marenostrum MUltidark SImulations of galaxy Clusters (MUSIC) data set. We focus on a sub-sample of 179 clusters at redshift z ≈ 0.11 with masses above 3.2 × 10^14 h^-1 M⊙. We employed the X-ray photon simulator PHOX to obtain synthetic Chandra observations and derive observable-like global properties of the intracluster medium (ICM), such as X-ray temperature (T_X) and luminosity (L_X). T_X is found to slightly underestimate the true mass-weighted temperature, although it traces the cluster total mass fairly well. We also study the effects of T_X on scaling relations with cluster intrinsic properties: total (M_500) and gas (M_g,500) mass; the integrated Compton parameter (Y_SZ) of the Sunyaev-Zel'dovich (SZ) thermal effect; and Y_X = M_g,500 T_X. We confirm that Y_X is a very good mass proxy, with a scatter on M_500-Y_X and Y_SZ-Y_X lower than 5 per cent. The study of scaling relations among X-ray, intrinsic and SZ properties indicates that simulated MUSIC clusters reasonably resemble the self-similar prediction, especially for correlations involving T_X. The observational approach also allows for a more direct comparison with real clusters, from which we find deviations mainly due to the physical description of the ICM, affecting T_X and, particularly, L_X.
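Mass-proxy relations such as M_500-Y_X are power laws, fitted as straight lines in log-log space. A minimal sketch with toy self-similar data (a slope of 3/5 is built in, so the fit simply recovers it; the numbers are not MUSIC values):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares slope and intercept in log10-log10 space."""
    lx = [math.log10(x) for x in xs]
    ly = [math.log10(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    return slope, my - slope * mx

# Toy self-similar clusters: M_500 proportional to Y_X**(3/5)
yx = [1e13, 3e13, 1e14, 3e14]
m500 = [2e14 * (y / 1e14) ** 0.6 for y in yx]
slope, intercept = fit_power_law(yx, m500)
```

In practice the interesting quantities are the deviation of the fitted slope from self-similarity and the scatter of residuals about the fit (the sub-5% scatter quoted above).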
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
2002-01-01
This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
Scaling relations for plasma production and acceleration of rotating plasma flows
International Nuclear Information System (INIS)
Ikehata, Takashi; Tanabe, Toshio; Mase, Hiroshi; Sekine, Ryusuke; Hasegawa, Kazuyuki.
1989-01-01
Scaling relations for plasma production and acceleration in the rotating plasma gun, which has been developed as a new means of plasma centrifuge, are investigated theoretically and experimentally. Two operational modes are studied: the gas-discharge mode for gaseous elements and the vacuum-discharge mode for solid elements. Relations of the plasma density and velocities to the discharge current and the magnetic field are derived. The agreement between experiment and theory is quite good. It is found that fully-ionized rotating plasmas produced in the gas-discharge mode are most advantageous for realizing efficient plasma centrifuge. (author)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through a dissimilarity matrix based on modified cross-sample entropy, and then provide three-dimensional perceptual maps of the results through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to show similar irregularity than series from different models, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished; those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions: Europe, North America, South America, the Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
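Whatever dissimilarity measure is plugged in (Chebyshev distance or the entropy-based measures proposed here), classical MDS proceeds by double-centering the squared dissimilarities into a Gram matrix whose leading eigenvectors give the map coordinates. A sketch of that shared core, checked on a one-dimensional toy configuration:

```python
def double_center(D):
    """Classical MDS Gram matrix B = -1/2 * J * D^2 * J for a symmetric
    dissimilarity matrix D, where J is the centering matrix."""
    n = len(D)
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in D2]       # row means of squared dissimilarities
    grand = sum(row) / n                 # grand mean
    return [[-0.5 * (D2[i][j] - row[i] - row[j] + grand)
             for j in range(n)] for i in range(n)]

# Points at positions 0, 1, 3 on a line; D holds pairwise distances.
B = double_center([[0, 1, 3],
                   [1, 0, 2],
                   [3, 2, 0]])
```

For Euclidean input, B equals the Gram matrix of the centered coordinates (here -4/3, -1/3, 5/3), and the top eigenvectors of B, scaled by the square roots of their eigenvalues, recover the perceptual-map embedding.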
Scale-invariant Green-Kubo relation for time-averaged diffusivity
Meyer, Philipp; Barkai, Eli; Kantz, Holger
2017-12-01
In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacements are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by ⟨δ²⟩ ∼ 2D_ν t^β Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, ⟨x²⟩ ∼ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy, ⟨v²⟩ ∼ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of ⟨δ²⟩ and ⟨x²⟩ are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
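For scaled Brownian motion, where ⟨x(t)x(s)⟩ = min(t,s)^ν and hence β = ν − 1, the formula above predicts a time-averaged MSD linear in Δ for Δ ≪ t. A deterministic numeric check of that lag exponent, integrating the covariance directly rather than sampling trajectories; the parameter values are illustrative:

```python
import math

def ea_tamsd(nu, T, delta, n=20000):
    """Ensemble-averaged time-averaged MSD for scaled Brownian motion,
    <x(t)x(s)> = min(t, s)**nu, so <(x(t+d) - x(t))^2> = (t+d)**nu - t**nu.
    Evaluates (1/(T-d)) * integral_0^{T-d} [(t+d)**nu - t**nu] dt numerically."""
    dt = (T - delta) / n
    return sum((i * dt + delta) ** nu - (i * dt) ** nu
               for i in range(n)) * dt / (T - delta)

nu, T = 0.5, 1000.0                      # subdiffusive example: beta = nu - 1
d1 = ea_tamsd(nu, T, 1.0)
d10 = ea_tamsd(nu, T, 10.0)
slope = math.log10(d10 / d1)             # lag exponent over one decade in Delta
```

The fitted lag exponent comes out close to ν − β = 1 (with a small finite-Δ correction), while the ensemble-averaged MSD of the same process grows as t^ν, illustrating the ergodicity breaking the abstract describes.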
Directory of Open Access Journals (Sweden)
Tovilović Snežana
2004-01-01
Full Text Available The research realized here belongs to one of the three research fields within the framework of rational-emotive behavior therapy (REBT): the theory of emotional disorders. It was undertaken with the aim of establishing the presence and nature of relations between social anxiety, treated as a dimension, and the construct of irrational beliefs from REBT theory. The research was carried out on a sample of 261 students of Novi Sad University, of both genders, aged 18 to 26. First, the latent structure of the newly constructed Scale of Social Anxiety (SA, by the author Tovilović S.) was tested. The SA scale proved to have satisfactory reliability (α = 0.92). Principal-component factor analysis was conducted on the gathered data. Four factors of social anxiety, which explain 44.09% of the total variance of the items of the SA scale, were named: social-evaluation anxiety, inhibition in socially uncertain situations, low self-respect and hypersensitivity to rejection. The other test used was the Scale of General Attitudes and Beliefs by the author Marić Z. On our sample, the reliability of the irrational beliefs subscale was α = 0.91 and that of the rational beliefs subscale α = 0.70. Canonical correlation analysis was conducted on the manifest variables of both scales. Three pairs of statistically significant canonical factors were obtained, with correlations ranging from Rc = 0.64 to Rc = 0.78. We discuss the nature of the correlation between social anxiety and irrational beliefs in the light of the REBT model of social phobia, the REBT theory of emotional disorders, and research and models of social anxiety in the wider cognitive-behavioral framework.
A method of orbital analysis for large-scale first-principles simulations
Energy Technology Data Exchange (ETDEWEB)
Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)
2014-06-28
An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF₄)
Directory of Open Access Journals (Sweden)
Ru Liang
2018-01-01
Full Text Available The magnitude of business dynamics has increased rapidly owing to the greater complexity, uncertainty, and risk of large-scale infrastructure projects. This has made it increasingly difficult for a single contractor to "go it alone." As a consequence, joint-venture contractors with diverse strengths and weaknesses cooperate in bidding. Understanding project complexity and deciding on the optimal joint-venture contractor is challenging. This paper studies how to select joint-venture contractors for undertaking large-scale infrastructure projects based on a multiattribute mathematical model. Two different methods are developed to solve the problem: one is based on ideal points, the other on balanced ideal advantages. Both methods account for individual differences in expert judgment and for contractor attributes. A case study of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project in China is used to demonstrate how to apply the two methods and their advantages.
Ward identities and consistency relations for the large scale structure with multiple species
International Nuclear Information System (INIS)
Peloso, Marco; Pietroni, Massimo
2014-01-01
We present fully nonlinear consistency relations for the squeezed bispectrum of Large Scale Structure. These relations hold when the matter component of the Universe is composed of one or more species, and generalize those obtained in [1,2] in the single species case. The multi-species relations apply to the standard dark matter + baryons scenario, as well as to the case in which some of the fields are auxiliary quantities describing a particular population, such as dark matter halos or a specific galaxy class. If a large scale velocity bias exists between the different populations, new terms appear in the consistency relations with respect to the single species case. As an illustration, we discuss two physical cases in which such a velocity bias can exist: (1) a new long range scalar force in the dark matter sector (resulting in a violation of the equivalence principle in the dark matter-baryon system), and (2) the distribution of dark matter halos relative to that of the underlying dark matter field.
A multiple-scaling method of the computation of threaded structures
International Nuclear Information System (INIS)
Andrieux, S.; Leger, A.
1989-01-01
The numerical computation of threaded structures usually leads to very large finite element problems, which makes parametric studies difficult to carry out, especially in nonlinear cases involving plasticity or unilateral contact conditions. Nevertheless, such parametric studies are essential in many industrial problems, for instance for the evaluation of various repair processes for the closure studs of PWRs. It is well known that such repairs generally involve several modifications of the thread geometry, the number of active threads, the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method that readily allows parametric studies. The main idea of the method is to divide the problem into a global part and a local part. The local problem is solved by the finite element method on the precise geometry of the thread for a set of elementary loadings. The global problem is formulated at the gudgeon scale and reduces to a one-dimensional one, so its resolution carries an insignificant computational cost. A post-processing step then gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scale approach, the method is described. A validation by comparison with a direct finite element computation and some further applications are presented.
International Nuclear Information System (INIS)
Farmer, C.L.
1986-02-01
The design and assessment of underground waste disposal options requires modelling the dispersal of contaminants within aquifers. The logical structure of the development and application of disposal models is discussed. In particular we examine the validity and interpretation of the gradient diffusion model. The effective dispersion parameters in such a model seem to depend upon the scale on which they are measured. This phenomenon is analysed and methods for modelling scale dependent parameters are reviewed. Specific recommendations regarding the modelling of contaminant dispersal are provided. (author)
ADVANTAGES OF RAPID METHOD FOR DETERMINING SCALE MASS AND DECARBURIZED LAYER OF ROLLED COIL STEEL
Directory of Open Access Journals (Sweden)
E. V. Parusov
2016-08-01
Full Text Available Purpose. To determine universal empirical relationships that allow operational calculation of the scale mass and decarburized layer depth based on the parameters of the technological process for rolled coil steel production. Methodology. The research was carried out on industrial batches of rolled steel of SAE 1006 and SAE 1065 grades. Scale removability was determined in accordance with the procedure of the «Bekaert» company by the specifications GA-03-16, GA-03-18, GS-03-02, GS-06-01. The depth of the decarburized layer was identified in accordance with GOST 1763-68 (method M). Findings. Analysis of the experimental data allowed us to determine the rational coil formation temperatures for the investigated steel grades, which provide the best possible removal of scale from the metal surface, a minimal amount of scale, as well as compliance of the metal surface color with the requirements of European consumers. Originality. The work established correlations of the basic quality indicators of rolled coil high-carbon steel (scale mass, depth of the decarburized layer and inter-lamellar distance in pearlite) with one of the main parameters (coil formation temperature) of the deformation and heat treatment mode. The resulting regression equations can be used, without metallographic analysis, to determine with minimal error the quantitative values of the total scale mass, the depth of the decarburized layer and the average inter-lamellar distance in pearlite of rolled coil high-carbon steel. Practical value. Based on the specifications of the «Bekaert» company (GA-03-16, GA-03-18, GS-03-02 and GS-06-01), a method of testing descaling by mechanical means from the surface of rolled coil steel of low- and high-carbon grades was developed and approved in the environment of PJSC «ArcelorMittal Kryvyi Rih». The work resulted in the development of a rapid method for determination of the total and remaining scale mass on rolled coil steel.
Local-scaling density-functional method: Intraorbit and interorbit density optimizations
International Nuclear Information System (INIS)
Koga, T.; Yamamoto, Y.; Ludena, E.V.
1991-01-01
The recently proposed local-scaling density-functional theory provides us with a practical method for the direct variational determination of the electron density function ρ(r). The structure of "orbits," which ensures the one-to-one correspondence between the electron density ρ(r) and the N-electron wave function Ψ({r_k}), is studied in detail. For the realization of local-scaling density-functional calculations, procedures for intraorbit and interorbit optimizations of the electron density function are proposed. These procedures are numerically illustrated for the helium atom in its ground state at the beyond-Hartree-Fock level.
Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.
2018-01-01
In a number of environmental studies, relationships between natural processes are often assessed through regression analyses using time series data. Such data are often multi-scale and non-stationary, leading to poor accuracy of the resulting regression models and therefore to results of moderate reliability. To deal with this issue, the present paper introduces the EMD-regression methodology, which consists of applying the empirical mode decomposition (EMD) algorithm to the data series and then using the resulting components in regression models. The proposed methodology presents a number of advantages. First, it accounts for the non-stationarity of the data series. Second, the approach acts as a scan of the relationship between a response variable and the predictors at different time scales, providing new insights into this relationship. To illustrate the proposed methodology, it is applied to the relationship between weather and cardiovascular mortality in Montreal, Canada. The results shed new light on the studied relationship. For instance, they show that humidity can cause excess mortality at the monthly time scale, a scale not visible in classical models. A comparison is also conducted with state-of-the-art methods, namely generalized additive models and distributed lag models, both widely used in weather-related health studies. The comparison shows that EMD-regression achieves better prediction performance and provides more detail than classical models concerning the relationship.
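The scale-wise regression idea can be sketched as follows. The paper obtains the components with EMD; as a self-contained stand-in, this sketch splits the predictor into a slow moving-average component and a fast residual and regresses the response on both, recovering a scale-specific effect. All names, the window length, and the synthetic data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def moving_average(x, w):
    # simple boxcar smoother standing in for a slow decomposition component
    return np.convolve(x, np.ones(w) / w, mode="same")

def scale_regression(y, x, w=25):
    slow = moving_average(x, w)   # long-time-scale component
    fast = x - slow               # short-time-scale residual
    X = np.column_stack([np.ones_like(x), slow, fast])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [intercept, effect of slow component, effect of fast component]

# Synthetic example: the response depends only on the slow scale of x.
t = np.linspace(0, 10, 500)
x = np.sin(2 * np.pi * 0.2 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)
y = 2.0 * moving_average(x, 25) + 0.1 * np.random.default_rng(1).standard_normal(500)
coef = scale_regression(y, x)
```

A plain regression of y on x would blur the two scales together; the scale-wise fit isolates a strong slow-scale effect and a near-zero fast-scale effect.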
Subspace Barzilai-Borwein Gradient Method for Large-Scale Bound Constrained Optimization
International Nuclear Information System (INIS)
Xiao Yunhai; Hu Qingjie
2008-01-01
An active set subspace Barzilai-Borwein gradient algorithm for large-scale bound constrained optimization is proposed. The active sets are estimated by an identification technique. The search direction consists of two parts: some of the components are simply defined; the other components are determined by the Barzilai-Borwein gradient method. In this work, a nonmonotone line search strategy that guarantees global convergence is used. Preliminary numerical results show that the proposed method is promising and competitive with the well-known SPG method on a subset of bound constrained problems from the CUTEr collection.
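A minimal sketch of the underlying ingredients, assuming a convex quadratic objective: a Barzilai-Borwein step length combined with projection onto the bounds. The paper's full algorithm adds active-set identification and the nonmonotone line search, which are omitted here; the problem data below are illustrative.

```python
import numpy as np

def projected_bb(A, b, l, u, x0, iters=200):
    """Projected Barzilai-Borwein iteration for
    min 0.5*x'Ax - b'x  subject to  l <= x <= u  (A symmetric positive definite)."""
    x = np.clip(x0, l, u)
    g = A @ x - b
    alpha = 1.0                       # initial step length
    for _ in range(iters):
        x_new = np.clip(x - alpha * g, l, u)   # gradient step + projection
        g_new = A @ x_new - b
        s, yv = x_new - x, g_new - g
        if s @ yv > 1e-12:
            alpha = (s @ s) / (s @ yv)         # BB1 step length
        x, g = x_new, g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = projected_bb(A, b, l=np.zeros(2), u=np.ones(2), x0=np.zeros(2))
```

Here the unconstrained minimizer lies inside the box, so the iteration should recover the solution of Ax = b.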
FDTD method for laser absorption in metals for large scale problems.
Deng, Chun; Ki, Hyungson
2013-10-21
The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grids. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
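For reference, the basic Yee update that FDTD builds on can be sketched in one dimension (normalized units, perfectly conducting ends, a soft Gaussian source). The paper's wavelength-enlarging treatment and metal-absorption model are not reproduced; this only shows the kind of grid update involved, with all parameters assumed for illustration.

```python
import numpy as np

def fdtd_1d(nx=400, nt=600, src=50):
    """1-D FDTD (Yee) leapfrog update in vacuum with a soft Gaussian source."""
    c, dx = 1.0, 1.0
    dt = 0.5 * dx / c                  # satisfies the Courant stability limit
    Ez = np.zeros(nx)                  # E field on integer grid points
    Hy = np.zeros(nx - 1)              # H field on half-integer grid points
    for n in range(nt):
        Hy += dt / dx * (Ez[1:] - Ez[:-1])         # update H from the curl of E
        Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])   # update E from the curl of H
        Ez[src] += np.exp(-((n - 60) / 15.0) ** 2) # inject a Gaussian pulse
    return Ez

Ez = fdtd_1d()
```

The boundary values Ez[0] and Ez[-1] stay at zero, acting as perfect electric conductors, so the injected pulse propagates and reflects without growing.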
A method for investigating relative timing information on phylogenetic trees.
Ford, Daniel; Matsen, Frederick A; Stadler, Tanja
2009-04-01
In this paper, we present a new way to describe the timing of branching events in phylogenetic trees. Our description is in terms of the relative timing of diversification events between sister clades; as such it is complementary to existing methods using lineages-through-time plots which consider diversification in aggregate. The method can be applied to look for evidence of diversification happening in lineage-specific "bursts", or the opposite, where diversification between 2 clades happens in an unusually regular fashion. In order to be able to distinguish interesting events from stochasticity, we discuss 2 classes of neutral models on trees with relative timing information and develop a statistical framework for testing these models. These model classes include both the coalescent with ancestral population size variation and global rate speciation-extinction models. We end the paper with 2 example applications: first, we show that the evolution of the hepatitis C virus deviates from the coalescent with arbitrary population size. Second, we analyze a large tree of ants, demonstrating that a period of elevated diversification rates does not appear to have occurred in a bursting manner.
Räth, Christoph; Baum, Thomas; Monetti, Roberto; Sidorenko, Irina; Wolf, Petra; Eckstein, Felix; Matsuura, Maiko; Lochmüller, Eva-Maria; Zysset, Philippe K; Rummeny, Ernst J; Link, Thomas M; Bauer, Jan S
2013-12-01
In this study, we investigated the scaling relations between trabecular bone volume fraction (BV/TV) and parameters of the trabecular microstructure at different skeletal sites. Cylindrical bone samples with a diameter of 8 mm were harvested from different skeletal sites of 154 human donors in vitro: 87 from the distal radius, 59/69 from the thoracic/lumbar spine, 51 from the femoral neck, and 83 from the greater trochanter. μCT images were obtained with an isotropic spatial resolution of 26 μm. BV/TV and trabecular microstructure parameters (TbN, TbTh, TbSp, scaling indices (means ⟨α⟩ and ⟨αz⟩ and standard deviations σ(α) and σ(αz)), and Minkowski Functionals (Surface, Curvature, Euler)) were computed for each sample. The regression coefficient β was determined for each skeletal site as the slope of a linear fit in the double-logarithmic representations of the correlations of BV/TV versus the respective microstructure parameter. Statistically significant correlation coefficients ranging from r=0.36 to r=0.97 were observed for BV/TV versus microstructure parameters, except for Curvature and Euler. The regression coefficients β were 0.19 to 0.23 (TbN), 0.21 to 0.30 (TbTh), -0.28 to -0.24 (TbSp), 0.58 to 0.71 (Surface) and 0.12 to 0.16 (⟨α⟩), 0.07 to 0.11 (⟨αz⟩), -0.44 to -0.30 (σ(α)), and -0.39 to -0.14 (σ(αz)) at the different skeletal sites. The 95% confidence intervals of β overlapped for almost all microstructure parameters at the different skeletal sites. The scaling relations were independent of vertebral fracture status and similar for subjects aged 60-69, 70-79, and >79 years. In conclusion, the bone volume fraction-microstructure scaling relations showed a rather universal character. © 2013.
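The regression coefficient β reported above is simply the slope of a straight-line fit in double-logarithmic coordinates. A short sketch on synthetic data with a known power law (values illustrative, not the bone data):

```python
import numpy as np

def scaling_exponent(x, y):
    """Fit y ~ a * x**beta by linear regression in log-log space."""
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    return slope, np.exp(intercept)

# Synthetic stand-in: a TbN-like variable obeying y = 1.7 * x**0.21 with
# small multiplicative noise, over a BV/TV-like range.
rng = np.random.default_rng(2)
bvtv = rng.uniform(0.05, 0.4, 100)
tbn = 1.7 * bvtv ** 0.21 * np.exp(rng.normal(0, 0.01, 100))
beta, a = scaling_exponent(bvtv, tbn)
```

The fitted slope recovers the generating exponent 0.21, which is the sense in which β characterizes a scaling relation.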
Liu, Huiyu; Zhang, Mingyang; Lin, Zhenshan
2017-10-05
Climate changes are considered to significantly impact net primary productivity (NPP). However, there are few studies on how climate changes at multiple time scales impact NPP. With the MODIS NPP product and station-based observations of sunshine duration, annual average temperature and annual precipitation, the impacts of climate changes at different time scales on annual NPP have been studied with the EEMD (ensemble empirical mode decomposition) method in the Karst area of northwest Guangxi, China, during 2000-2013. Moreover, with a partial least squares regression (PLSR) model, the relative importance of climatic variables for annual NPP has been explored. The results show that (1) only at the quasi-3-year time scale do sunshine duration and temperature have significantly positive relations with NPP. (2) Annual precipitation has no significant relation to NPP by direct comparison, but a significantly positive relation at the 5-year time scale; this is because the 5-year time scale is not the dominant scale of precipitation. (3) The changes of NPP may be dominated by inter-annual variability. (4) Multiple time scale analysis greatly improves the performance of the PLSR model for estimating NPP. The variable importance in projection (VIP) scores of sunshine duration and temperature at the quasi-3-year time scale, and of precipitation at the quasi-5-year time scale, are greater than 0.8, indicating that they were important for NPP during 2000-2013. However, sunshine duration and temperature at the quasi-3-year time scale are much more important. Our results underscore the importance of multiple time scale analysis for revealing the relations of NPP to a changing climate.
A stochastic immersed boundary method for fluid-structure dynamics at microscopic length scales
International Nuclear Information System (INIS)
Atzberger, Paul J.; Kramer, Peter R.; Peskin, Charles S.
2007-01-01
In modeling many biological systems, it is important to take into account flexible structures which interact with a fluid. At the length scale of cells and cell organelles, thermal fluctuations of the aqueous environment become significant. In this work, it is shown how the immersed boundary method of [C.S. Peskin, The immersed boundary method, Acta Num. 11 (2002) 1-39.] for modeling flexible structures immersed in a fluid can be extended to include thermal fluctuations. A stochastic numerical method is proposed which deals with stiffness in the system of equations by handling systematically the statistical contributions of the fastest dynamics of the fluid and immersed structures over long time steps. An important feature of the numerical method is that time steps can be taken in which the degrees of freedom of the fluid are completely underresolved, partially resolved, or fully resolved while retaining a good level of accuracy. Error estimates in each of these regimes are given for the method. A number of theoretical and numerical checks are furthermore performed to assess its physical fidelity. For a conservative force, the method is found to simulate particles with the correct Boltzmann equilibrium statistics. It is shown in three dimensions that the diffusion of immersed particles simulated with the method has the correct scaling in the physical parameters. The method is also shown to reproduce a well-known hydrodynamic effect of a Brownian particle in which the velocity autocorrelation function exhibits an algebraic (τ^(-3/2)) decay for long times [B.J. Alder, T.E. Wainwright, Decay of the Velocity Autocorrelation Function, Phys. Rev. A 1(1) (1970) 18-21]. A few preliminary results are presented for more complex systems which demonstrate some potential application areas of the method. Specifically, we present simulations of osmotic effects of molecular dimers, worm-like chain polymer knots, and a basic model of a molecular motor immersed in fluid subject to a
Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David
2015-04-01
In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP) in order to gain additional insights on the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach
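The wavelet multiresolution step can be sketched with the simplest case, a Haar decomposition. The abstract does not specify the wavelet, so this is only a hedged stand-in showing how a monthly series splits into a coarse approximation and scale-specific detail components; the synthetic series is illustrative.

```python
import numpy as np

def haar_multiresolution(x, levels=3):
    """Orthonormal Haar decomposition: at each level, split the current
    approximation into a coarser approximation and a detail component."""
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        pairs = approx[: len(approx) // 2 * 2].reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    return approx, details

# Illustrative random-walk-like "monthly" series of length 256
rng = np.random.default_rng(4)
series = np.cumsum(rng.standard_normal(256))
approx, details = haar_multiresolution(series, levels=3)
```

Because the transform is orthonormal, the energy of the series is exactly partitioned across the approximation and the detail components, which is what makes scale-by-scale comparisons with a large-scale predictor meaningful.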
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no [Centre for Theoretical and Computational Chemistry CTCC, Department of Chemistry, University of Tromsø, N-9037 Tromsø (Norway); Törk, Lisa; Hättig, Christof, E-mail: christof.haettig@rub.de [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, D-44801 Bochum (Germany)
2014-11-21
We present scaling factors for vibrational frequencies calculated within the harmonic approximation and the correlated wave-function methods coupled cluster singles and doubles model (CC2) and Møller-Plesset perturbation theory (MP2), with and without spin-component scaling (SCS) or spin-opposite scaling (SOS). Frequency scaling factors and the remaining deviations from the reference data are evaluated for several non-augmented basis sets of the cc-pVXZ family of generally contracted correlation-consistent basis sets, as well as for the segmented contracted TZVPP basis. We find that the SCS and SOS variants of CC2 and MP2 lead to slightly better accuracy for the scaled vibrational frequencies. The determined frequency scaling factors can also be used for vibrational frequencies calculated for excited states through response theory with CC2 and the algebraic diagrammatic construction through second order, as well as their spin-component scaled variants.
Chacón Rebollo, Tomás; Dia, Ben Mansour
2015-01-01
This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. This makes it possible to calculate the sub-grid scales element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales, due to the addition of the sub-grid spectral scales.
Adaptation study of the Turkish version of the Gambling-Related Cognitions Scale (GRCS-T).
Arcan, K; Karanci, A N
2015-03-01
This study aimed to adapt the Gambling-Related Cognitions Scale, developed by Raylu and Oei (Addiction 99(6):757-769, 2004a), into Turkish (GRCS-T) and to test the validity and reliability of this version. The significance of erroneous cognitions in the development and maintenance of gambling problems, the importance of promoting gambling research in different cultures, and the limited information about gambling individuals in Turkey due to limited gambling research interest inspired the present study. The sample consisted of 354 voluntary male participants above age 17 who were betting on sports and horse races, selected through convenience sampling at betting terminals. The results of a confirmatory factor analysis following the original scale's five-factor structure indicated a good fit to the data. The analyses were carried out with 21 items owing to the relatively inadequate psychometric properties of two GRCS-T items. Correlational analyses and group comparison tests supported the concurrent and criterion validity of the GRCS-T. Cronbach's alpha was 0.84 for the whole scale, while the coefficients ranged between 0.52 and 0.78 for the GRCS-T subscales. The findings, which suggest that the GRCS-T is a valid and reliable instrument for identifying gambling cognitions in Turkish samples, are discussed considering the possible influence of the sample make-up and cultural texture, within the limitations of the present study and in the light of the relevant literature.
Multi-Scale Entropy Analysis as a Method for Time-Series Analysis of Climate Data
Directory of Open Access Journals (Sweden)
Heiko Balzter
2015-03-01
Full Text Available Evidence is mounting that the temporal dynamics of the climate system are changing at the same time as the average global temperature is increasing due to multiple climate forcings. A large number of extreme weather events such as prolonged cold spells, heatwaves, droughts and floods have been recorded around the world in the past 10 years. Such changes in the temporal scaling behaviour of climate time-series data can be difficult to detect. While there are easy and direct ways of analysing climate data by calculating the means and variances for different levels of temporal aggregation, these methods can miss more subtle changes in their dynamics. This paper describes multi-scale entropy (MSE) analysis as a tool to study climate time-series data and to identify temporal scales of variability and their change over time in climate time-series. MSE estimates the sample entropy of the time-series after coarse-graining at different temporal scales. An application of MSE to Central European, variance-adjusted, mean monthly air temperature anomalies (CRUTEM4v) is provided. The results show that the temporal scales of the current climate (1960–2014) are different from the long-term average (1850–1960). For temporal scale factors longer than 12 months, the sample entropy increased markedly compared to the long-term record. Such an increase can be explained by systems theory with greater complexity in the regional temperature data. From 1961 the patterns of monthly air temperatures are less regular at time-scales greater than 12 months than in the earlier time period. This finding suggests that, at these inter-annual time scales, the temperature variability has become less predictable than in the past. It is possible that climate system feedbacks are expressed in altered temporal scales of the European temperature time-series data. A comparison with the variance and Shannon entropy shows that MSE analysis can provide additional information on the
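A compact sketch of the MSE recipe described above: coarse-grain the series at each scale factor, then compute the sample entropy of each coarse-grained series. The parameter choices (m = 2, tolerance r = 0.15 times the standard deviation, recomputed per scale) are common defaults assumed for illustration, not necessarily those of the paper.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average the series in non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -log of the ratio of (m+1)-point to m-point template
    matches within tolerance r (Chebyshev distance), self-matches excluded."""
    if r is None:
        r = 0.15 * np.std(x)
    def count_matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        n = len(templ)
        return (np.sum(d <= r) - n) / 2
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def mse(x, scales=(1, 2, 4, 8)):
    return [sample_entropy(coarse_grain(x, s)) for s in scales]

noise = np.random.default_rng(3).standard_normal(1000)
curve = mse(noise)
```

For uncorrelated noise the entropy tends to fall with increasing scale factor, whereas structured signals keep or gain complexity at coarser scales; that contrast is what MSE exploits.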
Fault diagnosis of rolling element bearing using a new optimal scale morphology analysis method.
Yan, Xiaoan; Jia, Minping; Zhang, Wan; Zhu, Lin
2018-02-01
Periodic transient impulses are key indicators of rolling element bearing defects. Efficient acquisition of the impact impulses associated with the defects is of great concern for the precise detection of bearing defects. However, the transient features of rolling element bearings are generally immersed in stochastic noise and harmonic interference. Therefore, in this paper, a new optimal scale morphology analysis method, named adaptive multiscale combination morphological filter-hat transform (AMCMFH), is proposed for rolling element bearing fault diagnosis, which can both reduce stochastic noise and preserve signal details. In this method, firstly, an adaptive selection strategy based on the feature energy factor (FEF) is introduced to determine the optimal structuring element (SE) scale of the multiscale combination morphological filter-hat transform (MCMFH). Subsequently, the MCMFH containing the optimal SE scale is applied to obtain the impulse components from the bearing vibration signal. Finally, the fault type of the bearing is confirmed by extracting the defect frequency from the envelope spectrum of the impulse components. The validity of the proposed method is verified through simulated analysis and bearing vibration data from a laboratory bench. Results indicate that the proposed method has a good capability to recognize localized faults on rolling element bearings from vibration signals. The study supplies a novel technique for the detection of faulty bearings. Copyright © 2018. Published by Elsevier Ltd.
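The morphological ingredients can be sketched with flat one-dimensional erosion and dilation combined into a white top-hat, which passes short transient impulses and suppresses the smooth trend. This is a generic illustration, not the paper's AMCMFH; the FEF-based scale selection is omitted and all parameters are assumptions.

```python
import numpy as np

def erode(x, scale):
    # flat grey-scale erosion: sliding minimum over a window of length `scale`
    pad = scale // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + scale].min() for i in range(len(x))])

def dilate(x, scale):
    # flat grey-scale dilation: sliding maximum over the same window
    pad = scale // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + scale].max() for i in range(len(x))])

def white_tophat(x, scale):
    opening = dilate(erode(x, scale), scale)  # opening = erosion then dilation
    return x - opening                        # keeps peaks narrower than the SE

# Smooth "harmonic" trend with periodic transient impulses riding on it
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 3 * t)
signal[::100] += 2.0
impulses = white_tophat(signal, scale=15)
```

The opening removes peaks narrower than the structuring element, so the top-hat output retains the impulses while the sinusoidal trend is largely cancelled; envelope analysis would then be applied to this residual.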
International Nuclear Information System (INIS)
Yang, W.; Wu, H.; Cao, L.
2012-01-01
More and more MOX fuel has been used all over the world in the past several decades. Compared with UO2 fuel, it has some new features: the neutron spectrum is harder, and more resonance interference effects arise within the resonance energy range because of the additional resonant nuclides contained in MOX fuel. In this paper, the wavelet scaling function expansion method is applied to study the resonance behavior of plutonium isotopes within MOX fuel. The wavelet scaling function expansion continuous-energy self-shielding method was developed recently and has been validated and verified by comparison with Monte Carlo calculations. In this method, continuous-energy cross-sections are used within the resonance energy range, which means it can solve problems with serious resonance interference effects without iterative calculations; the method is therefore naturally suited to the MOX fuel resonance calculation problem. Furthermore, plutonium isotopes show strong oscillations of the total cross-section within the thermal energy range, especially 240Pu and 242Pu. To take the thermal resonance effect of plutonium isotopes into consideration, the wavelet scaling function expansion continuous-energy resonance calculation code WAVERESON is enhanced by applying the free-gas scattering kernel to obtain the continuous-energy scattering source within the thermal energy range (2.1 eV to 4.0 eV), in contrast to the resonance energy range, in which the elastic scattering kernel is used. Finally, all WAVERESON calculation results are compared with MCNP calculations. (authors)
III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.
Davis-Kean, Pamela E; Jager, Justin
2017-06-01
For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited both in power to detect differences and in the demographic diversity needed to generalize clearly and broadly. Thus, in this chapter we discuss the value of using existing large-scale data sets to test the complex questions of child development, and how to develop future large-scale data sets that are both representative and able to answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.
Dorazio, Robert; Delampady, Mohan; Dey, Soumen; Gopalaswamy, Arjun M.; Karanth, K. Ullas; Nichols, James D.
2017-01-01
Conservationists and managers are continually under pressure from the public, the media, and political policy makers to provide “tiger numbers,” not just for protected reserves, but also for large spatial scales, including landscapes, regions, states, nations, and even globally. Estimating the abundance of tigers within relatively small areas (e.g., protected reserves) is becoming increasingly tractable (see Chaps. 9 and 10), but doing so for larger spatial scales still presents a formidable challenge. Those who seek “tiger numbers” are often not satisfied by estimates of tiger occupancy alone, regardless of the reliability of the estimates (see Chaps. 4 and 5). As a result, wherever tiger conservation efforts are underway, either substantially or nominally, scientists and managers are frequently asked to provide putative large-scale tiger numbers based either on a total count or on an extrapolation of some sort (see Chaps. 1 and 2).
van der Hilst, R. D.; de Hoop, M. V.; Shim, S. H.; Shang, X.; Wang, P.; Cao, Q.
2012-04-01
Over the past three decades, tremendous progress has been made in mapping mantle heterogeneity and in understanding these structures in terms of, for instance, the evolution of Earth's crust, continental lithosphere, and thermo-chemical mantle convection. Converted-wave imaging (e.g., receiver functions) and reflection seismology (e.g., SS stacks) have helped constrain interfaces in the crust and mantle; surface wave dispersion (from earthquake or ambient-noise signals) characterizes wavespeed variations in the continental and oceanic lithosphere; and body wave and multi-mode surface wave data have been used to map trajectories of mantle convection and delineate mantle regions of anomalous elastic properties. Collectively, these studies have revealed substantial ocean-continent differences and suggest that convective flow is strongly influenced by, but permitted to cross, the upper mantle transition zone. Many questions have remained unanswered, however, and further advances in understanding require more accurate depictions of Earth's heterogeneity at a wider range of length scales. To meet this challenge we need new observations (more, better, and different types of data) and methods that help us extract and interpret more information from the rapidly growing volumes of broadband data. The huge data volumes and the desire to extract more signal from them mean that we have to go beyond 'business as usual' (that is, simplified theory, manual inspection of seismograms, and so on). Indeed, they inspire the development of automated full-wave methods, both for tomographic delineation of smooth wavespeed variations and for the imaging (for instance, through inverse scattering) of medium contrasts. Adjoint tomography and reverse-time migration, which are closely related wave-equation methods, have begun to revolutionize seismic inversion of global and regional waveform data. In this presentation we will illustrate this development, and its promise, drawing from our work
Inhibitory effect of glutamic acid on the scale formation process using electrochemical methods.
Karar, A; Naamoune, F; Kahoul, A; Belattar, N
2016-08-01
The formation of calcium carbonate CaCO3 in water has some important implications in geoscience research, ocean chemistry studies, CO2 emission issues and biology. In industry, the scaling phenomenon may cause technical problems, such as reduced heat transfer efficiency in cooling systems and obstruction of pipes. This paper focuses on the study of glutamic acid (GA) for reducing CaCO3 scale formation on metallic surfaces in the water of the Bir Aissa region. The anti-scaling properties of glutamic acid (GA), used as a complexing agent of Ca(2+) ions, have been evaluated by chronoamperometry and electrochemical impedance spectroscopy in conjunction with microscopic examination. Chemical and electrochemical study of this water shows a high calcium concentration. Characterization by X-ray diffraction reveals that while the CaCO3 scale formed chemically is a mixture of calcite, aragonite and vaterite, the scale deposited electrochemically is pure calcite. The effect of temperature on the efficiency of the inhibitor was investigated. At 30 and 40°C, complete scaling inhibition was obtained at a GA concentration of 18 mg/L, with an efficiency rate of 90.2%. However, the efficiency of GA decreased at 50 and 60°C.
Superhydrophobic multi-scale ZnO nanostructures fabricated by chemical vapor deposition method.
Zhou, Ming; Feng, Chengheng; Wu, Chunxia; Ma, Weiwei; Cai, Lan
2009-07-01
ZnO nanostructures were synthesized on Si(100) substrates by the chemical vapor deposition (CVD) method. Different morphologies of ZnO nanostructures, such as nanoparticle films, micro-pillars and micro-nano multi-scale structures, were obtained under different conditions. XRD and TEM results showed the good quality of the ZnO crystal growth. Selected-area electron diffraction analysis indicates that the individual nanowires are single crystals. The wettability of ZnO was studied with a contact angle measuring apparatus. We found that the wettability changes from hydrophobic to superhydrophobic as the structure changes from a smooth particle film to single micro-pillars, nanowires and micro-nano multi-scale structures. Compared with the particle film, whose contact angle (CA) is 90.7 degrees, the CA of the single-scale microstructure is 130-140 degrees and that of the sparse micro-nano multi-scale structure is 140-150 degrees. When the surface is a dense micro-nano multi-scale structure such as a nano-lawn, however, the CA can reach 168.2 degrees. The results indicate that surface microstructure is very important to surface wettability: the micro-nano multi-scale structure outperforms the single-scale structure, and the dense multi-scale structure outperforms the sparse one.
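One common way to rationalize such multi-scale superhydrophobicity (an assumption here, not stated in the abstract) is the Cassie-Baxter relation, in which the droplet rests partly on trapped air. A minimal sketch estimating the wetted solid fraction from the reported contact angles:

```python
import math

def cassie_solid_fraction(theta_flat_deg, theta_apparent_deg):
    """Cassie-Baxter: cos(theta*) = f * (cos(theta) + 1) - 1,
    solved for the wetted solid fraction f."""
    ct = math.cos(math.radians(theta_flat_deg))
    cta = math.cos(math.radians(theta_apparent_deg))
    return (cta + 1.0) / (ct + 1.0)

# Reported CAs: 90.7 deg (particle film) and 168.2 deg (nano-lawn).
f = cassie_solid_fraction(90.7, 168.2)
# f comes out near 0.02: the droplet would touch only ~2% solid.
```

Under this (assumed) model, the dense nano-lawn leaves the droplet in contact with only a few percent of the solid surface, which is consistent with the observed jump in contact angle.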
DEFF Research Database (Denmark)
Schnettler, Berta; Miranda, Horacio; Miranda-Zapata, Edgardo
2017-01-01
This study examined longitudinal measurement invariance in the Satisfaction with Food-related Life (SWFL) scale using follow-up data from university students. We examined this measure of the SWFL in different groups of students, separated by various characteristics. Through non-probabilistic longitudinal sampling, 114 university students (65.8% female, mean age: 22.5) completed the SWFL questionnaire three times, over intervals of approximately one year. Confirmatory factor analysis was used to examine longitudinal measurement invariance. Two types of analysis were conducted: first, a longitudinal … students of both sexes, and among those older and younger than 22 years. Generally, these findings suggest that the SWFL scale has satisfactory psychometric properties for longitudinal measurement invariance in university students with similar characteristics as the students that participated …
Stability of neutrino parameters and self-complementarity relation with varying SUSY breaking scale
Singh, K. Sashikanta; Roy, Subhankar; Singh, N. Nimai
2018-03-01
The scale at which supersymmetry (SUSY) breaks (ms) is still unknown. Following a top-down approach, the present article endeavors to study the effect of varying ms on the radiative stability of the observational parameters associated with neutrino mixing. These parameters receive additional contributions in the minimal supersymmetric Standard Model (MSSM). A variation in ms will influence the bounds within which the Standard Model (SM) and the MSSM operate and hence will determine the different radiative contributions received from the two sectors while running the renormalization group equations (RGEs). The present work establishes the invariance of the self-complementarity relation among the three mixing angles, θ13 + θ12 ≈ θ23, under radiative evolution. A similar result concerning the mass ratio m2 : m1 is also found to be valid. In addition to varying ms, the work incorporates a range of different seesaw (SS) scales and examines how the latter affect the parameters.
Directory of Open Access Journals (Sweden)
De-Xin Yu
2013-01-01
Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers' route choice preferences, and the urban road network is decomposed effectively into multiple layers. Using generalized travel time as the road impedance function, the method builds a new multilayer, multitasking road-network data storage structure with object-oriented class definitions. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. Through sensitivity experiments, we compare the proposed path search method with current advanced optimal-path algorithms. The results demonstrate that the proposed method increases road-network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers. This method is therefore a significant advance in the field of urban road network guidance.
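The abstract does not give the algorithm's internals, so as a hedged stand-in, the shortest-path core that any such route search builds on can be sketched with Dijkstra's algorithm over generalized travel times; the toy network, node names and weights below are invented for illustration:

```python
import heapq

def shortest_path(adj, source, target):
    """Dijkstra's algorithm on a directed graph whose edge weights are
    generalized travel times (the road-impedance function in the text)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Toy network: generalized travel times on directed links.
adj = {"A": [("B", 4.0), ("C", 1.0)],
       "C": [("B", 2.0), ("D", 5.0)],
       "B": [("D", 1.0)]}
cost = shortest_path(adj, "A", "D")  # A -> C -> B -> D
```

The layered decomposition and preference weighting described in the abstract would enter through the construction of `adj` and its impedance values; the search loop itself stays the same.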
Multi-scale method for the resolution of the neutronic kinetics equations
International Nuclear Information System (INIS)
Chauvet, St.
2008-10-01
In this PhD thesis, in order to improve the time/precision ratio of numerical simulation calculations, we investigate multi-scale techniques for the resolution of the reactor kinetics equations. We focus on the mixed dual diffusion approximation and quasi-static methods. We introduce a space dependency for the amplitude function, which depends only on the time variable in the standard quasi-static context. With this new factorization, we develop two mixed dual problems which can be solved with CEA's solver MINOS. An algorithm is implemented to solve these problems, defined on different scales (in time and space). We name this approach the Local Quasi-Static method. We present this new multi-scale approach and its implementation. The details inherent in the amplitude and shape treatments are discussed and justified. Results and performances, compared to MINOS, are studied; they illustrate the improvement in the time/precision ratio for kinetics calculations. Furthermore, we open new possibilities to parallelize computations with MINOS. For the future, we also introduce some avenues for improvement with adaptive scales. (author)
Raju, K. P.
2018-05-01
The Calcium K spectroheliograms of the Sun from Kodaikanal have a data span of about 100 years, covering more than 9 solar cycles. The Ca K line is a strong chromospheric line dominated by the chromospheric network and plages, which are good indicators of solar activity. Length-scales and relative intensities of the chromospheric network have been obtained at solar latitudes from 50 degrees N to 50 degrees S from the spectroheliograms. The length-scale was obtained from the half-width of the two-dimensional autocorrelation of the latitude strip, which gives a measure of the width of the network boundary. As reported earlier for the transition-region extreme-ultraviolet (EUV) network, the relative intensity and the width of the chromospheric network boundary are found to depend on the solar cycle. A varying phase difference has been noticed in these quantities at different solar latitudes. A cross-correlation analysis of the quantities from other latitudes with the ±30 degree latitudes revealed an interesting phase-difference pattern indicating flux transfer. Evidence of equatorward flux transfer has been observed, with an average estimated speed of 5.8 m/s. Possible reasons for the drift include meridional circulation, torsional oscillations, or bright-point migration. Cross-correlation of intensity and length-scale from the same latitude showed increasing phase difference with increasing latitude. We have also obtained the cross-correlation of the quantities across the equator to look for possible phase lags between the two hemispheres. Signatures of lags are seen in the length-scales of the southern hemisphere near the equatorial latitudes, but no such lags are observed in the intensity. The results have important implications for the flux transfer over the solar surface and hence for solar activity and the dynamo.
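The length-scale estimate described above (the half-width of the autocorrelation of an intensity strip) can be sketched in one dimension as follows; the synthetic strips and smoothing widths are assumptions for illustration, not Kodaikanal data:

```python
import numpy as np

def autocorr_halfwidth(strip):
    """Half-width (in pixels) at half maximum of the normalized
    autocorrelation of a 1-D intensity strip."""
    s = strip - strip.mean()
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]  # lags >= 0
    ac /= ac[0]
    below = np.nonzero(ac < 0.5)[0]
    return below[0] if below.size else len(ac)

# Synthetic strips: the same noise smoothed over narrow vs wide windows,
# mimicking network features of different characteristic size.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
narrow = np.convolve(noise, np.ones(5) / 5, mode="same")
wide = np.convolve(noise, np.ones(25) / 25, mode="same")
```

Wider structures produce a broader autocorrelation peak and hence a larger half-width, which is the sense in which the half-width measures the width of the network boundary.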
Energy Technology Data Exchange (ETDEWEB)
Carey, G.F.; Young, D.M.
1993-12-31
The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element and finite difference methods, together with sparse iterative solution schemes, for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed by incorporating parallel iterative solution algorithms with associated preconditioners into parallel computer software. The schemes will be implemented on distributed-memory parallel architectures such as the CRAY MPP, Intel Paragon, nCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall Square (KSR) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand-challenge-class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.
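As a minimal, hedged illustration of one ingredient named above (a sparse iterative scheme with a parallelizable preconditioner), here is conjugate gradients with a Jacobi preconditioner applied to a toy 1-D Poisson system. This is a sketch under those assumptions, not the project's actual software:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner,
    the simplest member of the parallelizable preconditioner family."""
    M_inv = 1.0 / np.diag(A)          # diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Poisson (SPD, tridiagonal) matrix as a stand-in sparse system.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b)
```

Both the matrix-vector product and the diagonal preconditioner application decompose naturally over subdomains, which is what makes this class of schemes attractive on distributed-memory machines.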
2012-07-01
With the use of supplementary cementing materials (SCMs) in concrete mixtures, salt scaling tests such as ASTM C672 have been found to be overly aggressive and do not correlate well with field scaling performance. The reasons for this are thought to be b...
Scale relativity theory and integrative systems biology: 2. Macroscopic quantum-type mechanics.
Nottale, Laurent; Auffray, Charles
2008-05-01
In these two companion papers, we provide an overview and a brief history of the multiple roots, current developments and recent advances of integrative systems biology and identify multiscale integration as its grand challenge. Then we introduce the fundamental principles and the successive steps that have been followed in the construction of the scale relativity theory, which aims at describing the effects of a non-differentiable and fractal (i.e., explicitly scale dependent) geometry of space-time. The first paper of this series was devoted, in this new framework, to the construction from first principles of scale laws of increasing complexity, and to the discussion of some tentative applications of these laws to biological systems. In this second review and perspective paper, we describe the effects induced by the internal fractal structures of trajectories on motion in standard space. Their main consequence is the transformation of classical dynamics into a generalized, quantum-like self-organized dynamics. A Schrödinger-type equation is derived as an integral of the geodesic equation in a fractal space. We then indicate how gauge fields can be constructed from a geometric re-interpretation of gauge transformations as scale transformations in fractal space-time. Finally, we introduce a new tentative development of the theory, in which quantum laws would hold also in scale space, introducing complexergy as a measure of organizational complexity. Initial possible applications of this extended framework to the processes of morphogenesis and the emergence of prokaryotic and eukaryotic cellular structures are discussed. Having founded elements of the evolutionary, developmental, biochemical and cellular theories on the first principles of scale relativity theory, we introduce proposals for the construction of an integrative theory of life and for the design and implementation of novel macroscopic quantum-type experiments and devices, and discuss their potential
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S
2016-08-01
Dynamic-contrast-enhanced magnetic resonance imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data, which extracts quantitative contrast-reagent/tissue-specific model parameters, is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial-volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling to better represent the blood contrast-reagent (CR) concentration time-courses. Empirical approaches such as blinded AIF estimation or reference-tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). These approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol, and the perfect scaling factor for reconstructing the ground-truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined the parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water-exchange-sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both insensitive to AIF scaling. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
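The reported scaling behavior can be checked numerically in the FXL Tofts model: multiplying the AIF by s while dividing Ktrans and ve by s leaves the tissue curve unchanged, so fitted Ktrans and ve track the AIF amplitude while kep = Ktrans/ve does not. A sketch with an invented gamma-variate AIF and illustrative parameter values (not the study's data):

```python
import numpy as np

def tofts_ct(t, cp, ktrans, ve):
    """FXL Tofts model: Ct(t) = Ktrans * (Cp convolved with exp(-kep*t)),
    where kep = Ktrans / ve."""
    dt = t[1] - t[0]
    kep = ktrans / ve
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

t = np.linspace(0.0, 5.0, 500)       # minutes (illustrative)
cp = 5.0 * t * np.exp(-t / 0.5)      # toy gamma-variate AIF
s = 1.5                              # AIF amplitude scaling factor

ct_ref = tofts_ct(t, cp, ktrans=0.25, ve=0.3)
ct_scaled = tofts_ct(t, s * cp, ktrans=0.25 / s, ve=0.3 / s)
# Identical tissue curves: Ktrans and ve absorb the AIF scaling,
# while kep = Ktrans/ve is the same in both cases.
```

Because the two parameter sets produce indistinguishable tissue curves, a fit against a rescaled AIF returns rescaled Ktrans and ve but an unchanged kep, which is the AIF-scaling insensitivity the abstract describes.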
SVC Planning in Large-scale Power Systems via a Hybrid Optimization Method
DEFF Research Database (Denmark)
Yang, Guang ya; Majumder, Rajat; Xu, Zhao
2009-01-01
Research on the allocation of FACTS devices has attracted considerable interest from various perspectives. In this paper, a hybrid model is proposed to optimise the number, locations, and parameter settings of static Var compensators (SVCs) deployed in large-scale power systems. The model utilises the results of vulnerability assessment to determine candidate locations. A two-stage hybrid optimisation method is proposed to find the optimal SVC solution in the large-scale planning problem. In the first stage, a conventional genetic algorithm (GA) is exploited to generate a candidate solution pool. In the second stage, the candidates are presented to a linear planning model to investigate the system's optimal loadability, so that the optimal solution for SVC planning can be achieved. The method is demonstrated on the IEEE 300-bus system.
Studying the properties of photonic quasi-crystals by the scaling convergence method
International Nuclear Information System (INIS)
Ho, I-Lin; Ng, Ming-Yaw; Mai, Chien Chin; Ko, Peng Yu; Chang, Yia-Chung
2013-01-01
This work introduces the iterative scaling (or inflation) method to systematically approach and analyse the infinite structure of quasi-crystals. The resulting structures preserve local geometric orderings so as to prevent artificial disclinations across the boundaries of super-cells, with realistic quasi-crystals emerging at high iteration (infinite super-cell). The method provides an easy way to decorate quasi-crystalline lattices and to build compact reliefs with a quasi-periodic arrangement for the underlying applications. Numerical examples for the in-plane and off-plane properties of square-triangle quasi-crystals show fast convergence during iterative geometric scaling, revealing characteristics that do not appear in regular crystals. (paper)
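A one-dimensional analogue of the iterative inflation can make the idea concrete (this is an illustration, not the paper's square-triangle tiling): the Fibonacci substitution rule generates a quasi-periodic chain whose tile ratio converges to the golden mean under repeated inflation.

```python
def inflate(word, steps):
    """Apply the Fibonacci inflation rule L -> LS, S -> L repeatedly,
    a 1-D analogue of the iterative scaling used for quasi-crystals."""
    rule = {"L": "LS", "S": "L"}
    for _ in range(steps):
        word = "".join(rule[c] for c in word)
    return word

chain = inflate("L", 15)
ratio = chain.count("L") / chain.count("S")
# The L:S ratio approaches the golden mean (1 + sqrt(5)) / 2.
```

Each inflation step enlarges the super-cell while preserving the local ordering, mirroring how the paper's iterations approach the infinite quasi-crystal without introducing boundary defects.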
A novel laboratory scale method for studying heat treatment of cake flour
Chesterton, AKS; Wilson, David Ian; Sadd, PI; Moggridge, Geoffrey Dillwyn
2014-01-01
A lab-scale method for replicating the time–temperature history experienced by cake flours undergoing heat treatment was developed based on a packed bed configuration. The performance of heat-treated flours was compared with untreated and commercially heat-treated flour by test baking a high ratio cake formulation. Both cake volume and AACC shape measures were optimal after 15 min treatment at 130 °C, though their values varied between harvests. Separate oscillatory rheometry tests of cake ba...
Energy Technology Data Exchange (ETDEWEB)
Martena, Valentina; Censi, Roberta [University of Camerino, School of Pharmacy (Italy); Hoti, Ela; Malaj, Ledjan [University of Tirana, Department of Pharmacy (Albania); Di Martino, Piera, E-mail: piera.dimartino@unicam.it [University of Camerino, School of Pharmacy (Italy)
2012-12-15
The objective of this study is to select very simple and well-known laboratory-scale methods able to reduce the particle size of indomethacin down to the nanometric scale. The effect on the crystalline form and the dissolution behavior of the different samples was deliberately evaluated in the absence of any surfactants as stabilizers. Nanocrystals of indomethacin (IDM; the native crystals are in the γ form) were obtained by three laboratory-scale methods: A (Batch A: crystallization by solvent evaporation in a nano-spray dryer), B (Batches B-15 and B-30: wet milling and lyophilization), and C (Batches C-20-N and C-40-N: cryo-milling in the presence of liquid nitrogen). Nanocrystals obtained by method A (Batch A) crystallized into a mixture of the α and γ polymorphic forms. IDM obtained by the two other methods remained in the γ form, and a different tendency toward decreasing crystallinity was observed, with a more considerable decrease in crystalline degree for IDM milled for 40 min in the presence of liquid nitrogen. The intrinsic dissolution rate (IDR) was higher for Batches A and C-40-N, due to the higher IDR of the α form relative to the γ form for Batch A, and the lower crystallinity degree for both Batches A and C-40-N. These factors, as well as the decrease in particle size, influenced the IDM dissolution rate from the particle samples. Modifications of the solid physical state that may occur under different particle-size reduction treatments have to be taken into consideration during the scale-up and industrial development of new solid dosage forms.
CoRE: A context-aware relation extraction method for relation completion
Li, Zhixu; Sharaf, Mohamed Abdel Fattah; Sitbon, Laurianne; Du, Xiaoyong; Zhou, Xiaofang
2014-01-01
We identify relation completion (RC) as one recurring problem that is central to the success of novel big-data applications such as Entity Reconstruction and Data Enrichment. Given a semantic relation R, RC attempts to link entity pairs between two entity lists under the relation R. To accomplish the RC goals, we propose to formulate search queries for each query entity α based on some auxiliary information, in order to detect its target entity β from the set of retrieved documents. For instance, a pattern-based method (PaRE) uses extracted patterns as the auxiliary information in formulating search queries. However, high-quality patterns may decrease the probability of finding suitable target entities. As an alternative, we propose the CoRE method, which uses context terms learned from the surroundings of a relation's expression as the auxiliary information in formulating queries. Experimental results based on several real-world web data collections demonstrate that CoRE reaches a much higher accuracy than PaRE for the purpose of RC. © 1989-2012 IEEE.
Energy Technology Data Exchange (ETDEWEB)
Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
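The sparse-map idea can be sketched concretely. Assuming a toy representation (a dict sending each source index to the small set of target indices it couples to; the atom/shell/auxiliary names are illustrative, not the actual data structures of the described library), chaining and intersection reduce to set operations:

```python
# A sparse map sends each index of one set to the small set of indices
# of another set it couples to -- a tensor generalization of CSR rows.

def chain(m1, m2):
    """Compose two sparse maps: i -> {k : some j has j in m1[i], k in m2[j]}."""
    return {i: set().union(*(m2.get(j, set()) for j in js))
            for i, js in m1.items()}

def intersect(m1, m2):
    """Keep, for each source index, only targets present in both maps."""
    return {i: m1[i] & m2.get(i, set()) for i in m1}

# Toy example: atoms -> basis shells -> auxiliary fitting functions.
atom_to_shell = {0: {0, 1}, 1: {2}}
shell_to_aux = {0: {10}, 1: {10, 11}, 2: {12}}
atom_to_aux = chain(atom_to_shell, shell_to_aux)
```

Because each source index touches only a handful of targets (spatial locality), composed maps stay small, which is the structural origin of the linear scaling described above.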
Torrey, Paul; Vogelsberger, Mark; Hernquist, Lars; McKinnon, Ryan; Marinacci, Federico; Simcoe, Robert A.; Springel, Volker; Pillepich, Annalisa; Naiman, Jill; Pakmor, Rüdiger; Weinberger, Rainer; Nelson, Dylan; Genel, Shy
2018-06-01
The fundamental metallicity relation (FMR) is a postulated correlation between galaxy stellar mass, star formation rate (SFR), and gas-phase metallicity. At its core, this relation posits that offsets from the mass-metallicity relation (MZR) at a fixed stellar mass are correlated with galactic SFR. In this Letter, we use hydrodynamical simulations to quantify the time-scales over which populations of galaxies oscillate about the average SFR and metallicity values at fixed stellar mass. We find that Illustris and IllustrisTNG predict that galaxy offsets from the star formation main sequence and MZR oscillate over similar time-scales, are often anticorrelated in their evolution, evolve with the halo dynamical time, and produce a pronounced FMR. Our models indicate that galaxies oscillate about equilibrium SFR and metallicity values - set by the galaxy's stellar mass - and that SFR and metallicity offsets evolve in an anticorrelated fashion. This anticorrelated variability of the metallicity and SFR offsets drives the existence of the FMR in our models. In contrast to Illustris and IllustrisTNG, we speculate that the SFR and metallicity evolution tracks may become decoupled in galaxy formation models dominated by feedback-driven globally bursty SFR histories, which could weaken the FMR residual correlation strength. This opens the possibility of discriminating between bursty and non-bursty feedback models based on the strength and persistence of the FMR - especially at high redshift.
THE NON-CAUSAL ORIGIN OF THE BLACK-HOLE-GALAXY SCALING RELATIONS
International Nuclear Information System (INIS)
Jahnke, Knud; Macciò, Andrea V.
2011-01-01
We show that the M_BH-M_bulge scaling relations observed from the local to the high-z universe can be largely or even entirely explained by a non-causal origin, i.e., they do not imply the need for any physically coupled growth of black hole (BH) and bulge mass, for example, through feedback by active galactic nuclei (AGNs). Provided some physics for the absolute normalization, the creation of the scaling relations can be fully explained by the hierarchical assembly of BH and stellar mass through galaxy merging, from an initially uncorrelated distribution of BH and stellar masses in the early universe. We show this with a suite of dark matter halo merger trees for which we make assumptions about (uncorrelated) BH and stellar mass values at early cosmic times. We then follow the halos in the presence of global star formation and BH accretion recipes that (1) work without any coupling of the two properties per individual galaxy and (2) correctly reproduce the observed star formation and BH accretion rate density in the universe. With disk-to-bulge conversion in mergers included, our simulations even create the observed slope of ∼1.1 for the M_BH-M_bulge relation at z = 0. This also implies that AGN feedback is not a required (though still a possible) ingredient in galaxy evolution. In light of this, other mechanisms that can be invoked to truncate star formation in massive galaxies are equally justified.
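The central-limit character of this argument can be caricatured in a few lines: draw initially uncorrelated black-hole and stellar seed masses, let halos assemble from varying numbers of progenitors by pure addition, and a scaling relation appears without any physical coupling. All distributions and parameters below are illustrative assumptions, not the paper's merger trees:

```python
import numpy as np

rng = np.random.default_rng(42)
n_halos = 2000

def assemble(n_prog, mu, sigma):
    """Sum n_prog uncorrelated log-normal seed masses (dex scatter sigma)."""
    return (10 ** rng.normal(mu, sigma, n_prog)).sum()

# Without merging (one progenitor each): BH and stellar masses uncorrelated.
bh0 = np.array([assemble(1, 6.0, 0.5) for _ in range(n_halos)])
star0 = np.array([assemble(1, 10.0, 0.5) for _ in range(n_halos)])
r_before = np.corrcoef(np.log10(bh0), np.log10(star0))[0, 1]

# With merging: halos built from widely varying numbers of progenitors;
# both masses grow by pure addition, with no coupling per galaxy.
n_prog = (10 ** rng.uniform(0.0, 2.5, n_halos)).astype(int) + 1
bh1 = np.array([assemble(k, 6.0, 0.5) for k in n_prog])
star1 = np.array([assemble(k, 10.0, 0.5) for k in n_prog])
r_after = np.corrcoef(np.log10(bh1), np.log10(star1))[0, 1]
# r_before is consistent with zero, while r_after is large.
```

The correlation emerges solely because richer merger histories raise both masses together while the central limit theorem shrinks the relative scatter of each sum, which is the non-causal mechanism the abstract describes.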
Scaling relation of the anomalous Hall effect in (Ga,Mn)As
Glunk, M.; Daeubler, J.; Schoch, W.; Sauer, R.; Limmer, W.
2009-09-01
We present magnetotransport studies performed on an extended set of (Ga,Mn)As samples at 4.2 K, with longitudinal conductivities σxx ranging from the low-conductivity to the high-conductivity regime. The anomalous Hall conductivity σxy(AH) is extracted from the measured longitudinal and Hall resistivities. A transition from σxy(AH) = 20 Ω⁻¹ cm⁻¹, due to the Berry phase effect in the high-conductivity regime, to a scaling relation σxy(AH) ∝ σxx^1.6 for low-conductivity samples is observed. This scaling relation is consistent with a recently developed unified theory of the anomalous Hall effect in the framework of the Keldysh formalism. It turns out to be independent of crystallographic orientation, growth conditions, Mn concentration, and strain, and can therefore be considered universal for low-conductivity (Ga,Mn)As. The relation plays a crucial role when deriving values of the hole concentration from magnetotransport measurements in low-conductivity (Ga,Mn)As. In addition, the hole diffusion constants for the high-conductivity samples are determined from the measured longitudinal conductivities.
A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer
Directory of Open Access Journals (Sweden)
Xiangqing Huang
2017-10-01
A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating-accelerometer gravity gradient instrument (GGI). Automatically adjusting and matching the acceleration-to-current transfer functions of the four accelerometers is one of the basic and necessary technologies for rejecting common-mode accelerations in the development of a GGI. To adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current, and the two are applied together to the torque coil of the magnetic actuator. The injected current can be varied in proportion to the external adjustment needs, and the change in the acceleration-to-current transfer function is then realized dynamically. The new adjustment method has the advantages of no extra assembly and ease of operation. Changes in the scale factor ranging from 33% smaller to 100% larger are verified experimentally by applying different external coefficients. The static noise of the accelerometer is compared with and without the injected current, and the experimental results show no change in the current noise level, which further confirms the validity of the presented method.
A New Scale Factor Adjustment Method for Magnetic Force Feedback Accelerometer.
Huang, Xiangqing; Deng, Zhongguang; Xie, Yafei; Li, Zhu; Fan, Ji; Tu, Liangcheng
2017-10-27
A new and simple method to adjust the scale factor of a magnetic force feedback accelerometer is presented, which could be used in developing a rotating accelerometer gravity gradient instrument (GGI). Automatically adjusting and matching the acceleration-to-current transfer functions of the four accelerometers is one of the basic and necessary technologies for rejecting common-mode accelerations in the development of a GGI. In order to adjust the scale factor of the magnetic force rebalance accelerometer, an external current is injected and combined with the normal feedback current; they are then applied together to the torque coil of the magnetic actuator. The injected current can be varied proportionally according to the external adjustment needs, so that the change in the acceleration-to-current transfer function is realized dynamically. The new adjustment method has the advantages of requiring no extra assembly and being easy to operate. Scale factor changes ranging from 33% smaller to 100% larger are verified experimentally by adjusting different external coefficients. The static noise of the accelerometer is compared with and without the injected current, and the experimental results show no change in the current noise level, which further confirms the validity of the presented method.
Oliveira, Sérgio C.; Zêzere, José L.; Lajas, Sara; Melo, Raquel
2017-07-01
Approaches used to assess shallow slide susceptibility at the basin scale are conceptually different depending on the use of statistical or physically based methods. The former are based on the assumption that the same causes are more likely to produce the same effects, whereas the latter are based on the comparison between forces which tend to promote movement along the slope and the counteracting forces that are resistant to motion. Within this general framework, this work tests two hypotheses: (i) although conceptually and methodologically distinct, the statistical and deterministic methods generate similar shallow slide susceptibility results regarding the model's predictive capacity and spatial agreement; and (ii) the combination of shallow slide susceptibility maps obtained with statistical and physically based methods, for the same study area, generates a more reliable susceptibility model for shallow slide occurrence. These hypotheses were tested at a small test site (13.9 km²) located north of Lisbon (Portugal), using a statistical method (the information value method, IV) and a physically based method (the infinite slope method, IS). The landslide susceptibility maps produced with the statistical and deterministic methods were combined into a new landslide susceptibility map. The latter was based on a set of integration rules defined by the cross tabulation of the susceptibility classes of both maps and analysis of the corresponding contingency tables. The results demonstrate a higher predictive capacity of the new shallow slide susceptibility map, which combines the independent results obtained with statistical and physically based models. Moreover, the combination of the two models allowed the identification of areas where the results of the information value and the infinite slope methods are contradictory. These areas were therefore classified as uncertain and deserve additional investigation at a more detailed scale.
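The cross-tabulation integration step can be illustrated with a small sketch. The class codes and the specific rules below are illustrative assumptions; the study derives its actual integration rules from contingency-table analysis of the two maps:

```python
import numpy as np

# Hedged sketch: combine a statistical (IV) and a physical (IS) susceptibility
# class map cell by cell.  Rules here are assumptions for illustration only:
# identical classes are kept, one-class gaps take the higher class, and
# LOW-vs-HIGH contradictions are flagged as uncertain.
LOW, MODERATE, HIGH = 0, 1, 2
UNCERTAIN = -1

def combine(iv_map, is_map):
    """Combine two susceptibility class maps via simple cross-tabulated rules."""
    iv = np.asarray(iv_map)
    phys = np.asarray(is_map)
    out = np.full(iv.shape, UNCERTAIN, dtype=int)
    agree = iv == phys
    out[agree] = iv[agree]                  # identical classes are kept
    near = np.abs(iv - phys) == 1
    out[near] = np.maximum(iv, phys)[near]  # one-class gap: keep the higher class
    return out                              # remaining contradictions stay UNCERTAIN

iv = np.array([[LOW, HIGH], [MODERATE, HIGH]])
phys = np.array([[LOW, LOW], [HIGH, HIGH]])
combined = combine(iv, phys)
```

The uncertain cells are exactly those the abstract describes as deserving additional, more detailed investigation.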
International Nuclear Information System (INIS)
Reynolds, J.G.
2011-01-01
Previous researchers have developed correlations between oxide electronegativity and oxide basicity. The present paper revises those correlations using a newer method of calculating electronegativity of the oxygen anion. Basicity is expressed using the Smith α parameter scale. A linear relation was found between the oxide electronegativity and the Smith α parameter, with an R² of 0.92. An example application of this new correlation to the durability of high-level nuclear waste glass is demonstrated. The durability of waste glass was found to be directly proportional to the quantity and basicity of the oxides of tetrahedrally coordinated network forming ions.
Multi-scale image segmentation method with visual saliency constraints and its application
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis has many advantages over pixel-based methods, so it is one of the current research hotspots. Obtaining image objects through multi-scale image segmentation is an essential prerequisite for object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of the image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features deserve more attention than others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works
Veerman, Marcel; Resendiz, Marino J E; Garcia-Garibay, Miguel A
2006-06-08
Photochemical reactions in the solid state can be scaled up from a few milligrams to 10 grams by using colloidal suspensions of a photoactive molecular crystal prepared by the solvent shift method. Pure products are recovered by filtration, and the use of H(2)O as a suspension medium makes this method a very attractive one from a green chemistry perspective. Using the photodecarbonylation of dicumyl ketone (DCK) as a test system, we show that reaction efficiencies in colloidal suspensions rival those observed in solution.
The method of arbitrarily large moments to calculate single scale processes in quantum field theory
Energy Technology Data Exchange (ETDEWEB)
Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC)
2017-01-15
We devise a new method to calculate a large number of Mellin moments of single scale quantities using the systems of differential and/or difference equations obtained by integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered; this holds in the case of first-order factorizing systems. In any case, one may derive highly precise numerical representations using this method, which is otherwise completely analytic.
The method of arbitrarily large moments to calculate single scale processes in quantum field theory
Directory of Open Access Journals (Sweden)
Johannes Blümlein
2017-08-01
Full Text Available We devise a new method to calculate a large number of Mellin moments of single scale quantities using the systems of differential and/or difference equations obtained by integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered; this holds in the case of first-order factorizing systems. In any case, one may derive highly precise numerical representations using this method, which is otherwise completely analytic.
Separation methods for acyclovir and related antiviral compounds.
Loregian, A; Gatti, R; Palù, G; De Palo, E F
2001-11-25
Acyclovir (ACV) is an antiviral drug, which selectively inhibits replication of members of the herpes group of DNA viruses with low cell toxicity. Valaciclovir (VACV), a prodrug of ACV, is usually preferred in the oral treatment of viral infections, mainly herpes simplex virus (HSV). Also other analogues such as ganciclovir and penciclovir are discussed here. The former acts against cytomegalovirus (CMV) in general and the latter against CMV retinitis. The action mechanism of these antiviral drugs is presented briefly here, mainly via phosphorylation and inhibition of the viral DNA polymerase. The therapeutic use and the pharmacokinetics are also outlined. The measurement of the concentration of acyclovir and related compounds in biological samples poses a particularly significant challenge because these drugs tend to be structurally similar to endogenous substances. The analysis requires the use of highly selective analytical techniques, and chromatography methods are a first choice to determine drug content in pharmaceuticals and to measure them in body fluids. Chromatography can be considered the procedure of choice for the bio-analysis of this class of antiviral compounds, as this methodology is characterised by good specificity and accuracy and it is particularly useful when metabolites need to be monitored. Among chromatographic techniques, reversed-phase (RP) HPLC is widely used for the analysis. C18 silica columns from 7.5 to 30 cm in length are used, the separation is carried out mainly at room temperature, and less than 10 min is sufficient for the analysis at a flow-rate of 1.0-1.5 ml/min. The separation methods require an isocratic system, and various authors have proposed a variety of mobile phases. The detection requires absorbance or fluorescence measurements carried out at 250-254 nm and at λex = 260-285 nm, λem = 375-380 nm, respectively. The detection limit is about 0.3-10 ng/ml but the most important aspect is related to the sample treatment
International Nuclear Information System (INIS)
Kobayashi, Satoru
2013-01-01
We report low-field magnetic hysteresis scaling in thulium with strong uniaxial anisotropy. A power-law hysteresis scaling with an exponent of 1.13±0.02 is found between hysteresis loss and remanent flux density of minor loops in the low-temperature ferrimagnetic phase. This exponent value is slightly lower than the 1.25–1.4 observed previously for ferromagnets and helimagnets. Unlike spiral and/or Bloch walls with a finite transition width, typical for Dy, Tb, and Ho with planar anisotropy, a soliton wall with a sudden phase shift between neighboring domains may dominate in Tm due to its Ising-like character. The observations imply the existence of a universality class of hysteresis scaling that depends on the type of magnetic anisotropy. - Highlights: ► We observe magnetic hysteresis scaling in thulium with a power-law exponent of 1.13. ► Irreversibility of soliton walls dominates owing to the strong uniaxial anisotropy. ► The exponent is lower than those for Bloch walls and spiral walls. ► The results imply the existence of a universality class that depends on the wall type.
Fractional Nottale's Scale Relativity and emergence of complexified gravity
Energy Technology Data Exchange (ETDEWEB)
EL-Nabulsi, Ahmad Rami [Department of Nuclear and Energy Engineering, Cheju National University, Ara-dong 1, Jeju 690-756 (Korea, Republic of)], E-mail: nabulsiahmadrami@yahoo.fr
2009-12-15
Fractional calculus of variations has recently gained significance in studying weak dissipative and nonconservative dynamical systems ranging from classical mechanics to quantum field theories. In this paper, fractional Nottale's Scale Relativity (NSR) for an arbitrary fractal dimension is introduced within the framework of fractional action-like variational approach recently introduced by the author. The formalism is based on fractional differential operators that generalize the differential operators of conventional NSR but that reduces to the standard formalism in the integer limit. Our main aim is to build the fractional setting for the NSR dynamical equations. Many interesting consequences arise, in particular the emergence of complexified gravity and complex time.
Feldt, Ronald; Lindley, Kyla; Louison, Rebecca; Roe, Allison; Timm, Megan; Utinkova, Nikola
2015-01-01
The Emotional Regulation Related to Testing Scale (ERT Scale) assesses strategies students use to regulate emotion related to academic testing. It has four dimensions: Cognitive Appraising Processes (CAP), Emotion-Focusing Processes (EFP), Task-Focusing Processes (TFP), and Regaining Task-Focusing Processes (RTFP). The study examined the factor…
A Bayesian method for construction of Markov models to describe dynamics on various time-scales.
Rains, Emily K; Andersen, Hans C
2010-10-14
The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
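The counting core behind any such Markov model estimate, together with the CMMM diagonal check, can be sketched as follows. This is a plain count-based maximum-likelihood estimator for a fixed partition, not the paper's full Bayesian posterior over (P, T); names and the toy trajectory are illustrative:

```python
import numpy as np

# Hedged sketch: estimate a mesostate transition matrix T from a discretized
# trajectory at mesoscopic lag tau, then apply the "consistent mesoscopic
# Markov model" (CMMM) criterion: every diagonal element of T must be >= 0.5.

def transition_matrix(traj, n_states, tau=1):
    """Row-normalized transition counts between states at lag tau."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-tau], traj[tau:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def is_cmmm(T):
    """True if the model is a consistent mesoscopic Markov model."""
    return bool(np.all(np.diag(T) >= 0.5))

# toy two-state trajectory (e.g. folded/unfolded labels from short simulations)
traj = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
T = transition_matrix(traj, n_states=2, tau=1)
```

Increasing τ generally pushes the diagonal elements upward, which is one way to reach a CMMM for a given partition.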
A New Feature Extraction Method Based on EEMD and Multi-Scale Fuzzy Entropy for Motor Bearing
Directory of Open Access Journals (Sweden)
Huimin Zhao
2016-12-01
Full Text Available Feature extraction is one of the most important, pivotal, and difficult problems in mechanical fault diagnosis, which directly relates to the accuracy of fault diagnosis and the reliability of early fault prediction. Therefore, a new fault feature extraction method, called the EDOMFE method based on integrating ensemble empirical mode decomposition (EEMD, mode selection, and multi-scale fuzzy entropy is proposed to accurately diagnose fault in this paper. The EEMD method is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs with a different physical significance. The correlation coefficient analysis method is used to calculate and determine three improved IMFs, which are close to the original signal. The multi-scale fuzzy entropy with the ability of effective distinguishing the complexity of different signals is used to calculate the entropy values of the selected three IMFs in order to form a feature vector with the complexity measure, which is regarded as the inputs of the support vector machine (SVM model for training and constructing a SVM classifier (EOMSMFD based on EDOMFE and SVM for fulfilling fault pattern recognition. Finally, the effectiveness of the proposed method is validated by real bearing vibration signals of the motor with different loads and fault severities. The experiment results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal and that the proposed EOMSMFD method can accurately diagnose the fault types and fault severities for the inner race fault, the outer race fault, and rolling element fault of the motor bearing. Therefore, the proposed method provides a new fault diagnosis technology for rotating machinery.
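One ingredient of the EDOMFE pipeline above, multi-scale fuzzy entropy, can be sketched directly. The parameter names (m, r, n) and defaults below are common fuzzy-entropy conventions, not values taken from the paper, and the EEMD decomposition and SVM stages are omitted:

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy with exponential membership exp(-d^n / r)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def phi(dim):
        templ = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        templ -= templ.mean(axis=1, keepdims=True)   # remove local baselines
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / tol)                # fuzzy similarity degree
        np.fill_diagonal(sim, 0.0)                   # exclude self-matches
        return sim.sum() / (len(templ) * (len(templ) - 1))
    return np.log(phi(m)) - np.log(phi(m + 1))

def multiscale_fuzzy_entropy(x, scales=(1, 2, 3)):
    x = np.asarray(x, dtype=float)
    return [fuzzy_entropy(coarse_grain(x, s)) for s in scales]

t = np.arange(300)
mfe_sine = multiscale_fuzzy_entropy(np.sin(0.2 * t))                  # regular signal
mfe_noise = multiscale_fuzzy_entropy(
    np.random.default_rng(0).standard_normal(300))                    # irregular signal
```

A regular signal yields lower entropy values than an irregular one, which is exactly the complexity-distinguishing property the abstract relies on.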
DEFF Research Database (Denmark)
Huber, D.; Bedding, T.R.; Stello, D.
2011-01-01
We have analyzed solar-like oscillations in ~1700 stars observed by the Kepler Mission, spanning from the main sequence to the red clump. Using evolutionary models, we test asteroseismic scaling relations for the frequency of maximum power (νmax), the large frequency separation (Δν), and oscillation amplitudes. We show that the difference of the Δν-νmax relation for unevolved and evolved stars can be explained by different distributions in effective temperature and stellar mass, in agreement with what is expected from scaling relations. For oscillation amplitudes, we show that neither (L/M)^s scaling nor the revised scaling relation by Kjeldsen & Bedding is accurate for red-giant stars, and demonstrate that a revised scaling relation with a separate luminosity-mass dependence can be used to calculate amplitudes from the main sequence to red giants to a precision of ~25%. The residuals show...
A method for developing a large-scale sediment yield index for European river basins
Energy Technology Data Exchange (ETDEWEB)
Delmas, Magalie; Cerdan, Olivier; Garcin, Manuel [BRGM ARN/ESL, Orleans (France); Mouchel, Jean-Marie [UMR Sisyphe, Univ. P and M Curie, Paris (France)
2009-12-15
Background, aim, and scope: Sediment fluxes within continental areas play a major role in biogeochemical cycles and are often the cause of soil surface degradation as well as water and ecosystem pollution. In a situation where a high proportion of the land surface is experiencing significant global land use and climate changes, it appears important to establish sediment budgets considering the major processes forcing sediment redistribution within drainage areas. In this context, the aim of this study is to test a methodology to estimate a sediment yield index at a large spatial resolution for European river basins. Data and methods: Four indicators representing processes respectively considered as sources (mass movement and hillslope erosion), sinks (deposits), and transfers of sediments (drainage density) are defined using distributed data. Using these indicators we propose a basic conceptual approach to test the possibility of explaining sediment yield observed at the outlet of 29 selected European river basins. We propose an index which adds the two sources and transfers, and subsequently subtracts the sink term. This index is then compared to observed sediment yield data. Results: With this approach, variability between river basins is observed and the evolution of each indicator analyzed. A linear regression shows a correlation coefficient of 0.83 linking observed specific sediment yield (SSY) with the SSY index. Discussion: To improve this approach at this large river basin scale, basin classification is further refined using the relation between the observed SSY and the index obtained from the four indicators. It allows a refinement of the results. Conclusions: This study presents a conceptual approach offering the advantages of using spatially distributed data combined with major sediment redistribution processes to estimate the sediment yield observed at the outlet of river basins. Recommendations and perspectives: Inclusion of better information on
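The additive index described above (sources plus transfers minus sinks, compared against observed yields) can be sketched directly; the basin indicator values below are toy numbers, not the study's data for the 29 European basins:

```python
import numpy as np

# Hedged sketch of the conceptual SSY index: two source indicators
# (mass movement, hillslope erosion) plus a transfer indicator (drainage
# density), minus a sink indicator (deposits).  Units and weights are
# illustrative assumptions.
def ssy_index(mass_movement, hillslope_erosion, drainage_density, deposits):
    return mass_movement + hillslope_erosion + drainage_density - deposits

indicators = np.array([
    # mass_mvt, erosion, drainage, deposits  (toy basins, arbitrary units)
    [0.2, 1.0, 0.5, 0.3],
    [0.1, 0.4, 0.3, 0.2],
    [0.6, 1.5, 0.8, 0.5],
])
index = ssy_index(*indicators.T)
observed_ssy = np.array([1.5, 0.6, 2.5])        # toy observed sediment yields
r = np.corrcoef(index, observed_ssy)[0, 1]      # study reports r = 0.83 on 29 basins
```

The linear-regression comparison against observed specific sediment yield is the step that lets the basin classification be refined.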
Meder, M; Farin, E
2009-11-01
Health valuations are one way of measuring patient preferences with respect to the results of their treatment. The study examines three different methods of health valuations--willingness to pay (WTP), visual analogue scale (VAS), and a rating question for evaluating the subjective significance. The goal is to test the understandability and acceptance of these methods for implementation in questionnaires. In various rehabilitation centres, a total of six focus groups were conducted with 5-9 patients each with a mean age of 57.1 years. The illnesses considered were chronic-ischaemic heart disease, chronic back pain, and breast cancer. Patients filled out a questionnaire that was then discussed in the group. In addition to the quantitative evaluation of the data in the questionnaire, a qualitative analysis of the contents of the group discussion protocols was made. We have results from a total of 42 patients. 14.6% of the patients had "great difficulties" understanding the WTP or rated it as "completely incomprehensible"; this value was 7.3% for VAS and 0% for the rating scale. With respect to acceptance, 31.0% of the patients indicated that they were "not really" or "not at all" willing to answer such a WTP question in a questionnaire; this was 6.6% for the VAS, and again 0% for the rating scale. The qualitative analysis provided an indication as to why some patients view the WTP question in particular in a negative light. Many difficulties in understanding it were related to the formulation of the question and the structure of the questionnaire. However, the patients' statements also made it apparent that the hypothetical nature of the WTP questionnaire was not always recognised. The most frequent reason for the lack of acceptance of the WTP was the patients' fear of negative financial consequences of their responses. With respect to understandability and acceptance, VAS questions appear to be better suited for reflecting patient preferences than WTP questions. The
Some applications of the moving finite element method to fluid flow and related problems
International Nuclear Information System (INIS)
Berry, R.A.; Williamson, R.L.
1983-01-01
The Moving Finite Element (MFE) method is applied to one-dimensional, nonlinear wave-type partial differential equations which are characteristic of fluid dynamics and related flow phenomena. These equation systems tend to be difficult to solve because their transient solutions exhibit a spatial stiffness property, i.e., they represent physical phenomena of widely disparate length scales which must be resolved simultaneously. With the MFE method the node points automatically move (in theory) to optimal locations, giving a much better approximation than can be obtained with fixed-mesh methods (with a reasonable number of nodes) and with significantly reduced artificial viscosity or diffusion content. Three applications are considered. In order of increasing complexity they are: (1) a thermal quench problem, (2) an underwater explosion problem, and (3) a gas dynamics shock tube problem. The results are briefly shown
A challenge in ecological studies is defining scales of observation that correspond to relevant ecological scales for organisms or processes. Image segmentation has been proposed as an alternative to pixel-based methods for scaling remotely-sensed data into ecologically-meaningful units. However, to...
The scale-dependent market trend: Empirical evidences using the lagged DFA method
Li, Daye; Kou, Zhun; Sun, Qiankun
2015-09-01
In this paper we make an empirical research and test the efficiency of 44 important market indexes in multiple scales. A modified method based on the lagged detrended fluctuation analysis is utilized to maximize the information of long-term correlations from the non-zero lags and keep the margin of errors small when measuring the local Hurst exponent. Our empirical result illustrates that a common pattern can be found in the majority of the measured market indexes which tend to be persistent (with the local Hurst exponent > 0.5) in the small time scale, whereas it displays significant anti-persistent characteristics in large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with the economic cycles, it can be concluded that the economic cycles can cause anti-persistence in the large time scale but there are also other factors at work. The empirical result supports the view that financial markets are multi-fractal and it indicates that deviations from efficiency and the type of model to describe the trend of market price are dependent on the forecasting horizon.
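For reference, the standard (unlagged) detrended fluctuation analysis underlying the measurement can be sketched as below. This is plain first-order DFA; the paper's lagged variant, which exploits non-zero lags to keep the error margin small for the local Hurst exponent, is not reproduced here:

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64)):
    """Estimate the Hurst exponent of a series x by order-1 DFA."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)                    # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # F(s) ~ s^H, so H is the slope in log-log coordinates
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(42)
h_white = dfa_hurst(rng.standard_normal(4096))   # white noise: H near 0.5
```

Values of H above 0.5 indicate the persistent behavior the abstract reports at small time scales; values below 0.5 indicate the anti-persistence seen at large scales.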
Jongschaap, Raymond E. E.; Booij, Remmie
2004-09-01
Chlorophyll contents in vegetation depend on soil nitrogen availability and on crop nitrogen uptake, which are important management factors in arable farming. Crop nitrogen uptake is important, as nitrogen is needed for chlorophyll formation, which is important for photosynthesis, i.e. the conversion of absorbed radiance into plant biomass. The objective of this study was to estimate leaf and canopy nitrogen contents by near and remote sensing observations and to link observations at leaf, plant and canopy level. A theoretical base is presented for scaling-up leaf optical properties to whole plants and crops, by linking different optical recording techniques at leaf, plant and canopy levels through the integration of vertical nitrogen distribution. Field data come from potato experiments in The Netherlands in 1997 and 1998, comprising two potato varieties: Eersteling and Bintje, receiving similar nitrogen treatments (0, 100, 200 and 300 kg N ha -1) in varying application schemes to create differences in canopy nitrogen status during the growing season. Ten standard destructive field samplings were performed to follow leaf area index and crop dry weight evolution. Samples were analysed for inorganic nitrogen and total nitrogen contents. At sampling dates, spectral measurements were taken both at leaf level and at canopy level. At leaf level, an exponential relation between SPAD-502 readings and leaf organic nitrogen contents with a high correlation factor of 0.91 was found. At canopy level, an exponential relation between canopy organic nitrogen contents and red edge position ( λrep, nm) derived from reflectance measurements was found with a good correlation of 0.82. Spectral measurements (SPAD-502) at leaf level of a few square mm were related to canopy reflectance measurements (CropScan™) of approximately 0.44 m 2. Statistical regression techniques were used to optimise theoretical vertical nitrogen profiles that allowed scaling-up leaf chlorophyll measurements
Scaling relations for a beam-deflecting TM110 mode in an asymmetric cavity
International Nuclear Information System (INIS)
Takeda, H.
1989-01-01
A deflecting mode in an rf cavity caused by the aperture of the coupling hole from a waveguide is studied. If the coupling hole is of finite size, the rf modes in the cavity can be distorted. We consider the distorted mode as a sum of the accelerating mode and the deflecting mode. The finite-size coupling hole can be considered as radiating dipole sources in a closed cavity. Following the prescription given by H. Bethe, the relative strength of the deflecting TM110 mode to the accelerating TM010 mode is calculated by decomposing the dipole source field into cavity eigenmodes. Scaling relations are obtained as a function of the coupling hole radius. 2 refs., 6 figs
Quantum cosmological relational model of shape and scale in 1D
International Nuclear Information System (INIS)
Anderson, Edward
2011-01-01
Relational particle models are useful toy models for quantum cosmology and the problem of time in quantum general relativity. This paper shows how to extend existing work on concrete examples of relational particle models in 1D to include a notion of scale. This is useful as regards forming a tight analogy with quantum cosmology and the emergent semiclassical time and hidden time approaches to the problem of time. This paper shows furthermore that the correspondence between relational particle models and classical and quantum cosmology can be strengthened using judicious choices of the mechanical potential. This gives relational particle mechanics models with analogues of spatial curvature, cosmological constant, dust and radiation terms. A number of these models are then tractable at the quantum level. These models can be used to study important issues (1) in canonical quantum gravity: the problem of time, the semiclassical approach to it and timeless approaches to it (such as the naive Schroedinger interpretation and records theory) and (2) in quantum cosmology, such as in the investigation of uniform states, robustness and the qualitative understanding of the origin of structure formation.
International Nuclear Information System (INIS)
Madriz Aguilar, Jose Edgar; Bellini, Mauricio
2009-01-01
Considering a five-dimensional (5D) Riemannian spacetime with a particular stationary Ricci-flat metric, we obtain in the framework of the induced matter theory an effective 4D static and spherically symmetric metric which gives us ordinary gravitational solutions on small (planetary and astrophysical) scales, but repulsive (antigravitational) forces on very large (cosmological) scales with ω=-1. Our approach is a unified way to describe dark energy, dark matter and ordinary matter. We illustrate the theory with two examples, the solar system and the great attractor. From the geometrical point of view, these results follow from the assumption that there exists a confining force that makes it possible for test particles to move on a given 4D hypersurface.
Madriz Aguilar, José Edgar; Bellini, Mauricio
2009-08-01
Considering a five-dimensional (5D) Riemannian spacetime with a particular stationary Ricci-flat metric, we obtain in the framework of the induced matter theory an effective 4D static and spherically symmetric metric which gives us ordinary gravitational solutions on small (planetary and astrophysical) scales, but repulsive (antigravitational) forces on very large (cosmological) scales with ω=-1. Our approach is a unified way to describe dark energy, dark matter and ordinary matter. We illustrate the theory with two examples, the solar system and the great attractor. From the geometrical point of view, these results follow from the assumption that there exists a confining force that makes it possible for test particles to move on a given 4D hypersurface.
Energy Technology Data Exchange (ETDEWEB)
Madriz Aguilar, Jose Edgar [Instituto de Fisica de la Universidad de Guanajuato, C.P. 37150, Leon Guanajuato (Mexico); Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata, Funes 3350, C.P. 7600, Mar del Plata (Argentina)], E-mail: madriz@mdp.edu.ar; Bellini, Mauricio [Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata, Funes 3350, C.P. 7600, Mar del Plata (Argentina); Consejo Nacional de Investigaciones Cientificas y Tecnicas (CONICET) (Argentina)], E-mail: mbellini@mdp.edu.ar
2009-08-31
Considering a five-dimensional (5D) Riemannian spacetime with a particular stationary Ricci-flat metric, we obtain, in the framework of the induced matter theory, an effective 4D static and spherically symmetric metric which gives ordinary gravitational solutions on small (planetary and astrophysical) scales, but repulsive (antigravitational) forces on very large (cosmological) scales with ω = -1. Our approach describes dark energy, dark matter and ordinary matter in a unified manner. We illustrate the theory with two examples, the solar system and the Great Attractor. From the geometrical point of view, these results follow from the assumption that there exists a confining force that makes it possible for test particles to move on a given 4D hypersurface.
Compositions comprising enhanced graphene oxide structures and related methods
Kumar, Priyank Vijaya; Bardhan, Neelkanth M.; Belcher, Angela; Grossman, Jeffrey
2016-12-27
Embodiments described herein generally relate to compositions comprising a graphene oxide species. In some embodiments, the compositions advantageously have relatively high oxygen content, even after annealing.
International Nuclear Information System (INIS)
Rorai, Alberto; Hennawi, Joseph F.; White, Martin
2013-01-01
only 20 close quasar pair spectra can pinpoint the Jeans scale to ≅ 5% precision, independent of the amplitude T₀ and slope γ of the temperature-density relation of the IGM, T = T₀(ρ/ρ̄)^(γ−1). This exquisite sensitivity arises because even long-wavelength one-dimensional Fourier modes ∼10 Mpc, i.e., two orders of magnitude larger than the Jeans scale, are nevertheless dominated by projected small-scale three-dimensional (3D) power. Hence phase angle differences between all modes of quasar pair spectra actually probe the shape of the 3D power spectrum on scales comparable to the pair separation. We show that this new method for measuring the Jeans scale is unbiased and is insensitive to a battery of systematics that typically plague Lyα forest measurements, such as continuum fitting errors, imprecise knowledge of the noise level and/or spectral resolution, and metal-line absorption.
Energy Technology Data Exchange (ETDEWEB)
Rorai, Alberto; Hennawi, Joseph F. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); White, Martin [Department of Astronomy, University of California at Berkeley, 601 Campbell Hall, Berkeley, CA 94720-3411 (United States)
2013-10-01
only 20 close quasar pair spectra can pinpoint the Jeans scale to ≅ 5% precision, independent of the amplitude T₀ and slope γ of the temperature-density relation of the IGM, T = T₀(ρ/ρ̄)^(γ−1). This exquisite sensitivity arises because even long-wavelength one-dimensional Fourier modes ∼10 Mpc, i.e., two orders of magnitude larger than the Jeans scale, are nevertheless dominated by projected small-scale three-dimensional (3D) power. Hence phase angle differences between all modes of quasar pair spectra actually probe the shape of the 3D power spectrum on scales comparable to the pair separation. We show that this new method for measuring the Jeans scale is unbiased and is insensitive to a battery of systematics that typically plague Lyα forest measurements, such as continuum fitting errors, imprecise knowledge of the noise level and/or spectral resolution, and metal-line absorption.
Schnettler, Berta; Miranda, Horacio; Miranda-Zapata, Edgardo; Salinas-Oñate, Natalia; Grunert, Klaus G; Lobos, Germán; Sepúlveda, José; Orellana, Ligia; Hueche, Clementina; Bonilla, Héctor
2017-06-01
This study examined longitudinal measurement invariance in the Satisfaction with Food-related Life (SWFL) scale using follow-up data from university students. We examined this measure of the SWFL in different groups of students, separated by various characteristics. Through non-probabilistic longitudinal sampling, 114 university students (65.8% female, mean age: 22.5) completed the SWFL questionnaire three times, over intervals of approximately one year. Confirmatory factor analysis was used to examine longitudinal measurement invariance. Two types of analysis were conducted: first, a longitudinal invariance by time, and second, a multigroup longitudinal invariance by sex, age, socio-economic status and place of residence during the study period. Results showed that the 3-item version of the SWFL exhibited strong longitudinal invariance (equal factor loadings and equal indicator intercepts). Longitudinal multigroup invariance analysis also showed that the 3-item version of the SWFL displays strong invariance by socio-economic status and place of residence during the study period over time. Nevertheless, it was only possible to demonstrate equivalence of the longitudinal factor structure among students of both sexes, and among those older and younger than 22 years. Generally, these findings suggest that the SWFL scale has satisfactory psychometric properties for longitudinal measurement invariance in university students with similar characteristics as the students that participated in this research. It is also possible to suggest that satisfaction with food-related life is associated with sex and age. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Dourado, Eneida R.G.; Assis, Juliana T. de; Lage, Ricardo F.; Lopes, Karina B.
2013-01-01
This paper aims to classify an event in which organic solvent rich in uranium overflowed from a decanter of an ore beneficiation plant, caused by a loss of electricity supply, according to the criteria established by the International Nuclear and Radiological Event Scale (INES), facilitating the understanding of the occurrence and communication with the public regarding the radiation safety aspects involved. When the electricity supply failed, the routine shutdown procedures for the installation were performed; however, due to an operational failure, the valve on the liquor transfer line was not closed. The mixer therefore continued to be fed with liquor, leading to leakage of uranium-loaded solvent, which reached the drainage system and the plant's rainwater collection box. Immediately after detection of the event, corrective actions were initiated and the overflow was contained. Regulatory agencies followed the removal of the solvent and, based on the results of environmental monitoring, found that the event caused no exposure to workers or any other impact. Therefore, comparing the characteristics of the event with the guidelines of the INES scale, it is concluded that the event is classified below scale (level 0), confirming the absence of risk to the local population, workers and the environment
A Hamiltonian-based derivation of Scaled Boundary Finite Element Method for elasticity problems
International Nuclear Information System (INIS)
Hu Zhiqiang; Lin Gao; Wang Yi; Liu Jun
2010-01-01
The Scaled Boundary Finite Element Method (SBFEM) is a semi-analytical approach for solving partial differential equations. For problems in elasticity, the governing equations can be obtained by a mechanically based formulation, a scaled-boundary-transformation-based formulation, or the principle of virtual work. The governing equations are described in the Lagrangian frame and the unknowns are displacements, but in the solution procedure auxiliary variables are introduced and the equations are solved in the state space. Based on the observation that the duality system proposed by W.X. Zhong for solving elastic problems is similar to this solution approach, the discretization of the SBFEM and the duality system are combined in this paper to derive the governing equations in the Hamiltonian system by introducing dual variables. The Precise Integration Method (PIM) used in the duality system is also an efficient method for solving the governing equations of the SBFEM for the displacements and the boundary stiffness matrix, especially for cases in which the usual eigenvalue method encounters numerical difficulties. Numerical examples demonstrate the validity and effectiveness of the PIM for the solution of the boundary static stiffness.
Iteratively-coupled propagating exterior complex scaling method for electron-hydrogen collisions
International Nuclear Information System (INIS)
Bartlett, Philip L; Stelbovics, Andris T; Bray, Igor
2004-01-01
A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schroedinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources. (letter to the editor)
A Real-Time Analysis Method for Pulse Rate Variability Based on Improved Basic Scale Entropy
Directory of Open Access Journals (Sweden)
Yongxin Chou
2017-01-01
Full Text Available Base scale entropy analysis (BSEA) is a nonlinear method to analyze the heart rate variability (HRV) signal. However, the time consumption of BSEA is too long, and it is unknown whether BSEA is suitable for analyzing the pulse rate variability (PRV) signal. Therefore, we proposed a method named sliding window iterative base scale entropy analysis (SWIBSEA) by combining BSEA with sliding window iterative theory. The blood pressure signals of healthy young and old subjects were chosen from the authoritative international database MIT/PhysioNet/Fantasia to generate PRV signals as the experimental data. Then, BSEA and SWIBSEA were used to analyze the experimental data; the results show that SWIBSEA reduces the time consumption and the buffer cache space while obtaining the same entropy as BSEA. Meanwhile, the changes of base scale entropy (BSE) for healthy young and old subjects are the same as those of the HRV signal. Therefore, SWIBSEA can be used for deriving information from long-term and short-term PRV signals in real time, which has potential for dynamic PRV signal analysis in some portable and wearable medical devices.
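As a rough illustration of the base-scale idea, the sketch below symbolizes each m-length window of a series relative to its own "base scale" (a root-mean-square of successive differences) and takes the Shannon entropy of the resulting symbol words. The parameter names and the exact symbolization rule are assumptions for illustration, not the authors' implementation, and the sliding-window iterative speedup of SWIBSEA is not reproduced here.

```python
import math

def base_scale_entropy(series, m=4, a=0.2):
    """Simplified base-scale entropy of a 1-D sequence (illustrative only)."""
    n = len(series) - m + 1
    words = []
    for i in range(n):
        vec = series[i:i + m]
        # base scale: RMS of successive differences within the window
        bs = math.sqrt(sum((vec[j + 1] - vec[j]) ** 2 for j in range(m - 1)) / (m - 1))
        mu = sum(vec) / m
        # map each point to one of 4 symbols relative to mean +/- a*base-scale
        word = tuple(
            0 if x > mu + a * bs else
            1 if mu < x <= mu + a * bs else
            2 if mu - a * bs < x <= mu else 3
            for x in vec
        )
        words.append(word)
    # Shannon entropy of the distribution of symbol words
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A constant signal produces a single repeated word and hence zero entropy, while more irregular signals spread probability over many words and score higher.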
Atomistic simulations of materials: Methods for accurate potentials and realistic time scales
Tiwary, Pratyush
This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousands of atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well
Relative Contributions of Three Descriptive Methods: Implications for Behavioral Assessment
Pence, Sacha T.; Roscoe, Eileen M.; Bourret, Jason C.; Ahearn, William H.
2009-01-01
This study compared the outcomes of three descriptive analysis methods--the ABC method, the conditional probability method, and the conditional and background probability method--to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior…
International Nuclear Information System (INIS)
Swiderska-Kowalczyk, M.; Gomez, F.J.; Martin, M.
1997-01-01
In aerosol research, aerosols of known size, shape, and density are highly desirable because most aerosol properties depend strongly on particle size. However, constant and reproducible generation of aerosol particles whose size and concentration can be easily controlled can be achieved only in laboratory-scale tests. In large-scale experiments, different generation methods for various elements and compounds have been applied. This work presents, in brief form, a review of applications of these methods used in large-scale experiments on aerosol behaviour and source term. A description of the generation method and the generated aerosol transport conditions is followed by the properties of the obtained aerosol, the aerosol instrumentation used, and the scheme of the aerosol generation system, wherever available. Information concerning the particular purposes of the aerosol generation and reference number(s) is given at the end of each case. The methods reviewed are: evaporation-condensation, using furnace heating or a plasma torch; atomization of liquid, using compressed air nebulizers, ultrasonic nebulizers and atomization of liquid suspension; and dispersion of powders. Among the projects included in this work are: ACE, LACE, GE Experiments, EPRI Experiments, LACE-Spain, UKAEA Experiments, BNWL Experiments, ORNL Experiments, MARVIKEN, SPARTA and DEMONA. The main chemical compounds studied are: Ba, Cs, CsOH, CsI, Ni, Cr, NaI, TeO₂, UO₂, Al₂O₃, Al₂SiO₅, B₂O₃, Cd, CdO, Fe₂O₃, MnO, SiO₂, AgO, SnO₂, Te, U₃O₈, BaO, CsCl, CsNO₃, urania, RuO₂, TiO₂, Al(OH)₃, BaSO₄, Eu₂O₃ and Sn. (Author)
New Conjugacy Conditions and Related Nonlinear Conjugate Gradient Methods
International Nuclear Information System (INIS)
Dai, Y.-H.; Liao, L.-Z.
2001-01-01
Conjugate gradient methods are a class of important methods for unconstrained optimization, especially when the dimension is large. This paper proposes a new conjugacy condition, which accounts for an inexact line search scheme but reduces to the old condition when the line search is exact. Based on the new conjugacy condition, two nonlinear conjugate gradient methods are constructed, and convergence analysis for both methods is provided. Our numerical results show that one of the methods is very efficient for the given test problems.
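A minimal sketch of a nonlinear conjugate gradient iteration built on the Dai–Liao conjugacy condition d_{k+1}ᵀy_k = −t g_{k+1}ᵀs_k, which yields β_k = g_{k+1}ᵀ(y_k − t s_k)/(d_kᵀy_k). The backtracking Armijo line search and the restart safeguards are assumptions added to keep the example self-contained, not details taken from the paper.

```python
import numpy as np

def dai_liao_cg(f, grad, x0, t=0.1, tol=1e-8, max_iter=500):
    """Nonlinear CG with a Dai-Liao-type beta from the inexact-line-search
    conjugacy condition d^T y = -t g^T s (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # backtracking Armijo line search (an assumption, not the paper's setup)
        alpha, c = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= 0.5
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        denom = d.dot(y)
        # Dai-Liao beta; fall back to steepest descent if denominator degenerates
        beta = g_new.dot(y - t * s) / denom if abs(denom) > 1e-12 else 0.0
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:  # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

With t = 0 the formula reduces to the Hestenes–Stiefel beta, which corresponds to the classical (exact line search) conjugacy condition.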
Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya
2013-01-01
Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
Dealing with missing data in a multi-question depression scale: a comparison of imputation methods
Directory of Open Access Journals (Sweden)
Stuart Heather
2006-12-01
Full Text Available Abstract Background Missing data present a challenge to many research projects. The problem is often pronounced in studies utilizing self-report scales, and literature addressing different strategies for dealing with missing data in such circumstances is scarce. The objective of this study was to compare six different imputation techniques for dealing with missing data in the Zung Self-reported Depression Scale (SDS). Methods 1580 participants from a surgical outcomes study completed the SDS. The SDS is a 20-question scale that respondents complete by circling a value of 1 to 4 for each question. The sum of the responses is calculated and respondents are classified as exhibiting depressive symptoms when their total score is over 40. Missing values were simulated by randomly selecting questions whose values were then deleted (a missing completely at random simulation). Additionally, missing at random and missing not at random simulations were completed. Six imputation methods were then considered: (1) multiple imputation, (2) single regression, (3) individual mean, (4) overall mean, (5) participant's preceding response, and (6) random selection of a value from 1 to 4. For each method, the imputed mean SDS score and standard deviation were compared to the population statistics. The Spearman correlation coefficient, percent misclassified and the Kappa statistic were also calculated. Results When 10% of values are missing, all the imputation methods except random selection produce Kappa statistics greater than 0.80, indicating 'near perfect' agreement. MI produces the most valid imputed values, with a high Kappa statistic (0.89), although both single regression and individual mean imputation also produced favorable results. As the percent of missing information increased to 30%, or when unbalanced missing data were introduced, MI maintained a high Kappa statistic. The individual mean and single regression method produced Kappas in the 'substantial agreement' range
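A toy version of the simulation design, using hypothetical data: mask answers completely at random on 20-item, 1-4 scored questionnaires, impute with two of the simpler strategies from the study (individual mean and overall item mean), and count misclassifications at the cutoff of 40. Multiple imputation and the regression method are beyond this sketch.

```python
import random

def impute_and_classify(responses, missing_rate=0.10, cutoff=40, seed=1):
    """Compare misclassification rates of two simple imputation strategies
    under a missing-completely-at-random (MCAR) simulation."""
    n_items = len(responses[0])
    # per-item means computed from the complete data (overall mean strategy)
    item_mean = [sum(r[j] for r in responses) / len(responses) for j in range(n_items)]
    true_cls = [sum(r) > cutoff for r in responses]
    results = {}
    for method in ("individual_mean", "item_mean"):
        random.seed(seed)  # identical missingness pattern for both methods
        misclassified = 0
        for r, truth in zip(responses, true_cls):
            masked = [v if random.random() > missing_rate else None for v in r]
            observed = [v for v in masked if v is not None]
            person_mean = sum(observed) / len(observed) if observed else 2.5
            filled = [v if v is not None
                      else (person_mean if method == "individual_mean" else item_mean[j])
                      for j, v in enumerate(masked)]
            misclassified += (sum(filled) > cutoff) != truth
        results[method] = misclassified / len(responses)
    return results
```

Because both strategies see the same simulated missingness, any difference in the returned rates reflects the imputation rule, not the random mask.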
Processor farming method for multi-scale analysis of masonry structures
Krejčí, Tomáš; Koudelka, Tomáš
2017-07-01
This paper describes a processor farming method for coupled heat and moisture transport in masonry using a two-level approach. The motivation for the two-level description comes from difficulties connected with masonry structures, where the size of the stone blocks is much larger than the size of the mortar layers and a very fine finite element mesh has to be used. The two-level approach is suitable for parallel computing because nearly all computations can be performed independently with little synchronization. This approach is called processor farming. The master processor deals with the macro-scale level (the structure), and the slave processors deal with a homogenization procedure on the meso-scale level, which is represented by an appropriate representative volume element.
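The farming pattern itself is straightforward to sketch: a master process maps independent meso-scale tasks onto slave processes and collects the effective properties. The toy "homogenization" below is just a volume-fraction average with made-up conductivities, standing in for the real RVE solve.

```python
from concurrent.futures import ProcessPoolExecutor

def homogenize_rve(args):
    """Stand-in for a meso-scale homogenization on one representative volume
    element (RVE): a plain volume-fraction average of two hypothetical
    conductivities, not a real masonry RVE solve."""
    k_stone, k_mortar, frac_stone = args
    return frac_stone * k_stone + (1.0 - frac_stone) * k_mortar

def master_solve(integration_points):
    """Macro-scale 'master': farms one independent homogenization task per
    integration point out to slave processes, then returns the effective
    properties from which the macro problem would be assembled."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(homogenize_rve, integration_points))
```

Because each RVE task depends only on its own inputs, the only synchronization point is the final gather, which is what makes the farming approach scale well.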
Scale Space Methods for Analysis of Type 2 Diabetes Patients' Blood Glucose Values
Directory of Open Access Journals (Sweden)
Stein Olav Skrøvseth
2011-01-01
Full Text Available We describe how scale space methods can be used for quantitative analysis of blood glucose concentrations from type 2 diabetes patients. Blood glucose values were recorded voluntarily by the patients over one full year as part of a self-management process, where the time and frequency of the recordings are decided by the patients. This makes a unique dataset in its extent, though with a large variation in reliability of the recordings. Scale space and frequency space techniques are suited to reveal important features of unevenly sampled data, and useful for identifying medically relevant features for use both by patients as part of their self-management process, and provide useful information for physicians.
Second-order two-scale method for bending behaviors of composite plate with periodic configuration
International Nuclear Information System (INIS)
Zhu Guoqing; Cui Junzhi
2010-01-01
In this paper, the second-order two-scale analysis method for the bending behavior of a plate made from composites with 3-D periodic configuration is presented by means of a construction approach. It can capture the microscopic 3-D mechanical behavior arising from the 3-D micro-structures. First, starting directly from the 3-D elastic plate model of composite materials with 3-D periodic configuration, three cell models are defined and, correspondingly, three classes of cell functions, defined only on the three normalized cells, are constructed. Then, the effective homogenization parameters of the composites are calculated from those local functions, which leads to a 2-D homogenized laminar plate problem whose solution gives the homogenization solution. Finally, the second-order two-scale solution is constructed from the micro-cell functions and the homogenization solution.
Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods
Wang, Cheng
2018-05-17
Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithm to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combines population Hi-C data and single-cell Hi-C data without ad hoc parameters. We also designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by the different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform those in the literature.
Accessible methods for the dynamic time-scale decomposition of biochemical systems.
Surovtsova, Irina; Simus, Natalia; Lorenz, Thomas; König, Artjom; Sahle, Sven; Kummer, Ursula
2009-11-01
The growing complexity of biochemical models calls for means to rationally dissect the networks into meaningful and rather independent subnetworks. Such an approach should ensure an understanding of the system without any heuristics employed. Important for the success of such an approach are its accessibility and the clarity of the presentation of the results. In order to achieve this goal, we developed a method which is a modification of the classical approach of time-scale separation. This modified method, as well as the more classical approach, has been implemented for time-dependent application within the widely used software COPASI. The implementation includes different possibilities for the representation of the results, including 3D visualization. The methods are included in COPASI, which is free for academic use and available at www.copasi.org. irina.surovtsova@bioquant.uni-heidelberg.de. Supplementary data are available at Bioinformatics online.
A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark
Directory of Open Access Journals (Sweden)
Yong Wang
2016-02-01
Full Text Available Currently, with the rapid increase of data scales in network traffic classification, how to select traffic features efficiently is becoming a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, their execution time remained unsatisfactory because of the numerous iterative computations during processing. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed for candidate subsets. The optimal feature subset is then selected using the continuous iterations of the Spark computing framework. The implementation demonstrates that, while preserving classification accuracy, our method reduces the time cost of modeling and classification and significantly improves the execution efficiency of feature selection.
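A plain-Python sketch of the two stages described above: Fisher-score preselection followed by a greedy sequential forward search driven by a user-supplied subset evaluator. The preselection width and the evaluator are assumptions for illustration; the paper's implementation runs these iterations on Spark rather than serially.

```python
def fisher_score(feature, labels):
    """Fisher score of one feature column: between-class scatter of the
    class means over within-class scatter."""
    classes = set(labels)
    overall = sum(feature) / len(feature)
    between = within = 0.0
    for c in classes:
        vals = [x for x, y in zip(feature, labels) if y == c]
        mu = sum(vals) / len(vals)
        between += len(vals) * (mu - overall) ** 2
        within += sum((x - mu) ** 2 for x in vals)
    return between / within if within > 0 else float("inf")

def forward_select(X, labels, evaluate, k):
    """Greedy sequential forward search over Fisher-ranked feature columns.
    `evaluate(subset)` scores a candidate subset (e.g. cross-validated
    accuracy); a Spark version would score candidates in parallel."""
    ranked = sorted(range(len(X)), key=lambda j: fisher_score(X[j], labels),
                    reverse=True)
    chosen, best = [], float("-inf")
    for j in ranked[: 2 * k]:  # preselection keeps the search space small
        score = evaluate(chosen + [j])
        if score > best:
            chosen.append(j)
            best = score
        if len(chosen) == k:
            break
    return chosen
```

Here `X` is a list of feature columns; only features that actually improve the evaluator's score are kept, so the returned subset can be smaller than `k`.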
International Nuclear Information System (INIS)
Skerovic, V; Zarubica, V; Aleksic, M; Zekovic, L; Belca, I
2010-01-01
Realization of the scale of spectral responsivity of detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with detector spectral responsivity calibrations by means of a primary spectrophotometric system. Linearity testing and stray light analysis were performed to characterize the spectroradiometer. Measurement of the aperture diameter and calibration of the transimpedance amplifier were also part of the overall experiment. In this paper, the developed method is presented and the measurement results are shown with the associated measurement uncertainty budget.
Energy Technology Data Exchange (ETDEWEB)
Skerovic, V; Zarubica, V; Aleksic, M [Directorate of measures and precious metals, Optical radiation Metrology department, Mike Alasa 14, 11000 Belgrade (Serbia); Zekovic, L; Belca, I, E-mail: vladanskerovic@dmdm.r [Faculty of Physics, Department for Applied physics and metrology, Studentski trg 12-16, 11000 Belgrade (Serbia)
2010-10-15
Realization of the scale of spectral responsivity of detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with detector spectral responsivity calibrations by means of a primary spectrophotometric system. Linearity testing and stray light analysis were performed to characterize the spectroradiometer. Measurement of the aperture diameter and calibration of the transimpedance amplifier were also part of the overall experiment. In this paper, the developed method is presented and the measurement results are shown with the associated measurement uncertainty budget.
Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method
Directory of Open Access Journals (Sweden)
Qing-He Yao
2014-01-01
Full Text Available The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the connection between the flow field and the concentration field. The linear systems of the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the domain decomposition system. Numerical results are validated by comparison with experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and where oscillations appeared in past research reported by other researchers. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large-scale computation.
Debussche, A.; Dubois, T.; Temam, R.
1993-01-01
Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates of the time variations of the small eddies and the nonlinear interaction terms are derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible in comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently is proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm are derived, which makes it a completely self-adaptive procedure. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times are performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is carried out.
Stand-scale soil respiration estimates based on chamber methods in a Bornean tropical rainforest
Kume, T.; Katayama, A.; Komatsu, H.; Ohashi, M.; Nakagawa, M.; Yamashita, M.; Otsuki, K.; Suzuki, M.; Kumagai, T.
2009-12-01
This study was undertaken to estimate stand-scale soil respiration in an aseasonal tropical rainforest on Borneo Island. To this aim, we identified critical and practical factors explaining spatial variations in soil respiration, based on soil respiration measurements conducted at 25 points in a 40 × 40 m subplot of a 4 ha study plot over five years, in relation to soil, root, and forest structural factors. We found a significant positive correlation between soil respiration and forest structural parameters. The most important factor was the mean DBH within 6 m of the measurement points, which had a significant linear relationship with soil respiration. Using the derived linear regression and an inventory dataset, we estimated the 4 ha-scale soil respiration. The 4 ha-scale estimate (6.0 μmol m⁻² s⁻¹) was nearly identical to the subplot-scale measurements (5.7 μmol m⁻² s⁻¹), which were roughly comparable to the nocturnal CO₂ fluxes calculated using the eddy covariance technique. To confirm the spatial representativeness of the soil respiration estimates in the subplot, we performed variogram analysis. The semivariance of DBH(6) in the 4 ha plot showed that there was autocorrelation within a separation distance of about 20 m, and that the spatial dependence was unclear at separation distances greater than 20 m. This ascertained that the 40 × 40 m subplot could represent the whole forest structure in the 4 ha plot. In addition, we discuss characteristics of the stand-scale soil respiration at this site by comparing with those of other forests reported in previous literature in terms of the soil C balance. Soil respiration at our site was noticeably greater, relative to the incident litterfall amount, than soil respiration in other tropical and temperate forests, probably owing to the larger total belowground C allocation by emergent trees. Overall, this study suggests the arrangement of emergent trees and their belowground C allocation could be
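The upscaling step can be sketched as a one-variable regression applied to inventory data: fit respiration against mean DBH at the measurement points, then apply the fitted line across the larger plot. The numbers in the test below are made up for illustration, not the study's measurements.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def upscale_soil_respiration(subplot_dbh, subplot_resp, stand_dbh):
    """Fit soil respiration against mean DBH at the subplot measurement
    points, then average the regression predictions over inventory-derived
    DBH values for the whole stand."""
    a, b = fit_line(subplot_dbh, subplot_resp)
    preds = [a + b * d for d in stand_dbh]
    return sum(preds) / len(preds)
```

Because the regression is linear, the stand-scale mean equals the prediction at the stand's mean DBH, which is why subplot and 4 ha estimates agree when the DBH distributions are similar.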
International Nuclear Information System (INIS)
Libo Wu; Kaneko, S.; Matsuoka, S.
2005-01-01
It is noteworthy that the income elasticity of energy consumption in China shifted from positive to negative after 1996, accompanied by an unprecedented decline in energy-related CO2 emissions. This paper therefore investigates the evolution of energy-related CO2 emissions in China from 1985 to 1999 and the underlying driving forces, using the newly proposed three-level 'perfect decomposition' method and provincially aggregated data. The province-based estimates and analyses reveal a 'sudden stagnancy' of energy consumption, energy supply and energy-related CO2 emissions in China from 1996 to 1999. The rapid decrease in energy intensity and the slowdown in the growth of average labor productivity of industrial enterprises may have been the dominant contributors to this 'stagnancy'. The findings of this paper point to the peak rate of deterioration of state-owned enterprises in early 1996, the industrial restructuring caused by changes in ownership, the shutdown of small-scale power plants, and the introduction of policies to improve energy efficiency as probable underlying factors. Taking into account the characteristics of these key driving forces, we characterize China's decline in energy-related CO2 emissions as a short-term fluctuation and consider it likely that China will resume an increasing trend from a lower starting point in the near future. (author)
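The flavor of such a residual-free ('perfect') index decomposition can be sketched with the Kaya identity and an additive LMDI decomposition. This is a generic illustration with made-up factor values, not the paper's specific three-level method:

```python
import math

def lmdi_terms(c0, c1, factors0, factors1):
    """Additive LMDI-I: the change in emissions c1 - c0 is attributed to each
    factor via the logarithmic mean L(c0, c1) = (c1 - c0) / ln(c1 / c0).
    Because the factor logs sum to ln(c1/c0), the terms sum exactly to
    c1 - c0 with no residual."""
    L = (c1 - c0) / math.log(c1 / c0)
    return [L * math.log(f1 / f0) for f0, f1 in zip(factors0, factors1)]

# Kaya identity C = P * (G/P) * (E/G) * (C/E); two illustrative years.
# Factors: population, GDP per capita, energy intensity, carbon intensity.
f0 = (1.20, 5.0, 0.8, 2.5)
f1 = (1.25, 6.0, 0.7, 2.4)
c0 = math.prod(f0)
c1 = math.prod(f1)
terms = lmdi_terms(c0, c1, f0, f1)
```

Here a negative term (e.g. for the falling energy intensity) quantifies how much that driver offset the growth-related terms.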
International Nuclear Information System (INIS)
Wu, T.; Li, Y.; Hekker, S.
2014-01-01
Stellar mass M, radius R, and gravity g are important basic parameters in stellar physics. Accurate values for these parameters can be obtained from the gravitational interaction between stars in multiple systems or from asteroseismology. Stars in a cluster are thought to be formed coevally from the same interstellar cloud of gas and dust. The cluster members are therefore expected to have some properties in common. These common properties strengthen our ability to constrain stellar models and asteroseismically derived M, R, and g when tested against an ensemble of cluster stars. Here we derive new scaling relations based on a relation for stars on the Hayashi track (√(T_eff) ∼ g^p R^q) to determine the masses and metallicities of red giant branch stars in the open clusters NGC 6791 and NGC 6819 from the global oscillation parameters Δν (the large frequency separation) and ν_max (the frequency of maximum oscillation power). The Δν and ν_max values are derived from Kepler observations. From the analysis of these new relations we derive: (1) direct observational evidence that the masses of red giant branch stars in a cluster are the same within their uncertainties, (2) new methods to derive M and z of the cluster in a self-consistent way from Δν and ν_max, with lower intrinsic uncertainties, and (3) the mass dependence in the Δν-ν_max relation for red giant branch stars.
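For context, the conventional asteroseismic scaling relations that such refinements build on express M and R in solar units in terms of Δν, ν_max, and T_eff. The solar reference values below are commonly adopted assumptions; these are the classical relations, not the new cluster-specific relations derived in the paper:

```python
# Commonly adopted solar reference values (assumed, in muHz and K).
NU_MAX_SUN = 3090.0
DNU_SUN = 135.1
TEFF_SUN = 5777.0

def scaling_mass(nu_max, dnu, teff):
    """Stellar mass in solar units: M ~ nu_max^3 * dnu^-4 * T_eff^1.5."""
    return (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5

def scaling_radius(nu_max, dnu, teff):
    """Stellar radius in solar units: R ~ nu_max * dnu^-2 * T_eff^0.5."""
    return (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
```

A red giant with ν_max ≈ 30 μHz and Δν ≈ 4 μHz comes out roughly ten times the solar radius, as expected for stars on the red giant branch.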
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
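The core randomized-SVD step can be sketched in a few lines. The dictionary shape and rank below are made-up illustrative values, and the authors' full pipeline (polynomial fitting in the compressed space, etc.) is not reproduced:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, seed=0):
    """Low-rank SVD via random range sampling (Halko-style sketch).
    Only a small (rank + oversample)-column sketch of A is factored,
    which is what saves memory for very large dictionaries."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample the range of A with a random Gaussian test matrix.
    Omega = rng.standard_normal((n, rank + n_oversample))
    Q, _ = np.linalg.qr(A @ Omega)      # orthonormal basis for range(A)
    B = Q.T @ A                         # small (rank + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Compress a simulated dictionary: rows = fingerprints, cols = time points.
D = np.random.default_rng(1).standard_normal((2000, 500))
U, s, Vt = randomized_svd(D, rank=20)
D_low = (U * s) @ Vt                    # rank-20 approximation of D
```

For a dictionary whose true rank is at or below the requested rank, the reconstruction is essentially exact; for full-rank data it captures the dominant singular subspace.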
Gambling-Related Cognition Scale (GRCS): Are skills-based games at a disadvantage?
Lévesque, David; Sévigny, Serge; Giroux, Isabelle; Jacques, Christian
2017-09-01
The Gambling-Related Cognition Scale (GRCS; Raylu & Oei, 2004) was developed to evaluate gambling-related cognitive distortions for all types of gamblers, regardless of their gambling activities (poker, slot machine, etc.). It is therefore imperative to ascertain the validity of its interpretation across different types of gamblers; however, some skills-related items endorsed by players could be interpreted as cognitive distortions despite the fact that these players play skills-related games. Using an intergroup (168 poker players and 73 video lottery terminal [VLT] players) differential item functioning (DIF) analysis, this study examined the possible manifestation of item biases associated with the GRCS. DIF was analyzed with ordinal logistic regressions (OLRs) and Ramsay's (1991) nonparametric kernel smoothing approach with TestGraf. Results show that half of the items display at least moderate DIF between groups and, depending on the type of analysis used, 3 to 7 items displayed large DIF. The 5 items with the most DIF were more strongly endorsed by poker players (uniform DIF) and were all related to skills, knowledge, learning, or probabilities. Poker players' interpretations of some skills-related items may lead to an overestimation of their cognitive distortions, as their total scores are inflated by a measurement artifact. Findings indicate that the current structure of the GRCS contains potential biases to be considered when poker players are surveyed. The present study conveys new and important information on bias issues to ponder carefully before using and interpreting the GRCS and other similar wide-range instruments with poker players. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Scaling Mode Shapes in Output-Only Structure by a Mass-Change-Based Method
Directory of Open Access Journals (Sweden)
Liangliang Yu
2017-01-01
Full Text Available A mass-change-based method using output-only data for the rescaling of mode shapes in operational modal analysis (OMA) is introduced. The mass distribution matrix, defined as a diagonal matrix whose diagonal elements represent the ratios among the diagonal elements of the mass matrix, is calculated using the unscaled mode shapes. Based on null-space theory, the mass distribution vector or mass distribution matrix is obtained. A small mass with calibrated weight is added to a certain location of the structure, and the mass distribution vector of the modified structure is then estimated. The mass matrix is identified from the difference between the mass distribution vectors of the original and modified structures. Additionally, the complete set of modes is unnecessary when calculating the mass distribution matrix, meaning that modal truncation is allowed in the proposed method. The mass-scaled mode shapes estimated in OMA according to the proposed method are compared with those obtained by experimental modal analysis. A simulation is employed to validate the feasibility of the method. Finally, the method is tested on output-only data from an experiment on a five-storey structure, and the results confirm its effectiveness.
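The general mass-change idea can be illustrated with the classical one-mode scaling formula from the OMA literature, shown here on a toy two-storey model. This is a sketch of the underlying principle, not the paper's null-space-based mass distribution procedure, and all structural parameters are assumed:

```python
import numpy as np

def modes(M, K):
    """Natural frequencies and mass-normalized mode shapes of K phi = w^2 M phi,
    solved via the symmetric transform M^(-1/2) K M^(-1/2) (M diagonal here)."""
    Mi = np.diag(1.0 / np.sqrt(np.diag(M)))
    w2, U = np.linalg.eigh(Mi @ K @ Mi)
    return np.sqrt(w2), Mi @ U

def mass_change_scaling(w0, w1, phi, dM):
    """Classical mass-change scaling factor:
    alpha = sqrt((w0^2 - w1^2) / (w1^2 * phi' dM phi)),
    where w0/w1 are the unmodified/modified frequencies and phi is the
    unscaled mode shape; alpha * phi is approximately mass-normalized."""
    return np.sqrt((w0**2 - w1**2) / (w1**2 * (phi @ dM @ phi)))

# Toy two-storey shear model (illustrative values).
M = np.diag([2.0, 1.0])
K = np.array([[300.0, -100.0], [-100.0, 100.0]])
w, Phi = modes(M, K)
phi_uns = Phi[:, 0] / Phi[0, 0]        # operational (arbitrarily scaled) shape

dM = np.diag([0.05, 0.05])             # small calibrated added masses
w_mod, _ = modes(M + dM, K)
alpha = mass_change_scaling(w[0], w_mod[0], phi_uns, dM)
phi_scaled = alpha * phi_uns           # approximately mass-normalized shape
```

With a ~2.5-5% mass perturbation the first-order formula recovers the mass-normalized shape to well under one percent here.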
Energy Technology Data Exchange (ETDEWEB)
Hildebrand, S.G. (ed.)
1980-10-01
Potential environmental impacts in reservoirs and downstream river reaches below dams that may be caused by the water level fluctuation resulting from development and operation of small-scale (under 25 MW) hydroelectric projects are identified. The impacts discussed will be of potential concern at only those small-scale hydroelectric projects that are operated in a store and release (peaking) mode. Potential impacts on physical and chemical characteristics in reservoirs resulting from water level fluctuation include resuspension and redistribution of bank and bed sediment; leaching of soluble organic matter from sediment in the littoral zone; and changes in water quality resulting from changes in sediment and nutrient trap efficiency. Potential impacts on reservoir biota as a result of water level fluctuation include habitat destruction and the resulting partial or total loss of aquatic species; changes in habitat quality, which result in reduced standing crop and production of aquatic biota; and possible shifts in species diversity. The potential physical effects of water level fluctuation on downstream systems below dams are streambed and bank erosion and water quality problems related to resuspension and redistribution of these materials. Potential biological impacts of water level fluctuation on downstream systems below dams result from changes in current velocity, habitat reduction, and alteration in food supply. These alterations, either singly or in combination, can adversely affect aquatic populations below dams. The nature and potential significance of adverse impacts resulting from water level fluctuation are discussed. Recommendations for site-specific evaluation of water level fluctuation at small-scale hydroelectric projects are presented.
Casola, J. H.; Huber, D.
2013-12-01
Many media, academic, government, and advocacy organizations have achieved sophistication in developing effective messages based on scientific information, and can quickly translate salient aspects of emerging climate research and evolving observations. However, there are several ways in which valid messages can be misconstrued by decision makers, leading them to inaccurate conclusions about the risks associated with climate impacts. Three cases will be discussed: 1) Issues of spatial scale in interpreting climate observations: Local climate observations may contradict summary statements about the effects of climate change on larger regional or global spatial scales. Effectively addressing these differences often requires communicators to understand local and regional climate drivers, and the distinction between a 'signal' associated with climate change and local climate 'noise.' Hydrological statistics in Missouri and California are shown to illustrate this case. 2) Issues of complexity related to extreme events: Climate change is typically invoked following a wide range of damaging meteorological events (e.g., heat waves, landfalling hurricanes, tornadoes), regardless of the strength of the relationship between anthropogenic climate change and the frequency or severity of that type of event. Examples are drawn from media coverage of several recent events, contrasting useful and potentially confusing word choices and frames. 3) Issues revolving around climate sensitivity: The so-called 'pause' or 'hiatus' in global warming has reverberated strongly through political and business discussions of climate change. Addressing the recent slowdown in warming yields an important opportunity to raise climate literacy in these communities. Attempts to use recent observations as a wedge between climate 'believers' and 'deniers' are likely to be counterproductive. Examples are drawn from Congressional testimony and media stories. All three cases illustrate ways that decision
Tiwari, Nishidha; Tiwari, Shilpi; Thakur, Ruchi; Agrawal, Nikita; Shashikiran, N D; Singla, Shilpy
2015-01-01
Dental treatment is usually a distressing experience for children. Projective scales are preferred over psychometric scales to recognize this, and to obtain self-reports from children. The aims were to evaluate treatment-related fear using a newly developed fear scale for children, the fear assessment picture scale (FAPS), and anxiety with the colored version of the modified facial affective scale (MFAS, three faces), along with physiologic responses (pulse rate and oxygen saturation) obtained by pulse oximeter before and during the pulpectomy procedure. In total, 60 children aged 6-8 years who were visiting the dental hospital for the first time and needed pulpectomy treatment were selected. The children selected were of sound physical, physiological, and mental condition. Two projective scales were used: the FAPS to assess fear, and the colored version of the MFAS (three faces) to assess anxiety. These were correlated with the physiological responses (oxygen saturation and pulse rate) of the children obtained by pulse oximeter before and during the pulpectomy procedure. The Shapiro-Wilk test, McNemar's test, Wilcoxon signed ranks test, Kruskal-Wallis test, and Mann-Whitney test were applied in the study. The physiological responses showed an association with the FAPS and MFAS, though not a significant one. However, oxygen saturation with the MFAS showed a significant change between "no anxiety" and "some anxiety" as quantified by the Kruskal-Wallis test (value 6.287, P = 0.043). The test is easy and fast to apply to children and reduces chair-side time.
Methods and scales in soil erosion studies in Spain: problems and perspectives
Energy Technology Data Exchange (ETDEWEB)
Garcia-Ruiz, J. M.
2009-07-01
Soil erosion is a major problem in some areas of Spain. Research groups have studied a variety of aspects of this problem in different environments, and at a range of scales, using a diversity of methods, from piquettes and rainfall simulation to experimental plots, catchments and large regional areas. This has increased knowledge and identified the main problems: farmland abandonment, badlands erosion, the effects of land use changes, and the role of extreme events and of erosion in certain crops (particularly vineyards). However, comparison of results among the various research groups has been difficult, making it hard for State and Regional administrators to develop solutions. (Author) 73 refs.
DEFF Research Database (Denmark)
de Tomás, Alberto; Nieto, Héctor; Guzinski, Radoslaw
2014-01-01
Remote sensing has proved to be a consistent tool for monitoring water fluxes at regional scales. The triangle method, in particular, estimates the evaporative fraction (EF), defined as the ratio of latent heat flux (LE) to available energy, based on the relationship between satellite observations of land surface temperature and a vegetation index. Among other methodologies, this approach has been commonly used as an approximation to estimate LE, mainly over large semi-arid areas with uniform landscape features. In this study, an interpretation of the triangular space has been applied over...
Elongation cutoff technique armed with quantum fast multipole method for linear scaling.
Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko
2009-11-30
A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Gray, S.K.; Noid, D.W.; Sumpter, B.G.
1994-01-01
We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long time trajectories of a 1000 unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are deduced. We also discuss certain variations on the basic symplectic integration technique.
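As a generic illustration of an explicit symplectic integrator, the velocity Verlet scheme (one of the simplest members of the family, though not necessarily among the specific set tested in the study) keeps the energy of a Hamiltonian system bounded over long trajectories instead of drifting:

```python
def velocity_verlet(q, p, force, dt, n_steps, mass=1.0):
    """Velocity Verlet for a 1D Hamiltonian system H = p^2/(2m) + V(q).
    Each step is a symplectic map: half momentum kick, full position
    drift, half momentum kick."""
    traj = [(q, p)]
    f = force(q)
    for _ in range(n_steps):
        p_half = p + 0.5 * dt * f
        q = q + dt * p_half / mass
        f = force(q)
        p = p_half + 0.5 * dt * f
        traj.append((q, p))
    return traj

# Harmonic oscillator: the energy 0.5*p^2 + 0.5*q^2 should stay near 0.5
# for the whole trajectory, with no secular drift.
traj = velocity_verlet(q=1.0, p=0.0, force=lambda q: -q, dt=0.05, n_steps=2000)
energies = [0.5 * p**2 + 0.5 * q**2 for q, p in traj]
```

A nonsymplectic scheme of the same order (e.g. explicit Euler) would show a systematic energy drift on this test; the symplectic scheme only shows a small bounded oscillation of order dt².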
Dynamical properties of the growing continuum using multiple-scale method
Directory of Open Access Journals (Sweden)
Hynčík L.
2008-12-01
Full Text Available The theory of growth and remodeling is applied to a 1D continuum, which can serve, e.g., as a model of a muscle fibre or a piezo-electric stack. A hyperelastic material described by the free energy potential suggested by Fung is used, and the change of stiffness is taken into account. The corresponding equations define a dynamical system with two degrees of freedom. Its stability and the properties of its bifurcations are studied using the multiple-scale method. The conditions under which a degenerate Hopf bifurcation occurs are presented.
Multigrid preconditioned conjugate-gradient method for large-scale wave-front reconstruction.
Gilles, Luc; Vogel, Curtis R; Ellerbroek, Brent L
2002-09-01
We introduce a multigrid preconditioned conjugate-gradient (MGCG) iterative scheme for computing open-loop wave-front reconstructors for extreme adaptive optics systems. We present numerical simulations for a 17-m class telescope with n = 48756 sensor measurement grid points within the aperture, which indicate that our MGCG method has a rapid convergence rate for a wide range of subaperture average slope measurement signal-to-noise ratios. The total computational cost is of order n log n. Hence our scheme provides for fast wave-front simulation and control in large-scale adaptive optics systems.
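The preconditioned conjugate-gradient skeleton underlying such an MGCG scheme can be sketched as follows. For a self-contained example, a simple Jacobi (diagonal) preconditioner stands in for the multigrid cycle, which in the actual method would be the `precond` callable:

```python
import numpy as np

def pcg(A, b, precond, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradients for SPD A; `precond(r)` applies
    an approximation of A^{-1} to the residual (here Jacobi; in MGCG it
    would be one multigrid cycle)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Random SPD test system with Jacobi preconditioning (illustrative sizes).
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
b = rng.standard_normal(50)
d = np.diag(A)
x = pcg(A, b, precond=lambda r: r / d)
```

The point of a good preconditioner is to make the iteration count nearly independent of problem size, which is what gives the reported O(n log n) total cost.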
A Proactive Complex Event Processing Method for Large-Scale Transportation Internet of Things
Wang, Yongheng; Cao, Kening
2014-01-01
The Internet of Things (IoT) provides a new way to improve the transportation system. The key issue is how to process the numerous events generated by IoT. In this paper, a proactive complex event processing method is proposed for large-scale transportation IoT. Based on a multilayered adaptive dynamic Bayesian model, a Bayesian network structure learning algorithm using search-and-score is proposed to support accurate predictive analytics. A parallel Markov decision processes model is designed...
Modern psychometric approaches to analysis of scales for health-related quality of life
DEFF Research Database (Denmark)
Bjorner, Jakob Bue; Bech, Per
2016-01-01
In recent years, much effort has been invested in the development of new instruments for assessment of health-related quality of life (HRQOL). For many new instruments, modern psychometric methods, such as item response theory (IRT) models, have been used, either as supplemental to classical....... The models include Rasch models (Rasch 1980; Fischer and Molenaar 1995), other IRT models (Samejima 1969; van der Linden and Hambleton 1997), and factor analytic models for categorical data (Muthén 1984). “Modern” psychometric methods have actually a rather long history within psychiatric research (both...
Van Strien, Jan W.; Isbell, Lynne A.
2017-01-01
Studies of event-related potentials in humans have established larger early posterior negativity (EPN) in response to pictures depicting snakes than to pictures depicting other creatures. Ethological research has recently shown that macaques and wild vervet monkeys respond strongly to partially exposed snake models and scale patterns on the snake skin. Here, we examined whether snake skin patterns and partially exposed snakes elicit a larger EPN in humans. In Task 1, we employed pictures with close-ups of snake skins, lizard skins, and bird plumage. In task 2, we employed pictures of partially exposed snakes, lizards, and birds. Participants watched a random rapid serial visual presentation of these pictures. The EPN was scored as the mean activity (225–300 ms after picture onset) at occipital and parieto-occipital electrodes. Consistent with previous studies, and with the Snake Detection Theory, the EPN was significantly larger for snake skin pictures than for lizard skin and bird plumage pictures, and for lizard skin pictures than for bird plumage pictures. Likewise, the EPN was larger for partially exposed snakes than for partially exposed lizards and birds. The results suggest that the EPN snake effect is partly driven by snake skin scale patterns which are otherwise rare in nature. PMID:28387376
Evaluation of ground motion scaling methods for analysis of structural systems
O'Donnell, A. P.; Beltsar, O.A.; Kurama, Y.C.; Kalkan, E.; Taflanidis, A.A.
2011-01-01
Ground motion selection and scaling comprises undoubtedly the most important component of any seismic risk assessment study that involves time-history analysis. Ironically, this is also the single parameter with the least guidance provided in current building codes, resulting in the use of mostly subjective choices in design. The relevant research to date has been primarily on single-degree-of-freedom systems, with only a few studies using multi-degree-of-freedom systems. Furthermore, the previous research is based solely on numerical simulations with no experimental data available for the validation of the results. By contrast, the research effort described in this paper focuses on an experimental evaluation of selected ground motion scaling methods based on small-scale shake-table experiments of re-configurable linear-elastic and nonlinear multi-story building frame structure models. Ultimately, the experimental results will lead to the development of guidelines and procedures to achieve reliable demand estimates from nonlinear response history analysis in seismic design. In this paper, an overview of this research effort is discussed and preliminary results based on linear-elastic dynamic response are presented. © ASCE 2011.
Energy Technology Data Exchange (ETDEWEB)
O' Leary, Patrick [Kitware, Inc., Clifton Park, NY (United States)
2017-09-13
The primary challenge motivating this project is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who can perform analysis only on a small fraction of the data they calculate, resulting in the substantial likelihood of lost or missed science, when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing on data while it is still resident in memory, which is known as in situ processing. The idea of in situ processing was not new at the time of the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community that aimed to foster production-quality software tools suitable for use by Department of Energy (DOE) science projects. Our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE high-performance computing (HPC) facilities, though we expected to have an impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve this objective, we engaged in software technology research and development (R&D), in close partnerships with DOE science code teams, to produce software technologies that were shown to run efficiently at scale on DOE HPC platforms.
Mercury exposure of workers and health problems related with small-scale gold panning and extraction
International Nuclear Information System (INIS)
Khan, S.; Shah, M.T.; Din, I.U.; Rehman, S.
2012-01-01
This study was conducted to investigate mercury (Hg) exposure and health problems related to small-scale gold panning and extraction (GPE) in northern Pakistan. Urine and blood samples of occupational and non-occupational persons were analyzed for total Hg, while blood fractions, including red blood cells and plasma, were analyzed for total Hg and its inorganic and organic species. The concentrations of Hg in urine and blood samples were significantly (P<0.01) higher in occupational persons than in non-occupational persons and exceeded the permissible limits set by the World Health Organization (WHO) and the United States Environmental Protection Agency (US EPA). Furthermore, the data indicated that numerous health problems were present in occupational persons involved in GPE. (author)
Kim, Hyunji; Kim, Eunbee; Suh, Eunkook M; Callan, Mitchell J
2018-01-01
The current research developed and validated a Korean-translated version of the Personal Relative Deprivation Scale (PRDS). The PRDS measures individual differences in people's tendencies to feel resentful about what they have compared to what other people like them have. Across 2 studies, Exploratory Factor Analyses revealed that the two reverse-worded items from the original PRDS did not load onto the primary factor for the Korean-translated PRDS. A reduced 3-item Korean PRDS, however, showed good convergent validity. Replicating previous findings using Western samples, greater tendencies to make social comparisons of abilities (but not opinions) were associated with higher PRDS (Studies 1 and 2), and participants scoring higher on the 3-item Korean PRDS were more materialistic (Studies 1 and 2), reported worse physical health (Study 1), had lower self-esteem (Study 2) and experienced higher stress (Study 2).
Montoya, Joseph H; Tsai, Charlie; Vojvodic, Aleksandra; Nørskov, Jens K
2015-07-08
The electrochemical production of NH3 under ambient conditions represents an attractive prospect for sustainable agriculture, but electrocatalysts that selectively reduce N2 to NH3 remain elusive. In this work, we present insights from DFT calculations that describe limitations on the low-temperature electrocatalytic production of NH3 from N2. In particular, we highlight the linear scaling relations of the adsorption energies of intermediates that can be used to model the overpotential requirements in this process. By using a two-variable description of the theoretical overpotential, we identify fundamental limitations on N2 reduction analogous to those present in processes such as oxygen evolution. Using these trends, we propose new strategies for catalyst design that may help guide the search for an electrocatalyst that can achieve selective N2 reduction. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Comparison of relativity theories with observer-independent scales of both velocity and length/mass
International Nuclear Information System (INIS)
Amelino-Camelia, Giovanni; Benedetti, Dario; D'Andrea, Francesco; Procaccini, Andrea
2003-01-01
We consider the two most studied proposals of relativity theories with observer-independent scales of both velocity and length/mass: the one discussed by Amelino-Camelia as an illustrative example for the original proposal (Preprint gr-qc/0012051) of theories with two relativistic invariants, and an alternative more recently proposed by Magueijo and Smolin (Preprint hep-th/0112090). We show that these two relativistic theories are much more closely connected than it would appear on the basis of a naive analysis of their original formulations. In particular, in spite of adopting a rather different formal description of the deformed boost generators, they end up assigning the same dependence of momentum on rapidity, which can be described as the core feature of these relativistic theories. We show that this observation can be used to clarify the concepts of particle mass, particle velocity and energy-momentum conservation rules in these theories with two relativistic invariants
Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie
2018-04-01
The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterization of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the most widely used classification methods. In geometry-based methods, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and conducted leaf/wood classification. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.
Van Oost, Kristof; Nadeu, Elisabet; Wiaux, François; Wang, Zhengang; Stevens, François; Vanclooster, Marnik; Tran, Anh; Bogaert, Patrick; Doetterl, Sebastian; Lambot, Sébastien; Van wesemael, Bas
2014-05-01
In this paper, we synthesize the main outcomes of a collaborative project (2009-2014) initiated at the UCL (Belgium). The main objective of the project was to increase our understanding of soil organic matter dynamics in complex landscapes and use this to improve predictions of regional scale soil carbon balances. In a first phase, the project characterized the emergent spatial variability in soil organic matter storage and key soil properties at the regional scale. Based on the integration of remote sensing, geomorphological and soil analysis techniques, we quantified the temporal and spatial variability of soil carbon stock and pool distribution at the local and regional scales. This work showed a linkage between lateral fluxes of C in relation with sediment transport and the spatial variation in carbon storage at multiple spatial scales. In a second phase, the project focused on characterizing key controlling factors and process interactions at the catena scale. In-situ experiments of soil CO2 respiration showed that the soil carbon response at the catena scale was spatially heterogeneous and was mainly controlled by the catenary variation of soil physical attributes (soil moisture, temperature, C quality). The hillslope scale characterization relied on advanced hydrogeophysical techniques such as GPR (Ground Penetrating Radar), EMI (Electromagnetic Induction), ERT (Electrical Resistivity Tomography), and geophysical inversion and data mining tools. Finally, we report on the integration of these insights into a coupled and spatially explicit model and its application. Simulations showed that C stocks and the redistribution of mass and energy fluxes are closely coupled; they induce structured spatial and temporal patterns with non-negligible attached uncertainties. We discuss the main outcomes of these activities in relation to sink-source behavior and the relevance of erosion processes for larger-scale C budgets.
Fadhil, Sadeem Abbas; Alrawi, Aoday Hashim; Azeez, Jazeel H.; Hassan, Mohsen A.
2018-04-01
In the present work, a multiscale model is presented and used to modify the Hall-Petch relation for different scales from nano to micro. The modified Hall-Petch relation is derived from a multiscale equation that determines the cohesive energy between the atoms and their neighboring grains. This introduces a new term that was originally ignored even in atomistic models. The new term makes it easy to combine all other effects and derive one modified Hall-Petch equation that works across all scales, without the need to divide the range into two scales, each governed by a different equation, as is usually done in other works. As a result, applying the new relation does not require prior knowledge of the grain size distribution, which makes it more consistent and easier to apply at all scales. The new relation is used to fit data for copper and nickel, and it applies well over the whole range of grain sizes from the nano to the micro scale.
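The classical relation being modified is σ_y = σ₀ + k·d^(−1/2). A small sketch with illustrative coarse-grained copper parameters (assumed values, not the paper's fitted constants) shows the strengthening trend toward small grain sizes, where the classical form is known to break down and the modified relation is needed:

```python
import math

def hall_petch(d_m, sigma0=25.0, k=0.11):
    """Classical Hall-Petch yield stress: sigma_y = sigma0 + k / sqrt(d).
    sigma0 [MPa] and k [MPa*m^0.5] are illustrative coarse-grained copper
    values; the classical form overestimates strength for nanocrystalline
    grains, which is what the paper's extra cohesive-energy term corrects."""
    return sigma0 + k / math.sqrt(d_m)

# Strengthening as grains refine from 10 um toward 20 nm.
for d in (10e-6, 1e-6, 100e-9, 20e-9):
    print(f"d = {d:.1e} m -> sigma_y = {hall_petch(d):.0f} MPa")
```

The monotone d^(−1/2) strengthening is the regime the classical relation captures; the inverse Hall-Petch softening observed below roughly 20 nm is outside this sketch.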
Lange, Florian; Wagner, Adina; Müller, Astrid; Eggert, Frank
2017-06-01
The place of impulsiveness in multidimensional personality frameworks is still unclear. In particular, no consensus has yet been reached with regard to the relation of impulsiveness to Neuroticism and Extraversion. We aim to contribute to a clearer understanding of these relationships by accounting for the multidimensional structure of impulsiveness. In three independent studies, we related the subscales of the Barratt Impulsiveness Scale (BIS) to the Big Five factors of personality. Study 1 investigated the associations between the BIS subscales and the Big Five factors as measured by the NEO Five-Factor Inventory (NEO-FFI) in a student sample (N = 113). Selective positive correlations emerged between motor impulsiveness and Extraversion and between attentional impulsiveness and Neuroticism. This pattern of results was replicated in Study 2 (N = 132) using a 10-item short version of the Big Five Inventory. In Study 3, we analyzed BIS and NEO-FFI data obtained from a sample of patients with pathological buying (N = 68). In these patients, the relationship between motor impulsiveness and Extraversion was significantly weakened when compared to the non-clinical samples. At the same time, the relationship between attentional impulsiveness and Neuroticism was substantially stronger in the clinical sample. Our studies highlight the utility of the BIS subscales for clarifying the relationship between impulsiveness and the Big Five personality factors. We conclude that impulsiveness might occupy multiple places in multidimensional personality frameworks, which need to be specified to improve the interpretability of impulsiveness scales. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that use computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability in existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis (DUA) method that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The deterministic approach to sensitivity and uncertainty analysis is demonstrated on a sample problem that models the flow of water through a borehole. The sample problem is used to compare the cumulative distribution function of the flow rate as calculated by standard statistical methods and by the DUA method. The DUA method gives a more accurate result based on only two model executions, compared with fifty executions in the statistical case.
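A minimal sketch of the derivative-based idea behind DUA: propagate parameter variances through a model via first-order sensitivities (here obtained by central finite differences rather than compiler-generated derivatives). The model and its parameter values are illustrative placeholders, not the actual borehole model of the paper:

```python
import math

def model(params):
    # Hypothetical flow-like model: q = c * sqrt(h) / log(r).
    c, h, r = params
    return c * math.sqrt(h) / math.log(r)

def propagate_variance(model, means, stdevs, eps=1e-6):
    """First-order (delta-method) standard deviation of the model output,
    assuming independent inputs."""
    var = 0.0
    for i, (m, s) in enumerate(zip(means, stdevs)):
        hi = list(means); hi[i] = m + eps
        lo = list(means); lo[i] = m - eps
        dydx = (model(hi) - model(lo)) / (2 * eps)   # central difference
        var += (dydx * s) ** 2
    return math.sqrt(var)

means = [2.0, 25.0, 100.0]    # hypothetical parameter means
stdevs = [0.1, 1.0, 5.0]      # hypothetical parameter standard deviations
sigma_q = propagate_variance(model, means, stdevs)
```

A Monte Carlo estimate of the same quantity would need many model runs; the derivative route needs only a handful, which is the efficiency argument made in the abstract.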
A high-speed transmission method for large-scale marine seismic prospecting systems
International Nuclear Information System (INIS)
KeZhu, Song; Ping, Cao; JunFeng, Yang; FuMing, Ruan
2012-01-01
A marine seismic prospecting system is a kind of data acquisition and transmission system with large-scale coverage and synchronous multi-node acquisition. In this kind of system, data transmission is a fundamental and difficult technique. In this paper, a high-speed data-transmission method is proposed, its implications and limitations are discussed, and conclusions are drawn. The method we propose has obvious advantages over traditional techniques with respect to long-distance operation, high speed, and real-time transmission. A marine seismic system with four streamers, each 6000 m long and capable of supporting up to 1920 channels, was designed and built based on this method. The effective transmission baud rate of this system was found to reach up to 240 Mbps, while the minimum sampling interval time was as short as 0.25 ms. This system was found to achieve a good synchronization: 83 ns. Laboratory and in situ experiments showed that this marine-prospecting system could work correctly and robustly, which verifies the feasibility and validity of the method proposed in this paper. In addition to the marine seismic applications, this method can also be used in land seismic applications and certain other transmission applications such as environmental or engineering monitoring systems. (paper)
An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery
Directory of Open Access Journals (Sweden)
Haiyan Gu
2018-04-01
Full Text Available Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) to ultimately derive “meaningful objects”. While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm from graph theory is combined with the minimum heterogeneity rule (MHR) algorithm used in FNEA. The MST algorithm is used for the initial segmentation, while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partitioning and a “reverse searching-forward processing” chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites demonstrated its advantages in both accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, hyperspectral), while its accuracy is comparable with that of the FNEA method.
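A minimal sketch of the MST-style initial segmentation step: build a 4-connected pixel graph weighted by intensity difference, then merge pixels Kruskal-style with a union-find structure, skipping edges above a threshold. This illustrates only the graph-theoretic idea; the paper's method additionally applies the MHR merging rule and MPI data partitioning, and the image and threshold below are synthetic:

```python
def segment(image, threshold):
    """Label pixels by merging 4-neighbour edges with weight <= threshold."""
    rows, cols = len(image), len(image[0])
    parent = list(range(rows * cols))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    edges = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:   # horizontal edge
                edges.append((abs(image[r][c] - image[r][c + 1]),
                              r * cols + c, r * cols + c + 1))
            if r + 1 < rows:   # vertical edge
                edges.append((abs(image[r][c] - image[r + 1][c]),
                              r * cols + c, (r + 1) * cols + c))
    for w, a, b in sorted(edges):   # Kruskal order: cheapest edges first
        if w <= threshold:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
    labels = [find(i) for i in range(rows * cols)]
    return len(set(labels)), labels

# Two flat regions separated by a sharp intensity step.
img = [[10, 10, 200, 200],
       [10, 10, 200, 200]]
n_segments, labels = segment(img, threshold=5)
```

In the full FNEA pipeline these initial segments would then be merged further under the heterogeneity criterion at each scale.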
Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval
Directory of Open Access Journals (Sweden)
Lijuan Duan
2017-01-01
Full Text Available Hashing has been widely deployed to perform the Approximate Nearest Neighbor (ANN) search for large-scale image retrieval, addressing the problems of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned by a single deep hashing network may not provide a full representation of an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code with stronger expressive ability and distinction capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes generated by the two subnetworks to represent images. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval applications.
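A minimal sketch of the fuse-and-look-up idea: each image gets two binary codes from two (here hypothetical) subnetworks, the fused code is their concatenation, and retrieval ranks database images by Hamming distance to the fused query code. The toy codes below stand in for learned network outputs:

```python
def fuse(code_a, code_b):
    """Fused code = concatenation of the two subnetworks' bit strings."""
    return code_a + code_b

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def search(query, database, top_k=2):
    """Return the top_k image ids closest to the fused query code."""
    ranked = sorted(database, key=lambda item: hamming(query, item[1]))
    return [img_id for img_id, _ in ranked[:top_k]]

# Toy database: (image id, fused 8-bit code).
db = [("cat", fuse("1010", "1100")),
      ("dog", fuse("1010", "0011")),
      ("car", fuse("0101", "0011"))]
query = fuse("1010", "1101")
results = search(query, db)
```

In a production ANN index the linear scan above would be replaced by bucketed lookup over hash prefixes, but the Hamming ranking over fused codes is the same.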
Risk assessment method for the implementation of materials divided up to the nanometric scale
International Nuclear Information System (INIS)
Gridelet, L; Delbecq, P; Hervé, L; Fayet, G; Fleury, D; Kowal, S; Boissolle, P
2013-01-01
A new approach to assessing the risks inherent in the handling of powders, including nanomaterials, has been developed. This tool is based on the OHB (Occupational Hazard Band) method, widely used in the chemical industry. The European classification and CLP toxicity scales have not been modified; only the control of exposure has been reworked. The method applies essentially to the prevention of exposure to airborne materials, whatever their particle size. Skin exposure is not treated specifically for the time being. The method characterizes exposure through seven parameters that take into account the characteristics of the materials used, their emission potential, the conditions of use, as well as classic exposure-characterization parameters such as duration and frequency. The method stresses pragmatic exploitation of current knowledge and of the available data, bearing in mind that many of these are not easily accessible to plant operators. The product of this analysis is then positioned on a hazard x exposure matrix from which 3 levels of action priority are defined, as in the classical OHB method applied to pure chemical risk. This approach fills a gap in risk assessment and avoids jeopardizing what has been set up for years, while introducing new elements of reflection accessible to all operators.
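A minimal sketch of the hazard x exposure positioning step described above: a hazard band and an exposure band index into a matrix of three action-priority levels. The band boundaries and matrix values here are hypothetical placeholders, not the tool's calibrated values:

```python
# Rows: hazard band 1 (low) .. 4 (high); columns: exposure band 1 .. 4.
# Entries are action priorities: 1 = act first, 3 = act later.
PRIORITY = [
    [3, 3, 2, 2],
    [3, 2, 2, 1],
    [2, 2, 1, 1],
    [2, 1, 1, 1],
]

def action_priority(hazard_band, exposure_band):
    """Look up the OHB-style action priority (1..3) for a task."""
    return PRIORITY[hazard_band - 1][exposure_band - 1]

# A high-hazard nanopowder handled with poor containment:
p = action_priority(hazard_band=4, exposure_band=3)
```

In the real tool the exposure band itself would be derived from the seven parameters (emission potential, conditions of use, duration, frequency, etc.) before this lookup.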
Syngas conversion to a light alkene and related methods
Ginosar, Daniel M.; Petkovic, Lucia M.
2017-11-14
Methods of producing a light alkene. The method comprises contacting syngas and tungstated zirconia to produce a product stream comprising at least one light alkene. The product stream is recovered. Methods of converting syngas to a light alkene are also disclosed. The method comprises heating a precursor of tungstated zirconia to a temperature of between about 350 °C and about 550 °C to form tungstated zirconia. Syngas is flowed over the tungstated zirconia to produce a product stream comprising at least one light alkene, and the product stream comprising the at least one light alkene is recovered.
DEFF Research Database (Denmark)
Bjorner, Jakob B; Rose, Matthias; Gandek, Barbara
2014-01-01
OBJECTIVES: To test the impact of the method of administration (MOA) on score level, reliability, and validity of scales developed in the Patient Reported Outcomes Measurement Information System (PROMIS). STUDY DESIGN AND SETTING: Two nonoverlapping parallel forms each containing eight items from......, no significant mode differences were found and all confidence intervals were within the prespecified minimal important difference of 0.2 standard deviation. Parallel-forms reliabilities were very high (ICC = 0.85-0.93). Only one across-mode ICC was significantly lower than the same-mode ICC. Tests of validity...... questionnaire (PQ), personal digital assistant (PDA), or personal computer (PC) and a second form by PC, in the same administration. Method equivalence was evaluated through analyses of difference scores, intraclass correlations (ICCs), and convergent/discriminant validity. RESULTS: In difference score analyses...
[A Method of Synthesizing Tinnitus Rehabilitation Sound Based on Pentatonic Scale and Chaos].
Chen, Jiemei; He, Peiyu; Pan, Fan
2015-12-01
Tinnitus is a common clinical symptom with a high occurrence rate, and it seriously affects patients' quality of life. Research shows that listening to similar but non-repetitive music can relieve tinnitus to some extent. Music generated by the direct chaos-based mapping method exhibits overall self-similarity; however, the same tone was often repeated several times in succession, and tone mutations occurred. To solve this problem, this paper proposes a new method for tinnitus rehabilitation sound synthesis based on the pentatonic scale, chaos, and the musical instrument digital interface (MIDI). Experimental results showed that the synthesized tinnitus rehabilitation sounds were self-similar and non-repetitive, with no sudden changes. Thus, the method has referential significance for tinnitus treatment.
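A minimal sketch of the core idea: a chaotic sequence (here the logistic map) is quantized onto a pentatonic scale to yield MIDI note numbers, and immediate repetitions of the same tone are skipped, addressing the repeated-tone problem the abstract describes. The scale choice and map parameters are illustrative, not the paper's exact mapping:

```python
PENTATONIC = [60, 62, 64, 67, 69]   # C major pentatonic, one octave (MIDI)

def chaotic_melody(n_notes, x0=0.31, r=3.99):
    """Generate n_notes pentatonic MIDI notes from the logistic map."""
    notes, x = [], x0
    while len(notes) < n_notes:
        x = r * x * (1 - x)                          # logistic map step
        note = PENTATONIC[int(x * len(PENTATONIC)) % len(PENTATONIC)]
        if not notes or note != notes[-1]:           # avoid direct repeats
            notes.append(note)
    return notes

melody = chaotic_melody(16)
```

The resulting note list could be written to a standard MIDI file for playback; the self-similarity of the melody is inherited from the chaotic dynamics of the map.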
Institute of Scientific and Technical Information of China (English)
Zhenyu ZHANG; Ning ZHAO; Wei ZHONG; Long WANG; Bofeng XU
2016-01-01
Computational fluid dynamics (CFD) methods are applied to aerodynamic problems of large-scale wind turbines. Progress in the aerodynamic analysis of wind turbine profiles, numerical flow simulation of wind turbine blades, evaluation of aerodynamic performance, and multi-objective blade optimization is discussed. Based on the CFD methods, significant improvements are obtained in predicting the two- and three-dimensional aerodynamic characteristics of wind turbine airfoils and blades, and the vortical structure in their wake flows is accurately captured. Combined with a multi-objective genetic algorithm, a 1.5 MW NH-1500 optimized blade is designed with high efficiency in wind energy conversion.
Application of the spectral Lanczos decomposition method to large-scale problems arising in geophysics
Energy Technology Data Exchange (ETDEWEB)
Tamarchenko, T. [Western Atlas Logging Services, Houston, TX (United States)
1996-12-31
This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution of this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large-scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
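A minimal sketch of the SLDM idea: approximate the action f(A)v of a matrix function by projecting A onto a small Krylov subspace with the Lanczos recurrence, evaluating f on the small tridiagonal matrix T, and mapping back as f(A)v ≈ ‖v‖ · V · f(T) · e₁. Here f = exp (as in the diffusion problems mentioned), computed on T by a plain Taylor series; the tiny 3x3 matrix is illustrative only:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def lanczos(A, v, m):
    """m-step Lanczos: orthonormal basis V and tridiagonal (alphas, betas)."""
    norm = sum(x * x for x in v) ** 0.5
    V = [[x / norm for x in v]]
    alphas, betas = [], []
    q_prev, beta = [0.0] * len(v), 0.0
    for _ in range(m):
        w = matvec(A, V[-1])
        alpha = sum(a * b for a, b in zip(w, V[-1]))
        w = [wi - alpha * qi - beta * pi
             for wi, qi, pi in zip(w, V[-1], q_prev)]
        alphas.append(alpha)
        beta = sum(x * x for x in w) ** 0.5
        if len(alphas) < m:
            betas.append(beta)
            q_prev = V[-1]
            V.append([x / beta for x in w])
    return V, alphas, betas, norm

def mat_exp(T, terms=30):
    """exp(T) for a small matrix via the Taylor series (term_k = T^k / k!)."""
    n = len(T)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][l] * T[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def expm_action(A, v, m):
    """Approximate exp(A) v via the m-dimensional Krylov projection."""
    V, alphas, betas, norm = lanczos(A, v, m)
    T = [[0.0] * m for _ in range(m)]
    for i in range(m):
        T[i][i] = alphas[i]
    for i in range(m - 1):
        T[i][i + 1] = T[i + 1][i] = betas[i]
    fT = mat_exp(T)
    coeffs = [fT[i][0] * norm for i in range(m)]   # ||v|| * exp(T) e1
    return [sum(coeffs[j] * V[j][i] for j in range(m))
            for i in range(len(v))]

A = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
v = [1.0, 0.0, 0.0]
approx = expm_action(A, v, m=3)
```

The payoff in practice is that m stays small even when A is huge, so only a few matrix-vector products are needed; for the sine/cosine functions mentioned in the abstract, f(T) simply replaces exp(T).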
Directory of Open Access Journals (Sweden)
Ruth A. Lanius
2015-03-01
Full Text Available Background: Three intrinsic connectivity networks in the brain, namely the central executive, salience, and default mode networks, have been identified as crucial to the understanding of higher cognitive functioning, and the functioning of these networks has been suggested to be impaired in psychopathology, including posttraumatic stress disorder (PTSD). Objective: (1) To describe three main large-scale networks of the human brain; (2) to discuss the functioning of these neural networks in PTSD and related symptoms; and (3) to offer hypotheses for neuroscientifically-informed interventions based on treating the abnormalities observed in these neural networks in PTSD and related disorders. Method: Literature relevant to this commentary was reviewed. Results: Increasing evidence for altered functioning of the central executive, salience, and default mode networks in PTSD has been demonstrated. We suggest that each network is associated with specific clinical symptoms observed in PTSD, including cognitive dysfunction (central executive network), increased and decreased arousal/interoception (salience network), and an altered sense of self (default mode network). Specific testable neuroscientifically-informed treatments aimed to restore each of these neural networks and related clinical dysfunction are proposed. Conclusions: Neuroscientifically-informed treatment interventions will be essential to future research agendas aimed at targeting specific PTSD and related symptoms.
AIRS Observations Based Evaluation of Relative Climate Feedback Strengths on a GCM Grid-Scale
Molnar, G. I.; Susskind, J.
2012-12-01
Climate feedback strengths, especially those associated with moist processes, still have a rather wide range in GCMs, the primary tools to predict future climate changes associated with man's ever increasing influences on our planet. Here, we make use of the first 10 years of AIRS observations to evaluate interrelationships/correlations of atmospheric moist parameter anomalies computed from AIRS Version 5 Level-3 products, and demonstrate their usefulness to assess relative feedback strengths. Although one may argue about the possible usability of shorter-term, observed climate parameter anomalies for estimating the strength of various (mostly moist processes related) feedbacks, recent works, in particular analyses by Dessler [2008, 2010], have demonstrated their usefulness in assessing global water vapor and cloud feedbacks. First, we create AIRS-observed monthly anomaly time-series (ATs) of outgoing longwave radiation, water vapor, clouds and temperature profile over a 10-year long (Sept. 2002 through Aug. 2012) period using 1x1 degree resolution (a common GCM grid-scale). Next, we evaluate the interrelationships of ATs of the above parameters with the corresponding 1x1 degree, as well as global surface temperature ATs. The latter provides insight comparable with more traditional climate feedback definitions (e. g., Zelinka and Hartmann, 2012) whilst the former is related to a new definition of "local (in surface temperature too) feedback strengths" on a GCM grid-scale. Comparing the correlation maps generated provides valuable new information on the spatial distribution of relative climate feedback strengths. We argue that for GCMs to be trusted for predicting longer-term climate variability, they should be able to reproduce these observed relationships/metrics as closely as possible. For this time period the main climate "forcing" was associated with the El Niño/La Niña variability (e. g., Dessler, 2010), so these assessments may not be descriptive of longer
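A minimal sketch of the anomaly time-series construction described above: a monthly anomaly is the value minus the multi-year mean for that calendar month, and two anomaly series are then compared by Pearson correlation. The numbers below are synthetic, for illustration only (not AIRS data):

```python
import math

def monthly_anomalies(series):
    """series: monthly values (Jan..Dec repeating); subtract the
    per-calendar-month climatology to remove the seasonal cycle."""
    clim, counts = [0.0] * 12, [0] * 12
    for i, v in enumerate(series):
        clim[i % 12] += v
        counts[i % 12] += 1
    clim = [s / c for s, c in zip(clim, counts)]
    return [v - clim[i % 12] for i, v in enumerate(series)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Two synthetic 3-year monthly records sharing an interannual trend
# on top of a strong (and irrelevant) seasonal cycle.
temp = [15 + 10 * math.sin(2 * math.pi * (m % 12) / 12) + 0.5 * (m // 12)
        for m in range(36)]
olr = [240 + 30 * math.sin(2 * math.pi * (m % 12) / 12) + 2.0 * (m // 12)
       for m in range(36)]
r = pearson(monthly_anomalies(temp), monthly_anomalies(olr))
```

Applied per 1x1 degree grid cell, such correlations between parameter anomalies and surface temperature anomalies are what the proposed grid-scale feedback metrics are built from.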
EXPLORING PHYSICIANS' DISSATISFACTION AND WORK-RELATED STRESS: DEVELOPMENT OF THE PhyDis SCALE
Directory of Open Access Journals (Sweden)
Monica Pedrazza
2016-08-01
Full Text Available Research, all over the world, is starting to recognize the potential impact of physicians’ dissatisfaction and burnout on their productivity, that is, on their intent to leave the job, on their work ability, on the amount of sick leave days, on their intent to continue practicing, and last but not least, on the quality of the services provided, which is an essential part of the general medical care system. It was in the interest of the provincial medical board’s ethical committee to acquire information about physicians’ work-related stress and dissatisfaction. The research group was committed to defining the indicators of dissatisfaction and work-related stressors. Focus groups were carried out and 21 stressful experience indicators were identified; we developed an online questionnaire to assess the amount of perceived stress relating to each indicator at work (3070 physicians were contacted by e-mail); quantitative and qualitative data analyses were carried out. The grounded theory perspective was applied in order to assure the most reliable procedure to investigate the concept structure of work-related stress. We tested the five-dimension model of the stressful experience with a confirmatory factor analysis: Personal Costs; Decline in Public Image and Role Uncertainty; Physician's Responsibility toward hopelessly ill Patients; Relationship with Staff and Colleagues; Bureaucracy. We split the sample according to attachment style (secure and insecure: anxious and avoidant). Results show the complex representation of physicians’ dissatisfaction at work, also with reference to the individual-difference variable of attachment security/insecurity. The discriminant validity of the scale was tested. The original contribution of this paper lies on the one hand in the qualitative in-depth inductive analysis of physicians’ dissatisfaction starting from physicians’ perception; on the other hand, it represents the first attempt to analyze the
Exploring Physicians' Dissatisfaction and Work-Related Stress: Development of the PhyDis Scale.
Pedrazza, Monica; Berlanda, Sabrina; Trifiletti, Elena; Bressan, Franco
2016-01-01
Research, all over the world, is starting to recognize the potential impact of physicians' dissatisfaction and burnout on their productivity, that is, on their intent to leave the job, on their work ability, on the amount of sick leave days, on their intent to continue practicing, and last but not least, on the quality of the services provided, which is an essential part of the general medical care system. It was interest of the provincial medical board's ethical committee to acquire information about physician's work-related stress and dissatisfaction. The research group was committed to define the indicators of dissatisfaction and work-related stressors. Focus groups were carried out, 21 stressful experience's indicators were identified; we developed an online questionnaire to assess the amount of perceived stress relating to each indicator at work (3070 physicians were contacted by e-mail); quantitative and qualitative data analysis were carried out. The grounded theory perspective was applied in order to assure the most reliable procedure to investigate the concepts' structure of "work-related stress." We tested the five dimensions' model of the stressful experience with a confirmatory factor analysis: Personal Costs; Decline in Public Image and Role Uncertainty; Physician's Responsibility toward hopelessly ill Patients; Relationship with Staff and Colleagues; Bureaucracy. We split the sample according to attachment style (secure and insecure -anxious and avoidant-). Results show the complex representation of physicians' dissatisfaction at work also with references to the variable of individual difference of attachment security/insecurity. The discriminant validity of the scale was tested. The original contribution of this paper lies on the one hand in the qualitative in depth inductive analysis of physicians' dissatisfaction starting from physicians' perception, on the other hand, it represents the first attempt to analyze the physicians' dissatisfaction with
DEFF Research Database (Denmark)
Hansen, Alice Ørts; Kristensen, Hanne Kaae; Cederlund, Ragnhild
2017-01-01
to be a powerful tool to measure the ICF component personal factors, which could have an impact on patients' rehabilitation outcomes. Implications for rehabilitation Antonovsky's SOC-13 scale showed test-retest reliability for patients with hand-related disorders. The SOC-13 scale could be a suitable tool to help...... measure personal factors....
Top-spray fluid bed coating: Scale-up in terms of relative droplet size and drying force
DEFF Research Database (Denmark)
Hede, Peter Dybdahl; Bach, P.; Jensen, Anker Degn
2008-01-01
in terms of particle size fractions larger than 425 mu m determined by sieve analysis. Results indicated that the particle size distribution may be reproduced across scale with statistical valid precision by keeping the drying force and the relative droplet size constant across scale. It is also shown...
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems, due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
Sagui, Celeste
2006-03-01
An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as this stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign "partial charges" to every atom in a simulation in order to model the interatomic electrostatic forces, so the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate, in a physically meaningful way, the artifacts associated with the point charges used in the force fields (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules)? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way up to hexadecapoles without prohibitive extra cost. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.
DGDFT: A massively parallel method for large scale density functional theory calculations.
Hu, Wei; Lin, Lin; Yang, Chao
2015-09-28
We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.
AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION
Energy Technology Data Exchange (ETDEWEB)
Alipour, N.; Safari, H. [Department of Physics, University of Zanjan, P.O. Box 45195-313, Zanjan (Iran, Islamic Republic of); Innes, D. E. [Max-Planck Institut fuer Sonnensystemforschung, 37191 Katlenburg-Lindau (Germany)
2012-02-10
Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes −35° and +35°. The sizes and expansion speeds of the central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with a slope of about −5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s⁻¹.
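The dimming sizes above are extracted with a region-grow algorithm; a minimal sketch of that step, growing a region from a seed pixel over 4-connected neighbours whose intensity drop exceeds a threshold and reporting the size in pixels. The image and threshold are synthetic placeholders:

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Breadth-first region growing from seed; returns region size in pixels."""
    rows, cols = len(image), len(image[0])
    seen = {seed}                 # the seed pixel always belongs to the region
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and image[nr][nc] >= threshold):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen)

# Synthetic "intensity drop" map: a small dimming patch in a quiet background.
drop = [[0, 0, 0, 0, 0],
        [0, 9, 8, 7, 0],
        [0, 9, 8, 7, 0],
        [0, 0, 0, 0, 0]]
size_px = region_grow(drop, seed=(1, 1), threshold=5)
```

Repeating this frame by frame yields the region's growth, from which an expansion speed like those reported above can be estimated.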
Qu, Xingtian; Li, Jinlai; Yin, Zhifu
2018-04-01
Micro- and nanofluidic chips are becoming increasingly significant for biological and medical applications. Future advances in micro- and nanofluidics and their use in commercial applications depend on the development and fabrication of low-cost, high-fidelity, large-scale plastic micro- and nanofluidic chips. However, the majority of present fabrication methods suffer from a low bonding rate of the chip during the thermal bonding process due to air trapped between the substrate and the cover plate. In the present work, a novel bonding technique based on Ar plasma and water treatment was proposed to fully bond large-scale micro- and nanofluidic chips. The influence of the Ar plasma parameters on the water contact angle and the effect of the bonding conditions on the bonding rate and bonding strength of the chip were studied. Fluorescence tests demonstrate that a 5 × 5 cm² poly(methyl methacrylate) chip with 180 nm wide and 180 nm deep nanochannels can be fabricated without any blockage or leakage by the newly developed method.
DGDFT: A massively parallel method for large scale density functional theory calculations
Energy Technology Data Exchange (ETDEWEB)
Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)
2015-09-28
We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute the electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in the energy and 6.2 × 10⁻⁴ Hartree/bohr in the atomic forces. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we describe in detail.
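The 80% strong-scaling efficiency quoted above follows the standard definition efficiency = (T_ref · N_ref) / (T_N · N). The sketch below illustrates only this arithmetic; the wall-clock times and the 1,000-core reference run are hypothetical, not figures from the abstract.

```python
# Back-of-the-envelope sketch of strong-scaling parallel efficiency.
# Timings are hypothetical; only the definition
#   efficiency = (T_ref * N_ref) / (T_N * N)
# is standard.
def parallel_efficiency(t_ref: float, n_ref: int, t_n: float, n: int) -> float:
    """Strong-scaling efficiency of a run on n cores relative to a
    reference run on n_ref cores (1.0 = perfect scaling)."""
    return (t_ref * n_ref) / (t_n * n)

# Hypothetical wall-clock times (seconds) for one SCF iteration.
t_ref, n_ref = 1000.0, 1_000     # assumed reference run on 1,000 cores
t_n, n = 9.77, 128_000           # assumed large run on 128,000 cores

eff = parallel_efficiency(t_ref, n_ref, t_n, n)
print(f"parallel efficiency: {eff:.0%}")  # ≈ 80%
```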
Method of constructing a fundamental equation of state based on a scaling hypothesis
Rykov, V. A.; Rykov, S. V.; Kudryavtseva, I. V.; Sverdlov, A. V.
2017-11-01
This work studies the issues associated with constructing an equation of state (EOS) that properly accounts for substance behavior in the critical region and is consistent with the scaling theory of critical phenomena (ST). The authors have developed a new version of the scaling hypothesis; the approach uses (a) a substance equation of state in the form of the Schofield-Litster-Ho linear model (LM) and (b) the Benedek hypothesis, which established a similar behavior for a number of properties (isochoric and isobaric heat capacities, isothermal compressibility coefficient) on the critical and near-critical isochores in the vicinity of the critical point. A method is proposed to build a fundamental equation of state (FEOS) that satisfies the ST power laws. The FEOS-building method is verified by constructing an equation of state for argon over the ranges up to 1000 MPa in pressure and from 83.056 K to 13000 K in temperature. Comparison with the fundamental equations of state of Stewart-Jacobsen (1989), of Kozlov et al. (1996), and of Tegeler-Span-Wagner (1999) shows that the FEOS describes the known experimental data with a substantially lower error.
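For reference, the ST power laws that such an FEOS must reproduce are the standard critical-point asymptotics; the particular form and exponents used by the authors are not given in the abstract, so the following is the textbook statement with the usual universal exponents α, β, γ, δ and reduced temperature τ = (T − T_c)/T_c:

```latex
% Standard scaling-theory power laws near the critical point
% (textbook form; not necessarily the exact parameterization of this work)
\begin{align}
  C_V &\sim |\tau|^{-\alpha}, \\
  \rho_{\mathrm{liq}} - \rho_{\mathrm{gas}} &\sim |\tau|^{\beta}, \\
  K_T &\sim |\tau|^{-\gamma}, \\
  p - p_c &\sim |\rho - \rho_c|^{\delta}\,
            \operatorname{sgn}(\rho - \rho_c) \quad (T = T_c).
\end{align}
```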
Measuring the black hole mass in ultraluminous X-ray sources with the X-ray scaling method
Jang, I.; Gliozzi, M.; Satyapal, S.; Titarchuk, L.
2018-01-01
In our recent work, we demonstrated that a novel X-ray scaling method, originally introduced for Galactic black holes (BHs), can be reliably extended to estimate the mass of supermassive black holes accreting at moderate to high levels. Here, we apply this X-ray scaling method to ultraluminous X-ray sources (ULXs) to constrain their black hole masses M_BH. Using 49 ULXs with multiple XMM-Newton observations, we infer that ULXs host both stellar-mass BHs and intermediate-mass BHs. The majority of the sources in our sample appear consistent with the hypothesis of highly accreting massive stellar BHs with M_BH ∼ 100 M⊙. Our results are in general agreement with the M_BH values obtained with alternative methods, including model-independent variability methods. This suggests that the X-ray scaling method is a genuinely scale-independent method that can be applied to all BH systems accreting at moderate to high rates.
The Five Star Method: A Relational Dream Work Methodology
Sparrow, Gregory Scott; Thurston, Mark
2010-01-01
This article presents a systematic method of dream work called the Five Star Method. Based on cocreative dream theory, which views the dream as the product of the interaction between dreamer and dream, this creative intervention shifts the principal focus in dream analysis from the interpretation of static imagery to the analysis of the dreamer's…