WorldWideScience

Sample records for linear scale change

  1. Linear scaling of density functional algorithms

    International Nuclear Information System (INIS)

    Stechel, E.B.; Feibelman, P.J.; Williams, A.R.

    1993-01-01

    An efficient density functional algorithm (DFA) that scales linearly with system size will revolutionize electronic structure calculations. Density functional calculations are reliable and accurate in determining many condensed matter and molecular ground-state properties. However, because current DFAs, including methods related to that of Car and Parrinello, scale with the cube of the system size, density functional studies are not routinely applied to large systems. Linear scaling is achieved by constructing functions that are both localized and fully occupied, thereby eliminating the need to calculate global eigenfunctions. It is, however, widely believed that exponential localization requires the existence of an energy gap between the occupied and unoccupied states. Despite this, the authors demonstrate that linear scaling can still be achieved for metals. Using a linear scaling algorithm, they have explicitly constructed localized, almost fully occupied orbitals for the quintessential metallic system, jellium. The algorithm is readily generalizable to any system geometry and Hamiltonian. They will discuss the conceptual issues involved, convergence properties and scaling for their new algorithm.

  2. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop was that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  3. Frequency scaling of linear super-colliders

    International Nuclear Information System (INIS)

    Mondelli, A.; Chernin, D.; Drobot, A.; Reiser, M.; Granatstein, V.

    1986-06-01

    The development of electron-positron linear colliders in the TeV energy range will be facilitated by the development of high-power rf sources at frequencies above 2856 MHz. Present S-band technology, represented by the SLC, would require a length in excess of 50 km per linac to accelerate particles to energies above 1 TeV. By raising the rf driving frequency, the rf breakdown limit is increased, thereby allowing the length of the accelerators to be reduced. Currently available rf power sources set the realizable gradient limit in an rf linac at frequencies above S-band. This paper presents a model for the frequency scaling of linear colliders, with luminosity scaled in proportion to the square of the center-of-mass energy. Since wakefield effects are the dominant deleterious effect, a separate single-bunch simulation model is described which calculates the evolution of the beam bunch with specified wakefields, including the effects of using programmed phase positioning and Landau damping. The results presented here have been obtained for a SLAC structure, scaled in proportion to wavelength

  4. Polarized atomic orbitals for linear scaling methods

    Science.gov (United States)

    Berghold, Gerd; Parrinello, Michele; Hutter, Jürg

    2002-02-01

    We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent from the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
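
    As a concrete illustration of one of the linear-scaling ingredients named above, the sketch below builds an approximate density matrix from a Chebyshev polynomial expansion of the Fermi-Dirac function of a toy Hamiltonian. It is a minimal sketch, not the PAO implementation of the paper: the orthogonal basis, the tight-binding Hamiltonian, and the parameters mu, beta and the expansion order are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): approximate density matrix
# P ~ f_FD(H) from a Chebyshev expansion of the Fermi-Dirac function.
# Assumes an orthogonal basis and a toy tridiagonal Hamiltonian.
import numpy as np

def chebyshev_density_matrix(H, mu=0.0, beta=40.0, order=200):
    n = H.shape[0]
    # Conservative spectral bounds (Gershgorin-style), then rescale H to [-1, 1].
    row_sums = np.abs(H).sum(axis=1)
    emin = np.min(np.diag(H)) - row_sums.max()
    emax = np.max(np.diag(H)) + row_sums.max()
    a, b = (emax - emin) / 2.0, (emax + emin) / 2.0
    Ht = (H - b * np.eye(n)) / a

    # Chebyshev coefficients of f(x) = 1/(1 + exp(beta*(a*x + b - mu)))
    # from function values at the Chebyshev nodes.
    k = np.arange(order)
    theta = np.pi * (k + 0.5) / order
    f = 1.0 / (1.0 + np.exp(beta * (a * np.cos(theta) + b - mu)))
    c = (2.0 / order) * np.array([np.sum(f * np.cos(j * theta)) for j in range(order)])
    c[0] *= 0.5

    # Sum the matrix Chebyshev series with the three-term recurrence.
    T_prev, T_curr = np.eye(n), Ht.copy()
    P = c[0] * T_prev + c[1] * T_curr
    for j in range(2, order):
        T_prev, T_curr = T_curr, 2.0 * Ht @ T_curr - T_prev
        P += c[j] * T_curr
    return P

# Toy 1D tight-binding Hamiltonian (hypothetical parameters).
n = 50
H = -np.eye(n, k=1) - np.eye(n, k=-1)
P = chebyshev_density_matrix(H)
print("trace(P) ~ number of occupied states:", P.trace())
```

    Dense matrices are used here for brevity; in an actual O(N) code the polynomials would be applied with sparse (localized) matrix algebra, which is where the linear scaling comes from.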

  5. Linear Polarization Properties of Parsec-Scale AGN Jets

    Directory of Open Access Journals (Sweden)

    Alexander B. Pushkarev

    2017-12-01

    We used 15 GHz multi-epoch Very Long Baseline Array (VLBA) polarization sensitive observations of 484 sources within a time interval 1996–2016 from the MOJAVE program, and also from the NRAO data archive. We have analyzed the linear polarization characteristics of the compact core features and regions downstream, and their changes along and across the parsec-scale active galactic nuclei (AGN) jets. We detected a significant increase of fractional polarization with distance from the radio core along the jet as well as towards the jet edges. Compared to quasars, BL Lacs have a higher degree of polarization and exhibit more stable electric vector position angles (EVPAs) in their core features and a better alignment of the EVPAs with the local jet direction. The latter is accompanied by a higher degree of linear polarization, suggesting that compact bright jet features might be strong transverse shocks, which enhance magnetic field regularity by compression.

  6. Supervised scale-regularized linear convolutionary filters

    DEFF Research Database (Denmark)

    Loog, Marco; Lauze, Francois Bernard

    2017-01-01

    also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...

  7. Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.

    Science.gov (United States)

    Cawkwell, M J; Niklasson, Anders M N

    2012-10-07

    Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
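
    The sketch below is a minimal, generic version of the kind of thresholded density matrix purification referred to above (not the authors' implementation): a second-order spectral-projection style iteration on a sparse matrix in which small elements are dropped at every step, which is what keeps the cost roughly linear for gapped systems. The toy Hamiltonian, occupation count and threshold are assumptions for illustration.

```python
# Minimal sketch (not the authors' code): density matrix purification with a
# numerical threshold on small matrix elements, using sparse matrix algebra.
import numpy as np
from scipy.sparse import csr_matrix

def purified_density_matrix(H, n_occ, tol=1e-6, max_iter=100):
    n = H.shape[0]
    # Conservative spectral bounds from Gershgorin discs.
    row_sums = np.abs(H).sum(axis=1)
    emin = float(np.min(np.diag(H)) - row_sums.max())
    emax = float(np.max(np.diag(H)) + row_sums.max())
    # Initial guess: spectrum mapped into [0, 1], occupied states near 1.
    X = csr_matrix((emax * np.eye(n) - H) / (emax - emin))
    for _ in range(max_iter):
        X2 = X @ X
        X2.data[np.abs(X2.data) < tol] = 0.0   # numerical threshold -> sparsity
        X2.eliminate_zeros()
        if abs(X.trace() - X2.trace()) < 1e-9:  # trace(X - X^2) ~ 0: idempotent
            break
        # Choose the projection (X^2 or 2X - X^2) that drives trace(X) to n_occ.
        if abs(X2.trace() - n_occ) < abs(2.0 * X.trace() - X2.trace() - n_occ):
            X = X2
        else:
            X = 2.0 * X - X2
    return X   # approximately idempotent density matrix with trace ~ n_occ

# Toy gapped tight-binding chain with alternating on-site energies (hypothetical).
n = 100
H = np.diag(np.tile([-1.0, 1.0], n // 2)) - np.eye(n, k=1) - np.eye(n, k=-1)
P = purified_density_matrix(H, n_occ=n // 2)
print("trace(P) =", P.trace(), " nonzeros per row =", P.nnz / n)
```

    In the extended Lagrangian scheme described in the abstract, a density matrix of this kind supplies the approximate forces for the molecular dynamics; that machinery is not reproduced here.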

  8. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  9. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of their tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  10. Parameter Scaling in Non-Linear Microwave Tomography

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Talcoth, Oskar

    2012-01-01

    Non-linear microwave tomographic imaging of the breast is a challenging computational problem. The breast is heterogeneous and contains several high-contrast and lossy regions, resulting in large differences in the measured signal levels. This implies that special care must be taken when the imaging problem is formulated. Under such conditions, microwave imaging systems will most often be considerably more sensitive to changes in the electromagnetic properties in certain regions of the breast. The result is that the parameters might not be reconstructed correctly in the less sensitive regions ... introduced as a measure of the sensitivity. The scaling of the parameters is shown to improve performance of the microwave imaging system when applied to reconstruction of images from 2-D simulated data and measurement data.

  11. Scaling Climate Change Communication for Behavior Change

    Science.gov (United States)

    Rodriguez, V. C.; Lappé, M.; Flora, J. A.; Ardoin, N. M.; Robinson, T. N.

    2014-12-01

    Ultimately, effective climate change communication results in a change in behavior, whether the change is individual, household or collective actions within communities. We describe two efforts to promote climate-friendly behavior via climate communication and behavior change theory. Importantly, these efforts are designed to scale climate communication principles focused on behavior change rather than solely emphasizing climate knowledge or attitudes. Both cases are embedded in rigorous evaluations (randomized controlled trial and quasi-experimental) of primary and secondary outcomes as well as supplementary analyses that have implications for program refinement and program scaling. In the first case, the Girl Scouts "Girls Learning Environment and Energy" (GLEE) trial is scaling the program via a Massive Open Online Course (MOOC) for Troop Leaders to teach the effective home electricity and food and transportation energy reduction programs. The second case, the Alliance for Climate Education (ACE) Assembly Program, is advancing the already-scaled assembly program by using communication principles to further engage youth and their families and communities (school and local communities) in individual and collective actions. Scaling of each program uses online learning platforms, social media and "behavior practice" videos, mastery practice exercises, virtual feedback and virtual social engagement to advance climate-friendly behavior change. All of these communication practices aim to simulate and advance in-person train-the-trainers technologies. As part of this presentation we outline scaling principles derived from these two climate change communication and behavior change programs.

  12. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach for solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
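
    A minimal sketch of the general idea follows, with the caveat that the specific non-quadratic convex function proposed in the paper is not reproduced: it solves A x = b by minimizing a convex merit function with spectral (Barzilai-Borwein) gradient steps, using the stand-in merit f(x) = 0.5*||Ax - b||^2, which is convex for any coefficient matrix. The test matrix, step safeguards and tolerances are arbitrary.

```python
# Minimal sketch of the general idea, not the authors' merit function: solve
# A x = b by minimizing the convex function f(x) = 0.5*||Ax - b||^2 with
# spectral (Barzilai-Borwein) gradient steps.
import numpy as np

def spectral_gradient_solve(A, b, max_iter=2000, tol=1e-10):
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)            # gradient of the merit function
    Ag = A @ g
    alpha = (g @ g) / (Ag @ Ag)      # exact first step for a quadratic merit
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-30 else alpha   # BB1 step length
        x, g = x_new, g_new
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x

# Hypothetical dense nonsymmetric test system (symmetric part typically indefinite).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) - 2.0 * np.eye(100)
x_true = rng.standard_normal(100)
x = spectral_gradient_solve(A, A @ x_true)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```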

  13. Common Nearly Best Linear Estimates of Location and Scale ...

    African Journals Online (AJOL)

    Common nearly best linear estimates of location and scale parameters of normal and logistic distributions, which are based on complete samples, are considered. Here, the population from which the samples are drawn is either normal or logistic population or a fusion of both distributions and the estimates are computed ...

  14. Turbulence Spreading into Linearly Stable Zone and Transport Scaling

    International Nuclear Information System (INIS)

    Hahm, T.S.; Diamond, P.H.; Lin, Z.; Itoh, K.; Itoh, S.-I.

    2003-01-01

    We study the simplest problem of turbulence spreading corresponding to the spatio-temporal propagation of a patch of turbulence from a region where it is locally excited to a region of weaker excitation, or even local damping. A single model equation for the local turbulence intensity I(x, t) includes the effects of local linear growth and damping, spatially local nonlinear coupling to dissipation and spatial scattering of turbulence energy induced by nonlinear coupling. In the absence of dissipation, the front propagation into the linearly stable zone occurs with the property of rapid progression at small t, followed by slower subdiffusive progression at late times. The turbulence radial spreading into the linearly stable zone reduces the turbulent intensity in the linearly unstable zone, and introduces an additional dependence on ρ* (≡ ρ_i/a) into the turbulent intensity and the transport scaling. These are in broad, semi-quantitative agreement with a number of global gyrokinetic simulation results with and without zonal flows. The front propagation stops when the radial flux of fluctuation energy from the linearly unstable region is balanced by local dissipation in the linearly stable region.
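
    Schematically, a single-field turbulence-intensity equation of the type described above can be written as follows; the exact exponents and coefficients used in the paper may differ.

```latex
\begin{equation}
  \frac{\partial I}{\partial t}
  \;=\; \gamma(x)\, I \;-\; \alpha\, I^{2}
  \;+\; \frac{\partial}{\partial x}\!\left( \chi_{0}\, I\, \frac{\partial I}{\partial x} \right)
\end{equation}
% gamma(x): local linear growth or damping rate; -alpha I^2: local nonlinear
% coupling to dissipation; the last term: nonlinear spatial scattering of
% turbulence energy.
```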

  15. Polarization properties of linearly polarized parabolic scaling Bessel beams

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Mengwen; Zhao, Daomu, E-mail: zhaodaomu@yahoo.com

    2016-10-07

    The intensity profiles for the dominant polarization, cross polarization, and longitudinal components of modified parabolic scaling Bessel beams with linear polarization are investigated theoretically. The transverse intensity distributions of the three electric components are intimately connected to the topological charge. In particular, the intensity patterns of the cross polarization and longitudinal components near the apodization plane reflect the sign of the topological charge. - Highlights: • We investigated the polarization properties of modified parabolic scaling Bessel beams with linear polarization. • We studied the evolution of transverse intensity profiles for the three components of these beams. • The intensity patterns of the cross polarization and longitudinal components can reflect the sign of the topological charge.

  16. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace, and the most important issue for applications is the shift equation and the seed switching method, which greatly reduce the computational cost. The applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.
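
    The property exploited by such shifted Krylov-subspace solvers is the shift invariance of the Krylov subspace, so a single seed system provides the subspace for every shifted system (A + sigma I) x_sigma = b; in generic notation:

```latex
\begin{equation}
  \mathcal{K}_{m}(A, b)
  \;=\; \operatorname{span}\{\, b,\; A b,\; \dots,\; A^{m-1} b \,\}
  \;=\; \mathcal{K}_{m}(A + \sigma I,\; b)
  \qquad \text{for every shift } \sigma .
\end{equation}
```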

  17. Graph-based linear scaling electronic structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.; Swart, Pieter J.; Germann, Timothy C.; Bock, Nicolas [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Mniszewski, Susan M.; Mohd-Yusof, Jamal; Wall, Michael E.; Djidjev, Hristo [Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Rubensson, Emanuel H. [Division of Scientific Computing, Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala (Sweden)

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  18. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
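
    For reference, a generic two-stage stochastic linear program with recourse, the problem class addressed here, takes the following form (standard textbook notation, not the specific planning models of the report); the expectation over scenarios omega is what the decomposition and Monte Carlo importance-sampling machinery approximates.

```latex
\begin{align}
  \min_{x \ge 0} \quad & c^{\top} x \;+\; \mathbb{E}_{\omega}\!\left[\, Q(x,\omega) \,\right]
  \qquad \text{s.t.} \quad A x = b, \\
  Q(x,\omega) \;=\; \min_{y \ge 0} \quad & q(\omega)^{\top} y
  \qquad \text{s.t.} \quad W y = h(\omega) - T(\omega)\, x .
\end{align}
```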

  19. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  20. Reconnection Scaling Experiment (RSX): Magnetic Reconnection in Linear Geometry

    Science.gov (United States)

    Intrator, T.; Sovinec, C.; Begay, D.; Wurden, G.; Furno, I.; Werley, C.; Fisher, M.; Vermare, L.; Fienup, W.

    2001-10-01

    The linear Reconnection Scaling Experiment (RSX) at LANL is a new experiment that can create MHD-relevant plasmas to look at the physics of magnetic reconnection. This experiment can scale many relevant parameters because the guns that generate the plasma and current channels do not depend on equilibrium or force balance for startup. We describe the experiment and initial electrostatic and magnetic probe data. Two parallel current channels sweep down a long plasma column, and probe data accumulated over many shots give 3D movies of magnetic reconnection. Our first data try to define an operating regime free from kink instabilities that might otherwise confuse the data and shot repeatability. We compare this with two-fluid MHD NIMROD simulations of the single current channel kink stability boundary for a variety of experimental conditions.

  1. Offset linear scaling for H-mode confinement

    International Nuclear Information System (INIS)

    Miura, Yukitoshi; Tamai, Hiroshi; Suzuki, Norio; Mori, Masahiro; Matsuda, Toshiaki; Maeda, Hikosuke; Takizuka, Tomonori; Itoh, Sanae; Itoh, Kimitaka.

    1992-01-01

    An offset linear scaling for the H-mode confinement time is examined based on single parameter scans on the JFT-2M experiment. Regression study is done for various devices with open divertor configuration such as JET, DIII-D, JFT-2M. The scaling law of the thermal energy is given in MKSA units as W_th = 0.0046 R^1.9 I_P^1.1 B_T^0.91 √A + 2.9×10^-8 I_P^1.0 R^0.87 √A P, where R is the major radius, I_P is the plasma current, B_T is the toroidal magnetic field, A is the average mass number of plasma and neutral beam particles, and P is the heating power. This fitting has a similar root mean square error (RMSE) compared to the power law scaling. The result is also compared with the H-mode in other configurations. The W_th of closed divertor H-mode on ASDEX shows slightly better values than that of open divertor H-mode. (author)

  2. Linear-scaling quantum mechanical methods for excited states.

    Science.gov (United States)

    Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua

    2012-05-21

    The poor scaling of many existing quantum mechanical methods with respect to the system size hinders their applications to large systems. In this tutorial review, we focus on the latest research on linear-scaling or O(N) quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states fall into two categories, the time-domain and frequency-domain methods. The former solves the dynamics of the electronic systems in real time while the latter involves direct evaluation of electronic response in the frequency-domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states. It has been implemented in time- and frequency-domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using the non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of the convergence problem. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used; however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and

  3. Recent development of linear scaling quantum theories in GAMESS

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Cheol Ho [Kyungpook National Univ., Daegu (Korea, Republic of)

    2003-06-01

    Linear scaling quantum theories are reviewed especially focusing on the method adopted in GAMESS. The three key translation equations of the fast multipole method (FMM) are deduced from the general polypolar expansions given earlier by Steinborn and Rudenberg. Simplifications are introduced for the rotation-based FMM that lead to a very compact FMM formalism. The OPS (optimum parameter searching) procedure, a stable and efficient way of obtaining the optimum set of FMM parameters, is established with complete control over the tolerable error {epsilon}. In addition, a new parallel FMM algorithm requiring virtually no inter-node communication, is suggested which is suitable for the parallel construction of Fock matrices in electronic structure calculations.

  4. Scaling laws for e+/e- linear colliders

    International Nuclear Information System (INIS)

    Delahaye, J.P.; Guignard, G.; Raubenheimer, T.; Wilson, I.

    1999-01-01

    Design studies of a future TeV e + e - Linear Collider (TLC) are presently being made by five major laboratories within the framework of a world-wide collaboration. A figure of merit is defined which enables an objective comparison of these different designs. This figure of merit is shown to depend only on a small number of parameters. General scaling laws for the main beam parameters and linac parameters are derived and prove to be very effective when used as guidelines to optimize the linear collider design. By adopting appropriate parameters for beam stability, the figure of merit becomes nearly independent of accelerating gradient and RF frequency of the accelerating structures. In spite of the strong dependence of the wake fields with frequency, the single-bunch emittance blow-up during acceleration along the linac is also shown to be independent of the RF frequency when using equivalent trajectory correction schemes. In this situation, beam acceleration using high-frequency structures becomes very advantageous because it enables high accelerating fields to be obtained, which reduces the overall length and consequently the total cost of the linac. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  5. Small-scale quantum information processing with linear optics

    International Nuclear Information System (INIS)

    Bergou, J.A.; Steinberg, A.M.; Mohseni, M.

    2005-01-01

    Photons are the ideal systems for carrying quantum information. Although performing large-scale quantum computation on optical systems is extremely demanding, non-scalable linear-optics quantum information processing may prove essential as part of quantum communication networks. In addition, the efficient (scalable) linear-optical quantum computation proposal relies on the same optical elements. Here, by constructing multirail optical networks, we experimentally study two central problems in quantum information science, namely optimal discrimination between nonorthogonal quantum states, and controlling decoherence in quantum systems. Quantum mechanics forbids deterministic discrimination between nonorthogonal states. This is one of the central features of quantum cryptography, which leads to secure communications. Quantum state discrimination is an important primitive in quantum information processing, since it determines the limitations of a potential eavesdropper, and it has applications in quantum cloning and entanglement concentration. In this work, we experimentally implement generalized measurements in an optical system and demonstrate the first optimal unambiguous discrimination between three non-orthogonal states with a success rate of 55 %, to be compared with the 25 % maximum achievable using projective measurements. Furthermore, we present the first realization of unambiguous discrimination between a pure state and a nonorthogonal mixed state. In a separate experiment, we demonstrate how decoherence-free subspaces (DFSs) may be incorporated into a prototype optical quantum algorithm. Specifically, we present an optical realization of the two-qubit Deutsch-Jozsa algorithm in the presence of random noise. By introducing localized turbulent airflow we produce collective optical dephasing, leading to large error rates, and demonstrate that using DFS encoding, the error rate in the presence of decoherence can be reduced from 35 % to essentially its pre

  6. Grey scale, the 'crispening effect', and perceptual linearization

    NARCIS (Netherlands)

    Belaïd, N.; Martens, J.B.

    1998-01-01

    One way of optimizing a display is to maximize the number of distinguishable grey levels, which in turn is equivalent to perceptually linearizing the display. Perceptual linearization implies that equal steps in grey value evoke equal steps in brightness sensation. The key to perceptual

  7. Non-linear elastic thermal stress analysis with phase changes

    International Nuclear Information System (INIS)

    Amada, S.; Yang, W.H.

    1978-01-01

    The non-linear elastic, thermal stress analysis with temperature induced phase changes in the materials is presented. An infinite plate (or body) with a circular hole (or tunnel) is subjected to a thermal loading on its inner surface. The peak temperature around the hole reaches beyond the melting point of the material. The non-linear diffusion equation is solved numerically using the finite difference method. The material properties change rapidly at temperatures where the change of crystal structures and solid-liquid transition occur. The elastic stresses induced by the transient non-homogeneous temperature distribution are calculated. The stresses change remarkably when the phase changes occur and there are residual stresses remaining in the plate after one cycle of thermal loading. (Auth.)

  8. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  9. Cosmological large-scale structures beyond linear theory in modified gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bernardeau, Francis; Brax, Philippe, E-mail: francis.bernardeau@cea.fr, E-mail: philippe.brax@cea.fr [CEA, Institut de Physique Théorique, 91191 Gif-sur-Yvette Cédex (France)

    2011-06-01

    We consider the effect of modified gravity on the growth of large-scale structures at second order in perturbation theory. We show that modified gravity models changing the linear growth rate of fluctuations are also bound to change, although mildly, the mode coupling amplitude in the density and reduced velocity fields. We present explicit formulae which describe this effect. We then focus on models of modified gravity involving a scalar field coupled to matter, in particular chameleons and dilatons, where it is shown that there exists a transition scale around which the existence of an extra scalar degree of freedom induces significant changes in the coupling properties of the cosmic fields. We obtain the amplitude of this effect for realistic dilaton models at the tree-order level for the bispectrum, finding them to be comparable in amplitude to those obtained in the DGP and f(R) models.

  10. Scaling linear colliders to 5 TeV and above

    International Nuclear Information System (INIS)

    Wilson, P.B.

    1997-04-01

    Detailed designs exist at present for linear colliders in the 0.5-1.0 TeV center-of-mass energy range. For linear colliders driven by discrete rf sources (klystrons), the rf operating frequencies range from 1.3 GHz to 14 GHz, and the unloaded accelerating gradients from 21 MV/m to 100 MV/m. Except for the collider design at 1.3 GHz (TESLA) which uses superconducting accelerating structures, the accelerating gradients vary roughly linearly with the rf frequency. This correlation between gradient and frequency follows from the necessity to keep the ac "wall plug" power within reasonable bounds. For linear colliders at energies of 5 TeV and above, even higher accelerating gradients and rf operating frequencies will be required if both the total machine length and ac power are to be kept within reasonable limits. An rf system for a 5 TeV collider operating at 34 GHz is outlined, and it is shown that there are reasonable candidates for microwave tube sources which, together with rf pulse compression, are capable of supplying the required rf power. Some possibilities for a 15 TeV collider at 91 GHz are briefly discussed.

  11. An {Mathematical expression} iteration bound primal-dual cone affine scaling algorithm for linear programming

    NARCIS (Netherlands)

    J.F. Sturm; J. Zhang (Shuzhong)

    1996-01-01

    In this paper we introduce a primal-dual affine scaling method. The method uses a search-direction obtained by minimizing the duality gap over a linearly transformed conic section. This direction neither coincides with known primal-dual affine scaling directions (Jansen et al., 1993;

  12. On the non-linear scale of cosmological perturbation theory

    CERN Document Server

    Blas, Diego; Konstandin, Thomas

    2013-01-01

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  13. On the non-linear scale of cosmological perturbation theory

    International Nuclear Information System (INIS)

    Blas, Diego; Garny, Mathias; Konstandin, Thomas

    2013-04-01

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  14. On the non-linear scale of cosmological perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Blas, Diego [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Garny, Mathias; Konstandin, Thomas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-04-15

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  15. On Numerical Stability in Large Scale Linear Algebraic Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Liesen, J.

    2005-01-01

    Roč. 85, č. 5 (2005), s. 307-325 ISSN 0044-2267 R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : linear algebraic systems * eigenvalue problems * convergence * numerical stability * backward error * accuracy * Lanczos method * conjugate gradient method * GMRES method Subject RIV: BA - General Mathematics Impact factor: 0.351, year: 2005

  16. Scale-dependent three-dimensional charged black holes in linear and non-linear electrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Rincon, Angel; Koch, Benjamin [Pontificia Universidad Catolica de Chile, Instituto de Fisica, Santiago (Chile); Contreras, Ernesto; Bargueno, Pedro; Hernandez-Arboleda, Alejandro [Universidad de los Andes, Departamento de Fisica, Bogota, Distrito Capital (Colombia); Panotopoulos, Grigorios [Universidade de Lisboa, CENTRA, Instituto Superior Tecnico, Lisboa (Portugal)

    2017-07-15

    In the present work we study the scale dependence at the level of the effective action of charged black holes in Einstein-Maxwell as well as in Einstein-power-Maxwell theories in (2 + 1)-dimensional spacetimes without a cosmological constant. We allow for scale dependence of the gravitational and electromagnetic couplings, and we solve the corresponding generalized field equations imposing the null energy condition. Certain properties, such as horizon structure and thermodynamics, are discussed in detail. (orig.)

  17. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  18. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

    At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high frequency weather to low frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; so that in GCM macroweather forecasts, the weather is a high frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two-parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts and these show some skill even at decadal scales. We also compare
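
    As a minimal illustration of the kind of memory exploitation described above (and not of SLIM itself), the sketch below computes the minimum mean-square-error one-step linear predictor for fractional Gaussian noise directly from its autocovariance; the record length n and the Hurst exponent H are arbitrary illustrative values.

```python
# Minimal sketch (not SLIM): one-step minimum mean-square-error linear predictor
# for fractional Gaussian noise (fGn), built from its autocovariance function.
import numpy as np
from scipy.linalg import toeplitz

def fgn_autocovariance(k, H, sigma2=1.0):
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * sigma2 * (np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H) + (k + 1) ** (2 * H))

def fgn_predictor_weights(n, H):
    # Solve Gamma a = c, where Gamma_ij = gamma(|i - j|) and c_i = gamma(n - i),
    # so that X_hat(n+1) = sum_i a_i X(i+1), i = 0..n-1, minimizes the MSE.
    gamma = fgn_autocovariance(np.arange(n + 1), H)
    Gamma = toeplitz(gamma[:n])
    c = gamma[1:n + 1][::-1]      # gamma(n), ..., gamma(1)
    return np.linalg.solve(Gamma, c)

n, H = 120, 0.9                   # illustrative long-memory case only
a = fgn_predictor_weights(n, H)
print("sum of predictor weights:", a.sum())
print("weight on the most recent value:", a[-1])
```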

  19. Non-linear variability in geophysics scaling and fractals

    CERN Document Server

    Lovejoy, S

    1991-01-01

    consequences of broken symmetry (here parity) is studied. In this model, turbulence is dominated by a hierarchy of helical (corkscrew) structures. The authors stress the unique features of such pseudo-scalar cascades as well as the extreme nature of the resulting (intermittent) fluctuations. Intermittent turbulent cascades were also the theme of a paper by us in which we show that universality classes exist for continuous cascades (in which an infinite number of cascade steps occur over a finite range of scales). This result is the multiplicative analogue of the familiar central limit theorem for the addition of random variables. Finally, an interesting paper by Pasmanter investigates the scaling associated with anomalous diffusion in a chaotic tidal basin model involving a small number of degrees of freedom. Although the statistical literature is replete with techniques for dealing with those random processes characterized by both exponentially decaying (non-scaling) autocorrelations and exponentially decaying...

  20. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  1. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
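
    One simple way to realise such a non-linear scaling, sketched below with synthetic placeholder data rather than the authors' pipeline, is to fit a smooth non-linear mapping (here a thin-plate-spline radial basis function) from reference bone-surface landmarks to the corresponding subject landmarks reconstructed from a statistical shape model, and then push the reference muscle attachment and via points through that mapping.

```python
# Minimal sketch (not the authors' pipeline): non-linearly map muscle points from
# a reference bone to a subject-specific bone via an RBF transformation fitted on
# corresponding surface landmarks. All landmark data below are synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Corresponding landmarks on the reference surface and on the subject-specific
# surface (e.g. reconstructed from a statistical shape model) -- placeholders.
ref_landmarks = rng.uniform(-1.0, 1.0, size=(200, 3))
subject_landmarks = 1.1 * ref_landmarks + 0.05 * np.sin(3.0 * ref_landmarks)

# Fit a smooth R^3 -> R^3 mapping from the reference to the subject geometry.
warp = RBFInterpolator(ref_landmarks, subject_landmarks,
                       kernel="thin_plate_spline", smoothing=1e-6)

# Muscle attachment / via points digitised once on the reference model.
ref_muscle_points = rng.uniform(-1.0, 1.0, size=(25, 3))

# Subject-specific muscle points obtained by applying the non-linear warp.
subject_muscle_points = warp(ref_muscle_points)
print(subject_muscle_points.shape)   # (25, 3)
```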

  2. Universal Linear Scaling of Permeability and Time for Heterogeneous Fracture Dissolution

    Science.gov (United States)

    Wang, L.; Cardenas, M. B.

    2017-12-01

    Fractures are dynamically changing over geological time scales due to mechanical deformation and chemical reactions. However, the latter mechanism remains poorly understood with respect to the expanding fracture, which leads to positively coupled flow and reactive transport processes, i.e., as a fracture expands, so does its permeability (k) and thus flow and reactive transport processes. To unravel this coupling, we consider a self-enhancing process that leads to fracture expansion caused by acidic fluid, i.e., CO2-saturated brine dissolving a calcite fracture. We rigorously derive a theory, for the first time, showing that fracture permeability increases linearly with time [Wang and Cardenas, 2017]. To validate this theory, we resort to direct simulation that solves the Navier-Stokes and Advection-Diffusion equations with a moving mesh according to the dynamic dissolution process in two-dimensional (2D) fractures. We find that k slowly increases at first until the dissolution front breaks through the outflow boundary, when we observe a rapid increase in k, i.e., the linear time dependence of k occurs. The theory agrees well with numerical observations across a broad range of Peclet and Damkohler numbers through homogeneous and heterogeneous 2D fractures. Moreover, the theoretical linear scaling relationship between k and time matches well with experimental observations of three-dimensional (3D) fractures' dissolution. To further attest to our theory's universality for 3D heterogeneous fractures across a broad range of roughness and correlation length of aperture field, we develop a depth-averaged model that simulates the process-based reactive transport. The simulation results show that, regardless of a wide variety of dissolution patterns such as the presence of dissolution fingers and preferential dissolution paths, the linear scaling relationship between k and time holds. Our theory sheds light on predicting permeability evolution in many geological settings when the self

  3. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    ), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  4. A Dynamic Linear Modeling Approach to Public Policy Change

    DEFF Research Database (Denmark)

    Loftis, Matthew; Mortensen, Peter Bjerre

    2017-01-01

    Theories of public policy change, despite their differences, converge on one point of strong agreement. The relationship between policy and its causes can and does change over time. This consensus yields numerous empirical implications, but our standard analytical tools are inadequate for testing them. As a result, the dynamic and transformative relationships predicted by policy theories have been left largely unexplored in time-series analysis of public policy. This paper introduces dynamic linear modeling (DLM) as a useful statistical tool for exploring time-varying relationships in public policy. The paper offers a detailed exposition of the DLM approach and illustrates its usefulness with a time series analysis of U.S. defense policy from 1957-2010. The results point the way for a new attention to dynamics in the policy process and the paper concludes with a discussion of how...

  5. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
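
    A minimal numerical illustration of the kernel trick described above, using generic kernel PCA on synthetic data rather than the IDL/ENVI implementation: every quantity is computed from the centred Gram matrix, so the nonlinear feature mapping never has to be formed explicitly.

```python
# Kernel PCA via the kernel trick: all computations go through the centred Gram
# matrix of a kernel function; the nonlinear mapping itself is never constructed.
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca_scores(X, n_components=2, gamma=0.5):
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # Centre the kernel matrix in feature space.
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition of the centred Gram matrix (Q-mode analysis).
    w, v = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    w, v = w[idx], v[:, idx]
    # Projections of the training data onto the implicit feature-space
    # principal axes reduce to v * sqrt(w).
    return v * np.sqrt(np.clip(w, 0.0, None))

X = np.random.default_rng(2).standard_normal((300, 5))   # synthetic data
scores = kernel_pca_scores(X, n_components=3)
print(scores.shape)   # (300, 3)
```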

  6. Mathematical models of non-linear phenomena, processes and systems: from molecular scale to planetary atmosphere

    CERN Document Server

    2013-01-01

    This book consists of twenty-seven chapters, which can be divided into three large categories: articles with the focus on the mathematical treatment of non-linear problems, including the methodologies, algorithms and properties of analytical and numerical solutions to particular non-linear problems; theoretical and computational studies dedicated to the physics and chemistry of non-linear micro- and nano-scale systems, including molecular clusters, nano-particles and nano-composites; and papers focused on non-linear processes in medico-biological systems, including mathematical models of ferments, amino acids, blood fluids and polynucleic chains.

  7. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  8. Successful adaptation to climate change across scales

    International Nuclear Information System (INIS)

    Adger, W.N.; Arnell, N.W.; University of Southampton; Tompkins, E.L.; University of East Anglia, Norwich; University of Southampton

    2005-01-01

    Climate change impacts and responses are presently observed in physical and ecological systems. Adaptation to these impacts is increasingly being observed in both physical and ecological systems as well as in human adjustments to resource availability and risk at different spatial and societal scales. We review the nature of adaptation and the implications of different spatial scales for these processes. We outline a set of normative evaluative criteria for judging the success of adaptations at different scales. We argue that elements of effectiveness, efficiency, equity and legitimacy are important in judging success in terms of the sustainability of development pathways into an uncertain future. We further argue that each of these elements of decision-making is implicit within presently formulated scenarios of socio-economic futures of both emission trajectories and adaptation, though with different weighting. The process by which adaptations are to be judged at different scales will involve new and challenging institutional processes. (author)

  9. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.
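
    The sketch below shows, on synthetic data rather than the monitoring data analysed in the paper, the general form such an abundance model can take: a Poisson generalized linear model of site counts on a broad land-cover covariate, a linear-feature covariate (hedgerow length), and their interaction, so that the effect of linear features can vary with landscape context. The package, variable names and coefficients are all illustrative assumptions.

```python
# Illustrative Poisson GLM of site-level counts on land cover, hedgerow length
# and their interaction. Data are synthetic, not the paper's monitoring data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_sites = 500
arable_area = rng.uniform(0.0, 1.0, n_sites)     # proportion of arable land
hedgerow_km = rng.gamma(2.0, 1.5, n_sites)       # km of hedgerow per site

# Synthetic "truth": linear features matter more in agricultural landscapes.
log_mu = 0.5 - 0.8 * arable_area + 0.10 * hedgerow_km + 0.15 * arable_area * hedgerow_km
counts = rng.poisson(np.exp(log_mu))

X = sm.add_constant(np.column_stack([arable_area, hedgerow_km,
                                     arable_area * hedgerow_km]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.summary())
```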

  10. A Sawmill Manager Adapts To Change With Linear Programming

    Science.gov (United States)

    George F. Dutrow; James E. Granskog

    1973-01-01

    Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent increase in volume processed.

  11. A simplified density matrix minimization for linear scaling self-consistent field theory

    International Nuclear Information System (INIS)

    Challacombe, M.

    1999-01-01

    A simplified version of the Li, Nunes and Vanderbilt [Phys. Rev. B 47, 10891 (1993)] and Daw [Phys. Rev. B 47, 10895 (1993)] density matrix minimization is introduced that requires four fewer matrix multiplies per minimization step relative to previous formulations. The simplified method also exhibits superior convergence properties, such that the bulk of the work may be shifted to the quadratically convergent McWeeny purification, which brings the density matrix to idempotency. Both orthogonal and nonorthogonal versions are derived. The AINV algorithm of Benzi, Meyer, and Tuma [SIAM J. Sci. Comp. 17, 1135 (1996)] is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations. These methods have been developed with an atom-blocked sparse matrix algebra that achieves sustained floating-point rates as high as 50% of theoretical peak, and implemented in the MondoSCF suite of linear scaling SCF programs. For the first time, linear scaling Hartree-Fock theory is demonstrated with three-dimensional systems, including water clusters and estane polymers. The nonorthogonal minimization is shown to be uncompetitive with minimization in an orthonormal representation. An early onset of linear scaling is found for both minimal and double zeta basis sets, and crossovers with a highly optimized eigensolver are achieved. Calculations with up to 6000 basis functions are reported. The scaling of errors with system size is investigated for various levels of approximation. copyright 1999 American Institute of Physics
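
    The McWeeny purification mentioned above is compact enough to sketch. The following NumPy snippet is only an illustration (not the MondoSCF implementation; the overlap handling, tolerance and example matrix are assumptions): it iterates the standard update P ← 3PSP − 2PSPSP, which converges quadratically to an idempotent density matrix.

```python
import numpy as np

def mcweeny_purify(P, S=None, tol=1e-10, max_iter=100):
    """Drive a near-idempotent density matrix to idempotency.

    Orthonormal basis: P <- 3 P^2 - 2 P^3.
    Nonorthogonal basis with overlap S: P <- 3 PSP - 2 PSPSP.
    """
    S = np.eye(P.shape[0]) if S is None else S
    for _ in range(max_iter):
        PS = P @ S
        P_next = 3.0 * PS @ P - 2.0 * PS @ PS @ P
        if np.linalg.norm(P_next - P) < tol:
            return P_next
        P = P_next
    return P

# Example: a trial matrix with eigenvalues near 0 and 1 is cleaned up.
rng = np.random.default_rng(0)
C = np.linalg.qr(rng.standard_normal((6, 6)))[0]
P0 = C @ np.diag([1.02, 0.97, 1.01, 0.03, -0.02, 0.01]) @ C.T
P = mcweeny_purify(P0)
print(np.allclose(P @ P, P))  # True: idempotent to numerical precision
```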

  12. Thresholds, switches and hysteresis in hydrology from the pedon to the catchment scale: a non-linear systems theory

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Hysteresis is a rate-independent non-linearity that is expressed through thresholds, switches, and branches. Exceedance of a threshold, or the occurrence of a turning point in the input, switches the output onto a particular output branch. Rate-independent branching on a very large set of switches with non-local memory is the central concept in the new definition of hysteresis. Hysteretic loops are a special case. A self-consistent mathematical description of hydrological systems with hysteresis demands a new non-linear systems theory of adequate generality. The goal of this paper is to establish this and to show how this may be done. Two results are presented: a conceptual model for the hysteretic soil-moisture characteristic at the pedon scale and a hysteretic linear reservoir at the catchment scale. Both are based on the Preisach model. A result of particular significance is the demonstration that the independent domain model of the soil moisture characteristic due to Childs, Poulavassilis, Mualem and others is equivalent to the Preisach hysteresis model of non-linear systems theory, a result reminiscent of the reduction of the theory of the unit hydrograph to linear systems theory in the 1950s. A significant reduction in the number of model parameters is also achieved. The new theory implies a change in modelling paradigm.
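
    As a rough illustration of the Preisach construction invoked above (a weighted superposition of two-threshold relay hysterons with non-local memory), the sketch below is generic and not the paper's calibrated soil-moisture or reservoir model; the thresholds, weights and input sequence are placeholders.

```python
import numpy as np

def preisach_response(inputs, alphas, betas, weights, state=None):
    """Weighted sum of relay hysterons with thresholds beta <= alpha.

    A relay switches to +1 when the input rises above alpha, to -1 when it
    falls below beta, and otherwise remembers its previous state.
    """
    alphas, betas, weights = map(np.asarray, (alphas, betas, weights))
    state = -np.ones_like(weights) if state is None else state.copy()
    out = []
    for u in inputs:
        state = np.where(u >= alphas, 1.0, np.where(u <= betas, -1.0, state))
        out.append(float(weights @ state))
    return np.array(out), state

# A rising-falling-rising input traces different branches (a hysteresis loop).
u = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0.3, 30),
                    np.linspace(0.3, 1, 30)])
alphas = np.linspace(0.1, 0.9, 9)
betas = alphas - 0.1
output, _ = preisach_response(u, alphas, betas, np.full(9, 1.0 / 9))
```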

  13. Hardy inequality on time scales and its application to half-linear dynamic equations

    Directory of Open Access Journals (Sweden)

    Řehák Pavel

    2005-01-01

    A time-scale version of the Hardy inequality is presented, which unifies and extends well-known Hardy inequalities in the continuous and in the discrete setting. An application in the oscillation theory of half-linear dynamic equations is given.
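
    For orientation, the two classical inequalities that the time-scale version unifies are the continuous and discrete Hardy inequalities, stated here in their standard textbook form (not in the paper's delta-integral notation):

```latex
\int_0^{\infty} \Bigl( \frac{1}{x} \int_0^{x} f(t)\,\mathrm{d}t \Bigr)^{p} \mathrm{d}x
  \;\le\; \Bigl( \frac{p}{p-1} \Bigr)^{p} \int_0^{\infty} f(x)^{p}\,\mathrm{d}x ,
\qquad
\sum_{n=1}^{\infty} \Bigl( \frac{1}{n} \sum_{k=1}^{n} a_k \Bigr)^{p}
  \;\le\; \Bigl( \frac{p}{p-1} \Bigr)^{p} \sum_{n=1}^{\infty} a_n^{p},
```

    valid for p > 1 with f ≥ 0 and a_k ≥ 0, the constant (p/(p−1))^p being best possible.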

  14. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Directory of Open Access Journals (Sweden)

    Dongxu Ren

    2016-04-01

    A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect, which periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, so that the accuracy of a linear scale can be improved by using the average pitch obtained with different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, the static positioning error, and the lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the number of repeated exposures of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any 10 locations over 1 m, and the whole-length accuracy of the linear scale is less than 1 µm/m.
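
    A toy numerical sketch of the averaging idea (not the paper's optical model; the pitch, error amplitude, pitch count and exposure number below are arbitrary) shows how superposing exposures taken at different mask offsets suppresses a periodic pitch error:

```python
import numpy as np

nominal_pitch = 2.0     # um, nominal grating pitch (illustrative)
error_amp = 0.05        # um, amplitude of a sine-wave pitch error on the mask
n_pitches = 1000        # pitches on the mask
n_exposures = 25        # repeated exposures at different step offsets

rng = np.random.default_rng(42)
mask_pitch = nominal_pitch + error_amp * np.sin(
    2 * np.pi * np.arange(n_pitches) / n_pitches)

# Each exposure samples the same periodic error at a different offset;
# the printed pitch at a given position is the mean over all exposures.
offsets = rng.integers(0, n_pitches, size=n_exposures)
printed_pitch = np.mean([np.roll(mask_pitch, k) for k in offsets], axis=0)

print("peak mask error   :", np.abs(mask_pitch - nominal_pitch).max())
print("peak printed error:", np.abs(printed_pitch - nominal_pitch).max())
```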

  15. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  16. Linear arrangement of nano-scale magnetic particles formed in Cu-Fe-Ni alloys

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sung, E-mail: k3201s@hotmail.co [Department of Materials Engineering (SEISAN), Yokohama National University, 79-5 Tokiwadai, Hodogayaku, Yokohama, 240-8501 (Japan); Takeda, Mahoto [Department of Materials Engineering (SEISAN), Yokohama National University, 79-5 Tokiwadai, Hodogayaku, Yokohama, 240-8501 (Japan); Takeguchi, Masaki [Advanced Electron Microscopy Group, National Institute for Materials Science (NIMS), Sakura 3-13, Tsukuba, 305-0047 (Japan); Bae, Dong-Sik [School of Nano and Advanced Materials Engineering, Changwon National University, Gyeongnam, 641-773 (Korea, Republic of)

    2010-04-30

    The structural evolution of nano-scale magnetic particles formed in Cu-Fe-Ni alloys on isothermal annealing at 878 K has been investigated by means of transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDS), electron energy-loss spectroscopy (EELS) and field-emission scanning electron microscopy (FE-SEM). Phase decomposition of Cu-Fe-Ni occurred after an as-quenched specimen received a short anneal, and nano-scale magnetic particles formed randomly in the Cu-rich matrix. A striking feature was that two or more nano-scale particles with a cubic shape were aligned linearly along <1,0,0> directions, and this trend was more pronounced at later stages of the precipitation. Large numbers of <1,0,0> linear chains of precipitates extended in three dimensions in late stages of annealing.

  17. Linear-scaling evaluation of the local energy in quantum Monte Carlo

    International Nuclear Information System (INIS)

    Austin, Brian; Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Lester, William A. Jr.

    2006-01-01

    For atomic and molecular quantum Monte Carlo calculations, most of the computational effort is spent in the evaluation of the local energy. We describe a scheme for reducing the computational cost of the evaluation of the Slater determinants and correlation function for the correlated molecular orbital (CMO) ansatz. A sparse representation of the Slater determinants makes possible efficient evaluation of molecular orbitals. A modification to the scaled distance function facilitates a linear scaling implementation of the Schmidt-Moskowitz-Boys-Handy (SMBH) correlation function that preserves the efficient matrix multiplication structure of the SMBH function. For the evaluation of the local energy, these two methods lead to asymptotic linear scaling with respect to the molecule size

  18. Energy harvesting with stacked dielectric elastomer transducers: Nonlinear theory, optimization, and linearized scaling law

    Science.gov (United States)

    Tutcuoglu, A.; Majidi, C.

    2014-12-01

    Using principles of damped harmonic oscillation with continuous media, we examine electrostatic energy harvesting with a "soft-matter" array of dielectric elastomer (DE) transducers. The array is composed of infinitely thin and deformable electrodes separated by layers of insulating elastomer. During vibration, it deforms longitudinally, resulting in a change in the capacitance and electrical enthalpy of the charged electrodes. Depending on the phase of electrostatic loading, the DE array can function as either an actuator that amplifies small vibrations or a generator that converts these external excitations into electrical power. Both cases are addressed with a comprehensive theory that accounts for the influence of viscoelasticity, dielectric breakdown, and electromechanical coupling induced by Maxwell stress. In the case of a linearized Kelvin-Voigt model of the dielectric, we obtain a closed-form estimate for the electrical power output and a scaling law for DE generator design. For the complete nonlinear model, we obtain the optimal electrostatic voltage input for maximum electrical power output.

  19. Three-point phase correlations: A new measure of non-linear large-scale structure

    CERN Document Server

    Wolstenhulme, Richard; Obreschkow, Danail

    2015-01-01

    We derive an analytical expression for a novel large-scale structure observable: the line correlation function. The line correlation function, which is constructed from the three-point correlation function of the phase of the density field, is a robust statistical measure allowing the extraction of information in the non-linear and non-Gaussian regime. We show that, in perturbation theory, the line correlation is sensitive to the coupling kernel F_2, which governs the non-linear gravitational evolution of the density field. We compare our analytical expression with results from numerical simulations and find a very good agreement for separations r>20 Mpc/h. Fitting formulae for the power spectrum and the non-linear coupling kernel at small scales allow us to extend our prediction into the strongly non-linear regime. We discuss the advantages of the line correlation relative to standard statistical measures like the bispectrum. Unlike the latter, the line correlation is independent of the linear bias. Furtherm...

  20. On the interaction of small-scale linear waves with nonlinear solitary waves

    Science.gov (United States)

    Xu, Chengzhu; Stastna, Marek

    2017-04-01

    In the study of environmental and geophysical fluid flows, linear wave theory is well developed and its application has been considered for phenomena of various length and time scales. However, due to the nonlinear nature of fluid flows, in many cases results predicted by linear theory do not agree with observations. One such case is internal wave dynamics. While small-amplitude wave motion may be approximated by linear theory, large-amplitude waves tend to be solitary-like. In some cases, when the wave is highly nonlinear, even weakly nonlinear theories fail to predict the wave properties correctly. We study the interaction of small-scale linear waves with nonlinear solitary waves using highly accurate pseudospectral simulations that begin with a fully nonlinear solitary wave and a train of small-amplitude waves initialized from linear waves. The solitary wave then interacts with the linear waves through either an overtaking collision or a head-on collision. During the collision, there is a net energy transfer from the linear wave train to the solitary wave, resulting in an increase in the kinetic energy carried by the solitary wave and a phase shift of the solitary wave with respect to a freely propagating solitary wave. At the same time the linear waves are greatly reduced in amplitude. The percentage of energy transferred depends primarily on the wavelength of the linear waves. We found that after one full collision cycle, the longest waves may retain as much as 90% of the kinetic energy they had initially, while the shortest waves lose almost all of their initial energy. We also found that a head-on collision is more efficient in destroying the linear waves than an overtaking collision. On the other hand, the initial amplitude of the linear waves has very little impact on the percentage of energy that can be transferred to the solitary wave. Because of the nonlinearity of the solitary wave, these results provide us some insight into wave-mean flow

  1. Linear Scaling Solution of the Time-Dependent Self-Consistent-Field Equations

    Directory of Open Access Journals (Sweden)

    Matt Challacombe

    2014-03-01

    A new approach to solving the Time-Dependent Self-Consistent-Field equations is developed based on the double quotient formulation of Tsiper 2001 (J. Phys. B). Dual channel, quasi-independent non-linear optimization of these quotients is found to yield convergence rates approaching those of the best case (single-channel Tamm-Dancoff approximation). This formulation is variational with respect to matrix truncation, admitting linear scaling solution of the matrix-eigenvalue problem, which is demonstrated for bulk excitons in the polyphenylene vinylene oligomer and the (4,3) carbon nanotube segment.

  2. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    International Nuclear Information System (INIS)

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
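
    The role of the scanned range in the error estimate can be made concrete with a standard least-squares sketch (a generic illustration, not the letter's exact propagation formula): the standard error of the fitted slope scales inversely with the spread of the scanned points about their mean, so points added at the ends of the range help more than points bunched in the middle. The synthetic data below are hypothetical.

```python
import numpy as np

def fitted_slope_and_error(x, y):
    """OLS slope b and its standard error SE(b) = s / sqrt(sum (x - xbar)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b, a = np.polyfit(x, y, 1)                  # slope, intercept
    resid = y - (a + b * x)
    s = np.sqrt(resid @ resid / (len(x) - 2))   # residual standard deviation
    return b, s / np.sqrt(np.sum((x - x.mean()) ** 2))

# Same number of points: spread to the ends of the range vs. bunched mid-range.
rng = np.random.default_rng(3)
x_ends = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
x_mid = np.array([0.4, 0.45, 0.5, 0.5, 0.55, 0.6])
for x in (x_ends, x_mid):
    y = 2.0 * x + rng.normal(scale=0.05, size=x.size)
    print(fitted_slope_and_error(x, y))   # the end-loaded scan has the smaller SE
```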

  3. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    Science.gov (United States)

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.

  4. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  5. Self-consistent field theory based molecular dynamics with linear system-size scaling

    Energy Technology Data Exchange (ETDEWEB)

    Richters, Dorothee [Institute of Mathematics and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 9, D-55128 Mainz (Germany); Kühne, Thomas D., E-mail: kuehne@uni-mainz.de [Institute of Physical Chemistry and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 7, D-55128 Mainz (Germany); Technical and Macromolecular Chemistry, University of Paderborn, Warburger Str. 100, D-33098 Paderborn (Germany)

    2014-04-07

    We present an improved field-theoretic approach to the grand-canonical potential suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand-canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from an incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.

  6. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DEFF Research Database (Denmark)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    2015-01-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL...

  7. ONETEP: linear-scaling density-functional theory with plane-waves

    International Nuclear Information System (INIS)

    Haynes, P D; Mostofi, A A; Skylaris, C-K; Payne, M C

    2006-01-01

    This paper provides a general overview of the methodology implemented in onetep (Order-N Electronic Total Energy Package), a parallel density-functional theory code for large-scale first-principles quantum-mechanical calculations. The distinctive features of onetep are linear-scaling in both computational effort and resources, obtained by making well-controlled approximations which enable simulations to be performed with plane-wave accuracy. Titanium dioxide clusters of increasing size designed to mimic surfaces are studied to demonstrate the accuracy and scaling of onetep.

  8. Towards TeV-scale electron-positron collisions: the Compact Linear Collider (CLIC)

    Science.gov (United States)

    Doebert, Steffen; Sicking, Eva

    2018-02-01

    The Compact Linear Collider (CLIC), a future electron-positron collider at the energy frontier, has the potential to change our understanding of the universe. Proposed to follow the Large Hadron Collider (LHC) programme at CERN, it is conceived for precision measurements as well as for searches for new phenomena.

  9. Linear trend and abrupt changes of climate indices in the arid region of northwestern China

    Science.gov (United States)

    Wang, Huaijun; Pan, Yingping; Chen, Yaning; Ye, Zhengwei

    2017-11-01

    In recent years, climate extreme events have caused increasing direct economic and social losses in the arid region of northwestern China. Based on daily temperature and precipitation data from 1960 to 2010, this paper discussed the linear trends and abrupt changes of climate indices. The general evolution was obtained by the empirical orthogonal function (EOF), the Mann-Kendall test, and the distribution-free cumulative sum chart (CUSUM) test. The results are as follows: (1) The climate showed a warming trend at annual and seasonal scales, with all temperature indices exhibiting statistically significant changes. The warm indices have increased, by 1.37 %days/decade for warm days (TX90p), 0.17 °C/decade for the warmest days (TXx) and 1.97 days/decade for the warm spell duration indicator (WSDI), respectively. The cold indices have decreased, with -1.89 %days/decade, 0.65 °C/decade and -0.66 days/decade for cold nights (TN10p), coldest nights (TNn) and the cold spell duration indicator (CSDI), respectively. The precipitation indices have also increased significantly, coupled with changes in magnitude (max 1-day precipitation amount (RX1day)), frequency (rain days (R0.1)), and duration (consecutive dry days (CDD)). (2) Abrupt changes of the annual regional precipitation indices and the minimum temperature indices were observed around 1986, and those of the maximum temperature indices were observed in 1996. (3) EOF1 indicated an overall coherent distribution for the whole study area, and its principal component (PC1) showed a significant linear trend with an abrupt change, in accordance with the regional observation results. EOF2 and EOF3 show contrasts between the southern and northern study areas, and between the eastern and western study areas, respectively, whereas no significant tendency was observed for their PCs. Hence, the climate indices have changed significantly, with linear trends and abrupt changes noted for all climate indices
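
    The Mann-Kendall test used above is simple to state; a minimal version is sketched below (it ignores the tie correction that real daily-index data would require, and the synthetic series is invented for illustration).

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction): S statistic, Z, p-value."""
    x = np.asarray(series, float)
    n = x.size
    s = sum(np.sign(x[i + 1:] - x[i]).sum() for i in range(n - 1))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)   # continuity correction
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, z, p

# Example: an upward trend plus noise gives a large positive S and a small p.
rng = np.random.default_rng(7)
annual_index = 0.05 * np.arange(51) + rng.normal(scale=0.5, size=51)
print(mann_kendall(annual_index))
```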

  10. Non-linear modelling of monthly mean vorticity time changes: an application to the western Mediterranean

    Directory of Open Access Journals (Sweden)

    M. Finizio

    Starting from a number of observables in the form of time-series of meteorological elements in various areas of the northern hemisphere, a model capable of fitting past records and predicting monthly vorticity time changes in the western Mediterranean is implemented. A new powerful statistical methodology (MARS) is introduced in order to capture the non-linear dynamics of time-series representing the available 40-year history of the hemispheric circulation. The developed model is tested on a suitable independent data set. An ensemble forecast exercise is also carried out to check model stability in reference to the uncertainty of input quantities.

    Key words. Meteorology and atmospheric dynamics · General circulation ocean-atmosphere interactions · Synoptic-scale meteorology

  11. Linear and Nonlinear Optical Properties of Micrometer-Scale Gold Nanoplates

    International Nuclear Information System (INIS)

    Liu Xiao-Lan; Peng Xiao-Niu; Yang Zhong-Jian; Li Min; Zhou Li

    2011-01-01

    Micrometer-scale gold nanoplates have been synthesized in high yield through a polyol process. The morphology, crystal structure and linear optical extinction of the gold nanoplates have been characterized. These gold nanoplates are single-crystalline with triangular, truncated triangular and hexagonal shapes, exhibiting strong surface plasmon resonance (SPR) extinction in the visible and near-infrared (NIR) region. The linear optical properties of gold nanoplates are also investigated by theoretical calculations. We further investigate the nonlinear optical properties of the gold nanoplates in solution by the Z-scan technique. The nonlinear absorption (NLA) coefficient and nonlinear refraction (NLR) index are measured to be 1.18×10² cm/GW and −1.04×10⁻³ cm²/GW, respectively. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  12. Linear-scaling implementation of the direct random-phase approximation

    International Nuclear Information System (INIS)

    Kállay, Mihály

    2015-01-01

    We report the linear-scaling implementation of the direct random-phase approximation (dRPA) for closed-shell molecular systems. As a bonus, linear-scaling algorithms are also presented for the second-order screened exchange extension of dRPA as well as for the second-order Møller–Plesset (MP2) method and its spin-scaled variants. Our approach is based on an incremental scheme which is an extension of our previous local correlation method [Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The approach extensively uses local natural orbitals to reduce the size of the molecular orbital basis of local correlation domains. In addition, we also demonstrate that using natural auxiliary functions [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], the size of the auxiliary basis of the domains and thus that of the three-center Coulomb integral lists can be reduced by an order of magnitude, which results in significant savings in computation time. The new approach is validated by extensive test calculations for energies and energy differences. Our benchmark calculations also demonstrate that the new method enables dRPA calculations for molecules with more than 1000 atoms and 10 000 basis functions on a single processor

  13. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    Science.gov (United States)

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
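
    The precontraction step described above can be illustrated schematically: contracting one atomic-orbital index of the three-center tensor with a low-rank factor of the ground-state density shrinks that index from the number of AOs to the (much smaller) rank. The dimensions and random data below are placeholders, not the published algorithm.

```python
import numpy as np

n_aux, n_ao, n_occ = 200, 80, 12   # placeholder dimensions, n_occ << n_ao
rng = np.random.default_rng(0)

B = rng.standard_normal((n_aux, n_ao, n_ao))   # (P|mu nu) 3-center integrals (toy data)
L = rng.standard_normal((n_ao, n_occ))         # low-rank factor of the density, D ~ L L^T

# Contract one AO index with L once, up front: (P|mu i) replaces (P|mu nu).
B_half = np.einsum('pmn,ni->pmi', B, L)

print("memory ratio:", B.nbytes / B_half.nbytes)   # ~ n_ao / n_occ
```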

  14. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    Science.gov (United States)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.

  15. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    Science.gov (United States)

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  16. A critical oscillation constant as a variable of time scales for half-linear dynamic equations

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel

    2010-01-01

    Roč. 60, č. 2 (2010), s. 237-256 ISSN 0139-9918 R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : dynamic equation * time scale * half-linear equation * (non)oscillation criteria * Hille-Nehari criteria * Kneser criteria * critical constant * oscillation constant * Hardy inequality Subject RIV: BA - General Mathematics Impact factor: 0.316, year: 2010 http://link.springer.com/article/10.2478%2Fs12175-010-0009-7

  17. Minimization of Linear Functionals Defined on Solutions of Large-Scale Discrete Ill-Posed Problems

    DEFF Research Database (Denmark)

    Elden, Lars; Hansen, Per Christian; Rojas, Marielba

    2003-01-01

    The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...

  18. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    Directory of Open Access Journals (Sweden)

    Xiaocui Wu

    2015-02-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at the half-hourly and daily scales, while the overall performance of both TL-LUEn and TL-LUE was significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17, became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
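
    The structural difference between the model families compared above can be sketched as follows; the parameter values are invented for illustration and the functions are not the calibrated MOD17 or TL-LUE formulations.

```python
def gpp_big_leaf(apar, eps_max, f_temp, f_vpd):
    """Big-leaf light-use-efficiency GPP: one efficiency for all absorbed PAR."""
    return eps_max * f_temp * f_vpd * apar

def gpp_two_leaf(apar_sun, apar_shade, eps_sun, eps_shade, f_temp, f_vpd):
    """Two-leaf GPP: separate efficiencies for sunlit and shaded foliage."""
    return f_temp * f_vpd * (eps_sun * apar_sun + eps_shade * apar_shade)

# Shaded leaves, lit mostly by diffuse light, are usually assigned the higher
# efficiency, which is what lets a two-leaf model respond to sky clearness.
print(gpp_big_leaf(10.0, 1.2, 0.9, 0.8))
print(gpp_two_leaf(6.0, 4.0, 0.8, 2.0, 0.9, 0.8))
```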

  19. Scaling versus asymptotic scaling in the non-linear σ-model in 2D. Continuum version

    International Nuclear Information System (INIS)

    Flyvbjerg, H.

    1990-01-01

    The two-point function of the O(N)-symmetric non-linear σ-model in two dimensions is large-N expanded and renormalized, neglecting terms of O(1/N²). At finite cut-off, universal, analytical expressions relate the magnetic susceptibility and the dressed mass to the bare coupling. Removing the cut-off, a similar relation gives the renormalized coupling as a function of the mass gap. In the weak-coupling limit these relations reproduce the results of renormalization group improved weak-coupling perturbation theory to two-loop order. The constant left unknown, when the renormalization group is integrated, is determined here. The approach to asymptotic scaling is studied for various values of N. (orig.)

  20. Non-linear dielectric signatures of entropy changes in liquids subject to time dependent electric fields

    Energy Technology Data Exchange (ETDEWEB)

    Richert, Ranko [School of Molecular Sciences, Arizona State University, Tempe, Arizona 85287-1604 (United States)

    2016-03-21

    A model of non-linear dielectric polarization is studied in which the field induced entropy change is the source of polarization dependent retardation time constants. Numerical solutions for the susceptibilities of the system are obtained for parameters that represent the dynamic and thermodynamic behavior of glycerol. The calculations for high amplitude sinusoidal fields show a significant enhancement of the steady state loss for frequencies below that of the low field loss peak. Also at relatively low frequencies, the third harmonic susceptibility spectrum shows a “hump,” i.e., a maximum, with an amplitude that increases with decreasing temperature. Both of these non-linear effects are consistent with experimental evidence. While such features have been used to infer a temperature-dependent number of dynamically correlated particles, N_corr, the present result demonstrates that the third harmonic susceptibility displays a peak with an amplitude that tracks the variation of the activation energy in a model that does not involve dynamical correlations or spatial scales.

  1. Linear and Non-linear Numerical Sea-keeping Evaluation of a Fast Monohull Ferry Compared to Full Scale Measurements

    DEFF Research Database (Denmark)

    Wang, Zhaohui; Folsø, Rasmus; Bondini, Francesca

    1999-01-01

    , full-scale measurements have been performed on board a 128 m monohull fast ferry. This paper deals with the results from these full-scale measurements. The primary results considered are pitch motion, midship vertical bending moment and vertical acceleration at the bow. Previous comparisons between...

  2. Linearly scaling and almost Hamiltonian dielectric continuum molecular dynamics simulations through fast multipole expansions

    Energy Technology Data Exchange (ETDEWEB)

    Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig-Maximilians-Universität München, Oettingenstr. 67, 80538 München (Germany)

    2015-11-14

    Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved—up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.

  3. On Feature Extraction from Large Scale Linear LiDAR Data

    Science.gov (United States)

    Acharjee, Partha Pratim

    Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction and for the use of these features in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, the distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation/updating and hydro-flattening of LiDAR data for many other applications are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human interventions. This reported work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features such as the flatness of the water surface and the sharp elevation change at the water-land interface, and optical properties such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated by automated and intelligent windowing, by resolving boundary issues and by integrating all results into a single output. This whole algorithm is developed as an ArcGIS toolbox using Python libraries. Testing and validation are performed on large datasets to determine the effectiveness of the toolbox and results are
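
    A minimal sketch of how two of the listed cues (surface flatness and return dropouts) could be combined into a per-cell water mask; the thresholds, array names and synthetic tile are hypothetical, and this is not the dissertation's ArcGIS toolbox.

```python
import numpy as np

def water_mask(elev_std, dropout_frac, flat_tol=0.05, dropout_tol=0.3):
    """Flag grid cells that look like water: nearly flat and return-poor.

    elev_std     : per-cell standard deviation of return elevations (m)
    dropout_frac : per-cell fraction of emitted pulses with no usable return
    """
    return (elev_std < flat_tol) & (dropout_frac > dropout_tol)

# Synthetic 4x4 tile: low relief plus many dropouts marks the "water" cells.
rng = np.random.default_rng(1)
elev_std = rng.uniform(0.0, 0.4, (4, 4))
dropout_frac = rng.uniform(0.0, 0.6, (4, 4))
print(water_mask(elev_std, dropout_frac))
```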

  4. Large-scale dynamo action due to α fluctuations in a linear shear flow

    Science.gov (United States)

    Sridhar, S.; Singh, Nishant K.

    2014-12-01

    We present a model of large-scale dynamo action in a shear flow that has stochastic, zero-mean fluctuations of the α parameter. This is based on a minimal extension of the Kraichnan-Moffatt model, to include a background linear shear and Galilean-invariant α-statistics. Using the first-order smoothing approximation we derive a linear integro-differential equation for the large-scale magnetic field, which is non-perturbative in the shearing rate S and the α-correlation time τα. The white-noise case, τα = 0, is solved exactly, and it is concluded that the necessary condition for dynamo action is identical to the Kraichnan-Moffatt model without shear; this is because white noise does not allow for memory effects, whereas shear needs time to act. To explore memory effects we reduce the integro-differential equation to a partial differential equation, valid for slowly varying fields when τα is small but non-zero. Seeking exponential modal solutions, we solve the modal dispersion relation and obtain an explicit expression for the growth rate as a function of the six independent parameters of the problem. A non-zero τα gives rise to new physical scales, and dynamo action is completely different from the white-noise case; e.g. even weak α fluctuations can give rise to a dynamo. We argue that, at any wavenumber, both Moffatt drift and shear always contribute to increasing the growth rate. Two examples are presented: (a) a Moffatt drift dynamo in the absence of shear and (b) a shear dynamo in the absence of Moffatt drift.

  5. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    Science.gov (United States)

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.
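
    The core of GWR, as contrasted with a global LM above, is an ordinary least-squares fit re-weighted around each location with a spatial kernel. The minimal sketch below assumes a Gaussian kernel with a fixed bandwidth and uses tiny invented data; it is not the study's calibrated model.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Local regression coefficients, one row per site (Gaussian kernel)."""
    coords, X, y = (np.asarray(a, float) for a in (coords, X, y))
    betas = np.empty((len(y), X.shape[1]))
    for i, c in enumerate(coords):
        d2 = np.sum((coords - c) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)        # spatial weights
        XtW = X.T * w                                  # X^T W
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)   # weighted least squares
    return betas

# Tiny synthetic example: fire counts vs. road density at five sites.
coords = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [2, 2]], float)
road_density = np.array([0.2, 0.5, 0.4, 0.9, 1.5])
fires = np.array([1.0, 2.1, 1.8, 3.9, 6.2])
X = np.column_stack([np.ones_like(road_density), road_density])
print(gwr_coefficients(coords, X, fires, bandwidth=1.0))
```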

  6. Recent advances toward a general purpose linear-scaling quantum force field.

    Science.gov (United States)

    Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M

    2014-09-16

    Conspectus There is need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limit their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to

  7. The Front-End Readout as an Encoder IC for Magneto-Resistive Linear Scale Sensors

    Directory of Open Access Journals (Sweden)

    Trong-Hieu Tran

    2016-09-01

    This study proposes a front-end readout circuit as an encoder chip for magneto-resistance (MR) linear scales. A typical MR sensor consists of two major parts: one is its base structure, also called the magnetic scale, which is embedded with multiple grid MR electrodes, while the other is an “MR reader” stage with magnets inside, moving on the rails of the base. As the stage is in motion, the magnetic interaction between the moving stage and the base causes the variation of the magneto-resistances of the grid electrodes. In this study, a front-end readout IC chip is successfully designed and realized to acquire the temporally-varying resistances as electrical signals while the stage is in motion. The acquired signals are in fact sinusoids and co-sinusoids, which are further deciphered by the front-end readout circuit via newly-designed programmable gain amplifiers (PGAs) and analog-to-digital converters (ADCs). The PGA is particularly designed to amplify the signals up to full dynamic range and up to 1 MHz. A 12-bit successive approximation register (SAR) ADC for analog-to-digital conversion is designed with linearity performance of ±1 least significant bit (LSB) over the input range of 0.5–2.5 V peak to peak. The chip was fabricated with the Taiwan Semiconductor Manufacturing Company (TSMC) 0.35-micron complementary metal oxide semiconductor (CMOS) technology for verification, with a chip size of 6.61 mm², while the power consumption is 56 mW from a 5-V power supply. The measured integral non-linearity (INL) is −0.79–0.95 LSB while the differential non-linearity (DNL) is −0.68–0.72 LSB. The effective number of bits (ENOB) of the designed ADC is validated as 10.86 for converting the input analog signal to digital counterparts. Experimental validation was conducted. A digital decoder is orchestrated to decipher the harmonic outputs from the ADC via interpolation to the position of the moving stage. It was found that the displacement
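
    The interpolation step performed by the digital decoder can be sketched generically: with digitized sine and cosine channels, the phase angle within one magnetic pitch follows from the arctangent, and unwrapping the phase counts whole pitches. The pitch value and signal names below are illustrative, not the chip's specification.

```python
import numpy as np

def decode_position(sin_ch, cos_ch, pitch_um):
    """Displacement from quadrature channels: unwrap the atan2 phase, scale by pitch."""
    phase = np.unwrap(np.arctan2(sin_ch, cos_ch))
    return (phase - phase[0]) / (2.0 * np.pi) * pitch_um

# A stage sweeping through 2.5 pitches of a 400 um pole pitch ends near 1000 um.
theta = np.linspace(0.0, 2.5 * 2.0 * np.pi, 500)
position = decode_position(np.sin(theta), np.cos(theta), 400.0)
print(position[-1])
```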

  8. A large-scale linear complementarity model of the North American natural gas market

    International Nuclear Information System (INIS)

    Gabriel, Steven A.; Jifang Zhuang; Kiet, Supat

    2005-01-01

    The North American natural gas market has seen significant changes recently due to deregulation and restructuring. For example, third party marketers can contract for transportation and purchase of gas to sell to end-users. While the intent was a more competitive market, the potential for market power exists. We analyze this market using a linear complementarity equilibrium model including producers, storage and peak gas operators, third party marketers and four end-use sectors. The marketers are depicted as Nash-Cournot players determining supply to meet end-use consumption, all other players are in perfect competition. Results based on National Petroleum Council scenarios are presented. (Author)

  9. Chapter 3: Climate change at multiple scales

    Science.gov (United States)

    Constance Millar; Ron Neilson; Dominique Bachelet; Ray Drapek; Jim Lenihan

    2006-01-01

    Concepts about the natural world influence approaches to forest management. In the popular press, climate change inevitably refers to global warming, greenhouse gas impacts, novel anthropogenic (human-induced) threats, and international politics. There is, however, a larger context that informs our understanding of changes that are occurring - that is, Earth’...

  10. Effect of cellulosic fiber scale on linear and non-linear mechanical performance of starch-based composites.

    Science.gov (United States)

    Karimi, Samaneh; Abdulkhani, Ali; Tahir, Paridah Md; Dufresne, Alain

    2016-10-01

    Cellulosic nanofibers (NFs) from kenaf bast were used to reinforce glycerol-plasticized thermoplastic starch (TPS) matrices with varying contents (0-10 wt%). The composites were prepared by the casting/evaporation method. Raw fiber (RF)-reinforced TPS films were prepared with the same contents and conditions. The aim of the study was to investigate the effects of filler dimension and loading on the linear and non-linear mechanical performance of the fabricated materials. The obtained results clearly demonstrated that the NF-reinforced composites had significantly greater mechanical performance than the RF-reinforced counterparts. This was attributed to the high aspect ratio and nano dimension of the reinforcing agents, as well as their compatibility with the TPS matrix, resulting in strong fiber/matrix interaction. Tensile strength and Young's modulus increased by 313% and 343%, respectively, with increasing NF content from 0 to 10 wt%. Dynamic mechanical analysis (DMA) revealed an upward trend in the glass transition temperature of amylopectin-rich domains in the composites. The most pronounced change was a +18.5 °C shift in the temperature position of the film reinforced with 8% NF. This finding implied efficient dispersion of nanofibers in the matrix and their ability to form a network and restrict mobility of the system. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  12. Simulations of nanocrystals under pressure: Combining electronic enthalpy and linear-scaling density-functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Corsini, Niccolò R. C., E-mail: niccolo.corsini@imperial.ac.uk; Greco, Andrea; Haynes, Peter D. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Hine, Nicholas D. M. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Cavendish Laboratory, J. J. Thompson Avenue, Cambridge CB3 0HE (United Kingdom); Molteni, Carla [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom)

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett.94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures and their persistence on depressurization is assessed.

  13. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    Science.gov (United States)

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, predicting the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes, and we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.

  14. Land use change impacts on floods at the catchment scale

    NARCIS (Netherlands)

    Rogger, M.; Agnoletti, M.; Alaoui, A.; Bathurst, J.C.; Bodner, G.; Borga, M.; Chaplot, Vincent; Gallart, F.; Glatzel, G.; Hall, J.; Holden, J.; Holko, L.; Horn, R.; Kiss, A.; Kohnová, S.; Leitinger, G.; Lennartz, B.; Parajka, J.; Perdigão, R.; Peth, S.; Plavcová, L.; Quinton, John N.; Robinson, Matthew R.; Salinas, J.L.; Santoro, A.; Szolgay, J.; Tron, S.; Akker, van den J.J.H.; Viglione, A.; Blöschl, G.

    2017-01-01

    Research gaps in understanding flood changes at the catchment scale caused by changes in forest management, agricultural practices, artificial drainage, and terracing are identified. Potential strategies in addressing these gaps are proposed, such as complex systems approaches to link processes

  15. Climate change at global and regional scale

    International Nuclear Information System (INIS)

    Dufresne, J.L.; Royer, J.F.

    2008-01-01

    In support of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), due to appear in early 2007, modelling groups world-wide have performed a huge coordinated exercise of climate change runs for the 20th and 21st centuries. In this paper we present the results of the two French climate models, from CNRM and IPSL. In particular we emphasize the progress made since the previous IPCC report and we identify which results are comparable among models and which differ strongly. (authors)

  16. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  17. Simulation of electron energy loss spectra of nanomaterials with linear-scaling density functional theory

    International Nuclear Information System (INIS)

    Tait, E W; Payne, M C; Ratcliff, L E; Haynes, P D; Hine, N D M

    2016-01-01

    Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable. (paper)

  18. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    Science.gov (United States)

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  19. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    Science.gov (United States)

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. Minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method of Burgard et al. does not allow specifying different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, as well as reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice, so it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
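
    A generic minimum-subnetwork MILP of the kind described here can be sketched (an illustrative formulation, not the authors' exact model; the flux bounds and the single growth requirement are assumptions) as

        \[ \min_{v,\,y} \ \sum_j y_j \quad \text{s.t.} \quad S v = 0, \qquad v_{\mathrm{bio}} \ge v_{\min}, \qquad \ell_j\, y_j \le v_j \le u_j\, y_j, \qquad y_j \in \{0,1\}, \]

    where S is the stoichiometric matrix, v the flux vector, and the binary variables y_j switch individual reactions on or off; enumerating all optima of such a program yields all minimum subnetworks.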

  20. Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization

    International Nuclear Information System (INIS)

    Rahman, M. A.; Basarudin, T.

    1997-01-01

    This paper discusses the Quasi-Newton (QN) method for solving non-linear unconstrained minimization problems. One important aspect of QN methods is the choice of the matrix H_k, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive for large-scale problems because of their storage and computational requirements; however, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the Hessian update while preserving the Hessian properties. One major motivation for our research is that a QN method may be good at solving certain types of minimization problems, but its efficiency degenerates when it is applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We attempt to parallelize the algorithm by exploring different search directions generated by various QN updates during the minimization process. Different line search strategies are employed simultaneously in the process of locating the minimum along each direction. The code of the algorithm is written in the Occam 2 language, which runs on a transputer machine
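
    For reference, the classical BFGS update, one of the standard quasi-Newton Hessian approximations (a textbook formula, not necessarily the specific update used in this work), reads

        \[ B_{k+1} = B_k - \frac{B_k s_k s_k^{\mathsf T} B_k}{s_k^{\mathsf T} B_k s_k} + \frac{y_k y_k^{\mathsf T}}{y_k^{\mathsf T} s_k}, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k), \]

    and it preserves positive definiteness of B_k whenever the curvature condition y_k^T s_k > 0 holds, which is the property alluded to above.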

  1. Measuring Change with the Rating Scale Model.

    Science.gov (United States)

    Ludlow, Larry H.; And Others

    The Rehabilitation Research and Development Laboratory at the United States Veterans Administration Hines Hospital is engaged in a long-term evaluation of blind rehabilitation. One aspect of the evaluation project focuses on the measurement of attitudes toward blindness. Our aim is to measure changes in attitudes toward blindness from…

  2. Design changes of device to investigation of alloys linear contraction and shrinkage stresses

    Directory of Open Access Journals (Sweden)

    J. Mutwil

    2009-07-01

    Full Text Available Some design changes in a device developed by the author for examining the linear contraction and the development of shrinkage stresses in metals and alloys during and after solidification are described. The changes introduced focus on the design of the closure of the shrinkage test rod mould. They simplify the procedure for mounting the thermocouples that measure the temperature of the shrinkage rod casting (at 6 points). Exemplary results of investigations of linear contraction and shrinkage stress development in an Al-Si13.5% alloy are presented.

  3. Orbital scale vegetation change in Africa

    Science.gov (United States)

    Dupont, Lydie

    2011-12-01

    Palynological records of Middle and Late Pleistocene marine sediments off African shores are reviewed in order to reveal long-term patterns of vegetation change during climate cycles. Whether the transport of pollen and spores from the source areas on the continent to the ocean floor is mainly by wind or predominantly by rivers depends on the region. Despite the differences in transportation, accumulation rates in the marine sediments decline exponentially with distance from the shore. The marine sediments provide well-dated records presenting the vegetation history of the main biomes of western and southern Africa. The extent of the different biomes varied with the climate changes of the glacial-interglacial cycle. The Mediterranean forest area expanded during interglacials, the northern Saharan desert during glacials, and the semi-desert area in between during the transitions. In the sub-Saharan mountains, ericaceous scrubland spread mainly during glacials, and the mountainous forest area often increased during intermediate periods. Savannahs extended or shifted to lower latitudes during glacials. While the representation of the tropical rain forest fluctuated with summer insolation and precession, that of the subtropical biomes showed more obliquity-related variability or followed the pattern of glacials and interglacials.

  4. The Study of Non-Linear Acceleration of Particles during Substorms Using Multi-Scale Simulations

    International Nuclear Information System (INIS)

    Ashour-Abdalla, Maha

    2011-01-01

    To understand particle acceleration during magnetospheric substorms we must consider the problem on multiple scales, ranging from the large-scale changes in the entire magnetosphere to the microphysics of wave-particle interactions. In this paper we present two examples that demonstrate the complexity of substorm particle acceleration and its multi-scale nature. The first substorm provided us with an excellent example of ion acceleration. On March 1, 2008, four THEMIS spacecraft were in a line extending from 8 R_E to 23 R_E in the magnetotail during a very large substorm in which ions were accelerated to >500 keV. We used a combination of global magnetohydrodynamic and large-scale kinetic simulations to model the ion acceleration and found that the ions gained energy through non-adiabatic trajectories across the substorm electric field in a narrow region extending across the magnetotail between x = -10 R_E and x = -15 R_E. In this strip, called the 'wall region', the ions move rapidly in azimuth and gain hundreds of keV. In the second example we studied the acceleration of electrons associated with a pair of dipolarization fronts during a substorm on February 15, 2008. During this substorm three THEMIS spacecraft were grouped in the near-Earth magnetotail (x ≈ -10 R_E) and observed electron acceleration of >100 keV accompanied by intense plasma waves. We used the MHD simulations and analytic theory to show that adiabatic motion (betatron and Fermi acceleration) was insufficient to account for the electron acceleration and that kinetic processes associated with the plasma waves were important.

  5. Linear correlation of interfacial tension at water-solvent interface, solubility of water in organic solvents, and SE* scale parameters

    International Nuclear Information System (INIS)

    Mezhov, E.A.; Khananashvili, N.L.; Shmidt, V.S.

    1988-01-01

    A linear correlation has been established between the solubility of water in water-immiscible organic solvents and the interfacial tension at the water-solvent interface on the one hand and the parameters of the SE* and π* scales for these solvents on the other hand. This allows us, using the known tabulated SE* or π* parameters for each solvent, to predict the values of the interfacial tension and the solubility of water for the corresponding systems. We have shown that the SE* scale allows us to predict these values more accurately than other known solvent scales, since in contrast to other scales it characterizes solvents found in equilibrium with water

  6. Canonical-ensemble extended Lagrangian Born-Oppenheimer molecular dynamics for the linear scaling density functional theory.

    Science.gov (United States)

    Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi

    2017-10-11

    We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.
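
    As a schematic illustration of the extended-Lagrangian idea (a generic sketch of this class of methods, not the precise equations of motion integrated in the paper), the auxiliary electronic degrees of freedom n are propagated harmonically about the self-consistent density \rho,

        \[ \ddot{n}(t) = \omega^2 \big( \rho(t) - n(t) \big), \]

    with a time-reversible, Verlet-like integrator alongside the nuclei, while the Nosé-Hoover chain variables thermostat the nuclear kinetic energy to the target temperature.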

  7. Inference regarding multiple structural changes in linear models with endogenous regressors

    NARCIS (Netherlands)

    Boldea, O.; Hall, A.R.; Han, S.

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters.

  8. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

    Science.gov (United States)

    Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

    2002-01-01

    Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best-fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for the growth trajectory.

  9. Non-linear regime shifts in Holocene Asian monsoon variability: potential impacts on cultural change and migratory patterns

    Science.gov (United States)

    Donges, J. F.; Donner, R. V.; Marwan, N.; Breitenbach, S. F. M.; Rehfeld, K.; Kurths, J.

    2015-05-01

    The Asian monsoon system is an important tipping element in Earth's climate with a large impact on human societies in the past and present. In light of the potentially severe impacts of present and future anthropogenic climate change on Asian hydrology, it is vital to understand the forcing mechanisms of past climatic regime shifts in the Asian monsoon domain. Here we use novel recurrence network analysis techniques for detecting episodes with pronounced non-linear changes in Holocene Asian monsoon dynamics recorded in speleothems from caves distributed throughout the major branches of the Asian monsoon system. A newly developed multi-proxy methodology explicitly considers dating uncertainties with the COPRA (COnstructing Proxy Records from Age models) approach and allows for detection of continental-scale regime shifts in the complexity of monsoon dynamics. Several epochs are characterised by non-linear regime shifts in Asian monsoon variability, including the periods around 8.5-7.9, 5.7-5.0, 4.1-3.7, and 3.0-2.4 ka BP. The timing of these regime shifts is consistent with known episodes of Holocene rapid climate change (RCC) and high-latitude Bond events. Additionally, we observe a previously rarely reported non-linear regime shift around 7.3 ka BP, a timing that matches the typical 1.0-1.5 ky return intervals of Bond events. A detailed review of previously suggested links between Holocene climatic changes in the Asian monsoon domain and the archaeological record indicates that, in addition to previously considered longer-term changes in mean monsoon intensity and other climatic parameters, regime shifts in monsoon complexity might have played an important role as drivers of migration, pronounced cultural changes, and the collapse of ancient human societies.

  10. Strength and reversibility of stereotypes for a rotary control with linear scales.

    Science.gov (United States)

    Chan, Alan H S; Chan, W H

    2008-02-01

    Using real mechanical controls, this experiment studied strength and reversibility of direction-of-motion stereotypes and response times for a rotary control with horizontal and vertical scales. Thirty-eight engineering undergraduates (34 men and 4 women) ages 23 to 47 years (M=29.8, SD=7.7) took part in the experiment voluntarily. The effects of instruction of change of pointer position and control plane on movement compatibility were analyzed with precise quantitative measures of strength and a reversibility index of stereotype. Comparisons of the strength and reversibility values of these two configurations with those of rotary control-circular display, rotary control-digital counter, four-way lever-circular display, and four-way lever-digital counter were made. The results of this study provided significant implications for the industrial design of control panels for improved human performance.

  11. Quantifying feedforward control: a linear scaling model for fingertip forces and object weight.

    Science.gov (United States)

    Lu, Ying; Bilaloglu, Seda; Aluru, Viswanath; Raghavan, Preeti

    2015-07-01

    The ability to predict the optimal fingertip forces according to object properties before the object is lifted is known as feedforward control, and it is thought to occur due to the formation of internal representations of the object's properties. The control of fingertip forces to objects of different weights has been studied extensively by using a custom-made grip device instrumented with force sensors. Feedforward control is measured by the rate of change of the vertical (load) force before the object is lifted. However, the precise relationship between the rate of change of load force and object weight and how it varies across healthy individuals in a population is not clearly understood. Using sets of 10 different weights, we have shown that there is a log-linear relationship between the fingertip load force rates and weight among neurologically intact individuals. We found that after one practice lift, as the weight increased, the peak load force rate (PLFR) increased by a fixed percentage, and this proportionality was common among the healthy subjects. However, at any given weight, the level of PLFR varied across individuals and was related to the efficiency of the muscles involved in lifting the object, in this case the wrist and finger extensor muscles. These results quantify feedforward control during grasp and lift among healthy individuals and provide new benchmarks to interpret data from neurologically impaired populations as well as a means to assess the effect of interventions on restoration of feedforward control and its relationship to muscular control. Copyright © 2015 the American Physiological Society.
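
    Written as a formula, the reported log-linear behaviour amounts roughly to (a paraphrase of the stated finding, with a and b as unspecified constants)

        \[ \ln(\mathrm{PLFR}) \approx a + b\,W , \]

    i.e. each increment in object weight W multiplies the peak load force rate by a fixed factor; the slope b is reported to be common across healthy subjects, while the intercept a varies with the efficiency of the wrist and finger extensor muscles.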

  12. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    Full Text Available This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  13. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    Science.gov (United States)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
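
    For reference, a standard local interaction rule of the kind described (a generic intermittent consensus protocol for single-integrator agents, not necessarily the exact gain structure of the paper) is

        \[ \dot{x}_i(t) = -\sum_{j \in \mathcal{N}_i} a_{ij}\,\big(x_i(t) - x_j(t)\big) \ \ \text{during communication intervals}, \qquad \dot{x}_i(t) = 0 \ \ \text{otherwise}, \]

    and the time-scale (delta-derivative) calculus treats the resulting mixture of continuous evolution and discrete updates within a single framework.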

  14. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    International Nuclear Information System (INIS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank; Valeev, Edward F.

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  15. Non-linear laws of echoic memory and auditory change detection in humans.

    Science.gov (United States)

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-07-03

    The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. The present findings suggest that temporal representation of echoic memory is non-linear and Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.
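
    In equation form, the Weber-Fechner-type behaviour described here amounts to (a schematic restatement, with k and c as unspecified constants)

        \[ A_{\text{change-N1}} \approx k \,\log(\Delta) + c , \]

    where \Delta is the magnitude of the physical difference between the standard and deviant sounds, with analogous logarithmic dependences, of opposite sign, on the standard-deviant interval and on the duration of the standard sound.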

  16. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    International Nuclear Information System (INIS)

    Gene Golub; Kwok Ko

    2009-01-01

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms, so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of previous methods, with the goal of enabling accelerator simulations on what was then the world's largest unclassified supercomputer at NERSC for this class of problems. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
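
    For context, the classical Hermitian/skew-Hermitian splitting (HSS) iteration of Bai, Golub, and Ng, on which the method named here builds (quoted as the textbook scheme, not necessarily the exact variant implemented), writes A = H + S with H = (A + A*)/2 and S = (A - A*)/2 and alternates two shifted solves,

        \[ (\alpha I + H)\,x^{k+1/2} = (\alpha I - S)\,x^{k} + b, \qquad (\alpha I + S)\,x^{k+1} = (\alpha I - H)\,x^{k+1/2} + b, \]

    which converges for any shift \alpha > 0 when H is positive definite.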

  17. Continent-scale global change attribution in European birds - combining annual and decadal time scales

    DEFF Research Database (Denmark)

    Jørgensen, Peter Søgaard; Böhning-Gaese, Katrin; Thorup, Kasper

    2016-01-01

    Species attributes are commonly used to infer impacts of environmental change on multiyear species trends, e.g. decadal changes in population size. However, by themselves attributes are of limited value in global change attribution since they do not measure the changing environment. A broader foundation for attributing species responses to global change may be achieved by complementing an attributes-based approach with one estimating the relationship between repeated measures of organismal and environmental changes over short time scales. To assess the benefit of this multiscale perspective, we ... on or in the peak of the breeding season, with the largest effect sizes observed in cooler parts of species' climatic ranges. Our results document the potential of combining time scales and integrating both species attributes and environmental variables for global change attribution. We suggest such an approach ...

  18. Impact of climate change on Taiwanese power market determined using linear complementarity model

    International Nuclear Information System (INIS)

    Tung, Ching-Pin; Tseng, Tze-Chi; Huang, An-Lei; Liu, Tzu-Ming; Hu, Ming-Che

    2013-01-01

    Highlights: (1) The impact of climate change on average temperature is estimated. (2) The temperature elasticity of demand is measured. (3) The impact of climate change on the Taiwanese power market is determined. Abstract: The increase in the greenhouse gas concentration in the atmosphere causes significant changes in climate patterns. In turn, this climate change affects the environment, ecology, and human behavior. The emission of greenhouse gases from the power industry has been analyzed in many studies. However, the impact of climate change on the electricity market has received less attention. Hence, the purpose of this research is to determine the impact of climate change on the electricity market, and a case study involving the Taiwanese power market is conducted. First, the impact of climate change on temperature is estimated. Next, because electricity demand can be expressed as a function of temperature, the temperature elasticity of demand is measured. Then, a linear complementarity model is formulated to simulate the Taiwanese power market, and climate change scenarios are discussed. This paper thus establishes a simulation framework for calculating the impact of climate change on changes in electricity demand. In addition, the impact of climate change on the Taiwanese market is examined and presented.
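
    For readers unfamiliar with the model class, a linear complementarity problem in its generic form (the textbook definition; the market-specific matrices of this study are not reproduced here) asks for a vector z such that

        \[ z \ge 0, \qquad w = M z + q \ge 0, \qquad z^{\mathsf T} w = 0 , \]

    so that each constraint is complementary to a decision variable; in a power-market formulation this typically encodes the equilibrium conditions linking prices, generation quantities, and demand.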

  19. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on obtained results Cubist algorithm may be advised for Landsat based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. Heterogeneous model ensembles performed for individual time points assessments at least as well as the best individual models. In case of imperviousness change assessment the ensembles always outperformed single model approaches. It means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
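
    As an illustration of the general idea of a heterogeneous regression ensemble (a minimal sketch using scikit-learn; the array names X_train, y_train, X_date1, X_date2 are hypothetical placeholders, and the paper's actual model set, tuning, and weighting are not reproduced):

        from sklearn.ensemble import RandomForestRegressor, StackingRegressor
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.svm import SVR
        from sklearn.linear_model import LinearRegression

        # Heterogeneous base learners from different model families,
        # combined by a simple linear meta-learner (stacking).
        base_models = [
            ("rf", RandomForestRegressor(n_estimators=500, random_state=0)),
            ("knn", KNeighborsRegressor(n_neighbors=7)),
            ("svr", SVR(kernel="rbf", C=10.0)),
        ]
        ensemble = StackingRegressor(estimators=base_models,
                                     final_estimator=LinearRegression(),
                                     cv=5)

        # X_train: per-pixel spectral predictors; y_train: reference imperviousness (0-1).
        ensemble.fit(X_train, y_train)

        # Sub-pixel imperviousness predicted for two dates; their difference is the change estimate.
        change = ensemble.predict(X_date2) - ensemble.predict(X_date1)

    The design choice mirrors the finding above: errors that are correlated between the two dates largely cancel when differencing, and combining heterogeneous learners stabilizes both the single-date and the change estimates.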

  20. Monitoring of full scale tensegrity skeletons under temperature change

    OpenAIRE

    KAWAGUCHI, Ken'ichi; OHYA, Shunji

    2009-01-01

    Strain changes in the members of full-scale tensegrity skeletons have been monitored for eight years. The one-day data of one of the tensegrity frames on the hottest and the coldest day in the record are reported and discussed.

  1. Non-linear optics of nano-scale pentacene thin film

    Science.gov (United States)

    Yahia, I. S.; Alfaify, S.; Jilani, Asim; Abdel-wahab, M. Sh.; Al-Ghamdi, Attieh A.; Abutalib, M. M.; Al-Bassam, A.; El-Naggar, A. M.

    2016-07-01

    We have found new ways to investigate the linear and non-linear optical properties of a nanostructured pentacene thin film deposited by thermal evaporation. Pentacene is a key material in organic semiconductor technology. The existence of the nano-structured thin film was confirmed by atomic force microscopy and X-ray diffraction. The wavelength-dependent transmittance and reflectance were measured to observe the optical behavior of the pentacene thin film. Anomalous dispersion was observed at wavelengths around λ = 800 nm. The non-linear refractive index of the deposited films was investigated. The linear optical susceptibility of the pentacene thin film was calculated, and the non-linear optical susceptibility was found to be about 6 × 10^-13 esu. The advantage of this work is the use of a spectroscopic method to calculate the linear and non-linear optical response of pentacene thin films, rather than an expensive Z-scan measurement. The calculated optical behavior of the pentacene thin films could be used in organic-thin-film-based advanced optoelectronic devices such as telecommunications devices.

  2. Detection of kinetic change points in piece-wise linear single molecule motion

    Science.gov (United States)

    Hill, Flynn R.; van Oijen, Antoine M.; Duderstadt, Karl E.

    2018-03-01

    Single-molecule approaches present a powerful way to obtain detailed kinetic information at the molecular level. However, the identification of small rate changes is often hindered by the considerable noise present in such single-molecule kinetic data. We present a general method to detect such kinetic change points in trajectories of motion of processive single molecules having Gaussian noise, with a minimum number of parameters and without the need of an assumed kinetic model beyond piece-wise linearity of motion. Kinetic change points are detected using a likelihood ratio test in which the probability of no change is compared to the probability of a change occurring, given the experimental noise. A predetermined confidence interval minimizes the occurrence of false detections. Applying the method recursively to all sub-regions of a single molecule trajectory ensures that all kinetic change points are located. The algorithm presented allows rigorous and quantitative determination of kinetic change points in noisy single molecule observations without the need for filtering or binning, which reduce temporal resolution and obscure dynamics. The statistical framework for the approach and implementation details are discussed. The detection power of the algorithm is assessed using simulations with both single kinetic changes and multiple kinetic changes that typically arise in observations of single-molecule DNA-replication reactions. Implementations of the algorithm are provided in ImageJ plugin format written in Java and in the Julia language for numeric computing, with accompanying Jupyter Notebooks to allow reproduction of the analysis presented here.
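
    To make the likelihood-ratio test concrete, here is a minimal single-change-point sketch in Python (an illustrative re-implementation under the stated Gaussian-noise assumption, not the authors' ImageJ/Julia code; the recursive application to sub-regions and the calibration of the confidence threshold are omitted):

        import numpy as np

        def best_change_point(t, y, sigma, threshold):
            """Return the index of the most likely kinetic change point, or None.

            Compares a single-line fit against two independent line fits split at
            each candidate index, using a log-likelihood ratio under Gaussian noise
            with known standard deviation sigma.  The threshold corresponds to the
            predetermined confidence level and should be calibrated (e.g. from
            simulated change-free trajectories) to bound false detections.
            """
            def rss(ts, ys):
                # Residual sum of squares of an ordinary least-squares line fit.
                slope, intercept = np.polyfit(ts, ys, 1)
                return float(np.sum((ys - (slope * ts + intercept)) ** 2))

            rss_single = rss(t, y)
            best_llr, best_k = -np.inf, None
            for k in range(3, len(t) - 3):            # keep at least 3 points per segment
                llr = (rss_single - (rss(t[:k], y[:k]) + rss(t[k:], y[k:]))) / (2.0 * sigma ** 2)
                if llr > best_llr:
                    best_llr, best_k = llr, k
            return best_k if best_llr > threshold else None

    Applying such a function recursively to the segments on either side of each accepted change point locates all kinetic change points in a trajectory, as described above.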

  3. The renormalization group: scale transformations and changes of scheme

    International Nuclear Information System (INIS)

    Roditi, I.

    1983-01-01

    Starting from a study of perturbation theory, the renormalization group is expressed, not only for changes of scale but also, within the original view of Stueckelberg and Peterman, for changes of renormalization scheme. The consequences that follow from using that group are investigated. Following a more general point of view, a method to obtain an improvement of the perturbative results for physical quantities is proposed. The results obtained with this method are compared with those of other existing methods. (L.C.)

  4. CHANG-ES. IX. Radio scale heights and scale lengths of a consistent sample of 13 spiral galaxies seen edge-on and their correlations

    Science.gov (United States)

    Krause, Marita; Irwin, Judith; Wiegert, Theresa; Miskolczi, Arpad; Damas-Segovia, Ancor; Beck, Rainer; Li, Jiang-Tao; Heald, George; Müller, Peter; Stein, Yelena; Rand, Richard J.; Heesen, Volker; Walterbos, Rene A. M.; Dettmar, Ralf-Jürgen; Vargas, Carlos J.; English, Jayanne; Murphy, Eric J.

    2018-03-01

    Aim. The vertical halo scale height is a crucial parameter to understand the transport of cosmic-ray electrons (CRE) and their energy loss mechanisms in spiral galaxies. Until now, the radio scale height could only be determined for a few edge-on galaxies because of missing sensitivity at high resolution. Methods: We developed a sophisticated method for the scale height determination of edge-on galaxies. With this we determined the scale heights and radial scale lengths for a sample of 13 galaxies from the CHANG-ES radio continuum survey in two frequency bands. Results: The sample average values for the radio scale heights of the halo are 1.1 ± 0.3 kpc in C-band and 1.4 ± 0.7 kpc in L-band. From the frequency dependence analysis of the halo scale heights we found that the wind velocities (estimated using the adiabatic loss time) are above the escape velocity. We found that the halo scale heights increase linearly with the radio diameters. In order to exclude the diameter dependence, we defined a normalized scale height h˜ which is quite similar for all sample galaxies at both frequency bands and does not depend on the star formation rate or the magnetic field strength. However, h˜ shows a tight anticorrelation with the mass surface density. Conclusions: The sample galaxies with smaller scale lengths are more spherical in the radio emission, while those with larger scale lengths are flatter. The radio scale height depends mainly on the radio diameter of the galaxy. The sample galaxies are consistent with an escape-dominated radio halo with convective cosmic ray propagation, indicating that galactic winds are a widespread phenomenon in spiral galaxies. While a higher star formation rate or star formation surface density does not lead to a higher wind velocity, we found for the first time observational evidence of a gravitational deceleration of CRE outflow, e.g. a lowering of the wind velocity from the galactic disk.

  5. Non-linear temperature-dependent curvature of a phase change composite bimorph beam

    Science.gov (United States)

    Blonder, Greg

    2017-06-01

    Bimorph films curl in response to temperature. The degree of curvature typically varies in proportion to the difference in thermal expansion of the individual layers, and linearly with temperature. In many applications, such as controlling a thermostat, this gentle linear behavior is acceptable. In other cases, such as opening or closing a valve or latching a deployable column into place, an abrupt motion at a fixed temperature is preferred. To achieve this non-linear motion, we describe the fabrication and performance of a new bilayer structure we call a 'phase change composite bimorph (PCBM)'. In a PCBM, one layer in the bimorph is a composite containing small inclusions of phase change materials. When the inclusions melt, their large (generally positive and  >1%) expansion coefficient induces a strong, reversible step function jump in bimorph curvature. The measured jump amplitude and thermal response are consistent with theory, and can be harnessed by a new class of actuators and sensors.
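
    For orientation, the classical bimetal result in its simplest symmetric limit of equal layer thicknesses and moduli (a background textbook relation, not the paper's own derivation) gives the linear regime referred to above,

        \[ \kappa \approx \frac{3\,(\alpha_1 - \alpha_2)\,\Delta T}{2\,h}, \]

    where \kappa is the curvature, \alpha_1 and \alpha_2 are the layer expansion coefficients, \Delta T the temperature change, and h the total thickness; in a PCBM the effective expansion coefficient of the composite layer jumps by the melt expansion (of order 1% or more) at the phase-change temperature, converting this smooth linear response into the abrupt step in curvature described here.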

  6. Testing linear growth rate formulas of non-scale endogenous growth models

    NARCIS (Netherlands)

    Ziesemer, Thomas

    2017-01-01

    Endogenous growth theory has produced formulas for steady-state growth rates of income per capita which are linear in the growth rate of the population. Depending on the details of the models, slopes and intercepts are positive, zero or negative. Empirical tests have taken over the assumption of

  7. High-performance small-scale solvers for linear Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    , with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach...

  8. Climate change adaptation strategies by small-scale farmers in ...

    African Journals Online (AJOL)

    Mburu

    The main constraints identified were financial constraints (93.4%), lack of relevant skills (74.5%) and lack of ... Key words: climate change, small-scale farmers, adaptation strategies.

  9. Changes in channel morphology over human time scales [Chapter 32

    Science.gov (United States)

    John M. Buffington

    2012-01-01

    Rivers are exposed to changing environmental conditions over multiple spatial and temporal scales, with the imposed environmental conditions and response potential of the river modulated to varying degrees by human activity and our exploitation of natural resources. Watershed features that control river morphology include topography (valley slope and channel...

  10. Designing for scale: How relationships shape curriculum change

    NARCIS (Netherlands)

    Pareja Roblin, Natalie; Corbalan, Gemma; McKenney, Susan; Nieveen, Nienke; Van den Akker, Jan

    2012-01-01

    Pareja Roblin, N., Corbalan Perez, G., McKenney, S., Nieveen, N., & Van den Akker, J. (2012, 13-17 April). Designing for scale: How relationships shape curriculum change. Presentation at the AERA annual meeting, Vancouver, Canada. Please see also http://hdl.handle.net/1820/4679

  11. Designing for scale: How relationships shape curriculum change

    NARCIS (Netherlands)

    Pareja Roblin, Natalie; Corbalan, Gemma; McKenney, Susan; Nieveen, Nienke; Van den Akker, Jan

    2012-01-01

    Pareja Roblin, N., Corbalan Perez, G., McKenney, S., Nieveen, N., & Van den Akker, J. (2012, 13-17 April). Designing for scale: How relationships shape curriculum change. Paper presentation at the AERA annual meeting, Vancouver, Canada. Please see also: http://hdl.handle.net/1820/4678

  12. Non-linear laws of echoic memory and auditory change detection in humans

    Directory of Open Access Journals (Sweden)

    Takeshima Yasuyuki

    2010-07-01

    Full Text Available Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. Conclusions The present findings suggest that temporal representation of echoic memory is non-linear and Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.

  13. Imprint of non-linear effects on HI intensity mapping on large scales

    Energy Technology Data Exchange (ETDEWEB)

    Umeh, Obinna, E-mail: umeobinna@gmail.com [Department of Physics and Astronomy, University of the Western Cape, Cape Town 7535 (South Africa)

    2017-06-01

    Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low angular resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature both in real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift space distortion terms, modulates the power spectrum on large scales. The large-scale modulation may be understood as being due to an effective bias parameter and an effective shot noise.
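
    Schematically, the large-scale result described here can be summarized (a generic way of writing it, not the paper's full third-order expression) as

        \[ P_{\Delta T}(k) \simeq \bar{T}_b^{\,2} \left[ b_{\mathrm{eff}}^{2}\, P_m(k) + N_{\mathrm{eff}} \right], \]

    where P_m is the matter power spectrum, b_eff is an effective (renormalized) bias absorbing the nonlinear bias contributions, and N_eff is an effective, approximately scale-independent shot-noise-like term generated by the mode coupling; redshift-space distortions add the usual direction-dependent factors.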

  14. Inference regarding multiple structural changes in linear models with endogenous regressors

    Science.gov (United States)

    Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021

  15. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.

  16. Pattern recognition invariant under changes of scale and orientation

    Science.gov (United States)

    Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain

    1997-08-01

    We have used a modified method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance, and lines in a multi-dimensional feature space correspond to each target over out-of-plane orientations spanning 360 degrees around a vertical axis. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes of a factor of four were considered. Error-free classification was achieved.

  17. Vanishing-Overhead Linear-Scaling Random Phase Approximation by Cholesky Decomposition and an Attenuated Coulomb-Metric.

    Science.gov (United States)

    Luenser, Arne; Schurkus, Henry F; Ochsenfeld, Christian

    2017-04-11

    A reformulation of the random phase approximation within the resolution-of-the-identity (RI) scheme is presented, that is competitive to canonical molecular orbital RI-RPA already for small- to medium-sized molecules. For electronically sparse systems drastic speedups due to the reduced scaling behavior compared to the molecular orbital formulation are demonstrated. Our reformulation is based on two ideas, which are independently useful: First, a Cholesky decomposition of density matrices that reduces the scaling with basis set size for a fixed-size molecule by one order, leading to massive performance improvements. Second, replacement of the overlap RI metric used in the original AO-RPA by an attenuated Coulomb metric. Accuracy is significantly improved compared to the overlap metric, while locality and sparsity of the integrals are retained, as is the effective linear scaling behavior.

  18. Non-linear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    International Nuclear Information System (INIS)

    Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y Y

    2008-01-01

    We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency

  19. Non-linear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    Science.gov (United States)

    Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y. Y.

    2008-07-01

    We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency.

  20. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-03-27

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions whose deletion disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reductions in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
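
    In schematic form, each AILP iteration alternates the following pair (a sketch of the structure described above, with y_j as binary reaction-deletion indicators and v as the flux vector; the exact constraint sets of the implementation are not reproduced):

        \[ \text{IP:}\quad \min_{y} \sum_j y_j \quad \text{s.t.} \quad \sum_{j \in \operatorname{supp}(e^{(i)})} y_j \ge 1 \ \ \text{for every EM } e^{(i)} \text{ found so far}, \qquad y_j \in \{0,1\}; \]

        \[ \text{LP:}\quad \text{find } v \quad \text{s.t.} \quad S v = 0, \qquad v_j = 0 \ \text{whenever } y_j = 1, \qquad v \in \mathcal{C}, \]

    where \mathcal{C} collects irreversibility and normalization constraints; a feasible LP solution is a new EM whose support avoids the deleted reactions, while LP infeasibility certifies that the deletion set is a minimal cut set.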

  1. Linear stability of liquid films with phase change at the interface

    International Nuclear Information System (INIS)

    Spindler, Bertrand

    1980-01-01

    The objective of this research thesis is to study the linear stability of the flow of a liquid film on an inclined plane with a heat flow at the wall and an interfacial phase change, and to highlight the influence of the phase change on the flow stability. To do so, the author first proposed a rational simplification of the equations by studying the order of magnitude of the different terms, based on some simple hypotheses regarding the flow physics. Two stability studies are then addressed, one regarding a flow with a pre-existing film, and the other regarding the flow of a condensation film. In both cases, it is assumed that there is no imposed heat flow, but that the driving of the vapour by the liquid film is taken into account [fr]

  2. Study of load change control in PWRs using the methods of linear optimal control

    International Nuclear Information System (INIS)

    Yang, T.

    1983-01-01

    This thesis investigates the application of modern control theory to the problem of controlling load changes in PWR power plants. A linear optimal state feedback scheme resulting from linear optimal control theory with a quadratic cost function is reduced to a partially decentralized control system using mode preservation techniques. Minimum information transfer among major components of the plant is investigated to provide adequate coordination, simple implementation, and a reliable control system. Two control approaches are proposed: servo and model following. Each design considers several information structures for performance comparison. Integrated output error has been included in the control systems to accommodate external and plant parameter disturbances. In addition, the cross-limit feature, specific to certain modern reactor control systems, is considered in the study to prevent low-pressure reactor trip conditions. An 11th-order nonlinear model for the reactor and boiler is derived based on theoretical principles, and simulation tests are performed for a 10% load change as an illustration of system performance.
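
    As a hedged illustration of the linear optimal (linear-quadratic) state-feedback idea underlying the thesis, the sketch below designs an LQR gain for an invented two-state plant; the matrices A, B, Q and R are purely illustrative and are not the 11th-order reactor/boiler model described above.

```python
# Hedged sketch of linear-quadratic state feedback (LQR); toy plant, not the thesis model.
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy linear plant dx/dt = A x + B u (illustrative values)
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost weights on state deviation and control effort
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation and form the optimal gain u = -K x
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print("state-feedback gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```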

  3. The effect of disinfection of alginate impressions with 35% beetle juice spray on stone model linear dimensional changes

    Directory of Open Access Journals (Sweden)

    Anggra Yudha Ramadianto

    2007-07-01

    Full Text Available Dimensional stability of alginate impressions is very important for treatment in dentistry. This study investigated the effect of spraying alginate impressions with 35% beetle juice on the linear dimensional changes of gypsum models. This experimental study used 25 samples, divided into 5 groups. The first group, as control, consisted of alginate impressions filled with dental stone immediately after forming. The other four groups consisted of alginate impressions sprayed 1, 2, 3, and 4 times, respectively, with 35% beetle juice and then filled with dental stone. Dimensional changes were measured on the lower part of the stone model in the buccal-lingual and mesial-distal directions, and as the outer distance between the upper parts of the stone model, using a Mitutoyo digital micrometer and a profile projector with 0.001 mm resolution. The average mesial-distal diameters of the control group and groups 2, 3, 4, and 5 were 9.909 mm, 9.852 mm, 9.845 mm, 9.824 mm, and 9.754 mm, respectively. The average buccal-lingual diameters were 9.847 mm, 9.841 mm, 9.826 mm, 9.776 mm, and 9.729 mm. The outer distances between the upper parts of the stone models were 31.739 mm, 31.689 mm, 31.682 mm, 31.670 mm, and 31.670 mm. The data were evaluated statistically using analysis of variance. The conclusion of this study was that, statistically, spraying alginate impressions with 35% beetle juice had no significant effect on the linear dimensional changes of the gypsum models.

  4. Genome-scale regression analysis reveals a linear relationship for promoters and enhancers after combinatorial drug treatment

    KAUST Repository

    Rapakoulia, Trisevgeni

    2017-08-09

    Motivation: Drug combination therapy for treatment of cancers and other multifactorial diseases has the potential to increase the therapeutic effect, while reducing the likelihood of drug resistance. In order to reduce time and cost spent in comprehensive screens, methods are needed which can model additive effects of possible drug combinations. Results: We here show that the transcriptional response to combinatorial drug treatment at promoters, as measured by single molecule CAGE technology, is accurately described by a linear combination of the responses of the individual drugs at a genome-wide scale. We also find that the same linear relationship holds for transcription at enhancer elements. We conclude that the described approach is promising for eliciting the transcriptional response to multidrug treatment at promoters and enhancers in an unbiased genome-wide way, which may minimize the need for exhaustive combinatorial screens.
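
    The genome-wide linear fit described above can be illustrated with a short sketch: a synthetic combined-treatment response is regressed on the two single-drug responses and the fit quality is summarized with R². The data, weights and noise level are invented and are not the CAGE measurements analysed in the study.

```python
# Hedged sketch: is the combined response a linear combination of single-drug responses?
# All numbers below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_promoters = 1000
resp_a = rng.normal(size=n_promoters)    # response (e.g., log fold-change) to drug A
resp_b = rng.normal(size=n_promoters)    # response to drug B
resp_ab = 0.6 * resp_a + 0.4 * resp_b + rng.normal(scale=0.1, size=n_promoters)

X = np.column_stack([resp_a, resp_b])
coef, *_ = np.linalg.lstsq(X, resp_ab, rcond=None)   # least-squares weights
pred = X @ coef
r2 = 1 - np.sum((resp_ab - pred) ** 2) / np.sum((resp_ab - resp_ab.mean()) ** 2)
print(f"fitted weights: {coef.round(3)}, R^2 = {r2:.3f}")
```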

  5. Flexible non-linear predictive models for large-scale wind turbine diagnostics

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2017-01-01

    We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set...... of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models are used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance...... of the models are quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system....

  6. Study of vibrations and stabilization of linear collider final doublets at the sub-nanometer scale

    International Nuclear Information System (INIS)

    Bolzon, B.

    2007-11-01

    CLIC is one of the current high-energy linear collider projects. Vertical beam sizes of 0.7 nm at the time of collision and fast ground motion of a few nanometers impose an active stabilization of the final doublets to a fifth of a nanometer above 4 Hz. The majority of this work concerned the study of vibrations and active stabilization of cantilevered, slender beams chosen to be representative of the CLIC final doublets. In the first part, measurements of the performance of different types of vibration sensors, combined with appropriate instrumentation, showed that accurate measurements of ground motion are possible from 0.1 Hz up to 2000 Hz on a quiet site. In addition, electrochemical sensors, which a priori meet the CLIC specifications, can be incorporated in the active stabilization at a fifth of a nanometer. In the second part, an experimental and numerical study of beam vibrations made it possible to validate the accuracy of the numerical predictions, which were then incorporated in the simulation of the active stabilization. A study of the impact of ground motion and acoustic noise on beam vibrations also showed that active stabilization is necessary at least up to 1000 Hz. In the third part, active stabilization of a beam at its first two resonances is demonstrated down to amplitudes of a tenth of a nanometer above 4 Hz, using in parallel a commercial system performing passive and active stabilization of the clamping. The last part concerns the study of a support for the final doublets of a linear collider prototype nearing completion, the ATF2 prototype. This work showed that the relative motion between this support and the ground is below the imposed tolerances (6 nm above 0.1 Hz) with appropriate boundary conditions. (author)

  7. Introducing PROFESS 2.0: A parallelized, fully linear scaling program for orbital-free density functional theory calculations

    Science.gov (United States)

    Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.

    2010-12-01

    Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization. New version program summary: Program title: PROFESS; Catalogue identifier: AEBN_v2_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 68 721; No. of bytes in distributed program, including test data, etc.: 1 708 547; Distribution format: tar.gz; Programming language: Fortran 90; Computer
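
    In schematic form, the orbital-free variational problem solved by codes such as PROFESS can be written as below. This is the generic textbook statement, with a pure density functional T_s[ρ] for the kinetic energy; it does not reproduce the specific kinetic-energy or ion-electron functionals implemented in PROFESS 2.0.

```latex
% Generic OFDFT energy functional and Euler-Lagrange condition (illustrative notation)
E[\rho] = T_s[\rho] + \int v_{\mathrm{ext}}(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}^3 r
          + E_{\mathrm{H}}[\rho] + E_{\mathrm{xc}}[\rho],
\qquad
\frac{\delta E[\rho]}{\delta \rho(\mathbf{r})} = \mu
\quad\text{subject to}\quad
\int \rho(\mathbf{r})\,\mathrm{d}^3 r = N .
```

    Because the minimization is over the density alone (no orbitals), each energy and gradient evaluation can be made to scale nearly linearly with system size, which is the property the abstract refers to.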

  8. Exact spectrum of non-linear chirp scaling and its application in geosynchronous synthetic aperture radar imaging

    Directory of Open Access Journals (Sweden)

    Chen Qi

    2013-07-01

    Full Text Available Non-linear chirp scaling (NLCS) is a feasible method to deal with the time-variant frequency modulation (FM) rate problem in synthetic aperture radar (SAR) imaging. However, approximations in the derivation of the NLCS spectrum lead to performance decline in some cases. The exact spectrum of the NLCS function is presented here. A simulation with a geosynchronous synthetic aperture radar (GEO-SAR) configuration is implemented. The results show that using the presented spectrum can significantly improve imaging performance, and the NLCS algorithm is suitable for GEO-SAR imaging after modification.

  9. Linear perturbation theory for tidal streams and the small-scale CDM power spectrum

    Science.gov (United States)

    Bovy, Jo; Erkal, Denis; Sanders, Jason L.

    2017-04-01

    Tidal streams in the Milky Way are sensitive probes of the population of low-mass dark matter subhaloes predicted in cold dark matter (CDM) simulations. We present a new calculus for computing the effect of subhalo fly-bys on cold streams based on the action-angle representation of streams. The heart of this calculus is a line-of-parallel-angle approach that calculates the perturbed distribution function of a stream segment by undoing the effect of all relevant impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 M⊙, accounting for the stream's internal dispersion and overlapping impacts. We study the statistical properties of density and track fluctuations with large suites of simulations of the effect of subhalo fly-bys. The one-dimensional density and track power spectra along the stream trace the subhalo mass function, with higher mass subhaloes producing power only on large scales, while lower mass subhaloes cause structure on smaller scales. We also find significant density and track bispectra that are observationally accessible. We further demonstrate that different projections of the track all reflect the same pattern of perturbations, facilitating their observational measurement. We apply this formalism to data for the Pal 5 stream and make a first rigorous determination of 10^{+11}_{-6} dark matter subhaloes with masses between 10^{6.5} and 10^{9} M⊙ within 20 kpc from the Galactic centre [corresponding to 1.4^{+1.6}_{-0.9} times the number predicted by CDM-only simulations, or to the equivalent subhalo mass fraction f_sub(r < 20 kpc)], demonstrating that dark matter is clumpy on the smallest scales relevant for galaxy formation.

  10. Theoretical explanation of present mirror experiments and linear stability of larger scaled machines

    International Nuclear Information System (INIS)

    Berk, H.L.; Baldwin, D.E.; Cutler, T.A.; Lodestro, L.L.; Maron, N.; Pearlstein, L.D.; Rognlien, T.D.; Stewart, J.J.; Watson, D.C.

    1976-01-01

    A quasilinear model for the evolution of the 2XIIB mirror experiment is presented and shown to reproduce the time evolution of the experiment. From quasilinear theory it follows that the energy lifetime is the Spitzer electron drag time for T_e ≲ 0.1 T_i. By computing the stability boundary of the DCLC mode, with warm plasma stabilization, the electron temperature is predicted as a function of radial scale length. In addition, the effect of finite length corrections to the Alfven cyclotron mode is assessed

  11. Regional-Scale Forcing and Feedbacks from Alternative Scenarios of Global-Scale Land Use Change

    Science.gov (United States)

    Jones, A. D.; Chini, L. P.; Collins, W.; Janetos, A. C.; Mao, J.; Shi, X.; Thomson, A. M.; Torn, M. S.

    2011-12-01

    Future patterns of land use change depend critically on the degree to which terrestrial carbon management strategies, such as biological carbon sequestration and biofuels, are utilized in order to mitigate global climate change. Furthermore, land use change associated with terrestrial carbon management induces biogeophysical changes to surface energy budgets that perturb climate at regional and possibly global scales, activating different feedback processes depending on the nature and location of the land use change. As a first step in a broader effort to create an integrated earth system model, we examine two scenarios of future anthropogenic activity generated by the Global Change Assessment Model (GCAM) within the fully coupled Community Earth System Model (CESM). Each scenario stabilizes radiative forcing from greenhouse gases and aerosols at 4.5 W/m^2. In the first, stabilization is achieved through a universal carbon tax that values terrestrial carbon equally with fossil carbon, leading to modest afforestation globally and low biofuel utilization. In the second scenario, stabilization is achieved with a tax on fossil fuel and industrial carbon alone. In this case, biofuel utilization increases dramatically and crop area expands to claim approximately 50% of forest cover globally. By design, these scenarios exhibit identical climate forcing from atmospheric constituents. Thus, differences among them can be attributed to the biogeophysical effects of land use change. In addition, we utilize offline radiative transfer and offline land model simulations to identify forcing and feedback mechanisms operating in different regions. We find that boreal deforestation has a strong climatic signature due to significant albedo change coupled with a regional-scale water vapor feedback. Tropical deforestation, on the other hand, has more subtle effects on climate. Globally, the two scenarios yield warming trends over the 21st century that differ by 0.5 degrees Celsius. This

  12. Development of the Systems Thinking Scale for Adolescent Behavior Change.

    Science.gov (United States)

    Moore, Shirley M; Komton, Vilailert; Adegbite-Adeniyi, Clara; Dolansky, Mary A; Hardin, Heather K; Borawski, Elaine A

    2018-03-01

    This report describes the development and psychometric testing of the Systems Thinking Scale for Adolescent Behavior Change (STS-AB). Following item development, initial assessments of understandability and stability of the STS-AB were conducted in a sample of nine adolescents enrolled in a weight management program. Exploratory factor analysis of the 16-item STS-AB and internal consistency assessments were then done with 359 adolescents enrolled in a weight management program. Test-retest reliability of the STS-AB was .71, p = .03; internal consistency reliability was .87. Factor analysis of the 16-item STS-AB indicated a one-factor solution with good factor loadings, ranging from .40 to .67. Evidence of construct validity was supported by significant correlations with established measures of variables associated with health behavior change. We provide beginning evidence of the reliability and validity of the STS-AB to measure systems thinking for health behavior change in young adolescents.

  13. Scaling behavior of ground-state energy cluster expansion for linear polyenes

    Science.gov (United States)

    Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.

    Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.

  14. Multiple linear regression to develop strength scaled equations for knee and elbow joints based on age, gender and segment mass

    DEFF Research Database (Denmark)

    D'Souza, Sonia; Rasmussen, John; Schwirtz, Ansgar

    2012-01-01

    and valuable ergonomic tool. Objective: To investigate age and gender effects on the torque-producing ability in the knee and elbow in older adults. To create strength-scaled equations based on age, gender, upper/lower limb lengths and masses using multiple linear regression. To reduce the number of dependent...... flexors. Results: Males were significantly stronger than females across all age groups. Elbow peak torque (EPT) was better preserved from the 60s to the 70s, whereas knee peak torque (KPT) reduced significantly. Gender, thigh mass and age best predicted KPT (R²=0.60). Gender, forearm mass and age best predicted EPT (R²=0.75). Good cross-validation was established for both elbow and knee models. Conclusion: This cross-sectional study of muscle strength created and validated strength-scaled equations of EPT and KPT using only gender, segment mass...
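
    A minimal sketch of the multiple-linear-regression approach described above is given below; the predictors (gender, age, segment mass) follow the abstract, but the sample size, coefficients and noise level are invented purely for illustration.

```python
# Hedged sketch of a strength-scaling multiple linear regression; synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n = 120
gender = rng.integers(0, 2, n)          # 0 = female, 1 = male
age = rng.uniform(60, 80, n)            # years
segment_mass = rng.uniform(5, 12, n)    # kg (e.g., thigh mass for a knee-torque model)
peak_torque = (40 + 35 * gender - 0.8 * (age - 60)
               + 6 * segment_mass + rng.normal(0, 8, n))   # invented relationship

X = np.column_stack([np.ones(n), gender, age, segment_mass])
beta, *_ = np.linalg.lstsq(X, peak_torque, rcond=None)
pred = X @ beta
r2 = 1 - ((peak_torque - pred) ** 2).sum() / ((peak_torque - peak_torque.mean()) ** 2).sum()
print("coefficients (intercept, gender, age, mass):", beta.round(2), "R^2:", round(r2, 2))
```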

  15. The Logarithmic-to-Linear Shift: One Learning Sequence, Many Tasks, Many Time Scales

    Science.gov (United States)

    Siegler, Robert S.; Thompson, Clarissa A.; Opfer, John E.

    2009-01-01

    The relation between short-term and long-term change (also known as learning and development) has been of great interest throughout the history of developmental psychology. Werner and Vygotsky believed that the two involved basically similar progressions of qualitatively distinct knowledge states; behaviorists such as Kendler and Kendler believed…

  16. Dryland responses to global change suggest the potential for rapid non-linear responses to some changes but resilience to others

    Science.gov (United States)

    Reed, S.; Ferrenberg, S.; Tucker, C.; Rutherford, W. A.; Wertin, T. M.; McHugh, T. A.; Morrissey, E.; Kuske, C.; Belnap, J.

    2017-12-01

    Drylands represent our planet's largest terrestrial biome, making up over 35% of Earth's land surface. In the context of this vast areal extent, it is no surprise that recent research suggests dryland inter-annual variability and responses to change have the potential to drive biogeochemical cycles and climate at the global-scale. Further, the data we do have suggest drylands can respond rapidly and non-linearly to change. Nevertheless, our understanding of the cross-system consistency of and mechanisms behind dryland responses to a changed environment remains relatively poor. This poor understanding hinders not only our larger understanding of terrestrial ecosystem function, but also our capacity to forecast future global biogeochemical cycles and climate. Here we present data from a series of Colorado Plateau manipulation experiments - including climate, land use, and nitrogen deposition manipulations - to explore how vascular plants, microbial communities, and biological soil crusts (a community of mosses, lichens, and/or cyanobacteria living in the interspace among vascular plants in arid and semiarid ecosystems worldwide) respond to a host of environmental changes. These responses include not only assessments of community composition, but of their function as well. We will explore photosynthesis, net soil CO2 exchange, soil carbon stocks and chemistry, albedo, and nutrient cycling. The experiments were begun with independent questions and cover a range of environmental change drivers and scientific approaches, but together offer a relatively holistic picture of how some drylands can change their structure and function in response to change. In particular, the data show very high ecosystem vulnerability to particular drivers, but surprising resilience to others, suggesting a multi-faceted response of these diverse systems.

  17. Near-linear cost increase to reduce climate-change risk

    Energy Technology Data Exchange (ETDEWEB)

    Schaeffer, M. [Environmental Systems Analysis Group, Wageningen University and Research Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Kram, T.; Van Vuuren, D.P. [Climate and Global Sustainability Group, Netherlands Environmental Assessment Agency, P.O. Box 303, 3720 AH Bilthoven (Netherlands); Meinshausen, M.; Hare, W.L. [Potsdam Institute for Climate Impact Research, P.O. Box 60 12 03, 14412 Potsdam (Germany); Schneider, S.H. (ed.) [Stanford University, Stanford, CA (United States)

    2008-12-30

    One approach in climate-change policy is to set normative long-term targets first and then infer the implied emissions pathways. An important example of a normative target is to limit the global-mean temperature change to a certain maximum. In general, reported cost estimates for limiting global warming often rise rapidly, even exponentially, as the scale of emission reductions from a reference level increases. This rapid rise may suggest that more ambitious policies may be prohibitively expensive. Here, we propose a probabilistic perspective, focused on the relationship between mitigation costs and the likelihood of achieving a climate target. We investigate the qualitative, functional relationship between the likelihood of achieving a normative target and the costs of climate-change mitigation. In contrast to the example of exponentially rising costs for lowering concentration levels, we show that the mitigation costs rise proportionally to the likelihood of meeting a temperature target, across a range of concentration levels. In economic terms investing in climate mitigation to increase the probability of achieving climate targets yields 'constant returns to scale', because of a counterbalancing rapid rise in the probabilities of meeting a temperature target as concentration is lowered.

  18. Measurement of changes in linear accelerator photon energy through flatness variation using an ion chamber array

    International Nuclear Information System (INIS)

    Gao Song; Balter, Peter A.; Rose, Mark; Simon, William E.

    2013-01-01

    Purpose: To compare the use of flatness versus percent depth dose (PDD) for determining changes in photon beam energy for a megavoltage linear accelerator. Methods: Energy changes were accomplished by adjusting the bending magnet current by up to ±15% in 5% increments away from the value used clinically. Two metrics for flatness, relative flatness in the central 80% of the field (Flat) and average maximum dose along the diagonals normalized by central axis dose (F_DN), were measured using a commercially available planar ionization chamber array. PDD was measured in water at depths of 5 and 10 cm in 3 × 3 cm² and 10 × 10 cm² fields using a cylindrical chamber. Results: PDD was more sensitive to changes in energy when the beam energy was increased than when it was decreased. For the 18-MV beam in particular, PDD was not sensitive to energy reductions below the nominal energy. The value of Flat was found to be more sensitive to decreases in energy than to increases, with little sensitivity to energy increases above the nominal energy for 18-MV beams. F_DN was the only metric that was found to be sensitive to both increases and reductions of energy for both the 6- and 18-MV beams. Conclusions: Flatness-based metrics were found to be more sensitive to energy changes than PDD. In particular, F_DN was found to be the most sensitive metric to energy changes for photon beams of 6 and 18 MV. The ionization chamber array allows this metric to be conveniently measured as part of routine accelerator quality assurance.
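
    Read literally, the diagonal-normalized flatness metric described above can be written as follows; the exact sampling and normalization used by the authors may differ, so this is only an illustrative reading of the definition.

```latex
% F_DN: average of the maximum doses along the four field diagonals, normalized to the
% central-axis dose (illustrative notation, not necessarily the authors' exact formula)
F_{\mathrm{DN}} = \frac{1}{4}\sum_{i=1}^{4}\frac{D^{\max}_{\mathrm{diag},\,i}}{D_{\mathrm{CAX}}}
```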

  19. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    Science.gov (United States)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.

  20. National Scale Rainfall Map Based on Linearly Interpolated Data from Automated Weather Stations and Rain Gauges

    Science.gov (United States)

    Alconis, Jenalyn; Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo; Lester Saddi, Ivan; Mongaya, Candeze; Figueroa, Kathleen Gay

    2014-05-01

    In response to the slew of disasters that devastate the Philippines on a regular basis, the national government put in place a program to address this problem. The Nationwide Operational Assessment of Hazards, or Project NOAH, consolidates the diverse scientific research being done and pushes the knowledge gained to the forefront of disaster risk reduction and management. Current activities of the project include installing rain gauges and water level sensors, conducting LIDAR surveys of critical river basins, geo-hazard mapping, and running information education campaigns. Approximately 700 automated weather stations and rain gauges installed in strategic locations in the Philippines lay the groundwork for the rainfall visualization system in the Project NOAH web portal at http://noah.dost.gov.ph. The system uses near real-time data from these stations installed in critical river basins. The sensors record the amount of rainfall in a particular area as point data updated every 10 to 15 minutes. The sensor sends the data to a central server either via GSM network or satellite data transfer for redundancy. The web portal displays the sensors as a placemarks layer on a map. When a placemark is clicked, it displays a graph of the rainfall data for the past 24 hours. The rainfall data are harvested in batches over a one-hour time frame. The program uses linear interpolation as the methodology implemented to visually represent a near real-time rainfall map. The algorithm allows very fast processing, which is essential in near real-time systems. As more sensors are installed, precision is improved. This visualized dataset enables users to quickly discern where heavy rainfall is concentrated. It has proven invaluable on numerous occasions, such as in August 2013 when intense to torrential rains brought about by the enhanced Southwest Monsoon caused massive flooding in Metro Manila. Coupled with observations from Doppler imagery and water level sensors along the
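
    The gridding step described above can be sketched with SciPy's piecewise-linear interpolation; the station coordinates, rainfall values and grid resolution below are made up, and the actual Project NOAH implementation is not necessarily based on SciPy.

```python
# Hedged sketch of linearly interpolating point rainfall onto a regular grid.
import numpy as np
from scipy.interpolate import griddata

# (lon, lat) of hypothetical stations and their rainfall totals in mm
stations = np.array([[121.00, 14.60], [121.10, 14.65], [120.95, 14.50], [121.15, 14.45]])
rain_mm = np.array([12.0, 30.5, 8.2, 22.1])

# Regular target grid covering the stations' bounding box
lon = np.linspace(120.95, 121.15, 50)
lat = np.linspace(14.45, 14.65, 50)
grid_lon, grid_lat = np.meshgrid(lon, lat)

# Piecewise-linear (barycentric) interpolation inside the stations' convex hull;
# grid cells outside the hull are left as NaN rather than extrapolated.
rain_grid = griddata(stations, rain_mm, (grid_lon, grid_lat), method="linear")
print(rain_grid.shape, np.nanmax(rain_grid))
```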

  1. Naturalness in low-scale SUSY models and "non-linear" MSSM

    CERN Document Server

    Antoniadis, I; Ghilencea, D M

    2014-01-01

    In MSSM models with various boundary conditions for the soft breaking terms (m_soft) and for a Higgs mass of 126 GeV, there is a (minimal) electroweak fine-tuning Δ ≈ 800 to 1000 for the constrained MSSM and Δ ≈ 500 for non-universal gaugino masses. These values, often regarded as unacceptably large, may indicate a problem of supersymmetry (SUSY) breaking, rather than of SUSY itself. A minimal modification of these models is to lower the SUSY breaking scale in the hidden sector (√f) to a few TeV, which we show to restore naturalness to more acceptable levels, Δ ≈ 80, for the most conservative case of low tan β and ultraviolet boundary conditions as in the constrained MSSM. This is done without introducing additional fields in the visible sector, unlike other models that attempt to reduce Δ. In the present case Δ is reduced due to additional (effective) quartic Higgs couplings proportional to the ratio m_soft/√f of the visible to the hidden sector SUSY breaking...

  2. The large-scale gravitational bias from the quasi-linear regime.

    Science.gov (United States)

    Bernardeau, F.

    1996-08-01

    It is known that in gravitational instability scenarios the nonlinear dynamics induces non-Gaussian features in cosmological density fields that can be investigated with perturbation theory. Here, I derive the expression of the joint moments of cosmological density fields taken at two different locations. The results are valid when the density fields are filtered with a top-hat filter window function, and when the distance between the two cells is large compared to the smoothing length. In particular I show that it is possible to get the generating function of the coefficients C_{p,q} defined by ⟨δ^p(x_1) δ^q(x_2)⟩_c = C_{p,q} ⟨δ(x_1) δ(x_2)⟩ ⟨δ²⟩^{p+q-2}, where δ(x) is the local smoothed density field. It is then possible to reconstruct the joint density probability distribution function (PDF), generalizing for two points what has been obtained previously for the one-point density PDF. I discuss the validity of the large separation approximation in an explicit numerical Monte Carlo integration of the C_{2,1} parameter as a function of |x_1 - x_2|. A straightforward application is the calculation of the large-scale "bias" properties of the over-dense (or under-dense) regions. The properties and the shape of the bias function are presented in detail and successfully compared with numerical results obtained in an N-body simulation with CDM initial conditions.

  3. Full scale experimental analysis of wind direction changes (EOD)

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose

    2007-01-01

    A coherent wind speed and wind direction change (ECD) load case is defined in the wind turbine standard. This load case is an essential extreme load case that e.g. may be design driving for flap deflection of active stall controlled wind turbines. The present analysis identifies statistically the magnitudes of a joint gust event defined by a simultaneous wind speed and direction change in order to obtain an indication of the validity of the magnitudes specified in the IEC code. The analysis relates to pre-specified recurrence periods and is based on full-scale wind field measurements. The wind direction gust amplitudes associated with the investigated European sites are low compared to the recommended IEC values. However, these values, as a function of the mean wind speed, are difficult to validate thoroughly due to the limited number of fully correlated measurements.

  4. Assessment of Change in Psychoanalysis: Another Way of Using the Change After Psychotherapy Scales.

    Science.gov (United States)

    Pires, António Pazo; Gonçalves, João; Sá, Vânia; Silva, Andrea; Sandell, Rolf

    2016-04-01

    A systematic method is presented whereby material from a full course of psychoanalytic treatment is analyzed to assess changes and identify patterns of change. Through an analysis of session notes, changes were assessed using the CHange After Psychotherapy scales (CHAP; Sandell 1987a), which evaluate changes in five rating variables (symptoms, adaptive capacity, insight, basic conflicts, and extratherapeutic factors). Change incidents were identified in nearly every session. Early in the analysis, relatively more change incidents related to insight were found than were found for the other types of change. By contrast, in the third year and part of the fourth year, relatively more change incidents related to basic conflicts and adaptive capacity were found. While changes related to symptoms occurred throughout the course of treatment, such changes were never more frequent than other types of change. A content analysis of the change incidents allowed a determination of when in the treatment the patient's main conflicts (identified clinically) were overcome. A crossing of quantitative data with clinical and qualitative data allowed a better understanding of the patterns of change. © 2016 by the American Psychoanalytic Association.

  5. Spatial modeling of agricultural land use change at global scale

    Science.gov (United States)

    Meiyappan, P.; Dalton, M.; O'Neill, B. C.; Jain, A. K.

    2014-11-01

    Long-term modeling of agricultural land use is central in global scale assessments of climate change, food security, biodiversity, and climate adaptation and mitigation policies. We present a global-scale dynamic land use allocation model and show that it can reproduce the broad spatial features of the past 100 years of evolution of cropland and pastureland patterns. The modeling approach integrates economic theory, observed land use history, and data on both socioeconomic and biophysical determinants of land use change, and estimates relationships using long-term historical data, thereby making it suitable for long-term projections. The underlying economic motivation is maximization of expected profits by hypothesized landowners within each grid cell. The model predicts fractional land use for cropland and pastureland within each grid cell based on socioeconomic and biophysical driving factors that change with time. The model explicitly incorporates the following key features: (1) land use competition, (2) spatial heterogeneity in the nature of driving factors across geographic regions, (3) spatial heterogeneity in the relative importance of driving factors and previous land use patterns in determining land use allocation, and (4) spatial and temporal autocorrelation in land use patterns. We show that land use allocation approaches based solely on previous land use history (but disregarding the impact of driving factors), or those accounting for both land use history and driving factors by mechanistically fitting models for the spatial processes of land use change do not reproduce well long-term historical land use patterns. With an example application to the terrestrial carbon cycle, we show that such inaccuracies in land use allocation can translate into significant implications for global environmental assessments. The modeling approach and its evaluation provide an example that can be useful to the land use, Integrated Assessment, and the Earth system modeling

  6. How preservation time changes the linear viscoelastic properties of porcine liver.

    Science.gov (United States)

    Wex, C; Stoll, A; Fröhlich, M; Arndt, S; Lippert, H

    2013-01-01

    The preservation time of a liver graft is one of the crucial factors for the success of a liver transplantation. Grafts are kept in a preservation solution to delay cell destruction and cellular edema and to maximize organ function after transplantation. However, longer preservation times are not always avoidable. In this paper we focus on the mechanical changes of porcine liver with increasing preservation time, in order to establish an indicator for the quality of a liver graft dependent on preservation time. A time interval of 26 h was covered and the rheological properties of liver tissue were studied using a stress-controlled rheometer. For samples of 1 h preservation time, 0.8% strain was found as the limit of linear viscoelasticity. With increasing preservation time a decrease in the complex shear modulus, as an indicator for stiffness, was observed for the frequency range from 0.1 to 10 Hz. A simple fractional derivative representation of the Kelvin-Voigt model was applied to gain further information about the changes of the mechanical properties of liver with increasing preservation time. Within the small shear rate interval of 0.0001-0.01 s⁻¹ the liver showed Newtonian-like flow behavior.
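
    A common fractional-derivative generalization of the Kelvin-Voigt model, of the kind referred to above, is the textbook form below; the symbols and parameterization are illustrative and may differ from those used in the paper.

```latex
% Fractional Kelvin-Voigt constitutive law and its complex modulus (illustrative form)
\sigma(t) = G\,\varepsilon(t) + \eta\,\frac{\mathrm{d}^{\alpha}\varepsilon(t)}{\mathrm{d}t^{\alpha}},
\qquad 0 < \alpha \le 1,
\qquad
G^{*}(\omega) = G + \eta\,(i\omega)^{\alpha}
```

    In this reading, the reported decrease of the complex shear modulus with preservation time corresponds to a softening of the tissue.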

  7. Designing for Change: Interoperability in a scaling and adapting environment

    Science.gov (United States)

    Yarmey, L.

    2015-12-01

    The Earth Science cyberinfrastructure landscape is constantly changing. Technologies advance and technical implementations are refined or replaced. Data types, volumes, packaging, and use cases evolve. Scientific requirements emerge and mature. Standards shift while systems scale and adapt. In this complex and dynamic environment, interoperability remains a critical component of successful cyberinfrastructure. Through the resource- and priority-driven iterations on systems, interfaces, and content, questions fundamental to stable and useful Earth Science cyberinfrastructure arise. For instance, how are sociotechnical changes planned, tracked, and communicated? How should operational stability balance against 'new and shiny'? How can ongoing maintenance and mitigation of technical debt be managed in an often short-term resource environment? The Arctic Data Explorer is a metadata brokering application developed to enable discovery of international, interdisciplinary Arctic data across distributed repositories. Completely dependent on interoperable third party systems, the Arctic Data Explorer publicly launched in 2013 with an original 3000+ data records from four Arctic repositories. Since then the search has scaled to 25,000+ data records from thirteen repositories at the time of writing. In the final months of original project funding, priorities shift to lean operations with a strategic eye on the future. Here we present lessons learned from four years of Arctic Data Explorer design, development, communication, and maintenance work along with remaining questions and potential directions.

  8. Mechanical aspects of allotropic phase change at the mesoscopic scale

    International Nuclear Information System (INIS)

    Valance, St.

    2007-12-01

    The prediction of the mechanical state of steel structures subjected to thermo-mechanical loading must take into account the consequences of allotropic phase change. Indeed, phase change induces, at least for steels, a mechanism of TRansformation Induced Plasticity (TRIP) leading to irreversible deformation even for loadings below the elastic yield limit. Homogenized analytical models generally fail to give correct predictions for complex loadings. To overcome these difficulties, we present a model providing a sharper description of the phenomenon. The mesoscopic working scale adopted here is the grain size. Hence, we consider the behaviour of each phase to be homogeneous in the sense of continuum mechanics, whereas the transformation front is explicitly described. We work both experimentally and numerically. Experimentally, we designed a test facility enabling thermo-mechanical loading of the sample under partial vacuum. Imaging of the sample surface while the martensitic transformation takes place leads, under some hypotheses and thanks to Digital Image Correlation, to the partial identification of the area affected by the transformation. Numerically, the eXtended Finite Element Method is applied to weakly discontinuous displacement fields. Use of this method requires numerically tracking the transformation front, i.e. the discontinuity support. To that end, based on the level set method, we develop an FEM numerical scheme enabling the recognition and propagation of the discontinuity support. Finally, this work is completed by an approach to the driving forces, introduced through Eshelbian mechanics, which are dual to the front velocity. (author)

  9. Does scale matter? A systematic review of incorporating biological realism when predicting changes in species distributions.

    Science.gov (United States)

    Record, Sydne; Strecker, Angela; Tuanmu, Mao-Ning; Beaudrot, Lydia; Zarnetske, Phoebe; Belmaker, Jonathan; Gerstner, Beth

    2018-01-01

    There is ample evidence that biotic factors, such as biotic interactions and dispersal capacity, can affect species distributions and influence species' responses to climate change. However, little is known about how these factors affect predictions from species distribution models (SDMs) with respect to spatial grain and extent of the models. Understanding how spatial scale influences the effects of biological processes in SDMs is important because SDMs are one of the primary tools used by conservation biologists to assess biodiversity impacts of climate change. We systematically reviewed SDM studies published from 2003-2015 using ISI Web of Science searches to: (1) determine the current state and key knowledge gaps of SDMs that incorporate biotic interactions and dispersal; and (2) understand how choice of spatial scale may alter the influence of biological processes on SDM predictions. We used linear mixed effects models to examine how predictions from SDMs changed in response to the effects of spatial scale, dispersal, and biotic interactions. There were important biases in studies including an emphasis on terrestrial ecosystems in northern latitudes and little representation of aquatic ecosystems. Our results suggest that neither spatial extent nor grain influence projected climate-induced changes in species ranges when SDMs include dispersal or biotic interactions. We identified several knowledge gaps and suggest that SDM studies forecasting the effects of climate change should: (1) address broader ranges of taxa and locations; and (2) report the grain size, extent, and results with and without biological complexity. The spatial scale of analysis in SDMs did not affect estimates of projected range shifts with dispersal and biotic interactions. However, the lack of reporting on results with and without biological complexity precluded many studies from our analysis.

  10. Reduced linear noise approximation for biochemical reaction networks with time-scale separation: The stochastic tQSSA+

    Science.gov (United States)

    Herath, Narmada; Del Vecchio, Domitilla

    2018-03-01

    Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.

  11. Quantifying the astronomical contribution to Pleistocene climate change: A non-linear, statistical approach

    Science.gov (United States)

    Crucifix, Michel; Wilkinson, Richard; Carson, Jake; Preston, Simon; Alemeida, Carlos; Rougier, Jonathan

    2013-04-01

    The existence of an action of astronomical forcing on the Pleistocene climate is almost undisputed. However, quantifying this action is not straightforward. In particular, the phenomenon of deglaciation is generally interpreted as a manifestation of instability, which is typical of non-linear systems. As a consequence, explaining the Pleistocene climate record as the addition of an astronomical contribution and noise-as often done using harmonic analysis tools-is potentially deceptive. Rather, we advocate a methodology in which non-linear stochastic dynamical systems are calibrated on the Pleistocene climate record. The exercise, though, requires careful statistical reasoning and state-of-the-art techniques. In fact, the problem has been judged to be mathematically 'intractable and unsolved' and some pragmatism is justified. In order to illustrate the methodology we consider one dynamical system that potentially captures four dynamical features of the Pleistocene climate: the existence of a saddle-node bifurcation in at least one of its slow components, a time-scale separation between a slow and a fast component, the action of astronomical forcing, and the existence of a stochastic contribution to the system dynamics. This model is obviously not the only possible representation of Pleistocene dynamics, but it encapsulates well enough both our theoretical and empirical knowledge into a very simple form to constitute a valid starting point. The purpose of this poster is to outline the practical challenges in calibrating such a model on paleoclimate observations. Just as in time series analysis, there is no one single and universal test or criteria that would demonstrate the validity of an approach. Several methods exist to calibrate the model and judgement develops by the confrontation of the results of the different methods. In particular, we consider here the Kalman filter variants, the Particle Monte-Carlo Markov Chain, and two other variants of Sequential Monte

  12. Computerized implementation of higher-order electron-correlation methods and their linear-scaling divide-and-conquer extensions.

    Science.gov (United States)

    Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi

    2017-11-05

    We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theory (MPPT) methods, as well as their combinations, automatically by means of the tensor contraction engine, which is a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)_TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies with significantly less computational cost than that of the conventional implementations. © 2017 Wiley Periodicals, Inc.

  13. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    Science.gov (United States)

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.

  14. Linear Parks along Urban Rivers: Perceptions of Thermal Comfort and Climate Change Adaptation in Cyprus

    Directory of Open Access Journals (Sweden)

    Elias Giannakis

    2016-10-01

    Full Text Available The development of green space along urban rivers could mitigate urban heat island effects, enhance the physical and mental well-being of city dwellers, and improve flood resilience. A linear park has been recently created along the ephemeral Pedieos River in the urban area of Nicosia, Cyprus. Questionnaire surveys and micrometeorological measurements were conducted to explore people's perceptions and satisfaction regarding the services of the urban park. People's main reasons to visit the park were physical activity and exercise (67%), nature (13%), and cooling (4%). The micrometeorological measurements in and near the park revealed a relatively low cooling effect (0.5 °C) of the park. However, the majority of the visitors (84%) were satisfied or very satisfied with the cooling effect of the park. Logistic regression analysis indicated that the odds of individuals feeling very comfortable under a projected 3 °C future increase in temperature would be 0.34 times lower than the odds of feeling less comfortable. The discrepancies between the observed thermal comfort index and people's perceptions revealed that people in semi-arid environments are adapted to the hot climatic conditions; 63% of the park visitors did not feel uncomfortable at temperatures between 27 °C and 37 °C. Further research is needed to assess other key ecosystem services of this urban green river corridor, such as flood protection, air quality regulation, and biodiversity conservation, to contribute to integrated climate change adaptation planning.

  15. Exploiting the atmosphere's memory for monthly, seasonal and interannual temperature forecasting using Scaling LInear Macroweather Model (SLIMM)

    Science.gov (United States)

    Del Rio Amador, Lenin; Lovejoy, Shaun

    2016-04-01

    Traditionally, most of the models for prediction of the atmosphere behavior in the macroweather and climate regimes follow a deterministic approach. However, modern ensemble forecasting systems using stochastic parameterizations are in fact deterministic/stochastic hybrids that combine both elements to yield a statistical distribution of future atmospheric states. Nevertheless, the result is both highly complex (both numerically and theoretically) as well as being theoretically eclectic. In principle, it should be advantageous to exploit higher level turbulence-type scaling laws. Concretely, in the case of the Global Circulation Models (GCMs), due to sensitive dependence on initial conditions, there is a deterministic predictability limit of the order of 10 days. When these models are coupled with ocean, cryosphere and other process models to make long range, climate forecasts, the high frequency "weather" is treated as a driving noise in the integration of the modelling equations. Following Hasselmann (1976), this has led to stochastic models that directly generate the noise, and model the low frequencies using systems of integer-ordered linear ordinary differential equations, the best known of which are the Linear Inverse Models (LIM). For annual global scale forecasts, they are somewhat superior to the GCMs and have been presented as a benchmark for surface temperature forecasts with horizons up to decades. A key limitation for the LIM approach is that it assumes that the temperature has only short range (exponential) decorrelations. In contrast, an increasing body of evidence shows that - as with the models - the atmosphere respects a scale invariance symmetry leading to power laws with potentially enormous memories so that LIM greatly underestimates the memory of the system. In this talk we show that, due to the relatively low macroweather intermittency, the simplest scaling models - fractional Gaussian noise - can be used for making greatly improved forecasts
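
    The power-law memory that motivates SLIMM can be made concrete with the standard autocorrelation function of fractional Gaussian noise, written here in terms of a Hurst-type exponent H; this is the textbook expression, not necessarily the parameterization used in SLIMM itself.

```latex
% Autocorrelation of fractional Gaussian noise at lag k (textbook form)
\rho(k) = \tfrac{1}{2}\left(|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H}\right)
\;\sim\; H(2H-1)\,k^{2H-2} \quad (k \to \infty)
```

    For 1/2 < H < 1 this decays as a power law, in contrast with the exponential decorrelation assumed by LIM-type models, which is the sense in which LIM underestimates the system's memory.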

  16. City scale climate change policies: Do they matter for wellbeing?

    Science.gov (United States)

    Hiscock, Rosemary; Asikainen, Arja; Tuomisto, Jouni; Jantunen, Matti; Pärjälä, Erkki; Sabel, Clive E

    2017-06-01

    Climate change mitigation policies aim to reduce climate change through reducing greenhouse gas (GHG) emissions whereas adaptation policies seek to enable humans to live in a world with increasingly variable and more extreme climatic conditions. It is increasingly realised that enacting such policies will have unintended implications for public health, but there has been less focus on their implications for wellbeing. Wellbeing can be defined as a positive mental state which is influenced by living conditions. As part of URGENCHE, an EU-funded project to identify health and wellbeing outcomes of city greenhouse gas emission reduction policies, a survey designed to measure these living conditions and levels of wellbeing in Kuopio, Finland was collected in December 2013. Kuopio was the northernmost of the seven cities studied in Europe and China. Generalised estimating equation modelling was used to determine which living conditions were associated with subjective wellbeing (measured through the WHO-5 Scale). Local greenspace and spending time in nature were associated with higher levels of wellbeing whereas cold housing and poor quality indoor air were associated with lower levels of wellbeing. Thus adaptation policies to increase greenspace might, in addition to reducing heat island effects, have the co-benefit of increasing wellbeing and improving housing insulation.

  17. Tools and Techniques for Basin-Scale Climate Change Assessment

    Science.gov (United States)

    Zagona, E.; Rajagopalan, B.; Oakley, W.; Wilson, N.; Weinstein, P.; Verdin, A.; Jerla, C.; Prairie, J. R.

    2012-12-01

    The Department of Interior's WaterSMART Program seeks to secure and stretch water supplies to benefit future generations and identify adaptive measures to address climate change. Under WaterSMART, Basin Studies are comprehensive water studies that explore options for meeting projected imbalances in water supply and demand in specific basins. Such studies benefit most from the application of recent scientific advances in climate projections, stochastic simulation, operational modeling and robust decision-making, as well as computational techniques to organize and analyze many alternatives. A new integrated set of tools and techniques to facilitate these studies includes the following components: Future supply scenarios are produced by the Hydrology Simulator, which uses non-parametric K-nearest neighbor resampling techniques to generate ensembles of hydrologic traces based on historical data, optionally conditioned on long paleo-reconstructed data using various Markov chain techniques. Resampling can also be conditioned on climate change projections (e.g., downscaled GCM projections) to capture increased variability; spatial and temporal disaggregation is also provided. The simulations produced are ensembles of hydrologic inputs to the RiverWare operations/infrastructure decision modeling software. Alternative demand scenarios can be produced with the Demand Input Tool (DIT), an Excel-based tool that allows modifying future demands by groups such as states; sectors, e.g., agriculture, municipal, energy; and hydrologic basins. The demands can be scaled at future dates or changes ramped over specified time periods. Resulting data is imported directly into the decision model. Different model files can represent infrastructure alternatives, and different Policy Sets represent alternative operating policies, including options for noticing when conditions point to unacceptable vulnerabilities, which trigger dynamically executing changes in operations or other
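    As an illustration of the non-parametric K-nearest-neighbour resampling idea such a hydrology simulator is built on (a Lall-Sharma style bootstrap), here is a minimal sketch; the flow record, K and rank weighting are placeholder assumptions, not the tool's actual implementation.

```python
import numpy as np

def knn_trace(flows, length, k=5, rng=None):
    """Generate one synthetic trace by resampling successors of the K nearest
    historical neighbours of the current value (kernel weights ~ 1/rank)."""
    rng = np.random.default_rng(rng)
    flows = np.asarray(flows, dtype=float)
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()
    trace = [flows[rng.integers(len(flows))]]          # random starting year
    for _ in range(length - 1):
        d = np.abs(flows[:-1] - trace[-1])             # distances to values with a successor
        neighbours = np.argsort(d)[:k]                 # indices of the K nearest neighbours
        pick = rng.choice(neighbours, p=weights)       # rank-weighted choice
        trace.append(flows[pick + 1])                  # append the historical successor
    return np.array(trace)

# Illustrative use with a made-up annual flow record
hist = np.array([410., 520., 380., 610., 450., 390., 700., 480., 530., 420.])
print(knn_trace(hist, length=20, k=3, rng=42))
```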

  18. The magnitude of linear dichroism of biological tissues as a result of cancer changes

    Science.gov (United States)

    Bojchuk, T. M.; Yermolenko, S. B.; Fedonyuk, L. Y.; Petryshen, O. I.; Guminetsky, S. G.; Prydij, O. G.

    2011-09-01

    The results of studies of linear dichroism values for different types of biological tissue (human prostate, human esophageal epithelium, muscle tissue of rats), both healthy and affected by tumors at different stages of development, are presented. Significant differences in the magnitude of linear dichroism and its spectral dependence in the spectral range λ = 330 - 750 nm are established both among the objects of study and between healthy tissues (or tissues affected by benign tumors) and cancerous ones. It was found that in all cases the biological tissues (prostate gland, esophagus, muscle tissue of rats) affected by cancer develop linear dichroism, the value of which depends on the type of tissue and the stage of the tumor process. Since linear dichroism is absent in healthy tissues, the results may have diagnostic value for detecting and assessing the degree of development of cancer.
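    The dichroism quantity itself is straightforward to compute from polarized absorbance spectra. Below is a minimal sketch using the conventional definition LD(λ) = A∥(λ) − A⊥(λ); the paper's exact measurement formulation may differ, and the spectra are invented.

```python
import numpy as np

def linear_dichroism(A_par, A_perp):
    """Linear dichroism spectrum: difference of absorbances measured with light
    polarized parallel and perpendicular to the chosen orientation axis."""
    return np.asarray(A_par) - np.asarray(A_perp)

# Hypothetical absorbance spectra on a 330-750 nm grid
wavelengths = np.linspace(330, 750, 211)
A_par = 0.8 * np.exp(-((wavelengths - 550) / 90.0) ** 2)
A_perp = 0.7 * np.exp(-((wavelengths - 560) / 90.0) ** 2)

ld = linear_dichroism(A_par, A_perp)
print(float(ld.max()), float(wavelengths[ld.argmax()]))   # peak LD and its wavelength
```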

  19. THE STRUCTURE AND LINEAR POLARIZATION OF THE KILOPARSEC-SCALE JET OF THE QUASAR 3C 345

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, David H.; Wardle, John F. C.; Marchenko, Valerie V., E-mail: roberts@brandeis.edu [Department of Physics MS-057, Brandeis University, Waltham, MA 02454-0911 (United States)

    2013-02-01

    Deep Very Large Array imaging of the quasar 3C 345 at 4.86 and 8.44 GHz has been used to study the structure and linear polarization of its radio jet on scales ranging from 2 to 30 kpc. There is a 7-8 Jy unresolved core with spectral index α ≈ -0.24 (I_ν ∝ ν^α). The jet (typical intensity 15 mJy beam^-1) consists of a 2.″5 straight section containing two knots, and two additional non-co-linear knots at the end. The jet's total projected length is about 27 kpc. The spectral index of the jet varies over -1.1 ≲ α ≲ -0.5. The jet diverges with a semi-opening angle of about 9°, and is nearly constant in integrated brightness over its length. A faint feature northeast of the core does not appear to be a true counter-jet, but rather an extended lobe of this FR-II radio source seen in projection. The absence of a counter-jet is sufficient to place modest constraints on the speed of the jet on these scales, requiring β ≳ 0.5. Despite the indication of jet precession in the total intensity structure, the polarization images suggest instead a jet re-directed at least twice by collisions with the external medium. Surprisingly, the electric vector position angles in the main body of the jet are neither longitudinal nor transverse, but make an angle of about 55° with the jet axis in the middle, while along the edges the vectors are transverse, suggesting a helical magnetic field. There is no significant Faraday rotation in the source, so that is not the cause of the twist. The fractional polarization in the jet averages 25% and is higher at the edges. In a companion paper, Roberts and Wardle show that differential Doppler boosting in a diverging relativistic velocity field can explain the electric vector pattern in the jet.

  20. Non-linear laws of echoic memory and auditory change detection in humans

    OpenAIRE

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-01-01

    Background: The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results: Change-N1 was elicited by a simple paradigm with two to...

  1. Hybrid MPI-OpenMP Parallelism in the ONETEP Linear-Scaling Electronic Structure Code: Application to the Delamination of Cellulose Nanofibrils.

    Science.gov (United States)

    Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton

    2014-11-11

    We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations from an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.

  2. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings

    Energy Technology Data Exchange (ETDEWEB)

    Pavanello, Michele [Department of Chemistry, Rutgers University, Newark, New Jersey 07102-1811 (United States); Van Voorhis, Troy [Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139-4307 (United States); Visscher, Lucas [Amsterdam Center for Multiscale Modeling, VU University, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Neugebauer, Johannes [Theoretische Organische Chemie, Organisch-Chemisches Institut der Westfaelischen Wilhelms-Universitaet Muenster, Corrensstrasse 40, 48149 Muenster (Germany)

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  3. Electrostatic interactions in finite systems treated with periodic boundary conditions: application to linear-scaling density functional theory.

    Science.gov (United States)

    Hine, Nicholas D M; Dziedzic, Jacek; Haynes, Peter D; Skylaris, Chris-Kriton

    2011-11-28

    We present a comparison of methods for treating the electrostatic interactions of finite, isolated systems within periodic boundary conditions (PBCs), within density functional theory (DFT), with particular emphasis on linear-scaling (LS) DFT. Often, PBCs are not physically realistic but are an unavoidable consequence of the choice of basis set and the efficacy of using Fourier transforms to compute the Hartree potential. In such cases the effects of PBCs on the calculations need to be avoided, so that the results obtained represent the open rather than the periodic boundary. The very large systems encountered in LS-DFT make the demands of the supercell approximation for isolated systems more difficult to manage, and we show cases where the open boundary (infinite cell) result cannot be obtained from extrapolation of calculations from periodic cells of increasing size. We discuss, implement, and test three very different approaches for overcoming or circumventing the effects of PBCs: truncation of the Coulomb interaction combined with padding of the simulation cell, approaches based on the minimum image convention, and the explicit use of open boundary conditions (OBCs). We have implemented these approaches in the ONETEP LS-DFT program and applied them to a range of systems, including a polar nanorod and a protein. We compare their accuracy, complexity, and rate of convergence with simulation cell size. We demonstrate that corrective approaches within PBCs can achieve the OBC result more efficiently and accurately than pure OBC approaches.

  4. Assimilating Non-linear Effects of Customized Large-Scale Climate Predictors on Downscaled Precipitation over the Tropical Andes

    Science.gov (United States)

    Molina, J. M.; Zaitchik, B. F.

    2016-12-01

    Recent findings considering high CO2 emission scenarios (RCP8.5) suggest that the tropical Andes may experience a massive warming and a significant precipitation increase (decrease) during the wet (dry) seasons by the end of the 21st century. Variations in rainfall-streamflow relationships and seasonal crop yields significantly affect human development in this region and make local communities highly vulnerable to climate change and variability. We developed an expert-informed empirical statistical downscaling (ESD) algorithm to explore and construct robust global climate predictors to perform skillful RCP8.5 projections of in-situ March-May (MAM) precipitation required for impact modeling and adaptation studies. We applied our framework to a topographically complex region of the Colombian Andes where a number of previous studies have reported El Niño-Southern Oscillation (ENSO) as the main driver of climate variability. Supervised machine learning algorithms were trained with customized and bias-corrected predictors from NCEP reanalysis, and a cross-validation approach was implemented to assess both predictive skill and model selection. We found weak and non-significant teleconnections between precipitation and lagged seasonal surface temperatures over the Niño 3.4 domain, which suggests that ENSO fails to explain MAM rainfall variability in the study region. In contrast, a series of Sea Level Pressure (SLP) over American Samoa - likely associated with the South Pacific Convergence Zone (SPCZ) - explains more than 65% of the precipitation variance. The best prediction skill was obtained with Selected Generalized Additive Models (SGAM) given their ability to capture linear/nonlinear relationships present in the data. While the SPCZ-related series exhibited a positive linear effect in the rainfall response, SLP predictors in the north Atlantic and central equatorial Pacific showed nonlinear effects. A multimodel (MIROC, CanESM2 and CCSM) ensemble of ESD projections revealed

  5. Confirmation of linear system theory prediction: Changes in Herrnstein's k as a function of changes in reinforcer magnitude.

    Science.gov (United States)

    McDowell, J J; Wood, H M

    1984-03-01

    Eight human subjects pressed a lever on a range of variable-interval schedules for 0.25 cent to 35.0 cent per reinforcement. Herrnstein's hyperbola described seven of the eight subjects' response-rate data well. For all subjects, the y-asymptote of the hyperbola increased with increasing reinforcer magnitude and its reciprocal was a linear function of the reciprocal of reinforcer magnitude. These results confirm predictions made by linear system theory; they contradict formal properties of Herrnstein's account and of six other mathematical accounts of single-alternative responding.
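    The account being tested has a compact algebraic form; as a reference sketch (standard notation, not necessarily the authors'), Herrnstein's hyperbola and the reciprocal relation confirmed here can be written as:

```latex
% Herrnstein's hyperbola: response rate R as a function of reinforcement rate r
R = \frac{k\,r}{r + r_e}
% Finding reported above: the asymptote k grows with reinforcer magnitude m such that
\frac{1}{k} = a + \frac{b}{m}
```

    Here k is the y-asymptote and r_e the reinforcement from extraneous sources; the linear relation between 1/k and 1/m is the linear-system-theory prediction that the study confirms.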

  6. OBJECT-ORIENTED CHANGE DETECTION BASED ON MULTI-SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2016-06-01

    The change detection of remote sensing images means analysing the change information quantitatively and recognizing the change types of the surface coverage data in different time phases. With the appearance of high-resolution remote sensing imagery, object-oriented change detection methods have arisen. In this paper, we investigate a multi-scale approach for high-resolution images, which includes multi-scale segmentation, multi-scale feature selection and multi-scale classification. Experimental results show that this method has a stronger advantage than the traditional single-scale method for high-resolution remote sensing image change detection.

  7. Color change of Blue butterfly wing scales in an air - Vapor ambient

    Science.gov (United States)

    Kertész, Krisztián; Piszter, Gábor; Jakab, Emma; Bálint, Zsolt; Vértesy, Zofia; Biró, László Péter

    2013-09-01

    Photonic crystals are periodic dielectric nanocomposites which have photonic band gaps that forbid the propagation of light within certain frequency ranges. The optical response of such nanoarchitectures to chemical changes in the environment is determined by the spectral change of the reflected light, and depends on the composition of the ambient atmosphere and on the nanostructure characteristics. We carried out reflectance measurements on closely related Blue lycaenid butterfly males possessing so-called "pepper-pot" type photonic nanoarchitectures in the scales covering their dorsal wing surfaces. Experiments were carried out by changing the concentration and nature of test vapors while monitoring the spectral variations in time. All tests were done with the sample temperature set at and below room temperature. The spectral changes were found to vary linearly with increasing concentration, and the signal amplitude is higher at lower temperatures. The mechanism of reflectance spectra modification is based on capillary condensation of the vapors penetrating the nanostructure. These structures of natural origin may serve as cheap, environmentally friendly and biodegradable sensor elements. The study of these nanoarchitectures of biological origin could be the source of various new bioinspired systems.

  8. Color change of Blue butterfly wing scales in an air – Vapor ambient

    International Nuclear Information System (INIS)

    Kertész, Krisztián; Piszter, Gábor; Jakab, Emma; Bálint, Zsolt; Vértesy, Zofia; Biró, László Péter

    2013-01-01

    Photonic crystals are periodic dielectric nanocomposites which have photonic band gaps that forbid the propagation of light within certain frequency ranges. The optical response of such nanoarchitectures to chemical changes in the environment is determined by the spectral change of the reflected light, and depends on the composition of the ambient atmosphere and on the nanostructure characteristics. We carried out reflectance measurements on closely related Blue lycaenid butterfly males possessing so-called “pepper-pot” type photonic nanoarchitectures in the scales covering their dorsal wing surfaces. Experiments were carried out by changing the concentration and nature of test vapors while monitoring the spectral variations in time. All tests were done with the sample temperature set at and below room temperature. The spectral changes were found to vary linearly with increasing concentration, and the signal amplitude is higher at lower temperatures. The mechanism of reflectance spectra modification is based on capillary condensation of the vapors penetrating the nanostructure. These structures of natural origin may serve as cheap, environmentally friendly and biodegradable sensor elements. The study of these nanoarchitectures of biological origin could be the source of various new bioinspired systems.

  9. Color change of Blue butterfly wing scales in an air – Vapor ambient

    Energy Technology Data Exchange (ETDEWEB)

    Kertész, Krisztián, E-mail: kertesz.krisztian@ttk.mta.hu [Institute of Technical Physics and Materials Science, Centre for Natural Sciences, H-1525 Budapest, PO Box 49, Hungary(http://www.nanotechnology.hu) (Hungary); Piszter, Gábor [Institute of Technical Physics and Materials Science, Centre for Natural Sciences, H-1525 Budapest, PO Box 49, Hungary(http://www.nanotechnology.hu) (Hungary); Jakab, Emma [Institute of Materials and Environmental Chemistry, Centre for Natural Sciences, H-1525 Budapest, PO Box 17 (Hungary); Bálint, Zsolt [Hungarian Natural History Museum, Baross utca 13, H-1088 Budapest (Hungary); Vértesy, Zofia; Biró, László Péter [Institute of Technical Physics and Materials Science, Centre for Natural Sciences, H-1525 Budapest, PO Box 49, Hungary(http://www.nanotechnology.hu) (Hungary)

    2013-09-15

    Photonic crystals are periodic dielectric nanocomposites which have photonic band gaps that forbid the propagation of light within certain frequency ranges. The optical response of such nanoarchitectures to chemical changes in the environment is determined by the spectral change of the reflected light, and depends on the composition of the ambient atmosphere and on the nanostructure characteristics. We carried out reflectance measurements on closely related Blue lycaenid butterfly males possessing so-called “pepper-pot” type photonic nanoarchitectures in the scales covering their dorsal wing surfaces. Experiments were carried out by changing the concentration and nature of test vapors while monitoring the spectral variations in time. All tests were done with the sample temperature set at and below room temperature. The spectral changes were found to vary linearly with increasing concentration, and the signal amplitude is higher at lower temperatures. The mechanism of reflectance spectra modification is based on capillary condensation of the vapors penetrating the nanostructure. These structures of natural origin may serve as cheap, environmentally friendly and biodegradable sensor elements. The study of these nanoarchitectures of biological origin could be the source of various new bioinspired systems.

  10. Linear and non-linear optics of nano-scale 2′,7′-dichloro-fluorescein/FTO optical system: Bandgap and dielectric analysis

    Science.gov (United States)

    Iqbal, Javed; Yahia, I. S.; Zahran, H. Y.; AlFaify, S.; AlBassam, A. M.; El-Naggar, A. M.

    2016-12-01

    2′,7′-Dichloro-fluorescein (DCF) is a promising organic semiconductor material for different technological applications such as solar cells, photodiodes and Schottky diodes. A DCF thin film on conductive (FTO) glass was prepared by a low-cost spin-coating technique. The spectrophotometric data (absorbance, reflectance and transmittance) were measured in the 350-2500 nm wavelength range at normal incidence. The linear refractive index (n) and absorption index (k) were computed using the Fresnel equations. The optical band gap was evaluated and two band gaps were found: (1) one related to the band gap of the FTO/glass substrate, equal to 3.4 eV, and (2) a second related to the absorption edge of DCF, equal to 2.25 eV. The non-linear parameters, namely the non-linear refractive index (n2) and the optical susceptibility χ(3), were evaluated by a spectroscopic method based on the refractive index. Both n2 and χ(3) increased rapidly with increasing wavelength, with red-shifted absorption. This work suggests a new approach to using FTO glass for a new generation of optical devices and technology.

  11. The Non-linear Trajectory of Change in Play Profiles of Three Children in Psychodynamic Play Therapy.

    Science.gov (United States)

    Halfon, Sibel; Çavdar, Alev; Orsucci, Franco; Schiepek, Gunter K; Andreassi, Silvia; Giuliani, Alessandro; de Felice, Giulio

    2016-01-01

    Aim: Even though there is substantial evidence that play-based therapies produce significant change, the specific play processes in treatment remain unexamined. For that purpose, processes of change in long-term psychodynamic play therapy are assessed through a repeated systematic assessment of three children's "play profiles," which reflect patterns of organization among play variables that contribute to play activity in therapy, indicative of the children's coping strategies, and an expression of their internal world. The main aims of the study are to investigate the kinds of play profiles expressed in treatment, and to test whether there is emergence of new and more adaptive play profiles using dynamic systems theory as a methodological framework. Methods and Procedures: Each session from the long-term psychodynamic treatment (mean number of sessions = 55) of three 6-year-old good-outcome cases presenting with Separation Anxiety was recorded, transcribed and coded using items from the Children's Play Therapy Instrument (CPTI), created to assess the play activity of children in psychotherapy, generating discrete and measurable units of play activity arranged along a continuum of four play profiles: "Adaptive," "Inhibited," "Impulsive," and "Disorganized." The play profiles were clustered using the K-means algorithm, generating seven discrete states characterizing the course of treatment, and the transitions between these states were analyzed by Markov Transition Matrix, Recurrence Quantification Analysis (RQA) and odds ratios comparing the first and second halves of psychotherapy. Results: The Markov transitions between the states scaled almost perfectly and also showed the ergodicity of the system, meaning that the child can reach any state or shift to another one in play. The RQA and odds ratios showed two trends of change, first concerning the decrease in the use of "less adaptive" strategies, second regarding the reduction of play interruptions. Conclusion
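    For reference, the Markov transition matrix step is easy to reproduce. A minimal sketch with a fabricated seven-state session sequence (not the study's data):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-stochastic matrix of empirical transition probabilities
    P[i, j] = Pr(next state = j | current state = i)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical sequence of 7 clustered play-profile states over successive sessions
seq = [0, 1, 1, 2, 3, 2, 2, 4, 5, 5, 6, 4, 2, 1, 0, 0, 3, 3, 5, 6]
print(transition_matrix(seq, n_states=7).round(2))
```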

  12. Watershed scale impacts of bioenergy, landscape changes, and ecosystem response

    Science.gov (United States)

    Chaubey, Indrajeet; Cibin, Raj; Chiang, Li-Chi

    2013-04-01

    In recent years, high US gasoline prices and national security concerns have prompted a renewed interest in alternative fuel sources to meet increasing energy demands, particularly by the transportation sector. Food and animal feed crops, such as corn and soybean, sugarcane, residue from these crops, cellulosic perennial crops grown specifically to produce bioenergy (e.g., switchgrass, Miscanthus, mixed grasses), and fast-growing trees (e.g., hybrid poplar) are expected to provide the majority of the biofeedstock for energy production. One of the grand challenges in supplying large quantities of grain-based and lignocellulosic materials for the production of biofuels is ensuring that they are produced in an environmentally sustainable and economically viable manner. Feedstock selection will vary geographically based on regional adaptability, productivity, and reliability. Changes in land use and management practices related to biofeedstock production may have potential impacts on water quantity and quality, sediments, and pesticide and nutrient losses, and these impacts may be exacerbated by climate variability and change. We have made many improvements to currently available biophysical models (e.g., the Soil and Water Assessment Tool, or SWAT, model) to evaluate the sustainability of energy crop production. We have used the improved model to evaluate the impacts of both annual (e.g., corn) and perennial bioenergy crops (e.g., Miscanthus and switchgrass) on hydrology and water quality under the following plausible bioenergy crop production scenarios: (1) production in highly erodible areas; (2) in agriculturally marginal areas; (3) in pasture areas; (4) crop residue (corn stover) removal; and (5) combinations of the above scenarios. Overall, the results indicated an improvement in water quality with the introduction of perennial energy crops. Stream flow at the watershed outlet was reduced under the energy crop production scenarios, with reductions ranging between 0.3% and 5% across scenarios. Erosion and sediment

  13. On linear correlation between interfacial tension of water-solvent interface solubility of water in organic solvents and parameters of diluent effect scale

    International Nuclear Information System (INIS)

    Mezhov, Eh.A.; Khananashvili, N.L.; Shmidt, V.S.

    1988-01-01

    A linear correlation is established between the solubility of water in water-immiscible organic solvents and the interfacial tension of the water-solvent interface, on the one hand, and the π* solvent-effect scale parameters of these solvents, on the other. This allows the values of interfacial tension and water solubility for the corresponding systems to be predicted from the tabulated solvent-effect parameters or the π* value of each solvent. It is shown that the solvent-effect scale predicts these values more accurately than other known solvent scales, since, in contrast to the other scales, it characterizes solvents that are in equilibrium with water.

  14. Long Term Large Scale river nutrient changes across the UK

    Science.gov (United States)

    Bell, Victoria; Naden, Pam; Tipping, Ed; Davies, Helen; Davies, Jessica; Dragosits, Ulli; Muhammed, Shibu; Quinton, John; Stuart, Marianne; Whitmore, Andy; Wu, Lianhai

    2017-04-01

    During recent decades and centuries, pools and fluxes of Carbon, Nitrogen and Phosphorus (C, N and P) in UK rivers and ecosystems have been transformed by the spread and fertiliser-based intensification of agriculture (necessary to sustain human populations), by atmospheric pollution, by human waste (rising in line with population growth), and now by climate change. The principal objective of the UK's NERC-funded Macronutrients LTLS research project has been to account for observable terrestrial and aquatic pools, concentrations and fluxes of C, N and P on the basis of past inputs, biotic and abiotic interactions, and transport processes. More specifically, over the last 200 years, what have been the temporal responses of plant and soil nutrient pools in different UK catchments to nutrient enrichment, and what have been the consequent effects on nutrient transfers from land to the atmosphere, freshwaters and estuaries? The work described here addresses the second question by providing an integrated quantitative description of the interlinked land and water pools and annual fluxes of C, N and P for UK catchments over time. A national-scale modelling environment has been developed, combining simple physically-based gridded models that can be parameterised using recent observations before application to long timescales. The LTLS Integrated Model (LTLS-IM) uses readily-available driving data (climate, land-use, nutrient inputs, topography), and model estimates of both terrestrial and freshwater nutrient loads have been compared with measurements from sites across the UK. Here, the focus is on the freshwater nutrient component of the LTLS-IM, but the terrestrial nutrient inputs required for this are provided by models of nutrient processes in semi-natural and agricultural systems, and from simple models of nutrients arising from human waste. In the freshwater model, lateral routing of dissolved and particulate nutrients and within-river processing such as

  15. LANDIS PRO: a landscape model that predicts forest composition and structure changes at regional scales

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Jacob S. Fraser; Frank R. Thompson; Stephen R. Shifley; Martin A. Spetich

    2014-01-01

    LANDIS PRO predicts forest composition and structure changes incorporating species-, stand-, and landscape-scale processes at regional scales. Species-scale processes include tree growth, establishment, and mortality. Stand-scale processes contain density- and size-related resource competition that regulates self-thinning and seedling establishment. Landscape-scale...

  16. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    Science.gov (United States)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium to regional scale studies in the scientific literature or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated, physically based FLO-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This made it possible to assign flow depths to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature and one curve specifically for our case study area were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) by the building area and number of floors

  17. A linear regression model for predicting PNW estuarine temperatures in a changing climate

    Science.gov (United States)

    Pacific Northwest coastal regions, estuaries, and associated ecosystems are vulnerable to the potential effects of climate change, especially to changes in nearshore water temperature. While predictive climate models simulate future air temperatures, no such projections exist for...
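    The abstract is truncated, but the underlying technique, an ordinary least-squares regression of nearshore water temperature on predictors such as air temperature, is simple to illustrate. The numbers below are invented, not PNW observations.

```python
import numpy as np

# Hypothetical paired observations: monthly mean air (°C) and estuary water (°C) temperatures
air = np.array([6.1, 7.4, 9.0, 11.2, 14.0, 16.8, 19.1, 18.9, 16.0, 12.1, 8.5, 6.4])
water = np.array([7.0, 7.8, 9.1, 10.8, 13.2, 15.5, 17.6, 17.8, 15.6, 12.5, 9.4, 7.5])

# Ordinary least squares: water = a + b * air
b, a = np.polyfit(air, water, 1)
print(f"slope = {b:.2f} °C/°C, intercept = {a:.2f} °C")

# Apply the fitted relation to a projected future air temperature
print("projected water temperature at 21 °C air:", round(a + b * 21.0, 1))
```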

  18. Regional scaling of annual mean precipitation and water availability with global temperature change

    Science.gov (United States)

    Greve, Peter; Gudmundsson, Lukas; Seneviratne, Sonia I.

    2018-03-01

    Changes in regional water availability belong to the most crucial potential impacts of anthropogenic climate change, but are highly uncertain. It is thus of key importance for stakeholders to assess the possible implications of different global temperature thresholds on these quantities. Using a subset of climate model simulations from the fifth phase of the Coupled Model Intercomparison Project (CMIP5), we derive here the sensitivity of regional changes in precipitation and in precipitation minus evapotranspiration to global temperature changes. The simulations span the full range of available emission scenarios, and the sensitivities are derived using a modified pattern scaling approach. The applied approach assumes linear relationships on global temperature changes while thoroughly addressing associated uncertainties via resampling methods. This allows us to assess the full distribution of the simulations in a probabilistic sense. Northern high-latitude regions display robust responses towards wetting, while subtropical regions display a tendency towards drying but with a large range of responses. Even though both internal variability and the scenario choice play an important role in the overall spread of the simulations, the uncertainty stemming from the climate model choice usually accounts for about half of the total uncertainty in most regions. We additionally assess the implications of limiting global mean temperature warming to values below (i) 2 K or (ii) 1.5 K (as stated within the 2015 Paris Agreement). We show that opting for the 1.5 K target might just slightly influence the mean response, but could substantially reduce the risk of experiencing extreme changes in regional water availability.
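    A minimal sketch of the pattern-scaling idea described here: regress a regional change on global-mean temperature change. The arrays are synthetic stand-ins, not CMIP5 output, and the paper's resampling-based uncertainty treatment is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 90
dT_global = np.linspace(0.0, 3.0, n_years)            # global-mean warming (K)

# Synthetic regional precipitation changes (%) for a 10 x 10 grid:
# each cell has its own sensitivity (% per K) plus internal variability
sensitivity = rng.normal(2.0, 3.0, size=(10, 10))
noise = rng.normal(0.0, 2.0, size=(n_years, 10, 10))
dP_regional = dT_global[:, None, None] * sensitivity + noise

# Pattern scaling: least-squares slope of dP against dT_global in every grid cell
X = dT_global - dT_global.mean()
Y = dP_regional - dP_regional.mean(axis=0)
slopes = (X[:, None, None] * Y).sum(axis=0) / (X ** 2).sum()
print(slopes.round(1))                                 # recovered sensitivity pattern (% per K)
```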

  19. The effect of changes in sea surface temperature on linear growth of Porites coral in Ambon Bay

    International Nuclear Information System (INIS)

    Corvianawatie, Corry; Putri, Mutiara R.; Cahyarini, Sri Y.

    2015-01-01

    Coral is one of the most important organisms in the coral reef ecosystem. There are several factors affecting coral growth, one of them being changes in sea surface temperature (SST). The purpose of this research is to understand the influence of SST variability on the annual linear growth of Porites coral taken from Ambon Bay. The annual coral linear growth was calculated and compared to the annual SST from the Extended Reconstructed Sea Surface Temperature version 3b (ERSST v3b) model. Coral growth was calculated using the Coral X-radiograph Density System (CoralXDS) software, with coral sample X-radiographs used as input data. The chronology was developed by counting the coral’s annual growth bands; a pair of high- and low-density bands observed in the coral’s X-radiograph represents one year of coral growth. The results of this study show that the Porites coral record extends from 2001 to 2009 and had an average growth rate of 1.46 cm/year. Statistical analysis shows that the annual coral linear growth declined by 0.015 cm/year while the annual SST declined by 0.013°C/year. SST and the annual linear growth of Porites coral in Ambon Bay are not significantly correlated (r=0.304, n=9, p>0.05). This indicates that annual SST variability does not significantly influence the linear growth of Porites coral from Ambon Bay. It is suggested that sedimentation load, salinity, pH or other environmental factors may affect annual linear coral growth
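    As a reference for the reported statistics, the correlation and trend estimates can be reproduced with a few lines; the series below are placeholders with n = 9 annual values, not the Ambon Bay measurements.

```python
import numpy as np
from scipy import stats

years = np.arange(2001, 2010)
# Hypothetical annual values standing in for the measured series
growth = np.array([1.55, 1.50, 1.52, 1.47, 1.44, 1.46, 1.41, 1.43, 1.38])  # cm/year
sst = np.array([29.1, 29.2, 29.0, 29.1, 28.9, 29.0, 28.9, 28.8, 28.9])     # °C

r, p = stats.pearsonr(sst, growth)                 # correlation between SST and growth
trend_growth = np.polyfit(years, growth, 1)[0]     # linear trend in cm/year per year
trend_sst = np.polyfit(years, sst, 1)[0]           # linear trend in °C per year
print(f"r = {r:.3f}, p = {p:.3f}, growth trend = {trend_growth:.4f}, SST trend = {trend_sst:.4f}")
```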

  20. The effect of changes in sea surface temperature on linear growth of Porites coral in Ambon Bay

    Energy Technology Data Exchange (ETDEWEB)

    Corvianawatie, Corry, E-mail: corvianawatie@students.itb.ac.id; Putri, Mutiara R., E-mail: mutiara.putri@fitb.itb.ac.id [Oceanography Study Program, Bandung Institute of Technology (ITB), Jl. Ganesha 10 Bandung (Indonesia); Cahyarini, Sri Y., E-mail: yuda@geotek.lipi.go.id [Research Center for Geotechnology, Indonesian Institute of Sciences (LIPI), Bandung (Indonesia)

    2015-09-30

    Coral is one of the most important organisms in the coral reef ecosystem. There are several factors affecting coral growth, one of them being changes in sea surface temperature (SST). The purpose of this research is to understand the influence of SST variability on the annual linear growth of Porites coral taken from Ambon Bay. The annual coral linear growth was calculated and compared to the annual SST from the Extended Reconstructed Sea Surface Temperature version 3b (ERSST v3b) model. Coral growth was calculated using the Coral X-radiograph Density System (CoralXDS) software, with coral sample X-radiographs used as input data. The chronology was developed by counting the coral’s annual growth bands; a pair of high- and low-density bands observed in the coral’s X-radiograph represents one year of coral growth. The results of this study show that the Porites coral record extends from 2001 to 2009 and had an average growth rate of 1.46 cm/year. Statistical analysis shows that the annual coral linear growth declined by 0.015 cm/year while the annual SST declined by 0.013°C/year. SST and the annual linear growth of Porites coral in Ambon Bay are not significantly correlated (r=0.304, n=9, p>0.05). This indicates that annual SST variability does not significantly influence the linear growth of Porites coral from Ambon Bay. It is suggested that sedimentation load, salinity, pH or other environmental factors may affect annual linear coral growth.

  1. Do changes on MCMI-II personality disorder scales in short-term psychotherapy reflect trait or state changes?

    DEFF Research Database (Denmark)

    Jensen, Hans Henrik; Mortensen, Erik Lykke; Lotz, Martin

    2008-01-01

    The Millon Clinical Multiaxial Inventory (MCMI) has become an important and commonly used instrument to assess personality functioning. Several studies report significant changes on MCMI personality disorder scales after psychological treatment. The aim of the study was to investigate whether pre-post-treatment changes in 39-session psychodynamic group psychotherapy as measured with the MCMI reflect real personality change or primarily reflect symptomatic state changes. The pre-post-treatment design included 236 psychotherapy outpatients. Personality changes were measured on the MCMI-II and symptomatic state changes on the Symptom Check List 90-R (SCL-90-R). The MCMI Schizoid, Avoidant, Self-defeating, and severe personality disorder scales revealed substantial changes, which could be predicted from changes on SCL-90-R global symptomatology (GSI) and on the SCL-90-R Depression scale. The MCMI Dependent personality score...

  2. Climate analysis at local scale in the context of climate change

    International Nuclear Information System (INIS)

    Quenol, H.

    2013-01-01

    Issues related to climate change increasingly concern the functioning of local-scale geo-systems. A global change will necessarily affect local climates. In this context, the potential impacts of climate change lead to numerous questions concerning adaptation. Despite numerous studies on the impact of projected global warming on different regions, global atmospheric models (GCMs) are not adapted to local scales and, as a result, impacts at local scales are still approximate. Although real progress in meso-scale atmospheric modeling has been made over the past years, no operational model is yet in use to simulate climate at local scales (of the order of ten metres). (author)

  3. The assessment of changes in brain volume using combined linear measurements

    International Nuclear Information System (INIS)

    Gomori, J.M.; Steiner, I.; Melamed, E.; Cooper, G.

    1984-01-01

    All linear measurements employed for evaluation of brain atrophy were performed on 148 computed tomograms of patients aged 28 to 84 years without evidence of any nervous system disorder. These included the size of the lateral, third and fourth ventricles, the width of the Sylvian and frontal interhemispheric fissures and cortical sulci, and the size of the pre-pontine cistern. Various parameters indicated a decrease in brain mass with age. Since the atrophic process is a diffuse phenomenon, several measurements evaluating separate brain regions were integrated. The bicaudate ratio and the Sylvian fissure ratio (representing central and cortical atrophy, respectively) were combined arithmetically, resulting in a correlation of 0.6390 with age (p<0.0005). Using canonical correlation analysis, a formula was obtained which combined measurements of the lateral and third ventricles, the Sylvian fissure and the pre-pontine cistern. This formula yielded a correlation of 0.67795 (p<0.0005). These linear measurements enable simple and reliable assessment of the reduction in brain volume during the normal aging process and in disorders accompanied by brain atrophy. (orig.)

  4. Near-linear cost increase to reduce climate-change risk

    NARCIS (Netherlands)

    Schaeffer, M.; Kram, T.; Meinshausen, M.; Vuuren, van D.P.; Hare, W.L.

    2008-01-01

    One approach in climate-change policy is to set normative long-term targets first and then infer the implied emissions pathways. An important example of a normative target is to limit the global-mean temperature change to a certain maximum. In general, reported cost estimates for limiting global

  5. Assessment of climate change impacts on rainfall using large scale ...

    Indian Academy of Sciences (India)

    Many of the applied techniques in water resources management can be directly or indirectly influenced by ... is based on large scale climate signals data around the world. In order ... predictand relationships are often very complex. ... constraints to solve the optimization problem. ... social, and environmental sustainability.

  6. Methods for assessment of climate variability and climate changes in different time-space scales

    International Nuclear Information System (INIS)

    Lobanov, V.; Lobanova, H.

    2004-01-01

    The main problem of hydrology and design support for water projects is connected with modern climate change and its impact on hydrological characteristics, both observed and designed. There are three main stages of this problem: how to extract climate variability and climate change from complex hydrological records; how to assess the contribution of climate change and its significance for a point and an area; and how to use the detected climate change for computation of design hydrological characteristics. The design hydrological characteristic is the main generalized information used for water management and design support. The first step of the research is the choice of a hydrological characteristic, which can be a traditional one (annual runoff for assessment of water resources, maxima, minima, etc.) as well as a new one characterizing the intra-annual function or intra-annual runoff distribution. For this aim a linear model has been developed which has two coefficients, connected with the amplitude and level (initial conditions) of the seasonal function, and one parameter characterizing the intensity of synoptic and macro-synoptic fluctuations within a year. Effective statistical methods have been developed for separating climate variability from climate change and extracting homogeneous components of three time scales from observed long-term time series: intra-annual, decadal and centennial. The first two are connected with climate variability and the last (centennial) with climate change. The efficiency of the new methods of decomposition and smoothing has been estimated by stochastic modeling as well as on synthetic examples. For the assessment of the contribution and statistical significance of modern climate change components, statistical criteria and methods have been used. The next step is connected with a generalization of the results of detected climate changes over the area and spatial modeling. For determination of a homogeneous region with the same

  7. Climate change adaptation strategies by small-scale farmers in ...

    African Journals Online (AJOL)

    Mburu

    Climate change is a great environmental challenge facing humanity today. In Yatta District, residents report frequent crop failures, water shortages and relief food has become a frequent feature of their life. This study examines the adaptation strategies to climate change adopted by the dry-land farming communities in Yatta ...

  8. The Management of Large-Scale Change in Pakistani Education

    Science.gov (United States)

    Razzaq, Jamila; Forde, Christine

    2014-01-01

    This article argues that although there are increasing similarities in priorities across different national education systems, contextual differences raise questions about the replication of sets of change strategies based on particular understandings of the nature of educational change across these different systems. This article begins with an…

  9. Age related neuromuscular changes in sEMG of m. Tibialis Anterior using higher order statistics (Gaussianity & linearity test).

    Science.gov (United States)

    Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K

    2016-08-01

    Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared it with the experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) participants performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum and the Gaussianity and linearity test statistics were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of a 40% loss of motor units with half the number of fast fibers best correlated with the age-related change observed in the experimental sEMG higher-order statistical features. The simulated aging condition found by this study corresponds to the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.

  10. Linear disturbances on discontinuous permafrost: implications for thaw-induced changes to land cover and drainage patterns

    International Nuclear Information System (INIS)

    Williams, Tyler J; Quinton, William L; Baltzer, Jennifer L

    2013-01-01

    Within the zone of discontinuous permafrost, linear disturbances such as winter roads and seismic lines severely alter the hydrology, ecology, and ground thermal regime. Continued resource exploration in this environment has created a need to better understand the processes causing permafrost thaw and concomitant changes to the terrain and ground cover, in order to efficiently reduce the environmental impact of future exploration through the development of best management practices. In a peatland 50 km south of Fort Simpson, NWT, permafrost thaw and the resulting ground surface subsidence have produced water-logged linear disturbances that appear not to be regenerating permafrost, and in many cases have altered the land cover type to resemble that of a wetland bog or fen. Subsidence alters the hydrology of plateaus, developing a fill and spill drainage pattern that allows some disturbances to be hydrologically connected with adjacent wetlands via surface flow paths during periods of high water availability. The degree of initial disturbance is an important control on the extent of permafrost thaw and thus the overall potential recovery of the linear disturbance. Low impact techniques that minimize ground surface disturbance and maintain original surface topography by eliminating windrows are needed to minimize the impact of these linear disturbances. (letter)

  11. Scaling of vegetation indices for environmental change studies

    International Nuclear Information System (INIS)

    Qi, J.; Huete, A.; Sorooshian, S.; Chehbouni, A.; Kerr, Y.

    1992-01-01

    The spatial integration of physical parameters in remote sensing studies is of critical concern when evaluating the global biophysical processes on the earth's surface. When high resolution physical parameters, such as vegetation indices, are degraded for integration into global scale studies, they differ from lower spatial resolution data due to spatial variability and the method by which these parameters are integrated. In this study, multi-spatial resolution data sets of SPOT and ground based data obtained at Walnut Gulch Experimental Watershed in southern Arizona, US during MONSOON '90 were used. These data sets were examined to study the variations of the vegetation index parameters when integrated into coarser resolutions. Different integration methods (conventional mean and Geostatistical mean) were used in simulations of high-to-low resolutions. The sensitivity of the integrated parameters were found to vary with both the spatial variability of the area and the integration methods. Modeled equations describing the scale-dependency of the vegetation index are suggested
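    A minimal numerical illustration of the scale-dependence issue the abstract raises: a non-linear index such as NDVI computed from aggregated reflectances differs from the aggregate of the fine-scale index values. The reflectance values are invented.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

rng = np.random.default_rng(3)
# Fine-resolution red/NIR reflectances over a heterogeneous 8 x 8 pixel block
red = rng.uniform(0.05, 0.30, size=(8, 8))
nir = rng.uniform(0.20, 0.60, size=(8, 8))

mean_of_ndvi = ndvi(red, nir).mean()          # aggregate the fine-scale index values
ndvi_of_mean = ndvi(red.mean(), nir.mean())   # aggregate the reflectances first, then compute the index
print(f"mean of NDVI = {mean_of_ndvi:.3f}, NDVI of means = {ndvi_of_mean:.3f}")
```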

  12. Regional-Scale Climate Change: Observations and Model Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, Raymond S; Diaz, Henry F

    2010-12-14

    This collaborative proposal addressed key issues in understanding the Earth's climate system, as highlighted by the U.S. Climate Science Program. The research focused on documenting past climatic changes and on assessing future climatic changes based on suites of global and regional climate models. Geographically, our emphasis was on the mountainous regions of the world, with a particular focus on the Neotropics of Central America and the Hawaiian Islands. Mountain regions are zones where large variations in ecosystems occur due to the strong climate zonation forced by the topography. These areas are particularly susceptible to changes in critical ecological thresholds, and we conducted studies of changes in phenological indicators based on various climatic thresholds.

  13. Tracking Electroencephalographic Changes Using Distributions of Linear Models: Application to Propofol-Based Depth of Anesthesia Monitoring.

    Science.gov (United States)

    Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2017-04-01

    Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features, and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth-of-anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (autoregressive moving average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observer's assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA(2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians are used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
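    A minimal sketch of the feature-extraction step: fit ARMA(2,1) models to sliding EEG windows and collect the parameter vectors whose distributions can then be compared across brain states. The signal here is surrogate noise, and the use of statsmodels' ARIMA class with order (2, 0, 1) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
fs = 128                                   # assumed sampling rate (Hz)
eeg = rng.standard_normal(fs * 10)         # 10 s surrogate "EEG"

win, step = 2 * fs, fs                     # 2 s windows, 1 s hop
params = []
for start in range(0, len(eeg) - win + 1, step):
    segment = eeg[start:start + win]
    fit = ARIMA(segment, order=(2, 0, 1)).fit()   # ARMA(2,1) with a constant term
    params.append(fit.params)                     # constant, AR and MA coefficients, variance

params = np.vstack(params)
print(params.shape)   # one parameter vector per window; compare their distributions per state
```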

  14. Using Scaling to Understand, Model and Predict Global Scale Anthropogenic and Natural Climate Change

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.

    2014-12-01

    The atmosphere is variable over twenty orders of magnitude in time (≈10^-3 to 10^17 s) and almost all of the variance is in the spectral "background", which we show can be divided into five scaling regimes: weather, macroweather, climate, macroclimate and megaclimate. We illustrate this with instrumental and paleo data. Based on the sign of the fluctuation exponent H, we argue that while the weather is "what you get" (H>0: fluctuations increasing with scale), it is macroweather (H<0: fluctuations decreasing with scale) - not climate - "that you expect". The conventional framework that treats the background as close to white noise and focuses on quasi-periodic variability assumes a spectrum that is in error by a factor of a quadrillion (≈ 10^15). Using this scaling framework, we can quantify the natural variability, distinguish it from anthropogenic variability, test various statistical hypotheses and make stochastic climate forecasts. For example, we estimate that the probability that the warming is simply a giant century-long natural fluctuation is less than 1%, most likely less than 0.1%, and we estimate return periods for natural warming events of different strengths and durations, including the slow-down ("pause") in the warming since 1998. The return period for the pause was found to be 20-50 years, i.e. not very unusual; however, it immediately follows a 6-year "pre-pause" warming event of almost the same magnitude with a similar return period (30-40 years). To improve on these unconditional estimates, we can use scaling models to exploit the long-range memory of the climate process to make accurate stochastic forecasts of the climate, including the pause. We illustrate stochastic forecasts on monthly and annual scale series of global and northern hemisphere surface temperatures. We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow: for example, using hindcasts we find that at 10 year forecast horizons we can still explain ≈ 15% of the
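    A minimal sketch of one common way to estimate the fluctuation exponent H that separates the regimes (Haar fluctuation analysis); the temperature series here is synthetic and the estimator is illustrative, not necessarily the one used by the authors.

```python
import numpy as np

def haar_fluctuations(x, lags):
    """RMS Haar fluctuation: mean of the second half minus mean of the first
    half of each interval of length `lag`, as a function of lag."""
    out = []
    for lag in lags:
        half = lag // 2
        n = (len(x) // lag) * lag
        blocks = x[:n].reshape(-1, lag)
        dif = blocks[:, half:].mean(axis=1) - blocks[:, :half].mean(axis=1)
        out.append(np.sqrt((dif ** 2).mean()))
    return np.array(out)

rng = np.random.default_rng(11)
series = np.cumsum(rng.standard_normal(4096)) * 0.01 + rng.standard_normal(4096)
lags = np.array([4, 8, 16, 32, 64, 128, 256])
F = haar_fluctuations(series, lags)
H = np.polyfit(np.log(lags), np.log(F), 1)[0]   # slope of the log-log fit estimates H
print(round(H, 2))                               # H > 0: weather-like; H < 0: macroweather-like
```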

  15. Assessment of climate change impacts on rainfall using large scale

    Indian Academy of Sciences (India)

    In this model, using the outputs from a GCM, the rainfall at the Zayandehrood dam is projected under two climate change scenarios. The most effective variables were identified among 26 predictor variables. Comparison of the results of the two models shows that the developed SVM model has smaller errors in monthly rainfall ...

  16. Scaling Factor Estimation Using an Optimized Mass Change Strategy, Part 1: Theory

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Fernández, Pelayo Fernández; Brincker, Rune

    2007-01-01

    In natural input modal analysis, only un-scaled mode shapes can be obtained. The mass change method is, in many cases, the simplest way to estimate the scaling factors; it involves repeated modal testing after changing the mass at different points of the structure where the mode shapes are known...... The scaling factors are determined using the natural frequencies and mode shapes of both the modified and the unmodified structure. However, the uncertainty of the scaling factor estimation depends on the modal analysis and on the mass change strategy (number, magnitude and location of the masses) used to modify...
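
    For orientation, a commonly quoted form of the mass-change estimate of the scaling factor is reproduced below. It is a sketch taken from the general literature on the method, with generic symbols, and is not necessarily the exact expression derived in this paper.

```latex
% Common mass-change estimate of the scaling factor \alpha_r for mode r:
% \omega_r and \omega_r^m are the natural frequencies before and after
% adding the mass change \Delta M, and \psi_r is the un-scaled mode shape.
\alpha_r \simeq \sqrt{\frac{\omega_r^{2}-\left(\omega_r^{m}\right)^{2}}
                           {\left(\omega_r^{m}\right)^{2}\,\psi_r^{T}\,\Delta M\,\psi_r}},
\qquad
\phi_r = \alpha_r\,\psi_r \quad \text{(mass-normalized mode shape)}.
```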

  17. Scaling of the magnetic entropy change of Fe3−xMnxSi

    International Nuclear Information System (INIS)

    Said, M.R.; Hamam, Y.A.; Abu-Aljarayesh, I.

    2014-01-01

    The magnetic entropy change of Fe3−xMnxSi (for x = 1.15, 1.3 and 1.5) has been extracted from isothermal magnetization measurements near the Curie temperature. We used the scaling hypotheses of the thermodynamic potentials to scale the magnetic entropy change to a single universal curve for each sample. The effect of the exchange field and the Curie temperature on the maximum entropy change is discussed. - Highlights: • The maximum of the magnetic entropy change occurs at temperatures T > TC. • The exchange field enhances the magnetic entropy change. • The magnetic entropy change at TC is inversely proportional to TC. • Scaling hypothesis is used to scale the magnetic entropy change
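
    As background on how such curves are usually obtained, the entropy change is commonly evaluated from isothermal magnetization data through the Maxwell relation; the standard expression is reproduced below only as context for the record above.

```latex
% Magnetic entropy change from isothermal magnetization curves M(T,H),
% via the Maxwell relation (standard expression, SI units):
\Delta S_M\!\left(T, H_{\max}\right)
  = \mu_0 \int_{0}^{H_{\max}}
    \left(\frac{\partial M}{\partial T}\right)_{H} \mathrm{d}H .
```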

  18. Capturing subregional variability in regional-scale climate change vulnerability assessments of natural resources

    Science.gov (United States)

    Polly C. Buotte; David L. Peterson; Kevin S. McKelvey; Jeffrey A. Hicke

    2016-01-01

    Natural resource vulnerability to climate change can depend on the climatology and ecological conditions at a particular site. Here we present a conceptual framework for incorporating spatial variability in natural resource vulnerability to climate change in a regional-scale assessment. The framework was implemented in the first regional-scale vulnerability...

  19. Linear dimensional changes in plaster die models using different elastomeric materials

    Directory of Open Access Journals (Sweden)

    Jefferson Ricardo Pereira

    2010-09-01

    Full Text Available Dental impression is an important step in the preparation of prostheses since it provides the reproduction of anatomic and surface details of teeth and adjacent structures. The objective of this study was to evaluate the linear dimensional alterations in gypsum dies obtained with different elastomeric materials, using a resin coping impression technique with individual shells. A master cast made of stainless steel with fixed prosthesis characteristics and two prepared abutment teeth was used to obtain the impressions. Reference points (A, B, C, D, E and F) were recorded on the occlusal and buccal surfaces of the abutments to register the distances. The impressions were obtained using the following materials: polyether, mercaptan-polysulfide, addition silicone, and condensation silicone. The transfer impressions were made with custom trays and an irreversible hydrocolloid material and were poured with type IV gypsum. The distances between the identified points on the gypsum dies were measured using an optical microscope and the results were statistically analyzed by ANOVA (p < 0.05) and Tukey's test. The mean distances were as follows: addition silicone (AB = 13.6 µm, CD = 15.0 µm, EF = 14.6 µm, GH = 15.2 µm), mercaptan-polysulfide (AB = 36.0 µm, CD = 36.0 µm, EF = 39.6 µm, GH = 40.6 µm), polyether (AB = 35.2 µm, CD = 35.6 µm, EF = 39.4 µm, GH = 41.4 µm) and condensation silicone (AB = 69.2 µm, CD = 71.0 µm, EF = 80.6 µm, GH = 81.2 µm). All of the measurements on the gypsum dies were compared to those of the master cast. The results demonstrated that the addition silicone provided the best stability of the compounds tested, followed by polyether, polysulfide and condensation silicone. No statistical differences were found between the polyether and mercaptan-polysulfide materials.

  20. What is Changing and When - Post Linear Pottery Culture Life in Central Europe

    Czech Academy of Sciences Publication Activity Database

    Řídký, Jaroslav; Květina, Petr; Stäuble, H.; Pavlů, Ivan

    2015-01-01

    Roč. 53, č. 3 (2015), s. 333-339 ISSN 0323-1119. [Annual Meeting of the European Association of Archaeologists /19./. Plzeň, 04.09.2013-08.09.2013] R&D Projects: GA MK(CZ) DF12P01OVV032 Keywords : archaeological culture * culture change * Final LBK * Neolithic * Post-LBK * site layout * social complexity Subject RIV: AC - Archeology, Anthropology, Ethnology

  1. Linear DNA vaccine prepared by large-scale PCR provides protective immunity against H1N1 influenza virus infection in mice.

    Science.gov (United States)

    Wang, Fei; Chen, Quanjiao; Li, Shuntang; Zhang, Chenyao; Li, Shanshan; Liu, Min; Mei, Kun; Li, Chunhua; Ma, Lixin; Yu, Xiaolan

    2017-06-01

    Linear DNA vaccines provide effective vaccination. However, their application is limited by high cost and small scale of the conventional polymerase chain reaction (PCR) generally used to obtain sufficient amounts of DNA effective against epidemic diseases. In this study, a two-step, large-scale PCR was established using a low-cost DNA polymerase, RKOD, expressed in Pichia pastoris. Two linear DNA vaccines encoding influenza H1N1 hemagglutinin (HA) 1, LEC-HA, and PTO-LEC-HA (with phosphorothioate-modified primers), were produced by the two-step PCR. Protective effects of the vaccines were evaluated in a mouse model. BALB/c mice were immunized three times with the vaccines or a control DNA fragment. All immunized animals were challenged by intranasal administration of a lethal dose of influenza H1N1 virus 2 weeks after the last immunization. Sera of the immunized animals were tested for the presence of HA-specific antibodies, and the total IFN-γ responses induced by linear DNA vaccines were measured. The results showed that the DNA vaccines but not the control DNA induced strong antibody and IFN-γ responses. Additionally, the PTO-LEC-HA vaccine effectively protected the mice against the lethal homologous mouse-adapted virus, with a survival rate of 100% versus 70% in the LEC-HA-vaccinated group, showing that the PTO-LEC-HA vaccine was more effective than LEC-HA. In conclusion, the results indicated that the linear H1N1 HA-coding DNA vaccines induced significant immune responses and protected mice against a lethal virus challenge. Thus, the low-cost, two-step, large-scale PCR can be considered a potential tool for rapid manufacturing of linear DNA vaccines against emerging infectious diseases. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Nuclear resonant scattering measurements on (57)Fe by multichannel scaling with a 64-pixel silicon avalanche photodiode linear-array detector.

    Science.gov (United States)

    Kishimoto, S; Mitsui, T; Haruki, R; Yoda, Y; Taniguchi, T; Shimazaki, S; Ikeno, M; Saito, M; Tanaka, M

    2014-11-01

    We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and depletion depth of 10 μm. An ultrafast frontend circuit allows the X-ray detector to obtain a high output rate of >10⁷ cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrum of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on (57)Fe.

  3. Deforestation Induced Climate Change: Effects of Spatial Scale.

    Science.gov (United States)

    Longobardi, Patrick; Montenegro, Alvaro; Beltrami, Hugo; Eby, Michael

    2016-01-01

    Deforestation is associated with increased atmospheric CO2 and alterations to the surface energy and mass balances that can lead to local and global climate changes. Previous modelling studies show that the global surface air temperature (SAT) response to deforestation depends on latitude, with most simulations showing that high latitude deforestation results in cooling, low latitude deforestation causes warming and that the mid latitude response is mixed. These earlier conclusions are based on simulated large-scale land cover change, with complete removal of trees from whole latitude bands. Using a global climate model, we examine the effects of removing fractions of 5% to 100% of forested areas in the high, mid and low latitudes. All high latitude deforestation scenarios reduce mean global SAT, the opposite occurring for low latitude deforestation, although a decrease in SAT is simulated over low latitude deforested areas. The mid latitude SAT response is mixed. In all simulations deforested areas tend to become drier and have lower SAT, although soil temperatures increase over deforested mid and low latitude grid cells. For high latitude deforestation fractions of 45% and above, larger net primary productivity, in conjunction with colder and drier conditions after deforestation, causes an increase in soil carbon large enough to produce a net decrease of atmospheric CO2. Our results reveal the complex interactions between soil carbon dynamics and other climate subsystems in the energy partition responses to land cover change.

  4. Deforestation Induced Climate Change: Effects of Spatial Scale.

    Directory of Open Access Journals (Sweden)

    Patrick Longobardi

    Full Text Available Deforestation is associated with increased atmospheric CO2 and alterations to the surface energy and mass balances that can lead to local and global climate changes. Previous modelling studies show that the global surface air temperature (SAT) response to deforestation depends on latitude, with most simulations showing that high latitude deforestation results in cooling, low latitude deforestation causes warming and that the mid latitude response is mixed. These earlier conclusions are based on simulated large-scale land cover change, with complete removal of trees from whole latitude bands. Using a global climate model, we examine the effects of removing fractions of 5% to 100% of forested areas in the high, mid and low latitudes. All high latitude deforestation scenarios reduce mean global SAT, the opposite occurring for low latitude deforestation, although a decrease in SAT is simulated over low latitude deforested areas. The mid latitude SAT response is mixed. In all simulations deforested areas tend to become drier and have lower SAT, although soil temperatures increase over deforested mid and low latitude grid cells. For high latitude deforestation fractions of 45% and above, larger net primary productivity, in conjunction with colder and drier conditions after deforestation, causes an increase in soil carbon large enough to produce a net decrease of atmospheric CO2. Our results reveal the complex interactions between soil carbon dynamics and other climate subsystems in the energy partition responses to land cover change.

  5. Communication: An effective linear-scaling atomic-orbital reformulation of the random-phase approximation using a contracted double-Laplace transformation

    International Nuclear Information System (INIS)

    Schurkus, Henry F.; Ochsenfeld, Christian

    2016-01-01

    An atomic-orbital (AO) reformulation of the random-phase approximation (RPA) correlation energy is presented that reduces the steep computational scaling to linear, so that large systems can be studied on simple desktop computers with fully numerically controlled accuracy. Our AO-RPA formulation introduces a contracted double-Laplace transform and employs the overlap-metric resolution-of-the-identity. First timings of our pilot code illustrate the reduced scaling with systems comprising up to 1262 atoms and 10 090 basis functions.
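
    The Laplace-transform idea underlying such reformulations is the standard identity that converts an orbital-energy denominator into an integral over exponentials, which can then be approximated with a few quadrature points and factorized over occupied and virtual indices. The expression below is shown only as general background; it is not the specific contracted double-Laplace transform introduced in the paper.

```latex
% Standard Laplace-transform identity used to remove energy denominators
% (valid for x > 0), approximated by a short quadrature sum:
\frac{1}{x} = \int_{0}^{\infty} e^{-x t}\,\mathrm{d}t
  \;\approx\; \sum_{k=1}^{n_\tau} w_k\, e^{-x \tau_k},
\qquad x = \varepsilon_a - \varepsilon_i > 0 .
```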

  6. Future Arctic climate changes: Adaptation and mitigation time scales

    Science.gov (United States)

    Overland, James E.; Wang, Muyin; Walsh, John E.; Stroeve, Julienne C.

    2014-02-01

    The climate in the Arctic is changing faster than in midlatitudes. This is shown by increased temperatures, loss of summer sea ice, earlier snow melt, impacts on ecosystems, and increased economic access. Arctic sea ice volume has decreased by 75% since the 1980s. Long-lasting global anthropogenic forcing from carbon dioxide has increased over the previous decades and is anticipated to increase over the next decades. Temperature increases in response to greenhouse gases are amplified in the Arctic through feedback processes associated with shifts in albedo, ocean and land heat storage, and near-surface longwave radiation fluxes. Thus, for the next few decades out to 2040, continuing environmental changes in the Arctic are very likely, and the appropriate response is to plan for adaptation to these changes. For example, it is very likely that the Arctic Ocean will become seasonally nearly sea ice free before 2050 and possibly within a decade or two, which in turn will further increase Arctic temperatures, economic access, and ecological shifts. Mitigation becomes an important option to reduce potential Arctic impacts in the second half of the 21st century. Using the most recent set of climate model projections (CMIP5), multimodel mean temperature projections show an Arctic-wide end of century increase of +13°C in late fall and +5°C in late spring for a business-as-usual emission scenario (RCP8.5) in contrast to +7°C in late fall and +3°C in late spring if civilization follows a mitigation scenario (RCP4.5). Such temperature increases demonstrate the heightened sensitivity of the Arctic to greenhouse gas forcing.

  7. Mycorrhizas and global environmental change: Research at different scales

    DEFF Research Database (Denmark)

    Staddon, P.L.; Heinemeyer, A.; Fitter, A.H.

    2002-01-01

    Global environmental change (GEC), in particular rising atmospheric CO2 concentration and temperature, will affect most ecosystems. The varied responses of plants to these aspects of GEC are well documented. As with other key below-ground components of terrestrial ecosystems, the response...... of the ubiquitous mycorrhizal fungal root symbionts has received limited attention. Most of the research on the effects of GEC on mycorrhizal fungi has been pot-based with a few field (especially monoculture) studies. A major question that arises in all these studies is whether the GEC effects on the mycorrhizal...

  8. Pore-scale modeling of phase change in porous media

    Science.gov (United States)

    Juanes, Ruben; Cueto-Felgueroso, Luis; Fu, Xiaojing

    2017-11-01

    One of the main open challenges in pore-scale modeling is the direct simulation of flows involving multicomponent mixtures with complex phase behavior. Reservoir fluid mixtures are often described through cubic equations of state, which makes diffuse interface, or phase field theories, particularly appealing as a modeling framework. What is still unclear is whether equation-of-state-driven diffuse-interface models can adequately describe processes where surface tension and wetting phenomena play an important role. Here we present a diffuse interface model of single-component, two-phase flow (a van der Waals fluid) in a porous medium under different wetting conditions. We propose a simplified Darcy-Korteweg model that is appropriate to describe flow in a Hele-Shaw cell or a micromodel, with a gap-averaged velocity. We study the ability of the diffuse-interface model to capture capillary pressure and the dynamics of vaporization/condensation fronts, and show that the model reproduces pressure fluctuations that emerge from abrupt interface displacements (Haines jumps) and from the break-up of wetting films.

  9. Retrieval of collision kernels from the change of droplet size distributions with linear inversion

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Ryo; Takahashi, Keiko [Earth Simulator Center, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-ku, Yokohama Kanagawa 236-0001 (Japan); Matsuda, Keigo; Kurose, Ryoichi; Komori, Satoru [Department of Mechanical Engineering and Science, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: onishi.ryo@jamstec.go.jp, E-mail: matsuda.keigo@t03.mbox.media.kyoto-u.ac.jp, E-mail: takahasi@jamstec.go.jp, E-mail: kurose@mech.kyoto-u.ac.jp, E-mail: komori@mech.kyoto-u.ac.jp

    2008-12-15

    We have developed a new simple inversion scheme for retrieving collision kernels from the change of droplet size distribution due to collision growth. Three-dimensional direct numerical simulations (DNS) of steady isotropic turbulence with colliding droplets are carried out in order to investigate the validity of the developed inversion scheme. In the DNS, air turbulence is calculated using a quasi-spectral method; droplet motions are tracked in a Lagrangian manner. The initial droplet size distribution is set to be equivalent to that obtained in a wind tunnel experiment. Collision kernels retrieved by the developed inversion scheme are compared to those obtained by the DNS. The comparison shows that the collision kernels can be retrieved within 15% error. This verifies the feasibility of retrieving collision kernels using the present inversion scheme.
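
    The linearity exploited by such an inversion can be seen from the discrete collision-growth (Smoluchowski) equation, in which the rate of change of the size distribution is linear in the kernel values. The textbook form is reproduced below only to indicate why a linear inversion is possible; it is not the specific discretization used in the paper.

```latex
% Discrete collision-growth (Smoluchowski) equation: n_k is the number
% density of droplets in size bin k and K_{ij} the collision kernel.
\frac{\mathrm{d} n_k}{\mathrm{d} t}
  = \frac{1}{2}\sum_{i+j=k} K_{ij}\, n_i\, n_j
  \;-\; n_k \sum_{j} K_{kj}\, n_j .
```

    For measured size distributions n and their rates of change dn/dt, the kernel entries K_{ij} enter these equations linearly, so they can in principle be retrieved by (regularized) linear least squares.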

  10. The Abridgment and Relaxation Time for a Linear Multi-Scale Model Based on Multiple Site Phosphorylation.

    Directory of Open Access Journals (Sweden)

    Shuo Wang

    Full Text Available Random effects in cellular systems are an important topic in systems biology and are often simulated with Gillespie's stochastic simulation algorithm (SSA). Abridgment refers to model reduction that approximates a group of reactions by a smaller group with fewer species and reactions. This paper presents a theoretical analysis, based on comparison of the first exit time, of the abridgment of a linear chain reaction model motivated by systems with multiple phosphorylation sites. The analysis shows that if the relaxation time of the fast subsystem is much smaller than the mean firing time of the slow reactions, the abridgment can be applied with little error. This analysis is further verified with numerical experiments for models of a bistable switch and of oscillations in which the linear chain system plays a critical role.
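
    For context, the sketch below shows a minimal Gillespie SSA for a linear chain of phosphorylation states with first-order forward and backward steps. The chain length and rate constants are hypothetical placeholders, and the sketch does not implement the paper's abridged model or its first-exit-time analysis.

```python
# Minimal Gillespie SSA for a linear chain S0 <-> S1 <-> ... <-> SN of
# phosphorylation states of a single molecule. Rates are placeholders.
import numpy as np

def ssa_linear_chain(n_sites=4, kf=10.0, kb=1.0, t_end=5.0, seed=0):
    """Simulate one trajectory; return (event times, visited states)."""
    rng = np.random.default_rng(seed)
    t, state = 0.0, 0
    times, states = [t], [state]
    while t < t_end:
        # Build the propensities of the currently possible reactions.
        reactions = []
        if state < n_sites:
            reactions.append((+1, kf))     # add one phosphate
        if state > 0:
            reactions.append((-1, kb))     # remove one phosphate
        total = sum(rate for _, rate in reactions)
        t += rng.exponential(1.0 / total)  # waiting time to the next event
        u, acc = rng.uniform(0.0, total), 0.0
        for step, rate in reactions:       # pick which reaction fires
            acc += rate
            if u <= acc:
                state += step
                break
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

if __name__ == "__main__":
    t, s = ssa_linear_chain()
    print(f"{len(t)} events, final state S{s[-1]}")
```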

  11. Hierarchical linear modeling (HLM) of longitudinal brain structural and cognitive changes in alcohol-dependent individuals during sobriety

    DEFF Research Database (Denmark)

    Yeh, P.H.; Gazdzinski, S.; Durazzo, T.C.

    2007-01-01

    ......(MRI)-derived brain volume changes and cognitive changes in abstinent alcohol-dependent individuals as a function of smoking status, smoking severity, and drinking quantities. Methods: Twenty non-smoking recovering alcoholics (nsALC) and 30 age-matched smoking recovering alcoholics (sALC) underwent quantitative MRI...... time points. Using HLM, we modeled volumetric and cognitive outcome measures as a function of cigarette and alcohol use variables. Results: Different hierarchical linear models with unique model structures are presented and discussed. The results show that smaller brain volumes at baseline predict...... faster brain volume gains, which were also related to greater smoking and drinking severities. Over 7 months of abstinence from alcohol, sALC compared to nsALC showed less improvement in visuospatial learning and memory despite larger brain volume gains and ventricular shrinkage. Conclusions: Different......

  12. Confirmation of linear system theory prediction: Rate of change of Herrnstein's κ as a function of response-force requirement

    Science.gov (United States)

    McDowell, J. J; Wood, Helena M.

    1985-01-01

    Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's κ were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) κ increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of κ was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of κ was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's κ. PMID:16812408

  13. Confirmation of linear system theory prediction: Rate of change of Herrnstein's kappa as a function of response-force requirement.

    Science.gov (United States)

    McDowell, J J; Wood, H M

    1985-01-01

    Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's kappa were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) kappa increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of kappa was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of kappa was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's kappa.
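
    Finding (3) in both versions of this report can be restated compactly as a linear relation between reciprocals. The symbols below are generic (kappa for Herrnstein's constant, M for reinforcer magnitude, a and b for fitted coefficients) and are introduced here only to summarize the reported result.

```latex
% Reported relation: the reciprocal of Herrnstein's kappa is a linear
% function of the reciprocal of reinforcer magnitude M (a, b fitted):
\frac{1}{\kappa} \;=\; a \;+\; \frac{b}{M}.
```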

  14. Large-scale impact of climate change vs. land-use change on future biome shifts in Latin America

    NARCIS (Netherlands)

    Boit, Alice; Sakschewski, Boris; Boysen, Lena; Cano-Crespo, Ana; Clement, Jan; Garcia-alaniz, Nashieli; Kok, Kasper; Kolb, Melanie; Langerwisch, Fanny; Rammig, Anja; Sachse, René; Eupen, van Michiel; Bloh, von Werner; Clara Zemp, Delphine; Thonicke, Kirsten

    2016-01-01

    Climate change and land-use change are two major drivers of biome shifts causing habitat and biodiversity loss. What is missing is a continental-scale future projection of the estimated relative impacts of both drivers on biome shifts over the course of this century. Here, we provide such a

  15. The economic impacts of climate change on the Chilean agricultural sector: A non-linear agricultural supply model

    Directory of Open Access Journals (Sweden)

    Roberto Ponce

    2014-12-01

    Full Text Available Agriculture could be one of the economic sectors most vulnerable to the impacts of climate change in the coming decades, with impacts threatening agricultural production in general and food security in particular. Within this context, climate change will pose a challenge to policy makers, especially in those countries that base their development on primary sectors. In this paper we present a non-linear agricultural supply model for the analysis of the economic impacts of changes in crop yields due to climate change. The model accounts for uncertainty through the use of Monte Carlo simulations of crop yields. According to our results, climate change impacts on the Chilean agricultural sector are widespread, with considerable distributional consequences across regions, and with fruit producers being worse off than crop producers. In general, the results reported here are consistent with those reported by previous studies showing large economic impacts on the northern zone. However, unlike previous studies, our model does not simulate remarkable economic consequences at the country level.

  16. Modified ocean circulation, albedo instability and ice-flow instability. Risks of non-linear climate change

    Energy Technology Data Exchange (ETDEWEB)

    Ham, J. van; Beer, R.J. van; Builtjes, P.J.H.; Roemer, M.G.M. [TNO Inst. of Environmental Sciences, Delft (Netherlands); Koennen, G.P. [KNMI, Royal Netherlands Meteorological Inst., de Bilt (Netherlands); Oerlemans, J. [Utrecht Univ. (Netherlands). Inst. for Meteorological and Atmospheric Research

    1995-12-31

    In this presentation part of an investigation is described into risks for climate change which are presently not adequately covered in General Circulation Models. In the concept of climate change as a result of the enhanced greenhouse effect it is generally assumed that the radiative forcings from increased concentrations of greenhouse gases (GHG) will result in a proportional or quasilinear global warming. Though correlations of this kind are known from palaeoclimate research, the variability of the climate seems to prevent the direct proof of a causal relation between recent greenhouse gas concentrations and temperature observations. In order to resolve the issue the use of General Circulation Models (GCMs), though still inadequate at present, is indispensable. Around the world some 10 leading GCMs exist which have been the subject of evaluation and intercomparison in a number of studies. Their results are regularly assessed in the IPCC process. A discussion on their performance in simulating present or past climates and the causes of their weak points shows that the depiction of clouds is a major weakness of GCMs. A second element which is virtually absent in GCMs are the feedbacks from natural biogeochemical cycles. These cycles are influenced by man in a number of ways. GCMs have a limited performance in simulating regional effects on climate. Moreover, albedo instability, in part due to its interaction with cloudiness, is only roughly represented. Apparently, not all relevant processes have been included in the GCMs. That situation constitutes a risk, since it cannot be ruled out that a missing process could cause or trigger a non-linear climate change. In the study non-linear climate change is connected with those processes which could provide feedbacks with a risk for non-monotonous or discontinuous behaviour of the climate system, or which are unpredictable or could cause rapid transitions

  17. Modified ocean circulation, albedo instability and ice-flow instability. Risks of non-linear climate change

    Energy Technology Data Exchange (ETDEWEB)

    Ham, J van; Beer, R.J. van; Builtjes, P J.H.; Roemer, M G.M. [TNO Inst. of Environmental Sciences, Delft (Netherlands); Koennen, G P [KNMI, Royal Netherlands Meteorological Inst., de Bilt (Netherlands); Oerlemans, J [Utrecht Univ. (Netherlands). Inst. for Meteorological and Atmospheric Research

    1996-12-31

    In this presentation part of an investigation is described into risks for climate change which are presently not adequately covered in General Circulation Models. In the concept of climate change as a result of the enhanced greenhouse effect it is generally assumed that the radiative forcings from increased concentrations of greenhouse gases (GHG) will result in a proportional or quasilinear global warming. Though correlations of this kind are known from palaeoclimate research, the variability of the climate seems to prevent the direct proof of a causal relation between recent greenhouse gas concentrations and temperature observations. In order to resolve the issue the use of General Circulation Models (GCMs), though still inadequate at present, is indispensable. Around the world some 10 leading GCMs exist which have been the subject of evaluation and intercomparison in a number of studies. Their results are regularly assessed in the IPCC process. A discussion on their performance in simulating present or past climates and the causes of their weak points shows that the depiction of clouds is a major weakness of GCMs. A second element which is virtually absent in GCMs are the feedbacks from natural biogeochemical cycles. These cycles are influenced by man in a number of ways. GCMs have a limited performance in simulating regional effects on climate. Moreover, albedo instability, in part due to its interaction with cloudiness, is only roughly represented. Apparently, not all relevant processes have been included in the GCMs. That situation constitutes a risk, since it cannot be ruled out that a missing process could cause or trigger a non-linear climate change. In the study non-linear climate change is connected with those processes which could provide feedbacks with a risk for non-monotonous or discontinuous behaviour of the climate system, or which are unpredictable or could cause rapid transitions

  18. Do Quercus ilex woodlands undergo abrupt non-linear functional changes in response to human disturbance along a climatic gradient?

    Science.gov (United States)

    Bochet, Esther; García-Fayos, Patricio; José Molina, Maria; Moreno de las Heras, Mariano; Espigares, Tíscar; Nicolau, Jose Manuel; Monleon, Vicente

    2017-04-01

    Theoretical models predict that drylands are particularly prone to suffer critical transitions with abrupt non-linear changes in their structure and functions as a result of the existing complex interactions between climatic fluctuations and human disturbances. However, so far, few studies provide empirical data to validate these models. We aim at determining how holm oak (Quercus ilex) woodlands undergo changes in their functions in response to human disturbance along an aridity gradient (from semi-arid to sub-humid conditions) in eastern Spain. For that purpose, we used (a) remote-sensing estimations of precipitation-use efficiency (PUE) from enhanced vegetation index (EVI) observations performed in 231 × 231 m plots of the Moderate Resolution Imaging Spectroradiometer (MODIS); (b) biological and chemical soil parameter determinations (extracellular soil enzyme activity, soil respiration, nutrient cycling processes) from soil sampled in the same plots; and (c) vegetation parameter determinations (ratio of functional groups) from vegetation surveys performed in the same plots. We analyzed and compared the shape of the functional change (in terms of PUE and soil and vegetation parameters) in response to human disturbance intensity for our holm oak sites along the aridity gradient. Overall, our results evidenced important differences between climatic conditions in the shape of the functional change in response to human disturbance: semi-arid areas experienced a more accelerated non-linear decrease with increasing disturbance intensity than sub-humid ones. The proportion of functional groups (herbaceous vs. woody cover) played a relevant role in the shape of the functional response of the holm oak sites to human disturbance.

  19. Linear scale bounds on dark matter--dark radiation interactions and connection with the small scale crisis of cold dark matter

    DEFF Research Database (Denmark)

    Hannestad, Steen; Archidiacono, Maria; Bohr, Sebastian

    2017-01-01

    One of the open questions in modern cosmology is the small scale crisis of the cold dark matter paradigm. Increasing attention has recently been devoted to self-interacting dark matter models as a possible answer. However, solving the so-called "missing satellites" problem requires in addition...... the presence of an extra relativistic particle (dubbed dark radiation) scattering with dark matter in the early universe. Here we investigate the impact of different theoretical models devising dark matter dark radiation interactions on large scale cosmological observables. We use cosmic microwave background...... data to put constraints on the dark radiation component and its coupling to dark matter. We find that the values of the coupling allowed by the data imply a cut-off scale of the halo mass function consistent with the one required to match the observations of satellites in the Milky Way....

  20. Changing the scale of hydrogeophysical aquifer heterogeneity characterization

    Science.gov (United States)

    Paradis, Daniel; Tremblay, Laurie; Ruggeri, Paolo; Brunet, Patrick; Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Holliger, Klaus; Irving, James; Molson, John; Lefebvre, Rene

    2015-04-01

    Contaminant remediation and management require the quantitative predictive capabilities of groundwater flow and mass transport numerical models. Such models have to encompass source zones and receptors, and thus typically cover several square kilometers. To predict the path and fate of contaminant plumes, these models have to represent the heterogeneous distribution of hydraulic conductivity (K). However, hydrogeophysics has generally been used to image relatively restricted areas of the subsurface (small fractions of km2), so there is a need for approaches defining heterogeneity at larger scales and providing data to constrain conceptual and numerical models of aquifer systems. This communication describes a workflow defining aquifer heterogeneity that was applied over a 12 km2 sub-watershed surrounding a decommissioned landfill emitting landfill leachate. The aquifer is a shallow, 10 to 20 m thick, highly heterogeneous and anisotropic assemblage of littoral sand and silt. Field work involved the acquisition of a broad range of data: geological, hydraulic, geophysical, and geochemical. The emphasis was put on high resolution and continuous hydrogeophysical data, the use of direct-push fully-screened wells and the acquisition of targeted high-resolution hydraulic data covering the range of observed aquifer materials. The main methods were: 1) surface geophysics (ground-penetrating radar and electrical resistivity); 2) direct-push operations with a geotechnical drilling rig (cone penetration tests with soil moisture resistivity CPT/SMR; full-screen well installation); and 3) borehole operations, including high-resolution hydraulic tests and geochemical sampling. New methods were developed to acquire high vertical resolution hydraulic data in direct-push wells, including both vertical and horizontal K (Kv and Kh). Various data integration approaches were used to represent aquifer properties in 1D, 2D and 3D. Using relevance vector machines (RVM), the mechanical and

  1. Matching Social and Biophysical Scales in Extensive Livestock Production as a Basis for Adaptation to Global Change

    Science.gov (United States)

    Sayre, N. F.; Bestelmeyer, B.

    2015-12-01

    Global livestock production is heterogeneous, and its benefits and costs vary widely across global contexts. Extensive grazing lands (or rangelands) constitute the vast majority of the land dedicated to livestock production globally, but they are relatively minor contributors to livestock-related environmental impacts. Indeed, the greatest potential for environmental damage in these lands lies in their potential for conversion to other uses, including agriculture, mining, energy production and urban development. Managing such conversion requires improving the sustainability of livestock production in the face of fragmentation, ecological and economic marginality and climate change. We present research from Mongolia and the United States demonstrating methods of improving outcomes on rangelands by improving the fit between the scales of social and biophysical processes. Especially in arid and semi-arid settings, rangelands exhibit highly variable productivity over space and time and non-linear or threshold dynamics in vegetation; climate change is projected to exacerbate these challenges and, in some cases, diminish overall productivity. Policy and governance frameworks that enable landscape-scale management and administration enable range livestock producers to adapt to these conditions. Similarly, livestock breeds that have evolved to withstand climate and vegetation change improve producers' prospects in the face of increasing variability and declining productivity. A focus on the relationships among primary production, animal production, spatial connectivity, and scale must underpin adaptation strategies in rangelands.

  2. Reliability, validity, and sensitivity to change of the lower extremity functional scale in individuals affected by stroke.

    Science.gov (United States)

    Verheijde, Joseph L; White, Fred; Tompkins, James; Dahl, Peder; Hentz, Joseph G; Lebec, Michael T; Cornwall, Mark

    2013-12-01

    To investigate reliability, validity, and sensitivity to change of the Lower Extremity Functional Scale (LEFS) in individuals affected by stroke. The secondary objective was to test the validity and sensitivity of a single-item linear analog scale (LAS) of function. Prospective cohort reliability and validation study. A single rehabilitation department in an academic medical center. Forty-three individuals receiving neurorehabilitation for lower extremity dysfunction after stroke were studied. Their ages ranged from 32 to 95 years, with a mean of 70 years; 77% were men. Test-retest reliability was assessed by calculating the classical intraclass correlation coefficient, and the Bland-Altman limits of agreement. Validity was assessed by calculating the Pearson correlation coefficient between the instruments. Sensitivity to change was assessed by comparing baseline scores with end of treatment scores. Measurements were taken at baseline, after 1-3 days, and at 4 and 8 weeks. The LEFS, Short-Form-36 Physical Function Scale, Berg Balance Scale, Six-Minute Walk Test, Five-Meter Walk Test, Timed Up-and-Go test, and the LAS of function were used. The test-retest reliability of the LEFS was found to be excellent (ICC = 0.96). Correlated with the 6 other measures of function studied, the validity of the LEFS was found to be moderate to high (r = 0.40-0.71). Regarding the sensitivity to change, the mean LEFS scores from baseline to study end increased 1.2 SD and for LAS 1.1 SD. LEFS exhibits good reliability, validity, and sensitivity to change in patients with lower extremity impairments secondary to stroke. Therefore, the LEFS can be a clinically efficient outcome measure in the rehabilitation of patients with subacute stroke. The LAS is shown to be a time-saving and reasonable option to track changes in a patient's functional status. Copyright © 2013 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  3. A new type of change blindness: smooth, isoluminant color changes are monitored on a coarse spatial scale.

    Science.gov (United States)

    Goddard, Erin; Clifford, Colin W G

    2013-04-22

    Attending selectively to changes in our visual environment may help filter less important, unchanging information within a scene. Here, we demonstrate that color changes can go unnoticed even when they occur throughout an otherwise static image. The novelty of this demonstration is that it does not rely upon masking by a visual disruption or stimulus motion, nor does it require the change to be very gradual and restricted to a small section of the image. Using a two-interval, forced-choice change-detection task and an odd-one-out localization task, we showed that subjects were slowest to respond and least accurate (implying that change was hardest to detect) when the color changes were isoluminant, smoothly varying, and asynchronous with one another. This profound change blindness offers new constraints for theories of visual change detection, implying that, in the absence of transient signals, changes in color are typically monitored at a coarse spatial scale.

  4. Scaling Factor Estimation Using Optimized Mass Change Strategy, Part 2: Experimental Results

    DEFF Research Database (Denmark)

    Fernández, Pelayo Fernández; Aenlle, Manuel López; Garcia, Luis M. Villa

    2007-01-01

    The mass change method is used to estimate the scaling factors; the uncertainty is reduced when, for each mode, the frequency shift is maximized and the changes in the mode shapes are minimized, which, in turn, depends on the mass change strategy chosen to modify the dynamic behavior of the struct...

  5. Simultaneous inference for multilevel linear mixed models - with an application to a large-scale school meal study

    DEFF Research Database (Denmark)

    Ritz, Christian; Laursen, Rikke Pilmann; Damsgaard, Camilla Trab

    2017-01-01

    of a school meal programme. We propose a novel and versatile framework for simultaneous inference on parameters estimated from linear mixed models that were fitted separately for several outcomes from the same study, but did not necessarily contain the same fixed or random effects. By combining asymptotic...... sizes of practical relevance we studied simultaneous coverage through simulation, which showed that the approach achieved acceptable coverage probabilities even for small sample sizes (10 clusters) and for 2–16 outcomes. The approach also compared favourably with a joint modelling approach. We also...

  6. Groundwater decline and tree change in floodplain landscapes: Identifying non-linear threshold responses in canopy condition

    Directory of Open Access Journals (Sweden)

    J. Kath

    2014-12-01

    Full Text Available Groundwater decline is widespread, yet its implications for natural systems are poorly understood. Previous research has revealed links between groundwater depth and tree condition; however, critical thresholds which might indicate ecological ‘tipping points’ associated with rapid and potentially irreversible change have been difficult to quantify. This study collated data for two dominant floodplain species, Eucalyptus camaldulensis (river red gum and E. populnea (poplar box from 118 sites in eastern Australia where significant groundwater decline has occurred. Boosted regression trees, quantile regression and Threshold Indicator Taxa Analysis were used to investigate the relationship between tree condition and groundwater depth. Distinct non-linear responses were found, with groundwater depth thresholds identified in the range from 12.1 m to 22.6 m for E. camaldulensis and 12.6 m to 26.6 m for E. populnea beyond which canopy condition declined abruptly. Non-linear threshold responses in canopy condition in these species may be linked to rooting depth, with chronic groundwater decline decoupling trees from deep soil moisture resources. The quantification of groundwater depth thresholds is likely to be critical for management aimed at conserving groundwater dependent biodiversity. Identifying thresholds will be important in regions where water extraction and drying climates may contribute to further groundwater decline. Keywords: Canopy condition, Dieback, Drought, Tipping point, Ecological threshold, Groundwater dependent ecosystems

  7. Leading Educational Change and Improvement at Scale: Some Inconvenient Truths about System Performance

    Science.gov (United States)

    Harris, Alma; Jones, Michelle

    2017-01-01

    The challenges of securing educational change and transformation, at scale, remain considerable. While sustained progress has been made in some education systems (Fullan, 2009; Hargreaves & Shirley, 2009) generally, it remains the case that the pathway to large-scale, system improvement is far from easy or straightforward. While large-scale…

  8. Study on TVD parameters sensitivity of a crankshaft using multiple scale and state space method considering quadratic and cubic non-linearities

    Directory of Open Access Journals (Sweden)

    R. Talebitooti

    Full Text Available In this paper the effects of quadratic and cubic non-linearities of the system consisting of the crankshaft and the torsional vibration damper (TVD) are taken into account. The TVD consists of a non-linear elastomer material used for controlling the torsional vibration of the crankshaft. The method of multiple scales is used to solve the governing equations of the system, and the frequency response of the system for both harmonic and sub-harmonic resonances is extracted. In addition, the effects of detuning parameters and other dimensionless parameters for a case of harmonic resonance are investigated. Moreover, the external forces, including both inertia and gas forces, are simultaneously applied to the model. Finally, in order to study the effectiveness of the parameters, the dimensionless governing equations of the system are solved using the state space method, and the effects of the torsional damper as well as all corresponding parameters of the system are discussed.

  9. Polaron effects on the linear and the nonlinear optical absorption coefficients and refractive index changes in cylindrical quantum dots with applied magnetic field

    International Nuclear Information System (INIS)

    Wu Qingjie; Guo Kangxian; Liu Guanghui; Wu Jinghe

    2013-01-01

    Polaron effects on the linear and the nonlinear optical absorption coefficients and refractive index changes in cylindrical quantum dots with the radial parabolic potential and the z-direction linear potential with applied magnetic field are theoretically investigated. The optical absorption coefficients and refractive index changes are presented by using the compact-density-matrix approach and iterative method. Numerical calculations are presented for GaAs/AlGaAs. It is found that taking into account the electron-LO-phonon interaction, not only are the linear, the nonlinear and the total optical absorption coefficients and refractive index changes enhanced, but also the total optical absorption coefficients are more sensitive to the incident optical intensity. It is also found that no matter whether the electron-LO-phonon interaction is considered or not, the absorption coefficients and refractive index changes above are strongly dependent on the radial frequency, the magnetic field and the linear potential coefficient.

  10. Role of band 3 in the erythrocyte membrane structural changes under thermal fluctuations - multi-scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana

    2015-12-01

    An attempt was made to discuss and connect various modeling approaches, on various time and space scales, that have been proposed in the literature in order to shed further light on the erythrocyte membrane rearrangement caused by the cortex-lipid bilayer coupling under thermal fluctuations. The roles of the main membrane constituents, namely (1) the actin-spectrin cortex, (2) the lipid bilayer, and (3) the transmembrane protein band 3, and their cause-consequence relations were considered in the context of the non-linear stiffening of the cortex and the corresponding anomalous nature of energy dissipation. The fluctuations induce alternating expansion and compression of the membrane parts in order to ensure surface and volume conservation. The membrane structural changes were considered within two time regimes. The results indicate that the non-linear stiffening of the cortex and the corresponding anomalous nature of energy dissipation are related to the distribution of spectrin flexibility and the rate of its changes. The spectrin flexibility varies from purely flexible to semi-flexible. It is influenced by: (1) the number of band 3 molecules attached to single spectrin filaments, and (2) phosphorylation of the actin junctions. The rate of change of spectrin flexibility depends on the rearrangement of the band 3 molecules.

  11. The application of two-step linear temperature program to thermal analysis for monitoring the lipid induction of Nostoc sp. KNUA003 in large scale cultivation.

    Science.gov (United States)

    Kang, Bongmun; Yoon, Ho-Sung

    2015-02-01

    Recently, microalgae have been considered as a renewable energy source for fuel production because their production is nonseasonal and may take place on nonarable land. Despite all of these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algal productivity and compositional analysis, especially of the total lipid content. Thus, there is considerable interest in the accurate determination of total lipid content during the biotechnological process. For these reasons, various high-throughput technologies have been suggested for accurate measurement of the total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of microalgae without a pretreatment. However, these methods have difficulty measuring the total lipid content of wet-form microalgae obtained from large-scale production. In the present study, thermal analysis performed with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C for Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved in 310-351 °C (HE) and the total lipid content of the wet Nostoc cells cultivated in a raceway. As a result, a linear relationship was found between the HE value and the total lipid content of Nostoc sp. KNUA003; in particular, there was a linear relationship of 98% between the HE value and the total lipid content of the tested microorganism. Based on this relationship, the total lipid content converted from the heat evolved of wet Nostoc sp. KNUA003 could be used for monitoring its lipid induction in large-scale cultivation. Copyright © 2014 Elsevier Inc. All rights reserved.
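
    The monitoring use described above amounts to a one-variable linear calibration between the heat evolved (HE) in the 310-351 °C window and the measured total lipid content. The sketch below shows such a calibration in generic form; all numbers are hypothetical placeholders, not data from the study.

```python
# Generic linear calibration of total lipid content against the heat
# evolved (HE) in a fixed temperature window. All numbers are placeholders.
import numpy as np

he_calib = np.array([12.0, 18.5, 25.1, 31.7, 40.2])     # hypothetical HE values
lipid_calib = np.array([8.1, 12.9, 17.8, 22.5, 28.9])   # hypothetical lipid %

slope, intercept = np.polyfit(he_calib, lipid_calib, 1)  # lipid = a*HE + b
r = np.corrcoef(he_calib, lipid_calib)[0, 1]

def lipid_from_he(he):
    """Predict total lipid content from a new HE measurement."""
    return slope * he + intercept

print(f"lipid ~ {slope:.2f}*HE + {intercept:.2f}  (r^2 = {r**2:.3f})")
print(f"predicted lipid at HE = 30.0: {lipid_from_he(30.0):.1f}")
```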

  12. Change Analysis and Decision Tree Based Detection Model for Residential Objects across Multiple Scales

    Directory of Open Access Journals (Sweden)

    CHEN Liyan

    2018-03-01

    Full Text Available Change analysis and detection play an important role in the updating of multi-scale databases. When an updated larger-scale dataset is overlaid on a to-be-updated smaller-scale dataset, attention is usually focused on temporal changes caused by the evolution of spatial entities, while little attention is paid to the representation changes introduced by map generalization. Using polygonal building data as an example, this study examines the changes from different perspectives, such as the reasons for their occurrence and the forms in which they appear. Based on this knowledge, we employ a decision tree, a machine learning method, to establish a change detection model. The aim of the proposed model is to distinguish temporal changes that need to be applied as updates to the smaller-scale dataset from representation changes. The proposed method is validated through tests using real-world building data from Guangzhou city. The experimental results show that the overall precision of change detection is more than 90%, which indicates that our method is effective in identifying changed objects.
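
    A minimal sketch of this kind of classifier is given below, assuming hypothetical per-object features (area ratio, outline similarity and areal overlap between the matched representations) and labels separating temporal change from generalization-induced representation change. It illustrates the general approach only; the features, labels and tree settings are not the authors'.

```python
# Minimal sketch: decision tree separating temporal changes from
# representation (generalization) changes of matched building objects.
# Features, labels and the synthetic data are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
# Hypothetical features per matched object pair:
#   area_ratio : larger-scale area / smaller-scale area
#   shape_sim  : 0..1 similarity of outlines after generalization
#   overlap    : 0..1 areal overlap of the two representations
X = np.column_stack([
    rng.uniform(0.2, 2.0, n),   # area_ratio
    rng.uniform(0.0, 1.0, n),   # shape_sim
    rng.uniform(0.0, 1.0, n),   # overlap
])
# Hypothetical labelling rule used only to create the synthetic training set:
# low overlap combined with a strong size change -> temporal change (1)
y = ((X[:, 2] < 0.4) & (np.abs(X[:, 0] - 1.0) > 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```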

  13. Land use change impacts on floods at the catchment scale: Challenges and opportunities for future research

    Science.gov (United States)

    Rogger, M.; Agnoletti, M.; Alaoui, A.; Bathurst, J. C.; Bodner, G.; Borga, M.; Chaplot, V.; Gallart, F.; Glatzel, G.; Hall, J.; Holden, J.; Holko, L.; Horn, R.; Kiss, A.; Kohnová, S.; Leitinger, G.; Lennartz, B.; Parajka, J.; Perdigão, R.; Peth, S.; Plavcová, L.; Quinton, J. N.; Robinson, M.; Salinas, J. L.; Santoro, A.; Szolgay, J.; Tron, S.; van den Akker, J. J. H.; Viglione, A.; Blöschl, G.

    2017-07-01

    Research gaps in understanding flood changes at the catchment scale caused by changes in forest management, agricultural practices, artificial drainage, and terracing are identified. Potential strategies in addressing these gaps are proposed, such as complex systems approaches to link processes across time scales, long-term experiments on physical-chemical-biological process interactions, and a focus on connectivity and patterns across spatial scales. It is suggested that these strategies will stimulate new research that coherently addresses the issues across hydrology, soil and agricultural sciences, forest engineering, forest ecology, and geomorphology.

  14. Large-Scale Ocean Circulation-Cloud Interactions Reduce the Pace of Transient Climate Change

    Science.gov (United States)

    Trossman, D. S.; Palter, J. B.; Merlis, T. M.; Huang, Y.; Xia, Y.

    2016-01-01

    Changes to the large scale oceanic circulation are thought to slow the pace of transient climate change due, in part, to their influence on radiative feedbacks. Here we evaluate the interactions between CO2-forced perturbations to the large-scale ocean circulation and the radiative cloud feedback in a climate model. Both the change of the ocean circulation and the radiative cloud feedback strongly influence the magnitude and spatial pattern of surface and ocean warming. Changes in the ocean circulation reduce the amount of transient global warming caused by the radiative cloud feedback by helping to maintain low cloud coverage in the face of global warming. The radiative cloud feedback is key in affecting atmospheric meridional heat transport changes and is the dominant radiative feedback mechanism that responds to ocean circulation change. Uncertainty in the simulated ocean circulation changes due to CO2 forcing may contribute a large share of the spread in the radiative cloud feedback among climate models.

  15. ANALYSING ORGANIZATIONAL CHANGES - THE CONNECTION BETWEEN THE SCALE OF CHANGE AND EMPLOYEES ATTITUDES

    Directory of Open Access Journals (Sweden)

    Ujhelyi Maria

    2015-07-01

    Full Text Available In the 21st century all organizations have to cope with challenges caused by trigger events in the environment. The key to organizational success is how fast and efficiently they are able to react. In 2014 we conducted a research survey on this topic with the contribution of Hungarian students on Bachelor courses in Business Administration and Management. They visited organizations which had gone through a significant programme of change within the last 5 years. The owners, managers or HR managers responsible for the changes were asked to fill in questionnaires about the features of these organisational changes. Several issues regarding change management were covered, in addition to general information about the companies. Respondents were asked about the trigger events and the nature of the changes, and about the process of change and participation in it. One group of questions asked leaders about employees' attitudes to change, while another section sought information about the methods used in the process. In this paper, after a short literature review, we analyse the adaptation methods used by organizations and the connection between the scope of change and employees' attitudes toward change.

  16. Scale changes in air quality modelling and assessment of associated uncertainties

    International Nuclear Information System (INIS)

    Korsakissok, Irene

    2009-01-01

    After an introduction of issues related to a scale change in the field of air quality (existing scales for emissions, transport, turbulence and loss processes, hierarchy of data and models, methods of scale change), the author first presents Gaussian models which have been implemented within the Polyphemus modelling platform. These models are assessed by comparison with experimental observations and with other commonly used Gaussian models. The second part reports the coupling of the puff-based Gaussian model with the Eulerian Polair3D model for the sub-mesh processing of point sources. This coupling is assessed at the continental scale for a passive tracer, and at the regional scale for photochemistry. Different statistical methods are assessed

  17. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    Science.gov (United States)

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  18. Adapting crop management practices to climate change: Modeling optimal solutions at the field scale

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.; Walter, A.

    2013-01-01

    Climate change will alter the environmental conditions for crop growth and require adjustments in management practices at the field scale. In this paper, we analyzed the impacts of two different climate change scenarios on optimal field management practices in winter wheat and grain maize production

  19. Extreme daily precipitation in Western Europe with climate change at appropriate spatial scales

    NARCIS (Netherlands)

    Booij, Martijn J.

    2002-01-01

    Extreme daily precipitation for the current and changed climate at appropriate spatial scales is assessed. This is done in the context of the impact of climate change on flooding in the river Meuse in Western Europe. The objective is achieved by determining and comparing extreme precipitation from

  20. Relationship between linear velocity and tangential push force while turning to change the direction of the manual wheelchair.

    Science.gov (United States)

    Hwang, Seonhong; Lin, Yen-Sheng; Hogaboom, Nathan S; Wang, Lin-Hwa; Koontz, Alicia M

    2017-08-28

    Wheelchair propulsion is a major cause of upper limb pain and injuries for manual wheelchair users with spinal cord injuries (SCIs). Few studies have investigated wheelchair turning biomechanics on natural ground surfaces. The purpose of this study was to investigate the relationship between tangential push force and linear velocity of the wheelchair during the turning portions of propulsion. Using an instrumented handrim, velocity and push force data were recorded for 25 subjects while they propelled their own wheelchairs on a concrete floor along a figure-eight-shaped course at maximum velocity. The braking force (1.03 N) applied to the inside wheel while turning was the largest of all the push forces measured (p<0.05). Larger changes in squared velocity while turning were significantly correlated with higher propulsive and braking forces used at the pre-turning, turning, and post-turning phases (p<0.05). Subjects with less change of velocity while turning needed less braking force to maneuver themselves successfully and safely around the turns. Considering the magnitude and direction of the tangential force applied to the wheel, there appear to be higher risks of injury and instability for the upper limb joints when braking the inside wheel to turn. The results provide insight into wheelchair setup and mobility skills training for wheelchair users.

  1. Soil organic carbon distribution in Mediterranean areas under a climate change scenario via multiple linear regression analysis.

    Science.gov (United States)

    Olaya-Abril, Alfonso; Parras-Alcántara, Luis; Lozano-García, Beatriz; Obregón-Romero, Rafael

    2017-08-15

    Over time, interest in soil studies has increased due to the role of soil in carbon sequestration in terrestrial ecosystems, which could contribute to decreasing atmospheric CO2 levels. In many studies, independent variables were related to soil organic carbon (SOC) alone; however, the degree to which each variable contributes to the experimentally determined SOC content was not considered. In this study, samples from 612 soil profiles were obtained in a natural protected area (Red Natura 2000) of Sierra Morena (Mediterranean area, South Spain), considering only the topsoil (0-25 cm) for better comparison between results. Twenty-four independent variables were used to define their relationship with SOC content. Subsequently, using multiple linear regression analysis, the effects of these variables on SOC were considered. Finally, the best parameters determined with the regression analysis were used in a climate change scenario. The model indicated that SOC in a future scenario of climate change depends on the average temperature of the coldest quarter (41.9%), the average temperature of the warmest quarter (34.5%), annual precipitation (22.2%) and annual average temperature (1.3%). When the current and future situations were compared, the SOC content in the study area was reduced by 35.4%, and a trend towards migration to higher latitudes and altitudes was observed.
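    As a schematic illustration of the kind of multiple linear regression analysis described above, the sketch below regresses a synthetic SOC variable on four standardized bioclimatic predictors and reports a rough relative weight for each coefficient. The variable names, data and weights are invented for illustration; they are not the study's dataset or exact workflow.

```python
# A minimal sketch (not the authors' workflow): regress SOC on a few
# bioclimatic predictors and express each standardized coefficient as a
# rough share of the explained signal. All names and numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 612                                     # number of soil profiles in the study
X = rng.normal(size=(n, 4))                 # synthetic predictors
true_beta = np.array([1.6, -1.3, 0.9, 0.1])
soc = X @ true_beta + rng.normal(scale=0.5, size=n)

# Standardize predictors so coefficient magnitudes are comparable
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([np.ones(n), Xs])
coef, *_ = np.linalg.lstsq(A, soc, rcond=None)

names = ["T_coldest_quarter", "T_warmest_quarter", "annual_precip", "T_annual_mean"]
share = np.abs(coef[1:]) / np.abs(coef[1:]).sum() * 100
for name, b, s in zip(names, coef[1:], share):
    print(f"{name:>18s}: beta = {b:+.2f}, relative weight = {s:4.1f}%")
```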

  2. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    International Nuclear Information System (INIS)

    Lipparini, Filippo; Scalmani, Giovanni; Frisch, Michael J.; Lagardère, Louis; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Piquemal, Jean-Philip; Mennucci, Benedetta

    2014-01-01

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed into its various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization of the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens up significant new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute

  3. Quantum, classical, and hybrid QM/MM calculations in solution: General implementation of the ddCOSMO linear scaling strategy

    Energy Technology Data Exchange (ETDEWEB)

    Lipparini, Filippo, E-mail: flippari@uni-mainz.de [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Scalmani, Giovanni; Frisch, Michael J. [Gaussian, Inc., 340 Quinnipiac St. Bldg. 40, Wallingford, Connecticut 06492 (United States); Lagardère, Louis [Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Stamm, Benjamin [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Cancès, Eric [Université Paris-Est, CERMICS, Ecole des Ponts and INRIA, 6 and 8 avenue Blaise Pascal, 77455 Marne-la-Vallée Cedex 2 (France); Maday, Yvon [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Institut Universitaire de France, Paris, France and Division of Applied Maths, Brown University, Providence, Rhode Island 02912 (United States); Piquemal, Jean-Philip [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Mennucci, Benedetta [Dipartimento di Chimica e Chimica Industriale, Università di Pisa, Via Risorgimento 35, 56126 Pisa (Italy)

    2014-11-14

    We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed into its various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization of the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens up significant new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.

  4. Linear versus Nonlinear Filtering with Scale-Selective Corrections for Balanced Dynamics in a Simple Atmospheric Model

    KAUST Repository

    Subramanian, Aneesh C.

    2012-11-01

    This paper investigates the role of the linear analysis step of the ensemble Kalman filters (EnKF) in disrupting the balanced dynamics in a simple atmospheric model and compares it to a fully nonlinear particle-based filter (PF). The filters have a very similar forecast step but the analysis step of the PF solves the full Bayesian filtering problem while the EnKF analysis only applies to Gaussian distributions. The EnKF is compared to two flavors of the particle filter with different sampling strategies, the sequential importance resampling filter (SIRF) and the sequential kernel resampling filter (SKRF). The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode. It can also be configured either to evolve on a so-called slow manifold, where the fast motion is suppressed, or such that the fast-varying variables are diagnosed from the slow-varying variables as slaved modes. Identical twin experiments show that EnKF and PF capture the variables on the slow manifold well as the dynamics is very stable. PFs, especially the SKRF, capture slaved modes better than the EnKF, implying that a full Bayesian analysis estimates the nonlinear model variables better. The PFs perform significantly better in the fully coupled nonlinear model where fast and slow variables modulate each other. This suggests that the analysis step in the PFs maintains the balance in both variables much better than the EnKF. It is also shown that increasing the ensemble size generally improves the performance of the PFs but has less impact on the EnKF after a sufficient number of members have been used.
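    The abstract contrasts the EnKF's linear, Gaussian analysis step with the full Bayesian update of the particle filters. The sketch below is a generic stochastic EnKF analysis step with perturbed observations for a linear observation operator, shown on a toy state vector; it is a textbook illustration, not the specific filters or model used in the paper.

```python
# A minimal stochastic EnKF analysis step with perturbed observations,
# assuming a linear observation operator H and uncorrelated observation error.
import numpy as np

def enkf_analysis(ensemble, y_obs, H, obs_var, rng):
    """ensemble: (n_members, n_state); y_obs: (n_obs,); H: (n_obs, n_state)."""
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)             # state anomalies
    HX = ensemble @ H.T                              # forecast in observation space
    HXa = HX - HX.mean(axis=0)
    Pf_HT = X.T @ HXa / (n_members - 1)              # cross-covariance P_f H^T
    S = HXa.T @ HXa / (n_members - 1) + obs_var * np.eye(len(y_obs))
    K = Pf_HT @ np.linalg.inv(S)                     # Kalman gain
    # Perturb observations so the analysis spread stays statistically consistent
    y_pert = y_obs + rng.normal(scale=np.sqrt(obs_var), size=(n_members, len(y_obs)))
    return ensemble + (y_pert - HX) @ K.T

rng = np.random.default_rng(1)
ens = rng.normal(size=(50, 3))                       # 50 members, 3-variable toy state
H = np.array([[1.0, 0.0, 0.0]])                      # observe the first variable only
analysis = enkf_analysis(ens, np.array([0.7]), H, obs_var=0.1, rng=rng)
print(analysis.mean(axis=0))                         # analysis mean pulled toward the observation
```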

  5. Linear versus Nonlinear Filtering with Scale-Selective Corrections for Balanced Dynamics in a Simple Atmospheric Model

    KAUST Repository

    Subramanian, Aneesh C.; Hoteit, Ibrahim; Cornuelle, Bruce; Miller, Arthur J.; Song, Hajoon

    2012-01-01

    This paper investigates the role of the linear analysis step of the ensemble Kalman filters (EnKF) in disrupting the balanced dynamics in a simple atmospheric model and compares it to a fully nonlinear particle-based filter (PF). The filters have a very similar forecast step but the analysis step of the PF solves the full Bayesian filtering problem while the EnKF analysis only applies to Gaussian distributions. The EnKF is compared to two flavors of the particle filter with different sampling strategies, the sequential importance resampling filter (SIRF) and the sequential kernel resampling filter (SKRF). The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode. It can also be configured either to evolve on a so-called slow manifold, where the fast motion is suppressed, or such that the fast-varying variables are diagnosed from the slow-varying variables as slaved modes. Identical twin experiments show that EnKF and PF capture the variables on the slow manifold well as the dynamics is very stable. PFs, especially the SKRF, capture slaved modes better than the EnKF, implying that a full Bayesian analysis estimates the nonlinear model variables better. The PFs perform significantly better in the fully coupled nonlinear model where fast and slow variables modulate each other. This suggests that the analysis step in the PFs maintains the balance in both variables much better than the EnKF. It is also shown that increasing the ensemble size generally improves the performance of the PFs but has less impact on the EnKF after a sufficient number of members have been used.

  6. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution-of-the-identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
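    A toy illustration of the sparse-map idea mentioned above: each map sends one index set to the subset of another index set it touches, and elementary operations (inversion, chaining, intersection) compose such maps. The dictionaries and index sets below are invented for illustration and are not the actual DLPNO code library.

```python
# Conceptual sketch of sparse maps and the elementary operations named in the
# abstract; not the production data structures of the method described.
from collections import defaultdict

def invert(sparse_map):
    """Turn a map a -> {b} into the map b -> {a}."""
    inv = defaultdict(set)
    for i, targets in sparse_map.items():
        for j in targets:
            inv[j].add(i)
    return dict(inv)

def chain(map_ab, map_bc):
    """Compose a -> b with b -> c to obtain the sparse map a -> c."""
    return {a: set().union(*[map_bc.get(b, set()) for b in bs]) for a, bs in map_ab.items()}

def intersect(map1, map2):
    """Keep, for each key, only the targets present in both maps."""
    return {k: map1.get(k, set()) & map2.get(k, set()) for k in map1.keys() | map2.keys()}

# Hypothetical example: atoms -> basis functions, basis functions -> auxiliary fitting functions
atom_to_bf = {0: {0, 1}, 1: {2}, 2: {3, 4}}
bf_to_aux = {0: {10}, 1: {10, 11}, 2: {12}, 3: {13}, 4: {13, 14}}

print(invert(atom_to_bf))                     # which atom carries each basis function
print(chain(atom_to_bf, bf_to_aux))           # which auxiliary functions each atom touches
print(intersect(atom_to_bf, {0: {1, 5}, 2: {4}}))
```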

  7. Velocity Gradient Across the San Andreas Fault and Changes in Slip Behavior as Outlined by Full non Linear Tomography

    Science.gov (United States)

    Chiarabba, C.; Giacomuzzi, G.; Piana Agostinetti, N.

    2017-12-01

    The San Andreas Fault (SAF) near Parkfield is the best-known fault section that exhibits a clear transition in slip behavior from stable to unstable. Intensive monitoring and decades of studies permit the identification of details of these processes, with a good definition of fault structure and subsurface models. Tomographic models computed so far revealed the existence of large velocity contrasts, yielding physical insight into fault rheology. In this study, we applied a recently developed full non-linear tomography method to compute Vp and Vs models that focus on the section of the fault that exhibits the fault slip transition. The new tomographic code allows us not to impose a vertical seismic discontinuity at the fault position, as is routinely done in linearized codes. Any lateral velocity contrast found is directly dictated by the data themselves and not imposed by subjective choices. The use of the same dataset as previous tomographic studies allows a proper comparison of results. We use a total of 861 earthquakes, 72 blasts and 82 shots; the overall arrival-time dataset consists of 43948 P- and 29158 S-wave arrival times, accurately selected to take care of seismic anisotropy. The computed Vp and Vp/Vs models, which by-pass the main problems related to linearized LET algorithms, match independent available constraints excellently and show crustal heterogeneities with high resolution. The high resolution obtained in the fault surroundings permits us to infer lateral changes of Vp and Vp/Vs across the fault (velocity gradient). We observe that the stable and unstable sliding sections of the SAF have different velocity gradients, small and negligible in the stable-slip segment, but larger than 15% in the unstable-slip segment. Our results suggest that Vp and Vp/Vs gradients across the fault control fault rheology and the fault slip behavior.

  8. Non-linear, non-monotonic effect of nano-scale roughness on particle deposition in absence of an energy barrier: Experiments and modeling

    Science.gov (United States)

    Jin, Chao; Glawdel, Tomasz; Ren, Carolyn L.; Emelko, Monica B.

    2015-12-01

    Deposition of colloidal- and nano-scale particles on surfaces is critical to numerous natural and engineered environmental, health, and industrial applications ranging from drinking water treatment to semi-conductor manufacturing. Nano-scale surface roughness-induced hydrodynamic impacts on particle deposition were evaluated in the absence of an energy barrier to deposition in a parallel plate system. A non-linear, non-monotonic relationship between deposition surface roughness and particle deposition flux was observed and a critical roughness size associated with minimum deposition flux or “sag effect” was identified. This effect was more significant for nanoparticles (<1 μm) than for colloids and was numerically simulated using a Convective-Diffusion model and experimentally validated. Inclusion of flow field and hydrodynamic retardation effects explained particle deposition profiles better than when only the Derjaguin-Landau-Verwey-Overbeek (DLVO) force was considered. This work provides 1) a first comprehensive framework for describing the hydrodynamic impacts of nano-scale surface roughness on particle deposition by unifying hydrodynamic forces (using the most current approaches for describing flow field profiles and hydrodynamic retardation effects) with appropriately modified expressions for DLVO interaction energies, and gravity forces in one model and 2) a foundation for further describing the impacts of more complicated scales of deposition surface roughness on particle deposition.

  9. Phase Behavior of Blends of Linear and Branched Polyethylenes on Micron-Length Scales via Ultra-Small-Angle Neutron Scattering (USANS)

    International Nuclear Information System (INIS)

    Agamalian, M.M.; Alamo, R.G.; Londono, J.D.; Mandelkern, L.; Wignall, G.D.

    1999-01-01

    SANS experiments on blends of linear, high density (HD) and long-chain branched, low density (LD) polyethylenes indicate that these systems form a one-phase mixture in the melt. However, the maximum spatial resolution of pinhole cameras is approximately 10³ Å, and it has therefore been suggested that the data might also be interpreted as arising from a bi-phasic melt with a large particle size (of the order of 1 μm), because most of the scattering from the different phases would not be resolved. We have addressed this hypothesis by means of USANS experiments, which confirm that HDPE/LDPE blends are homogeneous in the melt on length scales up to 20 μm. We have also studied blends of HDPE and short-chain branched linear low density polyethylenes (LLDPEs), which phase separate when the branch content is sufficiently high. LLDPEs prepared with Ziegler-Natta catalysts exhibit a wide distribution of compositions, and may therefore be thought of as a blend of different species. When the composition distribution is broad enough, a fraction of highly branched chains may phase separate on μm-length scales, and USANS has also been used to quantify this phenomenon

  10. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C. [Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Hine, N. D. M. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Haynes, P. D. [Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Thomas Young Centre for Theory and Simulation of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making it necessary to treat large system sizes, which are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
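    The eigenvalue problem above is solved with a memory-efficient preconditioned conjugate gradient algorithm. The sketch below is only a generic Jacobi-preconditioned CG solver for a dense symmetric positive-definite test system, meant to illustrate the iterative kernel; it is not the ONETEP TDDFT eigensolver, which operates on the transition density matrix in a localised-orbital representation.

```python
# Generic Jacobi-preconditioned conjugate gradient for Ax = b (toy illustration).
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    M_inv = 1.0 / np.diag(A)               # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(8)
Q = rng.normal(size=(100, 100))
A = Q @ Q.T + 100 * np.eye(100)             # symmetric positive-definite test matrix
b = rng.normal(size=100)
x = pcg(A, b)
print(np.linalg.norm(A @ x - b))            # residual should be tiny
```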

  11. Regional impacts of climate change and atmospheric CO2 on future ocean carbon uptake: a multi model linear feedback analysis

    International Nuclear Information System (INIS)

    Roy, Tilla; Bopp, Laurent; Gehlen, Marion; Cadule, Patricia; Schneider, Birgit; Frolicher, Thomas L.; Segschneider, Joachim; Tjiputra, Jerry; Heinze, Christoph; Joos, Fortunat

    2011-01-01

    The increase in atmospheric CO2 over this century depends on the evolution of the oceanic air-sea CO2 uptake, which will be driven by the combined response to rising atmospheric CO2 itself and climate change. Here, the future oceanic CO2 uptake is simulated using an ensemble of coupled climate-carbon cycle models. The models are driven by CO2 emissions from historical data and the Special Report on Emissions Scenarios (SRES) A2 high-emission scenario. A linear feedback analysis successfully separates the regional future (2010-2100) oceanic CO2 uptake into a CO2-induced component, due to rising atmospheric CO2 concentrations, and a climate-induced component, due to global warming. The models capture the observation-based magnitude and distribution of anthropogenic CO2 uptake. The distributions of the climate-induced component are broadly consistent between the models, with reduced CO2 uptake in the sub-polar Southern Ocean and the equatorial regions, owing to decreased CO2 solubility; and reduced CO2 uptake in the mid-latitudes, owing to decreased CO2 solubility and increased vertical stratification. The magnitude of the climate-induced component is sensitive to local warming in the southern extra-tropics, to large freshwater fluxes in the extra-tropical North Atlantic Ocean, and to small changes in the CO2 solubility in the equatorial regions. In key anthropogenic CO2 uptake regions, the climate-induced component offsets the CO2-induced component at a constant proportion up until the end of this century. This amounts to approximately 50% in the northern extra-tropics and 25% in the southern extra-tropics and equatorial regions. Consequently, the detection of climate change impacts on anthropogenic CO2 uptake may be difficult without monitoring additional tracers, such as oxygen. (authors)
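    A schematic version of the linear feedback decomposition described above: regressing a simulated uptake time series onto atmospheric CO2 and global-mean warming splits it into a CO2-induced and a climate-induced component. The time series below are synthetic and the regression is deliberately simplified relative to the paper's analysis of coupled-model output.

```python
# Toy linear feedback decomposition on synthetic data (not the models' output).
import numpy as np

years = np.arange(2010, 2100)
co2_atm = 390.0 * np.exp(0.006 * (years - 2010))        # synthetic rising CO2 (ppm)
delta_t = 0.03 * (years - 2010)                          # synthetic global warming (K)
rng = np.random.default_rng(2)
uptake = 0.004 * co2_atm - 0.5 * delta_t + rng.normal(scale=0.05, size=years.size)

A = np.column_stack([np.ones_like(years, dtype=float), co2_atm, delta_t])
beta, *_ = np.linalg.lstsq(A, uptake, rcond=None)
co2_induced = beta[1] * co2_atm                          # CO2-induced component
climate_induced = beta[2] * delta_t                      # climate-induced component
offset_fraction = -climate_induced[-1] / co2_induced[-1]
print(f"climate-induced component offsets ~{offset_fraction:.0%} of the CO2-induced uptake by 2099")
```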

  12. Regional impacts of climate change and atmospheric CO2 on future ocean carbon uptake: a multi model linear feedback analysis

    International Nuclear Information System (INIS)

    Roy, Tilla; Bopp, Laurent; Gehlen, Marion; Cadule, Patricia

    2011-01-01

    The increase in atmospheric CO 2 over this century depends on the evolution of the oceanic air-sea CO 2 uptake, which will be driven by the combined response to rising atmospheric CO 2 itself and climate change. Here, the future oceanic CO 2 uptake is simulated using an ensemble of coupled climate-carbon cycle models. The models are driven by CO 2 emissions from historical data and the Special Report on Emissions Scenarios (SRES) A2 high-emission scenario. A linear feedback analysis successfully separates the regional future (2010-2100) oceanic CO 2 uptake into a CO 2 -induced component, due to rising atmospheric CO 2 concentrations, and a climate-induced component, due to global warming. The models capture the observation based magnitude and distribution of anthropogenic CO 2 uptake. The distributions of the climate-induced component are broadly consistent between the models, with reduced CO 2 uptake in the sub-polar Southern Ocean and the equatorial regions, owing to decreased CO 2 solubility; and reduced CO 2 uptake in the mid latitudes, owing to decreased CO 2 solubility and increased vertical stratification. The magnitude of the climate-induced component is sensitive to local warming in the southern extra tropics, to large freshwater fluxes in the extra tropical North Atlantic Ocean, and to small changes in the CO 2 solubility in the equatorial regions. In key anthropogenic CO 2 uptake regions, the climate-induced component offsets the CO 2 - induced component at a constant proportion up until the end of this century. This amounts to approximately 50% in the northern extra tropics and 25% in the southern extra tropics and equatorial regions. Consequently, the detection of climate change impacts on anthropogenic CO 2 uptake may be difficult without monitoring additional tracers, such as oxygen. (authors)

  13. Multi-scale connectivity and graph theory highlight critical areas for conservation under climate change

    Science.gov (United States)

    Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.

    2016-01-01

    Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales, from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods including graph theory, circuit theory and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping-stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat, and should be applicable across a broad range of taxa.
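    A minimal graph-theoretic connectivity sketch in the spirit of the framework above: habitat patches become nodes, movement costs become edge weights, least-cost (shortest) paths identify corridors, and articulation points flag patches whose loss would disconnect the network. Patch names and costs are invented; the study itself combines graph theory with circuit theory and least-cost surfaces at three spatial scales.

```python
# Toy connectivity analysis with networkx on an invented patch network.
import networkx as nx

G = nx.Graph()
edges = [                        # (patch_a, patch_b, movement_cost)
    ("core_A", "stepping_1", 3.0),
    ("stepping_1", "core_B", 4.0),
    ("core_A", "core_B", 12.0),  # direct route crosses unsuitable habitat
    ("core_B", "core_C", 5.0),
]
G.add_weighted_edges_from(edges)

corridor = nx.shortest_path(G, "core_A", "core_C", weight="weight")
cost = nx.shortest_path_length(G, "core_A", "core_C", weight="weight")
print(corridor, cost)            # route via the stepping-stone patch, total cost 12.0

# Nodes whose removal disconnects the graph are candidate priority patches
print(list(nx.articulation_points(G)))
```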

  14. Land cover change or land use intensification: simulating land system change with a global-scale land change model

    NARCIS (Netherlands)

    van Asselen, S.; Verburg, P.H.

    2013-01-01

    Land-use change is both a cause and consequence of many biophysical and socioeconomic changes. The CLUMondo model provides an innovative approach for global land-use change modeling to support integrated assessments. Demands for goods and services are, in the model, supplied by a variety of land

  15. Surface changes of metal alloys and high-strength ceramics after ultrasonic scaling and intraoral polishing.

    Science.gov (United States)

    Yoon, Hyung-In; Noh, Hyo-Mi; Park, Eun-Jin

    2017-06-01

    This study evaluated the effect of repeated ultrasonic scaling and surface polishing with intraoral polishing kits on the surface roughness of three different restorative materials. A total of 15 identical discs were fabricated with three different materials. Ultrasonic scaling was conducted for 20 seconds on the test surfaces. Subsequently, multi-step polishing with the recommended intraoral polishing kit was performed for 30 seconds. A 3D profiler and scanning electron microscopy were used to investigate surface integrity before scaling (pristine), after scaling, and after surface polishing for each material. Non-parametric Friedman and Wilcoxon signed rank sum tests were employed to statistically evaluate surface roughness changes of the pristine, scaled, and polished specimens. The level of significance was set at 0.05. Surface roughness values before scaling (pristine), after scaling, and after polishing of the metal alloys were 3.02±0.34 µm, 2.44±0.72 µm, and 3.49±0.72 µm, respectively. Surface roughness of lithium disilicate increased from 2.35±1.05 µm (pristine) to 28.54±9.64 µm (scaling), and further increased after polishing (56.66±9.12 µm, P<0.05). Surface roughness of zirconia also increased markedly after scaling (from 1.65±0.42 µm to 101.37±18.75 µm), while it decreased after polishing (29.57±18.86 µm, P<0.05). Repeated ultrasonic scaling significantly changed the surface integrities of lithium disilicate and zirconia. Surface polishing with a multi-step intraoral kit after repeated scaling was only effective for the zirconia, while it was not for lithium disilicate.
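    The statistical comparison described above (a Friedman test across the three surface states, followed by Wilcoxon signed-rank tests) can be sketched as follows. The roughness values below are invented for illustration and are not the study's measurements.

```python
# Non-parametric comparison of the same specimens at three stages (synthetic data).
import numpy as np
from scipy import stats

pristine = np.array([2.1, 2.5, 1.9, 2.8, 2.4])     # surface roughness, µm (invented)
scaled   = np.array([27.9, 30.2, 25.4, 33.1, 29.0])
polished = np.array([55.0, 58.3, 52.1, 60.4, 57.2])

chi2, p = stats.friedmanchisquare(pristine, scaled, polished)
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.4f}")

for label, after in [("scaling", scaled), ("polishing", polished)]:
    w, p_pair = stats.wilcoxon(pristine, after)     # paired signed-rank test
    print(f"pristine vs {label}: W = {w:.1f}, p = {p_pair:.4f}")
```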

  16. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  17. Effects of two-scale transverse crack systems on the non-linear behaviour of a 2D SiC-SiC composite

    Energy Technology Data Exchange (ETDEWEB)

    Morvan, J.-M.; Baste, S. [Bordeaux-1 Univ., 33 - Talence (France). Lab. de Mecanique Physique

    1998-07-31

    By using both an ultrasonic device and an extensometer, it is possible to know which stiffness coefficients change during the damage process of a material and which part of the global strain is either elastic or inelastic. The influence of the two damage mechanisms is described for a woven 2D SiC-SiC composite. It appears that the two scales of this composite have a great influence on its behaviour. Two elementary mechanisms occur at the two scales of the material: at the mesostructure level, consisting of the bundles as well as the inter-bundle matrix, and at the microstructure level, made up of both the fibres and the intra-bundle matrix. The inelastic strains are sensitive to this two-scale effect: an increment of strain at constant stress that comes to saturation, corresponding to the inter-bundle damage process, and a strain which needs an increase in stress as cracking occurs at the fibre scale. With the help of a model that predicts the compliance changes caused by a crack system in a solid, it is possible to predict the crack density variation at both scales as well as the geometry of the various crack systems during monotonic loading. Furthermore, when the crack opening is taken into account, it appears that the inelastic strain is governed by the transverse crack density. (orig.) 12 refs.

  18. Shape shifting predicts ontogenetic changes in metabolic scaling in diverse aquatic invertebrates.

    Science.gov (United States)

    Glazier, Douglas S; Hirst, Andrew G; Atkinson, David

    2015-03-07

    Metabolism fuels all biological activities, and thus understanding its variation is fundamentally important. Much of this variation is related to body size, which is commonly believed to follow a 3/4-power scaling law. However, during ontogeny, many kinds of animals and plants show marked shifts in metabolic scaling that deviate from 3/4-power scaling predicted by general models. Here, we show that in diverse aquatic invertebrates, ontogenetic shifts in the scaling of routine metabolic rate from near isometry (bR = scaling exponent approx. 1) to negative allometry (bR < 1), or the reverse, are associated with significant changes in body shape (indexed by bL = the scaling exponent of the relationship between body mass and body length). The observed inverse correlations between bR and bL are predicted by metabolic scaling theory that emphasizes resource/waste fluxes across external body surfaces, but contradict theory that emphasizes resource transport through internal networks. Geometric estimates of the scaling of surface area (SA) with body mass (bA) further show that ontogenetic shifts in bR and bA are positively correlated. These results support new metabolic scaling theory based on SA influences that may be applied to ontogenetic shifts in bR shown by many kinds of animals and plants.
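    The scaling exponents discussed above (bR for metabolic rate versus body mass, bL for body mass versus body length) are conventionally estimated by ordinary least squares on log-transformed data. The sketch below does exactly that on synthetic data; the generating exponents are arbitrary and not taken from the study.

```python
# Estimate scaling exponents from log-log regressions on synthetic measurements.
import numpy as np

rng = np.random.default_rng(3)
mass = np.exp(rng.uniform(np.log(0.01), np.log(10.0), size=200))      # body mass (g), synthetic
rate = 0.5 * mass ** 0.95 * np.exp(rng.normal(scale=0.1, size=200))   # routine metabolic rate
length = (mass / 0.2) ** (1 / 2.5) * np.exp(rng.normal(scale=0.05, size=200))

def scaling_exponent(x, y):
    slope, _ = np.polyfit(np.log(x), np.log(y), 1)
    return slope

b_R = scaling_exponent(mass, rate)      # metabolic scaling exponent
b_L = scaling_exponent(length, mass)    # mass-length (shape) exponent
# b_L = 3 would indicate isometric (geometrically similar) growth; here shape changes with size
print(f"b_R ≈ {b_R:.2f}, b_L ≈ {b_L:.2f}")
```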

  19. Past and future changes in streamflow in the U.S. Midwest: Bridging across time scales

    Science.gov (United States)

    Villarini, G.; Slater, L. J.; Salvi, K. A.

    2017-12-01

    Streamflows have increased notably across the U.S. Midwest over the past century, principally due to changes in precipitation and land use / land cover. Improving our understanding of the physical drivers that are responsible for the observed changes in discharge may enhance our capability of predicting and projecting these changes, and may have large implications for water resources management over this area. This study will highlight our efforts towards the statistical attribution of changes in discharge across the U.S. Midwest, with analyses performed at the seasonal scale from low to high flows. The main drivers of changing streamflows that we focus on are: urbanization, agricultural land cover, basin-averaged temperature, basin-averaged precipitation, and antecedent soil moisture. Building on the insights from this attribution, we will examine the potential predictability of streamflow across different time scales, with lead times ranging from seasonal to decadal, and discuss a potential path forward for engineering design for future conditions.

  20. Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network

    Science.gov (United States)

    Sang, Nong; Zhang, Tianxu

    1997-12-01

    Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time with the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of the method to perform point pattern relaxation matching invariant to rotations and scale changes and the method to perform this matching by the Hopfield neural network. In addition, we show that the method presented can be tolerant to small random errors.
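    A toy relaxation-matching sketch in the spirit of the abstract: pairwise distances are inherently rotation invariant, and normalising each point pattern by its mean pairwise distance makes the pairwise compatibilities scale invariant as well. The iterative update below is a generic relaxation-labelling scheme, used here as a simplified stand-in for the paper's Hopfield-network formulation.

```python
# Rotation- and scale-invariant point pattern matching by relaxation (toy example).
import numpy as np

def pairwise(points):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return d / d[d > 0].mean()           # scale normalisation

def relax_match(A, B, iters=50, sigma=0.1):
    dA, dB = pairwise(A), pairwise(B)
    n, m = len(A), len(B)
    # compatibility of assigning (i -> j) together with (k -> l)
    C = np.exp(-((dA[:, None, :, None] - dB[None, :, None, :]) ** 2) / sigma ** 2)
    P = np.full((n, m), 1.0 / m)         # initial match probabilities
    for _ in range(iters):
        support = np.einsum("ijkl,kl->ij", C, P)
        P *= support
        P /= P.sum(axis=1, keepdims=True)
    return P.argmax(axis=1)

rng = np.random.default_rng(4)
A = rng.uniform(size=(6, 2))
theta = 0.7                               # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
perm = rng.permutation(6)
B = 2.5 * (A @ R.T)[perm]                 # rotated, scaled, re-ordered copy of A
print(relax_match(A, B))                  # should equal the inverse permutation below
print(np.argsort(perm))
```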

  1. A review of downscaling procedures - a contribution to the research on climate change impacts at city scale

    Science.gov (United States)

    Smid, Marek; Costa, Ana; Pebesma, Edzer; Granell, Carlos; Bhattacharya, Devanjan

    2016-04-01

    Humankind is now predominantly urban-based, and the majority of continuing population growth will take place in urban agglomerations. Urban systems are not only major drivers of climate change, but also the impact hot spots. Furthermore, climate change impacts are commonly managed at city scale. Therefore, assessing climate change impacts on urban systems is a very relevant subject of research. Climate and its impacts at all levels (local, meso and global scales), as well as the inter-scale dependencies of those processes, should be subject to detailed analysis. While global and regional projections of future climate are currently available, local-scale information is lacking. Hence, statistical downscaling methodologies represent a potentially efficient way to help close this gap. In general, methodological reviews of downscaling procedures cover the various methods according to their application (e.g. downscaling for hydrological modelling). Some of the most recent and comprehensive studies, such as the ESSEM COST Action ES1102 (VALUE), use the concepts of Perfect Prog and MOS. Other examples of classification schemes of downscaling techniques consider three main categories: linear methods, weather classifications and weather generators. Downscaling and climate modelling represent a multidisciplinary field, where researchers from various backgrounds intersect their efforts, resulting in specific terminology, which may be somewhat confusing. For instance, Polynomial Regression (also called Surface Trend Analysis) is a statistical technique. In the context of spatial interpolation procedures, it is commonly classified as a deterministic technique, whereas kriging approaches are classified as stochastic. Furthermore, the terms "statistical" and "stochastic" (frequently used as names of sub-classes in downscaling methodological reviews) are not always considered as synonymous, even though both terms could be seen as identical since they are

  2. Large scale atmospheric tropical circulation changes and consequences during global warming

    International Nuclear Information System (INIS)

    Gastineau, G.

    2008-01-01

    Changes in the large-scale tropical atmospheric circulation during climate change can have large impacts on human activities. In the first part, the meridional tropical circulation is studied in the different coupled models. During climate change, we find, on the one hand, that the Hadley meridional circulation and the subtropical jet are significantly shifted poleward, and on the other hand, that the intensity of the tropical circulation weakens. The slowdown of the atmospheric circulation results from the changes in dry static stability affecting the tropical troposphere. Secondly, idealized simulations are used to explain the tropical circulation changes. Ensemble simulations using the LMDZ4 model are set up to study the results from the coupled model IPSLCM4. The weakening of the large-scale tropical circulation and the poleward shift of the Hadley cells are explained by both the uniform change and the meridional gradient change of the sea surface temperature. Then, we use the atmospheric model LMDZ4 in an aqua-planet configuration. The Hadley circulation changes are explained in a simple framework by the required poleward energy transport. In the last part, we focus on the water vapor distribution and feedback in climate models. The Hadley circulation changes are shown to have a significant impact on the water vapor feedback during climate change. (author)

  3. Watershed scale response to climate change--Trout Lake Basin, Wisconsin

    Science.gov (United States)

    Walker, John F.; Hunt, Randall J.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    General Circulation Model simulations of future climate through 2099 project a wide range of possible scenarios. To determine the sensitivity and potential effect of long-term climate change on the freshwater resources of the United States, the U.S. Geological Survey Global Change study, "An integrated watershed scale response to global change in selected basins across the United States" was started in 2008. The long-term goal of this national study is to provide the foundation for hydrologically based climate change studies across the nation.

  4. Modeling and Control of Switching Max-Plus-Linear Systems : Rescheduling of railway traffic and changing gaits in legged locomotion

    NARCIS (Netherlands)

    Kersbergen, B.

    2015-01-01

    The operation of many systems can be described by the timing of events. When the system behavior can be described by equations that are "linear'' in the max-plus algebra, which has maximization and addition as its basic operations, the system is called a max-plus-linear system. In many of these

  5. Changes in magnetic properties from solid state to solution in a trinuclear linear copper(II) complex

    NARCIS (Netherlands)

    Koval, I.A.; Akhideno, H.; Tanase, S.; Belle, C.; Duboc, C.; Saint-Aman, E.; Gamez, P.; Tooke, D.M.; Spek, A.L.; Pierre, J.-L.; Reedijk, J.

    2007-01-01

    A linear trinuclear copper(II) complex containing phenoxido- and alkoxido-bridges between the metal centers has been isolated and structurally characterized. The complex cation consists of a linear array of three copper ions, assembled by means of two doubly deprotonated ligands. The octahedral

  6. A new method for large-scale assessment of change in ecosystem functioning in relation to land degradation

    Science.gov (United States)

    Horion, Stephanie; Ivits, Eva; Verzandvoort, Simone; Fensholt, Rasmus

    2017-04-01

    Ongoing pressures on European land are manifold, with extreme climate events and non-sustainable use of land resources being amongst the most important drivers altering the functioning of ecosystems. The protection and conservation of European natural capital is one of the key objectives of the 7th Environmental Action Plan (EAP). The EAP stipulates that European land must be managed in a sustainable way by 2020, and the UN Sustainable Development Goals define a land-degradation-neutral world as one of their targets. This implies that land degradation (LD) assessment of European ecosystems must be performed repeatedly, allowing for the assessment of the current state of LD as well as of changes compared to a baseline adopted by the UNCCD for the objective of land degradation neutrality. However, scientifically robust methods are still lacking for large-scale assessment of LD and repeated, consistent mapping of the state of terrestrial ecosystems. Historical land degradation assessments based on various methods exist, but the methods are generally non-replicable or difficult to apply at continental scale (Allan et al. 2007). The current lack of research methods applicable at large spatial scales is notably caused by the lack of a robust definition of LD, the scarcity of field data on LD, and the complex interplay of the processes driving LD (Vogt et al., 2011). Moreover, the link between LD and changes in land use (how land-use change relates to changes in vegetation productivity and ecosystem functioning) is not straightforward. In this study we used the segmented trend method developed by Horion et al. (2016) for large-scale systematic assessment of hotspots of change in ecosystem functioning in relation to LD. This method alleviates shortcomings of the widely used linear trend model, which does not account for abrupt change and does not adequately capture the actual changes in ecosystem functioning (de Jong et al. 2013; Horion et al. 2016). Here we present a new methodology for
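    As a schematic of the segmented-trend idea referred to above, the sketch below compares a single linear trend with a two-segment fit whose breakpoint minimises the residual sum of squares. The time series is synthetic and the procedure is a deliberately simplified stand-in for the method of Horion et al. (2016).

```python
# Toy comparison of a single linear trend against a two-segment fit with a
# breakpoint chosen by exhaustive search (synthetic productivity proxy).
import numpy as np

t = np.arange(2000, 2017)
rng = np.random.default_rng(7)
# vegetation productivity proxy with an abrupt decline after 2009 (invented)
y = np.where(t <= 2009, 0.02 * (t - 2000), 0.18 - 0.05 * (t - 2009))
y = y + rng.normal(scale=0.01, size=t.size)

def sse(x, z):
    coef = np.polyfit(x, z, 1)
    return np.sum((z - np.polyval(coef, x)) ** 2)

linear_sse = sse(t, y)
best = min(
    ((sse(t[:k], y[:k]) + sse(t[k:], y[k:]), t[k]) for k in range(3, t.size - 2)),
    key=lambda item: item[0],
)
print(f"single trend SSE = {linear_sse:.4f}; segmented SSE = {best[0]:.4f} with break at {best[1]}")
```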

  7. Large-scale impact of climate change vs. land-use change on future biome shifts in Latin America.

    Science.gov (United States)

    Boit, Alice; Sakschewski, Boris; Boysen, Lena; Cano-Crespo, Ana; Clement, Jan; Garcia-Alaniz, Nashieli; Kok, Kasper; Kolb, Melanie; Langerwisch, Fanny; Rammig, Anja; Sachse, René; van Eupen, Michiel; von Bloh, Werner; Clara Zemp, Delphine; Thonicke, Kirsten

    2016-11-01

    Climate change and land-use change are two major drivers of biome shifts causing habitat and biodiversity loss. What is missing is a continental-scale future projection of the estimated relative impacts of both drivers on biome shifts over the course of this century. Here, we provide such a projection for the biodiverse region of Latin America under four socio-economic development scenarios. We find that across all scenarios 5-6% of the total area will undergo biome shifts that can be attributed to climate change until 2099. The relative impact of climate change on biome shifts may overtake land-use change even under an optimistic climate scenario, if land-use expansion is halted by the mid-century. We suggest that constraining land-use change and preserving the remaining natural vegetation early during this century creates opportunities to mitigate climate-change impacts during the second half of this century. Our results may guide the evaluation of socio-economic scenarios in terms of their potential for biome conservation under global change.

  8. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  9. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  10. Comparison of height-diameter models based on geographically weighted regressions and linear mixed modelling applied to large scale forest inventory data

    Energy Technology Data Exchange (ETDEWEB)

    Quirós Segovia, M.; Condés Ruiz, S.; Drápela, K.

    2016-07-01

    Aim of the study: The main objective of this study was to test Geographically Weighted Regression (GWR) for developing height-diameter curves for forests on a large scale and to compare it with Linear Mixed Models (LMM). Area of study: Monospecific stands of Pinus halepensis Mill. located in the region of Murcia (Southeast Spain). Materials and Methods: The dataset consisted of 230 sample plots (2582 trees) from the Third Spanish National Forest Inventory (SNFI) randomly split into training data (152 plots) and validation data (78 plots). Two different methodologies were used for modelling local (Petterson) and generalized height-diameter relationships (Cañadas I): GWR, with different bandwidths, and linear mixed models. Finally, the quality of the estimated models was compared through statistical analysis. Main results: In general, both LMM and GWR provide better prediction capability when applied to a generalized height-diameter function than when applied to a local one, with R2 values increasing from around 0.6 to 0.7 in the model validation. Bias and RMSE were also lower for the generalized function. However, error analysis showed that there were no large differences between these two methodologies, evidencing that GWR provides results which are as good as the more frequently used LMM methodology, at least when no additional measurements are available for calibration. Research highlights: GWR is a type of spatial analysis for exploring spatially heterogeneous processes. GWR can model spatial variation in the tree height-diameter relationship and its regression quality is comparable to LMM. The advantage of GWR over LMM is the possibility of determining the spatial location of every parameter without additional measurements. Abbreviations: GWR (Geographically Weighted Regression); LMM (Linear Mixed Model); SNFI (Spanish National Forest Inventory). (Author)
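    A minimal geographically weighted regression sketch: at each prediction location a weighted least-squares fit is computed with Gaussian kernel weights that decay with distance, so the height-diameter curve varies smoothly across the region. For clarity it fits a simple log-linear height-diameter relation rather than the Petterson or Cañadas I forms used in the study, and the plot data are synthetic.

```python
# Hand-rolled GWR on invented plot data: local weighted least squares with a
# fixed Gaussian kernel bandwidth.
import numpy as np

rng = np.random.default_rng(5)
n = 152                                               # training plots, as in the study
coords = rng.uniform(0, 100, size=(n, 2))             # plot coordinates (km), synthetic
d = rng.uniform(8, 40, size=n)                        # diameter at breast height (cm)
a_true = 2.0 + 0.01 * coords[:, 0]                    # intercept drifts across the region
h = a_true + 4.0 * np.log(d) + rng.normal(scale=1.0, size=n)   # tree height (m)

def gwr_predict(x0, y0, d_new, bandwidth=20.0):
    dist = np.linalg.norm(coords - np.array([x0, y0]), axis=1)
    w = np.exp(-0.5 * (dist / bandwidth) ** 2)        # Gaussian kernel weights
    X = np.column_stack([np.ones(n), np.log(d)])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ h)  # weighted least squares
    return beta[0] + beta[1] * np.log(d_new)

print(gwr_predict(10.0, 50.0, d_new=25.0))            # local prediction in the west
print(gwr_predict(90.0, 50.0, d_new=25.0))            # local prediction in the east
```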

  11. Photogeologic study of small-scale linear features near a potential nuclear-waste repository site at Yucca Mountain, southern Nye County, Nevada

    International Nuclear Information System (INIS)

    Throckmorton, C.K.

    1987-01-01

    Linear features were mapped from 1:2400-scale aerial photographs of the northern half of the potential underground nuclear-waste repository site at Yucca Mountain by means of a Kern PG 2 stereoplotter. These features were thought to be the expression of fractures at the ground surface (fracture traces), and were mapped in the caprock, upper lithophysal, undifferentiated lower lithophysal and hackly units of the Tiva Canyon Member of the Miocene Paintbrush Tuff. To determine if the linear features corresponded to fracture traces observed in the field, stations (areas) were selected on the map where the traces were both abundant and located solely within one unit. These areas were visited in the field, where fracture-trace bearings and fracture-trace lengths were recorded. Additional data on fracture-trace length and fracture abundance, obtained from ground-based studies of cleared pavements located within the study area were used to help evaluate data collected for this study. 16 refs., 4 figs., 2 tabs

  12. Climate change in Inner Mongolia from 1955 to 2005-trends at regional, biome and local scales

    Energy Technology Data Exchange (ETDEWEB)

    Lu, N; Wilske, B; John, R; Chen, J [Department of Environmental Sciences, University of Toledo, Toledo, OH 43606 (United States); Ni, J, E-mail: nan.lu@utoledo.ed, E-mail: burkhard.wilske@utoledo.ed, E-mail: jni@ibcas.ac.c, E-mail: ranjeet.john@utoledo.ed, E-mail: jiquan.chen@utoledo.ed [Alfred Wegener Institute for Polar and Marine Research, Telegrafenberg A43, D-14473 Potsdam (Germany)

    2009-10-15

    This study investigated the climate change in Inner Mongolia based on 51 meteorological stations from 1955 to 2005. The climate data was analyzed at the regional, biome (i.e. forest, grassland and desert) and station scales, with the biome scale as our primary focus. The climate records showed trends of warmer and drier conditions in the region. The annual daily mean, maximum and minimum temperature increased whereas the diurnal temperature range (DTR) decreased. The decreasing trend of annual precipitation was not significant. However, the vapor pressure deficit (VPD) increased significantly. On the decadal scale, the warming and drying trends were more significant in the last 30 years than the preceding 20 years. The climate change varied among biomes, with more pronounced changes in the grassland and the desert biomes than in the forest biome. DTR and VPD showed the clearest inter-biome gradient from the lowest rate of change in the forest biome to the highest rate of change in the desert biome. The rates of change also showed large variations among the individual stations. Our findings correspond with the IPCC predictions that the future climate will vary significantly by location and through time, suggesting that adaptation strategies also need to be spatially viable.

  13. Climate change in Inner Mongolia from 1955 to 2005-trends at regional, biome and local scales

    International Nuclear Information System (INIS)

    Lu, N; Wilske, B; John, R; Chen, J; Ni, J

    2009-01-01

    This study investigated the climate change in Inner Mongolia based on 51 meteorological stations from 1955 to 2005. The climate data was analyzed at the regional, biome (i.e. forest, grassland and desert) and station scales, with the biome scale as our primary focus. The climate records showed trends of warmer and drier conditions in the region. The annual daily mean, maximum and minimum temperature increased whereas the diurnal temperature range (DTR) decreased. The decreasing trend of annual precipitation was not significant. However, the vapor pressure deficit (VPD) increased significantly. On the decadal scale, the warming and drying trends were more significant in the last 30 years than the preceding 20 years. The climate change varied among biomes, with more pronounced changes in the grassland and the desert biomes than in the forest biome. DTR and VPD showed the clearest inter-biome gradient from the lowest rate of change in the forest biome to the highest rate of change in the desert biome. The rates of change also showed large variations among the individual stations. Our findings correspond with the IPCC predictions that the future climate will vary significantly by location and through time, suggesting that adaptation strategies also need to be spatially viable.

  14. Use of multiple linear regression and logistic regression models to investigate changes in birthweight for term singleton infants in Scotland.

    Science.gov (United States)

    Bonellie, Sandra R

    2012-10-01

    To illustrate the use of regression and logistic regression models to investigate changes over time in the size of babies, particularly in relation to social deprivation, age of the mother and smoking. Mean birthweight has been found to be increasing in many countries in recent years, but there is still a group of babies who are born with low birthweights. Population-based retrospective cohort study. Multiple linear regression and logistic regression models are used to analyse data on term singleton births from Scottish hospitals between 1994-2003. Mothers who smoke are shown to give birth to lighter babies on average, approximately 0.57 standard deviations lower (95% confidence interval 0.55-0.58) when adjusted for sex and parity. These mothers are also more likely to have babies that are of low birthweight (odds ratio 3.46, 95% confidence interval 3.30-3.63) compared with non-smokers. Low birthweight is 30% more likely where the mother lives in the most deprived areas compared with the least deprived (odds ratio 1.30, 95% confidence interval 1.21-1.40). Smoking during pregnancy is shown to have a detrimental effect on the size of infants at birth. This effect explains some, though not all, of the observed socioeconomic differences in birthweight. It also explains much of the observed birthweight differences by the age of the mother. Identifying mothers at greater risk of having a low birthweight baby has important implications for the care and advice this group receives. © 2012 Blackwell Publishing Ltd.
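
    A minimal Python sketch of the kind of logistic regression reported above (odds ratios for low birthweight by smoking and deprivation). The data frame, column names and simulated effect sizes are hypothetical placeholders, not the Scottish hospital data.

      # Sketch of a low-birthweight logistic regression with hypothetical
      # columns 'low_bw' (0/1), 'smoker' (0/1), 'deprivation' (quintile 1-5),
      # 'sex' and 'parity'. The simulated effects are illustrative only.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 5000
      df = pd.DataFrame({
          "smoker": rng.integers(0, 2, n),
          "deprivation": rng.integers(1, 6, n),
          "sex": rng.integers(0, 2, n),
          "parity": rng.integers(0, 4, n),
      })
      logit_p = -3.0 + 1.2 * df["smoker"] + 0.07 * df["deprivation"]
      df["low_bw"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

      model = smf.logit("low_bw ~ smoker + C(deprivation) + sex + parity", data=df).fit()
      odds_ratios = np.exp(model.params)   # e.g. OR for smokers vs non-smokers
      print(odds_ratios)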

  15. A climate-change adaptation framework to reduce continental-scale vulnerability across conservation reserves

    Science.gov (United States)

    D.R. Magness; J.M. Morton; F. Huettmann; F.S. Chapin; A.D. McGuire

    2011-01-01

    Rapid climate change, in conjunction with other anthropogenic drivers, has the potential to cause mass species extinction. To minimize this risk, conservation reserves need to be coordinated at multiple spatial scales because the climate envelopes of many species may shift rapidly across large geographic areas. In addition, novel species assemblages and ecological...

  16. Variations in tropical convection as an amplifier of global climate change at the millennial scale

    NARCIS (Netherlands)

    Ivanochkoa, T.S.; Ganeshram, R.S.; Brummer, G.J.A.; Ganssen, G.M.; Jung, S.J.A.; Moreton, S.G.; Kroon, D.

    2005-01-01

    The global expression of millennial-scale climatic change during the glacial period and the persistence of this signal in Holocene records point to atmospheric teleconnections as the mechanism propagating rapid climate variations. We suggest rearrangements in the tropical convection system globally

  17. Impact of water quality change on corrosion scales in full and partially replaced lead service lines

    Science.gov (United States)

    Background: Changes in water quality have been associated with an increase in lead release from full and partial lead service lines (LSLs), as in the cases of Washington D.C. or, more recently, Flint (MI). Water quality affects the mineralogy of the scales. Furthermore, follo...

  18. Service Providers’ Willingness to Change as Innovation Inductor in Services: Validating a Scale

    Directory of Open Access Journals (Sweden)

    Marina Figueiredo Moreira

    2016-12-01

    Full Text Available This study explores the willingness of service providers to incorporate changes suggested by clients that alter previously planned services during their delivery, here named Willingness to Change in Services [WCS]. We apply qualitative research techniques to map seven dimensions related to this phenomenon: Client relationship management; Organizational conditions for change; Software characteristics and development; Conditions affecting teams; Administrative procedures and decision-making conditions; Entrepreneurial behavior; Interaction with supporting organizations. These dimensions were converted into variables composing a WCS scale, which was then submitted to theoretical and semantic validation. The resulting scale of 26 variables was applied in a large survey of 351 typical Brazilian software development service companies operating all over the country. Data from our sample were submitted to multivariate statistical analysis to validate the scale. After factorial analysis procedures, 24 items were validated and assigned to three factors representative of WCS: Organizational Routines and Values – 12 variables; Organizational Structure for Change – 6 variables; and Service Specificities – 6 variables. As future work, we expect further testing of the WCS scale on alternative service activities to provide evidence about its limits and its contribution to general service innovation theory.
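
    A small Python sketch of the exploratory factor-extraction step used in scale validations of this kind: three latent factors extracted from Likert-type item responses. The synthetic responses, the varimax rotation choice and the 0.4 loading cutoff are illustrative assumptions, not the published WCS instrument or its analysis.

      # Exploratory factor extraction sketch: three factors from synthetic
      # responses to 26 items; loading cutoff and data are illustrative.
      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(2)
      n_resp, n_items, n_factors = 351, 26, 3
      latent = rng.normal(size=(n_resp, n_factors))
      loadings_true = rng.uniform(0.3, 0.9, size=(n_factors, n_items))
      responses = latent @ loadings_true + rng.normal(scale=0.5, size=(n_resp, n_items))

      fa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(responses)
      loadings = fa.components_.T                      # items x factors
      kept = np.abs(loadings).max(axis=1) >= 0.4       # retain items with a clear loading
      print("items retained:", int(kept.sum()), "of", n_items)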

  19. Modelling land change: the issue of use and cover in wide-scale applications

    NARCIS (Netherlands)

    Bakker, M.M.; Veldkamp, A.

    2008-01-01

    In this article, the underlying causes for the apparent mismatch between land cover and land use in the context of wide-scale land change modelling are explored. A land use-land cover (LU/LC) ratio is proposed as a relevant landscape characteristic. The one-to-one ratio between land use and land

  20. Evaluating Change in Behavioral Preferences: Multidimensional Scaling Single-Ideal Point Model

    Science.gov (United States)

    Ding, Cody

    2016-01-01

    The purpose of the article is to propose a multidimensional scaling single-ideal point model as a method to evaluate changes in individuals' preferences under the explicit methodological framework of behavioral preference assessment. One example is used to illustrate the approach for a clear idea of what this approach can accomplish.

  1. Cloud-based computation for accelerating vegetation mapping and change detection at regional to national scales

    Science.gov (United States)

    Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts

    2015-01-01

    Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...

  2. Cross-Cultural Validation of Stages of Exercise Change Scale among Chinese College Students

    Science.gov (United States)

    Keating, Xiaofen D.; Guan, Jianmin; Huang, Yong; Deng, Mingying; Wu, Yifeng; Qu, Shuhua

    2005-01-01

    The purpose of the study was to test the cross-cultural concurrent validity of the stages of exercise change scale (SECS) in Chinese college students. The original SECS was translated into Chinese (C-SECS). Students from four Chinese universities (N = 1843) participated in the study. The leisure-time exercise (LTE) questionnaire was used to…

  3. Large Scale Chromosome Folding Is Stable against Local Changes in Chromatin Structure.

    Directory of Open Access Journals (Sweden)

    Ana-Maria Florescu

    2016-06-01

    Full Text Available Characterizing the link between small-scale chromatin structure and large-scale chromosome folding during interphase is a prerequisite for understanding transcription. Yet, this link remains poorly investigated. Here, we introduce a simple biophysical model where interphase chromosomes are described in terms of the folding of chromatin sequences composed of alternating blocks of fibers with different thicknesses and flexibilities, and we use it to study the influence of sequence disorder on chromosome behaviors in space and time. By employing extensive computer simulations, we thus demonstrate that chromosomes undergo noticeable conformational changes only on length-scales smaller than 10^5 basepairs and time-scales shorter than a few seconds, and we suggest there might exist effective upper bounds to the detection of chromosome reorganization in eukaryotes. We prove the relevance of our framework by modeling recent experimental FISH data on murine chromosomes.

  4. Future changes in large-scale transport and stratosphere-troposphere exchange

    Science.gov (United States)

    Abalos, M.; Randel, W. J.; Kinnison, D. E.; Garcia, R. R.

    2017-12-01

    Future changes in large-scale transport are investigated in long-term (1955-2099) simulations of the Community Earth System Model - Whole Atmosphere Community Climate Model (CESM-WACCM) under an RCP6.0 climate change scenario. We examine artificial passive tracers in order to isolate transport changes from future changes in emissions and chemical processes. The model suggests enhanced stratosphere-troposphere exchange (STE) in both directions, with decreasing tropospheric and increasing stratospheric tracer concentrations in the troposphere. Changes in the different transport processes are evaluated using the Transformed Eulerian Mean continuity equation, including parameterized convective transport. Dynamical changes associated with the rise of the tropopause height are shown to play a crucial role in future transport trends.

  5. Crystallization characteristic and scaling behavior of germanium antimony thin films for phase change memory.

    Science.gov (United States)

    Wu, Weihua; Zhao, Zihan; Shen, Bo; Zhai, Jiwei; Song, Sannian; Song, Zhitang

    2018-04-19

    Amorphous Ge8Sb92 thin films with various thicknesses were deposited by magnetron sputtering. The crystallization kinetics and optical properties of the Ge8Sb92 thin films and related scaling effects were investigated by an in situ thermally induced method and an optical technique. With a decrease in film thickness, the crystallization temperature, crystallization activation energy and data retention ability increased significantly. The changed crystallization behavior may be ascribed to the smaller grain size and larger surface-to-volume ratio as the film thickness decreased. Regardless of whether the state was amorphous or crystalline, the film resistance increased remarkably as the film thickness decreased to 3 nm. The optical band gap calculated from the reflection spectra increases distinctly with a reduction in film thickness. X-ray diffraction patterns confirm that the scaling of the Ge8Sb92 thin film can inhibit the crystallization process and reduce the grain size. The values of exponent indices that were obtained indicate that the crystallization mechanism experiences a series of changes with scaling of the film thickness. The crystallization time was estimated to determine the scaling effect on the phase change speed. The scaling effect on the electrical switching performance of a phase change memory cell was also determined. The current-voltage and resistance-voltage characteristics indicate that phase change memory cells based on a thinner Ge8Sb92 film will exhibit a higher threshold voltage, lower RESET operational voltage and greater pulse width, which implies higher thermal stability, lower power consumption and relatively lower switching velocity.

  6. Introduction to the Special Issue: Across the horizon: scale effects in global change research.

    Science.gov (United States)

    Gornish, Elise S; Leuzinger, Sebastian

    2015-01-01

    As a result of the increasing speed and magnitude with which habitats worldwide are experiencing environmental change, making accurate predictions of the effects of global change on ecosystems and the organisms that inhabit them has become an important goal for ecologists. Experimental and modelling approaches aimed at understanding the linkages between factors of global change and biotic responses have become numerous and increasingly complex in order to adequately capture the multifarious dynamics associated with these relationships. However, constrained by resources, experiments are often conducted at small spatiotemporal scales (e.g. looking at a plot of a few square metres over a few years) and at low organizational levels (looking at organisms rather than ecosystems), in spite of both theoretical and experimental work that suggests ecological dynamics across scales can be dissimilar. This phenomenon has been hypothesized to occur because the mechanisms that drive dynamics across scales differ. A good example is the effect of elevated CO2 on transpiration. While at the leaf level transpiration can be reduced, at the stand level transpiration can increase because leaf area per unit ground area increases. The reported net effect is then highly dependent on the spatiotemporal scale. This special issue considers the biological relevancy inherent in the patterns associated with the magnitude and type of response to changing environmental conditions, across scales. This collection of papers attempts to provide a comprehensive treatment of this phenomenon in order to help develop an understanding of the extent of, and mechanisms involved with, ecological response to global change. Published by Oxford University Press on behalf of the Annals of Botany Company.

  7. Preliminary Development of a Free Piston Expander–Linear Generator for Small-Scale Organic Rankine Cycle (ORC) Waste Heat Recovery System

    Directory of Open Access Journals (Sweden)

    Gaosheng Li

    2016-04-01

    Full Text Available A novel free piston expander-linear generator (FPE-LG) integrated unit was proposed to recover waste heat efficiently from a vehicle engine. This integrated unit can be used in a small-scale Organic Rankine Cycle (ORC) system and can directly convert the thermodynamic energy of the working fluid into electric energy. The conceptual design of the free piston expander (FPE) was introduced and discussed. A cam plate and the corresponding valve train were used to control the inlet and outlet valve timing of the FPE. The working principle of the FPE-LG was proven to be feasible using an air test rig. The indicated efficiency of the FPE was obtained from the p–V indicator diagram. The dynamic characteristics of the in-cylinder flow field during the intake and exhaust processes of the FPE were analyzed based on Fluent software and 3D numerical simulation models using a computational fluid dynamics method. Results show that the indicated efficiency of the FPE can reach 66.2% and the maximal electric power output of the FPE-LG can reach 22.7 W when the working frequency is 3 Hz and the intake pressure is 0.2 MPa. Two large-scale vortices are formed during the intake process because of the non-uniform distribution of velocity and pressure. The vortex flow will convert pressure energy and kinetic energy into thermodynamic energy for the working fluid, which weakens the power capacity of the working fluid.
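
    A short Python sketch of how an indicated work figure can be extracted from a p–V indicator diagram like the one mentioned above: numerically integrate the closed loop. The synthetic pressure-volume loop and the 3 Hz frequency are illustrative; this is not the measured FPE data.

      # Indicated work from a p-V loop via trapezoidal integration of the
      # closed curve; the cycle below is synthetic/illustrative.
      import numpy as np

      theta = np.linspace(0.0, 2.0 * np.pi, 400)
      volume = 1.0e-4 * (1.2 + np.cos(theta))          # m^3, synthetic cycle
      pressure = 1.0e5 * (1.5 + 0.4 * np.sin(theta))   # Pa, synthetic cycle

      # Magnitude of the loop integral  |∮ p dV|  (trapezoid rule).
      indicated_work = abs(np.sum(0.5 * (pressure[1:] + pressure[:-1]) * np.diff(volume)))
      frequency = 3.0                                  # cycles per second (assumed)
      print(f"indicated power ~ {indicated_work * frequency:.1f} W")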

  8. Organizational capacity for change in health care: Development and validation of a scale.

    Science.gov (United States)

    Spaulding, Aaron; Kash, Bita A; Johnson, Christopher E; Gamm, Larry

    We do not have a strong understanding of a health care organization's capacity for attempting and completing multiple and sometimes competing change initiatives. Capacity for change implementation is a critical success factor as the health care industry is faced with ongoing demands for change and transformation because of technological advances, market forces, and regulatory environment. The aim of this study was to develop and validate a tool to measure health care organizations' capacity to change by building upon previous conceptualizations of absorptive capacity and organizational readiness for change. A multistep process was used to develop the organizational capacity for change survey. The survey was sent to two populations requesting answers to questions about the organization's leadership, culture, and technologies in use throughout the organization. Exploratory and confirmatory factor analyses were conducted to validate the survey as a measurement tool for organizational capacity for change in the health care setting. The resulting organizational capacity for change measurement tool proves to be a valid and reliable method of evaluating a hospital's capacity for change through the measurement of the population's perceptions related to leadership, culture, and organizational technologies. The organizational capacity for change measurement tool can help health care managers and leaders evaluate the capacity of employees, departments, and teams for change before large-scale implementation.

  9. Selecting quantitative water management measures at the river basin scale in a global change context

    Science.gov (United States)

    Girard, Corentin; Rinaudo, Jean-Daniel; Caballero, Yvan; Pulido-Velazquez, Manuel

    2013-04-01

    One of the main challenges in the implementation of the Water Framework Directive (WFD) in the European Union is the definition of programmes of measures to reach the good status of the European water bodies. In areas where water scarcity is an issue, one of these challenges is the selection of water conservation and capacity expansion measures to ensure minimum environmental in-stream flow requirements. At the same time, the WFD calls for the use of economic analysis to identify the most cost-effective combination of measures at the river basin scale to achieve its objective. In this respect, hydro-economic river basin models, by integrating economic, environmental and hydrological aspects at the river basin scale in a consistent framework, represent a promising approach. This article presents a least-cost river basin optimization model (LCRBOM) that selects the combination of quantitative water management measures to meet environmental flows for future scenarios of agricultural and urban demand, taking into account the impact of climate change. The model has been implemented in a case study on a Mediterranean basin in the south of France, the Orb River basin. The basin has been identified as in need of quantitative water management measures in order to reach the good status of its water bodies. The LCRBOM has been developed using GAMS, applying Mixed Integer Linear Programming. It is run to select the set of measures that minimizes the total annualized cost of the applied measures, while meeting the demands and minimum in-stream flow constraints. For the economic analysis, the programme of measures is composed of water conservation measures on agricultural and urban water demands. It compares them with measures mobilizing new water resources coming from groundwater, inter-basin transfers and improvement in reservoir operating rules. The total annual cost of each measure is calculated for each demand unit considering operation, maintenance and
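
    A minimal Python sketch of the least-cost selection idea described above: a binary decision per measure, minimizing total annualized cost subject to a required volume of water savings. The measure names, costs, savings and target are hypothetical; the actual LCRBOM is a GAMS model with full hydrological and demand constraints well beyond this toy.

      # Toy least-cost programme of measures as a binary MILP (PuLP).
      # Costs, savings and the target are hypothetical placeholders.
      import pulp

      measures = {            # name: (annualised cost in k-euro/yr, saving in hm3/yr)
          "drip_irrigation": (120.0, 4.0),
          "urban_leak_repair": (80.0, 2.5),
          "reservoir_reoperation": (40.0, 1.5),
          "groundwater_wells": (150.0, 3.0),
      }
      required_saving = 6.0   # hm3/yr needed to meet environmental flows (assumed)

      prob = pulp.LpProblem("least_cost_measures", pulp.LpMinimize)
      x = {m: pulp.LpVariable(m, cat="Binary") for m in measures}
      prob += pulp.lpSum(cost * x[m] for m, (cost, _) in measures.items())
      prob += pulp.lpSum(save * x[m] for m, (_, save) in measures.items()) >= required_saving

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      selected = [m for m in measures if x[m].value() > 0.5]
      print("selected measures:", selected)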

  10. Effect of Non-linear Velocity Loss Changes in Pumping Stage of Hydraulic Ram Pumps on Pumping Discharge Rate

    Directory of Open Access Journals (Sweden)

    Reza Fatahialkouhi

    2018-03-01

    Full Text Available The ram pump is a device which pumps a portion of the input discharge to the pumping system at a significant height by using the renewable energy of the water hammer. The complexity of the flow hydraulics on the one hand, and the use of simplifying assumptions on the other, have caused errors in the analytical models proposed for analyzing the running cycle of these pumps. In this study we modify the analytical model governing the hydraulic performance of these pumps in the pumping stage. The cycle of the ram pump was divided into three stages of acceleration, pumping and recoil, and the governing equations for each stage of the cycle are presented using the method of characteristics. Since the closing of the impulse valve is nonlinear, the velocity loss in the pumping stage is treated nonlinearly. The governing equations in the pumping stage were also modified by considering the disc elasticity of the impulse valve and the change in volume of the pump body when the water hammer phenomenon occurs. In order to evaluate the results and determine empirical factors of the proposed analytical model, a physical model of the ram pump was built with an internal diameter of 51 mm. Results of this study are divided into several parts. In the first part, loss coefficients of the impulse valve were measured experimentally, and empirical equations for the drag coefficient and friction coefficient of the impulse valve were obtained by nonlinear regression. In the second part, results were evaluated using experimental data taken from this study. Evaluation of statistical error functions showed that the proposed model has good accuracy in predicting the experimental observations. In the third part, in order to validate the results in the pumping stage, the analytical models of Lansford and Dugan (1941) and Tacke (1988) were used and the error functions resulting from prediction of experimental observations were investigated through analytical models of

  11. Identification of the Scale of Changes in Personnel Motivation Techniques at Mechanical-Engineering Enterprises

    Directory of Open Access Journals (Sweden)

    Melnyk Olga G.

    2016-02-01

    Full Text Available The article proposes a method for identifying the scale of changes in personnel motivation techniques at mechanical-engineering enterprises, based on a structural and logical sequence of stages (identification of the mission, strategy and objectives of the enterprise; forecasting the development of the enterprise business environment; SWOT-analysis of actual motivation techniques; deciding on the scale of changes in motivation techniques; choosing providers for changing personnel motivation techniques; choosing an alternative for changing motivation techniques; implementation of changes in motivation techniques; control over changes in motivation techniques). It has been substantiated that the improved method enables a systematic and analytical justification for management decision-making in this field and the choice of the scale and variant of changes in motivation techniques best suited to the mechanical-engineering enterprise. The method takes the past, current and prospective character of the motivational sphere into account. Firstly, the approach is based on considering the past state of the motivational sphere of the mechanical-engineering enterprise; secondly, the method involves identifying the current state of personnel motivation techniques; thirdly, the method incorporates the prospective view, manifested in a strategic vision of the enterprise's development as well as in forecasting the development of its business environment. The advantage of the proposed method is that its level of specification may vary depending on the set goals, resource constraints and necessity. Among other things, this method allows integrating various formalized and non-formalized causal relationships in the sphere of personnel motivation at machine-building enterprises and the management of relevant processes. This creates preconditions for a

  12. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    Science.gov (United States)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2017-09-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—`noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.
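
    A minimal Python sketch of the signal-versus-noise separation described above: in a single-model initial-condition ensemble, the forced response is estimated by the ensemble-mean change and internal variability by the spread of the members' changes. The 16-member size follows the abstract; the precipitation-change fields and the robustness criterion are synthetic and illustrative.

      # Forced response (signal) vs internal variability (noise) in a
      # single-model initial-condition ensemble; synthetic fields.
      import numpy as np

      rng = np.random.default_rng(3)
      n_members, ny, nx = 16, 40, 60
      forced_change = 0.1 + 0.05 * rng.standard_normal((ny, nx))      # fixed pattern
      member_changes = forced_change + 0.2 * rng.standard_normal((n_members, ny, nx))

      signal = member_changes.mean(axis=0)              # forced-response estimate
      noise = member_changes.std(axis=0, ddof=1)        # internal variability
      snr = signal / noise
      robust = np.abs(snr) > 2.0 / np.sqrt(n_members)   # crude detectability mask
      print("fraction of grid points with a detectable signal:", robust.mean())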

  13. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    Science.gov (United States)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2018-06-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—`noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.

  14. Water limited agriculture in Africa: Climate change sensitivity of large scale land investments

    Science.gov (United States)

    Rulli, M. C.; D'Odorico, P.; Chiarelli, D. D.; Davis, K. F.

    2015-12-01

    The past few decades have seen unprecedented changes in the global agricultural system with a dramatic increase in the rates of food production fueled by an escalating demand for food calories, as a result of demographic growth, dietary changes, and - more recently - new bioenergy policies. Food prices have become consistently higher and increasingly volatile with dramatic spikes in 2007-08 and 2010-11. The confluence of these factors has heightened demand for land and brought a wave of land investment to the developing world: some of the more affluent countries are trying to secure land rights in areas suitable for agriculture. According to some estimates, to date, roughly 38 million hectares have been acquired worldwide by large scale investors, 16 million of which are in Africa. More than 85% of large scale land acquisitions in Africa are by foreign investors. Many land deals are motivated not only by the need for fertile land but also by the water resources required for crop production. Despite some recent assessments of the water appropriation associated with large scale land investments, their impact on the water resources of the target countries under present conditions and climate change scenarios remains poorly understood. Here we investigate irrigation water requirements by various crops planted in the acquired land as an indicator of the pressure likely placed by land investors on ("blue") water resources of target regions in Africa and evaluate the sensitivity to climate change scenarios.

  15. Safety Effect Analysis of the Large-Scale Design Changes in a Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun-Chan; Lee, Hyun-Gyo [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2015-05-15

    These activities were predominantly focused on replacing obsolete systems with new systems, and these efforts were intended not only to prolong the plant life, but also to guarantee the safe operation of the units. This review demonstrates the safety effect evaluation, using the probabilistic safety assessment (PSA), of the design changes, system improvements, and Fukushima accident action items for Kori unit 1 (K1). For the large-scale system design changes for K1, the safety effects from the PSA perspective were reviewed using the risk quantification results before and after the system improvements. This evaluation considered seven significant design changes, including the replacement of the control building air conditioning system and the performance improvement of the containment sump using a new filtering system, in addition to the five system design changes mentioned above. The analysis results demonstrated that the core damage frequency (CDF) was reduced by 12% overall, from 1.62E-5/y to 1.43E-5/y. The CDF reduction was larger in the transient group than in the loss of coolant accident (LOCA) group. In conclusion, the analysis using the K1 PSA model supports that plant safety has been appropriately maintained after the large-scale design changes, in consideration of the changed operation factors and failure modes due to the system improvements.

  16. Variation of linear and circular polarization persistence for changing field of view and collection area in a forward scattering environment

    Science.gov (United States)

    van der Laan, John D.; Wright, Jeremy B.; Scrymgeour, David A.; Kemme, Shanalyn A.; Dereniak, Eustace L.

    2016-05-01

    We present experimental and simulation results for a laboratory-based forward-scattering environment, where 1 μm diameter polystyrene spheres are suspended in water to model the optical scattering properties of fog. Circular polarization maintains its degree of polarization better than linear polarization as the optical thickness of the scattering environment increases. Both simulation and experiment quantify circular polarization's superior persistence, compared to that of linear polarization, and show that it is much less affected by variations in the field of view and collection area of the optical system. Our experimental environment's lateral extent was physically finite, causing a significant difference between measured and simulated degree of polarization values for incident linearly polarized light, but not for circularly polarized light. Through simulation we demonstrate that circular polarization is less susceptible to the finite environmental extent as well as the collection optic's limiting configuration.
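
    The degrees of polarization compared in the record above follow directly from the Stokes parameters. The small Python helper below computes the total, linear and circular degrees of polarization for two example Stokes vectors; the example values are illustrative only, not the measured data.

      # Degree-of-polarization bookkeeping from a Stokes vector (I, Q, U, V):
      # total DoP, linear DoLP and circular DoCP. Example vectors are illustrative.
      import numpy as np

      def degrees_of_polarization(stokes):
          i, q, u, v = stokes
          dop = np.sqrt(q**2 + u**2 + v**2) / i    # total degree of polarization
          dolp = np.sqrt(q**2 + u**2) / i          # linear part
          docp = abs(v) / i                        # circular part
          return dop, dolp, docp

      after_fog_linear = (1.0, 0.35, 0.05, 0.0)    # partially depolarized linear input
      after_fog_circular = (1.0, 0.02, 0.01, 0.55) # better-preserved circular input
      for name, s in [("linear", after_fog_linear), ("circular", after_fog_circular)]:
          print(name, [round(x, 3) for x in degrees_of_polarization(s)])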

  17. Linear algebra

    CERN Document Server

    Shilov, Georgi E

    1977-01-01

    Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.

  18. Exploring dimensions, scales, and cross-scale dynamics from the perspectives of change agents in social-ecological systems

    NARCIS (Netherlands)

    Vervoort, J.M.; Rutting, L.; Kok, K.; Hermans, F.L.P.; Veldkamp, A.; Bregt, A.K.; Lammeren, van R.J.A.

    2012-01-01

    Issues of scale play a crucial role in the governance of social–ecological systems. Yet, attempts to bridge interdisciplinary perspectives on the role of scale have thus far largely been limited to the science arena. This study has extended the scale vocabulary to allow for the inclusion of

  19. Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale

    Science.gov (United States)

    Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru

    2013-04-01

    Basin-scale climate change impact studies mainly rely on general circulation models (GCMs) and the related emission scenarios. Realistic and reliable data from GCMs are crucial for national-scale or basin-scale impact and vulnerability assessments to build a safe society under climate change. However, GCMs fail to simulate regional climate features due to imprecise parameterization schemes in atmospheric physics and their coarse resolution. This study describes how to exclude unsatisfactory GCMs with respect to the focused basin, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection is based on the regional climate features of the seasonal evolution as a benchmark, and mainly depends on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis Project (JRA-25) are used as references in evaluating the spatial pattern and error of each GCM. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: too many low-intensity drizzle days with no dry days, underestimation of heavy rainfall, and misrepresented inter-annual variability of the local climate. Biases in heavy rainfall are corrected by fitting a generalized Pareto distribution (GPD) to a peaks-over-threshold series. The rain-day frequency error is fixed by rank order statistics, and the seasonal variation problem is solved by fitting a gamma distribution in each month to in-situ stations and the corresponding GCM grids. By implementing the proposed bias-correction technique at all in-situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished. The applicability of the proposed method has been examined for some of the basins in various climate
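
    A Python sketch of the two distribution-fitting pieces named above: a generalized Pareto fit to a peaks-over-threshold series for heavy rainfall, and a wet-day gamma quantile mapping of GCM precipitation onto observations (a single month shown for brevity). The daily series, thresholds and parameters are synthetic and illustrative, not the study's station data.

      # GPD fit to heavy-rain exceedances and gamma quantile mapping of wet days;
      # synthetic daily precipitation, illustrative thresholds.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      obs = rng.gamma(shape=0.8, scale=12.0, size=3650)   # "observed" daily rain (mm)
      gcm = rng.gamma(shape=1.5, scale=5.0, size=3650)    # drizzly, biased GCM rain

      # 1) Heavy-rain tail: GPD fitted to exceedances over a high threshold.
      threshold = np.percentile(obs, 95)
      excess = obs[obs > threshold] - threshold
      c, _, scale = stats.genpareto.fit(excess, floc=0.0)
      print("GPD shape/scale of observed extremes:", round(c, 3), round(scale, 2))

      # 2) Wet-day gamma quantile mapping (one month's worth of data assumed).
      wet_obs, wet_gcm = obs[obs > 1.0], gcm[gcm > 1.0]
      a_obs, _, b_obs = stats.gamma.fit(wet_obs, floc=0.0)
      a_gcm, _, b_gcm = stats.gamma.fit(wet_gcm, floc=0.0)
      corrected = stats.gamma.ppf(stats.gamma.cdf(gcm, a_gcm, scale=b_gcm), a_obs, scale=b_obs)
      print("mean rain before/after correction:", round(gcm.mean(), 2), round(corrected.mean(), 2))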

  20. Nanometer-scale temperature measurements of phase change memory and carbon nanomaterials

    Science.gov (United States)

    Grosse, Kyle Lane

    This work investigates nanometer-scale thermometry and thermal transport in new electronic devices to mitigate future electronic energy consumption. Nanometer-scale thermal transport is integral to electronic energy consumption and limits current electronic performance. New electronic devices are required to improve future electronic performance and energy consumption, but heat generation is not well understood in these new technologies. Thermal transport deviates significantly at the nanometer-scale from macroscopic systems as low dimensional materials, grain structure, interfaces, and thermoelectric effects can dominate electronic performance. This work develops and implements an atomic force microscopy (AFM) based nanometer-scale thermometry technique, known as scanning Joule expansion microscopy (SJEM), to measure nanometer-scale heat generation in new graphene and phase change memory (PCM) devices, which have potential to improve performance and energy consumption of future electronics. Nanometer-scale thermometry of chemical vapor deposition (CVD) grown graphene measured the heat generation at graphene wrinkles and grain boundaries (GBs). Graphene is an atomically-thin, two dimensional (2D) carbon material with promising applications in new electronic devices. Comparing measurements and predictions of CVD graphene heating predicted the resistivity, voltage drop, and temperature rise across the one dimensional (1D) GB defects. This work measured the nanometer-scale temperature rise of thin film Ge2Sb2Te5 (GST) based PCM due to Joule, thermoelectric, interface, and grain structure effects. PCM has potential to reduce energy consumption and improve performance of future electronic memory. A new nanometer-scale thermometry technique is developed for independent and direct observation of Joule and thermoelectric effects at the nanometer-scale, and the technique is demonstrated by SJEM measurements of GST devices. Uniform heating and GST properties are observed for

  1. Climate change impacts and adaptations on small-scale livestock production

    Directory of Open Access Journals (Sweden)

    Taruvinga, A.

    2013-06-01

    Full Text Available The paper estimated the impacts of climate change and adaptations on small-scale livestock production. The study is based on a survey of 1484 small-scale livestock rural farmers across the Eastern Cape Province of South Africa. Regression estimates find that with warming, the probability of choosing the following species increases: goats, dual purpose chicken (DPC), layers, donkeys and ducks. High precipitation increases the probability of choosing the following animals: beef, goats, DPC and donkeys. Further, socio-economic estimates indicate that livestock selection choices are also conditioned by gender, age, marital status, education and household size. The paper therefore concluded that as the climate changes, rural farmers switch their livestock combinations as a coping strategy. Unfortunately, rural farmers face a limited pool of preferred livestock selections compatible with a harsh climate, which might translate into a bleak future for rural livestock farmers.

  2. Climate Change Impacts on Runoff Regimes at a River Basin Scale in Central Vietnam

    Directory of Open Access Journals (Sweden)

    Do Hoai Nam

    2012-01-01

    Full Text Available Global warming has resulted in significant variability of the global climate, especially with regard to variation in temperature and precipitation. As a result, it is expected that river flow regimes will vary accordingly. This study presents a preliminary projection of medium-term and long-term runoff variation caused by climate change at a river basin scale. The large-scale precipitation projection for the middle and the end of the 21st century under the A1B scenario simulated by the CGCM model (MRI & JMA, 300 km resolution) is statistically downscaled to the basin scale and then used as input to the super-tank model for runoff analysis of the upper Thu Bon River basin in Central Vietnam. Results show that by the middle and the end of this century annual rainfall will increase slightly; together with rising temperature, potential evapotranspiration is also projected to increase. The total annual runoff, as a result, is found not to vary distinctly relative to the baseline period 1981-2000; however, runoff will decrease in the dry season and increase in the rainy season. The results also indicate a tendency for the high river flow period to shift from Sep-Dec at present to Oct-Jan in the future. The present study demonstrates potential impacts of climate change on streamflow regimes in an attempt to propose appropriate adaptation measures and responses at the river basin scale.

  3. Multi-scale MHD analysis of heliotron plasma in change of background field

    International Nuclear Information System (INIS)

    Ichiguchi, K.; Sakakibara, S.; Ohdachi, S.; Carreras, B.A.

    2012-11-01

    A partial collapse observed in the Large Helical Device (LHD) experiments shifting the magnetic axis inwardly with a real time control of the background field is analyzed with a magnetohydrodynamics (MHD) numerical simulation. The simulation is carried out with a multi-scale simulation scheme. In the simulation, the equilibrium also evolves including the change of the pressure and the rotational transform due to the perturbation dynamics. The simulation result agrees with the experiments qualitatively, which shows that the mechanism is attributed to the destabilization of an infernal-like mode. The destabilization is caused by the change of the background field through the enhancement of the magnetic hill. (author)

  4. Sustainability of small reservoirs and large scale water availability under current conditions and climate change

    OpenAIRE

    Krol, Martinus S.; de Vries, Marjella J.; van Oel, P.R.; Carlos de Araújo, José

    2011-01-01

    Semi-arid river basins often rely on reservoirs for water supply. Small reservoirs may impact on large-scale water availability both by enhancing availability in a distributed sense and by subtracting water for large downstream user communities, e.g. served by large reservoirs. Both of these impacts of small reservoirs are subject to climate change. Using a case-study on North-East Brazil, this paper shows that climate change impacts on water availability may be severe, and impacts on distrib...

  5. Linear collider: a preview

    Energy Technology Data Exchange (ETDEWEB)

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
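
    A toy Python illustration of the crossover argument in the record above: storage-ring cost grows roughly quadratically with centre-of-mass energy (driven by synchrotron-radiation losses) while linear-collider cost grows roughly linearly, so above some energy the linear machine becomes cheaper. The cost coefficients are hypothetical placeholders, not estimates from the lecture.

      # Crossover energy between a linear (a + b*E) and a quadratic (c + d*E^2)
      # cost scaling; all coefficients are hypothetical.
      import numpy as np

      a_lin, b_lin = 200.0, 1.0       # linear collider:  cost = a_lin + b_lin * E
      a_ring, b_ring = 50.0, 0.004    # storage ring:     cost = a_ring + b_ring * E**2
      E = np.linspace(10.0, 1000.0, 1000)            # centre-of-mass energy (GeV)

      cost_linear = a_lin + b_lin * E
      cost_ring = a_ring + b_ring * E ** 2
      crossover = E[np.argmax(cost_linear < cost_ring)]   # first energy where linear wins
      print(f"linear collider cheaper above roughly {crossover:.0f} GeV")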

  6. Linear collider: a preview

    International Nuclear Information System (INIS)

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center

  7. Change in Urban Albedo in London: A Multi-scale Perspective

    Science.gov (United States)

    Susca, T.; Kotthaus, S.; Grimmond, S.

    2013-12-01

    Urbanization-induced change in land use has considerable implications for climate, air quality, resources and ecosystems. Urban-induced warming is one of the most well-known impacts, and it can extend, directly and indirectly, beyond the city. One way to reduce its magnitude is to modify the surface-atmosphere exchanges by changing the urban albedo. Because the increased rugosity created by the morphology of a city results in a lower albedo for constant material characteristics, changing the albedo has impacts across a range of scales. Here a multi-scale assessment of the potential effects of an increase in albedo in London is presented. This includes modeling at the global and meso-scale informed by local and micro-scale measurements. In this study, first-order calculations are conducted for the impact of changing the albedo (e.g. a 0.01 increase) on the radiative exchange. For example, when incoming solar radiation and cloud cover are considered, based on data retrieved from NASA (http://power.larc.nasa.gov/) for a ~1600 km2 area of London, a 0.01 increase in albedo would produce a mean decrease in the instantaneous solar radiative forcing on the same surface of 0.40 W m-2. The nature of the surface is critical when considering the impact of changes in albedo. For example, in the Central Activity Zone in London, pavement and building can vary from 10 to 100% of the plan area. From observations, the albedo is seen to change dramatically with changes in building materials. For example, glass surfaces, which are being used increasingly in the central business district, result in dramatic changes in albedo. Using the documented albedo variations determined across different scales, the impacts are considered. For example, the effect of the increase in urban albedo is translated into the corresponding amount of avoided emission of carbon dioxide that produces the same effect on climate. At local scale, the effect that the increase in urban albedo can potentially have on local
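
    The first-order arithmetic behind the 0.40 W m-2 figure quoted above is simply: decrease in absorbed shortwave = albedo increase × mean incoming solar radiation at the surface. In the Python sketch below, the ~40 W m-2 effective mean insolation is only the value implied by the abstract's own numbers (0.01 increase → 0.40 W m-2), not an independently verified climatology, and the area total is likewise illustrative.

      # First-order albedo forcing: delta_F = -delta_albedo * mean insolation.
      # The 40 W m-2 value is back-calculated from the abstract's figures.
      delta_albedo = 0.01
      mean_insolation = 40.0                      # W m-2, implied effective mean
      delta_forcing = -delta_albedo * mean_insolation
      area_km2 = 1600.0                           # approximate area considered
      total_watts = abs(delta_forcing) * area_km2 * 1e6
      print(f"forcing change: {delta_forcing:+.2f} W m-2 over ~{area_km2:.0f} km2 "
            f"(~{total_watts / 1e9:.1f} GW less absorbed)")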

  8. Health Systems Research in a Complex and Rapidly Changing Context: Ethical Implications of Major Health Systems Change at Scale.

    Science.gov (United States)

    MacGregor, Hayley; Bloom, Gerald

    2016-12-01

    This paper discusses health policy and systems research in complex and rapidly changing contexts. It focuses on ethical issues at stake for researchers working with government policy makers to provide evidence to inform major health systems change at scale, particularly when the dynamic nature of the context and ongoing challenges to the health system can result in unpredictable outcomes. We focus on situations where 'country ownership' of HSR is relatively well established and where there is significant involvement of local researchers and close ties and relationships with policy makers are often present. We frame our discussion around two country case studies with which we are familiar, namely China and South Africa and discuss the implications for conducting 'embedded' research. We suggest that reflexivity is an important concept for health system researchers who need to think carefully about positionality and their normative stance and to use such reflection to ensure that they can negotiate to retain autonomy, whilst also contributing evidence for health system change. A research process informed by the notion of reflexive practice and iterative learning will require a longitudinal review at key points in the research timeline. Such review should include the convening of a deliberative process and should involve a range of stakeholders, including those most likely to be affected by the intended and unintended consequences of change. © 2016 The Authors Developing World Bioethics Published by John Wiley & Sons Ltd.

  9. Millennial-scale temperature change velocity in the continental northern Neotropics.

    Science.gov (United States)

    Correa-Metrio, Alexander; Bush, Mark; Lozano-García, Socorro; Sosa-Nájera, Susana

    2013-01-01

    Climate has been inherently linked to global diversity patterns, and yet no empirical data are available to put modern climate change into a millennial-scale context. High tropical species diversity has been linked to slow rates of climate change during the Quaternary, an assumption that lacks an empirical foundation. Thus, there is the need for quantifying the velocity at which the bioclimatic space changed during the Quaternary in the tropics. Here we present rates of climate change for the late Pleistocene and Holocene from Mexico and Guatemala. An extensive modern pollen survey and fossil pollen data from two long sedimentary records (30,000 and 86,000 years for highlands and lowlands, respectively) were used to estimate past temperatures. Derived temperature profiles show a parallel long-term trend and a similar cooling during the Last Glacial Maximum in the Guatemalan lowlands and the Mexican highlands. Temperature estimates and digital elevation models were used to calculate the velocity of isotherm displacement (temperature change velocity) for the time period contained in each record. Our analyses showed that temperature change velocities in Mesoamerica during the late Quaternary were at least four times slower than values reported for the last 50 years, but also at least twice as fast as those obtained from recent models. Our data demonstrate that, given extremely high temperature change velocities, species survival must have relied on either microrefugial populations or persistence of suppressed individuals. Contrary to the usual expectation of stable climates being associated with high diversity, our results suggest that Quaternary tropical diversity was probably maintained by centennial-scale oscillatory climatic variability that forestalled competitive exclusion. As humans have simplified modern landscapes, thereby removing potential microrefugia, and climate change is occurring monotonically at a very high velocity, extinction risk for tropical
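
    A minimal Python sketch of the temperature-change-velocity calculation described above: the local warming rate divided by the local spatial temperature gradient, here derived from a synthetic elevation grid and an assumed lapse rate. The grid spacing, lapse rate and warming rate are illustrative assumptions, not values from the study.

      # Climate-change velocity = temporal temperature trend / spatial gradient.
      # Elevation grid, lapse rate and warming rate are synthetic/illustrative.
      import numpy as np

      rng = np.random.default_rng(5)
      ny, nx, cell_km = 100, 100, 1.0
      elevation = np.cumsum(rng.normal(0, 5, (ny, nx)), axis=0) + 500.0   # m
      lapse_rate = 6.5 / 1000.0                        # deg C per m
      temperature = 25.0 - lapse_rate * elevation      # deg C

      dT_dy, dT_dx = np.gradient(temperature, cell_km)         # deg C per km
      spatial_gradient = np.hypot(dT_dx, dT_dy)                # deg C per km
      warming_rate = 0.5 / 1000.0                              # deg C per yr (0.5 per kyr)

      velocity = warming_rate / np.maximum(spatial_gradient, 1e-6)   # km per yr
      print("median isotherm-displacement velocity:", round(float(np.median(velocity)), 4), "km/yr")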

  10. Millennial-scale temperature change velocity in the continental northern Neotropics.

    Directory of Open Access Journals (Sweden)

    Alexander Correa-Metrio

    Full Text Available Climate has been inherently linked to global diversity patterns, and yet no empirical data are available to put modern climate change into a millennial-scale context. High tropical species diversity has been linked to slow rates of climate change during the Quaternary, an assumption that lacks an empirical foundation. Thus, there is the need for quantifying the velocity at which the bioclimatic space changed during the Quaternary in the tropics. Here we present rates of climate change for the late Pleistocene and Holocene from Mexico and Guatemala. An extensive modern pollen survey and fossil pollen data from two long sedimentary records (30,000 and 86,000 years for highlands and lowlands, respectively) were used to estimate past temperatures. Derived temperature profiles show a parallel long-term trend and a similar cooling during the Last Glacial Maximum in the Guatemalan lowlands and the Mexican highlands. Temperature estimates and digital elevation models were used to calculate the velocity of isotherm displacement (temperature change velocity) for the time period contained in each record. Our analyses showed that temperature change velocities in Mesoamerica during the late Quaternary were at least four times slower than values reported for the last 50 years, but also at least twice as fast as those obtained from recent models. Our data demonstrate that, given extremely high temperature change velocities, species survival must have relied on either microrefugial populations or persistence of suppressed individuals. Contrary to the usual expectation of stable climates being associated with high diversity, our results suggest that Quaternary tropical diversity was probably maintained by centennial-scale oscillatory climatic variability that forestalled competitive exclusion. As humans have simplified modern landscapes, thereby removing potential microrefugia, and climate change is occurring monotonically at a very high velocity, extinction risk

  11. Large-scale genome-wide association studies and meta-analyses of longitudinal change in adult lung function.

    Directory of Open Access Journals (Sweden)

    Wenbo Tang

    Full Text Available Genome-wide association studies (GWAS) have identified numerous loci influencing cross-sectional lung function, but less is known about genes influencing longitudinal change in lung function. We performed GWAS of the rate of change in forced expiratory volume in the first second (FEV1) in 14 longitudinal, population-based cohort studies comprising 27,249 adults of European ancestry, using linear mixed effects models, and combined cohort-specific results using fixed effect meta-analysis to identify novel genetic loci associated with longitudinal change in lung function. Gene expression analyses were subsequently performed for the identified genetic loci. As a secondary aim, we estimated the mean rate of decline in FEV1 by smoking pattern, irrespective of genotypes, across these 14 studies using meta-analysis. The overall meta-analysis produced suggestive evidence for association at the novel IL16/STARD5/TMC3 locus on chromosome 15 (P = 5.71 × 10^-7). In addition, meta-analysis using the five cohorts with ≥3 FEV1 measurements per participant identified the novel ME3 locus on chromosome 11 (P = 2.18 × 10^-8) at genome-wide significance. Neither locus was associated with FEV1 decline in two additional cohort studies. We confirmed gene expression of IL16, STARD5, and ME3 in multiple lung tissues. Publicly available microarray data confirmed differential expression of all three genes in lung samples from COPD patients compared with controls. Irrespective of genotypes, the combined estimate for FEV1 decline was 26.9, 29.2 and 35.7 mL/year in never, former, and persistent smokers, respectively. In this large-scale GWAS, we identified two novel genetic loci associated with the rate of change in FEV1 that harbor candidate genes with biologically plausible functional links to lung function.
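
    A minimal Python sketch of the inverse-variance fixed-effect meta-analysis used to combine cohort-specific estimates such as the change in FEV1 slope per allele. The per-cohort betas and standard errors below are synthetic placeholders, not results from the study.

      # Inverse-variance fixed-effect meta-analysis of cohort-specific estimates.
      # Betas/standard errors are synthetic placeholders.
      import numpy as np
      from scipy import stats

      beta = np.array([-1.2, -0.8, -1.5, -0.3, -1.0])   # per-cohort estimates (mL/yr per allele)
      se = np.array([0.6, 0.5, 0.9, 0.7, 0.4])          # per-cohort standard errors

      w = 1.0 / se ** 2                                 # inverse-variance weights
      beta_meta = np.sum(w * beta) / np.sum(w)
      se_meta = np.sqrt(1.0 / np.sum(w))
      z = beta_meta / se_meta
      p = 2.0 * stats.norm.sf(abs(z))                   # two-sided p-value
      print(f"combined beta = {beta_meta:.2f} mL/yr, SE = {se_meta:.2f}, P = {p:.2e}")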

  12. Linearity and Non-linearity of Photorefractive effect in Materials ...

    African Journals Online (AJOL)

    In this paper we have studied the Linearity and Non-linearity of Photorefractive effect in materials using the band transport model. For low light beam intensities the change in the refractive index is proportional to the electric field for linear optics while for non-linear optics the change in refractive index is directly proportional ...

  13. Design of a quasi-flat linear permanent magnet generator for pico-scale wave energy converter in south coast of Yogyakarta, Indonesia

    Science.gov (United States)

    Azhari, Budi; Prawinnetou, Wassy; Hutama, Dewangga Adhyaksa

    2017-03-01

    Indonesia has several potential ocean energies to utilize. One of them is tidal wave energy, which the potential is about 49 GW. To convert the tidal wave energy to electricity, linear permanent magnet generator (LPMG) is considered as the best appliance. In this paper, a pico-scale tidal wave power converter was designed using quasi-flat LPMG. The generator was meant to be applied in southern coast of Yogyakarta, Indonesia and was expected to generate 1 kW output. First, a quasi-flat LPMG was designed based on the expected output power and the wave characteristic at the placement site. The design was then simulated using finite element software of FEMM. Finally, the output values were calculated and the output characteristics were analyzed. The results showed that the designed power plant was able to produce output power of 725.78 Wp for each phase, with electrical efficiency of 64.5%. The output characteristics of the LPMG: output power would increase as the average wave height or wave period increases. Besides, the efficiency would increase if the external load resistance increases. Meanwhile the output power of the generator would be maximum at load resistance equals 11 Ω.

  14. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package.

    Science.gov (United States)

    Borbulevych, Oleg Y; Plumley, Joshua A; Martin, Roger I; Merz, Kenneth M; Westerhoff, Lance M

    2014-05-01

    Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein-ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.

  15. Large-scale fabrication of linear low density polyethylene/layered double hydroxides composite films with enhanced heat retention, thermal, mechanical, optical and water vapor barrier properties

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Jiazhuo; Zhang, Kun; Zhao, Qinghua [College of Chemistry and Material Science, Shandong Agricultural University, 61 Daizong Street, Tai'an 271018 (China); Wang, Qingguo, E-mail: wqgyyy@126.com [College of Food Science and Engineering, Shandong Agricultural University, 61 Daizong Street, Tai'an 271018 (China); Xu, Jing, E-mail: jiaxu@sdau.edu.cn [College of Chemistry and Material Science, Shandong Agricultural University, 61 Daizong Street, Tai'an 271018 (China)

    2016-11-15

    A novel LDH intercalated with an organic aliphatic long-chain anion was synthesized on a large scale by one-pot high-energy ball milling. Linear low density polyethylene (LLDPE)/layered double hydroxides (LDH) composite films with enhanced heat retention, thermal, mechanical, optical and water vapor barrier properties were fabricated by melt blending and a blowing process. FT-IR, XRD and SEM results show that the LDH particles were dispersed uniformly in the LLDPE composite films. In particular, the LLDPE composite film with 1% LDH exhibited the optimal performance among all the composite films, with a 60.36% enhancement in the water vapor barrier property and a 45.73 °C increase in the temperature of maximum mass loss rate compared with the pure LLDPE film. Furthermore, the improved infrared absorbance (1180–914 cm−1) of the LLDPE/LDH films revealed a significant enhancement of heat retention. Therefore, this study supports the application of LLDPE/LDH films as agricultural films with superior heat retention. - Graphical abstract: The fabrication process of LLDPE/LDH composite films. - Highlights: • LDH with a basal spacing of 4.07 nm was synthesized by high-energy ball milling. • LLDPE composite films with homogeneous LDH dispersion were fabricated. • The properties of the LLDPE/LDH composite films were improved. • LLDPE/LDH composite films show superior heat retention.

  16. Accurate macromolecular crystallographic refinement: incorporation of the linear scaling, semiempirical quantum-mechanics program DivCon into the PHENIX refinement package

    Energy Technology Data Exchange (ETDEWEB)

    Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I. [QuantumBio Inc., 2790 West College Avenue, State College, PA 16801 (United States); Merz, Kenneth M. Jr [University of Florida, Gainesville, Florida (United States); Westerhoff, Lance M., E-mail: lance@quantumbioinc.com [QuantumBio Inc., 2790 West College Avenue, State College, PA 16801 (United States)

    2014-05-01

    Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described. Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein–ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.

  17. Large-scale fabrication of linear low density polyethylene/layered double hydroxides composite films with enhanced heat retention, thermal, mechanical, optical and water vapor barrier properties

    International Nuclear Information System (INIS)

    Xie, Jiazhuo; Zhang, Kun; Zhao, Qinghua; Wang, Qingguo; Xu, Jing

    2016-01-01

    A novel LDH intercalated with an organic aliphatic long-chain anion was synthesized on a large scale by one-pot high-energy ball milling. Linear low density polyethylene (LLDPE)/layered double hydroxides (LDH) composite films with enhanced heat retention, thermal, mechanical, optical and water vapor barrier properties were fabricated by melt blending and a blowing process. FT-IR, XRD and SEM results show that the LDH particles were dispersed uniformly in the LLDPE composite films. In particular, the LLDPE composite film with 1% LDH exhibited the optimal performance among all the composite films, with a 60.36% enhancement in the water vapor barrier property and a 45.73 °C increase in the temperature of maximum mass loss rate compared with the pure LLDPE film. Furthermore, the improved infrared absorbance (1180–914 cm−1) of the LLDPE/LDH films revealed a significant enhancement of heat retention. Therefore, this study supports the application of LLDPE/LDH films as agricultural films with superior heat retention. - Graphical abstract: The fabrication process of LLDPE/LDH composite films. - Highlights: • LDH with a basal spacing of 4.07 nm was synthesized by high-energy ball milling. • LLDPE composite films with homogeneous LDH dispersion were fabricated. • The properties of the LLDPE/LDH composite films were improved. • LLDPE/LDH composite films show superior heat retention.

  18. Quantifying streamflow change caused by forest disturbance at a large spatial scale: A single watershed study

    Science.gov (United States)

    Wei, Xiaohua; Zhang, Mingfang

    2010-12-01

    Climatic variability and forest disturbance are commonly recognized as two major drivers influencing streamflow change in large-scale forested watersheds. The greatest challenge in evaluating quantitative hydrological effects of forest disturbance is the removal of climatic effect on hydrology. In this paper, a method was designed to quantify respective contributions of large-scale forest disturbance and climatic variability on streamflow using the Willow River watershed (2860 km2) located in the central part of British Columbia, Canada. Long-term (>50 years) data on hydrology, climate, and timber harvesting history represented by equivalent clear-cutting area (ECA) were available to discern climatic and forestry influences on streamflow by three steps. First, effective precipitation, an integrated climatic index, was generated by subtracting evapotranspiration from precipitation. Second, modified double mass curves were developed by plotting accumulated annual streamflow against annual effective precipitation, which presented a much clearer picture of the cumulative effects of forest disturbance on streamflow following removal of climatic influence. The average annual streamflow changes that were attributed to forest disturbances and climatic variability were then estimated to be +58.7 and -72.4 mm, respectively. The positive (increasing) and negative (decreasing) values in streamflow change indicated opposite change directions, which suggest an offsetting effect between forest disturbance and climatic variability in the study watershed. Finally, a multivariate Autoregressive Integrated Moving Average (ARIMA) model was generated to establish quantitative relationships between accumulated annual streamflow deviation attributed to forest disturbances and annual ECA. The model was then used to project streamflow change under various timber harvesting scenarios. The methodology can be effectively applied to any large-scale single watershed where long-term data (>50
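
    A schematic illustration of the modified double mass curve step described above: cumulative annual streamflow is plotted against cumulative annual effective precipitation, a line fitted to an assumed undisturbed reference period is extrapolated, and the accumulated deviation from that line is read as the disturbance-related streamflow change. The series below are synthetic, not the Willow River data.

```python
# Modified double mass curve on synthetic data: cumulative streamflow vs cumulative
# effective precipitation, with a line fitted to an assumed pre-disturbance period;
# departures from that line are read as the disturbance-related streamflow change.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1955, 2005)
eff_precip = rng.normal(550.0, 60.0, years.size)              # effective precipitation (mm)
flow = 0.55 * eff_precip + rng.normal(0.0, 15.0, years.size)  # annual streamflow (mm)
flow[years >= 1980] += 40.0                                   # imposed post-1980 disturbance effect

cum_p, cum_q = np.cumsum(eff_precip), np.cumsum(flow)
ref = years < 1980                                            # assumed undisturbed reference period
slope, intercept = np.polyfit(cum_p[ref], cum_q[ref], 1)
deviation = cum_q - (slope * cum_p + intercept)               # accumulated streamflow deviation

post = years[1:] >= 1980
print(f"mean annual deviation after 1980: {np.mean(np.diff(deviation)[post]):.1f} mm/yr")
```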

  19. Linear colliders - prospects 1985

    International Nuclear Information System (INIS)

    Rees, J.

    1985-06-01

    We discuss the scaling laws of linear colliders and their consequences for accelerator design. We then report on the SLAC Linear Collider project and comment on experience gained on that project and its application to future colliders. 9 refs., 2 figs

  20. Defining the minimal detectable change in scores on the eight-item Morisky Medication Adherence Scale.

    Science.gov (United States)

    Muntner, Paul; Joyce, Cara; Holt, Elizabeth; He, Jiang; Morisky, Donald; Webber, Larry S; Krousel-Wood, Marie

    2011-05-01

    Self-report scales are used to assess medication adherence. Data on how to discriminate change in self-reported adherence over time from random variability are limited. To determine the minimal detectable change for scores on the 8-item Morisky Medication Adherence Scale (MMAS-8). The MMAS-8 was administered twice, using a standard telephone script, with administration separated by 14-22 days, to 210 participants taking antihypertensive medication in the CoSMO (Cohort Study of Medication Adherence among Older Adults). MMAS-8 scores were calculated and participants were grouped into previously defined categories (<6, 6 to <8, and 8 for low, medium, and high adherence). The mean (SD) age of participants was 78.1 (5.8) years, 43.8% were black, and 68.1% were women. Overall, 8.1% (17/210), 16.2% (34/210), and 51.0% (107/210) of participants had low, medium, and high MMAS-8 scores, respectively, at both survey administrations (overall agreement 75.2%; 158/210). The weighted κ statistic was 0.63 (95% CI 0.53 to 0.72). The intraclass correlation coefficient was 0.78. The within-person standard error of the mean for change in MMAS-8 scores was 0.81, which equated to a minimal detectable change of 1.98 points. Only 4.3% (9/210) of the participants had a change in MMAS-8 of 2 or more points between survey administrations. Within-person changes in MMAS-8 scores of 2 or more points over time may represent a real change in antihypertensive medication adherence.
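
    A common psychometric formulation relates the standard error of measurement to the minimal detectable change as MDC95 = 1.96 × √2 × SEM; the sketch below applies it using the ICC reported above and an assumed score SD. This is a generic convention and may not match the exact computation used in the study.

```python
# Generic MDC95 calculation from test-retest reliability: SEM = SD * sqrt(1 - ICC),
# MDC95 = 1.96 * sqrt(2) * SEM. The ICC comes from the abstract; the score SD is an
# assumed value, and the study's own computation may differ from this convention.
import math

icc = 0.78    # test-retest intraclass correlation coefficient (from the abstract)
sd = 1.5      # between-person SD of MMAS-8 scores -- assumed, not reported above

sem = sd * math.sqrt(1.0 - icc)          # standard error of measurement
mdc95 = 1.96 * math.sqrt(2.0) * sem      # smallest change unlikely to be measurement noise
print(f"SEM = {sem:.2f} points, MDC95 = {mdc95:.2f} points")
```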

  1. The effect of millennial-scale changes in Arabian Sea denitrification on atmospheric CO2

    International Nuclear Information System (INIS)

    Altabet, M.A.; Higginson, M.J.; Murray, D.W.

    2002-01-01

    Most global biogeochemical processes are known to respond to climate change, some of which have the capacity to produce feedbacks through the regulation of atmospheric greenhouse gases. Marine denitrification - the reduction of nitrate to gaseous nitrogen - is an important process in this regard, affecting greenhouse gas concentrations directly through the incidental production of nitrous oxide, and indirectly through modification of the marine nitrogen inventory and hence the biological pump for CO2. Although denitrification has been shown to vary with glacial-interglacial cycles, its response to more rapid climate change has not yet been well characterized. Here we present nitrogen isotope ratio, nitrogen content and chlorin abundance data from sediment cores with high accumulation rates on the Oman continental margin that reveal substantial millennial-scale variability in Arabian Sea denitrification and productivity during the last glacial period. The detailed correspondence of these changes with Dansgaard-Oeschger events recorded in Greenland ice cores indicates rapid, century-scale reorganization of the Arabian Sea ecosystem in response to climate excursions, mediated through the intensity of summer monsoonal upwelling. Considering the several-thousand-year residence time of fixed nitrogen in the ocean, the response of global marine productivity to changes in denitrification would have occurred at lower frequency and appears to be related to climatic and atmospheric CO2 oscillations observed in Antarctic ice cores between 20 and A kyr ago. (author)

  2. A biopsychosocial investigation of changes in self-concept on the Head Injury Semantic Differential Scale.

    Science.gov (United States)

    Reddy, Avneel; Ownsworth, Tamara; King, Joshua; Shields, Cassandra

    2017-12-01

    This study aimed to investigate the influence of the "good-old-days" bias, neuropsychological functioning and cued recall of life events on self-concept change. Forty-seven adults with TBI (70% male, 1-5 years post-injury) and 47 matched controls rated their past and present self-concept on the Head Injury Semantic Differential Scale (HISD) III. TBI participants also completed a battery of neuropsychological tests. The matched control group of 47 were drawn from a sample of 78 uninjured participants who were randomised to complete either the Social Readjustment Rating Scale-Revised (cued recall) or the HISD (non-cued recall) first. Consistent with the good-old-days bias, participants with TBI rated their pre-injury self-concept as more positive than their present self-concept and the present self-concept of controls. More negative self-concept ratings were related to lower estimated premorbid IQ and poorer verbal fluency and delayed memory. The cued-recall group rated their self-concept as significantly more negative than the non-cued group, indicating that contextual cues can influence self-concept change by affecting retrospective ratings of past self-concept. Further research is needed to investigate the impact of contextual cues on self-concept change after TBI.

  3. Soil organic matter change - analysis on a regional scale of Austria

    Science.gov (United States)

    Gruendling, Ralf; Franko, Uwe; Sedy, Katrin; Freudenschuß, Alexandra; Spiegel, Adelheid; Formayer, Herbert

    2014-05-01

    Soil organic matter (SOM) is an important resource in agriculture. It influences soil fertility and erosion processes and prevents soil degradation. However, SOM is strongly affected by climate change, soil conditions and management alterations. The presented study analyzes SOM changes in Austria on a regional scale in the "Marchfeld" and the "Muehlviertel". To quantify these SOM changes, the CCB (Candy Carbon Balance) model was used. Based on a 1 square kilometer raster, the impact of specific site conditions on SOM is determined to characterize the study areas. The main indicator used for these conditions is the biologic active time (BAT). BAT describes the biologic activity for carbon cycling in top soils depending on soil and climatic conditions. High values of BAT indicate fast SOM reproduction rates. Hence, BAT changes over recent years signal the risk of SOM loss and can be used as an on-farm decision tool. The change in the risk of SOM loss due to climate change is assessed by model results. Therefore, three climate scenarios are used to compute reproduction rates of SOM. "High-risk regions" can be identified for policy consulting. Different climate scenarios can help to develop best case and worst case results. First results show that the region "Marchfeld" had a higher change in BAT during the last two decades compared to the "Muehlviertel". A higher risk of SOM loss is evident. Nevertheless, future scenarios predict a higher change of BAT for the "Muehlviertel". Apparently, the sensitivity of the "Marchfeld" sites with regard to climate change has been higher in the past and most BAT changes have taken place already. With this method an evaluation of farm management with regard to SOM reproduction and a recommendation of crop rotations for the future are possible. In conclusion, the aim of the project is a tool box for farmers and policy makers to evaluate present and future agricultural management. An examination of additional regions in Austria is planned.

  4. Political discourse and climate change: the challenge of reconciling scale of impact with level of governance

    International Nuclear Information System (INIS)

    Lindseth, Gard

    2006-04-01

    The politics of climate change is viewed through a discourse perspective. Central to this perspective's understanding of the environment is that the lack of urgency about the problem cannot be attributed to the nature of the climate problem and human beings alone. Environmental problems are subject to discursive struggles. The concept of discourse analysis is not discussed in relation to other, related terms, but used in a pragmatic way, aiming to advance insights about the processes under study. Two main, competing perspectives are identified: 'National Action' and 'Thinking Globally'. The findings are foremost valid for the Norwegian context, although different aspects of the climate issue have broader implications. Two central contributions to the field of climate politics are put forth: Firstly, viewing climate change controversies in terms of 'scales' is an important asset to literature in the field. The understanding of 'scale' adopted is fluid and procedural, a concept that is socially constructed. In climate politics there is no perfect fit between the ecological dimensions of climate change and the institutional dimensions of the problem. The studies show how climate change as a political problem belongs to the local, regional, national, or global scales. It is argued that we misunderstand politics if we make clear distinctions between local or global politics. It is concluded that local and national actors have up-scaled the climate issue, seeing the climate issue as a global problem requiring global solutions, instead of local or national concerns. Second, and related to the first point, the way of viewing climate change as a global issue in a national or local context has consequences for the policy solutions that can be sought. The idea of thinking globally might work to distract attention from how actors at the different levels of governance can make a contribution to climate governance. A broader discussion about climate change as a concerted

  5. Political discourse and climate change: the challenge of reconciling scale of impact with level of governance

    Energy Technology Data Exchange (ETDEWEB)

    Lindseth, Gard

    2006-04-15

    The politics of climate change is viewed through a discourse perspective. Central to this perspective's understanding of the environment is that the lack of urgency about the problem cannot be attributed to the nature of the climate problem and human beings alone. Environmental problems are subject to discursive struggles. The concept of discourse analysis is not discussed in relation to other, related terms, but used in a pragmatic way, aiming to advance insights about the processes under study. Two main, competing perspectives are identified: 'National Action' and 'Thinking Globally'. The findings are foremost valid for the Norwegian context, although different aspects of the climate issue have broader implications. Two central contributions to the field of climate politics are put forth: Firstly, viewing climate change controversies in terms of 'scales' is an important asset to literature in the field. The understanding of 'scale' adopted is fluid and procedural, a concept that is socially constructed. In climate politics there is no perfect fit between the ecological dimensions of climate change and the institutional dimensions of the problem. The studies show how climate change as a political problem belongs to the local, regional, national, or global scales. It is argued that we misunderstand politics if we make clear distinctions between local or global politics. It is concluded that local and national actors have up-scaled the climate issue, seeing the climate issue as a global problem requiring global solutions, instead of local or national concerns. Second, and related to the first point, the way of viewing climate change as a global issue in a national or local context has consequences for the policy solutions that can be sought. The idea of thinking globally might work to distract attention from how actors at the different levels of governance can make a contribution to climate governance. A broader

  7. Geographic variation in opinions on climate change at state and local scales in the USA

    Science.gov (United States)

    Howe, Peter D.; Mildenberger, Matto; Marlon, Jennifer R.; Leiserowitz, Anthony

    2015-06-01

    Addressing climate change in the United States requires enactment of national, state and local mitigation and adaptation policies. The success of these initiatives depends on public opinion, policy support and behaviours at appropriate scales. Public opinion, however, is typically measured with national surveys that obscure geographic variability across regions, states and localities. Here we present independently validated high-resolution opinion estimates using a multilevel regression and poststratification model. The model accurately predicts climate change beliefs, risk perceptions and policy preferences at the state, congressional district, metropolitan and county levels, using a concise set of demographic and geographic predictors. The analysis finds substantial variation in public opinion across the nation. Nationally, 63% of Americans believe global warming is happening, but county-level estimates range from 43 to 80%, leading to a diversity of political environments for climate policy. These estimates provide an important new source of information for policymakers, educators and scientists to more effectively address the challenges of climate change.
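
    A minimal sketch of the poststratification step in multilevel regression and poststratification (MRP): cell-level predictions from a multilevel model are weighted by population counts to yield a county estimate. The cell probabilities and counts below are hypothetical.

```python
# Poststratification step of MRP for one hypothetical county: cell-level predicted
# probabilities (which would come from a fitted multilevel model) are weighted by
# census counts to give the county estimate. All numbers are hypothetical.
import numpy as np

# (predicted probability of believing global warming is happening, population count)
cells = [
    (0.74, 12000),   # e.g. women, 18-34
    (0.66,  9500),   # e.g. men, 18-34
    (0.61, 15000),   # e.g. women, 35-64
    (0.55, 14000),   # e.g. men, 35-64
    (0.52,  8000),   # e.g. women, 65+
    (0.48,  7000),   # e.g. men, 65+
]
p = np.array([c[0] for c in cells])
n = np.array([c[1] for c in cells])

county_estimate = np.sum(p * n) / np.sum(n)   # population-weighted average of cell predictions
print(f"estimated share believing global warming is happening: {county_estimate:.1%}")
```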

  8. The causality analysis of climate change and large-scale human crisis.

    Science.gov (United States)

    Zhang, David D; Lee, Harry F; Wang, Cong; Li, Baosheng; Pei, Qing; Zhang, Jane; An, Yulun

    2011-10-18

    Recent studies have shown strong temporal correlations between past climate changes and societal crises. However, the specific causal mechanisms underlying this relation have not been addressed. We explored quantitative responses of 14 fine-grained agro-ecological, socioeconomic, and demographic variables to climate fluctuations from A.D. 1500-1800 in Europe. Results show that cooling from A.D. 1560-1660 caused successive agro-ecological, socioeconomic, and demographic catastrophes, leading to the General Crisis of the Seventeenth Century. We identified a set of causal linkages between climate change and human crisis. Using temperature data and climate-driven economic variables, we simulated the alternation of defined "golden" and "dark" ages in Europe and the Northern Hemisphere during the past millennium. Our findings indicate that climate change was the ultimate cause, and climate-driven economic downturn was the direct cause, of large-scale human crises in preindustrial Europe and the Northern Hemisphere.

  9. Tracking global change at local scales: Phenology for science, outreach, conservation

    Science.gov (United States)

    Sharron, Ed; Mitchell, Brian

    2011-06-01

    A Workshop Exploring the Use of Phenology Studies for Public Engagement; New Orleans, Louisiana, 14 March 2011. During a George Wright Society Conference session that was led by the USA National Phenology Network (USANPN; http://www.usanpn.org) and the National Park Service (NPS), professionals from government organizations, nonprofits, and higher-education institutions came together to explore the possibilities of using phenology monitoring to engage the public. One of the most visible effects of global change on ecosystems is a shift in phenology: the timing of biological events such as leafing and flowering, maturation of agricultural plants, emergence of insects, and migration of birds. These shifts are already occurring and reflect biological responses to climate change at local to regional scales. Changes in phenology have important implications for species ecology and resource management and, because they are place-based and tangible, serve as an ideal platform for education, outreach, and citizen science.

  10. Estimating temporal changes in soil carbon stocks at ecoregional scale in Madagascar using remote-sensing

    Science.gov (United States)

    Grinand, C.; Maire, G. Le; Vieilledent, G.; Razakamanarivo, H.; Razafimbelo, T.; Bernoux, M.

    2017-02-01

    Soil organic carbon (SOC) plays an important role in climate change regulation, notably through the release of CO2 following land use change such as deforestation, but data on stock change levels are lacking. This study aims to empirically assess changes in SOC stocks between 1991 and 2011 at the landscape scale using easy-to-access, spatially explicit environmental factors. The study area was located in southeast Madagascar, in a region that exhibits a very high rate of deforestation and is characterized by both humid and dry climates. We estimated SOC stocks on 0.1 ha plots at 95 different locations in a 43,000 ha reference area covering both dry and humid conditions and representing different land covers including natural forest, cropland, pasture and fallows. We used the Random Forest algorithm to identify the environmental factors explaining the spatial distribution of SOC. We then predicted SOC stocks for two soil layers, at 30 cm and 100 cm, over a wider area of 395,000 ha. By changing the soil and vegetation indices derived from remote sensing images we were able to produce SOC maps for 1991 and 2011. Those estimates and their related uncertainties were combined in a post-processing step to map significant SOC variations, and we finally compared the SOC change map with published deforestation maps. Results show that geologic variables, precipitation, temperature, and soil-vegetation status were strong predictors of SOC distribution at the regional scale. We estimated an average net loss of 10.7% and 5.2% for the 30 cm and 100 cm layers, respectively, for deforested areas in the humid region. Our results also suggest that these losses occur within the first five years following deforestation. No significant variations were observed for the dry region. This study provides new solutions and knowledge for a better integration of soil threats and opportunities in land management policies.
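
    A compact sketch of the mapping workflow described above, assuming a scikit-learn Random Forest as the regression engine: the model is trained on plot-level SOC stocks and environmental covariates, then applied to covariate grids in which only the remote-sensing index differs between 1991 and 2011, and the predictions are differenced to obtain a change map. All arrays are synthetic stand-ins for the study's data.

```python
# Train a Random Forest on plot SOC stocks and environmental covariates, then apply it
# to covariate grids for two dates in which only the remote-sensing index changes, and
# difference the predictions to obtain a change map. All data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_plots = 95
X_train = np.column_stack([
    rng.uniform(0, 1, n_plots),        # geology class (encoded) -- assumed covariate
    rng.uniform(1000, 2500, n_plots),  # precipitation (mm)
    rng.uniform(18, 26, n_plots),      # temperature (deg C)
    rng.uniform(0.1, 0.9, n_plots),    # vegetation index (e.g. NDVI) -- time-varying
])
soc = 20 + 60 * X_train[:, 3] + rng.normal(0, 5, n_plots)     # plot SOC stock, t C/ha (synthetic)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, soc)

grid_static = np.column_stack([rng.uniform(0, 1, 1000),
                               rng.uniform(1000, 2500, 1000),
                               rng.uniform(18, 26, 1000)])     # static covariates, both dates
ndvi_1991 = rng.uniform(0.4, 0.9, 1000)
ndvi_2011 = ndvi_1991 - rng.uniform(0.0, 0.3, 1000)            # degradation scenario

soc_1991 = rf.predict(np.column_stack([grid_static, ndvi_1991]))
soc_2011 = rf.predict(np.column_stack([grid_static, ndvi_2011]))
change = soc_2011 - soc_1991
print(f"mean relative SOC change: {100 * change.sum() / soc_1991.sum():.1f}%")
```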

  11. Efficiency scale and technological change in credit unions and multiple banks using the COSIF

    Directory of Open Access Journals (Sweden)

    Wanderson Rocha Bittencourt

    2016-08-01

    Full Text Available The modernization of the financial intermediation process and the adaptation to new technologies brought adjustments to operational processes, reducing information and borrowing costs, generating greater customer satisfaction through increased competitiveness, and yielding efficiency gains over the long run. In this context, this research aims to analyze the evolution of the scale and technological efficiency of credit unions and multiple banks from 2009 to 2013. We used Data Envelopment Analysis (DEA), which allows the change in the efficiency of institutions to be calculated through the Malmquist Index. The results indicated that institutions employing larger volumes of assets in the composition of their resources showed gains in scale and technological efficiency, influencing the change in total factor productivity. It should be noted that in some years the cooperatives showed greater advances in technological and scale efficiency than the banks. However, this result can be explained by the fact that the average efficiency of the credit unions was lower than that of the banks in the analyzed sample, indicating a greater need for the cooperatives to improve their internal processes compared with the multiple banks surveyed.
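
    A hedged sketch of the efficiency-change calculation named above: an input-oriented, constant-returns DEA efficiency score obtained by linear programming, combined across two periods into a Malmquist productivity index. The input/output variables and all numbers are hypothetical; the study's exact DEA specification is not given in the abstract.

```python
# Input-oriented, constant-returns DEA efficiency via linear programming, combined
# into a Malmquist productivity index between two periods (>1 indicates a gain).
# Inputs, outputs and all values are hypothetical.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X_front, Y_front, x0, y0):
    """CCR input-oriented efficiency of unit (x0, y0) against the frontier defined by
    X_front (inputs x units) and Y_front (outputs x units)."""
    m, n = X_front.shape
    s = Y_front.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    A_in = np.c_[-x0.reshape(-1, 1), X_front]      # X lam <= theta * x0
    A_out = np.c_[np.zeros((s, 1)), -Y_front]      # Y lam >= y0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -y0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

def malmquist(X1, Y1, X2, Y2, j):
    """Malmquist index of unit j between periods 1 and 2."""
    d11 = dea_efficiency(X1, Y1, X1[:, j], Y1[:, j])   # own point, own frontier
    d12 = dea_efficiency(X1, Y1, X2[:, j], Y2[:, j])   # period-2 point, period-1 frontier
    d21 = dea_efficiency(X2, Y2, X1[:, j], Y1[:, j])
    d22 = dea_efficiency(X2, Y2, X2[:, j], Y2[:, j])
    return np.sqrt((d12 / d11) * (d22 / d21))

# 4 hypothetical institutions, 2 inputs (funding cost, administrative expense), 1 output (loans).
X1 = np.array([[4.0, 6.0, 5.0, 8.0], [2.0, 3.0, 2.5, 4.0]])
Y1 = np.array([[10.0, 12.0, 11.0, 14.0]])
X2 = np.array([[3.8, 5.5, 5.0, 7.5], [1.9, 2.8, 2.4, 3.9]])
Y2 = np.array([[11.0, 13.0, 11.5, 15.0]])
print([round(malmquist(X1, Y1, X2, Y2, j), 3) for j in range(4)])
```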

  12. A Scale-Explicit Framework for Conceptualizing the Environmental Impacts of Agricultural Land Use Changes

    Directory of Open Access Journals (Sweden)

    Iago Lowe Hale

    2014-11-01

    Full Text Available Demand for locally-produced food is growing in areas outside traditionally dominant agricultural regions due to concerns over food safety, quality, and sovereignty; rural livelihoods; and environmental integrity. Strategies for meeting this demand rely upon agricultural land use change, in various forms of either intensification or extensification (converting non-agricultural land, including native landforms, to agricultural use). The nature and extent of the impacts of these changes on non-food-provisioning ecosystem services are determined by a complex suite of scale-dependent interactions among farming practices, site-specific characteristics, and the ecosystem services under consideration. Ecosystem modeling strategies which honor such complexity are often impenetrable by non-experts, resulting in a prevalent conceptual gap between ecosystem sciences and the field of sustainable agriculture. Referencing heavily forested New England as an example, we present a conceptual framework designed to synthesize and convey understanding of the scale- and landscape-dependent nature of the relationship between agriculture and various ecosystem services. By accounting for the total impact of multiple disturbances across a landscape while considering the effects of scale, the framework is intended to stimulate and support the collaborative efforts of land managers, scientists, citizen stakeholders, and policy makers as they address the challenges of expanding local agriculture.

  13. Grassland/atmosphere response to changing climate: Coupling regional and local scales

    International Nuclear Information System (INIS)

    Coughenour, M.B.; Kittel, T.G.F.; Pielke, R.A.; Eastman, J.

    1993-10-01

    The objectives of the study were: to evaluate the response of grassland ecosystems to atmospheric change at regional and site scales, and to develop multiscaled modeling systems to relate ecological and atmospheric models with different spatial and temporal resolutions. A menu-driven shell was developed to facilitate use of models at different temporal scales and to facilitate the exchange of information between models at different temporal scales. A detailed ecosystem model predicted that C3 temperate grasslands will respond more strongly to elevated CO2 than temperate C4 grasslands in the short term, while a large positive NPP response was predicted for a C4 Kenyan grassland. Long-term climate change scenarios produced either decreases or increases in Colorado plant productivity (NPP) depending on rainfall, but uniform increases in NPP were predicted in Kenya. Elevated CO2 is likely to have little effect on ecosystem carbon storage in Colorado while it will increase carbon storage in Kenya. A synoptic climate classification processor (SCP) was developed to evaluate results of GCM climate sensitivity experiments. Roughly 80% agreement was achieved with manual classifications. Comparison of 1x and 2xCO2 GCM simulations revealed relatively small differences

  14. Scale orientated analysis of river width changes due to extreme flood hazards

    Directory of Open Access Journals (Sweden)

    G. Krapesch

    2011-08-01

    Full Text Available This paper analyses the morphological effects of extreme floods (recurrence interval >100 years) and examines which parameters best describe the width changes due to erosion, based on 5 affected alpine gravel bed rivers in Austria. The research was based on vertical aerial photos of the rivers before and after extreme floods, hydrodynamic numerical models and cross sectional measurements supported by LiDAR data of the rivers. Average width ratios (width after/before the flood) were calculated and correlated with different hydraulic parameters (specific stream power, shear stress, flow area, specific discharge). Depending on the geomorphological boundary conditions of the different rivers, a mean width ratio between 1.12 (Lech River) and 3.45 (Trisanna River) was determined on the reach scale. The specific stream power (SSP) best predicted the mean width ratios of the rivers, especially on the reach scale and sub-reach scale. On the local scale more parameters have to be considered to define the "minimum morphological spatial demand of rivers", which is a crucial parameter for addressing and managing flood hazards and should be used in hazard zone plans and spatial planning.
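
    A small worked example of the reach-scale relationship described above: width ratios regressed against specific stream power with a simple linear fit. The values are illustrative, not the measurements from the five Austrian rivers.

```python
# Reach-scale relationship between width ratio (post-/pre-flood) and specific stream
# power, fitted with ordinary least squares. Values are illustrative only.
import numpy as np

ssp = np.array([80.0, 150.0, 220.0, 310.0, 420.0, 560.0])  # specific stream power (W/m^2)
width_ratio = np.array([1.1, 1.4, 1.8, 2.2, 2.9, 3.4])     # width after / width before flood

slope, intercept = np.polyfit(ssp, width_ratio, 1)
pred = slope * ssp + intercept
r2 = 1 - np.sum((width_ratio - pred)**2) / np.sum((width_ratio - width_ratio.mean())**2)
print(f"width_ratio ~ {intercept:.2f} + {slope:.4f} * SSP, R^2 = {r2:.2f}")
```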

  15. Change of diamond film structure and morphology with N2 addition in MW PECVD apparatus with linear antenna delivery system

    Czech Academy of Sciences Publication Activity Database

    Jakl Krečmarová, Marie; Petrák, Václav; Taylor, Andrew; Sankaran, K. J.; Lin, I. N.; Jäger, Aleš; Gärtnerová, Viera; Fekete, Ladislav; Drahokoupil, Jan; Laufek, František; Vacík, Jiří; Hubík, Pavel; Mortet, Vincent; Nesladek, M.

    2014-01-01

    Roč. 211, č. 10 (2014), s. 2296-2301 ISSN 1862-6300 R&D Projects: GA ČR GA13-31783S; GA MŠk(CZ) LM2011026; GA MŠk(XE) LM2011019 Grant - others:OP VK(XE) CZ.1.07/2.3.00/20.0306; AV ČR(CZ) Fellowship J. E. Purkyně Institutional support: RVO:68378271 ; RVO:61389005 Keywords : linear antenna * nano-diamond * nitrogen doping * TEM * Raman spectroscopy Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.616, year: 2014 http://onlinelibrary.wiley.com/doi/10.1002/pssa.201431255/full

  16. Linear-chain model to explain density of states and Tsub(c) changes with atomic ordering

    International Nuclear Information System (INIS)

    Junod, A.

    1978-01-01

    The effect of long-range atomic order on the electronic density of states has been recalculated for the A15-type structure within the linear-chain model. It is found that a defect concentration c reduces the density of states at the Fermi level by a factor (1 + c/c0)(c/c0)^(-3)[ln(1 + c/c0)]^3. This result is in qualitative agreement with experimental data on the specific heat, magnetic susceptibility and superconducting transition temperature of V3Au. (author)
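
    The quoted reduction factor can be evaluated directly; the short sketch below tabulates it for a few defect concentrations (in units of c0), confirming that it tends to 1 for small c and falls below 1 as disorder increases.

```python
# Evaluate the quoted reduction factor of the Fermi-level density of states as a
# function of defect concentration, expressed in units of the constant c0:
#     f(x) = (1 + x) * x**(-3) * (ln(1 + x))**3,  x = c/c0,  f -> 1 as x -> 0.
import numpy as np

def dos_reduction(c_over_c0):
    x = np.asarray(c_over_c0, dtype=float)
    return (1.0 + x) * x**-3 * np.log1p(x)**3

for x in (0.01, 0.1, 0.5, 1.0, 2.0):
    print(f"c/c0 = {x:>4}: N(EF)/N0(EF) = {dos_reduction(x):.3f}")
```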

  17. Collision recognition and direction changes for small scale fish robots by acceleration sensors

    Science.gov (United States)

    Na, Seung Y.; Shin, Daejung; Kim, Jin Y.; Lee, Bae-Ho

    2005-05-01

    Typical obstacles are walls, rocks, water plants and other nearby robots for a group of small scale fish robots and submersibles that have been constructed in our lab. Sonar sensors are not employed, in order to keep the robot structure simple. All circuits, sensors and processor cards except the motors, fins and external covers are contained in a box of 9 x 7 x 4 cm. Therefore, image processing results are applied to avoid collisions. However, this is useful only when the obstacles are located far enough away to give image processing time to detect them. Otherwise, acceleration sensors are used to detect a collision immediately after it happens. Two 2-axis acceleration sensors are employed to measure the three components of collision angles, collision magnitudes, and the angles of robot propulsion. These data are integrated to calculate the amount of propulsion direction change. The angle of incidence of a collision with an obstacle is the fundamental value for obtaining the direction change needed to design a following path. But there is a significant amount of noise due to the caudal fin motor. Because the caudal fin provides the main propulsion for a fish robot, there is a periodic swinging noise at the head of the robot. This noise adds a random acceleration component to the measured acceleration data at the collision. We propose an algorithm which shows that MEMS-type accelerometers are very effective in providing information for direction changes, in spite of the intrinsic noise, after the small scale fish robots have collided with an obstacle.
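
    A hedged sketch of the kind of processing described above: the two horizontal acceleration components are low-pass filtered to suppress the periodic caudal-fin noise, a collision is flagged when the filtered magnitude exceeds a threshold, and the impact angle follows from atan2. The filter constant, threshold and synthetic trace are assumptions, not the authors' algorithm.

```python
# Collision detection from two horizontal acceleration components: an exponential
# moving average suppresses the periodic caudal-fin noise, a collision is flagged when
# the filtered magnitude exceeds a threshold, and atan2 gives the impact direction.
# Filter constant, threshold and the synthetic trace are assumed values.
import math

def ema(samples, alpha=0.2):
    """Exponential moving average used here as a simple low-pass filter."""
    out, s = [], samples[0]
    for x in samples:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

def detect_collision(ax, ay, threshold=1.0):
    """Return (sample index, impact angle in degrees) for the first filtered sample
    whose magnitude exceeds the threshold (m/s^2), or None if no collision is seen."""
    fx, fy = ema(ax), ema(ay)
    for i, (x, y) in enumerate(zip(fx, fy)):
        if math.hypot(x, y) > threshold:
            return i, math.degrees(math.atan2(y, x))
    return None

# Synthetic trace: small oscillating fin noise, then a frontal-left impact.
ax = [0.3 * math.sin(0.8 * i) for i in range(40)] + [-4.0, -3.5, -1.0]
ay = [0.2 * math.cos(0.8 * i) for i in range(40)] + [2.0, 1.5, 0.5]
hit = detect_collision(ax, ay)
if hit is not None:
    idx, angle = hit
    print(f"collision at sample {idx}, impact direction {angle:.0f} deg -> turn away from it")
```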

  18. Proportional and scale change models to project failures of mechanical components with applications to space station

    Science.gov (United States)

    Taneja, Vidya S.

    1996-01-01

    In this paper we develop the mathematical theory of proportional and scale change models to perform reliability analysis. The results obtained will be applied to the Reaction Control System (RCS) thruster valves on an orbiter. With the advent of extended EVAs associated with PROX OPS (ISSA & MIR), and docking, the loss of a thruster valve now takes on an expanded safety significance. Previous studies assume a homogeneous population of components, with each component having the same failure rate. However, as various components experience different stresses and are exposed to different environments, their failure rates change with time. In this paper we model the reliability of the thruster valves by treating them as a censored repairable system. The model for each valve takes the form of a nonhomogeneous process with an intensity function that is treated either as a proportional hazard model or as a scale change random effects hazard model. Each component has an associated z, an independent realization of the random variable Z from a distribution G(z). This unobserved quantity z can be used to describe heterogeneity systematically. For the various models, methods for estimating the model parameters using censored data will be developed. Available field data (from previously flown flights) are from non-renewable systems. The estimated failure rate using such data will need to be modified for renewable systems such as the thruster valves.
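
    A hedged sketch of the modelling idea: each valve carries an unobserved multiplicative random effect z that scales a nonhomogeneous Poisson intensity. The power-law baseline and the gamma frailty distribution below are assumptions for illustration, not the paper's fitted model.

```python
# Each valve i is given an unobserved multiplicative random effect z_i that scales a
# nonhomogeneous Poisson intensity, lambda_i(t) = z_i * a * b * t**(b - 1), so the
# expected number of failures by time t is z_i * a * t**b. The power-law baseline and
# gamma frailty below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
a, b = 0.02, 1.4                                  # assumed power-law baseline parameters
z = rng.gamma(shape=4.0, scale=0.25, size=5)      # unit-mean frailties for 5 valves

def expected_failures(t, z_i):
    """Cumulative intensity z_i * a * t**b (expected failures by operating time t)."""
    return z_i * a * t**b

def simulate_failure_times(t_end, z_i):
    """Simulate the NHPP by inverting the normalized cumulative intensity (t/t_end)**b."""
    n = rng.poisson(expected_failures(t_end, z_i))
    u = rng.uniform(0.0, 1.0, n)
    return np.sort(t_end * u**(1.0 / b))

for i, z_i in enumerate(z):
    times = simulate_failure_times(1000.0, z_i)
    print(f"valve {i}: z = {z_i:.2f}, expected failures = {expected_failures(1000.0, z_i):.1f}, "
          f"simulated = {times.size}")
```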

  19. Scaling Quelccaya: Using 3-D Animation and Satellite Data To Visualize Climate Change

    Science.gov (United States)

    Malone, A.; Leich, M.

    2017-12-01

    The near-global glacier retreat of recent decades is among the most convincing evidence for contemporary climate change. The epicenter of this action, however, is often far from population-dense centers. How can a glacier's scale, both physical and temporal, be communicated to those far away? This project, an artist-scientist collaboration, proposes an alternate system for presenting climate change data, designed to evoke a more visceral response through a visual, geospatial, poetic approach. Focusing on the Quelccaya Ice Cap, the world's largest tropical glaciated area located in the Peruvian Andes, we integrate 30 years of satellite imagery and elevation models with 3D animation and gaming software, to bring it into a virtual juxtaposition with a model of the city of Chicago. Using Chicago as a cosmopolitan North American "measuring stick," we apply glaciological models to determine, for instance, the amount of ice that has melted on Quelccaya over the last 30 years and the height to which an equivalent amount of snow would pile up on the city of Chicago (circa 600 feet, higher than the Willis Tower). Placing the two sites in a framework of intimate scale, we present a more imaginative and psychologically-astute manner of portraying the sober facts of climate change, by inviting viewers to learn and consider without inducing fear.

  20. Quantifying anthropogenic contributions to century-scale groundwater salinity changes, San Joaquin Valley, California, USA

    Science.gov (United States)

    Hansen, Jeffrey; Jurgens, Bryant; Fram, Miranda S.

    2018-01-01

    Total dissolved solids (TDS) concentrations in groundwater tapped for beneficial uses (drinking water, irrigation, freshwater industrial) have increased on average by about 100 mg/L over the last 100 years in the San Joaquin Valley, California (SJV). During this period land use in the SJV changed from natural vegetation and dryland agriculture to dominantly irrigated agriculture with growing urban areas. Century-scale salinity trends were evaluated by comparing TDS concentrations and major ion compositions of groundwater from wells sampled in 1910 (Historic) to data from wells sampled in 1993-2015 (Modern). TDS concentrations in subregions of the SJV, the southern (SSJV), western (WSJV), northeastern (NESJV), and southeastern (SESJV), were calculated using a cell-declustering method. TDS concentrations increased in all regions, with the greatest increases found in the SSJV and SESJV. Evaluation of the Modern data from the NESJV and SESJV found higher TDS concentrations in recently recharged (post-1950) groundwater at shallow depths. Bicarbonate showed the greatest increase among major ions, resulting from enhanced silicate weathering due to recharge of irrigation water enriched in CO2 during the growing season. The results of this study demonstrate that large anthropogenic changes to the hydrologic regime, like massive development of irrigated agriculture in semi-arid areas like the SJV, can cause large changes in groundwater quality on a regional scale.
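
    A minimal sketch of the cell-declustering mean mentioned above: wells are binned into equal-area cells, means are taken per cell, and the cell means are averaged so that densely drilled areas do not dominate the regional estimate. Coordinates, TDS values and the cell size are synthetic.

```python
# Cell-declustering mean: wells are binned into equal-area grid cells, a mean TDS is
# computed per cell, and the regional value is the average of the cell means, so that
# densely sampled areas do not dominate. All values and the cell size are synthetic.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 100.0, 300)        # well easting (km)
y = rng.uniform(0.0, 100.0, 300)        # well northing (km)
tds = rng.normal(500.0, 120.0, 300)     # TDS (mg/L)

cell = 20.0                             # cell size (km) -- assumed
keys = zip((x // cell).astype(int), (y // cell).astype(int))

cell_values = {}
for key, v in zip(keys, tds):
    cell_values.setdefault(key, []).append(v)
declustered = np.mean([np.mean(v) for v in cell_values.values()])
print(f"naive mean = {tds.mean():.0f} mg/L, cell-declustered mean = {declustered:.0f} mg/L")
```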

  1. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de [Max Planck Institut für Chemische Energiekonversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F. [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24014 (United States)

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed

  2. Spatial and topographic trends in forest expansion and biomass change, from regional to local scales.

    Science.gov (United States)

    Buma, Brian; Barrett, Tara M

    2015-09-01

    Natural forest growth and expansion are important carbon sequestration processes globally. Climate change is likely to increase forest growth in some regions via CO2 fertilization, increased temperatures, and altered precipitation; however, altered disturbance regimes and climate stress (e.g. drought) will act to reduce carbon stocks in forests as well. Observations of asynchrony in forest change are useful in determining current trends in forest carbon stocks, both in terms of forest density (e.g. Mg ha(-1) ) and spatially (extent and location). Monitoring change in natural (unmanaged) areas is particularly useful, as while afforestation and recovery from historic land use are currently large carbon sinks, the long-term viability of those sinks depends on climate change and disturbance dynamics at their particular location. We utilize a large, unmanaged biome (>135 000 km(2) ) which spans a broad latitudinal gradient to explore how variation in location affects forest density and spatial patterning: the forests of the North American temperate rainforests in Alaska, which store >2.8 Pg C in biomass and soil, equivalent to >8% of the C in contiguous US forests. We demonstrate that the regional biome is shifting; gains exceed losses and are located in different spatio-topographic contexts. Forest gains are concentrated on northerly aspects, lower elevations, and higher latitudes, especially in sheltered areas, whereas loss is skewed toward southerly aspects and lower latitudes. Repeat plot-scale biomass data (n = 759) indicate that within-forest biomass gains outpace losses (live trees >12.7 cm diameter, 986 Gg yr(-1) ) on gentler slopes and in higher latitudes. This work demonstrates that while temperate rainforest dynamics occur at fine spatial scales, the observed patterns of forest expansion and biomass accumulation suggest the potential for relatively rapid biome shifts and biomass changes. © 2015 John Wiley & Sons Ltd.

  3. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)

    KAUST Repository

    Guo, Yang

    2018-01-04

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  4. Communication: An improved linear scaling perturbative triples correction for the domain based local pair-natural orbital based singles and doubles coupled cluster method [DLPNO-CCSD(T)

    KAUST Repository

    Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G.; Minenkov, Yury; Cavallo, Luigi; Neese, Frank

    2018-01-01

    In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).

  5. Sustained Large-Scale Collective Climate Action Supported by Effective Climate Change Education Practice

    Science.gov (United States)

    Niepold, F., III; Crim, H.; Fiorile, G.; Eldadah, S.

    2017-12-01

    Since 2012, the Climate and Energy Literacy community has realized that as cities, nations and the international community seek solutions to global climate change over the coming decades, a more comprehensive, interdisciplinary approach to climate literacy—one that includes economic and social considerations—will play a vital role in knowledgeable planning, decision-making, and governance. City, county and state leaders are now leading the American response to a changing climate by incubating social innovation to prevail in the face of unprecedented change. Cities are beginning to realize the importance of critical investments to support the policies and strategies that will foster the climate literacy necessary for citizens to understand the urgency of climate actions, to succeed in a resilient post-carbon economy, and to develop the related workforce. Over a decade of federal and non-profit climate change education, effective methods have been developed that can support municipalities' educational capabilities for the purpose of strengthening and scaling city, state, business, and education actions designed to sustain and effectively address this significant social change. Looking to foster the effective and innovative strategies that will enable their communities, several networks have collaborated to identify recommendations for effective education and communication practices when working with different types of audiences. The U.S. National Science Foundation-funded Climate Change Education Partnership (CCEP) Alliance, the National Wildlife Federation, the NOAA Climate Program Office, the Tri-Agency Climate Change Education Collaborative and the Climate Literacy and Energy Awareness Network (CLEAN) are working to develop a new web portal that will highlight "effective" practices, including the acquisition and use of climate change knowledge to inform decision-making. The purpose of the web portal is to transfer effective practice to support communities to be

  6. Climatological changing effects on wind, precipitation and erosion: Large, meso and small scale analysis

    International Nuclear Information System (INIS)

    Aslan, Z.

    2004-01-01

    Fourier transform analysis of monthly average values of meteorological parameters has been carried out, and amplitudes and phase angles have been calculated using ground measurements in Turkey. The first order harmonics of meteorological parameters show large scale effects, while higher order harmonics show the effects of small scale fluctuations. The variations of first through sixth order harmonic amplitudes and phases provide a useful means of understanding the large and local scale effects on meteorological parameters. The phase angle can be used to determine the time of year the maximum or minimum of a given harmonic occurs. The analysis helps us to distinguish different pressure, relative humidity, temperature, precipitation and wind speed regimes and transition regions. Local and large scale phenomena and some unusual seasonal patterns are also defined near the Keban Dam and the irrigation area. Analysis of precipitation based on long term data shows that semi-annual fluctuations are predominant in the study area. Similarly, pressure variations are mostly influenced by semi-annual fluctuations. Temperature and humidity variations are mostly influenced by meso and micro scale fluctuations. Many large and meso scale climate change simulations for the 21st century are based on concentrations of greenhouse gases. A better understanding of these effects on soil erosion is necessary to determine social, economic and other impacts of erosion. The second part of this study covers the time series analysis of precipitation, rainfall erosivity and wind erosion in the Marmara Region. Rainfall and runoff erosivity factors are defined by considering the results of field measurements at 10 stations. Climatological change effects on rainfall erosion have been determined by monitoring meteorological variables. In previous studies, the Fournier Index is defined to estimate the rainfall erosivity for the study area. The Fournier Index or in other words a climatic index
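
    A short sketch of the harmonic analysis described above: amplitudes and phase angles of the first six harmonics of a monthly mean series are extracted with a discrete Fourier transform; the first harmonic reflects the annual (large-scale) cycle and higher harmonics the semi-annual and smaller-scale fluctuations. The monthly values are illustrative, not station data from Turkey.

```python
# Amplitudes and phase angles of the first six harmonics of a 12-value monthly series.
# The monthly means are illustrative placeholders, not measurements from Turkey.
import numpy as np

monthly = np.array([4.1, 5.3, 8.9, 13.2, 18.4, 23.0,
                    26.1, 25.8, 21.4, 15.6, 9.8, 5.7])   # e.g. mean temperature (deg C)

n = monthly.size
coeffs = np.fft.rfft(monthly) / n        # complex Fourier coefficients c_0 .. c_6
for k in range(1, 7):
    scale = 1.0 if 2 * k == n else 2.0   # the 6th harmonic is the Nyquist term for n = 12
    amplitude = scale * np.abs(coeffs[k])
    phase = np.degrees(np.angle(coeffs[k]))
    print(f"harmonic {k}: amplitude = {amplitude:5.2f}, phase = {phase:7.1f} deg")
```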

  7. Climate change impacts on risks of groundwater pollution by herbicides: a regional scale assessment

    Science.gov (United States)

    Steffens, Karin; Moeys, Julien; Lindström, Bodil; Kreuger, Jenny; Lewan, Elisabet; Jarvis, Nick

    2014-05-01

    Groundwater contributes nearly half of the Swedish drinking water supply, which therefore needs to be protected both under present and future climate conditions. Pesticides are sometimes found in Swedish groundwater in concentrations exceeding the EU-drinking water limit and thus constitute a threat. The aim of this study was to assess the present and future risks of groundwater pollution at the regional scale by currently approved herbicides. We identified representative combinations of major crop types and their specific herbicide usage (product, dose and application timing) based on long-term monitoring data from two agricultural catchments in the South-West of Sweden. All these combinations were simulated with the regional version of the pesticide fate model MACRO (called MACRO-SE) for the periods 1970-1999 and 2070-2099 for a major crop production region in South West Sweden. To represent the uncertainty in future climate data, we applied a five-member ensemble based on different climate model projections downscaled with the RCA3-model (Swedish Meteorological and Hydrological Institute). In addition to the direct impacts of changes in the climate, the risks of herbicide leaching in the future will also be affected by likely changes in weed pressure and land use and management practices (e.g. changes in crop rotations and application timings). To assess the relative importance of such factors we performed a preliminary sensitivity analysis which provided us with a hierarchical structure for constructing future herbicide use scenarios for the regional scale model runs. The regional scale analysis gave average concentrations of herbicides leaching to groundwater for a large number of combinations of soils, crops and compounds. The results showed that future scenarios for herbicide use (more autumn-sown crops, more frequent multiple applications on one crop, and a shift from grassland to arable crops such as maize) imply significantly greater risks of herbicide

  8. Length scale hierarchy and spatiotemporal change of alluvial morphologies over the Selenga River delta, Russia

    Science.gov (United States)

    Dong, T. Y.; Nittrouer, J.; McElroy, B. J.; Ma, H.; Czapiga, M. J.; Il'icheva, E.; Pavlov, M.; Parker, G.

    2017-12-01

    The movement of water and sediment in natural channels creates various types of alluvial morphologies that span length scales from dunes to deltas. The behavior of these morphologies is controlled microscopically by hydrodynamic conditions and bed material size, and macroscopically by hydrologic and geological settings. Alluvial morphologies can be modeled as either diffusive or kinematic waves, in accordance with their respective boundary conditions. Recently, it has been shown that the difference between these two dynamic behaviors of alluvial morphologies can be characterized by the backwater number, which is a dimensionless value normalizing the length scale of a morphological feature to its local hydrodynamic condition. Application of the backwater number has proven useful for evaluating the size of morphologies, including deltas (e.g., by assessing the preferential avulsion location of a lobe), and for comparing bedform types across different fluvial systems. Yet two critical questions emerge when applying the backwater number: First, how do different types of alluvial morphologies compare within a single deltaic system, where there is a hydrodynamic transition from uniform to non-uniform flow? Second, how do different types of morphologies evolve temporally within a system as a function of changing water discharge? This study addresses these questions by compiling and analyzing field data from the Selenga River delta, Russia, which include measurements of flow velocity, channel geometry, bed material grain size, and channel slope, as well as length scales of various morphologies, including dunes, island bars, meanders, bifurcations, and delta lobes. Data analyses reveal that the length scales of the morphologies decrease and the backwater number increases as flow transitions from uniform to non-uniform conditions progressing downstream. It is shown that the evaluated length scale hierarchy and planform distribution of different morphologies can be used to

  9. Evaluation of different downscaling techniques for hydrological climate-change impact studies at the catchment scale

    Energy Technology Data Exchange (ETDEWEB)

    Teutschbein, Claudia [Stockholm University, Department of Physical Geography and Quaternary Geology, Stockholm (Sweden); Wetterhall, Fredrik [King' s College London, Department of Geography, Strand, London (United Kingdom); Swedish Meteorological and Hydrological Institute, Norrkoeping (Sweden); Seibert, Jan [Stockholm University, Department of Physical Geography and Quaternary Geology, Stockholm (Sweden); Uppsala University, Department of Earth Sciences, Uppsala (Sweden); University of Zurich, Department of Geography, Zurich (Switzerland)

    2011-11-15

    Hydrological modeling for climate-change impact assessment implies using meteorological variables simulated by global climate models (GCMs). Due to mismatching scales, coarse-resolution GCM output cannot be used directly for hydrological impact studies but rather needs to be downscaled. In this study, we investigated the variability of seasonal streamflow and flood-peak projections caused by the use of three statistical approaches to downscale precipitation from two GCMs for a meso-scale catchment in southeastern Sweden: (1) an analog method (AM), (2) a multi-objective fuzzy-rule-based classification (MOFRBC) and (3) the Statistical DownScaling Model (SDSM). The obtained higher-resolution precipitation values were then used to simulate daily streamflow for a control period (1961-1990) and for two future emission scenarios (2071-2100) with the precipitation-streamflow model HBV. The choice of downscaled precipitation time series had a major impact on the streamflow simulations, which was directly related to the ability of the downscaling approaches to reproduce observed precipitation. Although SDSM was considered to be most suitable for downscaling precipitation in the studied river basin, we highlighted the importance of an ensemble approach. The climate and streamflow change signals indicated that the current flow regime with a snowmelt-driven spring flood in April will likely change to a flow regime that is rather dominated by large winter streamflows. Spring flood events are expected to decrease considerably and occur earlier, whereas autumn flood peaks are projected to increase slightly. The simulations demonstrated that projections of future streamflow regimes are highly variable and can even partly point towards different directions. (orig.)

  10. Classification as a generic tool for characterising status and changes of regional scale groundwater systems

    Science.gov (United States)

    Barthel, Roland; Haaf, Ezra

    2016-04-01

    Regional hydrogeology is becoming increasingly important, but at the same time, scientifically sound, universal solutions for typical groundwater problems encountered on the regional scale are hard to find. While managers, decision-makers and state agencies operating on regional and national levels have always shown a strong interest in regional scale hydrogeology, researchers from academia tend to avoid the subject, focusing instead on local scales. Additionally, hydrogeology has always had a tendency to regard every problem as unique to its own site- and problem-specific context. Regional scale hydrogeology is therefore pragmatic rather than aiming at developing generic methodology (Barthel, 2014; Barthel and Banzhaf, 2016). One of the main challenges encountered on the regional scale in hydrogeology is the extreme heterogeneity that generally increases with the size of the studied area - paired with relative data scarcity. Even in well-monitored regions of the world, groundwater observations are usually clustered, leaving large areas without any direct data. However, there are many good reasons for assessing the status and predicting the behavior of groundwater systems under conditions of global change even for those areas and aquifers without observations. This is typically done by using rather coarsely discretized and / or poorly parameterized numerical models, or by using very simplistic conceptual hydrological models that do not take into account the complex three-dimensional geological setup. Numerical models heavily rely on local data and are resource-demanding. Conceptual hydrological models only deliver reliable information on groundwater if the geology is extremely simple. In this contribution, we present an approach to derive statistically relevant information for un-monitored areas, making use of existing information from similar localities that are or have been monitored. The approach combines site-specific knowledge with conceptual assumptions on

  11. An experimental verification of the compensation of length change of line scales caused by ambient air pressure

    International Nuclear Information System (INIS)

    Takahashi, Akira; Miwa, Nobuharu

    2010-01-01

    Line scales are used as a working standard of length for the calibration of optical measuring instruments such as profile projectors, measuring microscopes and video measuring systems. The authors have developed a one-dimensional calibration system for line scales to obtain a lower uncertainty of measurement. The scale calibration system, named Standard Scale Calibrator SSC-05, employs a vacuum interferometer system for length measurement, a 633 nm iodine-stabilized He–Ne laser to calibrate the oscillating frequency of the interferometer laser light source and an Abbe's error compensation structure. To reduce the uncertainty of measurement, the uncertainty factors of the line scale and ambient conditions should not be neglected. Using the length calibration system, the expansion and contraction of a line scale due to changes in ambient air pressure were observed and the measured scale length was corrected into the length under standard atmospheric pressure, 1013.25 hPa. Utilizing a natural rapid change in the air pressure caused by a tropical storm (typhoon), we carried out an experiment on the length measurement of a 1000 mm long line scale made of glass ceramic with a low coefficient of thermal expansion. Using a compensation formula for the length change caused by changes in ambient air pressure, the length change of the 1000 mm long line scale was compensated with a standard deviation of less than 1 nm
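    The abstract does not give the compensation formula itself; the sketch below is only a hypothetical illustration of the kind of correction involved, assuming a simple linear fractional length change per hPa with a placeholder coefficient.

        P_STANDARD_HPA = 1013.25   # standard atmospheric pressure, hPa

        def compensate_length(measured_length_mm, pressure_hpa, beta_per_hpa=1e-9):
            """Refer a measured line-scale length to standard pressure.

            beta_per_hpa is an assumed (hypothetical) fractional length change
            per hPa; the real coefficient would be determined experimentally.
            """
            delta_p = pressure_hpa - P_STANDARD_HPA
            return measured_length_mm / (1.0 + beta_per_hpa * delta_p)

        # Example: a nominal 1000 mm scale read while a typhoon lowers the pressure
        print(compensate_length(1000.0000300, 980.0))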

  12. The reduction method of statistic scale applied to study of climatic change

    International Nuclear Information System (INIS)

    Bernal Suarez, Nestor Ricardo; Molina Lizcano, Alicia; Martinez Collantes, Jorge; Pabon Jose Daniel

    2000-01-01

    In climate change studies the global circulation models of the atmosphere (GCMAs) enable one to simulate the global climate, with the field variables being represented on grid points 300 km apart. One particular interest concerns the simulation of possible changes in rainfall and surface air temperature due to an assumed increase of greenhouse gases. However, the models yield the climatic projections on grid points that in most cases do not correspond to the sites of major interest. To achieve local estimates of the climatological variables, methods like the one known as statistical downscaling are applied. In this article we show a case in point by applying canonical correlation analysis (CCA) to the Guajira Region in the northeast of Colombia
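    As a rough illustration of the downscaling step (a sketch on synthetic data, not the authors' implementation), the code below uses canonical correlation analysis from scikit-learn to relate coarse grid-point predictors to local station predictands.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        n_months, n_gridpoints, n_stations = 240, 16, 3

        # Synthetic large-scale predictors (GCM grid points) and local predictands
        X = rng.standard_normal((n_months, n_gridpoints))
        shared = X[:, :3] @ rng.standard_normal((3, n_stations))
        Y = shared + 0.5 * rng.standard_normal((n_months, n_stations))

        cca = CCA(n_components=2)
        cca.fit(X[:180], Y[:180])        # calibrate on the first 15 years
        Y_hat = cca.predict(X[180:])     # local estimates for the remaining months
        print(Y_hat.shape)               # (60, 3)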

  13. Climate equivalence scales and the effects of climate change on Russian welfare and well-being

    Energy Technology Data Exchange (ETDEWEB)

    Frijters, P. [Tinbergen Institute, University of Amsterdam, Amsterdam (Netherlands)]; Van Praag, B.M.S. [Foundation for Economic Research SEO, Faculty of Economics and Econometrics, University of Amsterdam, Amsterdam (Netherlands)]

    1996-12-31

    The concepts of welfare and well-being are made operational and are measured for two large Russian household surveys, carried out in 1993 and 1994. Welfare refers to satisfaction with income and well-being refers to satisfaction with life as a whole. The main question in this paper is how different climatic conditions in various parts of Russia affect the cost of living and well-being. This approach yields climate equivalence scales for both welfare and well-being. Finally we apply the result to assess the impact of a climate change. Under the assumption that the climate cost structure is invariant under climate change, an increase of 2 Celsius in average temperature could mean an effective decrease in the cost of living of 32% on average in Russia. 5 tabs., 1 app., 28 refs.

  14. Generic framework for meso-scale assessment of climate change hazards in coastal environments

    DEFF Research Database (Denmark)

    Appelquist, Lars Rosendahl

    2013-01-01

    This paper presents a generic framework for assessing inherent climate change hazards in coastal environments through a combined coastal classification and hazard evaluation system. The framework is developed to be used at scales relevant for regional and national planning and aims to cover all coastal environments worldwide through a specially designed coastal classification system containing 113 generic coastal types. The framework provides information on the degree to which key climate change hazards are inherent in a particular coastal environment, and covers the hazards of ecosystem ... and computing requirements, allowing for application in developing country settings. It is presented as a graphical tool—the Coastal Hazard Wheel—to ease its application for planning purposes.

  15. Very small glaciers under climate change: from the local to the global scale

    Science.gov (United States)

    Huss, M.; Fischer, M.

    2015-12-01

    Very small glaciers (climate archive. Very small glaciers have generally shorter response times than valley glaciers and their mass balance is strongly dependent on snow redistribution processes. Worldwide glacier monitoring has focused on medium-sized to large glaciers leaving us with a relatively limited understanding of the behavior of very small glaciers. With warming climate there is an increasing concern that very small glaciers might be the first to disappear. Already in the next decades this might result in the complete deglaciation of mountain ranges with glacier equilibrium lines close to the highest peaks, such as in the Rocky Mountains, the European Alps, the Andes or parts of High Mountain Asia. In this contribution, we present a comprehensive modelling framework to assess past and future changes in very small glaciers at the mountain-range scale. Among other processes our model accounts for snow redistribution, changes in glacier geometry and dynamic changes in debris-coverage, and computes e.g. distributed mass balance, englacial temperature and proglacial runoff. Detailed glacier projections until 2060 are shown for the Swiss Alps based on new data sets, and the 21st century contribution of all very small glaciers worldwide to sea-level rise is quantified using a global model. Grid-based modelling of surface mass balance and retreat for 1133 very small glaciers in Switzerland indicates that 70% of them will completely vanish within the next 25 years. However, a few avalanche-fed glaciers at low elevation might be able to survive even substantial atmospheric warming. We find relatively high static and dynamic sensitivities for gently-sloping glaciers. At the global scale, glaciers presently smaller than 1 km2 make up for only 0.7% of total ice volume but account for 6.7% of sea-level rise contribution during the period 2015-2025. This indicates that very small glaciers are a non-negligible component of global glacier change, at least in the near

  16. The Non-linear Trajectory of Change in Play Profiles of Three Children in Psychodynamic Play Therapy

    OpenAIRE

    Halfon, Sibel; Çavdar, Alev; Orsucci, Franco; Schiepek, Gunter K.; Andreassi, Silvia; Giuliani, Alessandro; de Felice, Giulio

    2016-01-01

    Aim: Even though there is substantial evidence that play based therapies produce significant change, the specific play processes in treatment remain unexamined. For that purpose, processes of change in long-term psychodynamic play therapy are assessed through a repeated systematic assessment of three children’s “play profiles,” which reflect patterns of organization among play variables that contribute to play activity in therapy, indicative of the children’s coping strategies, and an express...

  17. Was millennial scale climate change during the Last Glacial triggered by explosive volcanism?

    Science.gov (United States)

    Baldini, James U L; Brown, Richard J; McElwaine, Jim N

    2015-11-30

    The mechanisms responsible for millennial scale climate change within glacial time intervals are equivocal. Here we show that all eight known radiometrically-dated Tambora-sized or larger NH eruptions over the interval 30 to 80 ka BP are associated with abrupt Greenland cooling (>95% confidence). Additionally, previous research reported a strong statistical correlation between the timing of Southern Hemisphere volcanism and Dansgaard-Oeschger (DO) events (>99% confidence), but did not identify a causative mechanism. Volcanic aerosol-induced asymmetrical hemispheric cooling over the last few hundred years restructured atmospheric circulation in a similar fashion as that associated with Last Glacial millennial-scale shifts (albeit on a smaller scale). We hypothesise that following both recent and Last Glacial NH eruptions, volcanogenic sulphate injections into the stratosphere cooled the NH preferentially, inducing a hemispheric temperature asymmetry that shifted atmospheric circulation cells southward. This resulted in Greenland cooling, Antarctic warming, and a southward shifted ITCZ. However, during the Last Glacial, the initial eruption-induced climate response was prolonged by NH glacier and sea ice expansion, increased NH albedo, AMOC weakening, more NH cooling, and a consequent positive feedback. Conversely, preferential SH cooling following large SH eruptions shifted atmospheric circulation to the north, resulting in the characteristic features of DO events.

  18. Multiple time scale analysis of sediment and runoff changes in the Lower Yellow River

    Directory of Open Access Journals (Sweden)

    K. Chi

    2018-06-01

    Full Text Available Sediment and runoff changes at seven hydrological stations along the Lower Yellow River (LYR) (Huayuankou Station, Jiahetan Station, Gaocun Station, Sunkou Station, Ai Shan Station, Qikou Station and Lijin Station) from 1980 to 2003 were analyzed at multiple time scales. The maximum values of monthly, daily and hourly sediment load and runoff were also analyzed together with the annual mean values. The Mann–Kendall non-parametric test and the Hurst coefficient method were adopted in the study. Research results indicate that (1) the runoff of the seven hydrological stations was significantly reduced over the study period at different time scales, whereas the trends of sediment load at these stations were not obvious; the sediment load of the Huayuankou, Jiahetan and Aishan stations even slightly increased as the runoff decreased. (2) The trends of the sediment load at different time scales showed differences at the Luokou and Lijin stations: although the annual and monthly sediment loads were broadly flat, the maximum hourly sediment load showed a decreasing trend. (3) According to the Hurst coefficients, the trends in sediment and runoff will continue if no measures are taken, which demonstrates the necessity of the runoff-sediment regulation scheme.
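    For readers unfamiliar with the trend test mentioned above, the following sketch (not the authors' code) shows a basic Mann–Kendall test; tie corrections are omitted for brevity and the runoff series is invented.

        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x):
            """Return the Mann-Kendall S statistic, Z score and two-sided p-value."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            s = 0.0
            for i in range(n - 1):
                s += np.sum(np.sign(x[i + 1:] - x[i]))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance without tie correction
            if s > 0:
                z = (s - 1) / np.sqrt(var_s)
            elif s < 0:
                z = (s + 1) / np.sqrt(var_s)
            else:
                z = 0.0
            p = 2.0 * (1.0 - norm.cdf(abs(z)))
            return s, z, p

        annual_runoff = [420, 410, 395, 400, 370, 360, 365, 340, 330, 320]  # invented
        print(mann_kendall(annual_runoff))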

  19. Impact of thermoelectric phenomena on phase-change memory performance metrics and scaling

    International Nuclear Information System (INIS)

    Lee, Jaeho; Asheghi, Mehdi; Goodson, Kenneth E

    2012-01-01

    The coupled transport of heat and electrical current, or thermoelectric phenomena, can strongly influence the temperature distribution and figures of merit for phase-change memory (PCM). This paper simulates PCM devices with careful attention to thermoelectric transport and the resulting impact on programming current during the reset operation. The electrothermal simulations consider Thomson heating within the phase-change material and Peltier heating at the electrode interface. Using representative values for the Thomson and Seebeck coefficients extracted from our past measurements of these properties, we predict a cell temperature increase of 44% and a decrease in the programming current of 16%. Scaling arguments indicate that the impact of thermoelectric phenomena becomes greater with smaller dimensions due to enhanced thermal confinement. This work estimates the scaling of this reduction in programming current as electrode contact areas are reduced down to 10 nm × 10 nm. Precise understanding of thermoelectric phenomena and their impact on device performance is a critical part of PCM design strategies. (paper)
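    As a back-of-the-envelope companion to the abstract (all numbers below are illustrative placeholders, not values from the paper), the Peltier heat rate at an interface can be written as Q = Pi * I with Pi = S * T, and the Thomson heat rate along a temperature gradient as roughly -tau * I * dT:

        # Illustrative estimate only; all coefficients and operating values are assumed.
        S_pcm = 50e-6        # assumed Seebeck coefficient of the phase-change layer, V/K
        T_interface = 900.0  # assumed electrode-interface temperature during reset, K
        I_reset = 200e-6     # assumed reset current, A
        tau = 30e-6          # assumed Thomson coefficient, V/K
        delta_T = 400.0      # assumed temperature rise along the current path, K

        q_peltier = S_pcm * T_interface * I_reset   # W, released/absorbed at the interface
        q_thomson = -tau * I_reset * delta_T        # W, distributed within the cell
        print(f"Peltier: {q_peltier * 1e6:.1f} uW, Thomson: {q_thomson * 1e6:.1f} uW")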

  20. Detecting Land-Use Change and On-Farm Investments at the Plot Scale

    Science.gov (United States)

    Burney, J. A.; Goldblatt, R.; Amezaga, K. Y.; Sanford, L.; Nichols, M. M.

    2017-12-01

    The ability to remotely monitor agro-ecosystems over large spatial scales, at high spatial and temporal resolution, promises to open new and previously intractable lines of inquiry about the relationships between management practices, welfare, and resilience in coupled human-natural systems. We use several sources of remotely sensed data (from vegetation indices to synthetic aperture radar) and new analysis methods to infer when and where land-use and management changes take place at the farm level, including processes leading to degradation, like overgrazing or tree removal, as well as processes intended to boost resilience, like irrigation and conservation agriculture. Here, we first show how ecosystem health metrics can be used as indicators of both poverty and vulnerability. This is especially important because many other remotely-sensed economic proxies exhibit hysteresis in one direction; that is, they may respond quickly to positive income shocks (e.g., a change in income may rapidly lead to more construction and an expansion of the urban environment), but little if at all to negative shocks (a drop in income does not lead to deconstruction of buildings). We then present results from three field projects that show how these techniques can be used to detect management changes — reflecting changes in household welfare — in both field and quasi-natural experiments.

  1. Did Large-Scale Vaccination Drive Changes in the Circulating Rotavirus Population in Belgium?

    Science.gov (United States)

    Pitzer, Virginia E.; Bilcke, Joke; Heylen, Elisabeth; Crawford, Forrest W.; Callens, Michael; De Smet, Frank; Van Ranst, Marc; Zeller, Mark; Matthijnssens, Jelle

    2015-01-01

    Vaccination can place selective pressures on viral populations, leading to changes in the distribution of strains as viruses evolve to escape immunity from the vaccine. Vaccine-driven strain replacement is a major concern after nationwide rotavirus vaccine introductions. However, the distribution of the predominant rotavirus genotypes varies from year to year in the absence of vaccination, making it difficult to determine what changes can be attributed to the vaccines. To gain insight into the underlying dynamics driving changes in the rotavirus population, we fitted a hierarchy of mathematical models to national and local genotype-specific hospitalization data from Belgium, where large-scale vaccination was introduced in 2006. We estimated that natural- and vaccine-derived immunity was strongest against completely homotypic strains and weakest against fully heterotypic strains, with an intermediate immunity amongst partially heterotypic strains. The predominance of G2P[4] infections in Belgium after vaccine introduction can be explained by a combination of natural genotype fluctuations and weaker natural and vaccine-induced immunity against infection with strains heterotypic to the vaccine, in the absence of significant variation in strain-specific vaccine effectiveness against disease. However, the incidence of rotavirus gastroenteritis is predicted to remain low despite vaccine-driven changes in the distribution of genotypes. PMID:26687288

  2. Linear gate

    International Nuclear Information System (INIS)

    Suwono.

    1978-01-01

    A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar. If the input signal is bipolar, the negative portion will be filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)

  3. Comparison and Evaluation of Global Scale Studies of Vulnerability and Risks to Climate Change

    Science.gov (United States)

    Muccione, Veruska; Allen, Simon K.; Huggel, Christian; Birkmann, Joern

    2015-04-01

    Understanding the present and future distribution of different climate change impacts and vulnerability to climate change is a central subject in the context of climate justice and international climate policy. Commonly, it is claimed that poor countries that contributed little to anthropogenic climate change are those most affected and most vulnerable to climate change. Such statements are backed by a number of global-scale vulnerability studies, which identified poor countries as most vulnerable. However, some studies have challenged this view, likewise highlighting the high vulnerability of richer countries. Overall, no consensus has been reached so far about which concept of vulnerability should be applied and what type of indicators should be considered. Furthermore, there is little agreement on which specific countries are most vulnerable. This is a major concern in view of the need to inform international climate policy, all the more so if such assessments are to contribute to allocating climate adaptation funds, as has been invoked on some occasions. We argue that next to the analysis of who is most vulnerable, it is also important to better understand and compare different vulnerability profiles assessed in present global studies. We perform a systematic literature review of global vulnerability assessments with the aim of highlighting vulnerability distribution patterns. We then compare these distributions with global risk distributions in line with the revised concepts adopted by the most recent IPCC reports. It emerges that improved differentiation of key drivers of risk and the understanding of different vulnerability profiles are important contributions, which can inform future adaptation policies at the regional and national level. This can change the perspective on, and basis for, distributional issues in view of climate burden sharing, and therefore can have implications for UNFCCC financing instruments (e.g. the Green Climate Fund). However, in order to better compare

  4. Linear Accelerators

    International Nuclear Information System (INIS)

    Vretenar, M

    2014-01-01

    The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics

  5. Efficient Non Linear Loudspeakers

    DEFF Research Database (Denmark)

    Petersen, Bo R.; Agerkvist, Finn T.

    2006-01-01

    Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating non-linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels ... by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption.

  6. National Scale Prediction of Soil Carbon Sequestration under Scenarios of Climate Change

    Science.gov (United States)

    Izaurralde, R. C.; Thomson, A. M.; Potter, S. R.; Atwood, J. D.; Williams, J. R.

    2006-12-01

    Carbon sequestration in agricultural soils is gaining momentum as a tool to mitigate the rate of increase of atmospheric CO2. Researchers from the Pacific Northwest National Laboratory, Texas A&M University, and USDA-NRCS used the EPIC model to develop national-scale predictions of soil carbon sequestration with adoption of no till (NT) under scenarios of climate change. In its current form, the EPIC model simulates soil C changes resulting from heterotrophic respiration and wind / water erosion. Representative modeling units were created to capture the climate, soil, and management variability at the 8-digit hydrologic unit (USGS classification) watershed scale. The soils selected represented at least 70% of the variability within each watershed. This resulted in 7,540 representative modeling units for 1,412 watersheds. Each watershed was assigned a major crop system: corn, soybean, spring wheat, winter wheat, cotton, hay, alfalfa, corn-soybean rotation or wheat-fallow rotation based on information from the National Resource Inventory. Each representative farm was simulated with conventional tillage and no tillage, and with and without irrigation. Climate change scenarios for two future periods (2015-2045 and 2045-2075) were selected from GCM model runs using the IPCC SRES scenarios of A2 and B2 from the UK Hadley Center (HadCM3) and US DOE PCM (PCM) models. Changes in mean and standard deviation of monthly temperature and precipitation were extracted from gridded files and applied to baseline climate (1960-1990) for each of the 1,412 modeled watersheds. Modeled crop yields were validated against historical USDA NASS county yields (1960-1990). The HadCM3 model predicted the most severe changes in climate parameters. Overall, there would be little difference between the A2 and B2 scenarios. Carbon offsets were calculated as the difference in soil C change between conventional and no till. Overall, C offsets during the first 30-y period (513 Tg C) are predicted to

  7. Integrated modelling of anthropogenic land-use and land-cover change on the global scale

    Science.gov (United States)

    Schaldach, R.; Koch, J.; Alcamo, J.

    2009-04-01

    In many cases land-use activities go hand in hand with substantial modifications of the physical and biological cover of the Earth's surface, resulting in direct effects on energy and matter fluxes between terrestrial ecosystems and the atmosphere. For instance, the conversion of forest to cropland is changing climate-relevant surface parameters (e.g. albedo) as well as evapotranspiration processes and carbon flows. In turn, human land-use decisions are also influenced by environmental processes. Changing temperature and precipitation patterns, for example, are important determinants of the location and intensity of agriculture. Due to these close linkages, processes of land-use and related land-cover change should be considered as important components in the construction of Earth System models. A major challenge in modelling land-use change on the global scale is the integration of socio-economic aspects and human decision making with environmental processes. One of the few global approaches that integrates functional components to represent both anthropogenic and environmental aspects of land-use change is the LandSHIFT model. It simulates the spatial and temporal dynamics of the human land-use activities of settlement, cultivation of food crops and grazing management, which compete for the available land resources. The rationale of the model is to regionalize the demands for area-intensive commodities (e.g. crop production) and services (e.g. space for housing) from the country level to a global grid with the spatial resolution of 5 arc-minutes. The modelled land-use decisions within the agricultural sector are influenced by changing climate and the resulting effects on biomass productivity. Currently, this causal chain is modelled by integrating results from the process-based vegetation model LPJmL for changing crop yields and net primary productivity of grazing land. Model output of LandSHIFT is a time series of grid maps with land-use/land-cover information

  8. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG) because it calculates linear complexity using the algebraic expression of its algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; therefore, the linear complexity is generally given as an estimate. In contrast, because the linearization method calculates from the algorithm of the PRNG, it can determine the lower bound of the linear complexity.
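    To make the comparison above concrete, here is a small sketch (not from the paper) of the Berlekamp-Massey algorithm over GF(2), which determines linear complexity from an output sequence in O(N^2) operations, in contrast to the linearization method, which works from the PRNG's algebraic description.

        def berlekamp_massey_gf2(bits):
            """Return the linear complexity of a binary sequence (list of 0/1)."""
            n = len(bits)
            c = [0] * n; b = [0] * n
            c[0] = b[0] = 1
            L, m = 0, -1
            for i in range(n):
                # discrepancy between the sequence and the current LFSR prediction
                d = bits[i]
                for j in range(1, L + 1):
                    d ^= c[j] & bits[i - j]
                if d == 1:
                    t = c[:]
                    for j in range(n - i + m):
                        c[i - m + j] ^= b[j]
                    if 2 * L <= i:
                        L, m, b = i + 1 - L, i, t
            return L

        # 30 bits from the recurrence s[n] = s[n-3] XOR s[n-4] (a 4-stage LFSR),
        # so the expected linear complexity is 4.
        s = [1, 0, 0, 1]
        for k in range(4, 30):
            s.append(s[k - 3] ^ s[k - 4])
        print(berlekamp_massey_gf2(s))   # -> 4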

  9. Magnetic field induced changes in linear and nonlinear optical properties of Ti incorporated Cr2O3 nanostructured thin film

    Science.gov (United States)

    Baraskar, Priyanka; Chouhan, Romita; Agrawal, Arpana; Choudhary, R. J.; Sen, Pranay K.; Sen, Pratima

    2018-03-01

    We report the magnetic field effect on the linear and nonlinear optical properties of a pulsed-laser-ablated Ti-incorporated Cr2O3 nanostructured thin film. Optical properties have been experimentally analyzed under Voigt geometry by performing ultraviolet-visible spectroscopy and the closed aperture Z-scan technique using a continuous wave He-Ne laser source. The nonlinear optical response reveals a single peak-valley feature in the far-field diffraction pattern in the absence of a magnetic field (B = 0), confirming a self-defocusing effect. This feature switches to a valley-peak configuration for B = 5000 G, suggesting a self-focusing effect. For B ≤ 750 G, oscillations were observed, revealing the occurrence of higher-order nonlinearity. The origin of the nonlinearity is attributed to near-resonant d-d transitions observed from the broad peak occurring around 2 eV. These transitions are of magnetic origin and get modified under the application of an external magnetic field. Our results suggest that a magnetic field can be used as an effective tool to monitor the sign of the optical nonlinearity, and hence the thermal expansion, in the Ti-incorporated Cr2O3 nanostructured thin film.

  10. Analysis of Multi-Scale Changes in Arable Land and Scale Effects of the Driving Factors in the Loess Areas in Northern Shaanxi, China

    Directory of Open Access Journals (Sweden)

    Lina Zhong

    2014-04-01

    Full Text Available In this study, statistical data on the national economic and social development, including the year-end actual area of arable land, the crop yield per unit area and 10 factors, were obtained for the period between 1980 and 2010 and used to analyze the factors driving changes in the arable land of the Loess Plateau in northern Shaanxi, China. The following areas of arable land, which represent different spatial scales, were investigated: the Baota District, the city of Yan’an, and the Northern Shaanxi region. The scale effects of the factors driving the changes to the arable land were analyzed using a canonical correlation analysis and a principal component analysis. Because it was difficult to quantify the impact of the national government policies on the arable land changes, the contributions of the national government policies to the changes in arable land were analyzed qualitatively. The primary conclusions of the study were as follows: between 1980 and 2010, the arable land area decreased. The trends of the year-end actual arable land proportion of the total area in the northern Shaanxi region and Yan’an City were broadly consistent, whereas the proportion in the Baota District had no obvious similarity with the northern Shaanxi region and Yan’an City. Remarkably different factors were shown to influence the changes in the arable land at different scales. Environmental factors exerted a greater effect on smaller-scale arable land areas (the Baota District). The effect of socio-economic development was a major driving factor for the changes in the arable land area at the city and regional scales. At smaller scales, population change, urbanization and socio-economic development affected the crop yield per unit area either directly or indirectly. Socio-economic development and the modernization of agricultural technology had a greater effect on the crop yield per unit area at larger scales. Furthermore, the qualitative analysis

  11. Ecoregional-scale monitoring within conservation areas, in a rapidly changing climate

    Science.gov (United States)

    Beever, Erik A.; Woodward, Andrea

    2011-01-01

    Long-term monitoring of ecological systems can prove invaluable for resource management and conservation. Such monitoring can: (1) detect instances of long-term trend (either improvement or deterioration) in monitored resources, thus providing an early-warning indication of system change to resource managers; (2) inform management decisions and help assess the effects of management actions, as well as anthropogenic and natural disturbances; and (3) provide the grist for supplemental research on mechanisms of system dynamics and cause-effect relationships (Fancy et al., 2009). Such monitoring additionally provides a snapshot of the status of monitored resources during each sampling cycle, and helps assess whether legal standards and regulations are being met. Until the last 1-2 decades, tracking and understanding changes in condition of natural resources across broad spatial extents have been infrequently attempted. Several factors, however, are facilitating the achievement of such broad-scale investigation and monitoring. These include increasing awareness of the importance of landscape context, greater prevalence of regional and global environmental stressors, and the rise of landscape-scale programs designed to manage and monitor biological systems. Such programs include the US Forest Service's Forest Inventory and Analysis (FIA) Program (Moser et al., 2008), Canada's National Forest Inventory, the 3Q Programme for monitoring agricultural landscapes of Norway (Dramstad et al., 2002), and the emerging (US) Landscape Conservation Cooperatives (USDOI Secretarial Order 3289, 2009; Anonymous, 2011). This Special Section explores the underlying design considerations, as well as many pragmatic aspects associated with program implementation and interpretation of results from broad-scale monitoring systems, particularly within the constraints of high-latitude contexts (e.g., low road density, short field season, dramatic fluctuations in temperature). Although Alaska is

  12. Global climate change - a feasibility perspective of its effect on human health at a local scale

    Directory of Open Access Journals (Sweden)

    Michele Bernardi

    2008-05-01

    Full Text Available There are two responses to global climate change. First, mitigation, which involves actions to reduce greenhouse gas emissions and sequester or store carbon in the short term, and development choices that will lead to low emissions in the long term. Second, adaptation, which involves adjustments in natural or human systems and behaviours that reduce the risks posed by climate change to people’s lives and livelihoods. While the two are conceptually distinct, in practice they are very much interdependent, and both are equally urgent from a healthy population perspective. To define the policies to mitigate and to adapt to global climate change, data and information at all scales are the basic requirement for both developed and developing countries. However, as compared to mitigation, adaptation is an immediate concern for low-income countries and for small island states, where the reduction of the emissions from greenhouse gases is not among their priorities. Adaptation is also highly location specific and the required ground data to assess the impacts of climate change on human health are not available. Climate data at high spatial resolution can be derived by various downscaling methods using historical and real-time meteorological observations but, particularly in low-income countries, the outputs are limited by the lack of ground data at the local level. In many of these countries, a negative trend in the number of meteorological stations as compared to before 2000 is evident, while remotely sensed imagery becomes more and more available at high spatial and temporal resolution. The final consequence is that climate change policy options in the developing world are greatly jeopardized.

  13. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 1, meso-scale

    Science.gov (United States)

    Milani, G.; Bertolesi, E.

    2017-07-01

    A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 constant-stress triangular elastic elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for the mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.

  14. Linear algebra

    CERN Document Server

    Said-Houari, Belkacem

    2017-01-01

    This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...

  15. Large-scale patterns of turnover and Basal area change in Andean forests.

    Directory of Open Access Journals (Sweden)

    Selene Báez

    Full Text Available General patterns of forest dynamics and productivity in the Andes Mountains are poorly characterized. Here we present the first large-scale study of Andean forest dynamics using a set of 63 permanent forest plots assembled over the past two decades. In the North-Central Andes tree turnover (mortality and recruitment) and tree growth declined with increasing elevation and decreasing temperature. In addition, basal area increased in Lower Montane Moist Forests but did not change in Higher Montane Humid Forests. However, at higher elevations the lack of net basal area change and excess of mortality over recruitment suggests negative environmental impacts. In North-Western Argentina, forest dynamics appear to be influenced by land use history in addition to environmental variation. Taken together, our results indicate that combinations of abiotic and biotic factors that vary across elevation gradients are important determinants of tree turnover and productivity in the Andes. More extensive and longer-term monitoring and analyses of forest dynamics in permanent plots will be necessary to understand how demographic processes and woody biomass are responding to changing environmental conditions along elevation gradients through this century.

  16. Climate change-driven cliff and beach evolution at decadal to centennial time scales

    Science.gov (United States)

    Erikson, Li; O'Neill, Andrea; Barnard, Patrick; Vitousek, Sean; Limber, Patrick

    2017-01-01

    Here we develop a computationally efficient method that evolves cross-shore profiles of sand beaches with or without cliffs along natural and urban coastal environments and across expansive geographic areas at decadal to centennial time-scales driven by 21st century climate change projections. The model requires projected sea level rise rates, extrema of nearshore wave conditions, bluff recession and shoreline change rates, and cross-shore profiles representing present-day conditions. The model is applied to the ~470-km long coast of the Southern California Bight, USA, using recently available projected nearshore waves and bluff recession and shoreline change rates. The results indicate that eroded cliff material, from unarmored cliffs, contribute 11% to 26% to the total sediment budget. Historical beach nourishment rates will need to increase by more than 30% for a 0.25 m sea level rise (~2044) and by at least 75% by the year 2100 for a 1 m sea level rise, if evolution of the shoreline is to keep pace with rising sea levels.

  17. Development of the Motivation to Change Lifestyle and Health Behaviours for Dementia Risk Reduction Scale

    Directory of Open Access Journals (Sweden)

    Sarang Kim

    2014-06-01

    Full Text Available Background and Aims: It is not yet understood how attitudes concerning dementia risk may affect motivation to change health behaviours and lifestyle. This study was designed to develop a reliable and valid theory-based measure to understand beliefs underpinning the lifestyle and health behavioural changes needed for dementia risk reduction. Methods: 617 participants aged ≥50 years completed a theory-based questionnaire, namely, the Motivation to Change Lifestyle and Health Behaviours for Dementia Risk Reduction (MCLHB-DRR) scale. The MCLHB-DRR consists of 53 items, reflecting seven subscales of the Health Belief Model. Results: Confirmatory factor analysis was performed and revealed that a seven-factor solution with 27 items fitted the data (comparative fit index = 0.920, root-mean-square error of approximation = 0.047) better than the original 53 items. Internal reliability (α = 0.608-0.864) and test-retest reliability (α = 0.552-0.776) were moderate to high. Measurement invariance across gender and age was also demonstrated. Conclusions: These results suggest that the MCLHB-DRR is a useful tool in assessing the beliefs and attitudes of males and females aged ≥50 years towards dementia risk reduction. This measure can be used in the development and evaluation of interventions aimed at dementia prevention.

  18. Linear Energy Transfer-Dependent Change in Rice Gene Expression Profile after Heavy-Ion Beam Irradiation.

    Directory of Open Access Journals (Sweden)

    Kotaro Ishii

    Full Text Available A heavy-ion beam has been recognized as an effective mutagen for plant breeding and applied to many kinds of crops, including rice. In contrast with X-ray or γ-ray, the heavy-ion beam is characterized by a high linear energy transfer (LET). LET is an important factor affecting several aspects of the irradiation effect, e.g. cell survival and mutation frequency, making the heavy-ion beam an effective mutagen. To study the mechanisms behind LET-dependent effects, expression profiling was performed after heavy-ion beam irradiation of imbibed rice seeds. Array-based experiments at three time points (0.5, 1 and 2 h after the irradiation) revealed that the number of up- or down-regulated genes was highest 2 h after irradiation. Array-based experiments with four different LETs at 2 h after irradiation identified LET-independent regulated genes that were up/down-regulated regardless of the value of LET; LET-dependent regulated genes, whose expression level increased with the rise of LET value, were also identified. Gene ontology (GO) analysis of LET-independent up-regulated genes showed that some GO terms were commonly enriched, both 2 hours and 3 weeks after irradiation. GO terms enriched in LET-dependent regulated genes implied that some factor regulates genes that have kinase activity or DNA-binding activity in cooperation with the ATM gene. Of the LET-dependent up-regulated genes, OsPARP3 and OsPCNA were identified, which are involved in DNA repair pathways. This indicates that the Ku-independent alternative non-homologous end-joining pathway may contribute to repairing complex DNA lesions induced by high-LET irradiation. These findings may clarify various LET-dependent responses in rice.

  19. Linear Energy Transfer-Dependent Change in Rice Gene Expression Profile after Heavy-Ion Beam Irradiation.

    Science.gov (United States)

    Ishii, Kotaro; Kazama, Yusuke; Morita, Ryouhei; Hirano, Tomonari; Ikeda, Tokihiro; Usuda, Sachiko; Hayashi, Yoriko; Ohbu, Sumie; Motoyama, Ritsuko; Nagamura, Yoshiaki; Abe, Tomoko

    2016-01-01

    A heavy-ion beam has been recognized as an effective mutagen for plant breeding and applied to many kinds of crops, including rice. In contrast with X-ray or γ-ray, the heavy-ion beam is characterized by a high linear energy transfer (LET). LET is an important factor affecting several aspects of the irradiation effect, e.g. cell survival and mutation frequency, making the heavy-ion beam an effective mutagen. To study the mechanisms behind LET-dependent effects, expression profiling was performed after heavy-ion beam irradiation of imbibed rice seeds. Array-based experiments at three time points (0.5, 1, 2 h after the irradiation) revealed that the number of up- or down-regulated genes was highest 2 h after irradiation. Array-based experiments with four different LETs at 2 h after irradiation identified LET-independent regulated genes that were up/down-regulated regardless of the value of LET; LET-dependent regulated genes, whose expression level increased with the rise of LET value, were also identified. Gene ontology (GO) analysis of LET-independent up-regulated genes showed that some GO terms were commonly enriched, both 2 hours and 3 weeks after irradiation. GO terms enriched in LET-dependent regulated genes implied that some factor regulates genes that have kinase activity or DNA-binding activity in cooperation with the ATM gene. Of the LET-dependent up-regulated genes, OsPARP3 and OsPCNA were identified, which are involved in DNA repair pathways. This indicates that the Ku-independent alternative non-homologous end-joining pathway may contribute to repairing complex DNA lesions induced by high-LET irradiation. These findings may clarify various LET-dependent responses in rice.

  20. Watershed-scale changes in terrestrial nitrogen cycling during a period of decreased atmospheric nitrate and sulfur deposition

    Science.gov (United States)

    Sabo, Robert D.; Scanga, Sara E.; Lawrence, Gregory B.; Nelson, David M.; Eshleman, Keith N.; Zabala, Gabriel A.; Alinea, Alexandria A.; Schirmer, Charles D.

    2016-01-01

    Recent reports suggest that decreases in atmospheric nitrogen (N) deposition throughout Europe and North America may have resulted in declining nitrate export in surface waters in recent decades, yet it is unknown if and how terrestrial N cycling was affected. During a period of decreased atmospheric N deposition, we assessed changes in forest N cycling by evaluating trends in tree-ring δ15N values (between 1980 and 2010; n = 20 trees per watershed), stream nitrate yields (between 2000 and 2011), and retention of atmospherically-deposited N (between 2000 and 2011) in the North and South Tributaries (North and South, respectively) of Buck Creek in the Adirondack Mountains, USA. We hypothesized that tree-ring δ15N values would decline following decreases in atmospheric N deposition (after approximately 1995), and that trends in stream nitrate export and retention of atmospherically deposited N would mirror changes in tree-ring δ15N values. Three of the six sampled tree species and the majority of individual trees showed declining linear trends in δ15N for the period 1980–2010; only two individual trees showed increasing trends in δ15N values. From 1980 to 2010, trees in the watersheds of both tributaries displayed long-term declines in tree-ring δ15N values at the watershed scale (R = −0.35 and p = 0.001 in the North and R = −0.37 and p <0.001 in the South). The decreasing δ15N trend in the North was associated with declining stream nitrate concentrations (−0.009 mg N L−1 yr−1, p = 0.02), but no change in the retention of atmospherically deposited N was observed. In contrast, nitrate yields in the South did not exhibit a trend, and the watershed became less retentive of atmospherically deposited N (−7.3% yr−1, p < 0.001). Our δ15N results indicate a change in terrestrial N availability in both watersheds prior to decreases in atmospheric N deposition, suggesting that decreased atmospheric N deposition was not the sole driver of
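    The watershed-scale trend statistics quoted above (slope, R, p) are of the kind produced by an ordinary least-squares fit; the sketch below shows such a fit on an invented annual δ15N series and is not the authors' analysis.

        import numpy as np
        from scipy.stats import linregress

        years = np.arange(1980, 2011)
        # invented tree-ring delta-15N series with a weak negative trend plus noise
        d15n = -0.02 * (years - 1980) + np.random.default_rng(1).normal(0.0, 0.3, years.size)

        fit = linregress(years, d15n)
        print(f"slope = {fit.slope:.4f} per yr, R = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")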

  1. Linear and Nonlinear Finite Elements.

    Science.gov (United States)

    1983-12-01

    Metzler. Conjugate gradient solution of a finite element elastic problem with high Poisson ratio without scaling and once with the global stiffness matrix K... nonzero c, that makes u(0) = 1. According to the linear, small-deflection theory of the membrane, the central displacement given to the membrane is not... a theory is possible based on approximations of the (1 - y'^2) terms, eq. (6), that change eq. (5) to an expression for V...

  2. Land-Use Scenarios: National-Scale Housing-Density Scenarios Consistent with Climate Change Storylines (Final Report)

    Science.gov (United States)

    EPA announced the availability of the final report, Land-Use Scenarios: National-Scale Housing-Density Scenarios Consistent with Climate Change Storylines. This report describes the scenarios and models used to generate national-scale housing density scenarios for the con...

  3. Regional impacts of climate change and atmospheric CO2 on future ocean carbon uptake: A multi-model linear feedback analysis

    OpenAIRE

    Roy Tilla; Bopp Laurent; Gehlen Marion; Schneider Birgitt; Cadule Patricia; Frölicher Thomas; Segschneider Jochen; Tijputra Jerry; Heinze Christoph; Joos Fortunat

    2011-01-01

    The increase in atmospheric CO2 over this century depends on the evolution of the oceanic air–sea CO2 uptake which will be driven by the combined response to rising atmospheric CO2 itself and climate change. Here the future oceanic CO2 uptake is simulated using an ensemble of coupled climate–carbon cycle models. The models are driven by CO2 emissions from historical data and the Special Report on Emissions Scenarios (SRES) A2 high emission scenario. A linear feedback analysis successfully sep...

  4. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matchin...

  5. A framework for the quantitative assessment of climate change impacts on water-related activities at the basin scale

    OpenAIRE

    Anghileri, D.; Pianosi, F.; Soncini-Sessa, R.

    2011-01-01

    While quantitative assessment of the climate change impact on hydrology at the basin scale is widely addressed in the literature, extension of quantitative analysis to impacts on the ecological, economic and social spheres is still limited, although it is well recognized as a key issue to support water resource planning and promote public participation. In this paper we propose a framework for assessing climate change impact on water-related activities at the basin scale. The specific features of our ...

  6. When the globe is your classroom: teaching and learning about large-scale environmental change online

    Science.gov (United States)

    Howard, E. A.; Coleman, K. J.; Barford, C. L.; Kucharik, C.; Foley, J. A.

    2005-12-01

    Understanding environmental problems that cross physical and disciplinary boundaries requires a more holistic view of the world - a "systems" approach. Yet it is a challenge for many learners to start thinking this way, particularly when the problems are large in scale and not easily visible. We will describe our online university course, "Humans and the Changing Biosphere," which takes a whole-systems perspective for teaching regional to global-scale environmental science concepts, including climate, hydrology, ecology, and human demographics. We will share our syllabus and learning objectives and summarize our efforts to incorporate "best" practices for online teaching. We will describe challenges we have faced, and our efforts to reach different learner types. Our goals for this presentation are: (1) to communicate how a systems approach ties together environmental sciences (including climate, hydrology, ecology, biogeochemistry, and demography) that are often taught as separate disciplines; (2) to generate discussion about challenges of teaching large-scale environmental processes; (3) to share our experiences in teaching these topics online; (4) to receive ideas and feedback on future teaching strategies. We will explain why we developed this course online, and share our experiences about benefits and challenges of teaching over the web - including some suggestions about how to use technology to supplement face-to-face learning experiences (and vice versa). We will summarize assessment data about what students learned during the course, and discuss key misconceptions and barriers to learning. We will highlight the role of an online discussion board in creating classroom community, identifying misconceptions, and engaging different types of learners.

  7. Influence of climate variability versus change at multi-decadal time scales on hydrological extremes

    Science.gov (United States)

    Willems, Patrick

    2014-05-01

    Recent studies have shown that rainfall and hydrological extremes do not randomly occur in time, but are subject to multidecadal oscillations. In addition to these oscillations, there are temporal trends due to climate change. Design statistics, such as intensity-duration-frequency (IDF) for extreme rainfall or flow-duration-frequency (QDF) relationships, are affected by both types of temporal changes (short term and long term). This presentation discusses these changes, how they influence water engineering design and decision making, and how this influence can be assessed and taken into account in practice. The multidecadal oscillations in rainfall and hydrological extremes were studied based on a technique for the identification and analysis of changes in extreme quantiles. The statistical significance of the oscillations was evaluated by means of a non-parametric bootstrapping method. Oscillations in large scale atmospheric circulation were identified as the main drivers for the temporal oscillations in rainfall and hydrological extremes. They also explain why spatial phase shifts (e.g. north-south variations in Europe) exist between the oscillation highs and lows. Next to the multidecadal climate oscillations, several stations show trends during the most recent decades, which may be attributed to climate change as a result of anthropogenic global warming. Such attribution to anthropogenic global warming is, however, uncertain. It can be done based on simulation results with climate models, but it is shown that the climate model results are too uncertain to enable a clear attribution. Water engineering design statistics, such as extreme rainfall IDF or peak or low flow QDF statistics, obviously are influenced by these temporal variations (oscillations, trends). It is shown in the paper, based on the Brussels 10-minutes rainfall data, that rainfall design values may be about 20% biased or different when based on short rainfall series of 10 to 15 years length, and
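
    As a rough illustration of testing whether windowed extreme quantiles oscillate beyond what chance allows, the sketch below compares sliding-window quantile factors against a non-parametric bootstrap band built by shuffling years; the Gumbel-distributed annual maxima, window length and quantile level are assumptions, not the Brussels rainfall record or the paper's exact perturbation-factor method.

      # Minimal sketch: sliding-window anomaly of an extreme quantile with a
      # non-parametric bootstrap reference band (synthetic annual maxima).
      import numpy as np

      rng = np.random.default_rng(1)
      n_years, window, q, n_boot = 100, 15, 0.95, 1000
      annual_max = rng.gumbel(20.0, 5.0, n_years)      # hypothetical annual rainfall maxima

      def window_quantiles(x):
          return np.array([np.quantile(x[i:i + window], q)
                           for i in range(len(x) - window + 1)])

      ref = np.quantile(annual_max, q)
      obs = window_quantiles(annual_max) / ref         # windowed quantile factor

      # Bootstrap under "no oscillation": shuffle years and recompute the windows
      boot = np.array([window_quantiles(rng.permutation(annual_max))
                       for _ in range(n_boot)]) / ref
      lo, hi = np.quantile(boot, [0.025, 0.975], axis=0)

      outside = (obs < lo) | (obs > hi)
      print(f"{outside.mean():.0%} of windows fall outside the 95% bootstrap band")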

  8. Changes in dental care access upon health care benefit expansion to include scaling.

    Science.gov (United States)

    Park, Hee-Jung; Lee, Jun Hyup; Park, Sujin; Kim, Tae-Il

    2016-12-01

    This study aimed to evaluate the effects of a policy change to expand Korean National Health Insurance (KNHI) benefit coverage to include scaling on access to dental care at the national level. A nationally representative sample of 12,794 adults aged 20 to 64 years from the Korea National Health and Nutritional Examination Survey (2010-2014) was analyzed. To examine the effect of the policy on the outcomes of interest (unmet dental care needs and preventive dental care utilization in the past year), an estimates-based probit model was used, incorporating marginal effects with a complex sampling structure. The effect of the policy on individuals depending on their income and education level was also assessed. Adjusting for potential covariates, the probability of having unmet needs for dental care decreased by 6.1% and preventive dental care utilization increased by 14% in the post-policy period compared to the pre-policy period (2010, 2012). High income and higher education levels were associated with fewer unmet dental care needs and more preventive dental visits. The expansion of coverage to include scaling was significantly associated with decreased unmet dental care needs and increased preventive dental care utilization. However, the policy disproportionately benefited certain groups, in contrast with the objective of the policy to benefit all participants in the KNHI system.
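
    A probit model with marginal effects, as described above, can be sketched as follows on synthetic survey records; the covariates and effect sizes are invented, and the complex sampling weights used in the study are not reproduced.

      # Minimal sketch: probit model of an outcome (e.g., unmet dental care need)
      # on a post-policy indicator plus covariates, with average marginal effects.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 5000
      post = rng.integers(0, 2, n)             # pre- vs post-policy period
      income = rng.normal(0.0, 1.0, n)         # standardized income (illustrative covariate)
      educ = rng.integers(0, 3, n)             # education level 0/1/2 (illustrative)

      # Hypothetical data-generating process: the policy lowers the latent propensity
      latent = 0.2 - 0.15 * post - 0.3 * income - 0.1 * educ + rng.normal(size=n)
      unmet = (latent > 0).astype(int)

      X = sm.add_constant(np.column_stack([post, income, educ]))
      res = sm.Probit(unmet, X).fit(disp=False)
      print(res.get_margeff().summary())       # average marginal effects, e.g. for 'post'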

  9. Database support for adaptation to climate change: An assessment of web-based portals across scales.

    Science.gov (United States)

    Sanderson, Hans; Hilden, Mikael; Russel, Duncan; Dessai, Suraje

    2016-10-01

    The widely recognized increase in greenhouse gas emissions is necessitating adaptation to a changing climate, and policies are being developed and implemented worldwide, across sectors, and between government scales globally. The aim of this article is to reflect on one of the major challenges: facilitating and sharing information on the next adaptation practices. Web portals (i.e., web sites) for disseminating information are important tools in meeting this challenge, and therefore, we assessed the characteristics of select major portals across multiple scales. We found that there is a rather limited number of case studies available in the portals-between 900 and 1000 in total-with 95 that include cost information and 195 that include the participation of stakeholders globally. Portals are rarely cited by researchers, suggesting a suboptimal connection between the practical, policy-related, and scientific development of adaptation. The government portals often lack links on search results between US and European Union (EU) web sites, for example. With significant investments and policy development emerging in both the United States and the European Union, there is great potential to share information via portals. Moreover, there is the possibility of better connecting the practical adaptation experience from bottom-up projects to the science of adaptation. Integr Environ Assess Manag 2016;12:627-631. © 2016 SETAC.

  10. Prewhitening of hydroclimatic time series? Implications for inferred change and variability across time scales

    Science.gov (United States)

    Razavi, Saman; Vogel, Richard

    2018-02-01

    Prewhitening, the process of eliminating or reducing short-term stochastic persistence to enable detection of deterministic change, has been extensively applied to time series analysis of a range of geophysical variables. Despite the controversy around its utility, methodologies for prewhitening time series continue to be a critical feature of a variety of analyses including: trend detection of hydroclimatic variables and reconstruction of climate and/or hydrology through proxy records such as tree rings. With a focus on the latter, this paper presents a generalized approach to exploring the impact of a wide range of stochastic structures of short- and long-term persistence on the variability of hydroclimatic time series. Through this approach, we examine the impact of prewhitening on the inferred variability of time series across time scales. We document how a focus on prewhitened, residual time series can be misleading, as it can drastically distort (or remove) the structure of variability across time scales. Through examples with actual data, we show how such loss of information in prewhitened time series of tree rings (so-called "residual chronologies") can lead to the underestimation of extreme conditions in climate and hydrology, particularly droughts, reconstructed for centuries preceding the historical period.
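
    The prewhitening operation discussed above is, in its classical form, removal of lag-1 (AR(1)) persistence from a series. A minimal sketch on a synthetic chronology, showing how the residual series loses both persistence and variance:

      # Minimal sketch: classical AR(1) pre-whitening of a series (the operation the
      # paper cautions about), shown on a synthetic "chronology".
      import numpy as np

      rng = np.random.default_rng(3)
      n, phi = 500, 0.6
      eps = rng.normal(size=n)
      x = np.empty(n)
      x[0] = eps[0]
      for t in range(1, n):                    # short-term persistence
          x[t] = phi * x[t - 1] + eps[t]

      r1 = np.corrcoef(x[:-1], x[1:])[0, 1]    # lag-1 autocorrelation estimate
      x_pw = x[1:] - r1 * x[:-1]               # pre-whitened ("residual") series

      print(f"lag-1 autocorrelation: raw = {r1:.2f}, "
            f"pre-whitened = {np.corrcoef(x_pw[:-1], x_pw[1:])[0, 1]:.2f}")
      print(f"variance: raw = {x.var():.2f}, pre-whitened = {x_pw.var():.2f}")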

  11. Large-scale conformational changes of Trypanosoma cruzi proline racemase predicted by accelerated molecular dynamics simulation.

    Directory of Open Access Journals (Sweden)

    César Augusto F de Oliveira

    2011-10-01

    Full Text Available Chagas' disease, caused by the protozoan parasite Trypanosoma cruzi (T. cruzi, is a life-threatening illness affecting 11-18 million people. Currently available treatments are limited, with unacceptable efficacy and safety profiles. Recent studies have revealed an essential T. cruzi proline racemase enzyme (TcPR as an attractive candidate for improved chemotherapeutic intervention. Conformational changes associated with substrate binding to TcPR are believed to expose critical residues that elicit a host mitogenic B-cell response, a process contributing to parasite persistence and immune system evasion. Characterization of the conformational states of TcPR requires access to long-time-scale motions that are currently inaccessible by standard molecular dynamics simulations. Here we describe advanced accelerated molecular dynamics that extend the effective simulation time and capture large-scale motions of functional relevance. Conservation and fragment mapping analyses identified potential conformational epitopes located in the vicinity of newly identified transient binding pockets. The newly identified open TcPR conformations revealed by this study along with knowledge of the closed to open interconversion mechanism advances our understanding of TcPR function. The results and the strategy adopted in this work constitute an important step toward the rationalization of the molecular basis behind the mitogenic B-cell response of TcPR and provide new insights for future structure-based drug discovery.

  12. Modification of cementitious binder characteristics following a change in manufacturing scale

    International Nuclear Information System (INIS)

    Coninck, P. de; Ferre, B.; Moinard, M.; Tronche, E.

    2015-01-01

    CEA is developing conditioning processes for the disposal of legacy nuclear waste. One of the waste materials is magnesium cladding removed from fuel elements irradiated in nuclear reactors. The final specifications that must be met by the packages mainly include mechanical strength, cracking, waste immobilization, and H2 release. A matrix material has been selected that complies with the requirements. This material is a geo-polymer mortar. The purpose of this study was to measure the impact of a change in manufacturing scale on the matrix material characteristics, with the objective of industrializing a solid magnesium waste retrieval process. The process parameters tested were different production volumes (0.7, 210 and 1000 liter packages) and process temperatures (10, 22 and 40 °C). Three types of mixers were used to scale up the production volume. The results show that the process temperature has a significant impact on the viscosity, workability time and temperature of the matrix. The size of the mixers did not significantly influence the material characteristics.

  13. Along the Rainfall-Runoff Chain: From Scaling of Greatest Point Rainfall to Global Change Attribution

    Science.gov (United States)

    Fraedrich, K.

    2014-12-01

    Processes along the continental rainfall-runoff chain cover a wide range of time and space scales which are presented here combining observations (ranging from minutes to decades) and minimalist concepts. (i) Rainfall, which can be simulated by a censored first-order autoregressive process (vertical moisture fluxes), exhibits 1/f-spectra if presented as binary events (tropics), while extrema worldwide increase with duration according to Jennings' scaling law. (ii) Runoff volatility (Yangtze) shows data collapse which, linked to an intra-annual 1/f-spectrum, is represented by a single function not unlike physical systems at criticality, and the short and long return times of extremes are Weibull-distributed. Atmospheric and soil moisture variabilities are also discussed. (iii) Soil moisture (in a bucket), whose variability is interpreted by a biased coinflip Ansatz for rainfall events, adds an equation of state to the energy and water flux balances comprising Budyko's framework for quasi-stationary watershed analysis. Eco-hydrologic state space presentations in terms of surface flux ratios of energy excess (loss by sensible heat over supply by net radiation) versus water excess (loss by discharge over gain by precipitation) allow attributions of state change to external (or climate) and internal (or anthropogenic) causes. Including the vegetation-greenness index (NDVI) as an active tracer extends the eco-hydrologic state space analysis to supplement the common geographical presentations. Two examples demonstrate the approach combining ERA and MODIS data sets: (a) global geobotanic classification by combining first and second moments of the dryness ratio (net radiation over precipitation) and (b) regional attributions (Tibetan Plateau) of vegetation changes.
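
    The censored first-order autoregressive rainfall model mentioned in (i) can be sketched as a latent AR(1) process clipped at zero; the parameters below are illustrative, not fitted values from the presentation.

      # Minimal sketch: a censored first-order autoregressive process as a toy
      # rainfall generator (parameters are illustrative assumptions).
      import numpy as np

      rng = np.random.default_rng(4)
      n, phi, sigma = 10000, 0.8, 1.0

      z = np.empty(n)                          # latent AR(1) "moisture flux"
      z[0] = rng.normal(scale=sigma)
      for t in range(1, n):
          z[t] = phi * z[t - 1] + rng.normal(scale=sigma)

      rain = np.clip(z, 0.0, None)             # censoring: negative flux -> no rain
      wet = (rain > 0).astype(int)             # binary event series

      print(f"wet fraction = {wet.mean():.2f}, "
            f"mean wet-step amount = {rain[rain > 0].mean():.2f}")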

  14. Deriving Scaling Factors Using a Global Hydrological Model to Restore GRACE Total Water Storage Changes for China's Yangtze River Basin

    Science.gov (United States)

    Long, Di; Yang, Yuting; Yoshihide, Wada; Hong, Yang; Liang, Wei; Chen, Yaning; Yong, Bin; Hou, Aizhong; Wei, Jiangfeng; Chen, Lu

    2015-01-01

    This study used a global hydrological model (GHM), PCR-GLOBWB, which simulates surface water storage changes, natural and human-induced groundwater storage changes, and the interactions between surface water and subsurface water, to generate scaling factors by mimicking the low-pass filtering of GRACE signals. Signal losses in GRACE data were subsequently restored by the scaling factors from PCR-GLOBWB. Results indicate greater spatial heterogeneity in scaling factors from PCR-GLOBWB and CLM4.0 than in those from GLDAS-1 Noah, due to the more comprehensive simulation of surface and subsurface water storage changes in PCR-GLOBWB and CLM4.0. Filtered GRACE total water storage (TWS) changes with PCR-GLOBWB scaling factors applied show closer agreement with water budget estimates of TWS changes than those with scaling factors from other land surface models (LSMs) in China's Yangtze River basin. Results of this study provide a further understanding of the behavior of scaling factors from different LSMs or GHMs over hydrologically complex basins, and could be valuable in providing more accurate TWS changes for hydrological applications (e.g., monitoring drought and groundwater storage depletion) over regions where human-induced interactions between surface water and subsurface water are intensive.
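
    A scaling factor of the kind described above is typically the least-squares coefficient that restores filtered model storage anomalies to their unfiltered values, cell by cell. The sketch below uses a crude temporal running mean in place of the actual GRACE filter chain and synthetic fields in place of PCR-GLOBWB output.

      # Minimal sketch: grid-cell scaling factors derived by mimicking GRACE
      # filtering on model TWS anomalies (all fields synthetic).
      import numpy as np

      rng = np.random.default_rng(5)
      n_months, ny, nx = 120, 10, 12
      tws_true = rng.normal(size=(n_months, ny, nx)).cumsum(axis=0) * 0.1

      def low_pass(field, k=5):
          """Crude running mean standing in for the GRACE filter chain."""
          kernel = np.ones(k) / k
          return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 0, field)

      tws_filt = low_pass(tws_true)

      # Least-squares scaling factor per cell: k = sum(true*filt) / sum(filt^2)
      scale = (tws_true * tws_filt).sum(axis=0) / (tws_filt ** 2).sum(axis=0)

      print("scaling factor range:", scale.min().round(2), "to", scale.max().round(2))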

  15. Linear algebra

    CERN Document Server

    Stoll, R R

    1968-01-01

    Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand

  16. Basic linear algebra

    CERN Document Server

    Blyth, T S

    2002-01-01

    Basic Linear Algebra is a text for first year students leading from concrete examples to abstract theorems, via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms which provides a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...

  17. Changes and Attribution of Extreme Precipitation in Climate Models: Subdaily and Daily Scales

    Science.gov (United States)

    Zhang, W.; Villarini, G.; Scoccimarro, E.; Vecchi, G. A.

    2017-12-01

    Extreme precipitation events are responsible for numerous hazards, including flooding, soil erosion, and landslides. Because of their significant socio-economic impacts, the attribution and projection of these events are of crucial importance to improve our response, mitigation and adaptation strategies. Here we present results from our ongoing work. In terms of attribution, we use idealized experiments [pre-industrial control experiment (PI) and 1% per year increase (1%CO2) in atmospheric CO2] from ten general circulation models produced under the Coupled Model Intercomparison Project Phase 5 (CMIP5) and the fraction of attributable risk to examine the CO2 effects on extreme precipitation at the sub-daily and daily scales. We find that the increased CO2 concentration substantially increases the odds of the occurrence of sub-daily precipitation extremes compared to the daily scale in most areas of the world, with the exception of some regions in the sub-tropics, likely in relation to the subsidence of the Hadley Cell. These results point to the large role that atmospheric CO2 plays in extreme precipitation under an idealized framework. Furthermore, we investigate the changes in extreme precipitation events with the Community Earth System Model (CESM) climate experiments using the scenarios consistent with the 1.5°C and 2°C temperature targets. We find that the frequency of annual extreme precipitation at a global scale increases in both 1.5°C and 2°C scenarios until around 2070, after which the magnitudes of the trend become much weaker or even negative. Overall, the frequency of global annual extreme precipitation is similar between 1.5°C and 2°C for the period 2006-2035, and the changes in extreme precipitation in individual seasons are consistent with those for the entire year. The frequency of extreme precipitation in the 2°C experiments is higher than for the 1.5°C experiment after the late 2030s, particularly for the period 2071-2100.
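
    The fraction of attributable risk used above is FAR = 1 - P0/P1, with P0 and P1 the exceedance probabilities of a fixed extreme threshold in the control and forced experiments. A minimal sketch on synthetic daily precipitation (the gamma-distributed amounts are an assumption, not CMIP5 output):

      # Minimal sketch: fraction of attributable risk for exceedance of a fixed
      # precipitation threshold in a control vs. a forced experiment.
      import numpy as np

      rng = np.random.default_rng(6)
      pi_ctrl = rng.gamma(shape=0.8, scale=5.0, size=50000)   # "pre-industrial" daily precip
      forced = rng.gamma(shape=0.8, scale=6.0, size=50000)    # "1%CO2" daily precip (wetter tail)

      threshold = np.quantile(pi_ctrl, 0.99)                  # extreme event defined from control
      p0 = (pi_ctrl > threshold).mean()
      p1 = (forced > threshold).mean()
      far = 1.0 - p0 / p1

      print(f"P0 = {p0:.4f}, P1 = {p1:.4f}, FAR = {far:.2f}")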

  18. Distinguishing globally-driven changes from regional- and local-scale impacts: The case for long-term and broad-scale studies of recovery from pollution.

    Science.gov (United States)

    Hawkins, S J; Evans, A J; Mieszkowska, N; Adams, L C; Bray, S; Burrows, M T; Firth, L B; Genner, M J; Leung, K M Y; Moore, P J; Pack, K; Schuster, H; Sims, D W; Whittington, M; Southward, E C

    2017-11-30

    Marine ecosystems are subject to anthropogenic change at global, regional and local scales. Global drivers interact with regional- and local-scale impacts of both a chronic and acute nature. Natural fluctuations and those driven by climate change need to be understood to diagnose local- and regional-scale impacts, and to inform assessments of recovery. Three case studies are used to illustrate the need for long-term studies: (i) separation of the influence of fishing pressure from climate change on bottom fish in the English Channel; (ii) recovery of rocky shore assemblages from the Torrey Canyon oil spill in the southwest of England; (iii) interaction of climate change and chronic Tributyltin pollution affecting recovery of rocky shore populations following the Torrey Canyon oil spill. We emphasize that "baselines" or "reference states" are better viewed as envelopes that are dependent on the time window of observation. Recommendations are made for adaptive management in a rapidly changing world. Copyright © 2017. Published by Elsevier Ltd.

  19. Records of millennial-scale climate change from the Great Basin of the Western United States

    Science.gov (United States)

    Benson, Larry

    High-resolution (decadal) records of climate change from the Owens, Mono, and Pyramid Lake basins of California and Nevada indicate that millennial-scale oscillations in climate of the Great Basin occurred between 52.6 and 9.2 14C ka. Climate records from the Owens and Pyramid Lake basins indicate that most, but not all, glacier advances (stades) between 52.6 and ˜15.0 14C ka occurred during relatively dry times. During the last alpine glacial period (˜60.0 to ˜14.0 14C ka), stadial/interstadial oscillations were recorded in Owens and Pyramid Lake sediments by the negative response of phytoplankton productivity to the influx of glacially derived silicates. During glacier advances, rock flour diluted the TOC fraction of lake sediments and introduction of glacially derived suspended sediment also increased the turbidity of lake water, decreasing light penetration and photosynthetic production of organic carbon. It is not possible to objectively correlate peaks in the Owens and Pyramid Lake TOC records (interstades) with Dansgaard-Oeschger interstades in the GISP2 ice-core δ18O record given uncertainties in age control and differences in the shapes of the OL90, PLC92 and GISP2 records. In the North Atlantic region, some climate records have clearly defined variability/cyclicity with periodicities of 10² to 10³ yr; these records are correlatable over several thousand km. In the Great Basin, climate proxies also have clearly defined variability with similar time constants, but the distance over which this variability can be correlated remains unknown. Globally, there may be minimal spatial scales (domains) within which climate varies coherently on centennial and millennial scales, but it is likely that the sizes of these domains vary with geographic setting and time. A more comprehensive understanding of the mechanisms of climate forcing and the physical linkages between climate forcing and system response is needed in order to predict the spatial scale(s) over which

  20. Changing practice patterns of Gamma Knife versus linear accelerator-based stereotactic radiosurgery for brain metastases in the US.

    Science.gov (United States)

    Park, Henry S; Wang, Elyn H; Rutter, Charles E; Corso, Christopher D; Chiang, Veronica L; Yu, James B

    2016-04-01

    Single-fraction stereotactic radiosurgery (SRS) is a crucial component in the management of limited brain metastases from non-small cell lung cancer (NSCLC). Intracranial SRS has traditionally been delivered using a frame-based Gamma Knife (GK) platform, but stereotactic modifications to the linear accelerator (LINAC) have made an alternative approach possible. In the absence of definitive prospective trials comparing the efficacy and toxicities of treatment between the 2 techniques, nonclinical factors (such as technology accessibility, costs, and efficiency) may play a larger role in determining which radiosurgery system a facility may choose to install. To the authors' knowledge, this study is the first to investigate national patterns of GK SRS versus LINAC SRS use and to determine which factors may be associated with the adoption of these radiosurgery systems. The National Cancer Data Base was used to identify patients > 18 years old with NSCLC who were treated with single-fraction SRS to the brain between 2003 and 2011. Patients who received "SRS not otherwise specified" or who did not receive a radiotherapy dose within the range of 12-24 Gy were excluded to reduce the potential for misclassification. The chi-square test, t-test, and multivariable logistic regression analysis were used to compare potential demographic, clinicopathologic, and health care system predictors of GK versus LINAC SRS use, when appropriate. This study included 1780 patients, among whom 1371 (77.0%) received GK SRS and 409 (23.0%) underwent LINAC SRS. Over time, the proportion of patients undergoing LINAC SRS steadily increased, from 3.2% in 2003 to 30.8% in 2011 (p < 0.001). LINAC SRS was adopted more rapidly by community versus academic facilities (overall 29.2% vs 17.2%, p < 0.001). On multivariable analysis, 4 independent predictors of increased LINAC SRS use emerged, including year of diagnosis in 2008-2011 versus 2003-2007 (adjusted OR [AOR] 2.04, 95% CI 1.52-2.73, p < 0
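
    Adjusted odds ratios such as the AOR of 2.04 quoted above come from a multivariable logistic regression of radiosurgery platform on patient and facility factors. The sketch below fits such a model to synthetic records with invented covariates and effect sizes; it is not the National Cancer Data Base analysis.

      # Minimal sketch: multivariable logistic regression for LINAC (vs. GK) SRS
      # use, reporting adjusted odds ratios with 95% CIs (synthetic records).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 1780
      late_period = rng.integers(0, 2, n)      # diagnosis 2008-2011 vs 2003-2007
      community = rng.integers(0, 2, n)        # community vs academic facility

      # Hypothetical data-generating process favouring LINAC in later years / community centres
      lin_pred = -1.5 + 0.7 * late_period + 0.6 * community
      linac = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin_pred))).astype(int)

      X = sm.add_constant(pd.DataFrame({"late_period": late_period, "community": community}))
      res = sm.Logit(linac, X).fit(disp=False)

      aor = np.exp(res.params)                 # adjusted odds ratios
      ci = np.exp(res.conf_int())              # 95% confidence limits
      print(pd.DataFrame({"AOR": aor, "2.5%": ci[0], "97.5%": ci[1]}).round(2))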

  1. Shape shifting predicts ontogenetic changes in metabolic scaling in diverse aquatic invertebrates

    DEFF Research Database (Denmark)

    Glazier, Douglas S.; Hirst, Andrew G.; Atkinson, D.

    2016-01-01

    in metabolic scaling that deviate from 3/4-power scaling predicted by general models. Here, we show that in diverse aquatic invertebrates, ontogenetic shifts in the scaling of routine metabolic rate from near isometry (bR = scaling exponent approx. 1) to negative allometry (bR
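
    The scaling exponent bR referred to above is conventionally estimated as the slope of log metabolic rate against log body mass. A minimal sketch on a synthetic ontogenetic series (the mass range, exponent and noise are assumptions, not the study's data):

      # Minimal sketch: estimating the metabolic scaling exponent b_R from a
      # log-log fit of metabolic rate against body mass.
      import numpy as np

      rng = np.random.default_rng(8)
      mass = np.logspace(-2, 1, 60)            # body mass through ontogeny (arbitrary units)
      b_true = 0.85
      rate = 2.0 * mass ** b_true * np.exp(rng.normal(0, 0.1, mass.size))

      # R = a * M^b  =>  log R = log a + b log M
      b_hat, log_a = np.polyfit(np.log(mass), np.log(rate), 1)
      print(f"estimated scaling exponent b_R = {b_hat:.2f} (isometry: 1, 3/4-power: 0.75)")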

  2. Linear programming

    CERN Document Server

    Solow, Daniel

    2014-01-01

    This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.

  3. Linear algebra

    CERN Document Server

    Liesen, Jörg

    2015-01-01

    This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...

  4. Linear algebra

    CERN Document Server

    Berberian, Sterling K

    2014-01-01

    Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.

  5. Linear Models

    CERN Document Server

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  6. LINEAR ACCELERATOR

    Science.gov (United States)

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  7. Large Scale Analyses and Visualization of Adaptive Amino Acid Changes Projects.

    Science.gov (United States)

    Vázquez, Noé; Vieira, Cristina P; Amorim, Bárbara S R; Torres, André; López-Fernández, Hugo; Fdez-Riverola, Florentino; Sousa, José L R; Reboiro-Jato, Miguel; Vieira, Jorge

    2018-03-01

    When changes at a few amino acid sites are the target of selection, adaptive amino acid changes in protein sequences can be identified using maximum-likelihood methods based on models of codon substitution (such as codeml). Although such methods have been employed numerous times using a variety of different organisms, the time needed to collect the data and prepare the input files means that usually only tens or hundreds of coding regions are analyzed. Nevertheless, the recent availability of flexible and easy-to-use computer applications that collect relevant data (such as BDBM) and infer positively selected amino acid sites (such as ADOPS) means that the entire process is easier and quicker than before. However, the lack of a batch option in ADOPS, here reported, still precludes the analysis of hundreds or thousands of sequence files. Given the interest and possibility of running such large-scale projects, we have also developed a database where ADOPS projects can be stored. Therefore, this study also presents the B+ database, which is both a data repository and a convenient interface that looks at the information contained in ADOPS projects without the need to download and unzip the corresponding ADOPS project file. The ADOPS projects available at B+ can also be downloaded, unzipped, and opened using the ADOPS graphical interface. The availability of such a database ensures results repeatability, promotes data reuse with significant savings on the time needed for preparing datasets, and effortlessly allows further exploration of the data contained in ADOPS projects.

  8. An integrated model to simulate sown area changes for major crops at a global scale

    Institute of Scientific and Technical Information of China (English)

    SHIBASAKI; Ryosuke

    2008-01-01

    Dynamics of land use systems have attracted much attention from scientists around the world due to their ecological and socio-economic implications. An integrated model to dynamically simulate future changes in sown areas of four major crops (rice, maize, wheat and soybean) on a global scale is presented. To do so, a crop choice model was developed on the basis of the Multinomial Logit (Logit) model to model land users' decisions on crop choices among a set of available alternatives using a crop utility function. A GIS-based Environmental Policy Integrated Climate (EPIC) model was adopted to simulate the crop yields under a given geophysical environment and farming management conditions, while the International Food Policy and Agricultural Simulation (IFPSIM) model was utilized to estimate crop prices in the international market. The crop choice model was linked with the GIS-based EPIC model and the IFPSIM model through data exchange. This integrated model was then validated against the FAO statistical data for 2001-2003 and the Moderate Resolution Imaging Spectroradiometer (MODIS) global land cover product for 2001. Both validation approaches indicated the reliability of the model for addressing the dynamics in agricultural land use and its capability for long-term scenario analysis. Finally, the model application was designed to run over a time period of 30 years, taking the year 2000 as the baseline. The model outcomes can help understand and explain the causes, locations and consequences of land use changes, and provide support for land use planning and policy making.
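
    At the core of the crop choice model described above, a Multinomial Logit turns per-crop utilities into choice probabilities via a softmax. A minimal sketch with arbitrary illustrative utilities for one grid cell:

      # Minimal sketch: a multinomial-logit crop choice rule, turning per-crop
      # utilities (e.g., functions of expected yield and price) into probabilities.
      import numpy as np

      crops = ["rice", "maize", "wheat", "soybean"]

      def choice_probabilities(utilities):
          """Softmax of utilities = multinomial-logit choice probabilities."""
          u = np.asarray(utilities, dtype=float)
          e = np.exp(u - u.max())              # subtract max for numerical stability
          return e / e.sum()

      utilities = [1.2, 0.8, 1.0, 0.5]         # hypothetical utilities for one grid cell
      for crop, p in zip(crops, choice_probabilities(utilities)):
          print(f"{crop:8s} {p:.2f}")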

  9. Minimal detectable change of the Personal and Social Performance scale in individuals with schizophrenia.

    Science.gov (United States)

    Lee, Shu-Chun; Tang, Shih-Fen; Lu, Wen-Shian; Huang, Sheau-Ling; Deng, Nai-Yu; Lue, Wen-Chyn; Hsieh, Ching-Lin

    2016-12-30

    The minimal detectable change (MDC) of the Personal and Social Performance scale (PSP) has not yet been investigated, limiting its utility in data interpretation. The purpose of this study was to determine the MDCs of the PSP administered by the same rater or different raters in individuals with schizophrenia. Participants with schizophrenia were recruited from two psychiatric community rehabilitation centers to complete the PSP assessments twice, 2 weeks apart, by the same rater or 2 different raters. MDC values were calculated from the coefficients of intra- and inter-rater reliability (i.e., intraclass correlation coefficients). Forty patients (mean age 36.9 years, SD 9.7) from one center participated in the intra-rater reliability study. Another 40 patients (mean age 44.3 years, SD 11.1) from the other center participated in the inter-rater study. The MDCs (MDC%) of the PSP were 10.7 (17.1%) for the same rater and 16.2 (24.1%) for different raters. The MDCs of the PSP appeared appropriate for clinical trials aiming to determine whether a real change in social functioning has occurred in people with schizophrenia. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
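
    MDC values of this kind are commonly computed as MDC95 = 1.96 × √2 × SEM with SEM = SD × √(1 − ICC), and MDC% as the MDC relative to the mean score. A minimal sketch with placeholder summary statistics (not the study's numbers):

      # Minimal sketch of the usual MDC calculation from a reliability coefficient:
      # SEM = SD * sqrt(1 - ICC), MDC95 = 1.96 * sqrt(2) * SEM, MDC% = MDC / mean.
      import math

      def mdc95(sd, icc):
          sem = sd * math.sqrt(1.0 - icc)
          return 1.96 * math.sqrt(2.0) * sem

      sd, mean_score, icc_intra = 6.0, 63.0, 0.80   # hypothetical PSP summary statistics
      mdc = mdc95(sd, icc_intra)
      print(f"MDC95 = {mdc:.1f} points ({100 * mdc / mean_score:.1f}% of the mean score)")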

  10. Automated Topographic Change Detection via Dem Differencing at Large Scales Using The Arcticdem Database

    Science.gov (United States)

    Candela, S. G.; Howat, I.; Noh, M. J.; Porter, C. C.; Morin, P. J.

    2016-12-01

    In the last decade, high resolution satellite imagery has become an increasingly accessible tool for geoscientists to quantify changes in the Arctic land surface due to geophysical, ecological and anthropogenic processes. However, the trade-off between spatial coverage and spatial-temporal resolution has limited detailed, process-level change detection over large (i.e. continental) scales. The ArcticDEM project utilized over 300,000 Worldview image pairs to produce a nearly 100% coverage elevation model (above 60°N) offering the first polar, high-spatial-resolution (2-8 m by region) dataset, often with multiple repeats in areas of particular interest to geo-scientists. A dataset of this size (nearly 250 TB) offers endless new avenues of scientific inquiry, but quickly becomes unmanageable computationally and logistically for the computing resources available to the average scientist. Here we present TopoDiff, a framework for a generalized, automated workflow that requires minimal input from the end user about a study site, and utilizes cloud computing resources to provide a temporally sorted and differenced dataset, ready for geostatistical analysis. This hands-off approach allows the end user to focus on the science, without having to manage thousands of files or petabytes of data. At the same time, TopoDiff provides a consistent and accurate workflow for image sorting, selection, and co-registration, enabling cross-comparisons between research projects.
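
    The core operation of such a DEM-differencing workflow is subtracting two co-registered elevation grids and flagging change beyond the noise level. A minimal sketch on synthetic tiles; real ArcticDEM strips would be read with a raster library and co-registered first, and the noise level below is an assumption.

      # Minimal sketch: differencing two co-registered DEM tiles and summarizing
      # elevation change (synthetic arrays, not ArcticDEM data).
      import numpy as np

      rng = np.random.default_rng(11)
      shape = (500, 500)                                  # e.g. 2 m posting -> 1 km x 1 km tile
      dem_t1 = 100.0 + rng.normal(0.0, 0.3, shape)        # earlier acquisition (m)
      dem_t2 = dem_t1.copy()
      dem_t2[200:300, 200:300] -= 5.0                     # hypothetical 5 m surface lowering
      dem_t2 += rng.normal(0.0, 0.3, shape)               # independent acquisition noise

      dh = dem_t2 - dem_t1                                # elevation change map
      significant = np.abs(dh) > 3 * 0.3 * np.sqrt(2)     # crude per-pixel noise threshold

      print(f"mean dh = {dh.mean():.2f} m, "
            f"area with significant change = {significant.mean():.1%} of tile")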

  11. Complex terrain wind resource estimation with the wind-atlas method: Prediction errors using linearized and nonlinear CFD micro-scale models

    DEFF Research Database (Denmark)

    Troen, Ib; Bechmann, Andreas; Kelly, Mark C.

    2014-01-01

    Using the Wind Atlas methodology to predict the average wind speed at one location from measured climatological wind frequency distributions at another nearby location we analyse the relative prediction errors using a linearized flow model (IBZ) and a more physically correct fully non-linear 3D...... flow model (CFD) for a number of sites in very complex terrain (large terrain slopes). We first briefly describe the Wind Atlas methodology as implemented in WAsP and the specifics of the “classical” model setup and the new setup allowing the use of the CFD computation engine. We discuss some known...

  12. Relaxation time and impurity effects on linear and nonlinear refractive index changes in (In,Ga)N–GaN spherical QD

    Energy Technology Data Exchange (ETDEWEB)

    El Ghazi, Haddou, E-mail: hadghazi@gmail.com [LPS, Faculty of Science, Dhar El Mehrez, BP 1796 Fes-Atlas (Morocco); Special Mathematics, CPGE My Youssef, Rabat (Morocco); Jorio, Anouar [LPS, Faculty of Science, Dhar El Mehrez, BP 1796 Fes-Atlas (Morocco)

    2014-10-01

    By means of a combination of Quantum Genetic Algorithm and Hartree–Fock–Roothaan method, the changes in linear, third-order nonlinear and total refractive index associated with intra-conduction band transition are investigated with and without shallow-donor impurity in wurtzite (In,Ga)N–GaN spherical quantum dot. For both cases with and without impurity, the calculation is performed within the framework of single band effective-mass and parabolic band approximations. Impurity's position and relaxation time effects are investigated. It is found that the modulation of the refractive index changes, suitable for good performance optical modulators and various infra-red optical device applications can be easily obtained by tailoring the relaxation time and the position of the impurity.

  13. Relaxation time and impurity effects on linear and nonlinear refractive index changes in (In,Ga)N–GaN spherical QD

    International Nuclear Information System (INIS)

    El Ghazi, Haddou; Jorio, Anouar

    2014-01-01

    By means of a combination of Quantum Genetic Algorithm and Hartree–Fock–Roothaan method, the changes in linear, third-order nonlinear and total refractive index associated with intra-conduction band transition are investigated with and without shallow-donor impurity in wurtzite (In,Ga)N–GaN spherical quantum dot. For both cases with and without impurity, the calculation is performed within the framework of single band effective-mass and parabolic band approximations. Impurity's position and relaxation time effects are investigated. It is found that the modulation of the refractive index changes, suitable for good performance optical modulators and various infra-red optical device applications can be easily obtained by tailoring the relaxation time and the position of the impurity

  14. Multivariate analysis of DSC-XRD simultaneous measurement data: a study of multistage crystalline structure changes in a linear poly(ethylene imine) thin film.

    Science.gov (United States)

    Kakuda, Hiroyuki; Okada, Tetsuo; Otsuka, Makoto; Katsumoto, Yukiteru; Hasegawa, Takeshi

    2009-01-01

    A multivariate analytical technique has been applied to the analysis of simultaneous measurement data from differential scanning calorimetry (DSC) and X-ray diffraction (XRD) in order to study thermal changes in crystalline structure of a linear poly(ethylene imine) (LPEI) film. A large number of XRD patterns generated from the simultaneous measurements were subjected to an augmented alternative least-squares (ALS) regression analysis, and the XRD patterns were readily decomposed into chemically independent XRD patterns and their thermal profiles were also obtained at the same time. The decomposed XRD patterns and the profiles were useful in discussing the minute peaks in the DSC. The analytical results revealed the following changes of polymorphisms in detail: An LPEI film prepared by casting an aqueous solution was composed of sesquihydrate and hemihydrate crystals. The sesquihydrate one was lost at an early stage of heating, and the film changed into an amorphous state. Once the sesquihydrate was lost by heating, it was not recovered even when it was cooled back to room temperature. When the sample was heated again, structural changes were found between the hemihydrate and the amorphous components. In this manner, the simultaneous DSC-XRD measurements combined with ALS analysis proved to be powerful for obtaining a better understanding of the thermally induced changes of the crystalline structure in a polymer film.
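
    Alternating least squares of the kind applied above factorizes the stack of XRD patterns into component patterns and their thermal profiles. A minimal non-negative ALS sketch on a synthetic two-component data set (not the LPEI measurements):

      # Minimal sketch: alternating least squares (ALS) factorization of a stack of
      # XRD patterns into component patterns (S) and thermal profiles (C), D ~ C S.
      import numpy as np

      rng = np.random.default_rng(9)
      n_temp, n_angle, k = 80, 200, 2
      angles = np.linspace(0, 1, n_angle)

      # Two hypothetical "pure" diffraction patterns (Gaussian peaks)
      S_true = np.vstack([np.exp(-((angles - 0.3) / 0.02) ** 2),
                          np.exp(-((angles - 0.6) / 0.03) ** 2)])
      C_true = np.vstack([np.linspace(1, 0, n_temp),      # component lost on heating
                          np.linspace(0.2, 1, n_temp)]).T  # component that grows
      D = C_true @ S_true + rng.normal(0, 0.01, (n_temp, n_angle))

      # ALS: alternately solve D ~ C S for S and C, clipping negatives to zero
      C = np.abs(rng.normal(size=(n_temp, k)))
      for _ in range(200):
          S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
          C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)

      print("residual RMS:", np.sqrt(((D - C @ S) ** 2).mean()).round(4))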

  15. Consensuses and discrepancies of basin-scale ocean heat content changes in different ocean analyses

    Science.gov (United States)

    Wang, Gongjie; Cheng, Lijing; Abraham, John; Li, Chongyin

    2018-04-01

    Inconsistent global/basin ocean heat content (OHC) changes were found in different ocean subsurface temperature analyses, especially in recent studies related to the slowdown in global surface temperature rise. This finding challenges the reliability of the ocean subsurface temperature analyses and motivates a more comprehensive inter-comparison between the analyses. Here we compare the OHC changes in three ocean analyses (Ishii, EN4 and IAP) to investigate the uncertainty in OHC in four major ocean basins from decadal to multi-decadal scales. First, all products show an increase of OHC since 1970 in each ocean basin revealing a robust warming, although the warming rates are not identical. The geographical patterns, the key modes and the vertical structure of OHC changes are consistent among the three datasets, implying that the main OHC variabilities can be robustly represented. However, large discrepancies are found in the percentage of basinal ocean heating related to the global ocean, with the largest differences in the Pacific and Southern Ocean. Meanwhile, we find a large discrepancy of ocean heat storage in different layers, especially within 300-700 m in the Pacific and Southern Oceans. Furthermore, the near surface analysis of Ishii and IAP are consistent with sea surface temperature (SST) products, but EN4 is found to underestimate the long-term trend. Compared with ocean heat storage derived from the atmospheric budget equation, all products show consistent seasonal cycles of OHC in the upper 1500 m especially during 2008 to 2012. Overall, our analyses further the understanding of the observed OHC variations, and we recommend a careful quantification of errors in the ocean analyses.
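
    Basin OHC estimates of this kind integrate temperature anomalies over depth, OHC = ρ c_p ∫ T′ dz. A minimal sketch for a single synthetic 0-700 m column using nominal seawater constants:

      # Minimal sketch: ocean heat content anomaly of a layer from a temperature-
      # anomaly profile (synthetic profile, nominal seawater constants).
      import numpy as np

      rho, cp = 1025.0, 3985.0                 # kg m^-3, J kg^-1 K^-1 (nominal values)
      z = np.linspace(0.0, 700.0, 71)          # depth levels, 0-700 m
      t_anom = 0.5 * np.exp(-z / 200.0)        # hypothetical warming anomaly (K)

      # Trapezoidal integration over depth -> J m^-2 for this water column
      layer_mean = 0.5 * (t_anom[:-1] + t_anom[1:])
      ohc = rho * cp * np.sum(layer_mean * np.diff(z))

      print(f"0-700 m OHC anomaly = {ohc:.3e} J m^-2")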

  16. Non-climatic factors and long-term, continental-scale changes in seasonally frozen ground

    Science.gov (United States)

    Shiklomanov, Nikolay I.

    2012-03-01

    ). In their recent paper entitled 'An observational 71-year history of seasonally frozen ground changes in Eurasian high latitudes', Frauenfeld and Zhang (2011) provided detailed analysis of soil temperature data to assess 1930-2000 trends in seasonal freezing depth. The data were obtained from 387 Soviet non-permafrost meteorological stations. The authors performed systematic, quality-controlled, integrative analysis over the entire former Soviet Union domain. The long-term changes in depth of seasonal freezing were discussed in relation to such forcing variables as air temperature, degree days of freezing/thawing, snow depth and summer precipitation as well as modes of the North Atlantic Oscillation. The spatially average approach adopted for the study provides a generalized continental-scale trend. The study greatly improves, expands and extends previous 1956-90 analysis of the ground thermal regime over the Eurasian high latitudes (Frauenfeld et al 2004). Although the work of Frauenfeld and Zhang (2011) is the most comprehensive assessment of the continental-scale long-term trends in seasonal freezing available to date, more detailed analysis is needed to determine the effect of climate change on seasonally frozen ground. It should be noted that, in addition to the variables considered for analysis, other non-climatic factors affect the depth of freezing propagation. Unlike the surface, which is influenced by the climate directly, the ground even at shallow depth receives a climatic signal that is substantially modified by edaphic processes, contributing to highly localized thermal sensitivities of the ground to climatic forcing. Subsurface properties, soil moisture, and snow and vegetation covers influence the depth of freezing. Topography also plays an important role in establishing the ground thermal regime. It is an important determinant of the amount of heat received by the ground surface, affects the distribution of snow and vegetation, and influences the

  17. Scaling environmental change through the community-level: a trait-based response-and-effect framework for plants.

    NARCIS (Netherlands)

    Suding, K.N.; Lavorel, S.; Chapin III, F.S.; Cornelissen, J.H.C.; Diaz, S.; Garnier, E.; Goldberg, D.; Hooper, D.U.; Jackson, S.T.; Navas, M.-L.

    2008-01-01

    Predicting ecosystem responses to global change is a major challenge in ecology. A critical step in that challenge is to understand how changing environmental conditions influence processes across levels of ecological organization. While direct scaling from individual to ecosystem dynamics can lead

  18. Scaling environmental change through the community level: a trait-based response-and-effect framework for plants

    Science.gov (United States)

    Katharine N. Suding; Sandra Lavorel; F. Stuart Chapin; Johannes H.C. Cornelissen; Sandra Diaz; Eric Garnier; Deborah Goldberg; David U. Hooper; Stephen T. Jackson; Marie-Laure. Navas

    2008-01-01

    Predicting ecosystem responses to global change is a major challenge in ecology. A critical step in that challenge is to understand how changing environmental conditions influence processes across levels of ecological organization. While direct scaling from individual to ecosystem dynamics can lead to robust and mechanistic predictions, new approaches are needed to...

  19. Combining remote sensing and household level data for regional scale analysis of land cover change in the Brazilian Amazon

    NARCIS (Netherlands)

    de Souza Soler, L.; Verburg, P.H.

    2010-01-01

    Land cover change in the Brazilian Amazon depends on the spatial variability of political, socioeconomic and biophysical factors, as well as on the land use history and its actors. A regional scale analysis was made in Rondônia State to identify possible differences in land cover change connected to

  20. Tree growth and climate in the Pacific Northwest, North America: a broad-scale analysis of changing growth environments

    Science.gov (United States)

    Whitney L. Albright; David L. Peterson

    2013-01-01

    Climate change in the 21st century will affect tree growth in the Pacific Northwest region of North America, although complex climate–growth relationships make it difficult to identify how radial growth will respond across different species distributions. We used a novel method to examine potential growth responses to climate change at a broad geographical scale with a...

  1. Resilience to climate change in a cross-scale tourism governance context: a combined quantitative-qualitative network analysis

    Directory of Open Access Journals (Sweden)

    Tobias Luthe

    2016-03-01

    Full Text Available Social systems in mountain regions are exposed to a number of disturbances, such as climate change. Calls for conceptual and practical approaches on how to address climate change have been taken up in the literature. The resilience concept as a comprehensive theory-driven approach to address climate change has only recently increased in importance. Limited research has been undertaken concerning tourism and resilience from a network governance point of view. We analyze tourism supply chain networks with regard to resilience to climate change at the municipal governance scale of three Alpine villages. We compare these with a planned destination management organization (DMO) as a governance entity of the same three municipalities on the regional scale. Network measures are analyzed via a quantitative social network analysis (SNA) focusing on resilience from a tourism governance point of view. Results indicate higher resilience of the regional DMO because of a more flexible and diverse governance structure, more centralized steering of fast collective action, and improved innovative capacity, because of higher modularity and better core-periphery integration. Interpretations of quantitative results have been qualitatively validated by interviews and a workshop. We conclude that adaptation of tourism-dependent municipalities to gradual climate change should be dealt with at a regional governance scale and adaptation to sudden changes at a municipal scale. Overall, DMO building at a regional scale may enhance the resilience of tourism destinations, if the municipalities are well integrated.
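
    Network measures like those invoked above (centralized steering, modularity, core-periphery structure) can be computed with standard graph tooling. The sketch below compares degree centralization and modularity for two toy graphs standing in for a municipal-scale network and a regional DMO; the graphs are illustrative, not the surveyed supply-chain networks.

      # Minimal sketch: degree centralization and modularity for two toy governance
      # networks (illustrative graphs only).
      import networkx as nx
      from networkx.algorithms import community

      def degree_centralization(G):
          """Freeman degree centralization, normalized by the star-graph maximum."""
          n = G.number_of_nodes()
          degrees = [d for _, d in G.degree()]
          d_max = max(degrees)
          return sum(d_max - d for d in degrees) / ((n - 1) * (n - 2))

      def summarize(name, G):
          comms = community.greedy_modularity_communities(G)
          print(f"{name}: centralization = {degree_centralization(G):.2f}, "
                f"modularity = {community.modularity(G, comms):.2f}")

      municipal = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)  # looser village-level ties
      regional = nx.barbell_graph(10, 2)                                 # modular, core-linked structure

      summarize("municipal-scale network", municipal)
      summarize("regional DMO network", regional)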

  2. Climatic changes on orbital and sub-orbital time scale recorded by the Guliya ice core in Tibetan Plateau

    Institute of Scientific and Technical Information of China (English)

    姚檀栋; 徐柏青; 蒲健辰

    2001-01-01

    Based on ice core records from the Tibetan Plateau and Greenland, the features and possible causes of climatic changes on orbital and sub-orbital time scales are discussed. Orbital time scale climatic change recorded in ice cores from the Tibetan Plateau is typically ahead of that from polar regions, which indicates that climatic change in the Tibetan Plateau might have occurred earlier than in polar regions. The solar radiation change is a major factor that dominates climatic change on the orbital time scale. However, climatic events on the sub-orbital time scale occurred later in the Tibetan Plateau than in the Arctic region, indicating a different mechanism. For example, the Younger Dryas and Heinrich events took place earlier in the Greenland ice core record than in the Guliya ice core record. It is reasonable to propose the hypothesis that these climatic events were possibly affected by the Laurentide Ice Sheet. Therefore, the ice sheet is critically important to climatic change on sub-orbital time scales in some ice ages.

  3. Millennial-scale ocean current intensity changes off southernmost Chile and implications for Drake Passage throughflow

    Science.gov (United States)

    Lamy, F.; Arz, H. W.; Kilian, R.; Baeza Urrea, O.; Caniupan, M.; Kissel, C.; Lange, C.

    2012-04-01

    The Antarctic Circumpolar Current (ACC) plays an essential role in the thermohaline circulation and global climate. Today a large volume of ACC water passes through the Drake Passage, a major geographic constrain for the circumpolar flow. Satellite tracked surface drifters have shown that Subantarctic Surface water of the ACC is transported northeastward across the Southeast Pacific from ~53°S/100°W towards the Chilean coast at ~40°S/75°W where surface waters bifurcate and flow northward into the Peru Chile Current (PCC) finally reaching the Eastern Tropical Pacific, and southwards into the Cape Horn Current (CHC). The CHC thus transports a significant amount of northern ACC water towards the Drake Passage and reaches surface current velocities of up to 35 cm/s within a narrow belt of ~100-150 km width off the coast. Also at deeper water levels, an accelerated southward flow occurs along the continental slope off southernmost South America that likewise substantially contributes to the Drake Passage throughflow. Here we report on high resolution geochemical and grain-size records from core MD07-3128 (53°S; 1032 m water depth) which has been retrieved from the upper continental slope off the Pacific entrance of the Magellan Strait beneath the CHC. Magnetic grain-sizes and grain-size distributions of the terrigenous fraction reveal large amplitude changes between the Holocene and the last glacial, as well as millennial-scale variability (most pronounced during Marine Isotope Stage). Magnetic grain-sizes, silt/clay ratios, fine sand contents, sortable silt contents, and sortable silt mean grain-sizes are substantially higher during the Holocene suggesting strongly enhanced current activity. The high absolute values imply flow speeds larger than 25 cm/s as currently observed in the CHC surface current. Furthermore, winnowing processes through bottom current activity and changes in the availability of terrigenous material (ice-sheet extension and related supply of

  4. Effects of tectonics and large scale climatic changes on the evolutionary history of Hyalomma ticks.

    Science.gov (United States)

    Sands, Arthur F; Apanaskevich, Dmitry A; Matthee, Sonja; Horak, Ivan G; Harrison, Alan; Karim, Shahid; Mohammad, Mohammad K; Mumcuoglu, Kosta Y; Rajakaruna, Rupika S; Santos-Silva, Maria M; Matthee, Conrad A

    2017-09-01

    -Diva, we also propose that the closure of the Tethyan seaway allowed for the genus to first enter Africa approximately 17.73Mya. In concert, our data supports the notion that tectonic events and large scale global changes in the environment contributed significantly to produce the rich species diversity currently found in the genus Hyalomma. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Surface water quality in streams and rivers: introduction, scaling, and climate change: Chapter 5

    Science.gov (United States)

    Loperfido, John

    2013-01-01

    A variety of competing and complementary needs such as ecological health, human consumption, transportation, recreation, and economic value make management and protection of water resources in riverine environments essential. Thus, an understanding of the complex and interacting factors that dictate riverine water quality is essential in empowering stake-holders to make informed management decisions (see Chapter 1.15 for additional information on water resource management). Driven by natural and anthropogenic forcing factors, a variety of chemical, physical, and biological processes dictate riverine water quality, resulting in temporal and spatial patterns and cycling (see Chapter 1.2 for information describing how global change interacts with water resources). Furthermore, changes in climatic forcing factors may lead to long-term deviations in water quality outside the envelope of historical data. The goal of this chapter is to present fundamental concepts dictating the conditions of basic water quality parameters in rivers and streams (herein generally referred to as rivers unless discussing a specific system) in the context of temporal (diel (24 h) to decadal) longitudinal scaling. Understanding water quality scaling in rivers is imperative as water is continually reused and recycled (see also Chapters 3.1 and 3.15); upstream discharges from anthropogenic sources are incorporated into bulk riverine water quality that is used by downstream consumers. Water quality parameters reviewed here include temperature, pH, dissolved oxygen (DO), and suspended sediment and were selected given the abundance of data available for these parameters due to recent advances in water quality sensor technology (see Chapter 4.13 for use of hydrologic data in watershed management). General equations describing reactions affecting water temperature, pH, DO, and suspended sediment are included to convey the complexity of how simultaneously occurring reactions can affect water quality

  6. Linear regression

    CERN Document Server

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  7. Linear Colliders

    International Nuclear Information System (INIS)

    Alcaraz, J.

    2001-01-01

    After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it with an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs

  8. Linear algebra

    CERN Document Server

    Edwards, Harold M

    1995-01-01

    In his new undergraduate textbook, Harold M Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject

  9. An emergentist vs a linear approach to social change processes: a gender look in contemporary India between modernity and Hindu tradition.

    Science.gov (United States)

    Condorelli, Rosalia

    2015-01-01

    Using Census of India data from 1901 to 2011 and national and international reports on women's condition in India, beginning with sex ratio trends according to regional distribution up to female infanticides and sex-selective abortions and dowry deaths, this study examines the sociological aspects of the gender imbalance in modern contemporary India. The persistence of gender inequality in India proves that new values and structures do not necessarily lead to the disappearance of older forms, but they can co-exist with mutual adaptations and reinforcements. Data analysis suggests that these unexpected combinations are not comprehensible in light of a linear concept of social change which is founded, in turn, on a concept of social systems as linear interaction systems that relate to environmental perturbations according to proportional cause and effect relationships. From this perspective, in fact, behavioral attitudes and interaction relationships should be less and less proportionally regulated by traditional values and practices as exposure to modernizing influences increases. And progressive decreases should be found in rates of social indicators of gender inequality like dowry deaths (the inverse should be found in sex ratio trends). However, the data do not confirm these trends. This finding leads us to emphasize a new theoretical and methodological approach toward the study of social systems, namely the conception of social systems as complex adaptive systems and the consequential emergentist, nonlinear conception of social change processes. Within the framework of an emergentist theory of social change it is possible to understand the lasting strength of the patriarchal tradition and its problematic consequences in modern contemporary India.

  10. Association of metabolic syndrome and change in Unified Parkinson's Disease Rating Scale scores.

    Science.gov (United States)

    Leehey, Maureen; Luo, Sheng; Sharma, Saloni; Wills, Anne-Marie A; Bainbridge, Jacquelyn L; Wong, Pei Shieen; Simon, David K; Schneider, Jay; Zhang, Yunxi; Pérez, Adriana; Dhall, Rohit; Christine, Chadwick W; Singer, Carlos; Cambi, Franca; Boyd, James T

    2017-10-24

    To explore the association between metabolic syndrome and the Unified Parkinson's Disease Rating Scale (UPDRS) scores and, secondarily, the Symbol Digit Modalities Test (SDMT). This is a secondary analysis of data from 1,022 of 1,741 participants of the National Institute of Neurological Disorders and Stroke Exploratory Clinical Trials in Parkinson Disease Long-Term Study 1, a randomized, placebo-controlled trial of creatine. Participants were categorized as having or not having metabolic syndrome on the basis of modified criteria from the National Cholesterol Education Program Adult Treatment Panel III. Those who had the same metabolic syndrome status at consecutive annual visits were included. The change in UPDRS and SDMT scores from randomization to 3 years was compared in participants with and without metabolic syndrome. Participants with metabolic syndrome (n = 396) compared to those without (n = 626) were older (mean [SD] 63.9 [8.1] vs 59.9 [9.4] years). Participants with metabolic syndrome experienced an additional 0.6 (0.2)-unit annual increase in total UPDRS (p = 0.02) and a 0.5 (0.2)-unit increase in motor UPDRS (p = 0.01) scores compared with participants without metabolic syndrome. There was no difference in the change in SDMT scores. Persons with Parkinson disease meeting modified criteria for metabolic syndrome experienced a greater increase in total UPDRS scores over time, mainly as a result of increases in motor scores, compared to those who did not. Further studies are needed to confirm this finding. NCT00449865. © 2017 American Academy of Neurology.
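
    The abstract does not state the exact statistical model used; a minimal sketch of how such an "additional annual increase" could be estimated is a linear mixed model with a time-by-group interaction. The data below are synthetic and the variable names (subject, years, mets, updrs) are hypothetical.

    ```python
    # Hedged sketch (not the study's analysis): estimate the extra annual change in
    # a UPDRS-like score for a metabolic-syndrome group using a mixed model with
    # random intercepts per participant. All data here are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n, visits = 200, 4
    subject = np.repeat(np.arange(n), visits)
    years = np.tile(np.arange(visits), n)
    mets = np.repeat(rng.integers(0, 2, n), visits)
    # 1.5 points/year baseline progression, +0.6 extra per year with metabolic syndrome
    updrs = 25 + 1.5 * years + 0.6 * years * mets + rng.normal(0, 3, n * visits)
    df = pd.DataFrame(dict(subject=subject, years=years, mets=mets, updrs=updrs))

    # The years:mets interaction coefficient is the additional annual increase.
    model = smf.mixedlm("updrs ~ years * mets", df, groups=df["subject"])
    print(model.fit().summary())
    ```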

  11. An integrated model to simulate sown area changes for major crops at a global scale

    Institute of Scientific and Technical Information of China (English)

    WU WenBin; YANG Peng; MENG ChaoYing; SHIBASAKI Ryosuke; ZHOU QingBo; TANG HuaJun; SHI Yun

    2008-01-01

    Dynamics of land use systems have attracted much attention from scientists around the world due to their ecological and socio-economic implications. An integrated model to dynamically simulate future changes in sown areas of four major crops (rice, maize, wheat and soybean) on a global scale is presented. To do so, a crop choice model was developed on the basis of the Multinomial Logit (Logit) model to model land users' decisions on crop choices among a set of available alternatives using a crop utility function. A GIS-based Environmental Policy Integrated Climate (EPIC) model was adopted to simulate the crop yields under a given geophysical environment and farming management conditions, while the International Food Policy and Agricultural Simulation (IFPSIM) model was utilized to estimate crop prices in the international market. The crop choice model was linked with the GIS-based EPIC model and the IFPSIM model through data exchange. This integrated model was then validated against FAO statistical data for 2001-2003 and the Moderate Resolution Imaging Spectroradiometer (MODIS) global land cover product for 2001. Both validation approaches indicated the reliability of the model for addressing the dynamics in agricultural land use and its capability for long-term scenario analysis. Finally, the model application was designed to run over a time period of 30 years, taking the year 2000 as the baseline. The model outcomes can help understand and explain the causes, locations and consequences of land use changes, and provide support for land use planning and policy making.
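
    As a rough illustration of the crop choice component (not the authors' code), a multinomial-logit model turns crop utilities into choice probabilities via a softmax. The utilities below are toy numbers; in the paper they are built from simulated yields, prices and management costs.

    ```python
    # Illustrative multinomial-logit crop choice: P(crop i) = exp(U_i) / sum_j exp(U_j).
    import numpy as np

    def choice_probabilities(utilities):
        """Softmax over crop utilities (numerically stabilised)."""
        u = np.asarray(utilities, dtype=float)
        u = u - u.max()
        expu = np.exp(u)
        return expu / expu.sum()

    # Hypothetical utilities for rice, maize, wheat and soybean on one grid cell.
    crops = ["rice", "maize", "wheat", "soybean"]
    probs = choice_probabilities([1.8, 1.2, 0.9, 0.4])
    print(dict(zip(crops, np.round(probs, 3))))
    ```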

  12. Fine-scale ecological and economic assessment of climate change on olive in the Mediterranean Basin reveals winners and losers.

    Science.gov (United States)

    Ponti, Luigi; Gutierrez, Andrew Paul; Ruti, Paolo Michele; Dell'Aquila, Alessandro

    2014-04-15

    The Mediterranean Basin is a climate and biodiversity hot spot, and climate change threatens agro-ecosystems such as olive, an ancient drought-tolerant crop of considerable ecological and socioeconomic importance. Climate change will impact the interactions of olive and the obligate olive fruit fly (Bactrocera oleae), and alter the economics of olive culture across the Basin. We estimate the effects of climate change on the dynamics and interaction of olive and the fly using physiologically based demographic models in a geographic information system context as driven by daily climate change scenario weather. A regional climate model that includes fine-scale representation of the effects of topography and the influence of the Mediterranean Sea on regional climate was used to scale the global climate data. The system model for olive/olive fly was used as the production function in our economic analysis, replacing the commonly used production-damage control function. Climate warming will affect olive yield and fly infestation levels across the Basin, resulting in economic winners and losers at the local and regional scales. At the local scale, profitability of small olive farms in many marginal areas of Europe and elsewhere in the Basin will decrease, leading to increased abandonment. These marginal farms are critical to conserving soil, maintaining biodiversity, and reducing fire risk in these areas. Our fine-scale bioeconomic approach provides a realistic prototype for assessing climate change impacts in other Mediterranean agro-ecosystems facing extant and new invasive pests.

  13. Linear programming using Matlab

    CERN Document Server

    Ploskas, Nikolaos

    2017-01-01

    This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book  are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus.  The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...
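
    The book's implementations are in MATLAB; purely as a hedged illustration of the kind of problem a simplex-type solver handles, here is a small linear program solved with SciPy's linprog (method and data are unrelated to the book).

    ```python
    # Small LP, unrelated to the book's benchmark problems:
    #   maximise 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
    from scipy.optimize import linprog

    c = [-3, -2]                       # linprog minimises, so negate the objective
    A_ub = [[1, 1], [1, 3]]
    b_ub = [4, 6]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print(res.x, -res.fun)             # optimal point and maximised objective value
    ```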

  14. Scaling Mode Shapes in Output-Only Structure by a Mass-Change-Based Method

    Directory of Open Access Journals (Sweden)

    Liangliang Yu

    2017-01-01

    Full Text Available A mass-change-based method based on output-only data for the rescaling of mode shapes in operational modal analysis (OMA is introduced. The mass distribution matrix, which is defined as a diagonal matrix whose diagonal elements represent the ratios among the diagonal elements of the mass matrix, is calculated using the unscaled mode shapes. Based on the theory of null space, the mass distribution vector or mass distribution matrix is obtained. A small mass with calibrated weight is added to a certain location of the structure, and then the mass distribution vector of the modified structure is estimated. The mass matrix is identified according to the difference of the mass distribution vectors between the original and modified structures. Additionally, the universal set of modes is unnecessary when calculating the mass distribution matrix, indicating that modal truncation is allowed in the proposed method. The mass-scaled mode shapes estimated in OMA according to the proposed method are compared with those obtained by experimental modal analysis. A simulation is employed to validate the feasibility of the method. Finally, the method is tested on output-only data from an experiment on a five-storey structure, and the results confirm the effectiveness of the method.
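
    The null-space/mass-distribution variant proposed in the paper is not reproduced here; as a point of reference only, the sketch below applies the classical mass-change scaling relation used in OMA, alpha^2 = (w0^2 - w1^2)/(w1^2 * psi^T dM psi), with made-up numbers for a 3-DOF system.

    ```python
    # Classical mass-change scaling (textbook relation, not the paper's null-space method):
    # given an unscaled mode shape psi with natural frequency w0 before and w1 after
    # adding a small known mass matrix dM, the scaling factor alpha mass-normalises psi.
    import numpy as np

    def mass_change_scaling(psi, w0, w1, dM):
        psi = np.asarray(psi, dtype=float)
        return np.sqrt((w0**2 - w1**2) / (w1**2 * (psi @ dM @ psi)))

    # Hypothetical 3-DOF example: 0.5 kg added at the top DOF.
    psi = np.array([0.3, 0.7, 1.0])            # unscaled mode shape
    dM = np.diag([0.0, 0.0, 0.5])              # added-mass matrix [kg]
    alpha = mass_change_scaling(psi, w0=2*np.pi*5.00, w1=2*np.pi*4.95, dM=dM)
    print(alpha, alpha * psi)                  # scaling factor and mass-scaled mode shape
    ```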

  15. Orbital-scale denitrification changes in the Eastern Arabian Sea during the last 800 kyrs.

    Science.gov (United States)

    Kim, Ji-Eun; Khim, Boo-Keun; Ikehara, Minoru; Lee, Jongmin

    2018-05-04

    Denitrification in the Arabian Sea is closely related to the monsoon-induced upwelling and subsequent phytoplankton production in the surface water. The δ15N values of bulk sediments collected at Site U1456 of the International Ocean Discovery Program (IODP) Expedition 355 reveal the orbital-scale denitrification history in response to the Indian Monsoon. Age reconstruction based on the correlation of planktonic foraminifera (Globigerinoides ruber) δ18O values with the LR04 stack together with the shipboard biostratigraphic and paleomagnetic data assigns the study interval to be 1.2 Ma. Comparison of δ15N values during the last 800 kyrs between Site U1456 (Eastern Arabian Sea) and Site 722B (Western Arabian Sea) showed that δ15N values were high during interglacial periods, indicating intensified denitrification, while the opposite was observed during glacial periods. Taking 6‰ as the empirical threshold of denitrification, the Eastern Arabian Sea has experienced a persistent oxygen minimum zone (OMZ) maintaining strong denitrification, whereas the Western Arabian Sea has undergone OMZ breakdown during some glacial periods. The results of this study also suggest that five principal oceanographic conditions changed in response to the Indian Monsoon following the interglacial and glacial cycles, which control the degree of denitrification in the Arabian Sea.

  16. [Interpreting change scores of the Behavioural Rating Scale for Geriatric Inpatients (GIP)].

    Science.gov (United States)

    Diesfeldt, H F A

    2013-09-01

    The Behavioural Rating Scale for Geriatric Inpatients (GIP) consists of fourteen, Rasch modelled subscales, each measuring different aspects of behavioural, cognitive and affective disturbances in elderly patients. Four additional measures are derived from the GIP: care dependency, apathy, cognition and affect. The objective of the study was to determine the reproducibility of the 18 measures. A convenience sample of 56 patients in psychogeriatric day care was assessed twice by the same observer (a professional caregiver). The median time interval between rating occasions was 45 days (interquartile range 34-58 days). Reproducibility was determined by calculating intraclass correlation coefficients (ICC agreement) for test-retest reliability. The minimal detectable difference (MDD) was calculated based on the standard error of measurement (SEM agreement). Test-retest reliability expressed by the ICCs varied from 0.57 (incoherent behaviour) to 0.93 (anxious behaviour). Standard errors of measurement varied from 0.28 (anxious behaviour) to 1.63 (care dependency). The results show how the GIP can be applied when interpreting individual change in psychogeriatric day care participants.
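
    For orientation only (the numbers below are invented, not the paper's), one common formulation of the relations linking a test-retest reliability coefficient to a smallest detectable change is SEM = SD * sqrt(1 - ICC) and MDD = 1.96 * sqrt(2) * SEM, sketched here.

    ```python
    # Standard error of measurement and minimal detectable difference (95% level)
    # from a test-retest ICC and the score SD; values are illustrative placeholders.
    import math

    def minimal_detectable_difference(sd, icc):
        sem = sd * math.sqrt(1.0 - icc)          # standard error of measurement
        mdd = 1.96 * math.sqrt(2.0) * sem        # smallest change beyond measurement error
        return sem, mdd

    sem, mdd = minimal_detectable_difference(sd=3.0, icc=0.93)
    print(f"SEM = {sem:.2f}, MDD = {mdd:.2f}")
    ```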

  17. The Climate-G testbed: towards a large scale data sharing environment for climate change

    Science.gov (United States)

    Aloisio, G.; Fiore, S.; Denvil, S.; Petitdidier, M.; Fox, P.; Schwichtenberg, H.; Blower, J.; Barbera, R.

    2009-04-01

    The Climate-G testbed provides an experimental large scale data environment for climate change addressing challenging data and metadata management issues. The main scope of Climate-G is to allow scientists to carry out geographical and cross-institutional climate data discovery, access, visualization and sharing. Climate-G is a multidisciplinary collaboration involving both climate and computer scientists and it currently involves several partners such as: Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), Institut Pierre-Simon Laplace (IPSL), Fraunhofer Institut für Algorithmen und Wissenschaftliches Rechnen (SCAI), National Center for Atmospheric Research (NCAR), University of Reading, University of Catania and University of Salento. To perform distributed metadata search and discovery, we adopted a CMCC metadata solution (which provides a high level of scalability, transparency, fault tolerance and autonomy) leveraging both on P2P and grid technologies (GRelC Data Access and Integration Service). Moreover, data are available through OPeNDAP/THREDDS services, Live Access Server as well as the OGC compliant Web Map Service and they can be downloaded, visualized, accessed into the proposed environment through the Climate-G Data Distribution Centre (DDC), the web gateway to the Climate-G digital library. The DDC is a data-grid portal allowing users to easily, securely and transparently perform search/discovery, metadata management, data access, data visualization, etc. Godiva2 (integrated into the DDC) displays 2D maps (and animations) and also exports maps for display on the Google Earth virtual globe. Presently, Climate-G publishes (through the DDC) about 2TB of data related to the ENSEMBLES project (also including distributed replicas of data) as well as to the IPCC AR4. The main results of the proposed work are: wide data access/sharing environment for climate change; P2P/grid metadata approach; production-level Climate-G DDC; high quality tools for

  18. Temporal changes in vegetation of a virgin beech woodland remnant: stand-scale stability with intensive fine-scale dynamics governed by stand dynamic events

    Directory of Open Access Journals (Sweden)

    Tibor Standovár

    2017-03-01

    Full Text Available The aim of this resurvey study is to check if herbaceous vegetation on the forest floor exhibits overall stability at the stand-scale in spite of intensive dynamics at the scale of individual plots and stand dynamic events (driven by natural fine scale canopy gap dynamics). In 1996, we sampled a 1.5 ha patch using 0.25 m² plots placed along a 5 m × 5 m grid in the best remnant of central European montane beech woods in Hungary. All species in the herbaceous layer and their cover estimates were recorded. Five patches representing different stand developmental situations (SDS) were selected for resurvey. In 2013, 306 plots were resurveyed by using blocks of four 0.25 m² plots to test the effects of imperfect relocation. We found very intensive fine-scale dynamics in the herbaceous layer with high species turnover and sharp changes in ground layer cover at the local-scale (< 1 m²). A decrease in species richness and herbaceous layer cover, as well as high species turnover, characterized the closing gaps. Colonization events and increasing species richness and herbaceous layer cover prevailed in the two newly created gaps. A pronounced decrease in the total cover, but low species turnover and survival of the majority of the closed forest specialists was detected by the resurvey at the stand-scale. The test aiming at assessing the effect of relocation showed a higher time effect than the effect of imprecise relocation. The very intensive fine-scale dynamics of the studied beech forest are profoundly determined by natural stand dynamics. Extinction and colonisation episodes even out at the stand-scale, implying an overall compositional stability of the herbaceous vegetation at the given spatial and temporal scale. We argue that fine-scale gap dynamics, driven by natural processes or applied as a management method, can warrant the survival of many closed forest specialist species in the long-run. Nomenclature: Flora Europaea (Tutin et al. 2010 for

  19. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Full Text Available Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature space, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods need to be explored that can take advantage of additional textural or other parameters.

  20. Cross-scale intercomparison of climate change impacts simulated by regional and global hydrological models in eleven large river basins

    Energy Technology Data Exchange (ETDEWEB)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Flörke, M.; Huang, S.; Motovilov, Y.; Buda, S.; Yang, T.; Müller, C.; Leng, G.; Tang, Q.; Portmann, F. T.; Hagemann, S.; Gerten, D.; Wada, Y.; Masaki, Y.; Alemayehu, T.; Satoh, Y.; Samaniego, L.

    2017-01-04

    Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.

  1. Association between changes on the Negative Symptom Assessment scale (NSA-16) and measures of functional outcome in schizophrenia.

    Science.gov (United States)

    Velligan, Dawn I; Alphs, Larry; Lancaster, Scott; Morlock, Robert; Mintz, Jim

    2009-09-30

    We examined whether changes in negative symptoms, as measured by scores on the 16-item Negative Symptom Assessment scale (NSA-16), were associated with changes in functional outcome. A group of 125 stable outpatients with schizophrenia were assessed at baseline and at 6 months using the NSA-16, the Brief Psychiatric Rating Scale, and multiple measures of functional outcome. Baseline adjusted regression coefficients indicated moderate correlations between negative symptoms and functional outcomes when baseline values of both variables were controlled. Results were nearly identical when we controlled for positive symptoms. Cross-lag panel correlations and Structural Equation Modeling were used to examine whether changes in negative symptoms drove changes in functional outcomes over time. Results indicated that negative symptoms drove the changes in the Social and Occupational Functioning Scale (SOFAS) rather than the reverse. Measures of Quality of Life and measures of negative symptoms may be assessing overlapping constructs or changes in both may be driven by a third variable. Negative symptoms were unrelated over time to scores on a performance-based measure of functional capacity. This study indicates that the relationship between negative symptom change and the change in functional outcomes is complex, and points to potential issues in selection of assessments.

  2. Using multiple linear regression and physicochemical changes of amino acid mutations to predict antigenic variants of influenza A/H3N2 viruses.

    Science.gov (United States)

    Cui, Haibo; Wei, Xiaomei; Huang, Yu; Hu, Bin; Fang, Yaping; Wang, Jia

    2014-01-01

    Among human influenza viruses, strain A/H3N2 accounts for over a quarter of a million deaths annually. Antigenic variants of these viruses often render current vaccinations ineffective and lead to repeated infections. In this study, a computational model was developed to predict antigenic variants of the A/H3N2 strain. First, 18 critical antigenic amino acids in the hemagglutinin (HA) protein were recognized using a scoring method combining the phi (ϕ) coefficient and information entropy. Next, a prediction model was developed by integrating a multiple linear regression method with eight types of physicochemical changes at the critical amino acid positions. When compared to three other known models, our prediction model achieved the best performance not only on the training dataset but also on the commonly used testing dataset composed of 31,878 antigenic relationships of the H3N2 influenza virus.
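
    A hedged sketch of the general idea (not the authors' pipeline or data): regress an antigenic distance on accumulated physicochemical changes over critical positions and threshold the prediction to call a variant. The features, training data and the cut-off value below are all placeholders.

    ```python
    # Placeholder illustration: X holds eight physicochemical-change features per strain
    # pair (e.g. hydrophobicity, volume, charge, polarity accumulated over 18 positions);
    # y is an antigenic distance. The 4.0 cut-off is hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X_train = rng.random((200, 8))
    y_train = X_train @ np.array([2.0, 1.5, 1.0, 0.8, 0.5, 0.4, 0.3, 0.2]) \
              + rng.normal(0, 0.3, 200)

    model = LinearRegression().fit(X_train, y_train)
    pred = model.predict(rng.random((5, 8)))
    print(pred, pred >= 4.0)       # predicted distances and variant calls
    ```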

  3. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Science.gov (United States)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
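
    MOCC itself combines multiscale, orientation-aligned patches around detected corners; only its underlying building block, a zero-mean normalized cross-correlation between two patches, is sketched below as an illustration.

    ```python
    # Zero-mean normalized cross-correlation (NCC) between two equally sized patches;
    # MOCC layers multiscale, oriented corner patches on top of a measure like this.
    import numpy as np

    def ncc(patch_a, patch_b):
        a = patch_a.astype(float).ravel()
        b = patch_b.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    rng = np.random.default_rng(0)
    p = rng.random((11, 11))
    print(ncc(p, p), ncc(p, rng.random((11, 11))))   # ~1.0 for identical patches
    ```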

  4. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2010-01-01

    Full Text Available Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  5. Large-scale changes in network interactions as a physiological signature of spatial neglect.

    Science.gov (United States)

    Baldassarre, Antonello; Ramsey, Lenny; Hacker, Carl L; Callejas, Alicia; Astafiev, Serguei V; Metcalf, Nicholas V; Zinn, Kristi; Rengachary, Jennifer; Snyder, Abraham Z; Carter, Alex R; Shulman, Gordon L; Corbetta, Maurizio

    2014-12-01

    networks in the right hemisphere; and (iii) increased intrahemispheric connectivity with the basal ganglia. These patterns of functional connectivity:behaviour correlations were stronger in patients with right- as compared to left-hemisphere damage and were independent of lesion volume. Our findings identify large-scale changes in resting state network interactions that are a physiological signature of spatial neglect and may relate to its right hemisphere lateralization. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Driving forces behind the stagnancy of China's energy-related CO2 emissions from 1996 to 1999: the relative importance of structural change, intensity change and scale change

    International Nuclear Information System (INIS)

    Libo Wu; Kaneko, S.; Matsuoka, S.

    2005-01-01

    It is noteworthy that the income elasticity of energy consumption in China shifted from positive to negative after 1996, accompanied by an unprecedented decline in energy-related CO2 emissions. This paper therefore investigates the evolution of energy-related CO2 emissions in China from 1985 to 1999 and the underlying driving forces, using the newly proposed three-level 'perfect decomposition' method and provincially aggregated data. The province-based estimates and analyses reveal a 'sudden stagnancy' of energy consumption, supply and energy-related CO2 emissions in China from 1996 to 1999. A rapid decrease in energy intensity and a slowdown in the growth of the average labor productivity of industrial enterprises may have been the dominant contributors to this 'stagnancy'. The findings of this paper point to the highest rate of deterioration of state-owned enterprises in early 1996, the industrial restructuring caused by changes in ownership, the shutdown of small-scale power plants, and the introduction of policies to improve energy efficiency as probable factors. Taking into account the characteristics of those key driving forces, we characterize China's decline of energy-related CO2 emissions as a short-term fluctuation and consider it likely that China will resume an increasing trend from a lower starting point in the near future. (author)

  7. Linear programming

    CERN Document Server

    Karloff, Howard

    1991-01-01

    To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...

  8. Superimposing various biophysical and social scales in a rapidly changing rural area (SW Niger)

    Science.gov (United States)

    Leduc, Christian; Massuel, Sylvain; Favreau, Guillaume; Cappelaere, Bernard; Leblanc, Marc; Bachir, Salifou; Ousmane, Boureïma

    2014-05-01

    transboundary aquifer that extends far beyond the study area, over about 150 000 km². It is also heterogeneous. Like surface flows, but at a different scale, groundwater flows are marked by a strong endorheism. For example, the Dantiandou closed piezometric depression extends over approximately 5000 km². These natural closed depressions are explained only by evapotranspiration uptake, weak in absolute terms (a few mm per year) but with a very high impact on hydrodynamics because of poor permeability and porosity. Both the density of observations and the hydraulic continuity of the CT3 aquifer give a good picture of groundwater changes in the whole area. Human activities, continuously adapting in this poor rural area, add another complexity to the hydrological diversity in surface and ground water. The replacement of the natural vegetation with millet fields and fallow increased the surface runoff, and consequently water accumulation in temporary pools and then CT3 recharge. In the SE part of the study area, the water table has risen up to outcropping in the lowest valley bottoms. These new permanent ponds reflect groundwater while temporary ponds still reflect surface dynamics. This new component of the hydrological landscape induces several consequences, in physical and human dimensions. Evaporation strongly affects the permanent water and increases its salinity while the natural mineralization of groundwater is very low. The easier access to water resources allows a significant development of local gardening, which modifies the social functioning of villages (e.g. land rights between villages and within a village, diversification of crops and sources of income, new sales channels). Different physically based models (for surface and ground water) were built, with a significant discrepancy between their respective quantifications of water flows at the regional scale. Extrapolation of surface fluxes from the few instrumented catchments to a much larger mosaic of non-instrumented catchments is

  9. Massively parallel and linear-scaling algorithm for second-order Møller–Plesset perturbation theory applied to the study of supramolecular wires

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro

    2017-01-01

    We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventiona...

  10. Understanding uncertainties in non-linear population trajectories: a Bayesian semi-parametric hierarchical approach to large-scale surveys of coral cover.

    Directory of Open Access Journals (Sweden)

    Julie Vercelloni

    Full Text Available Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.

  11. Global change impact on water resources at the regional scale - a reflection on participatory modeling

    Science.gov (United States)

    Barthel, Roland; Büttner, Hannah; Nickel, Darla; Seidl, Roman

    2015-04-01

    discussion we therefore focus on the following three questions: • Can a stakeholder dialogue be successfully used to support the development of new, complex modelling systems, in particular at the regional scale? • What is the right timing for stakeholder interaction in the case of unclear problem definition - i.e. global (climate) change impact on regions where climate is not (yet) a threat to water or land use related demands and activities? • To what degree can scientists be motivated to carry out participatory research at all? We conclude that the PM process in GD was only partly successful because the project set overambitious goals, including the application of fundamentally new approaches to interdisciplinary science, the use of new modelling technologies, the focus upon and evaluation of potential and therefore characteristically uncertain future problems, including stakeholder demands, and the development of a ready-to-use, user-friendly tool. GD also showed that an externally and professionally moderated stakeholder dialogue is an absolute necessity to achieve successful participation of stakeholders in model development. The modelers themselves had neither the time, the skills, nor the ambition to do this. Furthermore, there is a lack of incentives for scientists, particularly natural scientists, to commit to PM activities. Given the fact that the outcomes of PM are supposed to be relevant for societal decision making, this issue needs further attention.

  12. Non-linear Feedbacks Between Forest Mortality and Climate Change: Implications for Snow Cover, Water Resources, and Ecosystem Recovery in Western North America (Invited)

    Science.gov (United States)

    Brooks, P. D.; Harpold, A. A.; Biederman, J. A.; Gochis, D. J.; Litvak, M. E.; Ewers, B. E.; Broxton, P. D.; Reed, D. E.

    2013-12-01

    Unprecedented levels of tree mortality from insect infestation and wildfire are dramatically altering forest structure and composition in Western North America. Warming temperatures and increased drought stress have been implicated as major factors in the increasing spatial extent and frequency of these forest disturbances, but it is unclear how these changes in forest structure will interact with ongoing climate change to affect snowmelt water resources either for society or for ecosystem recovery following mortality. Because surface discharge, groundwater recharge, and ecosystem productivity all depend on seasonal snowmelt, a critical knowledge gap exists not only in predicting discharge, but in quantifying spatial and temporal variability in the partitioning of snowfall into abiotic vapor loss, plant available water, recharge, and streamflow within the complex mosaic of forest disturbance and topography that characterizes western mountain catchments. This presentation will address this knowledge gap by synthesizing recent work on snowpack dynamics and ecosystem productivity from seasonally snow-covered forests along a climate gradient from Arizona to Wyoming; including undisturbed sites, recently burned forests, and areas of extensive insect-induced forest mortality. Both before-after and control-impacted studies of forest disturbance on snow accumulation and ablation suggest that the spatial scale of snow distribution increases following disturbance, but net snow water input in a warming climate will increase only in topographically sheltered areas. While forest disturbance changes spatial scale of snowpack partitioning, the amount and especially the timing of snow cover accumulation and ablation are strongly related to interannual variability in ecosystem productivity with both earlier snowmelt and later snow accumulation associated with decreased carbon uptake. Empirical analyses and modeling are being developed to identify landscapes most sensitive to

  13. Determining the size of a complete disturbance landscape: multi-scale, continental analysis of forest change

    Science.gov (United States)

    Brian Buma; Jennifer K Costanza; Kurt Riitters

    2017-01-01

    The scale of investigation for disturbance-influenced processes plays a critical role in theoretical assumptions about stability, variance, and equilibrium, as well as conservation reserve and long-term monitoring program design. Critical consideration of scale is required for robust planning designs, especially when anticipating future disturbances whose exact...

  14. 'Glocal' politics of scale on environmental issues: Climate change, water and forests

    NARCIS (Netherlands)

    Gupta, J.; Padt, F.; Opdam, P.; Polman, N.; Termeer, C.

    2014-01-01

    The lack of objective ability to define the level of a problem leads to the politics of scale. A multidisciplinary, glocal approach helps scholars and policymakers to transcend such territorial traps and understand how scale is used as a political tool by social actors. This chapter explains how a

  15. Identification of linearized bearing characteristics of a rotating shaft from experimental synchronous forced responses. Application to the determination of unbalance change

    International Nuclear Information System (INIS)

    Audebert, S.

    1996-03-01

    Monitoring the evolution of the vibrations of a rotating shaft is necessary for the early detection of a possible fault. Recent efforts to improve monitoring combine modelling and experimentation: the aim is to obtain mathematical models of rotors mounted on hydrodynamic bearings that constitute a good initial representation of real rotating shafts and permit the identification of particular types of fault. The feasibility of determining unbalance faults from synchronous responses of a rotating shaft measured only at the bearing stations is investigated in two steps: first, the motion of the rotor journals under two known unbalance excitations is used to determine the linearized bearing characteristics; second, an unbalance change can be localised and identified, provided that measurements before and after the change are available. The method is tested on a two-bearing rotor system to which a mass is added: randomly disturbed flexural displacements of the rotor journals, characterised by four noise levels (0%, 5%, 10%, 15%), are considered successively in order to test the robustness of the method. The identified stiffness and damping characteristics give a correct representation of the dynamic behaviour of the rotating shaft, even for unbalance configurations not used for identification (MAC criterion greater than 0.98 with 15% noise-disturbed data). Whatever the noise level considered, the plane where the unbalance vector is applied is correctly localised. (author). 3 refs., 1 fig., 3 tabs

  16. A new class of scale free solutions to linear ordinary differential equations and the universality of the golden mean (√5 − 1)/2 = 0.618033...

    CERN Document Server

    Datta, D P

    2003-01-01

    A new class of finitely differentiable scale free solutions to the simplest class of ordinary differential equations is presented. Consequently, the real number set gets replaced by an extended physical set, each element of which is endowed with an equivalence class of infinitesimally separated neighbours in the form of random fluctuations. We show how a sense of time and evolution is intrinsically defined by the infinite continued fraction of the golden mean irrational number (√5 − 1)/2, which plays a key role in this extended SL(2,R) formalism of calculus analogous to El Naschie's theory of the E(∞) spacetime manifold. Time may thereby undergo random inversions generating well defined random scales, thus allowing a dynamical system to evolve self similarly over the set of multiple scales. The late time stochastic fluctuations of a dynamical system enjoy the generic 1/f spectrum. A universal form of the related probability density is also derived. We prove that the golden mea...

  17. A new class of scale free solutions to linear ordinary differential equations and the universality of the golden mean (√5 − 1)/2 = 0.618033.

    International Nuclear Information System (INIS)

    Datta, Dhurjati Prasad

    2003-01-01

    A new class of finitely differentiable scale free solutions to the simplest class of ordinary differential equations is presented. Consequently, the real number set gets replaced by an extended physical set, each element of which is endowed with an equivalence class of infinitesimally separated neighbours in the form of random fluctuations. We show how a sense of time and evolution is intrinsically defined by the infinite continued fraction of the golden mean irrational number (√5 − 1)/2, which plays a key role in this extended SL(2,R) formalism of calculus analogous to El Naschie's theory of the E(∞) spacetime manifold. Time may thereby undergo random inversions generating well defined random scales, thus allowing a dynamical system to evolve self similarly over the set of multiple scales. The late time stochastic fluctuations of a dynamical system enjoy the generic 1/f spectrum. A universal form of the related probability density is also derived. We prove that the golden mean number is intrinsically random, rendering all measurements in the physical universe fundamentally uncertain. The present analysis offers an explanation of the universal occurrence of the golden mean in diverse natural and biological processes as well as the mass spectrum of high energy particle physics

  18. Linear mode conversion of Langmuir/z-mode waves to radiation: Scalings of conversion efficiencies and propagation angles with temperature and magnetic field orientation

    International Nuclear Information System (INIS)

    Schleyer, F.; Cairns, Iver H.; Kim, E.-H.

    2013-01-01

    Linear mode conversion (LMC) is the linear transfer of energy from one wave mode to another in an inhomogeneous plasma. It is relevant to laboratory plasmas and multiple solar system radio emissions, such as continuum radiation from planetary magnetospheres and type II and III radio bursts from the solar corona and solar wind. This paper simulates LMC of waves defined by warm, magnetized fluid theory, specifically the conversion of Langmuir/z-mode waves to electromagnetic (EM) radiation. The primary focus is the calculation of the energy and power conversion efficiencies for LMC as functions of the angle of incidence θ of the Langmuir/z-mode wave, temperature β = T_e/(m_e c²), adiabatic index γ, and orientation angle φ between the ambient density gradient ∇N_0 and ambient magnetic field B_0 in a warm, unmagnetized plasma. The ratio of these efficiencies is found to agree well as a function of θ, γ, and β with an analytical relation that depends on the group speeds of the Langmuir/z and EM wave modes. The results demonstrate that the energy conversion efficiency ε is strongly dependent on γβ, φ and θ, with ε ∝ (γβ)^1/2 and θ ∝ (γβ)^1/2. The power conversion efficiency ε_p, on the other hand, is independent of γβ but does vary significantly with θ and φ. The efficiencies are shown to be maximum for approximately perpendicular density gradients (φ ≈ 90°) and minimal for parallel orientation (φ = 0°), and both the energy and power conversion efficiencies peak at the same θ.

  19. ECMOR 4. 4th European conference on the mathematics of oil recovery. Topic C: Scale change procedures. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    1994-01-01

    The report collects the proceedings of a conference on the mathematics of oil recovery, with a focus on scale-change procedures. The topics of the proceedings are as follows: upscaling permeability, mathematics of renormalization; a new method for the scale-up of displacement processes in heterogeneous reservoirs; the scale-up of two-phase flow using permeability tensors; upscaling of permeability based on wavelet representation; preferential flow-path detection for heterogeneous reservoirs using a new renormalization technique; averaging heterogeneous porous media by minimization of the error on the flow rate; change of scale for the full permeability tensor on a tetrahedron grid; effective relative permeabilities and capillary pressure for 1D heterogeneous media; and a practical and operational method for the calculation of effective dispersion coefficients in heterogeneous porous media. Nine papers are included. 144 refs., 71 figs., 10 tabs.

  20. A seesaw in Mediterranean precipitation during the Roman Period linked to millennial-scale changes in the North Atlantic

    NARCIS (Netherlands)

    Dermody, B.; Boer, H.J. de; Bierkens, M.F.P.; Weber, S.L.; Wassen, M.J.; Dekker, S.C.

    2012-01-01

    We present a reconstruction of the change in climatic humidity around the Mediterranean between 3000–1000 yr BP. Using a range of proxy archives and model simulations we demonstrate that climate during this period was typified by a millennial-scale seesaw in climatic humidity between Spain and

  1. The sea-level budget along the Northwest Atlantic coast : GIA, mass changes, and large-scale ocean dynamics

    NARCIS (Netherlands)

    Frederikse, T.; Simon, K.M.; Katsman, C.A.; Riva, R.E.M.

    2017-01-01

    Sea-level rise and decadal variability along the northwestern coast of the North Atlantic Ocean are studied in a self-consistent framework that takes into account the effects of solid-earth deformation and geoid changes due to large-scale mass redistribution processes. Observations of sea and

  2. Assessment of robustness and significance of climate change signals for an ensemble of distribution-based scaled climate projections

    DEFF Research Database (Denmark)

    Seaby, Lauren Paige; Refsgaard, J.C.; Sonnenborg, T.O.

    2013-01-01

    An ensemble of 11 regional climate model (RCM) projections are analysed for Denmark from a hydrological modelling inputs perspective. Two bias correction approaches are applied: a relatively simple monthly delta change (DC) method and a more complex daily distribution-based scaling (DBS) method...
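
    As an illustration of the simpler of the two approaches (the monthly delta change method for precipitation; the distribution-based scaling is more involved and not shown), a minimal sketch with synthetic pandas series follows. Function and variable names are hypothetical.

    ```python
    # Monthly delta change for precipitation: perturb the observed daily series by the
    # ratio of scenario to control monthly means from the climate model. Synthetic data.
    import numpy as np
    import pandas as pd

    def delta_change_precip(obs_daily, rcm_control_daily, rcm_scenario_daily):
        ctrl = rcm_control_daily.groupby(rcm_control_daily.index.month).mean()
        scen = rcm_scenario_daily.groupby(rcm_scenario_daily.index.month).mean()
        factors = (scen / ctrl).reindex(obs_daily.index.month).to_numpy()
        return obs_daily * factors            # one multiplicative factor per calendar month

    idx = pd.date_range("1991-01-01", "2000-12-31", freq="D")
    rng = np.random.default_rng(1)
    obs = pd.Series(rng.gamma(0.5, 6.0, len(idx)), index=idx)
    ctl = pd.Series(rng.gamma(0.5, 5.0, len(idx)), index=idx)
    scn = pd.Series(rng.gamma(0.5, 5.5, len(idx)), index=idx)
    print(delta_change_precip(obs, ctl, scn).groupby(idx.month).mean().round(2))
    ```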

  3. Calculating Clinically Significant Change: Applications of the Clinical Global Impressions (CGI) Scale to Evaluate Client Outcomes in Private Practice

    Science.gov (United States)

    Kelly, Peter James

    2010-01-01

    The Clinical Global Impressions (CGI) scale is a therapist-rated measure of client outcome that has been widely used within the research literature. The current study aimed to develop reliable and clinically significant change indices for the CGI, and to demonstrate its application in private psychological practice. Following the guidelines…
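
    The study's exact indices are not reproduced in the abstract; the usual starting point for "reliable and clinically significant change" is the Jacobson-Truax reliable change index, sketched here with placeholder reliability and SD values.

    ```python
    # Jacobson-Truax reliable change index (RCI); inputs below are illustrative only.
    import math

    def reliable_change_index(pre, post, sd_baseline, reliability):
        sem = sd_baseline * math.sqrt(1.0 - reliability)   # standard error of measurement
        s_diff = math.sqrt(2.0) * sem                      # SE of the difference score
        return (post - pre) / s_diff

    rci = reliable_change_index(pre=5, post=3, sd_baseline=1.2, reliability=0.80)
    print(rci, abs(rci) > 1.96)    # |RCI| > 1.96 -> change unlikely to be measurement error
    ```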

  4. Bias correction method for climate change impact assessment at a basin scale

    Science.gov (United States)

    Nyunt, C.; Jaranilla-sanchez, P. A.; Yamamoto, A.; Nemoto, T.; Kitsuregawa, M.; Koike, T.

    2012-12-01

    Climate change impact studies are mainly based on general circulation models (GCMs), and such studies play an important role in defining suitable adaptation strategies for a resilient environment in basin-scale management. For this purpose, this study summarizes how to select appropriate GCMs so as to reduce the uncertainty in the analysis. This was applied to the Pampanga, Angat and Kaliwa rivers in Luzon Island, the main island of the Philippines; these three river basins play important roles in irrigation water supply and as municipal water sources for Metro Manila. According to the GCM scores for both the seasonal evolution of the Asian summer monsoon and the spatial correlation and root mean squared error of atmospheric variables over the region, six GCMs were finally chosen. Next, we develop a complete, efficient and comprehensive statistical bias correction scheme covering extreme events, normal rainfall and the frequency of dry periods. Due to the coarse resolution and parameterization schemes of GCMs, underestimation of extreme rainfall, too many rain days with low intensity and poor representation of local seasonality are known biases of GCMs. Extreme rainfall has unusual characteristics and should be treated specifically; estimated maximum extreme rainfall is crucial for the planning and design of infrastructure in a river basin. Developing countries have limited technical, financial and management resources for implementing adaptation measures and need detailed information on drought and flood for the near future. Traditionally, the analysis of extremes has been based on annual maximum series (AMS) fitted to a Gumbel or lognormal distribution; the drawback is the loss of the second, third, etc., largest rainfall events. Another approach is the partial duration series (PDS), constructed using the values above a selected threshold, which permits more than one event per year. The generalized Pareto distribution (GPD) has been used to model the PDS, i.e. the series of excesses over a threshold
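
    As a sketch of the peaks-over-threshold idea described above (not the study's implementation), the snippet below fits a generalized Pareto distribution to excesses over a high threshold using synthetic daily rainfall; threshold selection diagnostics are omitted.

    ```python
    # Peaks-over-threshold with a GPD fit; rainfall series here is synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    daily_rain = rng.gamma(0.4, 8.0, size=20 * 365)        # 20 years of synthetic rainfall

    threshold = np.quantile(daily_rain, 0.95)
    excesses = daily_rain[daily_rain > threshold] - threshold

    # Fit the GPD to the excesses with the location fixed at zero.
    shape, loc, scale = stats.genpareto.fit(excesses, floc=0)

    # Example use: a high quantile of daily rainfall implied by the fitted tail.
    print(threshold + stats.genpareto.ppf(0.999, shape, loc=loc, scale=scale))
    ```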

  5. Scaling-Stimulated Salivary Antioxidant Changes and Oral-Health Behavior in an Evaluation of Periodontal Treatment Outcomes

    Directory of Open Access Journals (Sweden)

    Po-Sheng Yang

    2014-01-01

    Full Text Available Aim. Our goal was to investigate associations among scaling-stimulated changes in salivary antioxidants, oral-health-related behaviors and attitudes, and periodontal treatment outcomes. Materials and Methods. Thirty periodontitis patients with at least 6 pockets with pocket depths of >5 mm and more than 16 functional teeth were enrolled in the study. Patients were divided into three groups: an abandoned group (AB group), a nonprogress outcome group (NP group), and an effective treatment group (ET group). Nonstimulated saliva was collected before and after scaling to determine superoxide dismutase (SOD) and the total antioxidant capacity (TAOC). Results. Salivary SOD following scaling significantly increased from 83.09 to 194.30 U/g protein in patients who had irregular dental visit patterns (<1 visit per year). After scaling, the TAOC was significantly higher in patients who had regular dental visits than in patients who had irregular dental visits (3.52 versus 0.70 mmole/g protein, P<0.01). The scaling-stimulated increase in SOD was related to a higher severity of periodontitis in the NP group, while the scaling-stimulated increase in the TAOC was inversely related to the severity of periodontitis in the AB group. Conclusions. These results demonstrate the importance of scaling-stimulated salivary antioxidants as prognostic biomarkers of periodontal treatment.

  6. Landscape-and regional-scale shifts in forest composition under climate change in the Central Hardwood Region of the United States

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Frank R. Thompson; Jacob S. Fraser; William D. Dijak

    2016-01-01

    Tree species distribution and abundance are affected by forces operating at multiple scales. Niche and biophysical process models have been commonly used to predict climate change effects at regional scales; however, these models have limited capability to include site-scale population dynamics and landscape-scale disturbance and dispersal. We applied a landscape...

  7. Behaviour change counselling--how do I know if I am doing it well? The development of the Behaviour Change Counselling Scale (BCCS).

    Science.gov (United States)

    Vallis, Michael

    2013-02-01

    The purpose of this article is to operationalize behaviour change counselling skills (motivation enhancement, behaviour modification, emotion management) that facilitate self-management support activities and to evaluate the psychometric properties of an expert rater scale, the Behaviour Change Counselling Scale (BCCS). Twenty-one healthcare providers with varying levels of behaviour change counselling training interviewed a simulated patient. Videotapes were independently rated by 3 experts on 2 occasions over 6 months. Data on item/subscale characteristics, interrater and test-retest reliability, and preliminary data on construct reliability were reported. All items of the BCCS performed well, with the exception of 3 that were dropped due to infrequent endorsement. Most subscales showed strong psychometric properties. Interrater and test-retest reliability coefficients were uniformly high. Competency scores improved significantly from pre- to posttraining. Behaviour change counselling skills to guide lifestyle interventions can be operationalized and assessed in a reliable and valid manner. The BCCS can be used to guide clinical training in lifestyle counselling by operationalizing the component skills and providing feedback on the skill level achieved. Further research is needed to establish cut scores for competency and to assess scale construct and criterion validity. Copyright © 2013 Canadian Diabetes Association. Published by Elsevier Inc. All rights reserved.
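
    As a rough illustration of the reliability analysis described above, the sketch below estimates interrater and test-retest reliability from synthetic ratings by two hypothetical raters. A published psychometric study would typically report intraclass correlations; plain Pearson correlations are used here only as a simple stand-in.

    ```python
    # Illustrative sketch (not the BCCS study's code): interrater and test-retest
    # reliability from expert ratings, assuming two raters score the same 21
    # videotaped interviews on two occasions. Data below are synthetic.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    true_skill = rng.normal(50, 10, size=21)             # latent competency per provider
    rater_a_t1 = true_skill + rng.normal(0, 3, size=21)  # rater A, occasion 1
    rater_b_t1 = true_skill + rng.normal(0, 3, size=21)  # rater B, occasion 1
    rater_a_t2 = true_skill + rng.normal(0, 3, size=21)  # rater A, occasion 2 (6 months later)

    interrater_r, _ = pearsonr(rater_a_t1, rater_b_t1)   # agreement between raters
    test_retest_r, _ = pearsonr(rater_a_t1, rater_a_t2)  # stability over time
    print(f"interrater r = {interrater_r:.2f}, test-retest r = {test_retest_r:.2f}")
    ```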

  8. Timing of millennial-scale climate change in Antarctica and Greenland during the last glacial period

    DEFF Research Database (Denmark)

    Blunier, T; Brook, E J

    2001-01-01

    A precise relative chronology for Greenland and West Antarctic paleotemperature is extended to 90,000 years ago, based on correlation of atmospheric methane records from the Greenland Ice Sheet Project 2 and Byrd ice cores. Over this period, the onset of seven major millennial-scale warmings in A... This pattern provides further evidence for the operation of a "bipolar see-saw" in air temperatures and an oceanic teleconnection between the hemispheres on millennial time scales...

  9. Using a personal watercraft for monitoring bathymetric changes at storm scale

    NARCIS (Netherlands)

    Van Son, S.T.J.; Lindenbergh, R.C.; De Schipper, M.A.; De Vries, S.; Duijnmayer, K.

    2009-01-01

    Monitoring and understanding coastal processes is important for the Netherlands since the most densely populated areas are situated directly behind the coastal defense. Traditionally, bathymetric changes are monitored at annual intervals, although nowadays it is understood that most dramatic changes

  10. Wave dispersion of carbon nanotubes conveying fluid supported on linear viscoelastic two-parameter foundation including thermal and small-scale effects

    Science.gov (United States)

    Sina, Nima; Moosavi, Hassan; Aghaei, Hosein; Afrand, Masoud; Wongwises, Somchai

    2017-01-01

    In this paper, for the first time, a nonlocal Timoshenko beam model is employed to study the wave dispersion of a fluid-conveying single-walled carbon nanotube on a viscoelastic Pasternak foundation under high and low temperature changes. In addition, the phase and group velocities for the nanotube are discussed. The influences of the Winkler and Pasternak moduli, homogeneous temperature change, steady flow velocity and the damping factor of the viscoelastic foundation on the wave dispersion of carbon nanotubes are investigated. It was observed that fluid-conveying carbon nanotubes exhibit normal dispersion. Moreover, including the viscoelastic foundation increases the wave frequencies.
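
    For reference, the phase and group velocities discussed above are the standard quantities derived from a dispersion relation; the block below states only these generic definitions and is not the paper's specific nonlocal Timoshenko dispersion relation.

    ```latex
    % Generic definitions assumed here (not the paper's specific dispersion relation):
    % for a wave solution with angular frequency \omega and wavenumber k obtained
    % from a dispersion relation \omega = \omega(k),
    \[
      v_p = \frac{\omega}{k}, \qquad v_g = \frac{\mathrm{d}\omega}{\mathrm{d}k},
    \]
    % and "normal dispersion" is the regime in which v_g < v_p.
    ```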

  11. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
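
    As a reminder of the well-known direction cited in the abstract, a discrete Chebyshev (minimax) approximation problem can be written as a linear program by introducing a bound variable t. The block below is the textbook construction, not the paper's converse reduction.

    ```latex
    \[
      \min_{x \in \mathbb{R}^n} \; \max_{1 \le i \le m} \bigl| a_i^{\top} x - b_i \bigr|
      \quad\Longleftrightarrow\quad
      \begin{array}{ll}
        \min_{x,\,t} & t \\
        \text{s.t.}  & -t \;\le\; a_i^{\top} x - b_i \;\le\; t, \quad i = 1, \dots, m.
      \end{array}
    \]
    ```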

  12. Sea-level changes on multiple spatial scales: estimates and contributing processes

    NARCIS (Netherlands)

    Frederikse, T.

    2018-01-01

    Being one of the major consequences of anthropogenic climate change, sea level rise forms a threat for many coastal areas and their inhabitants. Because all processes that cause sea-level changes have a spatially-varying fingerprint, local sea-level changes deviate substantially from the global

  13. Age-related changes in the plasticity and toughness of human cortical bone at multiple length-scales

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, Elizabeth A.; Schaible, Eric; Bale, Hrishikesh; Barth, Holly D.; Tang, Simon Y.; Reichert, Peter; Busse, Bjoern; Alliston, Tamara; Ager III, Joel W.; Ritchie, Robert O.

    2011-08-10

    The structure of human cortical bone evolves over multiple length-scales, from its basic constituents of collagen and hydroxyapatite at the nanoscale to osteonal structures at near-millimeter dimensions, which together provide the basis for its mechanical properties. To resist fracture, bone’s toughness is derived intrinsically through plasticity (e.g., fibrillar sliding) at structural scales typically below a micron and extrinsically (i.e., during crack growth) through mechanisms (e.g., crack deflection/bridging) generated at larger structural scales. Biological factors such as aging lead to a markedly increased fracture risk, which is often associated with an age-related loss in bone mass (bone quantity). However, we find that age-related structural changes can significantly degrade the fracture resistance (bone quality) over multiple length-scales. Using in situ small-/wide-angle x-ray scattering/diffraction to characterize sub-micron structural changes, and synchrotron x-ray computed tomography and in situ fracture-toughness measurements in the scanning electron microscope to characterize effects at micron scales, we show how these age-related structural changes at differing size scales degrade both the intrinsic and extrinsic toughness of bone. Specifically, we attribute the loss in toughness to increased non-enzymatic collagen cross-linking, which suppresses plasticity at nanoscale dimensions, and to an increased osteonal density, which limits the potency of crack-bridging mechanisms at micron scales. The link between these processes is that the increased stiffness of the cross-linked collagen requires energy to be absorbed by “plastic” deformation at higher structural levels, which occurs by the process of microcracking.

  14. Recent Regional Climate State and Change - Derived through Downscaling Homogeneous Large-scale Components of Re-analyses

    Science.gov (United States)

    Von Storch, H.; Klehmet, K.; Geyer, B.; Li, D.; Schubert-Frisius, M.; Tim, N.; Zorita, E.

    2015-12-01

    Global re-analyses suffer from inhomogeneities because they process data from observing networks that were still developing. However, the large-scale component of such re-analyses is mostly homogeneous; additional observational data in most cases improve the description of regional details rather than of the large-scale state. Therefore, the concept of downscaling may be applied to complement the large-scale state of the re-analyses homogeneously with regional detail, wherever the condition of homogeneity of the large scales is fulfilled. Technically, this can be done with a regional climate model, or with a global climate model that is constrained on the large scale by spectral nudging. This approach has been developed and tested for the region of Europe, and a skillful representation of regional risks, in particular marine risks, was identified. While the data density in Europe is considerably better than in most other regions of the world, even here insufficient spatial and temporal coverage limits risk assessments; therefore, downscaled data sets are frequently used by off-shore industries. We have also run this system in regions with reduced or absent data coverage, such as the Lena catchment in Siberia, the Yellow Sea/Bo Hai region in East Asia, and Namibia and the adjacent Atlantic Ocean. A global (large-scale constrained) simulation has also been run. It turns out that a spatially detailed reconstruction of the state and change of climate over the past three to six decades is feasible for any region of the world. The different data sets are archived and may be freely used for scientific purposes. Of course, before application, a careful analysis of their quality for the intended application is needed, as unexpected changes in the quality of the description of the large-scale driving states sometimes occur.
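
    The sketch below illustrates the spectral-nudging idea in a minimal form: only the low-wavenumber Fourier components of a model field are relaxed toward the driving re-analysis, while smaller scales are left free. The cutoff wavenumber, relaxation strength and synthetic fields are assumptions, not the configuration of the downscaling system described above.

    ```python
    # Conceptual sketch of spectral nudging (not the authors' model code): relax only
    # the large-scale (low-wavenumber) Fourier components of a regional-model field
    # toward the driving re-analysis, leaving small scales free to develop.
    import numpy as np

    def spectral_nudge(model_field, reanalysis_field, n_large=4, alpha=0.1):
        """Return model_field nudged toward reanalysis_field on large scales only.

        n_large : keep wavenumbers |k| <= n_large as the "large-scale" part
        alpha   : relaxation strength per call (0 = no nudging, 1 = full replacement)
        """
        fm = np.fft.fft2(model_field)
        fr = np.fft.fft2(reanalysis_field)

        ky = np.fft.fftfreq(model_field.shape[0]) * model_field.shape[0]
        kx = np.fft.fftfreq(model_field.shape[1]) * model_field.shape[1]
        large_scale = (np.abs(ky)[:, None] <= n_large) & (np.abs(kx)[None, :] <= n_large)

        fm[large_scale] += alpha * (fr[large_scale] - fm[large_scale])  # relax large scales
        return np.real(np.fft.ifft2(fm))

    # Example: nudge a synthetic 2-D field toward a smoother "re-analysis" state.
    rng = np.random.default_rng(2)
    model = rng.normal(size=(64, 64))
    reanalysis = np.zeros((64, 64))
    nudged = spectral_nudge(model, reanalysis)
    ```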

  15. Temporal stability and rates of post-depositional change in geochemical signatures of brown trout Salmo trutta scales.

    Science.gov (United States)

    Ryan, D; Shephard, S; Kelly, F L

    2016-09-01

    This study investigates temporal stability in the scale microchemistry of brown trout Salmo trutta in feeder streams of a large heterogeneous lake catchment and rates of change after migration into the lake. Laser-ablation inductively coupled plasma mass spectrometry was used to quantify the elemental concentrations of Na, Mg, Mn, Cu, Zn, Ba and Sr in archived (1997-2002) scales of juvenile S. trutta collected from six major feeder streams of Lough Mask, County Mayo, Ireland. Water element:Ca ratios within these streams were determined for the fish sampling period and for a later period (2013-2015). Salmo trutta scale Sr and Ba concentrations were significantly (P < 0·05) correlated with stream water sample Sr:Ca and Ba:Ca ratios respectively from both periods, indicating multi-annual stability in scale and water elemental signatures. Discriminant analysis of scale chemistries correctly classified 91% of sampled juvenile S. trutta to their stream of origin using a cross-validated classification model. This model was used to test whether assumed post-depositional change in scale element concentrations reduced correct natal-stream classification of S. trutta in successive years after migration into Lough Mask. Fish residing in the lake for 1-3 years could be reliably classified to their most likely natal stream, but the probability of correct classification diminished strongly with longer lake residence. Use of scale chemistry to identify natal streams of lake S. trutta should focus on recent migrants, but may not require contemporary water chemistry data. © 2016 The Fisheries Society of the British Isles.
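
    The classification step described above can be illustrated with a small cross-validated linear discriminant analysis. The sketch below uses synthetic element concentrations and scikit-learn; it is not the study's actual analysis, and the sample sizes and group structure are assumptions.

    ```python
    # Illustrative sketch (not the study's analysis code): cross-validated linear
    # discriminant analysis assigning juvenile trout to a natal stream from scale
    # element concentrations. Data and labels below are synthetic.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_fish = 120
    elements = ["Na", "Mg", "Mn", "Cu", "Zn", "Ba", "Sr"]
    stream = rng.integers(0, 6, size=n_fish)                         # six feeder streams
    X = rng.normal(size=(n_fish, len(elements))) + stream[:, None]   # stream-specific signatures

    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, X, stream, cv=5)                   # cross-validated accuracy
    print(f"mean cross-validated classification accuracy: {scores.mean():.2f}")
    ```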

  16. Volume changes at macro- and nano-scale in epoxy resins studied by PALS and PVT experimental techniques

    Energy Technology Data Exchange (ETDEWEB)

    Somoza, A. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina) and CICPBA, Pinto 399, B7000GHG Tandil (Argentina)]. E-mail: asomoza@exa.unicen.edu.ar; Salgueiro, W. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina); Goyanes, S. [LPMPyMC, Depto. de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pabellon I, 1428 Buenos Aires (Argentina); Ramos, J. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain); Mondragon, I. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain)

    2007-02-15

    A systematic study on changes in the volumes at macro- and nano-scale in epoxy systems cu