WorldWideScience

Sample records for linear scale change

  1. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop was that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large-scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover: at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  2. Linear scaling of density functional algorithms

    International Nuclear Information System (INIS)

    Stechel, E.B.; Feibelman, P.J.; Williams, A.R.

    1993-01-01

    An efficient density functional algorithm (DFA) that scales linearly with system size will revolutionize electronic structure calculations. Density functional calculations are reliable and accurate in determining many condensed matter and molecular ground-state properties. However, because current DFAs, including methods related to that of Car and Parrinello, scale with the cube of the system size, density functional studies are not routinely applied to large systems. Linear scaling is achieved by constructing functions that are both localized and fully occupied, thereby eliminating the need to calculate global eigenfunctions. It is, however, widely believed that exponential localization requires the existence of an energy gap between the occupied and unoccupied states. Despite this, the authors demonstrate that linear scaling can still be achieved for metals. Using a linear scaling algorithm, they have explicitly constructed localized, almost fully occupied orbitals for the quintessential metallic system, jellium. The algorithm is readily generalizable to any system geometry and Hamiltonian. They will discuss the conceptual issues involved, convergence properties and scaling for their new algorithm.

  3. Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.

    Science.gov (United States)

    Cawkwell, M J; Niklasson, Anders M N

    2012-10-07

    Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
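
The extended Lagrangian scheme cited above propagates an auxiliary electronic variable alongside the nuclei so that self-consistent-field optimization can be shortcut without systematic energy drift. A minimal sketch of the core Verlet-style update is below; it is schematic only (the published integrators add a weak dissipation term, and the value of kappa here is illustrative, not taken from the paper):

```python
import numpy as np

def xlbomd_step(p_aux, p_aux_prev, p_scf, kappa=1.8):
    """Verlet-style update of the auxiliary density variable in extended
    Lagrangian Born-Oppenheimer MD: the auxiliary variable oscillates
    harmonically around the self-consistent density p_scf. Schematic;
    practical integrators add dissipation to suppress numerical noise."""
    return 2.0 * p_aux - p_aux_prev + kappa * (p_scf - p_aux)

# The same update applies elementwise to full (sparse) density matrices.
```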

  4. Universal Linear Scaling of Permeability and Time for Heterogeneous Fracture Dissolution

    Science.gov (United States)

    Wang, L.; Cardenas, M. B.

    2017-12-01

    Fractures change dynamically over geological time scales due to mechanical deformation and chemical reactions. However, the latter mechanism remains poorly understood with respect to expanding fractures, where flow and reactive transport are positively coupled, i.e., as a fracture expands, so does its permeability (k) and thus the flow and reactive transport through it. To unravel this coupling, we consider a self-enhancing process in which fracture expansion is caused by acidic fluid, i.e., CO2-saturated brine dissolving a calcite fracture. We rigorously derive, for the first time, a theory showing that fracture permeability increases linearly with time [Wang and Cardenas, 2017]. To validate this theory, we resort to direct simulation, solving the Navier-Stokes and advection-diffusion equations with a mesh that moves with the dynamic dissolution process in two-dimensional (2D) fractures. We find that k first increases slowly until the dissolution front breaks through the outlet boundary, at which point we observe a rapid increase in k, i.e., the linear time-dependence of k sets in. The theory agrees well with numerical observations across a broad range of Peclet and Damkohler numbers in homogeneous and heterogeneous 2D fractures. Moreover, the linear scaling relationship between k and time matches experimental observations of dissolution in three-dimensional (3D) fractures. To further attest to our theory's universality for 3D heterogeneous fractures across a broad range of roughness and correlation lengths of the aperture field, we develop a depth-averaged model that simulates the process-based reactive transport. The simulation results show that, regardless of the wide variety of dissolution patterns, such as the presence of dissolution fingers and preferential dissolution paths, the linear scaling relationship between k and time holds. Our theory sheds light on predicting permeability evolution in many geological settings when the self
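
The headline result is that fracture permeability k grows linearly with time once the dissolution front breaks through. A toy illustration of that regime change follows; the function names, the pre-breakthrough form, and all coefficients are placeholders, not the paper's derived expressions:

```python
def permeability(t, t_b, k0, slope):
    """Toy piecewise model of fracture permeability under dissolution:
    slow growth before breakthrough at time t_b, then the linear-in-time
    regime k = k_b + slope * (t - t_b). Placeholder functional forms."""
    if t < t_b:
        return k0 * (1.0 + 0.1 * t / t_b)  # slow initial increase (placeholder)
    k_b = 1.1 * k0                          # permeability at breakthrough
    return k_b + slope * (t - t_b)
```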

  5. Cosmological large-scale structures beyond linear theory in modified gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bernardeau, Francis; Brax, Philippe, E-mail: francis.bernardeau@cea.fr, E-mail: philippe.brax@cea.fr [CEA, Institut de Physique Théorique, 91191 Gif-sur-Yvette Cédex (France)

    2011-06-01

    We consider the effect of modified gravity on the growth of large-scale structures at second order in perturbation theory. We show that modified gravity models changing the linear growth rate of fluctuations are also bound to change, although mildly, the mode coupling amplitude in the density and reduced velocity fields. We present explicit formulae which describe this effect. We then focus on models of modified gravity involving a scalar field coupled to matter, in particular chameleons and dilatons, where it is shown that there exists a transition scale around which the existence of an extra scalar degree of freedom induces significant changes in the coupling properties of the cosmic fields. We obtain the amplitude of this effect for realistic dilaton models at the tree-order level for the bispectrum, finding them to be comparable in amplitude to those obtained in the DGP and f(R) models.

  6. Parameter Scaling in Non-Linear Microwave Tomography

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Talcoth, Oskar

    2012-01-01

    Non-linear microwave tomographic imaging of the breast is a challenging computational problem. The breast is heterogeneous and contains several high-contrast and lossy regions, resulting in large differences in the measured signal levels. This implies that special care must be taken when the imaging problem is formulated. Under such conditions, microwave imaging systems will most often be considerably more sensitive to changes in the electromagnetic properties in certain regions of the breast. The result is that the parameters might not be reconstructed correctly in the less sensitive regions... introduced as a measure of the sensitivity. The scaling of the parameters is shown to improve performance of the microwave imaging system when applied to reconstruction of images from 2-D simulated data and measurement data.

  7. Frequency scaling of linear super-colliders

    International Nuclear Information System (INIS)

    Mondelli, A.; Chernin, D.; Drobot, A.; Reiser, M.; Granatstein, V.

    1986-06-01

    The development of electron-positron linear colliders in the TeV energy range will be facilitated by the development of high-power rf sources at frequencies above 2856 MHz. Present S-band technology, represented by the SLC, would require a length in excess of 50 km per linac to accelerate particles to energies above 1 TeV. By raising the rf driving frequency, the rf breakdown limit is increased, thereby allowing the length of the accelerators to be reduced. Currently available rf power sources set the realizable gradient limit in an rf linac at frequencies above S-band. This paper presents a model for the frequency scaling of linear colliders, with luminosity scaled in proportion to the square of the center-of-mass energy. Since wakefield effects are the dominant deleterious effect, a separate single-bunch simulation model is described which calculates the evolution of the beam bunch with specified wakefields, including the effects of using programmed phase positioning and Landau damping. The results presented here have been obtained for a SLAC structure, scaled in proportion to wavelength

  8. Linear Polarization Properties of Parsec-Scale AGN Jets

    Directory of Open Access Journals (Sweden)

    Alexander B. Pushkarev

    2017-12-01

    We used 15 GHz multi-epoch Very Long Baseline Array (VLBA) polarization sensitive observations of 484 sources within a time interval 1996–2016 from the MOJAVE program, and also from the NRAO data archive. We have analyzed the linear polarization characteristics of the compact core features and regions downstream, and their changes along and across the parsec-scale active galactic nuclei (AGN) jets. We detected a significant increase of fractional polarization with distance from the radio core along the jet as well as towards the jet edges. Compared to quasars, BL Lacs have a higher degree of polarization and exhibit more stable electric vector position angles (EVPAs) in their core features and a better alignment of the EVPAs with the local jet direction. The latter is accompanied by a higher degree of linear polarization, suggesting that compact bright jet features might be strong transverse shocks, which enhance magnetic field regularity by compression.

  9. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  10. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique can be applied. In particular, we propose to use the low-cost globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
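
The abstract does not reproduce the paper's non-quadratic convex function, so as a generic stand-in, here is the spectral (Barzilai-Borwein) gradient iteration applied to the ordinary least-squares objective 0.5*||Ax - b||^2. The SPG method used in the paper additionally projects onto convex constraints and uses a non-monotone line search; this sketch shows only the spectral-step idea:

```python
import numpy as np

def spg_least_squares(A, b, x0=None, max_iter=1000, tol=1e-8):
    """Minimize 0.5*||Ax - b||^2 with Barzilai-Borwein (spectral) steps.
    A generic stand-in for the paper's objective, which is a different,
    non-quadratic convex function not given in the abstract."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    g = A.T @ (A @ x - b)          # gradient of the quadratic objective
    alpha = 1.0                    # initial step length
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0  # spectral step
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x
```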

  11. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.

  12. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  13. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  14. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods.

  15. An {Mathematical expression} iteration bound primal-dual cone affine scaling algorithm for linear programming

    NARCIS (Netherlands)

    J.F. Sturm; J. Zhang (Shuzhong)

    1996-01-01

    In this paper we introduce a primal-dual affine scaling method. The method uses a search-direction obtained by minimizing the duality gap over a linearly transformed conic section. This direction neither coincides with known primal-dual affine scaling directions (Jansen et al., 1993;

  16. Polarization properties of linearly polarized parabolic scaling Bessel beams

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Mengwen; Zhao, Daomu, E-mail: zhaodaomu@yahoo.com

    2016-10-07

    The intensity profiles for the dominant polarization, cross polarization, and longitudinal components of modified parabolic scaling Bessel beams with linear polarization are investigated theoretically. The transverse intensity distributions of the three electric components are intimately connected to the topological charge. In particular, the intensity patterns of the cross polarization and longitudinal components near the apodization plane reflect the sign of the topological charge. - Highlights: • We investigated the polarization properties of modified parabolic scaling Bessel beams with linear polarization. • We studied the evolution of transverse intensity profiles for the three components of these beams. • The intensity patterns of the cross polarization and longitudinal components can reflect the sign of the topological charge.

  17. Novel algorithm of large-scale simultaneous linear equations

    International Nuclear Information System (INIS)

    Fujiwara, T; Hoshi, T; Yamamoto, S; Sogabe, T; Zhang, S-L

    2010-01-01

    We review our recently developed methods of solving large-scale simultaneous linear equations and applications to electronic structure calculations both in one-electron theory and many-electron theory. This is the shifted COCG (conjugate orthogonal conjugate gradient) method based on the Krylov subspace, and the most important issue for applications is the shift equation and the seed switching method, which greatly reduce the computational cost. The applications to nano-scale Si crystals and the double orbital extended Hubbard model are presented.
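
The central trick behind shifted Krylov solvers such as the shifted COCG method is that the Krylov subspace K_m(H, b) is invariant under shifts of H, so the systems (sigma*I - H)x = b for many shifts sigma can share a single basis. A sketch of that principle via explicit Arnoldi follows; the actual shifted COCG uses short recurrences and never stores the basis, so names and structure here are illustrative only:

```python
import numpy as np

def krylov_shifted_solve(H, b, shifts, m=30):
    """Solve (sigma*I - H) x = b for several shifts sigma from ONE Krylov
    basis, exploiting the shift invariance of K_m(H, b). Illustrates the
    principle behind shifted COCG, not its memory-lean recursion."""
    n = len(b)
    m = min(m, n)
    # Arnoldi: orthonormal basis Q and the projected matrix T = Q^T H Q
    Q = np.zeros((n, m + 1))
    T = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = H @ Q[:, j]
        for i in range(j + 1):
            T[i, j] = Q[:, i] @ w
            w = w - T[i, j] * Q[:, i]
        T[j + 1, j] = np.linalg.norm(w)
        if T[j + 1, j] < 1e-12:      # invariant subspace found
            m = j + 1
            break
        Q[:, j + 1] = w / T[j + 1, j]
    Qm, Tm = Q[:, :m], T[:m, :m]
    rhs = np.zeros(m)
    rhs[0] = np.linalg.norm(b)
    # One small m-by-m solve per shift, reusing the same basis
    return {s: Qm @ np.linalg.solve(s * np.eye(m) - Tm, rhs) for s in shifts}
```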

  18. Linear-scaling evaluation of the local energy in quantum Monte Carlo

    International Nuclear Information System (INIS)

    Austin, Brian; Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Lester, William A. Jr.

    2006-01-01

    For atomic and molecular quantum Monte Carlo calculations, most of the computational effort is spent in the evaluation of the local energy. We describe a scheme for reducing the computational cost of the evaluation of the Slater determinants and correlation function for the correlated molecular orbital (CMO) ansatz. A sparse representation of the Slater determinants makes possible efficient evaluation of molecular orbitals. A modification to the scaled distance function facilitates a linear scaling implementation of the Schmidt-Moskowitz-Boys-Handy (SMBH) correlation function that preserves the efficient matrix multiplication structure of the SMBH function. For the evaluation of the local energy, these two methods lead to asymptotic linear scaling with respect to the molecule size

  19. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

    At scales of ≈ 10 days (the lifetime of planetary scale structures), there is a drastic transition from high frequency weather to low frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts, the weather is a high frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two-parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts, and these show some skill even at decadal scales. We also compare
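
The long memory that SLIM exploits is concrete: for fractional Gaussian noise the autocovariance decays so slowly that distant past values still carry predictive weight. A minimal sketch of a memory-based one-step predictor is below; it conditions on the observed past via the standard fGn autocovariance, which illustrates the idea but is not the paper's innovations formulation:

```python
import numpy as np

def fgn_autocov(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise at lag k."""
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + abs(k - 1) ** (2 * H))

def forecast_fgn(x, H):
    """Minimum mean-square one-step forecast for fGn with Hurst exponent H,
    conditioning on the observed past x (oldest value first). For H > 0.5
    the weights on old values stay appreciable: the long memory at work."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Covariance among past values, and between the next value and the past
    R = np.array([[fgn_autocov(i - j, H) for j in range(n)] for i in range(n)])
    c = np.array([fgn_autocov(n - i, H) for i in range(n)])
    return np.linalg.solve(R, c) @ x
```

For H = 0.5 the noise is white and the forecast collapses to zero; for H near 1 a persistent past strongly shapes the forecast.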

  20. A simplified density matrix minimization for linear scaling self-consistent field theory

    International Nuclear Information System (INIS)

    Challacombe, M.

    1999-01-01

    A simplified version of the Li, Nunes and Vanderbilt [Phys. Rev. B 47, 10891 (1993)] and Daw [Phys. Rev. B 47, 10895 (1993)] density matrix minimization is introduced that requires four fewer matrix multiplies per minimization step relative to previous formulations. The simplified method also exhibits superior convergence properties, such that the bulk of the work may be shifted to the quadratically convergent McWeeny purification, which brings the density matrix to idempotency. Both orthogonal and nonorthogonal versions are derived. The AINV algorithm of Benzi, Meyer, and Tuma [SIAM J. Sci. Comp. 17, 1135 (1996)] is introduced to linear scaling electronic structure theory, and found to be essential in transformations between orthogonal and nonorthogonal representations. These methods have been developed with an atom-blocked sparse matrix algebra that achieves sustained floating-point operation rates as high as 50% of theoretical peak, and implemented in the MondoSCF suite of linear scaling SCF programs. For the first time, linear scaling Hartree-Fock theory is demonstrated with three-dimensional systems, including water clusters and estane polymers. The nonorthogonal minimization is shown to be uncompetitive with minimization in an orthonormal representation. An early onset of linear scaling is found for both minimal and double zeta basis sets, and crossovers with a highly optimized eigensolver are achieved. Calculations with up to 6000 basis functions are reported. The scaling of errors with system size is investigated for various levels of approximation.
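
The McWeeny purification mentioned above maps a nearly idempotent matrix to an exactly idempotent one by iterating P -> 3P^2 - 2P^3, which drives eigenvalues quadratically toward 0 or 1. A minimal dense-matrix sketch follows (the paper's implementation uses atom-blocked sparse algebra, which is what makes the step linear scaling):

```python
import numpy as np

def mcweeny_purify(p, max_iter=50, tol=1e-12):
    """Iterate P -> 3P^2 - 2P^3 until idempotent (P^2 = P). Eigenvalues
    in (0.5, 1] flow to 1 and those in [0, 0.5) flow to 0, so occupied
    and virtual subspaces separate cleanly. Dense illustration only."""
    for _ in range(max_iter):
        p2 = p @ p
        p_new = 3.0 * p2 - 2.0 * (p2 @ p)
        if np.linalg.norm(p_new - p) < tol:
            return p_new
        p = p_new
    return p
```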

  1. Turbulence Spreading into Linearly Stable Zone and Transport Scaling

    International Nuclear Information System (INIS)

    Hahm, T.S.; Diamond, P.H.; Lin, Z.; Itoh, K.; Itoh, S.-I.

    2003-01-01

    We study the simplest problem of turbulence spreading corresponding to the spatio-temporal propagation of a patch of turbulence from a region where it is locally excited to a region of weaker excitation, or even local damping. A single model equation for the local turbulence intensity I(x, t) includes the effects of local linear growth and damping, spatially local nonlinear coupling to dissipation, and spatial scattering of turbulence energy induced by nonlinear coupling. In the absence of dissipation, the front propagation into the linearly stable zone occurs with the property of rapid progression at small t, followed by slower subdiffusive progression at late times. The turbulence radial spreading into the linearly stable zone reduces the turbulent intensity in the linearly unstable zone, and introduces an additional dependence on rho* (= rho_i/a) into the turbulent intensity and the transport scaling. These are in broad, semi-quantitative agreement with a number of global gyrokinetic simulation results with and without zonal flows. The front propagation stops when the radial flux of fluctuation energy from the linearly unstable region is balanced by local dissipation in the linearly stable region.
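
A model of the kind described, growth, local nonlinear saturation, and intensity-dependent spreading, can be caricatured by the one-dimensional equation dI/dt = gamma(x)*I - alpha*I^2 + d/dx(D0*I*dI/dx). A single explicit time step of that caricature is sketched below; every coefficient and the precise form of each term are illustrative choices, not taken from the paper:

```python
import numpy as np

def spread_step(I, gamma, dt, dx, alpha=1.0, D0=1.0):
    """One explicit Euler step of a schematic turbulence-spreading model:
    local growth/damping gamma*I, local nonlinear saturation -alpha*I^2,
    and spreading via the intensity-dependent flux D0*I*dI/dx."""
    # Flux on cell faces, with I averaged to the face
    flux = D0 * 0.5 * (I[1:] + I[:-1]) * np.diff(I) / dx
    div = np.zeros_like(I)
    div[1:-1] = np.diff(flux) / dx           # divergence on interior cells
    return np.clip(I + dt * (gamma * I - alpha * I ** 2 + div), 0.0, None)
```

With gamma > 0 in one region and gamma < 0 in another, repeated steps propagate a front of finite intensity into the damped (linearly stable) zone.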

  2. Graph-based linear scaling electronic structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.; Swart, Pieter J.; Germann, Timothy C.; Bock, Nicolas [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Mniszewski, Susan M.; Mohd-Yusof, Jamal; Wall, Michael E.; Djidjev, Hristo [Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Rubensson, Emanuel H. [Division of Scientific Computing, Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala (Sweden)

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  3. ONETEP: linear-scaling density-functional theory with plane-waves

    International Nuclear Information System (INIS)

    Haynes, P D; Mostofi, A A; Skylaris, C-K; Payne, M C

    2006-01-01

    This paper provides a general overview of the methodology implemented in onetep (Order-N Electronic Total Energy Package), a parallel density-functional theory code for large-scale first-principles quantum-mechanical calculations. The distinctive features of onetep are linear-scaling in both computational effort and resources, obtained by making well-controlled approximations which enable simulations to be performed with plane-wave accuracy. Titanium dioxide clusters of increasing size designed to mimic surfaces are studied to demonstrate the accuracy and scaling of onetep.

  4. Scaling Climate Change Communication for Behavior Change

    Science.gov (United States)

    Rodriguez, V. C.; Lappé, M.; Flora, J. A.; Ardoin, N. M.; Robinson, T. N.

    2014-12-01

    Ultimately, effective climate change communication results in a change in behavior, whether individual, household, or collective actions within communities. We describe two efforts to promote climate-friendly behavior via climate communication and behavior change theory. Importantly, these efforts are designed to scale climate communication principles focused on behavior change rather than solely emphasizing climate knowledge or attitudes. Both cases are embedded in rigorous evaluations (randomized controlled trial and quasi-experimental) of primary and secondary outcomes as well as supplementary analyses that have implications for program refinement and program scaling. In the first case, the Girl Scouts "Girls Learning Environment and Energy" (GLEE) trial is scaling the program via a Massive Open Online Course (MOOC) for Troop Leaders to teach the effective home electricity and food and transportation energy reduction programs. The second case, the Alliance for Climate Education (ACE) Assembly Program, is advancing the already-scaled assembly program by using communication principles to further engage youth and their families and communities (school and local communities) in individual and collective actions. Scaling of each program uses online learning platforms, social media, "behavior practice" videos, mastery practice exercises, virtual feedback and virtual social engagement to advance climate-friendly behavior change. All of these communication practices aim to simulate and advance in-person train-the-trainers technologies. As part of this presentation we outline scaling principles derived from these two climate change communication and behavior change programs.

  5. Linear-scaling quantum mechanical methods for excited states.

    Science.gov (United States)

    Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua

    2012-05-21

The poor scaling of many existing quantum mechanical methods with respect to system size hinders their application to large systems. In this tutorial review, we focus on the latest research on linear-scaling, or O(N), quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states fall into two categories: time-domain and frequency-domain methods. The former solve the dynamics of the electronic system in real time, while the latter directly evaluate the electronic response in the frequency domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states; it has been implemented in both the time and frequency domains. The O(N) time-domain methods also include an approach that solves the time-dependent Kohn-Sham (TDKS) equation using non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of convergence problems. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used; however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice.
Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and
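The Chebyshev expansion for time integration mentioned above is, at its core, a polynomial fit of exp(-iHt) on the rescaled spectrum of H, applied through matrix-vector products only. The sketch below is a generic dense-matrix illustration, not the LDM implementation; the exact diagonalization used for the spectral bounds is a stand-in for the cheap estimates an O(N) code would use.

```python
import numpy as np

def chebyshev_propagate(H, psi, t, order=40):
    """Approximate exp(-i*H*t) @ psi by a Chebyshev expansion.

    H must be Hermitian; its spectrum is rescaled to [-1, 1] first,
    so that only matrix-vector products with H are needed.
    """
    evals = np.linalg.eigvalsh(H)          # stand-in for cheap spectral bounds
    emin, emax = evals[0], evals[-1]
    a, b = (emax - emin) / 2.0, (emax + emin) / 2.0
    Hs = (H - b * np.eye(len(H))) / a      # rescaled Hamiltonian

    # Chebyshev coefficients of f(x) = exp(-i*(a*x+b)*t) via Gauss-Chebyshev nodes
    N = order + 1
    theta = np.pi * (np.arange(N) + 0.5) / N
    fvals = np.exp(-1j * (a * np.cos(theta) + b) * t)
    c = 2.0 / N * np.array([np.sum(fvals * np.cos(k * theta)) for k in range(N)])
    c[0] /= 2.0

    # Three-term recurrence T_{k+1} = 2*Hs*T_k - T_{k-1}, accumulated on psi
    t0, t1 = psi, Hs @ psi
    out = c[0] * t0 + c[1] * t1
    for k in range(2, N):
        t0, t1 = t1, 2.0 * (Hs @ t1) - t0
        out += c[k] * t1
    return out
```

Because the Chebyshev coefficients decay super-exponentially once the order exceeds the spectral width times the time step, a modest order reproduces the exact propagator to near machine precision.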

  6. Offset linear scaling for H-mode confinement

    International Nuclear Information System (INIS)

    Miura, Yukitoshi; Tamai, Hiroshi; Suzuki, Norio; Mori, Masahiro; Matsuda, Toshiaki; Maeda, Hikosuke; Takizuka, Tomonori; Itoh, Sanae; Itoh, Kimitaka.

    1992-01-01

An offset linear scaling for the H-mode confinement time is examined based on single-parameter scans on the JFT-2M experiment. A regression study is carried out for various devices with an open divertor configuration, such as JET, DIII-D and JFT-2M. The scaling law for the thermal energy is given in MKSA units as W_th = 0.0046 R^1.9 I_P^1.1 B_T^0.91 √A + 2.9×10^-8 I_P^1.0 R^0.87 √A P, where R is the major radius, I_P is the plasma current, B_T is the toroidal magnetic field, A is the average mass number of plasma and neutral beam particles, and P is the heating power. This fit has a root mean square error (RMSE) similar to that of the power-law scaling. The result is also compared with the H-mode in other configurations. The W_th of the closed divertor H-mode on ASDEX is somewhat better than that of the open divertor H-mode. (author)
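The "offset linear" form can be made concrete with a small helper that evaluates the scaling law as quoted (our reading of the exponents; the test values are purely illustrative, not from the paper). The defining property is that the increment of W_th per unit of heating power is constant.

```python
def h_mode_thermal_energy(R, Ip, Bt, A, P):
    """Offset linear H-mode scaling in MKSA units, as read from the abstract:
    W_th = 0.0046 R^1.9 Ip^1.1 Bt^0.91 sqrt(A) + 2.9e-8 Ip^1.0 R^0.87 sqrt(A) P
    R: major radius [m], Ip: plasma current [A], Bt: toroidal field [T],
    A: average mass number, P: heating power [W]. Returns W_th in joules.
    """
    offset = 0.0046 * R**1.9 * Ip**1.1 * Bt**0.91 * A**0.5
    slope = 2.9e-8 * Ip**1.0 * R**0.87 * A**0.5
    return offset + slope * P
```

Unlike a pure power law, doubling P here does not double the incremental confinement: the stored energy rises linearly from a finite offset.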

  7. Linear-scaling implementation of the direct random-phase approximation

    International Nuclear Information System (INIS)

    Kállay, Mihály

    2015-01-01

    We report the linear-scaling implementation of the direct random-phase approximation (dRPA) for closed-shell molecular systems. As a bonus, linear-scaling algorithms are also presented for the second-order screened exchange extension of dRPA as well as for the second-order Møller–Plesset (MP2) method and its spin-scaled variants. Our approach is based on an incremental scheme which is an extension of our previous local correlation method [Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The approach extensively uses local natural orbitals to reduce the size of the molecular orbital basis of local correlation domains. In addition, we also demonstrate that using natural auxiliary functions [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], the size of the auxiliary basis of the domains and thus that of the three-center Coulomb integral lists can be reduced by an order of magnitude, which results in significant savings in computation time. The new approach is validated by extensive test calculations for energies and energy differences. Our benchmark calculations also demonstrate that the new method enables dRPA calculations for molecules with more than 1000 atoms and 10 000 basis functions on a single processor

  8. Common Nearly Best Linear Estimates of Location and Scale ...

    African Journals Online (AJOL)

Common nearly best linear estimates of location and scale parameters of normal and logistic distributions, which are based on complete samples, are considered. Here, the population from which the samples are drawn is either a normal or a logistic population, or a fusion of both distributions, and the estimates are computed ...

  9. Three-point phase correlations: A new measure of non-linear large-scale structure

    CERN Document Server

    Wolstenhulme, Richard; Obreschkow, Danail

    2015-01-01

    We derive an analytical expression for a novel large-scale structure observable: the line correlation function. The line correlation function, which is constructed from the three-point correlation function of the phase of the density field, is a robust statistical measure allowing the extraction of information in the non-linear and non-Gaussian regime. We show that, in perturbation theory, the line correlation is sensitive to the coupling kernel F_2, which governs the non-linear gravitational evolution of the density field. We compare our analytical expression with results from numerical simulations and find a very good agreement for separations r>20 Mpc/h. Fitting formulae for the power spectrum and the non-linear coupling kernel at small scales allow us to extend our prediction into the strongly non-linear regime. We discuss the advantages of the line correlation relative to standard statistical measures like the bispectrum. Unlike the latter, the line correlation is independent of the linear bias. Furtherm...

  10. Thresholds, switches and hysteresis in hydrology from the pedon to the catchment scale: a non-linear systems theory

    Directory of Open Access Journals (Sweden)

    2007-01-01

Hysteresis is a rate-independent non-linearity that is expressed through thresholds, switches, and branches. Exceedance of a threshold, or the occurrence of a turning point in the input, switches the output onto a particular output branch. Rate-independent branching on a very large set of switches with non-local memory is the central concept in the new definition of hysteresis; hysteretic loops are a special case. A self-consistent mathematical description of hydrological systems with hysteresis demands a new non-linear systems theory of adequate generality. The goal of this paper is to establish such a theory and to show how this may be done. Two results are presented: a conceptual model for the hysteretic soil-moisture characteristic at the pedon scale and a hysteretic linear reservoir at the catchment scale, both based on the Preisach model. A result of particular significance is the demonstration that the independent domain model of the soil-moisture characteristic due to Childs, Poulavassilis, Mualem and others is equivalent to the Preisach hysteresis model of non-linear systems theory, a result reminiscent of the reduction of the theory of the unit hydrograph to linear systems theory in the 1950s. A significant reduction in the number of model parameters is also achieved. The new theory implies a change in modelling paradigm.
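Both results rest on the Preisach construction: a superposition of elementary relay operators indexed by their up/down switching thresholds. The toy sketch below (uniform relay weights, not the paper's calibrated soil-moisture model) reproduces the branching behaviour described: the output at a given input differs between the ascending and descending branches.

```python
import numpy as np

def preisach(input_series, n=30):
    """Discrete Preisach model: a uniform weight over relay operators
    R_{a,b} (switch up when input >= a, down when input <= b), a >= b on [-1, 1].
    Rate-independent: the output depends only on the sequence of turning points.
    """
    grid = np.linspace(-1.0, 1.0, n)
    A, B = np.meshgrid(grid, grid, indexing="ij")  # A = up-threshold, B = down-threshold
    mask = A >= B                                   # admissible relays (Preisach triangle)
    state = -np.ones((n, n))                        # all relays start "down"
    out = []
    for u in input_series:
        state[(u >= A) & mask] = 1.0                # sweep relays up
        state[(u <= B) & mask] = -1.0               # sweep relays down
        out.append(state[mask].mean())
    return np.array(out)
```

Driving the input -1 → 0 → 1 → 0 visits the same input value 0 on two different branches and produces two different outputs, which is exactly the non-local memory the abstract describes.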

  11. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Directory of Open Access Journals (Sweden)

    Dongxu Ren

    2016-04-01

A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect, which periodically superposes the light intensity of different pitch locations in the mask to produce a consistent energy distribution at a specific wavelength; the accuracy of a linear scale can thereby be improved by averaging pitches over different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, the static positioning error, and the lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear-scale projection lithography splicing process. Analysis confirmed that increasing the number of repeated exposures of a single stripe improves accuracy, as does adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any 10 locations along 1 m, and the whole-length accuracy of the linear scale is better than 1 µm/m.

  12. On the interaction of small-scale linear waves with nonlinear solitary waves

    Science.gov (United States)

    Xu, Chengzhu; Stastna, Marek

    2017-04-01

In the study of environmental and geophysical fluid flows, linear wave theory is well developed and has been applied to phenomena of various length and time scales. However, due to the nonlinear nature of fluid flows, in many cases results predicted by linear theory do not agree with observations. One such case is internal wave dynamics. While small-amplitude wave motion may be approximated by linear theory, large-amplitude waves tend to be solitary-like, and when a wave is highly nonlinear, even weakly nonlinear theories fail to predict its properties correctly. We study the interaction of small-scale linear waves with nonlinear solitary waves using highly accurate pseudospectral simulations that begin with a fully nonlinear solitary wave and a train of small-amplitude waves initialized from linear waves. The solitary wave then interacts with the linear waves through either an overtaking collision or a head-on collision. During the collision, there is a net energy transfer from the linear wave train to the solitary wave, resulting in an increase in the kinetic energy carried by the solitary wave and a phase shift of the solitary wave with respect to a freely propagating solitary wave. At the same time, the linear waves are greatly reduced in amplitude. The percentage of energy transferred depends primarily on the wavelength of the linear waves. We found that after one full collision cycle the longest waves may retain as much as 90% of their initial kinetic energy, while the shortest waves lose almost all of it. We also found that a head-on collision is more efficient at destroying the linear waves than an overtaking collision. On the other hand, the initial amplitude of the linear waves has very little impact on the percentage of energy that can be transferred to the solitary wave. Because of the nonlinearity of the solitary wave, these results provide us some insight into wave-mean flow

  13. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

  The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed of which kind of attitude is appropriate, from an ethical point of view, when dealing with large-scale changes like these. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  14. Non-linear elastic thermal stress analysis with phase changes

    International Nuclear Information System (INIS)

    Amada, S.; Yang, W.H.

    1978-01-01

A non-linear elastic thermal stress analysis with temperature-induced phase changes in the material is presented. An infinite plate (or body) with a circular hole (or tunnel) is subjected to thermal loading on its inner surface, with the peak temperature around the hole exceeding the melting point of the material. The non-linear diffusion equation is solved numerically using the finite difference method. The material properties change rapidly at temperatures where changes of crystal structure and the solid-liquid transition occur. The elastic stresses induced by the transient non-homogeneous temperature distribution are calculated. The stresses change markedly when the phase changes occur, and residual stresses remain in the plate after one cycle of thermal loading. (Auth.)
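The numerical approach described, finite differences on a diffusion equation whose properties change sharply with temperature, can be sketched in one dimension. The geometry, boundary values and conductivity law below are hypothetical stand-ins for the paper's plate-with-hole problem, chosen only to show the conservative explicit scheme.

```python
import numpy as np

def solve_nonlinear_diffusion(nx=51, nt=2000, L=1.0, t_end=0.05):
    """Explicit finite differences for  dT/dt = d/dx( k(T) dT/dx )  with a
    conductivity k(T) that rises steeply near a 'transition' temperature,
    and fixed temperatures T=1 at x=0 and T=0 at x=L.
    """
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    dt = t_end / nt                                # satisfies dt <= dx^2 / (2*k_max)
    k = lambda T: 1.0 + 4.0 / (1.0 + np.exp(-20.0 * (T - 0.5)))  # smooth step in k
    T = np.zeros(nx)
    T[0] = 1.0
    for _ in range(nt):
        km = k(0.5 * (T[1:] + T[:-1]))             # conductivity at cell faces
        flux = km * (T[1:] - T[:-1]) / dx          # conservative face fluxes
        T[1:-1] += dt / dx * (flux[1:] - flux[:-1])
        T[0], T[-1] = 1.0, 0.0                     # Dirichlet boundaries
    return x, T
```

The conservative face-flux form keeps the scheme monotone under the stability limit, so the computed profile stays bounded between the boundary temperatures, as a diffusion solution must.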

  15. Scaling laws for e+/e- linear colliders

    International Nuclear Information System (INIS)

    Delahaye, J.P.; Guignard, G.; Raubenheimer, T.; Wilson, I.

    1999-01-01

Design studies of a future TeV e+e− Linear Collider (TLC) are presently being made by five major laboratories within the framework of a world-wide collaboration. A figure of merit is defined which enables an objective comparison of these different designs; it is shown to depend on only a small number of parameters. General scaling laws for the main beam and linac parameters are derived and prove very effective as guidelines for optimizing the linear collider design. By adopting appropriate parameters for beam stability, the figure of merit becomes nearly independent of the accelerating gradient and the RF frequency of the accelerating structures. In spite of the strong dependence of the wake fields on frequency, the single-bunch emittance blow-up during acceleration along the linac is also shown to be independent of the RF frequency when equivalent trajectory correction schemes are used. In this situation, beam acceleration using high-frequency structures becomes very advantageous because it enables high accelerating fields, which reduces the overall length and consequently the total cost of the linac. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  16. Linear arrangement of nano-scale magnetic particles formed in Cu-Fe-Ni alloys

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sung, E-mail: k3201s@hotmail.co [Department of Materials Engineering (SEISAN), Yokohama National University, 79-5 Tokiwadai, Hodogayaku, Yokohama, 240-8501 (Japan); Takeda, Mahoto [Department of Materials Engineering (SEISAN), Yokohama National University, 79-5 Tokiwadai, Hodogayaku, Yokohama, 240-8501 (Japan); Takeguchi, Masaki [Advanced Electron Microscopy Group, National Institute for Materials Science (NIMS), Sakura 3-13, Tsukuba, 305-0047 (Japan); Bae, Dong-Sik [School of Nano and Advanced Materials Engineering, Changwon National University, Gyeongnam, 641-773 (Korea, Republic of)

    2010-04-30

The structural evolution of nano-scale magnetic particles formed in Cu-Fe-Ni alloys on isothermal annealing at 878 K has been investigated by means of transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDS), electron energy-loss spectroscopy (EELS) and field-emission scanning electron microscopy (FE-SEM). Phase decomposition of Cu-Fe-Ni occurred after an as-quenched specimen received a short anneal, and nano-scale magnetic particles formed randomly in the Cu-rich matrix. A striking feature was that two or more nano-scale particles with a cubic shape aligned linearly along <1,0,0> directions, a trend that became more pronounced at later stages of the precipitation. Large numbers of <1,0,0> linear chains of precipitates extended in three dimensions in the late stages of annealing.

  17. Self-consistent field theory based molecular dynamics with linear system-size scaling

    Energy Technology Data Exchange (ETDEWEB)

    Richters, Dorothee [Institute of Mathematics and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 9, D-55128 Mainz (Germany); Kühne, Thomas D., E-mail: kuehne@uni-mainz.de [Institute of Physical Chemistry and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 7, D-55128 Mainz (Germany); Technical and Macromolecular Chemistry, University of Paderborn, Warburger Str. 100, D-33098 Paderborn (Germany)

    2014-04-07

We present an improved field-theoretic approach to the grand-canonical potential suitable for linear-scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand-canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear-scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from an incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.

  18. Linear Scaling Solution of the Time-Dependent Self-Consistent-Field Equations

    Directory of Open Access Journals (Sweden)

    Matt Challacombe

    2014-03-01

A new approach to solving the time-dependent self-consistent-field equations is developed, based on the double quotient formulation of Tsiper (2001, J. Phys. B). Dual-channel, quasi-independent non-linear optimization of these quotients is found to yield convergence rates approaching those of the best-case (single-channel) Tamm-Dancoff approximation. The formulation is variational with respect to matrix truncation, admitting linear-scaling solution of the matrix-eigenvalue problem, which is demonstrated for bulk excitons in a polyphenylene vinylene oligomer and a (4,3) carbon nanotube segment.

  19. CHANG-ES. IX. Radio scale heights and scale lengths of a consistent sample of 13 spiral galaxies seen edge-on and their correlations

    Science.gov (United States)

    Krause, Marita; Irwin, Judith; Wiegert, Theresa; Miskolczi, Arpad; Damas-Segovia, Ancor; Beck, Rainer; Li, Jiang-Tao; Heald, George; Müller, Peter; Stein, Yelena; Rand, Richard J.; Heesen, Volker; Walterbos, Rene A. M.; Dettmar, Ralf-Jürgen; Vargas, Carlos J.; English, Jayanne; Murphy, Eric J.

    2018-03-01

Aim. The vertical halo scale height is a crucial parameter for understanding the transport of cosmic-ray electrons (CRE) and their energy loss mechanisms in spiral galaxies. Until now, the radio scale height could only be determined for a few edge-on galaxies because of insufficient sensitivity at high resolution. Methods: We developed a sophisticated method for the scale height determination of edge-on galaxies. With this we determined the scale heights and radial scale lengths for a sample of 13 galaxies from the CHANG-ES radio continuum survey in two frequency bands. Results: The sample average values for the radio scale heights of the halo are 1.1 ± 0.3 kpc in C-band and 1.4 ± 0.7 kpc in L-band. From the frequency dependence analysis of the halo scale heights we found that the wind velocities (estimated using the adiabatic loss time) are above the escape velocity. We found that the halo scale heights increase linearly with the radio diameters. In order to exclude the diameter dependence, we defined a normalized scale height h˜, which is quite similar for all sample galaxies at both frequency bands and does not depend on the star formation rate or the magnetic field strength. However, h˜ shows a tight anticorrelation with the mass surface density. Conclusions: The sample galaxies with smaller scale lengths are more spherical in the radio emission, while those with larger scale lengths are flatter. The radio scale height depends mainly on the radio diameter of the galaxy. The sample galaxies are consistent with an escape-dominated radio halo with convective cosmic-ray propagation, indicating that galactic winds are a widespread phenomenon in spiral galaxies. While a higher star formation rate or star formation surface density does not lead to a higher wind velocity, we found for the first time observational evidence of a gravitational deceleration of the CRE outflow, i.e. a lowering of the wind velocity from the galactic disk.

  20. Polarized atomic orbitals for linear scaling methods

    Science.gov (United States)

    Berghold, Gerd; Parrinello, Michele; Hutter, Jürg

    2002-02-01

We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent of the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.
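One of the linear-scaling ingredients named above, purification of the density matrix, is easy to illustrate. The sketch below applies McWeeny's polynomial iteration in an orthonormal basis to a dense toy Hamiltonian; the paper's setting is nonorthogonal PAO bases with sparse matrices, and the eigensolve here merely stands in for cheap spectral-bound estimates.

```python
import numpy as np

def mcweeny_purify(H, n_occ, iters=30):
    """Purification sketch: start from a rough density guess built from H
    (shifted and scaled so occupied states map above 1/2) and iterate
    McWeeny's  P -> 3P^2 - 2P^3  until P is idempotent.  Only matrix
    multiplies are used, which is what makes such schemes attractive
    for sparse, linear-scaling codes.
    """
    n = len(H)
    evals = np.linalg.eigvalsh(H)                 # stand-in for spectral bounds
    mu = 0.5 * (evals[n_occ - 1] + evals[n_occ])  # chemical potential in the gap
    spread = evals[-1] - evals[0]
    P = 0.5 * np.eye(n) - (H - mu * np.eye(n)) / (2.0 * spread)  # eigenvalues in [0, 1]
    for _ in range(iters):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * (P2 @ P)             # drives eigenvalues to 0 or 1
    return P
```

Eigenvalues above 1/2 flow to 1 and those below flow to 0, so the iteration converges to the projector onto the occupied subspace with the correct electron count.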

  1. Reconnection Scaling Experiment (RSX): Magnetic Reconnection in Linear Geometry

    Science.gov (United States)

    Intrator, T.; Sovinec, C.; Begay, D.; Wurden, G.; Furno, I.; Werley, C.; Fisher, M.; Vermare, L.; Fienup, W.

    2001-10-01

The linear Reconnection Scaling Experiment (RSX) at LANL is a new experiment that creates MHD-relevant plasmas to study the physics of magnetic reconnection. The experiment can scale many relevant parameters because the guns that generate the plasma and current channels do not depend on equilibrium or force balance for startup. We describe the experiment and initial electrostatic and magnetic probe data. Two parallel current channels sweep down a long plasma column, and probe data accumulated over many shots give 3D movies of magnetic reconnection. Our first data aim to define an operating regime free from kink instabilities that might otherwise confuse the data and degrade shot repeatability. We compare this with two-fluid MHD NIMROD simulations of the single-current-channel kink stability boundary for a variety of experimental conditions.

  2. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
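The decomposition-plus-importance-sampling machinery is beyond a short sketch, but the core idea of hedging via Monte Carlo sampling can be shown on a toy two-stage problem: a newsvendor-style recourse model with hypothetical prices, where a grid search over the first-stage decision stands in for the LP solver.

```python
import numpy as np

def saa_newsvendor(cost=1.0, price=1.5, n_scenarios=4000, seed=1):
    """Sample-average approximation of a two-stage recourse problem:
    choose the order quantity x now; demand d is revealed later; the
    recourse action is to sell min(x, d).  We maximize the sample average
    of  -cost*x + price*min(x, d)  over a grid of candidate x values.
    """
    rng = np.random.default_rng(seed)
    d = rng.uniform(0.0, 100.0, n_scenarios)      # Monte Carlo demand scenarios
    grid = np.linspace(0.0, 100.0, 201)
    profits = [-cost * x + price * np.minimum(x, d).mean() for x in grid]
    return grid[int(np.argmax(profits))]
```

The deterministic model (plug in the mean demand of 50) over-orders; the stochastic solution hedges downward toward the critical fractile (price − cost)/price = 1/3, i.e. roughly 33 units for uniform demand on [0, 100], which is exactly the kind of contingency-aware answer the abstract motivates.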

  3. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  4. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    International Nuclear Information System (INIS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank; Valeev, Edward F.

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  5. Linear and Nonlinear Optical Properties of Micrometer-Scale Gold Nanoplates

    International Nuclear Information System (INIS)

    Liu Xiao-Lan; Peng Xiao-Niu; Yang Zhong-Jian; Li Min; Zhou Li

    2011-01-01

Micrometer-scale gold nanoplates have been synthesized in high yield through a polyol process. The morphology, crystal structure and linear optical extinction of the gold nanoplates have been characterized. These gold nanoplates are single-crystalline with triangular, truncated-triangular and hexagonal shapes, exhibiting strong surface plasmon resonance (SPR) extinction in the visible and near-infrared (NIR) region. The linear optical properties of the gold nanoplates are also investigated by theoretical calculations. We further investigate the nonlinear optical properties of the gold nanoplates in solution by the Z-scan technique. The nonlinear absorption (NLA) coefficient and nonlinear refraction (NLR) index are measured to be 1.18×10^2 cm/GW and −1.04×10^−3 cm^2/GW, respectively. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  6. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    Science.gov (United States)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

This paper investigates the consensus problem for a linear multi-agent system with fixed communication topology in the presence of intermittent communication, using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens over a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to the intermittent information transmissions. The time-scale theory provides a powerful tool to combine the continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
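A minimal simulation, assuming a path-graph topology and a periodic on/off communication schedule (both hypothetical, not the paper's protocol), illustrates the mixed continuous/discrete behaviour: states evolve under the graph Laplacian only while communication is on, yet the agents still converge to the average.

```python
import numpy as np

def intermittent_consensus(x0, steps=1200, dt=0.01, on_period=20, off_period=10):
    """Four agents on a path graph run  x' = -L x  (Euler-discretized) only
    during 'on' windows; states are frozen during 'off' windows.  A toy
    stand-in for consensus with intermittent information transmissions.
    """
    L = np.array([[ 1, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  1]], dtype=float)   # path-graph Laplacian
    x = np.array(x0, dtype=float)
    cycle = on_period + off_period
    for k in range(steps):
        if k % cycle < on_period:                   # communication window open
            x = x - dt * (L @ x)
    return x
```

Because the Laplacian is symmetric with zero row sums, the average state is invariant under every "on" step, so the agents converge to the mean of the initial conditions despite the communication gaps.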

  7. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and estimating scale. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.
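The abstract does not specify the filter's internals; as a generic illustration of FFT-based correlation filtering (a MOSSE-style single-sample filter, our assumption, not the paper's DLSSVM component), the sketch below trains a filter on a template and recovers a translation from the response peak.

```python
import numpy as np

def train_correlation_filter(template, sigma=2.0, lam=1e-4):
    """Single-sample MOSSE-style filter: the desired response is a Gaussian
    peak at the template centre; in the Fourier domain the filter is
    H = G * conj(F) / (F * conj(F) + lam),  with lam a regularizer.
    """
    h, w = template.shape
    yy, xx = np.mgrid[0:h, 0:w]
    g = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2.0 * sigma**2))
    F = np.fft.fft2(template)
    G = np.fft.fft2(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def detect(Hf, frame):
    """Correlation response via FFT; the argmax tracks the target's shift."""
    resp = np.real(np.fft.ifft2(Hf * np.fft.fft2(frame)))
    return tuple(int(i) for i in np.unravel_index(np.argmax(resp), resp.shape))
```

Running detection on a circularly shifted copy of the template moves the response peak by exactly the shift, which is why FFT-based filters make both translation and (applied over resampled patches) scale search cheap.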

  8. Mathematical models of non-linear phenomena, processes and systems: from molecular scale to planetary atmosphere

    CERN Document Server

    2013-01-01

This book consists of twenty-seven chapters, which can be divided into three large categories: articles focused on the mathematical treatment of non-linear problems, including the methodologies, algorithms and properties of analytical and numerical solutions to particular non-linear problems; theoretical and computational studies dedicated to the physics and chemistry of non-linear micro- and nano-scale systems, including molecular clusters, nano-particles and nano-composites; and papers focused on non-linear processes in medico-biological systems, including mathematical models of ferments, amino acids, blood fluids and polynucleic chains.

  9. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
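The central quantity behind such hierarchical models, the split of variance between the group level and the observation level, can be sketched with the classical one-way ANOVA intraclass correlation estimator (a simplified special case, not the HLM estimation algorithms the abstract refers to).

```python
import numpy as np

def intraclass_correlation(groups):
    """One-way ANOVA estimate of the ICC: the share of total variance lying
    between groups rather than within them, the quantity hierarchical
    linear models generalize with predictors at each level.
    `groups` is a list of equal-sized 1-D arrays, one per group.
    """
    k = len(groups)
    n = len(groups[0])
    means = np.array([g.mean() for g in groups])
    grand = np.concatenate(groups).mean()
    msb = n * np.sum((means - grand) ** 2) / (k - 1)                # between-group MS
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))  # within-group MS
    return (msb - msw) / (msb + (n - 1) * msw)
```

On data simulated with a group-level standard deviation of 2 and observation-level noise of 1, the true ICC is 4/(4+1) = 0.8, and the estimator recovers a value near it.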

  10. Linear trend and abrupt changes of climate indices in the arid region of northwestern China

    Science.gov (United States)

    Wang, Huaijun; Pan, Yingping; Chen, Yaning; Ye, Zhengwei

    2017-11-01

In recent years, climate extreme events have caused increasing direct economic and social losses in the arid region of northwestern China. Based on daily temperature and precipitation data from 1960 to 2010, this paper discusses the linear trends and abrupt changes of climate indices. The general evolution was obtained by the empirical orthogonal function (EOF), the Mann-Kendall test, and the distribution-free cumulative sum chart (CUSUM) test. The results are as follows: (1) The climate showed a warming trend at annual and seasonal scales, with all temperature indices exhibiting statistically significant changes. The warm indices increased, by 1.37 %days/decade for warm days (TX90p), 0.17 °C/decade for the warmest days (TXx) and 1.97 days/decade for the warm spell duration indicator (WSDI). The cold indices changed by −1.89 %days/decade, 0.65 °C/decade and −0.66 days/decade for cold nights (TN10p), coldest nights (TNn) and the cold spell duration indicator (CSDI), respectively. The precipitation indices also increased significantly, coupled with changes in magnitude (max 1-day precipitation amount, RX1day), frequency (rain days, R0.1), and duration (consecutive dry days, CDD). (2) Abrupt changes in the annual regional precipitation indices and the minimum temperature indices were observed around 1986, while those in the maximum temperature indices were observed around 1996. (3) EOF1 indicated a coherent distribution over the whole study area, and its principal component (PC1) showed a significant linear trend with an abrupt change, in accordance with the regional observation results. EOF2 and EOF3 show contrasts between the southern and northern, and between the eastern and western study areas, respectively, whereas no significant tendency was observed for their PCs. Hence, the climate indices have changed significantly, with linear trends and abrupt changes noted for all climate indices
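The Mann-Kendall test used above is simple to sketch. The following minimal implementation (without the tie-variance correction a production trend analysis would include) returns the S statistic and its normal-approximation Z score, where |Z| > 1.96 flags a significant monotonic trend at the 5% level.

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction):
    S = sum over i < j of sign(x_j - x_i); Z uses the continuity-corrected
    normal approximation with Var(S) = n(n-1)(2n+5)/18.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var) if s != 0 else 0.0
    return s, z
```

A strictly increasing series of 50 points gives S = 1225 and Z around 10, far past the 1.96 threshold, while an alternating series with no net trend stays well inside it.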

  11. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nanomaterial simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects that exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s on 147,456 processors of the Cray XT5 (Jaguar) at OLCF, has been run on 163,840 processors of the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we present the recent parallel performance results of this code, and apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  12. Canonical-ensemble extended Lagrangian Born-Oppenheimer molecular dynamics for the linear scaling density functional theory.

    Science.gov (United States)

    Hirakawa, Teruo; Suzuki, Teppei; Bowler, David R; Miyazaki, Tsuyoshi

    2017-10-11

    We discuss the development and implementation of a constant temperature (NVT) molecular dynamics scheme that combines the Nosé-Hoover chain thermostat with the extended Lagrangian Born-Oppenheimer molecular dynamics (BOMD) scheme, using a linear scaling density functional theory (DFT) approach. An integration scheme for this canonical-ensemble extended Lagrangian BOMD is developed and discussed in the context of the Liouville operator formulation. Linear scaling DFT canonical-ensemble extended Lagrangian BOMD simulations are tested on bulk silicon and silicon carbide systems to evaluate our integration scheme. The results show that the conserved quantity remains stable with no systematic drift even in the presence of the thermostat.

  13. Modeling Fire Occurrence at the City Scale: A Comparison between Geographically Weighted Regression and Global Linear Regression.

    Science.gov (United States)

    Song, Chao; Kwan, Mei-Po; Zhu, Jiping

    2017-04-08

    An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, namely geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects, with global linear regression models (LM) for modeling fire risk at the city scale. The results show that road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicates that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are clustered only in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus, governments can use the results to manage fire safety at the city scale.
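
    As a rough illustration of the locally weighted fitting behind GWR, the sketch below estimates a spatially varying slope with a Gaussian distance kernel. The transect, variables, and all parameter values are hypothetical, not the study's data:

```python
import math

def gwr_slope(u, coords, x, y, bandwidth):
    """Locally weighted slope at location u (single predictor), the core idea
    of GWR: observations are weighted by a Gaussian kernel of their distance
    to the regression point, so the coefficient may vary over space."""
    w = [math.exp(-0.5 * ((c - u) / bandwidth) ** 2) for c in coords]
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
    return sxy / sxx

# Hypothetical west-east transect where the effect of "road density" x on
# "fire risk" y grows with position s; a single global slope would average it.
coords = [i / 99 for i in range(100)]
x = [(i % 10) / 10 for i in range(100)]
y = [(1 + 2 * s) * xi for s, xi in zip(coords, x)]
west, east = gwr_slope(0.0, coords, x, y, 0.1), gwr_slope(1.0, coords, x, y, 0.1)
print(round(west, 2), round(east, 2))  # local slopes differ across the transect
```

A global LM would report one intermediate slope; the local estimates recover the west-east contrast, which is the heterogeneity the abstract discusses.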

  14. Hardy inequality on time scales and its application to half-linear dynamic equations

    Directory of Open Access Journals (Sweden)

    Řehák Pavel

    2005-01-01

    Full Text Available A time-scale version of the Hardy inequality is presented, which unifies and extends well-known Hardy inequalities in the continuous and in the discrete setting. An application in the oscillation theory of half-linear dynamic equations is given.
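
    For reference, the classical inequalities that the time-scale version unifies are the continuous and the discrete Hardy inequality (for p > 1, with f and the a_k nonnegative):

```latex
\int_0^\infty \Bigl(\frac{1}{x}\int_0^x f(t)\,dt\Bigr)^{p} dx
  \;\le\; \Bigl(\frac{p}{p-1}\Bigr)^{p} \int_0^\infty f(x)^{p}\,dx,
\qquad
\sum_{n=1}^{\infty} \Bigl(\frac{1}{n}\sum_{k=1}^{n} a_k\Bigr)^{p}
  \;\le\; \Bigl(\frac{p}{p-1}\Bigr)^{p} \sum_{n=1}^{\infty} a_n^{p}.
```

The constant (p/(p-1))^p is best possible in both settings; the time-scale formulation recovers each case by choosing the time scale as the reals or the integers.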

  15. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    International Nuclear Information System (INIS)

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
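
    The key quantity in such an error analysis is the standard error of a fitted slope, which scales inversely with the spread of the scanned points. A small stdlib sketch makes the end-points-versus-middle claim concrete (unit noise, hypothetical scan positions):

```python
import math

def slope_se(xs, sigma=1.0):
    """Standard error of an OLS slope under homoscedastic noise sigma:
    se = sigma / sqrt(sum((x - xbar)^2)). Spreading points toward the ends
    of the scanned range maximizes the denominator, shrinking the error."""
    xbar = sum(xs) / len(xs)
    return sigma / math.sqrt(sum((x - xbar) ** 2 for x in xs))

ends = [0, 0, 1, 1]          # extra points at the ends of the range
middle = [0, 0.5, 0.5, 1]    # extra points in the middle
print(slope_se(ends) < slope_se(middle))  # True
print(slope_se([0, 1, 0, 1]) < slope_se([0, 1]))  # True: more points help
```

This reproduces the letter's qualitative conclusions: adding points reduces the estimated error, and for few points the reduction is largest when they are added at the ends of the scan.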

  16. Non-linear regime shifts in Holocene Asian monsoon variability: potential impacts on cultural change and migratory patterns

    Science.gov (United States)

    Donges, J. F.; Donner, R. V.; Marwan, N.; Breitenbach, S. F. M.; Rehfeld, K.; Kurths, J.

    2015-05-01

    The Asian monsoon system is an important tipping element in Earth's climate with a large impact on human societies in the past and present. In light of the potentially severe impacts of present and future anthropogenic climate change on Asian hydrology, it is vital to understand the forcing mechanisms of past climatic regime shifts in the Asian monsoon domain. Here we use novel recurrence network analysis techniques for detecting episodes with pronounced non-linear changes in Holocene Asian monsoon dynamics recorded in speleothems from caves distributed throughout the major branches of the Asian monsoon system. A newly developed multi-proxy methodology explicitly considers dating uncertainties with the COPRA (COnstructing Proxy Records from Age models) approach and allows for detection of continental-scale regime shifts in the complexity of monsoon dynamics. Several epochs are characterised by non-linear regime shifts in Asian monsoon variability, including the periods around 8.5-7.9, 5.7-5.0, 4.1-3.7, and 3.0-2.4 ka BP. The timing of these regime shifts is consistent with known episodes of Holocene rapid climate change (RCC) and high-latitude Bond events. Additionally, we observe a previously rarely reported non-linear regime shift around 7.3 ka BP, a timing that matches the typical 1.0-1.5 ky return intervals of Bond events. A detailed review of previously suggested links between Holocene climatic changes in the Asian monsoon domain and the archaeological record indicates that, in addition to previously considered longer-term changes in mean monsoon intensity and other climatic parameters, regime shifts in monsoon complexity might have played an important role as drivers of migration, pronounced cultural changes, and the collapse of ancient human societies.

  17. Non-linear dielectric signatures of entropy changes in liquids subject to time dependent electric fields

    Energy Technology Data Exchange (ETDEWEB)

    Richert, Ranko [School of Molecular Sciences, Arizona State University, Tempe, Arizona 85287-1604 (United States)

    2016-03-21

    A model of non-linear dielectric polarization is studied in which the field-induced entropy change is the source of polarization-dependent retardation time constants. Numerical solutions for the susceptibilities of the system are obtained for parameters that represent the dynamic and thermodynamic behavior of glycerol. The calculations for high-amplitude sinusoidal fields show a significant enhancement of the steady-state loss for frequencies below that of the low-field loss peak. Also at relatively low frequencies, the third-harmonic susceptibility spectrum shows a “hump,” i.e., a maximum, with an amplitude that increases with decreasing temperature. Both of these non-linear effects are consistent with experimental evidence. While such features have been used to infer a temperature-dependent number of dynamically correlated particles, N_corr, the present result demonstrates that the third-harmonic susceptibility displays a peak with an amplitude that tracks the variation of the activation energy in a model that does not involve dynamical correlations or spatial scales.

  18. Recent development of linear scaling quantum theories in GAMESS

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Cheol Ho [Kyungpook National Univ., Daegu (Korea, Republic of)

    2003-06-01

    Linear scaling quantum theories are reviewed, focusing especially on the methods adopted in GAMESS. The three key translation equations of the fast multipole method (FMM) are deduced from the general polypolar expansions given earlier by Steinborn and Ruedenberg. Simplifications are introduced for the rotation-based FMM that lead to a very compact FMM formalism. The OPS (optimum parameter searching) procedure, a stable and efficient way of obtaining the optimum set of FMM parameters, is established with complete control over the tolerable error ε. In addition, a new parallel FMM algorithm requiring virtually no inter-node communication is suggested, which is suitable for the parallel construction of Fock matrices in electronic structure calculations.

  19. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    ... as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to the no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed ...
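
    IR-MAD itself is beyond a snippet, but the underlying idea of scoring standardized multivariate band differences so that change stands out against the no-change background can be sketched as follows. This is a simplified toy detector on synthetic two-band data, not the authors' algorithm:

```python
import random

random.seed(0)
n, n_changed = 1000, 50
# Toy co-registered "images": two bands per pixel, mostly unchanged
t1 = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
t2 = [(a + random.gauss(0, 0.05), b + random.gauss(0, 0.05)) for a, b in t1]
t2 = [(a + 3.0, b + 3.0) if i < n_changed else (a, b)
      for i, (a, b) in enumerate(t2)]

# Standardize the band differences and score each pixel by a chi-square-like
# sum of squares; large scores flag change against the no-change background.
diffs = [(a2 - a1, b2 - b1) for (a1, b1), (a2, b2) in zip(t1, t2)]

def standardize(col):
    m = sum(col) / len(col)
    sd = (sum((v - m) ** 2 for v in col) / len(col)) ** 0.5
    return [(v - m) / sd for v in col]

z1 = standardize([d[0] for d in diffs])
z2 = standardize([d[1] for d in diffs])
scores = [a * a + b * b for a, b in zip(z1, z2)]
top = sorted(range(n), key=lambda i: scores[i], reverse=True)[:n_changed]
print(sum(i < n_changed for i in top) / n_changed)  # hit rate among top scores
```

IR-MAD replaces the raw differences with canonical variates (maximally correlated linear band combinations), and the kernel versions do the same in a nonlinear feature space; the chi-square scoring step is analogous.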

  20. OBJECT-ORIENTED CHANGE DETECTION BASED ON MULTI-SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2016-06-01

    Full Text Available The change detection of remote sensing images means quantitatively analysing the change information and recognizing the change types of the surface coverage data in different time phases. With the appearance of high-resolution remote sensing images, object-oriented change detection methods have emerged. In this paper, we investigate a multi-scale approach for high-resolution images, which includes multi-scale segmentation, multi-scale feature selection and multi-scale classification. Experimental results show that this method has a clear advantage over the traditional single-scale method for change detection in high-resolution remote sensing images.

  1. A comparison of linear and logarithmic auditory tones in pulse oximeters.

    Science.gov (United States)

    Brown, Zoe; Edworthy, Judy; Sneyd, J Robert; Schlesinger, Joseph

    2015-11-01

    This study compared the ability of forty anaesthetists to judge absolute levels of oxygen saturation, direction of change, and size of change in saturation using auditory pitch and pitch difference in two laboratory-based studies that compared a linear pitch scale with a logarithmic scale. In the former the differences in saturation become perceptually closer as the oxygenation level becomes higher whereas in the latter the pitch differences are perceptually equivalent across the whole range of values. The results show that anaesthetist participants produce significantly more accurate judgements of both absolute oxygenation values and size of oxygenation level difference when a logarithmic, rather than a linear, scale is used. The line of best fit for the logarithmic function was also closer to x = y than for the linear function. The results of these studies can inform the development and standardisation of pulse oximetry tones in order to improve patient safety. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
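
    A minimal sketch of the two mappings under comparison, with hypothetical frequency endpoints (the paper does not specify these values): on the logarithmic scale each 1% step in saturation corresponds to a constant frequency ratio, i.e. a perceptually constant pitch interval, while on the linear scale equal steps become perceptually smaller as frequency rises:

```python
def linear_pitch(spo2, lo=80, hi=100, f_lo=300.0, f_hi=800.0):
    """Linear mapping: equal saturation steps give equal frequency
    differences. Endpoint frequencies here are hypothetical."""
    t = (spo2 - lo) / (hi - lo)
    return f_lo + t * (f_hi - f_lo)

def log_pitch(spo2, lo=80, hi=100, f_lo=300.0, f_hi=800.0):
    """Logarithmic mapping: equal saturation steps give equal frequency
    *ratios*, i.e. perceptually equal pitch intervals across the range."""
    t = (spo2 - lo) / (hi - lo)
    return f_lo * (f_hi / f_lo) ** t

# On the log scale the 90->91 and 98->99 steps sound the same size;
# on the linear scale the high-end step is a smaller frequency ratio.
print(round(log_pitch(91) / log_pitch(90), 4)
      == round(log_pitch(99) / log_pitch(98), 4))  # True
print(linear_pitch(91) / linear_pitch(90)
      > linear_pitch(99) / linear_pitch(98))  # True
```

This perceptual compression of the linear scale at high saturations is exactly the effect the study found to degrade absolute and relative judgements.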

  2. Design changes of device to investigation of alloys linear contraction and shrinkage stresses

    Directory of Open Access Journals (Sweden)

    J. Mutwil

    2009-07-01

    Full Text Available Some design changes are described for a device developed by the author for examining the linear contraction and the development of shrinkage stresses in metals and alloys during and after solidification. The changes focus on the design of the closure of the shrinkage test rod mould and simplify the procedure for mounting the thermocouples that measure the temperature of the shrinkage rod casting (at 6 points). Exemplary results on the development of linear contraction and shrinkage stresses in an Al-Si13.5% alloy are presented.

  3. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    Science.gov (United States)

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
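
    To make the LP setting concrete, here is a toy two-variable solver that works by enumerating vertices of the feasible region. This only illustrates the LP form; it is not the parametric simplex algorithm implemented in fastclime:

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Maximize c.x over {x : a.x <= b for (a, b) in constraints} by
    enumerating vertices (pairwise intersections of constraint boundaries).
    Only suitable for tiny 2-variable illustrations, not large-scale LPs."""
    best, best_val = None, float("-inf")
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel boundaries never meet in a vertex
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(a[0] * x + a[1] * y <= b + 1e-9 for a, b in constraints):
            val = c[0] * x + c[1] * y
            if val > best_val:
                best, best_val = (x, y), val
    return best, best_val

# maximize x + y subject to x <= 2, y <= 3, x + y <= 4, x >= 0, y >= 0
cons = [((1, 0), 2), ((0, 1), 3), ((1, 1), 4), ((-1, 0), 0), ((0, -1), 0)]
print(solve_lp_2d((1, 1), cons))  # optimum value 4 at a vertex such as (2, 2)
```

The simplex family of methods exploits the same fact, that an LP optimum lies at a vertex, but walks between adjacent vertices instead of enumerating them all, which is what makes large-scale problems tractable.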

  4. Energy harvesting with stacked dielectric elastomer transducers: Nonlinear theory, optimization, and linearized scaling law

    Science.gov (United States)

    Tutcuoglu, A.; Majidi, C.

    2014-12-01

    Using principles of damped harmonic oscillation with continuous media, we examine electrostatic energy harvesting with a "soft-matter" array of dielectric elastomer (DE) transducers. The array is composed of infinitely thin and deformable electrodes separated by layers of insulating elastomer. During vibration, it deforms longitudinally, resulting in a change in the capacitance and electrical enthalpy of the charged electrodes. Depending on the phase of electrostatic loading, the DE array can function as either an actuator that amplifies small vibrations or a generator that converts these external excitations into electrical power. Both cases are addressed with a comprehensive theory that accounts for the influence of viscoelasticity, dielectric breakdown, and electromechanical coupling induced by Maxwell stress. In the case of a linearized Kelvin-Voigt model of the dielectric, we obtain a closed-form estimate for the electrical power output and a scaling law for DE generator design. For the complete nonlinear model, we obtain the optimal electrostatic voltage input for maximum electrical power output.

  5. Scaling versus asymptotic scaling in the non-linear σ-model in 2D. Continuum version

    International Nuclear Information System (INIS)

    Flyvbjerg, H.

    1990-01-01

    The two-point function of the O(N)-symmetric non-linear σ-model in two dimensions is large-N expanded and renormalized, neglecting terms of O(1/N²). At finite cut-off, universal, analytical expressions relate the magnetic susceptibility and the dressed mass to the bare coupling. Removing the cut-off, a similar relation gives the renormalized coupling as a function of the mass gap. In the weak-coupling limit these relations reproduce the results of renormalization group improved weak-coupling perturbation theory to two-loop order. The constant left unknown when the renormalization group is integrated is determined here. The approach to asymptotic scaling is studied for various values of N. (orig.)

  6. Small-scale quantum information processing with linear optics

    International Nuclear Information System (INIS)

    Bergou, J.A.; Steinberg, A.M.; Mohseni, M.

    2005-01-01

    Full text: Photons are the ideal systems for carrying quantum information. Although performing large-scale quantum computation on optical systems is extremely demanding, non-scalable linear-optics quantum information processing may prove essential as part of quantum communication networks. In addition, efficient (scalable) linear-optical quantum computation proposals rely on the same optical elements. Here, by constructing multirail optical networks, we experimentally study two central problems in quantum information science, namely optimal discrimination between nonorthogonal quantum states, and controlling decoherence in quantum systems. Quantum mechanics forbids deterministic discrimination between nonorthogonal states. This is one of the central features of quantum cryptography, which leads to secure communications. Quantum state discrimination is an important primitive in quantum information processing, since it determines the limitations of a potential eavesdropper, and it has applications in quantum cloning and entanglement concentration. In this work, we experimentally implement generalized measurements in an optical system and demonstrate the first optimal unambiguous discrimination between three non-orthogonal states, with a success rate of 55%, to be compared with the 25% maximum achievable using projective measurements. Furthermore, we present the first realization of unambiguous discrimination between a pure state and a nonorthogonal mixed state. In a separate experiment, we demonstrate how decoherence-free subspaces (DFSs) may be incorporated into a prototype optical quantum algorithm. Specifically, we present an optical realization of the two-qubit Deutsch-Jozsa algorithm in the presence of random noise. By introducing localized turbulent airflow we produce a collective optical dephasing, leading to large error rates, and demonstrate that using DFS encoding, the error rate in the presence of decoherence can be reduced from 35% to essentially its pre

  7. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    Science.gov (United States)

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.

  8. Non-linear modelling of monthly mean vorticity time changes: an application to the western Mediterranean

    Directory of Open Access Journals (Sweden)

    M. Finizio

    Full Text Available Starting from a number of observables in the form of time-series of meteorological elements in various areas of the northern hemisphere, a model capable of fitting past records and predicting monthly vorticity time changes in the western Mediterranean is implemented. A powerful new statistical methodology, multivariate adaptive regression splines (MARS), is introduced in order to capture the non-linear dynamics of time-series representing the available 40-year history of the hemispheric circulation. The developed model is tested on a suitable independent data set. An ensemble forecast exercise is also carried out to check model stability with respect to the uncertainty of input quantities.

    Key words. Meteorology and atmospheric dynamics · General circulation ocean-atmosphere interactions · Synoptic-scale meteorology

  9. Linearity and Non-linearity of Photorefractive effect in Materials ...

    African Journals Online (AJOL)

    In this paper we study the linearity and non-linearity of the photorefractive effect in materials using the band transport model. For low light-beam intensities the change in the refractive index is proportional to the electric field for linear optics, while for non-linear optics the change in refractive index is directly proportional ...

  10. Towards TeV-scale electron-positron collisions: the Compact Linear Collider (CLIC)

    Science.gov (United States)

    Doebert, Steffen; Sicking, Eva

    2018-02-01

    The Compact Linear Collider (CLIC), a future electron-positron collider at the energy frontier, has the potential to change our understanding of the universe. Proposed to follow the Large Hadron Collider (LHC) programme at CERN, it is conceived for precision measurements as well as for searches for new phenomena.

  11. Non-linear temperature-dependent curvature of a phase change composite bimorph beam

    Science.gov (United States)

    Blonder, Greg

    2017-06-01

    Bimorph films curl in response to temperature. The degree of curvature typically varies in proportion to the difference in thermal expansion of the individual layers, and linearly with temperature. In many applications, such as controlling a thermostat, this gentle linear behavior is acceptable. In other cases, such as opening or closing a valve or latching a deployable column into place, an abrupt motion at a fixed temperature is preferred. To achieve this non-linear motion, we describe the fabrication and performance of a new bilayer structure we call a ‘phase change composite bimorph (PCBM)’. In a PCBM, one layer in the bimorph is a composite containing small inclusions of phase change materials. When the inclusions melt, their large (generally positive and  >1%) expansion coefficient induces a strong, reversible step function jump in bimorph curvature. The measured jump amplitude and thermal response is consistent with theory, and can be harnessed by a new class of actuators and sensors.
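
    The described step response can be caricatured with a simple mismatch-strain model. All numbers below are hypothetical, and the constant k lumps together the elastic and geometric factors of the usual bimorph curvature formula:

```python
def pcbm_curvature(T, T0=20.0, T_melt=60.0,
                   d_alpha=2e-5, melt_strain=0.01, loading=0.3, k=1.0):
    """Illustrative mismatch-strain model of a phase-change composite bimorph
    (PCBM): curvature tracks the differential strain between the two layers.
    Parameter values are hypothetical; k lumps the geometric and elastic
    factors of the standard bimorph formula into one constant."""
    strain = d_alpha * (T - T0)          # ordinary thermal mismatch (linear)
    if T >= T_melt:                      # inclusions melt: step expansion
        strain += loading * melt_strain  # ~1% expansion times volume fraction
    return k * strain

# Gentle linear rise below T_melt, abrupt reversible jump at the melting point
below, above = pcbm_curvature(59.9), pcbm_curvature(60.1)
print(above - below > 100 * (pcbm_curvature(59.9) - pcbm_curvature(58.9)))  # True
```

In this toy model a 0.2 °C interval across the melting point produces a curvature change two orders of magnitude larger than an equal interval below it, which is the qualitative step-function behavior the abstract reports.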

  12. Large-scale dynamo action due to α fluctuations in a linear shear flow

    Science.gov (United States)

    Sridhar, S.; Singh, Nishant K.

    2014-12-01

    We present a model of large-scale dynamo action in a shear flow that has stochastic, zero-mean fluctuations of the α parameter. This is based on a minimal extension of the Kraichnan-Moffatt model, to include a background linear shear and Galilean-invariant α-statistics. Using the first-order smoothing approximation we derive a linear integro-differential equation for the large-scale magnetic field, which is non-perturbative in the shearing rate S and the α-correlation time τ_α. The white-noise case, τ_α = 0, is solved exactly, and it is concluded that the necessary condition for dynamo action is identical to that of the Kraichnan-Moffatt model without shear; this is because white noise does not allow for memory effects, whereas shear needs time to act. To explore memory effects we reduce the integro-differential equation to a partial differential equation, valid for slowly varying fields when τ_α is small but non-zero. Seeking exponential modal solutions, we solve the modal dispersion relation and obtain an explicit expression for the growth rate as a function of the six independent parameters of the problem. A non-zero τ_α gives rise to new physical scales, and dynamo action is completely different from the white-noise case; e.g. even weak α fluctuations can give rise to a dynamo. We argue that, at any wavenumber, both Moffatt drift and shear always contribute to increasing the growth rate. Two examples are presented: (a) a Moffatt-drift dynamo in the absence of shear and (b) a shear dynamo in the absence of Moffatt drift.

  13. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    Science.gov (United States)

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  14. A review of downscaling procedures - a contribution to the research on climate change impacts at city scale

    Science.gov (United States)

    Smid, Marek; Costa, Ana; Pebesma, Edzer; Granell, Carlos; Bhattacharya, Devanjan

    2016-04-01

    Humankind is now predominantly urban-based, and the majority of continuing population growth will take place in urban agglomerations. Urban systems are not only major drivers of climate change, but also impact hot spots. Furthermore, climate change impacts are commonly managed at the city scale. Therefore, assessing climate change impacts on urban systems is a very relevant subject of research. Climate and its impacts on all levels (local, meso and global scale), as well as the inter-scale dependencies of those processes, should be the subject of detailed analysis. While global and regional projections of future climate are currently available, local-scale information is lacking. Hence, statistical downscaling methodologies represent a potentially efficient way to help close this gap. In general, methodological reviews cover downscaling procedures according to their application (e.g. downscaling for hydrological modelling). Some of the most recent and comprehensive studies, such as the ESSEM COST Action ES1102 (VALUE), use the concepts of Perfect Prog and MOS. Other classification schemes of downscaling techniques consider three main categories: linear methods, weather classifications and weather generators. Downscaling and climate modelling represent a multidisciplinary field, where researchers from various backgrounds intersect their efforts, resulting in specific terminology which may be somewhat confusing. For instance, Polynomial Regression (also called Surface Trend Analysis) is a statistical technique, yet in the context of spatial interpolation procedures it is commonly classified as a deterministic technique, while kriging approaches are classified as stochastic. Furthermore, the terms "statistical" and "stochastic" (frequently used as names of sub-classes in downscaling methodological reviews) are not always considered synonymous, even though both terms could be seen as identical since they are
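
    As an aside on the terminology discussed above, the deterministic "surface trend" technique amounts to an ordinary least-squares polynomial fit over coordinates. A stdlib sketch at order 1, with a made-up coarse grid:

```python
def fit_plane(pts, vals):
    """Order-1 surface trend analysis: least-squares fit of
    v ~ c0 + c1*x + c2*y via the 3x3 normal equations, solved by Gaussian
    elimination. A deterministic interpolator, in contrast to stochastic
    approaches such as kriging."""
    rows = [[1.0, x, y] for x, y in pts]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * v for r, v in zip(rows, vals)) for i in range(3)]
    for i in range(3):                       # forward elimination w/ pivoting
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            A[k] = [a - f * ai for a, ai in zip(A[k], A[i])]
            b[k] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                      # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return lambda x, y: c[0] + c[1] * x + c[2] * y

# Hypothetical coarse grid with a linear west-east temperature gradient;
# the fitted surface is then evaluated at finer-grid locations.
coarse = [(x, y) for x in range(4) for y in range(4)]
temp = [10.0 + 0.5 * x for x, y in coarse]
trend = fit_plane(coarse, temp)
print(round(trend(1.5, 2.5), 3))  # → 10.75
```

Because the fit minimizes squared error with no random component, the same inputs always yield the same surface, which is exactly why the interpolation literature files it under "deterministic" while kriging, with its modeled random field, is "stochastic".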

  15. Progress on $e^{+}e^{-}$ linear colliders

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit; Siemann, Peter

    2002-01-01

    Physics issues. The physics program will be reviewed for e+e- linear colliders in the TeV energy range. At these prospective facilities, central issues of particle physics can be addressed: the problem of mass, unification, and the structure of space-time. In this context the two lectures will focus on analyses of the Higgs mechanism, supersymmetry and extra space dimensions. Moreover, high-precision studies of the top quark and the gauge-boson sector will be discussed. Combined with LHC results, a comprehensive picture can be developed of physics at the electroweak scale and beyond. Designs and technologies (R. Siemann - 29, 30, 31 May): The physics and technologies of high-energy linear colliders will be reviewed. Fundamental concepts of linear colliders will be introduced and discussed in the context of the Stanford Linear Collider, where many ideas changed and new ones were developed in response to operational experience, and in terms of the requirements for future linear colliders. The different approaches for reac...

  16. SLAP, Large Sparse Linear System Solution Package

    International Nuclear Information System (INIS)

    Greenbaum, A.

    1987-01-01

    1 - Description of program or function: SLAP is a set of routines for solving large sparse systems of linear equations. One need not store the entire matrix - only the nonzero elements and their row and column numbers. Any nonzero structure is acceptable, so the linear system solver need not be modified when the structure of the matrix changes. Auxiliary storage space is acquired and released within the routines themselves by use of the LRLTRAN POINTER statement. 2 - Method of solution: SLAP contains one direct solver, a band matrix factorization and solution routine, BAND, and several iterative solvers. The iterative routines are as follows: JACOBI, Jacobi iteration; GS, Gauss-Seidel iteration; ILUIR, incomplete LU decomposition with iterative refinement; DSCG and ICCG, diagonal scaling and incomplete Cholesky decomposition with conjugate gradient iteration (for symmetric positive definite matrices only); DSCGN and ILUGGN, diagonal scaling and incomplete LU decomposition with conjugate gradient iteration on the normal equations; DSBCG and ILUBCG, diagonal scaling and incomplete LU decomposition with bi-conjugate gradient iteration; and DSOMN and ILUOMN, diagonal scaling and incomplete LU decomposition with ORTHOMIN iteration
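
    The storage scheme and the simplest of the listed iterative solvers can be illustrated in a few lines. This sketch stores only the nonzeros of each row and applies plain Jacobi iteration; it is an illustration of the idea, not SLAP's Fortran interface:

```python
def jacobi(rows, b, x0=None, iters=200):
    """Jacobi iteration on a sparse matrix stored row-wise as
    {column: value} dicts: only nonzero entries are kept, in the spirit of
    SLAP's compressed storage. Assumes a nonzero diagonal; converges for
    diagonally dominant systems."""
    n = len(b)
    x = [0.0] * n if x0 is None else list(x0)
    for _ in range(iters):
        x_new = []
        for i, row in enumerate(rows):
            s = sum(v * x[j] for j, v in row.items() if j != i)
            x_new.append((b[i] - s) / row[i])
        x = x_new
    return x

# Diagonally dominant tridiagonal system: only the nonzeros are stored,
# and the solver never needs the zero entries or the full dense matrix.
rows = [{0: 4.0, 1: -1.0},
        {0: -1.0, 1: 4.0, 2: -1.0},
        {1: -1.0, 2: 4.0}]
b = [3.0, 2.0, 3.0]
x = jacobi(rows, b)
print([round(v, 6) for v in x])  # → [1.0, 1.0, 1.0]
```

The other SLAP routines follow the same pattern with better iterations (Gauss-Seidel, preconditioned conjugate gradient, ORTHOMIN); the point of the sparse format is that changing the nonzero structure requires no change to the solver.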

  17. Color change of Blue butterfly wing scales in an air-vapor ambient

    Science.gov (United States)

    Kertész, Krisztián; Piszter, Gábor; Jakab, Emma; Bálint, Zsolt; Vértesy, Zofia; Biró, László Péter

    2013-09-01

    Photonic crystals are periodic dielectric nanocomposites which have photonic band gaps that forbid the propagation of light within certain frequency ranges. The optical response of such nanoarchitectures to chemical changes in the environment is determined by the spectral change of the reflected light, and depends on the composition of the ambient atmosphere and on the nanostructure characteristics. We carried out reflectance measurements on closely related Blue lycaenid butterfly males possessing so-called "pepper-pot" type photonic nanoarchitectures in the scales covering their dorsal wing surfaces. Experiments were carried out by changing the concentration and nature of the test vapors while monitoring the spectral variations in time. All tests were done with the sample temperature set at or below room temperature. The spectral changes were found to be linear in the vapor concentration, and the signal amplitude is higher at lower temperatures. The mechanism of reflectance-spectrum modification is based on capillary condensation of the vapors penetrating the nanostructure. These structures of natural origin may serve as cheap, environmentally friendly and biodegradable sensor elements. The study of these nanoarchitectures of biological origin could be the source of various new bioinspired systems.

  18. Linearly scaling and almost Hamiltonian dielectric continuum molecular dynamics simulations through fast multipole expansions

    Energy Technology Data Exchange (ETDEWEB)

    Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig-Maximilians-Universität München, Oettingenstr. 67, 80538 München (Germany)]

    2015-11-14

    Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved—up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.

  19. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  20. Dryland responses to global change suggest the potential for rapid non-linear responses to some changes but resilience to others

    Science.gov (United States)

    Reed, S.; Ferrenberg, S.; Tucker, C.; Rutherford, W. A.; Wertin, T. M.; McHugh, T. A.; Morrissey, E.; Kuske, C.; Belnap, J.

    2017-12-01

    Drylands represent our planet's largest terrestrial biome, making up over 35% of Earth's land surface. In the context of this vast areal extent, it is no surprise that recent research suggests dryland inter-annual variability and responses to change have the potential to drive biogeochemical cycles and climate at the global-scale. Further, the data we do have suggest drylands can respond rapidly and non-linearly to change. Nevertheless, our understanding of the cross-system consistency of and mechanisms behind dryland responses to a changed environment remains relatively poor. This poor understanding hinders not only our larger understanding of terrestrial ecosystem function, but also our capacity to forecast future global biogeochemical cycles and climate. Here we present data from a series of Colorado Plateau manipulation experiments - including climate, land use, and nitrogen deposition manipulations - to explore how vascular plants, microbial communities, and biological soil crusts (a community of mosses, lichens, and/or cyanobacteria living in the interspace among vascular plants in arid and semiarid ecosystems worldwide) respond to a host of environmental changes. These responses include not only assessments of community composition, but of their function as well. We will explore photosynthesis, net soil CO2 exchange, soil carbon stocks and chemistry, albedo, and nutrient cycling. The experiments were begun with independent questions and cover a range of environmental change drivers and scientific approaches, but together offer a relatively holistic picture of how some drylands can change their structure and function in response to change. In particular, the data show very high ecosystem vulnerability to particular drivers, but surprising resilience to others, suggesting a multi-faceted response of these diverse systems.

  1. Inference regarding multiple structural changes in linear models with endogenous regressors

    NARCIS (Netherlands)

    Boldea, O.; Hall, A.R.; Han, S.

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares

  2. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DEFF Research Database (Denmark)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    2015-01-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL

  3. Grey scale, the 'crispening effect', and perceptual linearization

    NARCIS (Netherlands)

    Belaïd, N.; Martens, J.B.

    1998-01-01

    One way of optimizing a display is to maximize the number of distinguishable grey levels, which in turn is equivalent to perceptually linearizing the display. Perceptual linearization implies that equal steps in grey value evoke equal steps in brightness sensation. The key to perceptual

  4. Does scale matter? A systematic review of incorporating biological realism when predicting changes in species distributions.

    Science.gov (United States)

    Record, Sydne; Strecker, Angela; Tuanmu, Mao-Ning; Beaudrot, Lydia; Zarnetske, Phoebe; Belmaker, Jonathan; Gerstner, Beth

    2018-01-01

    There is ample evidence that biotic factors, such as biotic interactions and dispersal capacity, can affect species distributions and influence species' responses to climate change. However, little is known about how these factors affect predictions from species distribution models (SDMs) with respect to spatial grain and extent of the models. Understanding how spatial scale influences the effects of biological processes in SDMs is important because SDMs are one of the primary tools used by conservation biologists to assess biodiversity impacts of climate change. We systematically reviewed SDM studies published from 2003-2015 using ISI Web of Science searches to: (1) determine the current state and key knowledge gaps of SDMs that incorporate biotic interactions and dispersal; and (2) understand how choice of spatial scale may alter the influence of biological processes on SDM predictions. We used linear mixed effects models to examine how predictions from SDMs changed in response to the effects of spatial scale, dispersal, and biotic interactions. There were important biases in studies, including an emphasis on terrestrial ecosystems in northern latitudes and little representation of aquatic ecosystems. Our results suggest that neither spatial extent nor grain influence projected climate-induced changes in species ranges when SDMs include dispersal or biotic interactions. We identified several knowledge gaps and suggest that SDM studies forecasting the effects of climate change should: 1) address broader ranges of taxa and locations; and 2) report the grain size, extent, and results with and without biological complexity. The spatial scale of analysis in SDMs did not affect estimates of projected range shifts with dispersal and biotic interactions. However, the lack of reporting on results with and without biological complexity precluded many studies from our analysis.

  5. Vanishing-Overhead Linear-Scaling Random Phase Approximation by Cholesky Decomposition and an Attenuated Coulomb-Metric.

    Science.gov (United States)

    Luenser, Arne; Schurkus, Henry F; Ochsenfeld, Christian

    2017-04-11

    A reformulation of the random phase approximation within the resolution-of-the-identity (RI) scheme is presented that is competitive with canonical molecular orbital RI-RPA already for small- to medium-sized molecules. For electronically sparse systems, drastic speedups due to the reduced scaling behavior compared to the molecular orbital formulation are demonstrated. Our reformulation is based on two ideas, each of which is independently useful: first, a Cholesky decomposition of the density matrices, which reduces the scaling with basis-set size for a fixed-size molecule by one order, leading to massive performance improvements; second, replacement of the overlap RI metric used in the original AO-RPA by an attenuated Coulomb metric. Accuracy is significantly improved compared to the overlap metric, while locality and sparsity of the integrals are retained, as is the effective linear scaling behavior.
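    The first of the two ideas can be illustrated generically: a rank-revealing (pivoted) Cholesky decomposition of a positive semidefinite matrix of effective rank r yields an n × r factor rather than an n × n one, which is the mechanism behind the reduced scaling. The following is a textbook pivoted-Cholesky sketch, not the authors' implementation.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Rank-revealing (pivoted) Cholesky of a symmetric PSD matrix.

    Returns L such that A is approximately L @ L.T, where the number of
    columns of L equals the effective numerical rank of A."""
    d = np.diag(A).astype(float).copy()   # residual diagonal
    L = np.zeros((A.shape[0], 0))
    while d.max() > tol:
        i = int(np.argmax(d))             # pivot: largest residual diagonal
        col = A[:, i] - L @ L[i, :]       # residual of column i
        l = col / np.sqrt(col[i])
        L = np.column_stack([L, l])
        d = np.maximum(d - l * l, 0.0)    # downdate the residual diagonal
    return L

# A positive semidefinite "density-like" matrix of exact rank 5:
# the factor L then has only 5 columns instead of 50.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 5))
A = B @ B.T
L = pivoted_cholesky(A)
```

    Contracting subsequent quantities through the thin factor L instead of the full matrix is what lowers the scaling with basis-set size by one order.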

  6. Role of band 3 in the erythrocyte membrane structural changes under thermal fluctuations -multi scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana

    2015-12-01

    An attempt was made to discuss and connect various modeling approaches, on various time and space scales, which have been proposed in the literature in order to shed further light on the erythrocyte membrane rearrangement caused by cortex-lipid bilayer coupling under thermal fluctuations. The roles of the main membrane constituents - (1) the actin-spectrin cortex, (2) the lipid bilayer, and (3) the transmembrane protein band 3 - and their cause-consequence relations were considered in the context of the non-linear stiffening of the cortex and the corresponding anomalous nature of energy dissipation. The fluctuations induce alternating expansion and compression of the membrane parts in order to ensure surface and volume conservation. The membrane structural changes were considered within two time regimes. The results indicate that the non-linear stiffening of the cortex and the corresponding anomalous nature of energy dissipation are related to the spectrin flexibility distribution and the rate of its changes. The spectrin flexibility varies from purely flexible to semi-flexible. It is influenced by: (1) the number of band 3 molecules attached to single spectrin filaments, and (2) phosphorylation of the actin junctions. The rate of spectrin flexibility changes depends on the rearrangement of the band 3 molecules.

  7. Continent-scale global change attribution in European birds - combining annual and decadal time scales

    DEFF Research Database (Denmark)

    Jørgensen, Peter Søgaard; Böhning-Gaese, Katrin; Thorup, Kasper

    2016-01-01

    foundation for attributing species responses to global change may be achieved by complementing an attributes-based approach by one estimating the relationship between repeated measures of organismal and environmental changes over short time scales. To assess the benefit of this multiscale perspective, we...... on or in the peak of the breeding season with the largest effect sizes observed in cooler parts of species' climatic ranges. Our results document the potential of combining time scales and integrating both species attributes and environmental variables for global change attribution. We suggest such an approach......Species attributes are commonly used to infer impacts of environmental change on multiyear species trends, e.g. decadal changes in population size. However, by themselves attributes are of limited value in global change attribution since they do not measure the changing environment. A broader...

  8. Non-linearities in Holocene floodplain sediment storage

    Science.gov (United States)

    Notebaert, Bastiaan; Nils, Broothaerts; Jean-François, Berger; Gert, Verstraeten

    2013-04-01

    Floodplain sediment storage is an important part of the sediment cascade model, buffering sediment delivery between hillslopes and oceans, yet it is hitherto not fully quantified, in contrast to other global sediment budget components. Quantification and dating of floodplain sediment storage are demanding of data and finances, limiting contemporary estimates for larger spatial units to simple linear extrapolations from a number of smaller catchments. In this paper we present non-linearities in both space and time for floodplain sediment budgets in three different catchments. Holocene floodplain sediments of the Dijle catchment in the Belgian loess region show a clear distinction between morphological stages: early Holocene peat accumulation, followed by mineral floodplain aggradation from the start of the agricultural period on. Contrary to previous assumptions, detailed dating of this morphological change at different locations shows an important non-linearity in geomorphologic changes of the floodplain, both between and within cross sections. A second example comes from the pre-Alpine French Valdaine region, where non-linearities and complex system behavior exist between (temporal) patterns of soil erosion and floodplain sediment deposition. In this region Holocene floodplain deposition is characterized by different cut-and-fill phases. The quantification of these different phases shows a complicated picture of increasing and decreasing floodplain sediment storage, which undermines the notion of steadily increasing sediment accumulation over time. Although fill stages may correspond with large quantities of deposited sediment, and traditionally calculated sedimentation rates for such stages are high, they do not necessarily correspond with a long-term net increase in floodplain deposition. A third example is based on the floodplain sediment storage in the Amblève catchment, located in the Belgian Ardennes uplands. Detailed floodplain sediment quantification for this catchment shows

  9. Sodium flow rate measurement method of annular linear induction pump

    International Nuclear Information System (INIS)

    Araseki, Hideo

    2011-01-01

    This report describes a method for measuring sodium flow rate of annular linear induction pumps arranged in parallel and its verification result obtained through an experiment and a numerical analysis. In the method, the leaked magnetic field is measured with measuring coils at the stator end on the outlet side and is correlated with the sodium flow rate. The experimental data and the numerical result indicate that the leaked magnetic field at the stator edge keeps almost constant when the sodium flow rate changes and that the leaked magnetic field change arising from the flow rate change is small compared with the overall leaked magnetic field. It is shown that the correlation between the leaked magnetic field and the sodium flow rate is almost linear due to this feature of the leaked magnetic field, which indicates the applicability of the method to small-scale annular linear induction pumps. (author)

  10. Fast and local non-linear evolution of steep wave-groups on deep water: A comparison of approximate models to fully non-linear simulations

    International Nuclear Information System (INIS)

    Adcock, T. A. A.; Taylor, P. H.

    2016-01-01

    The non-linear Schrödinger equation and its higher order extensions are routinely used for the analysis of extreme ocean waves. This paper compares the evolution of individual wave-packets modelled using non-linear Schrödinger type equations with packets modelled using fully non-linear potential flow models. The modified non-linear Schrödinger equation accurately models the relatively large scale non-linear changes to the shape of wave-groups, with a dramatic contraction of the group along the mean propagation direction and a corresponding extension of the width of the wave-crests. In addition, as extreme waves form, there is a local non-linear contraction of the wave-group around the crest which leads to a localised broadening of the wave spectrum, which the bandwidth-limited non-linear Schrödinger equations struggle to capture. This limitation occurs for waves of moderate steepness and a narrow underlying spectrum.
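    Evolution under the non-linear Schrödinger equation is commonly computed with a split-step Fourier integrator; a minimal sketch for the normalized focusing 1D NLSE follows (the normalization, grid, and step sizes are illustrative and not taken from the paper). Both sub-steps are pure phase rotations, so the L2 norm is conserved to machine precision.

```python
import numpy as np

def nlse_split_step(u0, dx, dt, n_steps):
    """Strang split-step Fourier integrator for the focusing 1D NLSE
        i u_t + (1/2) u_xx + |u|^2 u = 0.
    Linear step in Fourier space, nonlinear step in physical space."""
    n = len(u0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # spectral wavenumbers
    u = u0.astype(complex)
    for _ in range(n_steps):
        u *= np.exp(0.5j * np.abs(u) ** 2 * dt)                # half nonlinear step
        u = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(u))  # linear step
        u *= np.exp(0.5j * np.abs(u) ** 2 * dt)                # half nonlinear step
    return u

# A sech profile is an exact soliton of this normalized NLSE,
# so its envelope should be preserved by the integrator.
x = np.linspace(-20, 20, 512, endpoint=False)
u0 = 1.0 / np.cosh(x)
u = nlse_split_step(u0, dx=x[1] - x[0], dt=0.01, n_steps=200)
```

    The same scheme, with a broader initial spectrum, reproduces the group contraction discussed in the abstract; fully non-linear potential flow solvers are a separate and much heavier machinery.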

  11. Color change of Blue butterfly wing scales in an air – Vapor ambient

    Energy Technology Data Exchange (ETDEWEB)

    Kertész, Krisztián, E-mail: kertesz.krisztian@ttk.mta.hu [Institute of Technical Physics and Materials Science, Centre for Natural Sciences, H-1525 Budapest, PO Box 49 (Hungary), http://www.nanotechnology.hu]; Piszter, Gábor [Institute of Technical Physics and Materials Science, Centre for Natural Sciences, H-1525 Budapest, PO Box 49 (Hungary), http://www.nanotechnology.hu]; Jakab, Emma [Institute of Materials and Environmental Chemistry, Centre for Natural Sciences, H-1525 Budapest, PO Box 17 (Hungary)]; Bálint, Zsolt [Hungarian Natural History Museum, Baross utca 13, H-1088 Budapest (Hungary)]; Vértesy, Zofia; Biró, László Péter [Institute of Technical Physics and Materials Science, Centre for Natural Sciences, H-1525 Budapest, PO Box 49 (Hungary), http://www.nanotechnology.hu]

    2013-09-15

    Photonic crystals are periodic dielectric nanocomposites which have photonic band gaps that forbid the propagation of light within certain frequency ranges. The optical response of such nanoarchitectures to chemical changes in the environment is determined by the spectral change of the reflected light, and depends on the composition of the ambient atmosphere and on the nanostructure characteristics. We carried out reflectance measurements on closely related Blue lycaenid butterfly males possessing so-called “pepper-pot” type photonic nanoarchitectures in the scales covering their dorsal wing surfaces. Experiments were carried out by changing the concentration and nature of the test vapors while monitoring the spectral variations in time. All tests were done with the sample temperature set at or below room temperature. The spectral changes were found to be linear in the vapor concentration, and the signal amplitude is higher at lower temperatures. The mechanism of reflectance-spectrum modification is based on capillary condensation of the vapors penetrating the nanostructure. These structures of natural origin may serve as cheap, environmentally friendly and biodegradable sensor elements. The study of these nanoarchitectures of biological origin could be the source of various new bioinspired systems.

  12. Color change of Blue butterfly wing scales in an air – Vapor ambient

    International Nuclear Information System (INIS)

    Kertész, Krisztián; Piszter, Gábor; Jakab, Emma; Bálint, Zsolt; Vértesy, Zofia; Biró, László Péter

    2013-01-01

    Photonic crystals are periodic dielectric nanocomposites which have photonic band gaps that forbid the propagation of light within certain frequency ranges. The optical response of such nanoarchitectures to chemical changes in the environment is determined by the spectral change of the reflected light, and depends on the composition of the ambient atmosphere and on the nanostructure characteristics. We carried out reflectance measurements on closely related Blue lycaenid butterfly males possessing so-called “pepper-pot” type photonic nanoarchitectures in the scales covering their dorsal wing surfaces. Experiments were carried out by changing the concentration and nature of the test vapors while monitoring the spectral variations in time. All tests were done with the sample temperature set at or below room temperature. The spectral changes were found to be linear in the vapor concentration, and the signal amplitude is higher at lower temperatures. The mechanism of reflectance-spectrum modification is based on capillary condensation of the vapors penetrating the nanostructure. These structures of natural origin may serve as cheap, environmentally friendly and biodegradable sensor elements. The study of these nanoarchitectures of biological origin could be the source of various new bioinspired systems.

  13. Near-linear cost increase to reduce climate-change risk

    Energy Technology Data Exchange (ETDEWEB)

    Schaeffer, M. [Environmental Systems Analysis Group, Wageningen University and Research Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Kram, T.; Van Vuuren, D.P. [Climate and Global Sustainability Group, Netherlands Environmental Assessment Agency, P.O. Box 303, 3720 AH Bilthoven (Netherlands); Meinshausen, M.; Hare, W.L. [Potsdam Institute for Climate Impact Research, P.O. Box 60 12 03, 14412 Potsdam (Germany); Schneider, S.H. (ed.) [Stanford University, Stanford, CA (United States)

    2008-12-30

    One approach in climate-change policy is to set normative long-term targets first and then infer the implied emissions pathways. An important example of a normative target is to limit the global-mean temperature change to a certain maximum. In general, reported cost estimates for limiting global warming often rise rapidly, even exponentially, as the scale of emission reductions from a reference level increases. This rapid rise may suggest that more ambitious policies may be prohibitively expensive. Here, we propose a probabilistic perspective, focused on the relationship between mitigation costs and the likelihood of achieving a climate target. We investigate the qualitative, functional relationship between the likelihood of achieving a normative target and the costs of climate-change mitigation. In contrast to the example of exponentially rising costs for lowering concentration levels, we show that the mitigation costs rise proportionally to the likelihood of meeting a temperature target, across a range of concentration levels. In economic terms investing in climate mitigation to increase the probability of achieving climate targets yields 'constant returns to scale', because of a counterbalancing rapid rise in the probabilities of meeting a temperature target as concentration is lowered.

  14. 9th International Accelerator School for Linear Colliders

    CERN Document Server

    2015-01-01

    This school is a continuation of the series of schools that began nine years ago: Japan 2006, Italy 2007, United States 2008, China 2009, Switzerland 2010, United States 2011, India 2012 and Turkey 2013. Based on needs from the accelerator community, the Linear Collider Collaboration (LCC) and the ICFA Beam Dynamics Panel are organising the Ninth International Accelerator School for Linear Colliders. The school will present instruction in TeV-scale linear colliders, including the ILC, CLIC and other advanced accelerators. An important change of this year’s school from previous LC schools is that it will also include the free electron laser (FEL), a natural extension for applications of the ILC/CLIC technology. The school is offered to graduate students, postdoctoral fellows and junior researchers from around the world. We welcome applications from physicists who are considering changing to a career in accelerator physics and technology. This school adopts an in-depth approach. A selective course on the FEL has b...

  15. Linear collider: a preview

    Energy Technology Data Exchange (ETDEWEB)

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.

  16. Linear collider: a preview

    International Nuclear Information System (INIS)

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center

  17. The Front-End Readout as an Encoder IC for Magneto-Resistive Linear Scale Sensors

    Directory of Open Access Journals (Sweden)

    Trong-Hieu Tran

    2016-09-01

    This study proposes a front-end readout circuit as an encoder chip for magneto-resistance (MR) linear scales. A typical MR sensor consists of two major parts: one is its base structure, also called the magnetic scale, which is embedded with multiple grid MR electrodes, while the other is an “MR reader” stage with magnets inside, moving on the rails of the base. As the stage is in motion, the magnetic interaction between the moving stage and the base causes the variation of the magneto-resistances of the grid electrodes. In this study, a front-end readout IC chip is successfully designed and realized to acquire the temporally varying resistances as electrical signals while the stage is in motion. The acquired signals are in fact sinusoids and co-sinusoids, which are further deciphered by the front-end readout circuit via newly designed programmable gain amplifiers (PGAs) and analog-to-digital converters (ADCs). The PGA is particularly designed to amplify the signals up to the full dynamic range and up to 1 MHz. A 12-bit successive approximation register (SAR) ADC for analog-to-digital conversion is designed with linearity performance of ±1 least significant bit (LSB) over the input range of 0.5–2.5 V peak to peak. The chip was fabricated with the Taiwan Semiconductor Manufacturing Company (TSMC) 0.35-micron complementary metal oxide semiconductor (CMOS) technology for verification, with a chip size of 6.61 mm2 and a power consumption of 56 mW from a 5-V supply. The measured integral non-linearity (INL) is −0.79–0.95 LSB while the differential non-linearity (DNL) is −0.68–0.72 LSB. The effective number of bits (ENOB) of the designed ADC is validated as 10.86 for converting the input analog signal to its digital counterpart. Experimental validation was conducted: a digital decoder is orchestrated to decipher the harmonic outputs from the ADC via interpolation to the position of the moving stage. It was found that the displacement
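    An ENOB figure such as the one quoted above is conventionally obtained from the measured signal-to-noise-and-distortion ratio via ENOB = (SINAD − 1.76 dB)/6.02. A quick sketch; the SINAD value used below is back-computed for illustration and is not reported in the paper.

```python
def enob(sinad_db: float) -> float:
    """Effective number of bits from a measured SINAD in dB,
    via the standard relation ENOB = (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

# A SINAD of about 67.1 dB corresponds to roughly the 10.86 ENOB
# reported for this 12-bit SAR ADC (illustrative back-computation).
print(round(enob(67.14), 2))
```

    The gap between the nominal 12 bits and the ~10.9 effective bits reflects the combined noise and distortion of the converter and front end.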

  18. Matching Social and Biophysical Scales in Extensive Livestock Production as a Basis for Adaptation to Global Change

    Science.gov (United States)

    Sayre, N. F.; Bestelmeyer, B.

    2015-12-01

    Global livestock production is heterogeneous, and its benefits and costs vary widely across global contexts. Extensive grazing lands (or rangelands) constitute the vast majority of the land dedicated to livestock production globally, but they are relatively minor contributors to livestock-related environmental impacts. Indeed, the greatest potential for environmental damage in these lands lies in their potential for conversion to other uses, including agriculture, mining, energy production and urban development. Managing such conversion requires improving the sustainability of livestock production in the face of fragmentation, ecological and economic marginality and climate change. We present research from Mongolia and the United States demonstrating methods of improving outcomes on rangelands by improving the fit between the scales of social and biophysical processes. Especially in arid and semi-arid settings, rangelands exhibit highly variable productivity over space and time and non-linear or threshold dynamics in vegetation; climate change is projected to exacerbate these challenges and, in some cases, diminish overall productivity. Policy and governance frameworks that enable landscape-scale management and administration enable range livestock producers to adapt to these conditions. Similarly, livestock breeds that have evolved to withstand climate and vegetation change improve producers' prospects in the face of increasing variability and declining productivity. A focus on the relationships among primary production, animal production, spatial connectivity, and scale must underpin adaptation strategies in rangelands.

  19. A Dynamic Linear Modeling Approach to Public Policy Change

    DEFF Research Database (Denmark)

    Loftis, Matthew; Mortensen, Peter Bjerre

    2017-01-01

    Theories of public policy change, despite their differences, converge on one point of strong agreement: the relationship between policy and its causes can and does change over time. This consensus yields numerous empirical implications, but our standard analytical tools are inadequate for testing them. As a result, the dynamic and transformative relationships predicted by policy theories have been left largely unexplored in time-series analysis of public policy. This paper introduces dynamic linear modeling (DLM) as a useful statistical tool for exploring time-varying relationships in public policy. The paper offers a detailed exposition of the DLM approach and illustrates its usefulness with a time series analysis of U.S. defense policy from 1957-2010. The results point the way for a new attention to dynamics in the policy process, and the paper concludes with a discussion of how...
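    The simplest dynamic linear model is the local level: a regression intercept that drifts over time and is recovered with the Kalman filter recursions. A minimal sketch (the data, variances, and priors here are illustrative, not from the defense-policy application):

```python
import numpy as np

def local_level_filter(y, q, r, m0=0.0, c0=1e6):
    """Kalman filter for the local-level DLM:
        y_t  = mu_t + v_t,       v_t ~ N(0, r)   (observation equation)
        mu_t = mu_{t-1} + w_t,   w_t ~ N(0, q)   (time-varying state)
    Returns the filtered means of mu_t, i.e. the coefficient path."""
    means = []
    m, c = m0, c0                  # diffuse prior on the initial level
    for obs in y:
        a, p = m, c + q            # predict step
        k = p / (p + r)            # Kalman gain
        m = a + k * (obs - a)      # update step
        c = (1 - k) * p
        means.append(m)
    return np.array(means)

# Simulate a slowly drifting level, observe it with noise, filter it back out.
rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(0, 0.1, 200)) + 5.0
y = level + rng.normal(0, 0.5, 200)
est = local_level_filter(y, q=0.01, r=0.25)
```

    Replacing the constant state equation with time-varying regression coefficients gives the general DLM used to let a policy's relationship to its causes evolve over time.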

  20. Proceeding of the 11th meeting on linear accelerators

    International Nuclear Information System (INIS)

    Nakahara, Kazuo; Anami, Shozo; Takasaki, Eiichi

    1986-08-01

    The study group on linear accelerators has now been active for ten years. The worldwide changes in social structure and economic conditions during this period also affected linear accelerators. For a while, new installations of linear accelerators were limited to Japan and China, and a state of standstill continued in Europe and America. Then the large-scale projects on electron-positron colliders started, and LEP at CERN and HERA at DESY in Europe and the Linear Collider at SLAC in the USA compete for the lead with TRISTAN in Japan. Large electron rings have become machines that connect CW linear accelerators with electromagnets in a circular form, unlike the conventional type. Developed types of superconducting CW linacs such as CEBAF in the USA are planned. In the large CW or pulsed accelerators to come, an RF system of high accuracy and large power output is the key to the success of a project, rather than the individual accelerating structures, high-frequency sources, waveguides or controls. When the scale of a project exceeds a certain limit, it can no longer be handled merely with the experience and means of the past. This book collects the gists of 62 presented papers and invited lectures. (Kako, I.)

  1. Parallel Quasi Newton Algorithms for Large Scale Non Linear Unconstrained Optimization

    International Nuclear Information System (INIS)

    Rahman, M. A.; Basarudin, T.

    1997-01-01

    This paper discusses the Quasi-Newton (QN) method for solving non-linear unconstrained minimization problems. One important issue in QN methods is the choice of the matrix Hk, which must be positive definite and satisfy the QN condition. Our interest here is in parallel QN methods suited to the solution of large-scale optimization problems. QN methods become less attractive in large-scale problems because of their storage and computational requirements; however, it is often the case that the Hessian is a sparse matrix. In this paper we include a mechanism for reducing the Hessian update while preserving the Hessian properties. One major motivation for our research is that a QN method may be good at solving certain types of minimization problems but degenerate in efficiency when applied to other categories of problems. For this reason, we use an algorithm containing several direction strategies which are processed in parallel. We attempt to parallelize the algorithm by exploring different search directions generated by various QN updates during the minimization process. Different line search strategies are employed simultaneously in the process of locating the minimum along each direction. The code of the algorithm is written in the Occam 2 language and runs on a transputer machine.
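As a sketch of the serial building block being parallelized, here is a minimal dense BFGS iteration (a standard QN update with an Armijo backtracking line search; an illustration only, not the paper's parallel Occam 2 algorithm, and without the sparse-Hessian machinery the abstract describes).

```python
def bfgs(f, grad, x0, iters=50, tol=1e-8):
    """Minimal dense BFGS: maintain an approximation H to the inverse
    Hessian and update it so the secant condition holds."""
    n = len(x0)
    H = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # H0 = I
    x = list(x0)
    g = grad(x)
    for _ in range(iters):
        if sum(gi * gi for gi in g) < tol * tol:
            break
        d = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]  # d = -H g
        t, fx = 1.0, f(x)
        slope = sum(gi * di for gi, di in zip(g, d))
        # backtracking line search (Armijo condition)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy > 1e-12:  # curvature condition keeps H positive definite
            Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
            yHy = sum(y[i] * Hy[i] for i in range(n))
            # BFGS inverse-Hessian update
            for i in range(n):
                for j in range(n):
                    H[i][j] += ((sy + yHy) * s[i] * s[j] / (sy * sy)
                                - (Hy[i] * s[j] + s[i] * Hy[j]) / sy)
        x, g = x_new, g_new
    return x

f = lambda v: (v[0] - 1.0) ** 2 + 10.0 * (v[1] - 2.0) ** 2
grad = lambda v: [2.0 * (v[0] - 1.0), 20.0 * (v[1] - 2.0)]
xmin = bfgs(f, grad, [0.0, 0.0])
```

A parallel variant in the paper's spirit would run several such updates with different H strategies and line searches concurrently, keeping the best step.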

  2. Non-linear optics of nano-scale pentacene thin film

    Science.gov (United States)

    Yahia, I. S.; Alfaify, S.; Jilani, Asim; Abdel-wahab, M. Sh.; Al-Ghamdi, Attieh A.; Abutalib, M. M.; Al-Bassam, A.; El-Naggar, A. M.

    2016-07-01

    We have found new ways to investigate the linear and non-linear optical properties of a nanostructured pentacene thin film deposited by the thermal evaporation technique. Pentacene is a key material in organic semiconductor technology. The nanostructured nature of the thin film was confirmed by atomic force microscopy and X-ray diffraction. The wavelength-dependent transmittance and reflectance were measured to observe the optical behavior of the pentacene thin film. Anomalous dispersion was observed at wavelength λ 800. The non-linear refractive index of the deposited films was investigated. The linear optical susceptibility of the pentacene thin film was calculated, and we observed a non-linear optical susceptibility of about 6 × 10⁻¹³ esu. The advantage of this work is the use of a spectroscopic method to calculate the linear and non-linear optical response of pentacene thin films, rather than the expensive Z-scan technique. The calculated optical behavior of the pentacene thin films could be used in organic thin-film-based advanced optoelectronic devices, such as telecommunications devices.

  3. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    Science.gov (United States)

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, where each co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.
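The flavor of a linear time greedy distribution fit can be sketched as follows (a hypothetical toy, not SimBA itself): each simulated tetraploid individual receives the allele dosage whose empirical frequency is currently furthest below its target, so a single pass over the population fits the target dosage distribution.

```python
def fit_dosages(target, n):
    """Assign allele dosages to n individuals so the empirical distribution
    tracks `target` (dict: dosage -> desired fraction).  Greedy: each
    individual gets the dosage with the largest remaining deficit."""
    counts = {d: 0 for d in target}
    out = []
    for i in range(1, n + 1):
        # deficit = desired count after i assignments minus actual count
        d = max(target, key=lambda k: target[k] * i - counts[k])
        counts[d] += 1
        out.append(d)
    return out, counts

# target tetraploid dosage distribution (dosages 0..4), 100 individuals
target = {0: 0.1, 1: 0.2, 2: 0.4, 3: 0.2, 4: 0.1}
dosages, counts = fit_dosages(target, 100)
```

Each assignment is O(number of dosage classes), so the whole construction is linear in population size, which is the property the paper's algorithms provide at genomic scale.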

  4. Climate analysis at local scale in the context of climate change

    International Nuclear Information System (INIS)

    Quenol, H.

    2013-01-01

    Issues related to climate change increasingly concern the functioning of geo-systems at the local scale. A global change will necessarily affect local climates. In this context, the potential impacts of climate change raise numerous questions concerning adaptation. Despite numerous studies on the impact of projected global warming on different regions, global atmospheric models (GCMs) are not adapted to local scales and, as a result, impacts at local scales are still approximate. Although real progress in meso-scale atmospheric modeling has been made over the past years, no operational model is yet in use to simulate climate at local scales (ten or so meters). (author)

  5. A critical oscillation constant as a variable of time scales for half-linear dynamic equations

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel

    2010-01-01

    Roč. 60, č. 2 (2010), s. 237-256 ISSN 0139-9918 R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : dynamic equation * time scale * half-linear equation * (non)oscillation criteria * Hille-Nehari criteria * Kneser criteria * critical constant * oscillation constant * Hardy inequality Subject RIV: BA - General Mathematics Impact factor: 0.316, year: 2010 http://link.springer.com/article/10.2478%2Fs12175-010-0009-7

  6. Linear and Nonlinear Finite Elements.

    Science.gov (United States)

    1983-12-01

    Metzler. Conjugate gradient solution of a finite element elastic problem with high Poisson ratio, once without scaling and once with the global stiffness matrix K... nonzero c that makes u(0) = 1. According to the linear, small-deflection theory of the membrane, the central displacement given to the membrane is not...

  7. Linear correlation of interfacial tension at water-solvent interface, solubility of water in organic solvents, and SE* scale parameters

    International Nuclear Information System (INIS)

    Mezhov, E.A.; Khananashvili, N.L.; Shmidt, V.S.

    1988-01-01

    A linear correlation has been established between the solubility of water in water-immiscible organic solvents and the interfacial tension at the water-solvent interface on the one hand, and the parameters of the SE* and π* scales for these solvents on the other. This allows us, using the known tabulated SE* or π* parameters for each solvent, to predict the values of the interfacial tension and the solubility of water for the corresponding systems. We have shown that the SE* scale allows us to predict these values more accurately than other known solvent scales, since in contrast to the other scales it characterizes solvents found in equilibrium with water.

  8. Non-linear corrections to the cosmological matter power spectrum and scale-dependent galaxy bias: implications for parameter estimation

    International Nuclear Information System (INIS)

    Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y Y

    2008-01-01

    We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency
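For concreteness, the two corrections can be sketched as below, using the parametrizations commonly quoted in the galaxy-clustering literature (hedged: conventions may differ in detail from the paper). Here b is the linear bias, Q and the constant shot-noise term are free nuisance parameters, and A ≈ 1.4 is conventionally held fixed in the Q model.

```python
def q_model(p_lin, k, b=1.0, q=10.0, a=1.4):
    """Q-model correction, as commonly written:
    P_gal(k) = b^2 * (1 + Q k^2) / (1 + A k) * P_lin(k)."""
    return b * b * (1.0 + q * k * k) / (1.0 + a * k) * p_lin

def p_model(p_lin, b=1.0, p_shot=100.0):
    """P-model: non-linearity and scale-dependent bias absorbed into a
    single non-Poisson (constant) shot-noise term added to b^2 P_lin."""
    return b * b * p_lin + p_shot
```

On large scales (k → 0) the Q-model correction factor tends to 1, so both models reduce to linear bias; they differ in how the small-scale excess power is parametrized, which is where the abstract's unphysical-behavior caveat arises.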

  10. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere.

    Science.gov (United States)

    Rossi, Sergio; Anfodillo, Tommaso; Cufar, Katarina; Cuny, Henri E; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gricar, Jozica; Gruber, Andreas; King, Gregory M; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B K

    2013-12-01

    Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although deviations from the average were observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. The trees adjust their phenological timings according to linear patterns; thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions.

  11. Do changes on MCMI-II personality disorder scales in short-term psychotherapy reflect trait or state changes?

    DEFF Research Database (Denmark)

    Jensen, Hans Henrik; Mortensen, Erik Lykke; Lotz, Martin

    2008-01-01

    The Millon Clinical Multiaxial Inventory (MCMI) has become an important and commonly used instrument to assess personality functioning. Several studies report significant changes on MCMI personality disorder scales after psychological treatment. The aim of the study was to investigate whether pre-post-treatment changes in 39-session psychodynamic group psychotherapy as measured with the MCMI reflect real personality change or primarily reflect symptomatic state changes. The pre-post-treatment design included 236 psychotherapy outpatients. Personality changes were measured on the MCMI-II and symptomatic state changes on the Symptom Check List 90-R (SCL-90-R). The MCMI Schizoid, Avoidant, Self-defeating, and severe personality disorder scales revealed substantial changes, which could be predicted from changes in SCL-90-R global symptomatology (GSI) and on the SCL-90-R Depression scale. The MCMI Dependent personality score...

  12. The effect of disinfection of alginate impressions with 35% beetle juice spray on stone model linear dimensional changes

    Directory of Open Access Journals (Sweden)

    Anggra Yudha Ramadianto

    2007-07-01

    Dimensional stability of alginate impressions is very important for treatment in dentistry. This study examined the effect of spraying alginate impressions with 35% beetle juice on the linear dimensional changes of stone models. This experimental study used 25 samples divided into 5 groups. In the first group, as control, the alginate impressions were filled with dental stone immediately after forming. In the other four groups, the alginate impressions were sprayed 1, 2, 3, and 4 times, respectively, with 35% beetle juice and then filled with dental stone. Dimensional changes were measured in the lower part of the stone model in the buccal-lingual and mesial-distal directions, and in the outer distance between the upper parts of the stone model, using a Mitutoyo digital micrometer and a profile projector with a 0.001 mm scale. The average mesial-distal diameters of the control group and groups 2, 3, 4, and 5 were 9.909 mm, 9.852 mm, 9.845 mm, 9.824 mm, and 9.754 mm, while the average buccal-lingual diameters were 9.847 mm, 9.841 mm, 9.826 mm, 9.776 mm, and 9.729 mm. The outer distances between the upper parts of the stone model were 31.739 mm, 31.689 mm, 31.682 mm, 31.670 mm, and 31.670 mm. The data were evaluated statistically by analysis of variance. The conclusion of this study was that, statistically, spraying alginate impressions with 35% beetle juice had no significant effect on the linear dimensional changes of the stone models.

  13. Linear colliders - prospects 1985

    International Nuclear Information System (INIS)

    Rees, J.

    1985-06-01

    We discuss the scaling laws of linear colliders and their consequences for accelerator design. We then report on the SLAC Linear Collider project and comment on experience gained on that project and its application to future colliders. 9 refs., 2 figs

  14. Genetic parameters for racing records in trotters using linear and generalized linear models.

    Science.gov (United States)

    Suontama, M; van der Werf, J H J; Juga, J; Ojala, M

    2012-09-01

    Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.

  15. Scaling law systematics

    International Nuclear Information System (INIS)

    Pfirsch, D.; Duechs, D.F.

    1985-01-01

    A number of statistical implications of empirical scaling laws in the form of power products obtained by linear regression are analysed. The sensitivity of the error to a change of exponents is described by a sensitivity factor, and the uncertainty of predictions by a ''range of predictions'' factor. Inner relations in the statistical material are discussed, as well as the consequences of discarding variables. A recipe is given for the computations to be done. The whole is exemplified by considering scaling laws for the electron energy confinement time of ohmically heated tokamak plasmas. (author)
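The basic operation behind such scaling laws, fitting a power product y = C·x^b by linear regression on logarithms, can be sketched as below (a one-variable illustration; the paper's sensitivity and prediction-range factors build on top of this fit).

```python
import math

def fit_power_law(xs, ys):
    """Fit y = C * x^b by ordinary least squares on
    log y = log C + b log x, the standard route to empirical scaling laws."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))      # regression slope = exponent
    c = math.exp(my - b * mx)                   # intercept gives the prefactor
    return c, b

# noise-free synthetic data drawn from y = 3 * x^1.5
xs = [1.0, 2.0, 4.0, 8.0]
ys = [3.0 * x ** 1.5 for x in xs]
c, b = fit_power_law(xs, ys)
```

With noisy multi-variable data the same idea yields a power product over several variables, and the exponents' standard errors feed directly into the sensitivity analysis the abstract describes.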

  16. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  17. ITMETH, Iterative Routines for Linear System

    International Nuclear Information System (INIS)

    Greenbaum, A.

    1989-01-01

    1 - Description of program or function: ITMETH is a collection of iterative routines for solving large, sparse linear systems. 2 - Method of solution: ITMETH solves general linear systems of the form AX=B using a variety of methods: Jacobi iteration; Gauss-Seidel iteration; incomplete LU decomposition or matrix splitting with iterative refinement; diagonal scaling, matrix splitting, or incomplete LU decomposition with the conjugate gradient method for the problem AA'Y=B, X=A'Y; bi-conjugate gradient method with diagonal scaling, matrix splitting, or incomplete LU decomposition; and ortho-min method with diagonal scaling, matrix splitting, or incomplete LU decomposition. ITMETH also solves symmetric positive definite linear systems AX=B using the conjugate gradient method with diagonal scaling or matrix splitting, or the incomplete Cholesky conjugate gradient method
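As an illustration of the simplest method in the collection, a minimal Jacobi iteration might look like this (a generic sketch, not ITMETH's implementation): each unknown is updated from the previous iterate using only the other components.

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration for Ax = b (one of the splittings ITMETH offers):
    x_i^(m+1) = (b_i - sum_{j != i} A_ij x_j^(m)) / A_ii."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# strictly diagonally dominant system, so Jacobi is guaranteed to converge
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = jacobi(A, b)   # exact solution is [1, 1, 1]
```

The other methods listed (Gauss-Seidel, incomplete LU, conjugate gradients with diagonal scaling) are progressively stronger variations on the same splitting idea.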

  18. Linear DNA vaccine prepared by large-scale PCR provides protective immunity against H1N1 influenza virus infection in mice.

    Science.gov (United States)

    Wang, Fei; Chen, Quanjiao; Li, Shuntang; Zhang, Chenyao; Li, Shanshan; Liu, Min; Mei, Kun; Li, Chunhua; Ma, Lixin; Yu, Xiaolan

    2017-06-01

    Linear DNA vaccines provide effective vaccination. However, their application is limited by high cost and small scale of the conventional polymerase chain reaction (PCR) generally used to obtain sufficient amounts of DNA effective against epidemic diseases. In this study, a two-step, large-scale PCR was established using a low-cost DNA polymerase, RKOD, expressed in Pichia pastoris. Two linear DNA vaccines encoding influenza H1N1 hemagglutinin (HA) 1, LEC-HA, and PTO-LEC-HA (with phosphorothioate-modified primers), were produced by the two-step PCR. Protective effects of the vaccines were evaluated in a mouse model. BALB/c mice were immunized three times with the vaccines or a control DNA fragment. All immunized animals were challenged by intranasal administration of a lethal dose of influenza H1N1 virus 2 weeks after the last immunization. Sera of the immunized animals were tested for the presence of HA-specific antibodies, and the total IFN-γ responses induced by linear DNA vaccines were measured. The results showed that the DNA vaccines but not the control DNA induced strong antibody and IFN-γ responses. Additionally, the PTO-LEC-HA vaccine effectively protected the mice against the lethal homologous mouse-adapted virus, with a survival rate of 100% versus 70% in the LEC-HA-vaccinated group, showing that the PTO-LEC-HA vaccine was more effective than LEC-HA. In conclusion, the results indicated that the linear H1N1 HA-coding DNA vaccines induced significant immune responses and protected mice against a lethal virus challenge. Thus, the low-cost, two-step, large-scale PCR can be considered a potential tool for rapid manufacturing of linear DNA vaccines against emerging infectious diseases. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  20. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in the ENDF/B-VII format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear vers. 2007-1 (Jan. 2007): checked against all of ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR replaces each section with a new table of energy versus cross section in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. Linear-linear data are not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear form by an interval-halving algorithm: each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table.
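The interval-halving conversion described above can be sketched as follows (a simplified illustration, not the LINEAR source; the real code also applies fractional-error thinning afterwards): an interval is split until its midpoint is reproduced by linear-linear interpolation within the requested relative accuracy.

```python
def linearize(f, x0, x1, tol=1e-4):
    """Tabulate a smooth curve f on [x0, x1] so that linear-linear
    interpolation of the table reproduces f: halve each interval until
    the midpoint is matched to within relative tolerance tol."""
    pts = [(x0, f(x0))]

    def refine(a, fa, b, fb):
        m = 0.5 * (a + b)
        fm = f(m)
        approx = 0.5 * (fa + fb)        # linear-linear value at the midpoint
        if abs(approx - fm) > tol * abs(fm):
            refine(a, fa, m, fm)        # subdivide: left half first,
            refine(m, fm, b, fb)        # then right half, keeping order
        else:
            pts.append((b, fb))         # interval accepted; emit right endpoint
    refine(x0, f(x0), x1, f(x1))
    return pts

# example: a smooth power-law cross section sigma(E) ~ E^-0.5, as would arise
# from log-log interpolated data, converted to a linear-linear table
table = linearize(lambda e: e ** -0.5, 1.0, 100.0, tol=1e-3)
```

The table is dense where the curve bends sharply (low energy here) and sparse where it is nearly straight, which is exactly why the subsequent thinning pass pays off.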

  1. Linearized spectrum correlation analysis for line emission measurements.

    Science.gov (United States)

    Nishizawa, T; Nornberg, M D; Den Hartog, D J; Sarff, J S

    2017-08-01

    A new spectral analysis method, Linearized Spectrum Correlation Analysis (LSCA), for charge exchange and passive ion Doppler spectroscopy is introduced to provide a means of measuring fast spectral line shape changes associated with ion-scale micro-instabilities. This analysis method is designed to resolve the fluctuations in the emission line shape from a stationary ion-scale wave. The method linearizes the fluctuations around a time-averaged line shape (e.g., Gaussian) and subdivides the spectral output channels into two sets to reduce contributions from uncorrelated fluctuations without averaging over the fast time dynamics. In principle, small fluctuations in the parameters used for a line shape model can be measured by evaluating the cross spectrum between different channel groupings to isolate a particular fluctuating quantity. High-frequency ion velocity measurements (100-200 kHz) were made by using this method. We also conducted simulations to compare LSCA with a moment analysis technique under a low photon count condition. Both experimental and synthetic measurements demonstrate the effectiveness of LSCA.

  2. Minimization of Linear Functionals Defined on| Solutions of Large-Scale Discrete Ill-Posed Problems

    DEFF Research Database (Denmark)

    Elden, Lars; Hansen, Per Christian; Rojas, Marielba

    2003-01-01

    The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...

  3. Reliability, validity, and sensitivity to change of the lower extremity functional scale in individuals affected by stroke.

    Science.gov (United States)

    Verheijde, Joseph L; White, Fred; Tompkins, James; Dahl, Peder; Hentz, Joseph G; Lebec, Michael T; Cornwall, Mark

    2013-12-01

    To investigate the reliability, validity, and sensitivity to change of the Lower Extremity Functional Scale (LEFS) in individuals affected by stroke; the secondary objective was to test the validity and sensitivity of a single-item linear analog scale (LAS) of function. Prospective cohort reliability and validation study. A single rehabilitation department in an academic medical center. Forty-three individuals receiving neurorehabilitation for lower extremity dysfunction after stroke were studied. Their ages ranged from 32 to 95 years, with a mean of 70 years; 77% were men. Test-retest reliability was assessed by calculating the classical intraclass correlation coefficient and the Bland-Altman limits of agreement. Validity was assessed by calculating the Pearson correlation coefficient between the instruments. Sensitivity to change was assessed by comparing baseline scores with end-of-treatment scores. Measurements were taken at baseline, after 1-3 days, and at 4 and 8 weeks. The LEFS, Short Form-36 Physical Function Scale, Berg Balance Scale, Six-Minute Walk Test, Five-Meter Walk Test, Timed Up-and-Go test, and the LAS of function were used. The test-retest reliability of the LEFS was found to be excellent (ICC = 0.96). Correlated against the six other measures of function studied, the validity of the LEFS was moderate to high (r = 0.40-0.71). Regarding sensitivity to change, the mean LEFS score increased by 1.2 SD from baseline to study end, and the mean LAS score by 1.1 SD. The LEFS exhibits good reliability, validity, and sensitivity to change in patients with lower extremity impairments secondary to stroke; therefore, the LEFS can be a clinically efficient outcome measure in the rehabilitation of patients with subacute stroke. The LAS is shown to be a time-saving and reasonable option for tracking changes in a patient's functional status. Copyright © 2013 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
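The agreement statistics named in the abstract can be sketched generically (illustrative made-up data, not the study's values; the formulas are the standard Pearson correlation and Bland-Altman 95% limits of agreement).

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two paired samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

def bland_altman(a, b):
    """Bland-Altman 95% limits of agreement for test-retest data:
    mean difference +/- 1.96 * SD of the differences."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    md = sum(d) / n
    sd = math.sqrt(sum((x - md) ** 2 for x in d) / (n - 1))
    return md - 1.96 * sd, md + 1.96 * sd

# hypothetical LEFS scores at baseline and 1-3 days later
test = [20.0, 35.0, 50.0, 61.0, 72.0]
retest = [22.0, 33.0, 52.0, 60.0, 74.0]
lo, hi = bland_altman(test, retest)
r = pearson(test, retest)
```

A narrow Bland-Altman band around zero together with a high correlation is what "excellent test-retest reliability" looks like in these terms.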

  4. Monitoring of full scale tensegrity skeletons under temperature change

    OpenAIRE

    KAWAGUCHI, Ken'ichi; OHYA, Shunji

    2009-01-01

    p. 224-231 Strain change in the members of full-scale tensegrity skeletons has been monitored for eight years. The one-day data of one of the tensegrity frames on the hottest and the coldest days in the record are reported and discussed. Kawaguchi, K.; Ohya, S. (2009). Monitoring of full scale tensegrity skeletons under temperature change. Symposium of the International Association for Shell and Spatial Structures. Editorial Universitat Politècnica de València. http://hdl.handle.net/10...

  5. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, the obtained results suggest the Cubist algorithm for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of the individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time points, and for imperviousness change assessment the ensembles always outperformed single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.

  6. Communication: An effective linear-scaling atomic-orbital reformulation of the random-phase approximation using a contracted double-Laplace transformation

    International Nuclear Information System (INIS)

    Schurkus, Henry F.; Ochsenfeld, Christian

    2016-01-01

    An atomic-orbital (AO) reformulation of the random-phase approximation (RPA) correlation energy is presented, which reduces the steep computational scaling to linear so that large systems can be studied on simple desktop computers with fully numerically controlled accuracy. Our AO-RPA formulation introduces a contracted double-Laplace transform and employs the overlap-metric resolution-of-the-identity. First timings of our pilot code illustrate the reduced scaling with systems comprising up to 1262 atoms and 10 090 basis functions.

  7. Scaling linear colliders to 5 TeV and above

    International Nuclear Information System (INIS)

    Wilson, P.B.

    1997-04-01

    Detailed designs exist at present for linear colliders in the 0.5-1.0 TeV center-of-mass energy range. For linear colliders driven by discrete rf sources (klystrons), the rf operating frequencies range from 1.3 GHz to 14 GHz, and the unloaded accelerating gradients from 21 MV/m to 100 MV/m. Except for the collider design at 1.3 GHz (TESLA), which uses superconducting accelerating structures, the accelerating gradients vary roughly linearly with the rf frequency. This correlation between gradient and frequency follows from the necessity to keep the ac "wall plug" power within reasonable bounds. For linear colliders at energies of 5 TeV and above, even higher accelerating gradients and rf operating frequencies will be required if both the total machine length and ac power are to be kept within reasonable limits. An rf system for a 5 TeV collider operating at 34 GHz is outlined, and it is shown that there are reasonable candidates for microwave tube sources which, together with rf pulse compression, are capable of supplying the required rf power. Some possibilities for a 15 TeV collider at 91 GHz are briefly discussed

  8. Genome-scale regression analysis reveals a linear relationship for promoters and enhancers after combinatorial drug treatment

    KAUST Repository

    Rapakoulia, Trisevgeni

    2017-08-09

    Motivation: Drug combination therapy for treatment of cancers and other multifactorial diseases has the potential of increasing the therapeutic effect, while reducing the likelihood of drug resistance. In order to reduce time and cost spent in comprehensive screens, methods are needed which can model additive effects of possible drug combinations. Results: We here show that the transcriptional response to combinatorial drug treatment at promoters, as measured by single-molecule CAGE technology, is accurately described by a linear combination of the responses to the individual drugs at a genome-wide scale. We also find that the same linear relationship holds for transcription at enhancer elements. We conclude that the described approach is promising for eliciting the transcriptional response to multidrug treatment at promoters and enhancers in an unbiased genome-wide way, which may minimize the need for exhaustive combinatorial screens.
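    The linearity claim can be pictured as an ordinary least-squares problem; a toy sketch with synthetic numbers (all data and coefficients invented, not CAGE measurements), regressing the combined-treatment response on the single-drug responses:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_promoters = 1000

    # Hypothetical log-expression responses of promoters to each drug alone
    resp_a = rng.normal(0.0, 1.0, n_promoters)
    resp_b = rng.normal(0.0, 1.0, n_promoters)
    # If effects combine additively, the combination response is a linear mix plus noise
    resp_ab = 0.6 * resp_a + 0.4 * resp_b + rng.normal(0.0, 0.1, n_promoters)

    # Least-squares fit of the combination as a linear function of the single-drug responses
    X = np.column_stack([resp_a, resp_b, np.ones(n_promoters)])
    coef, *_ = np.linalg.lstsq(X, resp_ab, rcond=None)
    pred = X @ coef
    r2 = 1 - ((resp_ab - pred) ** 2).sum() / ((resp_ab - resp_ab.mean()) ** 2).sum()
    ```

    A high R² across the genome, as in the synthetic case above, is the kind of evidence the abstract describes for additivity at promoters and enhancers.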

  9. Scaling of the magnetic entropy change of Fe3−xMnxSi

    International Nuclear Information System (INIS)

    Said, M.R.; Hamam, Y.A.; Abu-Aljarayesh, I.

    2014-01-01

    The magnetic entropy change of Fe3−xMnxSi (for x = 1.15, 1.3 and 1.5) has been extracted from isothermal magnetization measurements near the Curie temperature. We used the scaling hypotheses of the thermodynamic potentials to scale the magnetic entropy change onto a single universal curve for each sample. The effect of the exchange field and the Curie temperature on the maximum entropy change is discussed. - Highlights: • The maximum of the magnetic entropy change occurs at temperatures T > TC. • The exchange field enhances the magnetic entropy change. • The magnetic entropy change at TC is inversely proportional to TC. • The scaling hypothesis is used to scale the magnetic entropy change

  10. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    Directory of Open Access Journals (Sweden)

    Xiaocui Wu

    2015-02-01

    Full Text Available The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales, using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at the half-hourly and daily scales, while the overall performance of both TL-LUEn and TL-LUE was significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17, became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types, while TL-LUE outperformed MOD17 slightly for all these non-forest types at the daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by correcting the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale, while TL-LUE could be used regionally at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.

  11. Linearity and Non-linearity of Photorefractive effect in Materials ...

    African Journals Online (AJOL)

    Linearity and Non-linearity of Photorefractive effect in Materials using the Band transport ... For low light beam intensities the change in the refractive index is ... field is spatially phase shifted by /2 relative to the interference fringe pattern, which ...

  12. Nonlinear price impact from linear models

    Science.gov (United States)

    Patzelt, Felix; Bouchaud, Jean-Philippe

    2017-12-01

    The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales (Patzelt and Bouchaud 2017 arXiv:1706.04163). Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear ‘propagator’ models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator in addition to the order sign history increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Therefore, our results contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.

  13. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Directory of Open Access Journals (Sweden)

    Yi-hua Zhong

    2013-01-01

    Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. But their computational complexities are exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems, named a revised interior point method, is presented in this paper. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its end condition involves the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. The algorithm analysis and an example study show that proper choice of the safety factor parameter, accuracy parameter, and initial interior point of this method may reduce iterations, and that they can be selected easily according to the actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
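    One widely used linear ranking function for trapezoidal fuzzy numbers, the ingredient named above, maps a fuzzy number (a, b, c, d) to the average of its four defining points; a minimal sketch (the paper's exact ranking function may differ):

    ```python
    from dataclasses import dataclass

    @dataclass
    class TrapezoidalFuzzy:
        """Trapezoidal fuzzy number (a, b, c, d) with support [a, d] and core [b, c]."""
        a: float
        b: float
        c: float
        d: float

        def rank(self) -> float:
            # A simple linear ranking function: the average of the four defining points.
            # Linearity means rank respects fuzzy addition and scalar multiplication,
            # so fuzzy objective values can be compared consistently during iterations.
            return (self.a + self.b + self.c + self.d) / 4.0

    # Comparing two fuzzy costs via the ranking function
    x = TrapezoidalFuzzy(1.0, 2.0, 3.0, 4.0)
    y = TrapezoidalFuzzy(2.0, 3.0, 4.0, 5.0)
    ```

    In a fuzzy interior point iteration, such a ranking function is what turns fuzzy quantities into crisp comparisons for choosing the feasible direction and the stopping test.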

  14. Scaling Factor Estimation Using an Optimized Mass Change Strategy, Part 1: Theory

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Fernández, Pelayo Fernández; Brincker, Rune

    2007-01-01

    In natural input modal analysis, only un-scaled mode shapes can be obtained. The mass change method is, in many cases, the simplest way to estimate the scaling factors; it involves repeated modal testing after changing the mass at different points of the structure where the mode shapes are known. The scaling factors are determined using the natural frequencies and mode shapes of both the modified and the unmodified structure. However, the uncertainty on the scaling factor estimation depends on the modal analysis and the mass change strategy (number, magnitude and location of the masses) used to modify...
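    A common form of this estimate (a sketch of the general idea as often quoted in the mass change literature, not necessarily the exact expression of this two-part paper) computes the scaling factor from the natural frequencies before and after the mass modification:

    ```python
    import numpy as np

    def scaling_factor(omega0, omega1, phi, delta_m):
        """Mass change estimate of the modal scaling factor:
        alpha**2 = (omega0**2 - omega1**2) / (omega1**2 * phi.T @ dM @ phi),
        where omega0/omega1 are the natural frequencies before/after adding the masses,
        phi is the un-scaled mode shape and dM the matrix of added masses."""
        phi = np.asarray(phi, dtype=float)
        dM = np.diag(delta_m)  # lumped masses at the measured coordinates
        alpha_sq = (omega0**2 - omega1**2) / (omega1**2 * (phi @ dM @ phi))
        return np.sqrt(alpha_sq)
    ```

    For a single-degree-of-freedom system (mass m, stiffness k, added mass dm) this expression is exact, which makes a convenient sanity check; for real structures the accuracy depends on the mass change strategy, as the abstract notes.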

  15. Regional scaling of annual mean precipitation and water availability with global temperature change

    Science.gov (United States)

    Greve, Peter; Gudmundsson, Lukas; Seneviratne, Sonia I.

    2018-03-01

    Changes in regional water availability are among the most crucial potential impacts of anthropogenic climate change, but are highly uncertain. It is thus of key importance for stakeholders to assess the possible implications of different global temperature thresholds on these quantities. Using a subset of climate model simulations from the fifth phase of the Coupled Model Intercomparison Project (CMIP5), we derive here the sensitivity of regional changes in precipitation and in precipitation minus evapotranspiration to global temperature changes. The simulations span the full range of available emission scenarios, and the sensitivities are derived using a modified pattern scaling approach. The applied approach assumes linear relationships on global temperature changes while thoroughly addressing associated uncertainties via resampling methods. This allows us to assess the full distribution of the simulations in a probabilistic sense. Northern high-latitude regions display robust responses towards wetting, while subtropical regions display a tendency towards drying but with a large range of responses. Even though both internal variability and the scenario choice play an important role in the overall spread of the simulations, the uncertainty stemming from the climate model choice usually accounts for about half of the total uncertainty in most regions. We additionally assess the implications of limiting global mean temperature warming to values below (i) 2 K or (ii) 1.5 K (as stated within the 2015 Paris Agreement). We show that opting for the 1.5 K target might just slightly influence the mean response, but could substantially reduce the risk of experiencing extreme changes in regional water availability.
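    In its simplest form, the pattern scaling described above reduces to a per-region linear regression of the local change on global mean warming; a toy sketch (synthetic numbers, not CMIP5 output):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "simulations": global-mean warming levels (K) and a regional
    # precipitation change that scales linearly with them (% per K) plus noise
    d_tglob = rng.uniform(0.5, 4.5, 200)
    true_sensitivity = 3.0          # % precipitation change per K of global warming
    d_precip = true_sensitivity * d_tglob + rng.normal(0.0, 0.5, 200)

    # Pattern-scaling regression through the origin: dP ~ s * dTglob
    s_hat = (d_tglob @ d_precip) / (d_tglob @ d_tglob)

    # Implied difference in regional change between stabilizing at 2.0 K versus 1.5 K
    extra_change = s_hat * (2.0 - 1.5)
    ```

    The paper's approach is richer (resampling to separate model, scenario and internal-variability uncertainty), but the fitted sensitivity per region is the core quantity being estimated.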

  16. A scale-entropy diffusion equation to describe the multi-scale features of turbulent flames near a wall

    Science.gov (United States)

    Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.

    2008-12-01

    Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, a unique fractal dimension cannot be defined: the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall-distance. Our aim here is to propose a general geometrical framework that can integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we thus introduce a general scale-entropy diffusion equation. We define the notion of “scale-evolutivity”, which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant “scale-evolutivity” over the scale-range is studied. In this case, called “parabolic scaling”, the fractal dimension is a linear function of the logarithm of scale. The case of a constant scale-evolutivity in the wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.

  17. Successful adaptation to climate change across scales

    International Nuclear Information System (INIS)

    Adger, W.N.; Arnell, N.W.; University of Southampton; Tompkins, E.L.; University of East Anglia, Norwich; University of Southampton

    2005-01-01

    Climate change impacts and responses are presently observed in physical and ecological systems. Adaptation to these impacts is increasingly being observed in both physical and ecological systems as well as in human adjustments to resource availability and risk at different spatial and societal scales. We review the nature of adaptation and the implications of different spatial scales for these processes. We outline a set of normative evaluative criteria for judging the success of adaptations at different scales. We argue that elements of effectiveness, efficiency, equity and legitimacy are important in judging success in terms of the sustainability of development pathways into an uncertain future. We further argue that each of these elements of decision-making is implicit within presently formulated scenarios of socio-economic futures of both emission trajectories and adaptation, though with different weighting. The process by which adaptations are to be judged at different scales will involve new and challenging institutional processes. (author)

  18. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to the no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into a higher-dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function, and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...

  19. Scale changes in air quality modelling and assessment of associated uncertainties

    International Nuclear Information System (INIS)

    Korsakissok, Irene

    2009-01-01

    After an introduction of issues related to a scale change in the field of air quality (existing scales for emissions, transport, turbulence and loss processes, hierarchy of data and models, methods of scale change), the author first presents Gaussian models which have been implemented within the Polyphemus modelling platform. These models are assessed by comparison with experimental observations and with other commonly used Gaussian models. The second part reports the coupling of the puff-based Gaussian model with the Eulerian Polair3D model for the sub-mesh processing of point sources. This coupling is assessed at the continental scale for a passive tracer, and at the regional scale for photochemistry. Different statistical methods are assessed

  20. Nuclear resonant scattering measurements on (57)Fe by multichannel scaling with a 64-pixel silicon avalanche photodiode linear-array detector.

    Science.gov (United States)

    Kishimoto, S; Mitsui, T; Haruki, R; Yoda, Y; Taniguchi, T; Shimazaki, S; Ikeno, M; Saito, M; Tanaka, M

    2014-11-01

    We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and depletion depth of 10 μm. An ultrafast frontend circuit allows the X-ray detector to obtain a high output rate of >10⁷ cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrum of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on ⁵⁷Fe.

  1. Simulation of electron energy loss spectra of nanomaterials with linear-scaling density functional theory

    International Nuclear Information System (INIS)

    Tait, E W; Payne, M C; Ratcliff, L E; Haynes, P D; Hine, N D M

    2016-01-01

    Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable. (paper)

  2. Linear stability of liquid films with phase change at the interface

    International Nuclear Information System (INIS)

    Spindler, Bertrand

    1980-01-01

    The objective of this research thesis is to study the linear stability of the flow of a liquid film on an inclined plane with a heat flux at the wall and an interfacial phase change, and to highlight the influence of the phase change on the flow stability. To do so, the author first proposes a rational simplification of the equations by studying the orders of magnitude of the different terms, based on some simple hypotheses regarding the flow physics. Two stability studies are then addressed, one regarding a flow with a pre-existing film, and the other regarding the flow of a condensation film. In both cases, it is assumed that there is no imposed heat flux, but the driving effect of the vapour on the liquid film is taken into account [fr

  3. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    Science.gov (United States)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter (WEC). The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on the analytical solutions, parametric analysis is performed to meet the design specifications of the WEC. Then, 2-D finite element analysis (FEA) is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and FEA methods under regular and irregular wave conditions.

  4. Reduced linear noise approximation for biochemical reaction networks with time-scale separation: The stochastic tQSSA+

    Science.gov (United States)

    Herath, Narmada; Del Vecchio, Domitilla

    2018-03-01

    Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.

  5. Multiple time scale analysis of pressure oscillations in solid rocket motors

    Science.gov (United States)

    Ahmed, Waqas; Maqsood, Adnan; Riaz, Rizwan

    2018-03-01

    In this study, acoustic pressure oscillations for single and coupled longitudinal acoustic modes in a Solid Rocket Motor (SRM) are investigated using the Multiple Time Scales (MTS) method. Two independent time scales are introduced: the oscillations occur on the fast time scale, whereas the amplitude and phase change on the slow time scale. Hopf bifurcation is employed to investigate the properties of the solution. The supercritical bifurcation phenomenon is observed for the linearly unstable system. The amplitude of the oscillations results from equal energy gain and loss rates of the longitudinal acoustic modes. The effect of linear instability and of the frequency of the longitudinal modes on the amplitude and phase of the oscillations is determined for both single and coupled modes. In both cases, the maximum amplitude of oscillations decreases with the frequency of the acoustic mode and the linear instability of the SRM. The comparison of analytical MTS results and numerical simulations demonstrates an excellent agreement.

  6. Change in Urban Albedo in London: A Multi-scale Perspective

    Science.gov (United States)

    Susca, T.; Kotthaus, S.; Grimmond, S.

    2013-12-01

    Urbanization-induced change in land use has considerable implications for climate, air quality, resources and ecosystems. Urban-induced warming is one of the most well-known impacts, and it can extend, directly and indirectly, beyond the city. One way to reduce its magnitude is to modify the surface-atmosphere exchanges by changing the urban albedo. Since the increased rugosity caused by the morphology of a city results in a lower albedo for constant material characteristics, changing the albedo has impacts across a range of scales. Here a multi-scale assessment of the potential effects of an increase in albedo in London is presented. This includes modeling at the global and meso-scale informed by local- and micro-scale measurements. In this study first-order calculations are conducted for the impact of changing the albedo (e.g. a 0.01 increase) on the radiative exchange. For example, such an increase, when incoming solar radiation and cloud cover are considered based on data retrieved from NASA (http://power.larc.nasa.gov/) for the ~1600 km² area of London, would produce a mean decrease in the instantaneous solar radiative forcing on the same surface of 0.40 W m⁻². The nature of the surface is critical in considering the impact of changes in albedo. For example, in the Central Activity Zone in London, pavement and building can vary from 10 to 100% of the plan area. From observations the albedo is seen to change dramatically with changes in building materials. For example, glass surfaces, which are being used increasingly in the central business district, result in dramatic changes in albedo. Using the documented albedo variations determined across different scales, the impacts are considered. For example, the effect of the increase in urban albedo is translated into the corresponding amount of avoided emission of carbon dioxide that produces the same effect on climate. At the local scale, the effect that the increase in urban albedo can potentially have on local

  7. Linear programming using Matlab

    CERN Document Server

    Ploskas, Nikolaos

    2017-01-01

    This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyzes the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus. The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...

  8. On linear correlation between interfacial tension of the water-solvent interface, solubility of water in organic solvents, and parameters of the diluent effect scale

    International Nuclear Information System (INIS)

    Mezhov, Eh.A.; Khananashvili, N.L.; Shmidt, V.S.

    1988-01-01

    The presence of a linear correlation between the solubility of water in organic solvents immiscible with it and the interfacial tension of the water-solvent interface, on the one hand, and the parameters π* of the solvent effect scale of these solvents, on the other hand, is established. This allows one, using the tabulated solvent effect parameter π* of each solvent, to predict values of the interfacial tension and water solubility for the corresponding systems. It is shown that the solvent effect scale allows values to be predicted more accurately than other known solvent scales, since, in contrast to the other scales, it characterizes solvents which are in equilibrium with water

  9. Study of load change control in PWRs using the methods of linear optimal control

    International Nuclear Information System (INIS)

    Yang, T.

    1983-01-01

    This thesis investigates the application of modern control theory to the problem of controlling load changes in PWR power plants. A linear optimal state feedback scheme resulting from linear optimal control theory with a quadratic cost function is reduced to a partially decentralized control system using mode preservation techniques. Minimum information transfer among major components of the plant is investigated to provide adequate coordination, simple implementation, and a reliable control system. Two control approaches are proposed: servo and model following. Each design considers several information structures for performance comparison. Integrated output error has been included in the control systems to accommodate external and plant parameter disturbances. In addition, the cross limit feature, specific to certain modern reactor control systems, is considered in the study to prevent low-pressure reactor trip conditions. An 11th-order nonlinear model for the reactor and boiler is derived based on theoretical principles, and simulation tests are performed for a 10% load change as an illustration of system performance
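    The design step named above rests on standard LQR machinery; a generic sketch (toy two-state plant with invented numbers, not the 11th-order PWR model) of computing a linear optimal state-feedback gain from the continuous-time algebraic Riccati equation, assuming SciPy is available:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Illustrative 2-state plant x' = A x + B u (numbers invented for the sketch)
    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)          # state weighting in the quadratic cost
    R = np.array([[1.0]])  # control weighting

    # Riccati solution P and the optimal state-feedback gain K = R^-1 B^T P
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)

    # Closed-loop dynamics x' = (A - B K) x are asymptotically stable
    eigs = np.linalg.eigvals(A - B @ K)
    ```

    The thesis goes further by making such a full-state feedback partially decentralized via mode preservation and by adding integrated output error; the Riccati-based gain above is only the starting point of that design.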

  10. On Feature Extraction from Large Scale Linear LiDAR Data

    Science.gov (United States)

    Acharjee, Partha Pratim

    Airborne light detection and ranging (LiDAR) can generate co-registered elevation and intensity maps over large terrains. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction, along with uses of the extracted features in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, the distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation/updating and hydro-flattening of LiDAR data for many other applications are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human interventions. This work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features such as the flatness of the water surface and the higher elevation change at the water-land interface, and optical properties such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated through automated and intelligent windowing, by resolving boundary issues and integrating all results into a single output. The whole algorithm is developed as an ArcGIS toolbox using Python libraries. Testing and validation are performed on large datasets to determine the effectiveness of the toolbox and results are

  11. Global non-linear effect of temperature on economic production.

    Science.gov (United States)

    Burke, Marshall; Hsiang, Solomon M; Miguel, Edward

    2015-11-12

    Growing evidence demonstrates that climatic conditions can have a profound impact on the functioning of modern human societies, but effects on economic activity appear inconsistent. Fundamental productive elements of modern economies, such as workers and crops, exhibit highly non-linear responses to local temperature even in wealthy countries. In contrast, aggregate macroeconomic productivity of entire wealthy countries is reported not to respond to temperature, while poor countries respond only linearly. Resolving this conflict between micro and macro observations is critical to understanding the role of wealth in coupled human-natural systems and to anticipating the global impact of climate change. Here we unify these seemingly contradictory results by accounting for non-linearity at the macro scale. We show that overall economic productivity is non-linear in temperature for all countries, with productivity peaking at an annual average temperature of 13 °C and declining strongly at higher temperatures. The relationship is globally generalizable, unchanged since 1960, and apparent for agricultural and non-agricultural activity in both rich and poor countries. These results provide the first evidence that economic activity in all regions is coupled to the global climate and establish a new empirical foundation for modelling economic loss in response to climate change, with important implications. If future adaptation mimics past adaptation, unmitigated warming is expected to reshape the global economy by reducing average global incomes roughly 23% by 2100 and widening global income inequality, relative to scenarios without climate change. In contrast to prior estimates, expected global losses are approximately linear in global mean temperature, with median losses many times larger than leading models indicate.
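The response curve described above, productivity peaking at roughly 13 °C and declining at higher temperatures, is the kind of relationship a simple quadratic captures; a minimal illustrative sketch in Python (the curvature coefficient is hypothetical, not the paper's estimate):

```python
# Illustrative quadratic growth-response curve peaking at 13 degrees C.
# The curvature coefficient is a hypothetical placeholder, not the
# empirical estimate from Burke, Hsiang and Miguel (2015).

def growth_response(temp_c, peak_c=13.0, curvature=-0.0005):
    """Deviation of annual growth rate from its optimum (quadratic form)."""
    return curvature * (temp_c - peak_c) ** 2

# Productivity is maximal at the peak and declines on both sides of it.
assert growth_response(13.0) == 0.0
assert growth_response(25.0) < growth_response(13.0)
assert growth_response(5.0) < growth_response(13.0)
```

The single global peak is what reconciles the micro and macro observations: rich countries sit near the optimum, so small temperature changes look like "no response", while poor countries sit on the steep warm side.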

  12. Global non-linear effect of temperature on economic production

    Science.gov (United States)

    Burke, Marshall; Hsiang, Solomon M.; Miguel, Edward

    2015-11-01

    Growing evidence demonstrates that climatic conditions can have a profound impact on the functioning of modern human societies, but effects on economic activity appear inconsistent. Fundamental productive elements of modern economies, such as workers and crops, exhibit highly non-linear responses to local temperature even in wealthy countries. In contrast, aggregate macroeconomic productivity of entire wealthy countries is reported not to respond to temperature, while poor countries respond only linearly. Resolving this conflict between micro and macro observations is critical to understanding the role of wealth in coupled human-natural systems and to anticipating the global impact of climate change. Here we unify these seemingly contradictory results by accounting for non-linearity at the macro scale. We show that overall economic productivity is non-linear in temperature for all countries, with productivity peaking at an annual average temperature of 13 °C and declining strongly at higher temperatures. The relationship is globally generalizable, unchanged since 1960, and apparent for agricultural and non-agricultural activity in both rich and poor countries. These results provide the first evidence that economic activity in all regions is coupled to the global climate and establish a new empirical foundation for modelling economic loss in response to climate change, with important implications. If future adaptation mimics past adaptation, unmitigated warming is expected to reshape the global economy by reducing average global incomes roughly 23% by 2100 and widening global income inequality, relative to scenarios without climate change. In contrast to prior estimates, expected global losses are approximately linear in global mean temperature, with median losses many times larger than leading models indicate.

  13. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method for the power calculation of linear and angular incremental photoelectric encoders, used to find the optimum parameters for their components, such as light emitters, photo-detectors, linear and angular scales, and optical components. It analyzes methods and devices that permit high resolutions on the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam, usually formed by a condenser lens, passes through the measuring unit and changes its value depending on the movement of the scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photo-detector block for processing in the electronic block. The starting point of the power calculation is therefore the required value of the optical signal at the input of the photo-detector block that can be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.

  14. Fourier imaging of non-linear structure formation

    Energy Technology Data Exchange (ETDEWEB)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk [Department of Physics and Astronomy, University of Aarhus, Ny Munkegade 120, DK-8000 Aarhus C (Denmark)

    2017-04-01

    We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N -body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.
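The kind of Fourier-space bookkeeping involved can be illustrated by computing a spherically averaged power spectrum from a toy 3D density field with `numpy`; the grid size and binning below are illustrative, not the simulation setup of the paper:

```python
import numpy as np

def power_spectrum_3d(delta, n_bins=8):
    """Spherically averaged power spectrum of a 3D overdensity field."""
    n = delta.shape[0]
    delta_k = np.fft.fftn(delta)                  # Fourier modes of the field
    power = np.abs(delta_k) ** 2 / delta.size     # per-mode power
    k = np.fft.fftfreq(n)                         # frequencies along one axis
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    edges = np.linspace(0.0, k_mag.max(), n_bins + 1)
    which = np.clip(np.digitize(k_mag, edges) - 1, 0, n_bins - 1)
    # Average power over each spherical |k| shell.
    p_sum = np.bincount(which, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    return edges, p_sum / np.maximum(counts, 1)

rng = np.random.default_rng(0)
delta = rng.standard_normal((16, 16, 16))   # toy white-noise "density" field
bins, p_k = power_spectrum_3d(delta)
```

The same per-shell machinery, applied to density, velocity divergence, and vorticity fields from the N-body snapshots, is what lets the authors track how power moves between wavenumbers as a function of redshift.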

  15. Fourier imaging of non-linear structure formation

    International Nuclear Information System (INIS)

    Brandbyge, Jacob; Hannestad, Steen

    2017-01-01

    We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N -body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.

  16. Position and out-of-straightness measurement of a precision linear air-bearing stage by using a two-degree-of-freedom linear encoder

    International Nuclear Information System (INIS)

    Kimura, Akihide; Gao, Wei; Lijiang, Zeng

    2010-01-01

    This paper presents the measurement of the X-directional position and the Z-directional out-of-straightness of a precision linear air-bearing stage with a two-degree-of-freedom (two-DOF) linear encoder, an optical displacement sensor for simultaneous measurement of the two-DOF displacements. The two-DOF linear encoder is composed of a reflective-type one-axis scale grating and an optical sensor head. A reference grating is placed perpendicular to the scale grating in the optical sensor head. The two-DOF displacements can be obtained from interference signals generated by the ±1 order diffracted beams from the two gratings. A prototype two-DOF linear encoder employing a scale grating with a grating period of approximately 1.67 µm measured the X-directional position and the Z-directional out-of-straightness of the linear air-bearing stage
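In grating-interferometer encoders of this kind, displacement is recovered from interference phase; a hedged sketch of the conversion, assuming for illustration a 4π phase change per grating period from the combined ±1 diffraction orders (the exact sensitivity factor depends on the optical configuration, which the abstract does not specify):

```python
import math

GRATING_PERIOD_UM = 1.67   # scale grating period quoted in the abstract

def phase_to_displacement_um(phase_rad, period_um=GRATING_PERIOD_UM):
    """Displacement for an interferometer in which the combined +/-1
    diffraction orders produce a 4*pi phase change per grating period
    (an assumed sensitivity, for illustration only)."""
    return phase_rad * period_um / (4.0 * math.pi)

# Under this assumption, one full 4*pi cycle equals one grating period.
assert abs(phase_to_displacement_um(4.0 * math.pi) - GRATING_PERIOD_UM) < 1e-9
```

The sub-period interpolation implied by this relation is what allows such encoders to resolve displacements far below the 1.67 µm grating pitch.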

  17. Impact of climate change on Taiwanese power market determined using linear complementarity model

    International Nuclear Information System (INIS)

    Tung, Ching-Pin; Tseng, Tze-Chi; Huang, An-Lei; Liu, Tzu-Ming; Hu, Ming-Che

    2013-01-01

    Highlights: ► Impact of climate change on average temperature is estimated. ► Temperature elasticity of demand is measured. ► Impact of climate change on the Taiwanese power market is determined. -- Abstract: The increase in greenhouse gas concentrations in the atmosphere causes significant changes in climate patterns. In turn, this climate change affects the environment, ecology, and human behavior. The emission of greenhouse gases from the power industry has been analyzed in many studies. However, the impact of climate change on the electricity market has received less attention. Hence, the purpose of this research is to determine the impact of climate change on the electricity market, and a case study involving the Taiwanese power market is conducted. First, the impact of climate change on temperature is estimated. Next, because electricity demand can be expressed as a function of temperature, the temperature elasticity of demand is measured. Then, a linear complementarity model is formulated to simulate the Taiwanese power market, and climate change scenarios are discussed. This paper thereby establishes a simulation framework for calculating the impact of climate change on electricity demand. In addition, the impact of climate change on the Taiwanese market is examined and presented.
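The demand-side step described above, expressing electricity demand as a function of temperature through an elasticity, can be sketched with a constant-elasticity form; all parameter values here are hypothetical placeholders, not the study's estimates:

```python
# Constant-elasticity demand curve: a 1% increase in temperature raises
# demand by `elasticity` percent. All numbers are hypothetical placeholders.

def demand_gwh(temp_c, base_demand_gwh=100.0, base_temp_c=25.0, elasticity=0.5):
    """Electricity demand as a function of annual mean temperature."""
    return base_demand_gwh * (temp_c / base_temp_c) ** elasticity

# Warmer climate scenarios shift demand upward relative to the baseline,
# which is the input perturbation fed into the market model.
assert demand_gwh(25.0) == 100.0
assert demand_gwh(27.0) > demand_gwh(25.0)
```

A temperature path from a climate scenario can then be mapped to a demand path, which the linear complementarity market model takes as input.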

  18. Exact spectrum of non-linear chirp scaling and its application in geosynchronous synthetic aperture radar imaging

    Directory of Open Access Journals (Sweden)

    Chen Qi

    2013-07-01

    Full Text Available Non-linear chirp scaling (NLCS) is a feasible method for dealing with the time-variant frequency modulation (FM) rate problem in synthetic aperture radar (SAR) imaging. However, approximations in the derivation of the NLCS spectrum lead to performance decline in some cases. The exact spectrum of the NLCS function is presented. A simulation with a geosynchronous synthetic aperture radar (GEO-SAR) configuration is implemented. The results show that using the presented spectrum can significantly improve imaging performance, and that the NLCS algorithm is suitable for GEO-SAR imaging after modification.

  19. Quantitative assessment of changes in landslide risk using a regional scale run-out model

    Science.gov (United States)

    Hussin, Haydar; Chen, Lixia; Ciurean, Roxana; van Westen, Cees; Reichenbach, Paola; Sterlacchini, Simone

    2015-04-01

    The risk of landslide hazard continuously changes in time and space and is rarely a static or constant phenomenon in an affected area. However, one of the main challenges of quantitatively assessing changes in landslide risk is the availability of multi-temporal data for the different components of risk. Furthermore, a truly "quantitative" landslide risk analysis requires the modeling of the landslide intensity (e.g. flow depth, velocities or impact pressures) affecting the elements at risk. Such a quantitative approach is often lacking in medium- to regional-scale studies in the scientific literature, or is left out altogether. In this research we modelled the temporal and spatial changes of debris flow risk in a narrow alpine valley in the North Eastern Italian Alps. The debris flow inventory from 1996 to 2011 and multi-temporal digital elevation models (DEMs) were used to assess the susceptibility of debris flow triggering areas and to simulate debris flow run-out using the Flow-R regional scale model. In order to determine debris flow intensities, we used a linear relationship that was found between back-calibrated, physically based Flo-2D simulations (local scale models of five debris flows from 2003) and the probability values of the Flow-R software. This gave us the possibility to assign flow depth to a total of 10 separate classes on a regional scale. Debris flow vulnerability curves from the literature, and one curve derived specifically for our case study area, were used to determine the damage for different material and building types associated with the elements at risk. The building values were obtained from the Italian Revenue Agency (Agenzia delle Entrate) and were classified per cadastral zone according to the Real Estate Observatory data (Osservatorio del Mercato Immobiliare, Agenzia Entrate - OMI). The minimum and maximum market value for each building was obtained by multiplying the corresponding land-use value (€/m²) with the building area and number of floors
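The intensity-assignment step described above, a linear calibration from Flow-R probability values to Flo-2D flow depths followed by discretization into 10 classes, can be sketched as follows; the slope, intercept, and class edges are hypothetical placeholders, not the study's calibration:

```python
import numpy as np

# Hypothetical linear calibration from Flow-R probability to flow depth (m).
# The real slope/intercept came from back-calibrated Flo-2D simulations.
SLOPE, INTERCEPT = 2.5, 0.1

def depth_class(probability, n_classes=10, max_depth_m=2.6):
    """Assign regional-scale probability values to one of n depth classes."""
    depth = SLOPE * np.asarray(probability, dtype=float) + INTERCEPT
    edges = np.linspace(0.0, max_depth_m, n_classes + 1)
    return np.clip(np.digitize(depth, edges) - 1, 0, n_classes - 1)

# Low, medium, and high probabilities land in increasingly severe classes.
classes = depth_class([0.05, 0.5, 0.95])
```

Each class can then be paired with a vulnerability curve to turn modelled intensity into expected damage for the exposed buildings.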

  20. Non-linear laws of echoic memory and auditory change detection in humans.

    Science.gov (United States)

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-07-03

    The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. The present findings suggest that temporal representation of echoic memory is non-linear and Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.
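The logarithmic dependencies reported above have the form of the Weber-Fechner law; a minimal sketch of such a relation (all coefficients are hypothetical, for illustration only):

```python
import math

def change_n1_amplitude(interval_ms, standard_dur_ms, a=-0.5, b=0.8, c=2.0):
    """Hypothetical Weber-Fechner-style model of change-N1 amplitude:
    decreasing in log(interval between standard and deviant), increasing
    in log(duration of the standard). Coefficients are placeholders."""
    return c + a * math.log10(interval_ms) + b * math.log10(standard_dur_ms)

# Longer intervals weaken the response; longer standards strengthen it,
# mirroring the negative and positive log-correlations in the abstract.
assert change_n1_amplitude(1000, 100) < change_n1_amplitude(1, 100)
assert change_n1_amplitude(10, 1000) > change_n1_amplitude(10, 25)
```

The log arguments match the stimulus ranges used in the study (intervals of 1-1000 ms, standard durations of 25-1000 ms); the coefficients themselves are not reported in the abstract.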

  1. An online re-linearization scheme suited for Model Predictive and Linear Quadratic Control

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the equations for the primal-dual interior-point quadratic programming solver used for MPC. The algorithm exploits the special structure of the MPC problem and is able to reduce the computational burden such that it scales linearly, rather than cubically, with the prediction horizon length, as would be the case if the structure were not exploited. It is also shown how models used for the design of model-based controllers, e.g. linear quadratic and model predictive, can be linearized both at equilibrium and non-equilibrium points, making...

  2. Change Analysis and Decision Tree Based Detection Model for Residential Objects across Multiple Scales

    Directory of Open Access Journals (Sweden)

    CHEN Liyan

    2018-03-01

    Full Text Available Change analysis and detection plays an important role in the updating of multi-scale databases. When an updated larger-scale dataset is overlaid on a to-be-updated smaller-scale dataset, attention usually focuses on temporal changes caused by the evolution of spatial entities; little attention is paid to representation changes introduced by map generalization. Using polygonal building data as an example, this study examines these changes from different perspectives, such as the reasons for their occurrence and the forms they take. Based on this knowledge, we employ a decision tree, a machine learning method, to establish a change detection model. The aim of the proposed model is to distinguish temporal changes that need to be applied as updates to the smaller-scale dataset from representation changes. The proposed method is validated through tests using real-world building data from Guangzhou city. The experimental results show that the overall precision of change detection is more than 90%, which indicates our method is effective at identifying changed objects.

  3. Piecewise Linear-Linear Latent Growth Mixture Models with Unknown Knots

    Science.gov (United States)

    Kohli, Nidhi; Harring, Jeffrey R.; Hancock, Gregory R.

    2013-01-01

    Latent growth curve models with piecewise functions are flexible and useful analytic models for investigating individual behaviors that exhibit distinct phases of development in observed variables. As an extension of this framework, this study considers a piecewise linear-linear latent growth mixture model (LGMM) for describing segmented change of…
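For a single trajectory, a piecewise linear-linear curve with an unknown knot can be fit by profiling over candidate knot locations; the sketch below (plain least squares with `numpy`, not the latent growth mixture machinery of the paper) illustrates the idea:

```python
import numpy as np

def fit_linear_linear(t, y, candidate_knots):
    """Fit y = b0 + b1*t + b2*max(t - knot, 0), choosing the knot that
    minimizes squared error. Returns (best_knot, coefficients)."""
    best = None
    for knot in candidate_knots:
        # Design matrix: intercept, pre-knot slope, post-knot slope change.
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, knot, beta)
    return best[1], best[2]

# Noiseless trajectory with slope 1 before t=4 and slope -0.5 after.
t = np.arange(10.0)
y = np.where(t < 4, 2 + 1.0 * t, 6 - 0.5 * (t - 4))
knot, beta = fit_linear_linear(t, y, candidate_knots=[2, 3, 4, 5, 6])
```

The mixture model in the paper generalizes this single-curve idea: each latent class has its own knot and segment slopes, estimated jointly with class membership.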

  4. Supervised scale-regularized linear convolutionary filters

    DEFF Research Database (Denmark)

    Loog, Marco; Lauze, Francois Bernard

    2017-01-01

    also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we achieve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic

  5. Land use change impacts on floods at the catchment scale

    NARCIS (Netherlands)

    Rogger, M.; Agnoletti, M.; Alaoui, A.; Bathurst, J.C.; Bodner, G.; Borga, M.; Chaplot, Vincent; Gallart, F.; Glatzel, G.; Hall, J.; Holden, J.; Holko, L.; Horn, R.; Kiss, A.; Kohnová, S.; Leitinger, G.; Lennartz, B.; Parajka, J.; Perdigão, R.; Peth, S.; Plavcová, L.; Quinton, John N.; Robinson, Matthew R.; Salinas, J.L.; Santoro, A.; Szolgay, J.; Tron, S.; Akker, van den J.J.H.; Viglione, A.; Blöschl, G.

    2017-01-01

    Research gaps in understanding flood changes at the catchment scale caused by changes in forest management, agricultural practices, artificial drainage, and terracing are identified. Potential strategies in addressing these gaps are proposed, such as complex systems approaches to link processes

  6. Designing for scale: How relationships shape curriculum change

    NARCIS (Netherlands)

    Pareja Roblin, Natalie; Corbalan, Gemma; McKenney, Susan; Nieveen, Nienke; Van den Akker, Jan

    2012-01-01

    Pareja Roblin, N., Corbalan Perez, G., McKenney, S., Nieveen, N., & Van den Akker, J. (2012, 13-17 April). Designing for scale: How relationships shape curriculum change. Presentation at the AERA annual meeting, Vancouver, Canada. Please see also http://hdl.handle.net/1820/4679

  7. Designing for scale: How relationships shape curriculum change

    NARCIS (Netherlands)

    Pareja Roblin, Natalie; Corbalan, Gemma; McKenney, Susan; Nieveen, Nienke; Van den Akker, Jan

    2012-01-01

    Pareja Roblin, N., Corbalan Perez, G., McKenney, S., Nieveen, N., & Van den Akker, J. (2012, 13-17 April). Designing for scale: How relationships shape curriculum change. Paper presentation at the AERA annual meeting, Vancouver, Canada. Please see also: http://hdl.handle.net/1820/4678

  8. A new method for large-scale assessment of change in ecosystem functioning in relation to land degradation

    Science.gov (United States)

    Horion, Stephanie; Ivits, Eva; Verzandvoort, Simone; Fensholt, Rasmus

    2017-04-01

    Ongoing pressures on European land are manifold, with extreme climate events and non-sustainable use of land resources being amongst the most important drivers altering the functioning of ecosystems. The protection and conservation of European natural capital is one of the key objectives of the 7th Environmental Action Plan (EAP). The EAP stipulates that European land must be managed in a sustainable way by 2020, and the UN Sustainable Development Goals define a Land Degradation Neutral world as one of their targets. This implies that land degradation (LD) assessment of European ecosystems must be performed repeatedly, allowing for the assessment of the current state of LD as well as of changes relative to a baseline adopted by the UNCCD for the objective of land degradation neutrality. However, scientifically robust methods are still lacking for large-scale assessment of LD and repeated, consistent mapping of the state of terrestrial ecosystems. Historical land degradation assessments based on various methods exist, but the methods are generally non-replicable or difficult to apply at continental scale (Allan et al. 2007). The current lack of research methods applicable at large spatial scales is notably caused by the non-robust definition of LD, the scarcity of field data on LD, and the complex interplay of the processes driving LD (Vogt et al., 2011). Moreover, the link between LD and changes in land use (how land use changes relate to changes in vegetation productivity and ecosystem functioning) is not straightforward. In this study we used the segmented trend method developed by Horion et al. (2016) for large-scale, systematic assessment of hotspots of change in ecosystem functioning in relation to LD. This method alleviates the shortcomings of the widely used linear trend model, which neither accounts for abrupt changes nor adequately captures the actual changes in ecosystem functioning (de Jong et al. 2013; Horion et al. 2016). Here we present a new methodology for

  9. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    Science.gov (United States)

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity, which makes large-scale inference infeasible. This is especially true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time-dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large-scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive with several existing state-of-the-art network inference methods.

  10. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-03-27

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved a significant reduction, by orders of magnitude, in the time needed to compute EMs. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.

  11. Non-linear laws of echoic memory and auditory change detection in humans

    Directory of Open Access Journals (Sweden)

    Takeshima Yasuyuki

    2010-07-01

    Full Text Available Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. Conclusions The present findings suggest that temporal representation of echoic memory is non-linear and that the Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.

  12. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  13. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  14. Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.

    Science.gov (United States)

    Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van

    2017-06-01

    In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using a probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.

  15. A Sawmill Manager Adapts To Change With Linear Programming

    Science.gov (United States)

    George F. Dutrow; James E. Granskog

    1973-01-01

    Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent hike in volume processed.

  16. Detection of kinetic change points in piece-wise linear single molecule motion

    Science.gov (United States)

    Hill, Flynn R.; van Oijen, Antoine M.; Duderstadt, Karl E.

    2018-03-01

    Single-molecule approaches present a powerful way to obtain detailed kinetic information at the molecular level. However, the identification of small rate changes is often hindered by the considerable noise present in such single-molecule kinetic data. We present a general method to detect such kinetic change points in trajectories of motion of processive single molecules having Gaussian noise, with a minimum number of parameters and without the need for an assumed kinetic model beyond piece-wise linearity of motion. Kinetic change points are detected using a likelihood ratio test in which the probability of no change is compared to the probability of a change occurring, given the experimental noise. A predetermined confidence interval minimizes the occurrence of false detections. Applying the method recursively to all sub-regions of a single molecule trajectory ensures that all kinetic change points are located. The algorithm presented allows rigorous and quantitative determination of kinetic change points in noisy single molecule observations without the need for filtering or binning, which reduce temporal resolution and obscure dynamics. The statistical framework for the approach and implementation details are discussed. The detection power of the algorithm is assessed using simulations with both single kinetic changes and multiple kinetic changes that typically arise in observations of single-molecule DNA-replication reactions. Implementations of the algorithm are provided in ImageJ plugin format written in Java and in the Julia language for numeric computing, with accompanying Jupyter Notebooks to allow reproduction of the analysis presented here.
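
    The likelihood-ratio test described above can be sketched in a few lines. The following is a minimal, self-contained illustration (not the authors' ImageJ or Julia implementation): a single change point is sought in simulated piecewise-linear data with Gaussian noise, and the fixed `llr_threshold` stands in for the predetermined confidence interval that limits false detections.

```python
import random
from math import inf

def fit_line(t, x):
    """Ordinary least-squares line fit; returns the sum of squared residuals."""
    n = len(t)
    st, sx = sum(t), sum(x)
    stt = sum(ti * ti for ti in t)
    stx = sum(ti * xi for ti, xi in zip(t, x))
    slope = (n * stx - st * sx) / (n * stt - st * st)
    intercept = (sx - slope * st) / n
    return sum((xi - (slope * ti + intercept)) ** 2 for ti, xi in zip(t, x))

def detect_change_point(t, x, sigma, llr_threshold=10.0):
    """Single change-point detection by a likelihood-ratio test.

    Compares a one-segment linear fit against the best two-segment fit.
    For Gaussian noise of known sigma, the log-likelihood ratio is the
    reduction in squared error divided by 2*sigma**2; the (arbitrary)
    threshold here plays the role of the predetermined confidence
    interval that limits false detections.
    """
    sse_one = fit_line(t, x)
    best_k, best_sse = None, inf
    for k in range(3, len(t) - 3):  # keep a few points in each segment
        sse_two = fit_line(t[:k], x[:k]) + fit_line(t[k:], x[k:])
        if sse_two < best_sse:
            best_k, best_sse = k, sse_two
    llr = (sse_one - best_sse) / (2.0 * sigma ** 2)
    return best_k if llr > llr_threshold else None

# Simulated trajectory: slope changes from 1 to 3 at index 50
random.seed(1)
sigma = 0.5
t = list(range(100))
x = [(ti if ti < 50 else 50.0 + 3.0 * (ti - 50)) + random.gauss(0.0, sigma)
     for ti in t]
k_hat = detect_change_point(t, x, sigma)
```

    In the recursive scheme described in the abstract, this test would then be re-applied to the sub-trajectories on either side of each detected change point until no further points are found.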

  17. Linear and non-linear Modified Gravity forecasts with future surveys

    Science.gov (United States)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

    Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross-correlation of these observables, and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.
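
    The Zero-phase Component Analysis step mentioned above amounts to whitening the parameter covariance with the symmetric matrix W = C^(-1/2); the rows of W then give decorrelated combinations of the binned η and μ amplitudes. A minimal, pure-Python sketch for a toy 2×2 covariance (illustrative numbers only, not the paper's Fisher forecasts):

```python
from math import atan2, cos, sin, sqrt

def zca_whitening_2x2(cov):
    """ZCA (zero-phase) whitening matrix W = C^(-1/2) for a symmetric,
    positive-definite 2x2 covariance, via a Jacobi eigen-rotation."""
    a, b, c = cov[0][0], cov[0][1], cov[1][1]
    theta = 0.5 * atan2(2.0 * b, a - c)  # eigenvector angle
    ct, st = cos(theta), sin(theta)
    # Eigenvalues in the rotated frame
    l1 = a * ct * ct + 2.0 * b * ct * st + c * st * st
    l2 = a * st * st - 2.0 * b * ct * st + c * ct * ct
    d1, d2 = 1.0 / sqrt(l1), 1.0 / sqrt(l2)
    # W = V diag(1/sqrt(l)) V^T -- symmetric, hence "zero-phase"
    return [[d1 * ct * ct + d2 * st * st, (d1 - d2) * ct * st],
            [(d1 - d2) * ct * st, d1 * st * st + d2 * ct * ct]]

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Toy covariance of two strongly correlated parameter bins
cov = [[1.0, 0.8], [0.8, 1.0]]
W = zca_whitening_2x2(cov)
white_cov = matmul2(matmul2(W, cov), W)  # should equal the identity
```

    The off-diagonal 0.8 mimics the strong correlation between redshift bins noted above; after whitening, the parameter combinations are uncorrelated with unit variance.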

  18. Scale-dependent three-dimensional charged black holes in linear and non-linear electrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Rincon, Angel; Koch, Benjamin [Pontificia Universidad Catolica de Chile, Instituto de Fisica, Santiago (Chile); Contreras, Ernesto; Bargueno, Pedro; Hernandez-Arboleda, Alejandro [Universidad de los Andes, Departamento de Fisica, Bogota, Distrito Capital (Colombia); Panotopoulos, Grigorios [Universidade de Lisboa, CENTRA, Instituto Superior Tecnico, Lisboa (Portugal)

    2017-07-15

    In the present work we study the scale dependence at the level of the effective action of charged black holes in Einstein-Maxwell as well as in Einstein-power-Maxwell theories in (2 + 1)-dimensional spacetimes without a cosmological constant. We allow for scale dependence of the gravitational and electromagnetic couplings, and we solve the corresponding generalized field equations imposing the null energy condition. Certain properties, such as horizon structure and thermodynamics, are discussed in detail. (orig.)

  19. A large-scale linear complementarity model of the North American natural gas market

    International Nuclear Information System (INIS)

    Gabriel, Steven A.; Jifang Zhuang; Kiet, Supat

    2005-01-01

    The North American natural gas market has seen significant changes recently due to deregulation and restructuring. For example, third party marketers can contract for transportation and purchase of gas to sell to end-users. While the intent was a more competitive market, the potential for market power exists. We analyze this market using a linear complementarity equilibrium model including producers, storage and peak gas operators, third party marketers and four end-use sectors. The marketers are depicted as Nash-Cournot players determining supply to meet end-use consumption; all other players are in perfect competition. Results based on National Petroleum Council scenarios are presented. (Author)
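
    The Nash-Cournot behavior of the marketers can be illustrated in miniature with a best-response iteration for a two-player Cournot game under linear inverse demand. The numbers are made up for illustration; the actual market model is a large mixed complementarity system solved with dedicated methods, not this toy iteration.

```python
def cournot_equilibrium(a, b, c1, c2, iters=200):
    """Best-response iteration for a two-player Cournot game with inverse
    demand P = a - b*(q1 + q2) and constant marginal costs c1, c2.
    Player i maximizes (P - ci)*qi, giving the best response
    qi = (a - ci - b*qj) / (2b); alternating updates converge for this
    linear-quadratic game."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = max(0.0, (a - c1 - b * q2) / (2.0 * b))
        q2 = max(0.0, (a - c2 - b * q1) / (2.0 * b))
    return q1, q2

# Symmetric toy market: equilibrium quantity is (a - c) / (3b) for each player
q1, q2 = cournot_equilibrium(a=100.0, b=1.0, c1=10.0, c2=10.0)
```

    For these symmetric numbers the fixed point matches the textbook closed form qi* = (a - 2*ci + cj) / (3b) = 30 for both players.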

  20. Inference regarding multiple structural changes in linear models with endogenous regressors

    Science.gov (United States)

    Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021

  1. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
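
    The elementary sparse-map operations named above (chaining, intersection, inversion) can be illustrated with plain dictionaries mapping indices to index sets. The atom/shell/basis-function index names below are hypothetical stand-ins for illustration, not the actual code library's API:

```python
def chain(f, g):
    """Compose sparse maps (dict: index -> set of indices):
    chain(f, g)[i] is the union of g[j] over all j in f[i]."""
    return {i: set().union(*(g.get(j, set()) for j in js))
            for i, js in f.items()}

def intersect(f, g):
    """Keep only the index pairs present in both sparse maps."""
    return {i: f[i] & g[i] for i in f.keys() & g.keys() if f[i] & g[i]}

def invert(f):
    """Reverse the direction of a sparse map: j -> {i : j in f[i]}."""
    inv = {}
    for i, js in f.items():
        for j in js:
            inv.setdefault(j, set()).add(i)
    return inv

# Hypothetical index sets: atoms -> shells, shells -> basis functions
atom_to_shell = {0: {0, 1}, 1: {2}}
shell_to_bf = {0: {10}, 1: {11}, 2: {12}}
atom_to_bf = chain(atom_to_shell, shell_to_bf)  # atoms -> basis functions
```

    Chaining the two maps yields the composite atom-to-basis-function map; this is the pattern by which sparsity is propagated through chains of index sets, in the same spirit as following CSR row pointers into column indices.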

  2. Polaron effects on the linear and the nonlinear optical absorption coefficients and refractive index changes in cylindrical quantum dots with applied magnetic field

    International Nuclear Information System (INIS)

    Wu Qingjie; Guo Kangxian; Liu Guanghui; Wu Jinghe

    2013-01-01

    Polaron effects on the linear and the nonlinear optical absorption coefficients and refractive index changes in cylindrical quantum dots with the radial parabolic potential and the z-direction linear potential with applied magnetic field are theoretically investigated. The optical absorption coefficients and refractive index changes are presented by using the compact-density-matrix approach and iterative method. Numerical calculations are presented for GaAs/AlGaAs. It is found that taking into account the electron-LO-phonon interaction, not only are the linear, the nonlinear and the total optical absorption coefficients and refractive index changes enhanced, but also the total optical absorption coefficients are more sensitive to the incident optical intensity. It is also found that no matter whether the electron-LO-phonon interaction is considered or not, the absorption coefficients and refractive index changes above are strongly dependent on the radial frequency, the magnetic field and the linear potential coefficient.

  3. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  4. Regional-Scale Forcing and Feedbacks from Alternative Scenarios of Global-Scale Land Use Change

    Science.gov (United States)

    Jones, A. D.; Chini, L. P.; Collins, W.; Janetos, A. C.; Mao, J.; Shi, X.; Thomson, A. M.; Torn, M. S.

    2011-12-01

    Future patterns of land use change depend critically on the degree to which terrestrial carbon management strategies, such as biological carbon sequestration and biofuels, are utilized in order to mitigate global climate change. Furthermore, land use change associated with terrestrial carbon management induces biogeophysical changes to surface energy budgets that perturb climate at regional and possibly global scales, activating different feedback processes depending on the nature and location of the land use change. As a first step in a broader effort to create an integrated earth system model, we examine two scenarios of future anthropogenic activity generated by the Global Change Assessment Model (GCAM) within the full-coupled Community Earth System Model (CESM). Each scenario stabilizes radiative forcing from greenhouse gases and aerosols at 4.5 W/m^2. In the first, stabilization is achieved through a universal carbon tax that values terrestrial carbon equally with fossil carbon, leading to modest afforestation globally and low biofuel utilization. In the second scenario, stabilization is achieved with a tax on fossil fuel and industrial carbon alone. In this case, biofuel utilization increases dramatically and crop area expands to claim approximately 50% of forest cover globally. By design, these scenarios exhibit identical climate forcing from atmospheric constituents. Thus, differences among them can be attributed to the biogeophysical effects of land use change. In addition, we utilize offline radiative transfer and offline land model simulations to identify forcing and feedback mechanisms operating in different regions. We find that boreal deforestation has a strong climatic signature due to significant albedo change coupled with a regional-scale water vapor feedback. Tropical deforestation, on the other hand, has more subtle effects on climate. Globally, the two scenarios yield warming trends over the 21st century that differ by 0.5 degrees Celsius. 

  5. Recent advances toward a general purpose linear-scaling quantum force field.

    Science.gov (United States)

    Giese, Timothy J; Huang, Ming; Chen, Haoyuan; York, Darrin M

    2014-09-16

    Conspectus: There is a need in the molecular simulation community to develop new quantum mechanical (QM) methods that can be routinely applied to the simulation of large molecular systems in complex, heterogeneous condensed phase environments. Although conventional methods, such as the hybrid quantum mechanical/molecular mechanical (QM/MM) method, are adequate for many problems, there remain other applications that demand a fully quantum mechanical approach. QM methods are generally required in applications that involve changes in electronic structure, such as when chemical bond formation or cleavage occurs, when molecules respond to one another through polarization or charge transfer, or when matter interacts with electromagnetic fields. A full QM treatment, rather than QM/MM, is necessary when these features present themselves over a wide spatial range that, in some cases, may span the entire system. Specific examples include the study of catalytic events that involve delocalized changes in chemical bonds, charge transfer, or extensive polarization of the macromolecular environment; drug discovery applications, where the wide range of nonstandard residues and protonation states are challenging to model with purely empirical MM force fields; and the interpretation of spectroscopic observables. Unfortunately, the enormous computational cost of conventional QM methods limit their practical application to small systems. Linear-scaling electronic structure methods (LSQMs) make possible the calculation of large systems but are still too computationally intensive to be applied with the degree of configurational sampling often required to make meaningful comparison with experiment. In this work, we present advances in the development of a quantum mechanical force field (QMFF) suitable for application to biological macromolecules and condensed phase simulations. QMFFs leverage the benefits provided by the LSQM and QM/MM approaches to produce a fully QM method that is able to

  6. Climate change adaptation strategies by small-scale farmers in ...

    African Journals Online (AJOL)

    Mburu

    Analyses were carried out in SPSS. Reported constraints were financial constraints (93.4%), lack of relevant skills (74.5%) and lack of … Key words: climate change, small-scale farmers, adaptation strategies.

  7. Reactivity-induced time-dependencies of EBR-II linear and non-linear feedbacks

    International Nuclear Information System (INIS)

    Grimm, K.N.; Meneghetti, D.

    1988-01-01

    Time-dependent linear feedback reactivities are calculated for stereotypical subassemblies in the EBR-II reactor. These quantities are calculated from nodal reactivities obtained from a kinetic code analysis of an experiment in which the change in power resulted from the dropping of a control rod. Shown with these linear reactivities are the reactivity associated with the control-rod shaft contraction and also the time-dependent non-linear (mainly bowing) component deduced from the inverse kinetics of the experimentally measured fission power and the calculated linear reactivities. (author)

  8. Phase Behavior of Blends of Linear and Branched Polyethylenes on Micron-Length Scales via Ultra-Small-Angle Neutron Scattering (USANS)

    International Nuclear Information System (INIS)

    Agamalian, M.M.; Alamo, R.G.; Londono, J.D.; Mandelkern, L.; Wignall, G.D.

    1999-01-01

    SANS experiments on blends of linear, high density (HD) and long chain branched, low density (LD) polyethylenes indicate that these systems form a one-phase mixture in the melt. However, the maximum spatial resolution of pinhole cameras is approximately 10³ Å, and it has therefore been suggested that the data might also be interpreted as arising from a bi-phasic melt with a large particle size (> 1 μm), because most of the scattering from the different phases would not be resolved. We have addressed this hypothesis by means of USANS experiments, which confirm that HDPE/LDPE blends are homogeneous in the melt on length scales up to 20 μm. We have also studied blends of HDPE and short-chain branched linear low density polyethylenes (LLDPEs), which phase separate when the branch content is sufficiently high. LLDPEs prepared with Ziegler-Natta catalysts exhibit a wide distribution of compositions, and may therefore be thought of as a blend of different species. When the composition distribution is broad enough, a fraction of highly branched chains may phase separate on μm-length scales, and USANS has also been used to quantify this phenomenon.

  9. Hybrid MPI-OpenMP Parallelism in the ONETEP Linear-Scaling Electronic Structure Code: Application to the Delamination of Cellulose Nanofibrils.

    Science.gov (United States)

    Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton

    2014-11-11

    We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations from an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.

  10. Capturing subregional variability in regional-scale climate change vulnerability assessments of natural resources

    Science.gov (United States)

    Polly C. Buotte; David L. Peterson; Kevin S. McKelvey; Jeffrey A. Hicke

    2016-01-01

    Natural resource vulnerability to climate change can depend on the climatology and ecological conditions at a particular site. Here we present a conceptual framework for incorporating spatial variability in natural resource vulnerability to climate change in a regional-scale assessment. The framework was implemented in the first regional-scale vulnerability...

  11. Linear inflation from quartic potential

    Energy Technology Data Exchange (ETDEWEB)

    Kannike, Kristjan; Racioppi, Antonio [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia); Raidal, Martti [National Institute of Chemical Physics and Biophysics,Rävala 10, 10143 Tallinn (Estonia); Institute of Physics, University of Tartu,Tartu (Estonia)

    2016-01-07

    We show that if the inflaton has a non-minimal coupling to gravity and the Planck scale is dynamically generated, the results of Coleman-Weinberg inflation are confined in between two attractor solutions: quadratic inflation, which is ruled out by the recent measurements, and linear inflation which, instead, is in the experimentally allowed region. The minimal scenario has only one free parameter — the inflaton’s non-minimal coupling to gravity — that determines all physical parameters such as the tensor-to-scalar ratio and the reheating temperature of the Universe. Should more precise future measurements of inflationary parameters point towards linear inflation, further interest in scale-invariant scenarios would be motivated.

  12. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    OpenAIRE

    Zuidwijk, Rob

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...
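
    The notion of a critical range can be made concrete with a two-variable example. Since an LP optimum is attained at a vertex of the feasible polytope, perturbing an objective coefficient leaves the optimal vertex (and hence the optimal basis) unchanged until the coefficient leaves its critical range. The polytope and objective coefficients below are made up for illustration:

```python
def lp_max_over_vertices(c, vertices):
    """Maximize c.x over the vertices of a polytope; for a linear program
    the optimum is always attained at a vertex, so this tiny example can
    enumerate a pre-computed vertex list instead of running a solver."""
    return max(vertices, key=lambda v: c[0] * v[0] + c[1] * v[1])

# Feasible region: x1 + x2 <= 4, 0 <= x1 <= 3, 0 <= x2 <= 3
vertices = [(0, 0), (3, 0), (3, 1), (1, 3), (0, 3)]

base = lp_max_over_vertices((2.0, 1.0), vertices)       # optimal vertex (3, 1)
perturbed = lp_max_over_vertices((2.5, 1.0), vertices)  # same vertex: c1 stays
                                                        # within its critical range
flipped = lp_max_over_vertices((0.5, 1.0), vertices)    # outside the range: a
                                                        # different vertex is optimal
```

    Within the critical range the optimal value varies linearly with c1 while the optimal solution stays fixed; once c1 falls below the boundary of the range, a different vertex (a different optimal basis) takes over.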

  13. Recent advances in electron linear accelerators

    International Nuclear Information System (INIS)

    Takeda, Seishi; Tsumori, Kunihiko; Takamuku, Setsuo; Okada, Toichi; Hayashi, Koichiro; Kawanishi, Masaharu

    1986-01-01

    In recently constructed electron linear accelerators, there have been remarkable advances both in the acceleration of high-current single-bunch electron beams for radiation research and in the generation of high accelerating gradients for high-energy accelerators. The injector of the ISIR single-bunch electron linear accelerator has been modified to increase the single-bunch charge up to 67 nC, ten times greater than the single-bunch charge expected in the early stage of construction. The linear collider projects require a high accelerating gradient of the order of 100 MeV/m in the linear accelerators. High-current and high-gradient linear accelerators make it possible to obtain high-energy electron beams with small-scale machines. These advances stimulate the application of linear accelerators not only to fundamental scientific research but also to industrial uses. (author)

  14. Watershed scale response to climate change--Trout Lake Basin, Wisconsin

    Science.gov (United States)

    Walker, John F.; Hunt, Randall J.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    General Circulation Model simulations of future climate through 2099 project a wide range of possible scenarios. To determine the sensitivity and potential effect of long-term climate change on the freshwater resources of the United States, the U.S. Geological Survey Global Change study, "An integrated watershed scale response to global change in selected basins across the United States" was started in 2008. The long-term goal of this national study is to provide the foundation for hydrologically based climate change studies across the nation.

  15. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    Science.gov (United States)

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
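
    The idea behind hierarchical sparsity patterns can be sketched by storing a matrix as a dictionary of dense blocks keyed by block indices, so that sparsity is exploited at the block level before touching individual elements. This toy block-sparse multiply is illustrative only, not the ONETEP scheme:

```python
def block_sparse_matmul(A, B, bs):
    """Multiply block-sparse matrices stored as {(I, K): dense bs x bs block}.
    Only pairs of stored blocks that share the inner block index K contribute,
    so the work scales with the number of non-zero blocks rather than the full
    matrix dimension. (A real implementation would index B by block row
    instead of scanning all of B for every block of A.)"""
    C = {}
    for (I, K), a in A.items():
        for (K2, J), b in B.items():
            if K2 != K:
                continue
            c = C.setdefault((I, J), [[0.0] * bs for _ in range(bs)])
            for i in range(bs):
                for j in range(bs):
                    c[i][j] += sum(a[i][k] * b[k][j] for k in range(bs))
    return C

# Two 4x4 matrices with 2x2 blocks, only the non-zero blocks stored
A = {(0, 0): [[1.0, 0.0], [0.0, 1.0]], (1, 1): [[2.0, 0.0], [0.0, 2.0]]}
B = {(0, 0): [[3.0, 0.0], [0.0, 3.0]], (1, 0): [[1.0, 1.0], [1.0, 1.0]]}
C = block_sparse_matmul(A, B, bs=2)
```

    The result contains only the blocks that can be non-zero; absent block keys represent exact zeros, which is what keeps both memory and arithmetic proportional to the stored blocks.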

  16. On the non-linear scale of cosmological perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Blas, Diego [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Garny, Mathias; Konstandin, Thomas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-04-15

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  17. On the non-linear scale of cosmological perturbation theory

    International Nuclear Information System (INIS)

    Blas, Diego; Garny, Mathias; Konstandin, Thomas

    2013-04-01

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  18. On the non-linear scale of cosmological perturbation theory

    CERN Document Server

    Blas, Diego; Konstandin, Thomas

    2013-01-01

    We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.

  19. Non-linear variability in geophysics scaling and fractals

    CERN Document Server

    Lovejoy, S

    1991-01-01

    consequences of broken symmetry - here parity - is studied. In this model, turbulence is dominated by a hierarchy of helical (corkscrew) structures. The authors stress the unique features of such pseudo-scalar cascades as well as the extreme nature of the resulting (intermittent) fluctuations. Intermittent turbulent cascades were also the theme of a paper by us in which we show that universality classes exist for continuous cascades (in which an infinite number of cascade steps occur over a finite range of scales). This result is the multiplicative analogue of the familiar central limit theorem for the addition of random variables. Finally, an interesting paper by Pasmanter investigates the scaling associated with anomalous diffusion in a chaotic tidal basin model involving a small number of degrees of freedom. Although the statistical literature is replete with techniques for dealing with those random processes characterized by both exponentially decaying (non-scaling) autocorrelations and exponentially decaying...

  20. A Non-Linear Upscaling Approach for Wind Turbines Blades Based on Stresses

    NARCIS (Netherlands)

    Castillo Capponi, P.; Van Bussel, G.J.W.; Ashuri, T.; Kallesoe, B.

    2011-01-01

    The linear scaling laws for upscaling wind turbine blades show a linear increase of stresses due to the weight. However, the stresses should remain the same for a suitable design. Application of linear scaling laws may therefore lead to an upscaled blade that is no longer a feasible design. In this

  1. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    Science.gov (United States)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab, and is provided as supplementary information. hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.

  2. Analysis of Multi-Scale Changes in Arable Land and Scale Effects of the Driving Factors in the Loess Areas in Northern Shaanxi, China

    Directory of Open Access Journals (Sweden)

    Lina Zhong

    2014-04-01

    Full Text Available In this study, statistical data on national economic and social development, including the year-end actual area of arable land, the crop yield per unit area and 10 factors, were obtained for the period between 1980 and 2010 and used to analyze the factors driving changes in the arable land of the Loess Plateau in northern Shaanxi, China. The following areas of arable land, which represent different spatial scales, were investigated: the Baota District, the city of Yan’an, and the Northern Shaanxi region. The scale effects of the factors driving the changes to the arable land were analyzed using a canonical correlation analysis and a principal component analysis. Because it was difficult to quantify the impact of national government policies on the arable land changes, the contributions of these policies to the changes in arable land were analyzed qualitatively. The primary conclusions of the study were as follows: between 1980 and 2010, the arable land area decreased. The trends of the year-end actual arable land proportion of the total area in the northern Shaanxi region and Yan’an City were broadly consistent, whereas the proportion in the Baota District had no obvious similarity with the northern Shaanxi region and Yan’an City. Remarkably different factors were shown to influence the changes in the arable land at different scales. Environmental factors exerted a greater effect for smaller-scale arable land areas (the Baota District). The effect of socio-economic development was a major driving factor for the changes in the arable land area at the city and regional scales. At smaller scales, population change, urbanization and socio-economic development affected the crop yield per unit area either directly or indirectly. Socio-economic development and the modernization of agricultural technology had a greater effect on the crop yield per unit area at large scales. Furthermore, the qualitative analysis

  3. Assessment of Change in Psychoanalysis: Another Way of Using the Change After Psychotherapy Scales.

    Science.gov (United States)

    Pires, António Pazo; Gonçalves, João; Sá, Vânia; Silva, Andrea; Sandell, Rolf

    2016-04-01

    A systematic method is presented whereby material from a full course of psychoanalytic treatment is analyzed to assess changes and identify patterns of change. Through an analysis of session notes, changes were assessed using the CHange After Psychotherapy scales (CHAP; Sandell 1987a), which evaluate changes in five rating variables (symptoms, adaptive capacity, insight, basic conflicts, and extratherapeutic factors). Change incidents were identified in nearly every session. Early in the analysis, relatively more change incidents related to insight were found than were found for the other types of change. By contrast, in the third year and part of the fourth year, relatively more change incidents related to basic conflicts and adaptive capacity were found. While changes related to symptoms occurred throughout the course of treatment, such changes were never more frequent than other types of change. A content analysis of the change incidents allowed a determination of when in the treatment the patient's main conflicts (identified clinically) were overcome. A crossing of quantitative data with clinical and qualitative data allowed a better understanding of the patterns of change. © 2016 by the American Psychoanalytic Association.

  4. Methods for assessment of climate variability and climate changes in different time-space scales

    International Nuclear Information System (INIS)

    Lobanov, V.; Lobanova, H.

    2004-01-01

    The main problem of hydrology and of design support for water projects is connected with modern climate change and its impact on hydrological characteristics, both observed and designed. There are three main stages of this problem: how to extract climate variability and climate change from complex hydrological records; how to assess the contribution of climate change and its significance for a point and an area; and how to use the detected climate change for the computation of design hydrological characteristics. The design hydrological characteristic is the main generalized information used for water management and design support. The first step of the research is the choice of a hydrological characteristic, which can be either a traditional one (annual runoff for the assessment of water resources, maxima and minima of runoff, etc.) or a new one characterizing the intra-annual function or intra-annual runoff distribution. For this purpose a linear model has been developed which has two coefficients, connected with the amplitude and the level (initial conditions) of the seasonal function, and one parameter, which characterizes the intensity of synoptic and macro-synoptic fluctuations within a year. Effective statistical methods have been developed for separating climate variability from climate change and for extracting homogeneous components of three time scales from observed long-term time series: intra-annual, decadal and centennial. The first two are connected with climate variability and the last (centennial) with climate change. The efficiency of the new methods of decomposition and smoothing has been estimated by stochastic modeling as well as on synthetic examples. For assessing the contribution and statistical significance of modern climate change components, statistical criteria and methods have been used. The next step is connected with a generalization of the results of detected climate changes over the area and spatial modeling.
For determination of homogeneous region with the same
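The two-coefficient seasonal model described in this record can be sketched as an ordinary least-squares fit of an amplitude and a level against a reference seasonal pattern, with the residual spread measuring the intensity of within-year (synoptic) fluctuations. A minimal sketch; the function name and setup are ours, not the authors' model.

```python
import numpy as np

def fit_amplitude_level(y, seasonal):
    """Least-squares fit of y(t) = a * seasonal(t) + b.

    'a' tracks the amplitude of the seasonal cycle, 'b' its level
    (initial conditions); the residual standard deviation characterizes
    the intensity of within-year synoptic fluctuations.
    """
    X = np.column_stack([seasonal, np.ones_like(seasonal)])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - (a * seasonal + b)
    return a, b, resid.std()
```

Fitting these coefficients year by year yields time series of amplitude and level whose trends can then be decomposed into variability and change components.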

  5. Introducing PROFESS 2.0: A parallelized, fully linear scaling program for orbital-free density functional theory calculations

    Science.gov (United States)

    Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.

    2010-12-01

    Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization.
    New version program summary:
    Program Title: PROFESS
    Catalogue identifier: AEBN_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 68 721
    No. of bytes in distributed program, including test data, etc.: 1 708 547
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer
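The variational idea behind OFDFT, minimizing an explicit density functional at fixed electron number, can be sketched with a toy one-dimensional Thomas-Fermi energy and projected gradient descent. This is a didactic sketch only: the functionals, optimizers, and units in PROFESS are far more sophisticated, and all names below are our assumptions.

```python
import numpy as np

C_TF = 2.871  # Thomas-Fermi constant (3D value, reused here for illustration)

def energy(rho, v, dx):
    """Toy OFDFT energy: Thomas-Fermi kinetic term plus external potential."""
    return np.sum(C_TF * rho**(5.0 / 3.0) + v * rho) * dx

def minimize_density(v, dx, n_elec, steps=500, lr=1e-3):
    """Gradient descent on the density, renormalizing the electron number
    and clipping to keep the density positive after every step."""
    rho = np.full_like(v, n_elec / (v.size * dx))  # uniform starting density
    for _ in range(steps):
        grad = (5.0 / 3.0) * C_TF * rho**(2.0 / 3.0) + v
        rho = np.clip(rho - lr * grad, 1e-12, None)  # keep density positive
        rho *= n_elec / (np.sum(rho) * dx)           # restore normalization
    return rho
```

In a harmonic external potential the density accumulates where the potential is low, and the total energy decreases from the uniform starting guess, which is the qualitative behavior a real OFDFT optimizer reproduces with far better convergence control.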

  6. Large scale atmospheric tropical circulation changes and consequences during global warming

    International Nuclear Information System (INIS)

    Gastineau, G.

    2008-01-01

    The changes of the tropical large-scale circulation during climate change can have large impacts on human activities. In a first part, the meridional atmospheric tropical circulation was studied in the different coupled models. During climate change, we find, on the one hand, that the Hadley meridional circulation and the subtropical jet are significantly shifted poleward, and on the other hand, that the intensity of the tropical circulation weakens. The slowdown of the atmospheric circulation results from the dry static stability changes affecting the tropical troposphere. Secondly, idealized simulations are used to explain the tropical circulation changes. Ensemble simulations using the model LMDZ4 are set up to study the results from the coupled model IPSLCM4. The weakening of the large-scale tropical circulation and the poleward shift of the Hadley cells are explained by both the uniform change and the meridional gradient change of the sea surface temperature. Then, we used the atmospheric model LMDZ4 in an aqua-planet configuration. The Hadley circulation changes are explained in a simple framework by the required poleward energy transport. In a last part, we focus on the water vapor distribution and feedback in the climate models. The Hadley circulation changes were shown to have a significant impact on the water vapor feedback during climate change. (author)

  7. Basic linear algebra

    CERN Document Server

    Blyth, T S

    2002-01-01

    Basic Linear Algebra is a text for first year students leading from concrete examples to abstract theorems, via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms which provides a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...

  8. The renormalization group: scale transformations and changes of scheme

    International Nuclear Information System (INIS)

    Roditi, I.

    1983-01-01

    Starting from a study of perturbation theory, the renormalization group is expressed, not only for changes of scale but also within the original view of Stueckelberg and Peterman, for changes of renormalization scheme. The consequences that follow from using that group are investigated. Following a more general point of view a method to obtain an improvement of the perturbative results for physical quantities is proposed. The results obtained with this method are compared with those of other existing methods. (L.C.) [pt

  9. Future changes in large-scale transport and stratosphere-troposphere exchange

    Science.gov (United States)

    Abalos, M.; Randel, W. J.; Kinnison, D. E.; Garcia, R. R.

    2017-12-01

    Future changes in large-scale transport are investigated in long-term (1955-2099) simulations of the Community Earth System Model - Whole Atmosphere Community Climate Model (CESM-WACCM) under an RCP6.0 climate change scenario. We examine artificial passive tracers in order to isolate transport changes from future changes in emissions and chemical processes. The model suggests enhanced stratosphere-troposphere exchange (STE) in both directions, with decreasing tropospheric-tracer and increasing stratospheric-tracer concentrations in the troposphere. Changes in the different transport processes are evaluated using the Transformed Eulerian Mean continuity equation, including parameterized convective transport. Dynamical changes associated with the rise of the tropopause height are shown to play a crucial role in future transport trends.

  10. Pattern recognition invariant under changes of scale and orientation

    Science.gov (United States)

    Arsenault, Henri H.; Parent, Sebastien; Moisan, Sylvain

    1997-08-01

    We have used a modified version of the method proposed by Neiberg and Casasent to successfully classify five kinds of military vehicles. The method uses a wedge filter to achieve scale invariance, and represents each target by a line in a multi-dimensional feature space as its out-of-plane orientation varies over 360 degrees around a vertical axis. The images were not binarized, but were filtered in a preprocessing step to reduce aliasing. The feature vectors were normalized and orthogonalized by means of a neural network. Out-of-plane rotations of 360 degrees and scale changes of a factor of four were considered. Error-free classification was achieved.
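The normalization step mentioned in this record is one simple route to scale invariance: once feature vectors are rescaled to unit length, a global magnification of the input no longer changes their direction. A minimal sketch with an assumed function name, not the authors' wedge-filter pipeline.

```python
import numpy as np

def normalize_features(F):
    """Normalize feature vectors (rows of F) to unit Euclidean length.

    Multiplying the raw features by any global positive factor leaves the
    normalized vectors unchanged, so downstream classification depends
    only on feature direction, not on overall scale.
    """
    return F / np.linalg.norm(F, axis=1, keepdims=True)
```

Orthogonalization of the resulting unit vectors (via a neural network in the record, or classically via Gram-Schmidt) then separates the per-class lines in feature space.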

  11. The non-linear power spectrum of the Lyman alpha forest

    International Nuclear Information System (INIS)

    Arinyo-i-Prats, Andreu; Miralda-Escudé, Jordi; Viel, Matteo; Cen, Renyue

    2015-01-01

    The Lyman alpha forest power spectrum has been measured on large scales by the BOSS survey in SDSS-III at z∼ 2.3, has been shown to agree well with linear theory predictions, and has provided the first measurement of Baryon Acoustic Oscillations at this redshift. However, the power at small scales, affected by non-linearities, has not been well examined so far. We present results from a variety of hydrodynamic simulations to predict the redshift space non-linear power spectrum of the Lyα transmission for several models, testing the dependence on resolution and box size. A new fitting formula is introduced to facilitate the comparison of our simulation results with observations and other simulations. The non-linear power spectrum has a generic shape determined by a transition scale from linear to non-linear anisotropy, and a Jeans scale below which the power drops rapidly. In addition, we predict the two linear bias factors of the Lyα forest and provide a better physical interpretation of their values and redshift evolution. The dependence of these bias factors and the non-linear power on the amplitude and slope of the primordial fluctuations power spectrum, the temperature-density relation of the intergalactic medium, and the mean Lyα transmission, as well as the redshift evolution, is investigated and discussed in detail. A preliminary comparison to the observations shows that the predicted redshift distortion parameter is in good agreement with the recent determination of Blomqvist et al., but the density bias factor is lower than observed. We make all our results publicly available in the form of tables of the non-linear power spectrum that is directly obtained from all our simulations, and parameters of our fitting formula
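On large scales the kind of fitting formula described here must reduce to the linear-theory (Kaiser) form, controlled by a density bias factor and a redshift-distortion parameter. A hedged sketch of that large-scale limit only; the non-linear correction factor of the actual fitting formula is omitted, and the names are illustrative.

```python
import numpy as np

def lya_linear_power(k, mu, b_delta, beta, P_lin):
    """Large-scale (linear-theory) Lyman-alpha flux power spectrum:

        P_F(k, mu) = b_delta**2 * (1 + beta * mu**2)**2 * P_lin(k),

    where mu is the cosine of the angle between k and the line of sight,
    b_delta the flux density bias, and beta the redshift-distortion
    parameter. Non-linear corrections would multiply this expression."""
    return b_delta**2 * (1 + beta * mu**2)**2 * P_lin(k)
```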

  12. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

    Science.gov (United States)

    Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

    2002-01-01

    Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory. (Contains…

  13. Tracking Electroencephalographic Changes Using Distributions of Linear Models: Application to Propofol-Based Depth of Anesthesia Monitoring.

    Science.gov (United States)

    Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2017-04-01

    Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features, which may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth-of-anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (autoregressive moving-average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the Observer's Assessment of Alertness/Sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA(2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians are used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
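The distribution-tracking idea can be sketched by fitting a low-order autoregressive model in a sliding window and collecting the parameter estimates. For simplicity this sketch uses a pure AR(2) least-squares fit rather than the ARMA(2,1) models of the study; the function names are ours.

```python
import numpy as np

def ar2_fit(x):
    """Least-squares AR(2) fit: x[t] ~ a1 * x[t-1] + a2 * x[t-2]."""
    X = np.column_stack([x[1:-1], x[:-2]])
    coeffs, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
    return coeffs

def sliding_ar2(x, win, step):
    """Collect AR(2) parameter estimates over a sliding window, yielding a
    distribution of model parameters rather than one short-term average."""
    return np.array([ar2_fit(x[i:i + win])
                     for i in range(0, len(x) - win + 1, step)])
```

Comparing the resulting parameter clouds between windows (e.g. with a distributional distance) is the kind of classification the study performs against the OAA/S labels.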

  14. Linear infrastructure impacts on landscape hydrology.

    Science.gov (United States)

    Raiter, Keren G; Prober, Suzanne M; Possingham, Hugh P; Westcott, Fiona; Hobbs, Richard J

    2018-01-15

    The extent of roads and other forms of linear infrastructure is burgeoning worldwide, but their impacts are inadequately understood and thus poorly mitigated. Previous studies have identified many potential impacts, including alterations to the hydrological functions and soil processes upon which ecosystems depend. However, these impacts have seldom been quantified at a regional level, particularly in arid and semi-arid systems where the gap in knowledge is the greatest, and impacts potentially the most severe. To explore the effects of extensive track, road, and rail networks on surface hydrology at a regional level we assessed over 1000 km of linear infrastructure, including approx. 300 locations where ephemeral streams crossed linear infrastructure, in the largely intact landscapes of Australia's Great Western Woodlands. We found a high level of association between linear infrastructure and altered surface hydrology, with erosion and pooling 5 and 6 times as likely to occur on-road than off-road on average (1.06 erosional and 0.69 pooling features km⁻¹ on vehicle tracks, compared with 0.22 and 0.12 km⁻¹ off-road, respectively). Erosion severity was greater in the presence of tracks, and 98% of crossings of ephemeral streamlines showed some evidence of impact on water movement (flow impedance (62%); diversion of flows (73%); flow concentration (76%); and/or channel initiation (31%)). Infrastructure type, pastoral land use, culvert presence, soil clay content and erodibility, mean annual rainfall, rainfall erosivity, topography and bare soil cover influenced the frequency and severity of these impacts. We conclude that linear infrastructure frequently affects ephemeral stream flows and intercepts natural overland and near-surface flows, artificially changing site-scale moisture regimes, with some parts of the landscape becoming abnormally wet and other parts becoming water-starved.
In addition, linear infrastructure frequently triggers or exacerbates erosion

  15. Linear polarized fluctuations in the cosmic microwave background

    International Nuclear Information System (INIS)

    Partridge, R.B.; Nowakowski, J.; Martin, H.M.

    1988-01-01

    We report here limits on the linear (and circular) polarization of the cosmic microwave background on small angular scales, 18'' ≤ θ ≤ 160''. The limits are based on radio maps of the Stokes parameters and of the linear and circular polarization. (author)

  16. Simulations of nanocrystals under pressure: Combining electronic enthalpy and linear-scaling density-functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Corsini, Niccolò R. C., E-mail: niccolo.corsini@imperial.ac.uk; Greco, Andrea; Haynes, Peter D. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)]; Hine, Nicholas D. M. [Department of Physics and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom)]; Molteni, Carla [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom)]

    2013-08-28

    We present an implementation in a linear-scaling density-functional theory code of an electronic enthalpy method, which has been found to be natural and efficient for the ab initio calculation of finite systems under hydrostatic pressure. Based on a definition of the system volume as that enclosed within an electronic density isosurface [M. Cococcioni, F. Mauri, G. Ceder, and N. Marzari, Phys. Rev. Lett. 94, 145501 (2005)], it supports both geometry optimizations and molecular dynamics simulations. We introduce an approach for calibrating the parameters defining the volume in the context of geometry optimizations and discuss their significance. Results in good agreement with simulations using explicit solvents are obtained, validating our approach. Size-dependent pressure-induced structural transformations and variations in the energy gap of hydrogenated silicon nanocrystals are investigated, including one comparable in size to recent experiments. A detailed analysis of the polyamorphic transformations reveals three types of amorphous structures, and their persistence on depressurization is assessed.
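The volume definition underlying the electronic enthalpy method, the region enclosed by a density isosurface, can be estimated on a real-space grid simply by counting voxels above the threshold. A minimal sketch under that assumption, not the calibrated definition of the paper; the function name is ours.

```python
import numpy as np

def isosurface_volume(rho, iso, voxel):
    """Estimate the volume enclosed by the isosurface rho = iso by counting
    grid points where the density exceeds the threshold, each contributing
    one voxel volume (simple quadrature on a regular real-space grid)."""
    return np.count_nonzero(rho > iso) * voxel
```

With this volume V in hand, the electronic enthalpy to be minimized is E + P*V at the target pressure P; more refined schemes smooth the isosurface counting so that forces remain well defined.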

  17. Multi-scale connectivity and graph theory highlight critical areas for conservation under climate change

    Science.gov (United States)

    Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.

    2016-01-01

    Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales, from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods including graph theory, circuit theory and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping-stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat, and should be applicable across a broad range of taxa.
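The least-cost path analysis named in this record reduces, on a rasterized landscape, to a shortest-path computation over a cost grid. A minimal sketch using Dijkstra's algorithm with 4-neighbour moves; the grid layout and function name are our assumptions, not the study's toolchain.

```python
import heapq

def least_cost_path(cost, start, goal):
    """Minimum accumulated cost from start to goal on a 2D landscape-cost
    grid, using Dijkstra's algorithm; the cost of a path is the sum of the
    cell costs it enters (including the start cell)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

Running this between habitat patches under current and projected cost surfaces is the basic operation behind corridor and stepping-stone identification.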

  18. Exploiting the atmosphere's memory for monthly, seasonal and interannual temperature forecasting using Scaling LInear Macroweather Model (SLIMM)

    Science.gov (United States)

    Del Rio Amador, Lenin; Lovejoy, Shaun

    2016-04-01

    Traditionally, most models for predicting the behavior of the atmosphere in the macroweather and climate regimes follow a deterministic approach. However, modern ensemble forecasting systems using stochastic parameterizations are in fact deterministic/stochastic hybrids that combine both elements to yield a statistical distribution of future atmospheric states. Nevertheless, the result is both highly complex (numerically and theoretically) and theoretically eclectic. In principle, it should be advantageous to exploit higher-level turbulence-type scaling laws. Concretely, in the case of the Global Circulation Models (GCMs), due to sensitive dependence on initial conditions, there is a deterministic predictability limit of the order of 10 days. When these models are coupled with ocean, cryosphere and other process models to make long-range climate forecasts, the high-frequency "weather" is treated as a driving noise in the integration of the modelling equations. Following Hasselmann (1976), this has led to stochastic models that directly generate the noise and model the low frequencies using systems of integer-order linear ordinary differential equations, the best known of which are the Linear Inverse Models (LIM). For annual global-scale forecasts, they are somewhat superior to the GCMs and have been presented as a benchmark for surface temperature forecasts with horizons up to decades. A key limitation of the LIM approach is that it assumes that the temperature has only short-range (exponential) decorrelations. In contrast, an increasing body of evidence shows that - as with the models - the atmosphere respects a scale-invariance symmetry leading to power laws with potentially enormous memories, so that LIM greatly underestimates the memory of the system.
In this talk we show that, due to the relatively low macroweather intermittency, the simplest scaling models - fractional Gaussian noise - can be used for making greatly improved forecasts
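
    The long-memory point can be made concrete with the autocovariance of fractional Gaussian noise, which decays as a power law rather than exponentially. A minimal sketch, not the authors' SLIMM code; the Hurst exponent H = 0.85 is an assumed illustrative value:

```python
def fgn_autocovariance(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise at integer lag k."""
    return 0.5 * (abs(k + 1) ** (2 * H) - 2 * abs(k) ** (2 * H) + abs(k - 1) ** (2 * H))

H = 0.85  # assumed Hurst exponent; H > 0.5 gives persistent, long-range memory
rho1 = fgn_autocovariance(1, H)    # lag-1 correlation
rho10 = fgn_autocovariance(10, H)  # still substantial at lag 10

# A short-memory AR(1) process with the same lag-1 correlation would have
# correlation rho1**10 at lag 10 -- orders of magnitude below the fGn value.
ar1_rho10 = rho1 ** 10
```

    For H > 0.5 this power-law tail is the memory that a scaling model can exploit for forecasting, whereas a short-range (exponential) model such as LIM effectively truncates it.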

  19. Scaling Factor Estimation Using Optimized Mass Change Strategy, Part 2: Experimental Results

    DEFF Research Database (Denmark)

    Fernández, Pelayo Fernández; Aenlle, Manuel López; Garcia, Luis M. Villa

    2007-01-01

    The mass change method is used to estimate the scaling factors; the uncertainty is reduced when, for each mode, the frequency shift is maximized and the changes in the mode shapes are minimized, which, in turn, depends on the mass change strategy chosen to modify the dynamic behavior of the struct...

  20. Large-scale impact of climate change vs. land-use change on future biome shifts in Latin America

    NARCIS (Netherlands)

    Boit, Alice; Sakschewski, Boris; Boysen, Lena; Cano-Crespo, Ana; Clement, Jan; Garcia-alaniz, Nashieli; Kok, Kasper; Kolb, Melanie; Langerwisch, Fanny; Rammig, Anja; Sachse, René; Eupen, van Michiel; Bloh, von Werner; Clara Zemp, Delphine; Thonicke, Kirsten

    2016-01-01

    Climate change and land-use change are two major drivers of biome shifts causing habitat and biodiversity loss. What is missing is a continental-scale future projection of the estimated relative impacts of both drivers on biome shifts over the course of this century. Here, we provide such a

  1. How preservation time changes the linear viscoelastic properties of porcine liver.

    Science.gov (United States)

    Wex, C; Stoll, A; Fröhlich, M; Arndt, S; Lippert, H

    2013-01-01

    The preservation time of a liver graft is one of the crucial factors for the success of a liver transplantation. Grafts are kept in a preservation solution to delay cell destruction and cellular edema and to maximize organ function after transplantation. However, longer preservation times are not always avoidable. In this paper we focus on the mechanical changes of porcine liver with increasing preservation time, in order to establish an indicator for the quality of a liver graft dependent on preservation time. A time interval of 26 h was covered and the rheological properties of liver tissue were studied using a stress-controlled rheometer. For samples with a preservation time of 1 h, 0.8% strain was found to be the limit of linear viscoelasticity. With increasing preservation time a decrease in the complex shear modulus, an indicator of stiffness, was observed over the frequency range from 0.1 to 10 Hz. A simple fractional derivative representation of the Kelvin-Voigt model was applied to gain further information about the changes of the mechanical properties of liver with increasing preservation time. Within the small shear rate interval of 0.0001-0.01 s⁻¹ the liver showed Newtonian-like flow behavior.
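
    A common form of the fractional-derivative Kelvin-Voigt model replaces the dashpot with a springpot, giving the complex modulus G*(ω) = G + η(iω)^α. A hedged sketch of that form; the parameter values below are invented for illustration, not the paper's fitted ones:

```python
import math

# Fractional Kelvin-Voigt: a spring (G) in parallel with a springpot (eta, alpha).
# For alpha = 1 this reduces to the classical Kelvin-Voigt model.
def complex_shear_modulus(omega, G, eta, alpha):
    return G + eta * (1j * omega) ** alpha

G, eta, alpha = 1000.0, 200.0, 0.25      # illustrative values (Pa, Pa*s^alpha, -)
freqs_hz = [0.1, 1.0, 10.0]              # the frequency range probed in the study
moduli = [complex_shear_modulus(2 * math.pi * f, G, eta, alpha) for f in freqs_hz]
storage = [m.real for m in moduli]       # G', elastic part
loss = [m.imag for m in moduli]          # G'', viscous part
```

    A stiffness decrease with preservation time would then show up as a drop in |G*| across this whole frequency window.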

  2. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    Science.gov (United States)

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. Minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
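
    The minimum-subnetwork idea can be sketched on a toy network. The sketch below replaces the MILP solver with brute-force enumeration (viable only at toy scale) on an invented three-reaction network, not one from the paper; it still produces the two outputs discussed above: all minimum subnetworks and the reactions common to them.

```python
from itertools import combinations

# Toy network (hypothetical): one metabolite A and three reactions:
#   R0:  -> A  (uptake),  R1: A ->  (required demand),  R2:  -> A  (alternative uptake)
STOICH_A = [1, -1, 1]   # net production of A per unit flux of each reaction
DEMAND = 1              # index of the reaction required to carry flux

def feasible(active):
    """Steady state with nonnegative fluxes and demand flux >= 1 is possible iff
    the demand reaction and at least one producer of A are both active."""
    return DEMAND in active and any(STOICH_A[j] > 0 for j in active if j != DEMAND)

def minimum_subnetworks(n_reactions):
    """All subnetworks of minimum cardinality meeting the requirement
    (an MILP solver replaces this enumeration at genome scale)."""
    for size in range(1, n_reactions + 1):
        hits = [set(c) for c in combinations(range(n_reactions), size) if feasible(set(c))]
        if hits:
            return hits
    return []

minima = minimum_subnetworks(3)     # two alternative minima: {R0, R1} and {R1, R2}
common = set.intersection(*minima)  # the demand reaction appears in every minimum
```

    The intersection step is what identifies reactions present in all minimum subnetworks, while the symmetric difference of the minima exposes the alternative pathways.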

  3. Linear Viscoelasticity of Spherical SiO 2 Nanoparticle-Tethered Poly(butyl acrylate) Hybrids

    KAUST Repository

    Goel, Vivek; Pietrasik, Joanna; Matyjaszewski, Krzysztof; Krishnamoorti, Ramanan

    2010-01-01

    The melt state linear viscoelastic properties of spherical silica nanoparticles with grafted poly(n-butyl acrylate) chains of varying molecular weight were probed using linear small amplitude dynamic oscillatory measurements and complementary linear stress relaxation measurements. While the pure silica-tethered-polymer hybrids with no added homopolymer exhibit solid-like response, addition of matched molecular weight free matrix homopolymer chains to this hybrid, at low concentrations of added homopolymer, maintains the solid-like response with a lowered modulus that can be factored into a silica concentration dependence and a molecular weight dependence. While the silica concentration dependence of the modulus is strong, the dependence on molecular weight is weak. On the other hand, increasing the amount of added homopolymer changes the viscoelastic response to that of a liquid with a relaxation time that scales exponentially with hybrid concentration. © 2010 American Chemical Society.

  4. Properties of Confined Star-Branched and Linear Chains. A Monte Carlo Simulation Study

    International Nuclear Information System (INIS)

    Romiszowski, P.; Sikorski, A.

    2004-01-01

    A model of linear and star-branched polymer chains confined between two parallel, impenetrable surfaces was built. The polymer chains were restricted to a simple cubic lattice. Two macromolecular architectures were studied: linear chains and star-branched chains consisting of f = 3 branches of equal length. The excluded volume was the only potential introduced into the model (an athermal system). Monte Carlo simulations were carried out using a sampling algorithm based on local changes of the chain's conformation. The simulations were carried out at different confinement conditions, from light to high compression of the chain. The scaling of the chain's size with chain length was studied and discussed, as was the influence of the confinement and the macromolecular architecture on the shape of a chain. The differences in shape between linear and star-branched chains are pointed out. (author)
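
    The flavour of excluded-volume scaling on a cubic lattice can be conveyed with a much simpler method than the authors' local-move algorithm: simple sampling of short, unconfined linear chains, rejecting any walk that intersects itself. This is only a sketch of the athermal (excluded-volume) model, limited to small chain lengths because the rejection rate grows rapidly:

```python
import random

# The six nearest-neighbour moves on the simple cubic lattice
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def sample_saw_end(n_steps, rng):
    """Grow one random walk; return its endpoint, or None if it self-intersects.
    Rejecting on any overlap samples self-avoiding walks uniformly."""
    pos = (0, 0, 0)
    visited = {pos}
    for _ in range(n_steps):
        dx, dy, dz = rng.choice(MOVES)
        pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
        if pos in visited:
            return None
        visited.add(pos)
    return pos

def mean_sq_end_to_end(n_steps, n_samples, seed=0):
    """Mean squared end-to-end distance of the excluded-volume chain."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    while count < n_samples:
        end = sample_saw_end(n_steps, rng)
        if end is not None:
            total += end[0] ** 2 + end[1] ** 2 + end[2] ** 2
            count += 1
    return total / count

r2_short = mean_sq_end_to_end(4, 2000)
r2_long = mean_sq_end_to_end(8, 2000)  # exceeds the ideal-chain value of N = 8
```

    Because of excluded volume, the mean squared size exceeds the ideal-walk value N and grows faster than linearly in chain length; star architectures and confining walls, as in the study above, modify the prefactors and effective exponents.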

  5. Linear Viscoelasticity of Spherical SiO 2 Nanoparticle-Tethered Poly(butyl acrylate) Hybrids

    KAUST Repository

    Goel, Vivek

    2010-12-01

    The melt state linear viscoelastic properties of spherical silica nanoparticles with grafted poly(n-butyl acrylate) chains of varying molecular weight were probed using linear small amplitude dynamic oscillatory measurements and complementary linear stress relaxation measurements. While the pure silica-tethered-polymer hybrids with no added homopolymer exhibit solid-like response, addition of matched molecular weight free matrix homopolymer chains to this hybrid, at low concentrations of added homopolymer, maintains the solid-like response with a lowered modulus that can be factored into a silica concentration dependence and a molecular weight dependence. While the silica concentration dependence of the modulus is strong, the dependence on molecular weight is weak. On the other hand, increasing the amount of added homopolymer changes the viscoelastic response to that of a liquid with a relaxation time that scales exponentially with hybrid concentration. © 2010 American Chemical Society.

  6. Service Providers’ Willingness to Change as Innovation Inductor in Services: Validating a Scale

    Directory of Open Access Journals (Sweden)

    Marina Figueiredo Moreir

    2016-12-01

    This study explores the willingness of service providers to incorporate changes suggested by clients altering previously planned services during their delivery, here named Willingness to Change in Services [WCS]. We apply qualitative research techniques to map seven dimensions related to this phenomenon: Client relationship management; Organizational conditions for change; Software characteristics and development; Conditions affecting teams; Administrative procedures and decision-making conditions; Entrepreneurial behavior; Interaction with supporting organizations. These dimensions were converted into variables composing a WCS scale, later submitted to theoretical and semantic validation. The resulting scale of 26 variables was applied in a large survey of 351 typical Brazilian software development service companies operating all over the country. Data from our sample were submitted to multivariate statistical analysis to provide validation for the scale. After factorial analysis procedures, 24 items were validated and assigned to three factors representative of WCS: Organizational Routines and Values – 12 variables; Organizational Structure for Change – 6 variables; and Service Specificities – 6 variables. As future contributions, we expect to see further testing of the WCS scale on other service activities to provide evidence about its limits and contributions to general service innovation theory.

  7. Efficient Non Linear Loudspeakers

    DEFF Research Database (Denmark)

    Petersen, Bo R.; Agerkvist, Finn T.

    2006-01-01

    Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating for non-linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels... by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption...

  8. Analysis of the efficiency of the linearization techniques for solving multi-objective linear fractional programming problems by goal programming

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2017-01-01

    This paper presents and analyzes the applicability of three linearization techniques used for solving multi-objective linear fractional programming problems with the goal programming method. The three linearization techniques are: (1) Taylor's polynomial linearization approximation, (2) the method of variable change, and (3) a modification of the method of variable change proposed in [20]. All three linearization techniques are presented and analyzed in two variants: (a) using the optimal values of the objective functions as the decision makers' aspirations, and (b) with the aspirations given by the decision makers. As criteria for the analysis we use the efficiency of the obtained solutions and the difficulties the analyst encounters in preparing the linearization models. To analyze the applicability of the linearization techniques incorporated in the linear goal programming method, we use an example of a financial structure optimization problem.
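
    The "method of variable change" for linear fractional objectives is commonly realized as the Charnes-Cooper transformation: substituting t = 1/(d·x + β) and y = t·x turns the fractional objective (c·x + α)/(d·x + β) into the linear one c·y + α·t. A small sketch verifying this identity pointwise; the example data are invented, not from the paper:

```python
def fractional_objective(x, c, alpha, d, beta):
    """The original ratio objective (c.x + alpha) / (d.x + beta)."""
    num = sum(ci * xi for ci, xi in zip(c, x)) + alpha
    den = sum(di * xi for di, xi in zip(d, x)) + beta
    return num / den

def transformed_objective(x, c, alpha, d, beta):
    """Charnes-Cooper variable change: t = 1/(d.x + beta), y = t*x.
    The linear expression c.y + alpha*t equals the original ratio."""
    t = 1.0 / (sum(di * xi for di, xi in zip(d, x)) + beta)
    y = [t * xi for xi in x]
    return sum(ci * yi for ci, yi in zip(c, y)) + alpha * t

# invented example data: maximize (2*x1 + x2) / (x1 + x2 + 1)
c, alpha = [2.0, 1.0], 0.0
d, beta = [1.0, 1.0], 1.0
points = [[0.0, 0.0], [1.0, 0.0], [0.3, 0.4]]
pairs = [(fractional_objective(x, c, alpha, d, beta),
          transformed_objective(x, c, alpha, d, beta)) for x in points]
```

    Adding the constraints d·y + β·t = 1, A·y ≤ b·t and t ≥ 0 then yields an ordinary linear program in (y, t), which is what makes the goal programming step tractable.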

  9. Linear disturbances on discontinuous permafrost: implications for thaw-induced changes to land cover and drainage patterns

    International Nuclear Information System (INIS)

    Williams, Tyler J; Quinton, William L; Baltzer, Jennifer L

    2013-01-01

    Within the zone of discontinuous permafrost, linear disturbances such as winter roads and seismic lines severely alter the hydrology, ecology, and ground thermal regime. Continued resource exploration in this environment has created a need to better understand the processes causing permafrost thaw and concomitant changes to the terrain and ground cover, in order to efficiently reduce the environmental impact of future exploration through the development of best management practices. In a peatland 50 km south of Fort Simpson, NWT, permafrost thaw and the resulting ground surface subsidence have produced water-logged linear disturbances that appear not to be regenerating permafrost, and in many cases have altered the land cover type to resemble that of a wetland bog or fen. Subsidence alters the hydrology of plateaus, developing a fill and spill drainage pattern that allows some disturbances to be hydrologically connected with adjacent wetlands via surface flow paths during periods of high water availability. The degree of initial disturbance is an important control on the extent of permafrost thaw and thus the overall potential recovery of the linear disturbance. Low impact techniques that minimize ground surface disturbance and maintain original surface topography by eliminating windrows are needed to minimize the impact of these linear disturbances. (letter)

  10. On Numerical Stability in Large Scale Linear Algebraic Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Liesen, J.

    2005-01-01

    Roč. 85, č. 5 (2005), s. 307-325 ISSN 0044-2267 R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : linear algebraic systems * eigenvalue problems * convergence * numerical stability * backward error * accuracy * Lanczos method * conjugate gradient method * GMRES method Subject RIV: BA - General Mathematics Impact factor: 0.351, year: 2005

  11. Burgers' turbulence problem with linear or quadratic external potential

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Leonenko, N.N.

    2005-01-01

    We consider solutions of Burgers' equation with linear or quadratic external potential and stationary random initial conditions of Ornstein-Uhlenbeck type. We study a class of limit laws that correspond to a scale renormalization of the solutions.

  12. Permafrost Hazards and Linear Infrastructure

    Science.gov (United States)

    Stanilovskaya, Julia; Sergeev, Dmitry

    2014-05-01

    The international experience of planning, constructing and operating linear infrastructure in the permafrost zone is directly tied to permafrost hazard assessment. That procedure should also consider the factors of climate impact and infrastructure protection. The current hotspots of global climate change are polar and mountain areas: temperatures are rising, precipitation and land-ice conditions are changing, and early springs occur more often. Large linear infrastructure crosses territories with different permafrost conditions, which are sensitive to the changes in air temperature, hydrology and snow accumulation connected to climatic dynamics. Among the most extensive linear structures built on permafrost worldwide are the Trans-Alaska Pipeline (USA), the Alaska Highway (Canada), the Qinghai-Xizang Railway (China) and the Eastern Siberia - Pacific Ocean Oil Pipeline (Russia). These are currently being influenced by regional climate change and permafrost impacts, which may act differently from place to place. Thermokarst is deemed the most dangerous process for linear engineering structures. Its formation and development depend on the type of linear structure (road or pipeline, elevated or buried), while zonal climate and geocryological conditions are also of determining importance. The projects are of different ages and some of them were implemented under different climatic conditions; the effects of permafrost thawing have been recorded every year since construction. The exploration and transportation companies of different countries protect their linear infrastructure from permafrost degradation in different ways. The highways in Alaska are in good condition due to governmental spending on annual reconstruction. The Chara-China Railroad in Russia is in a non-standard condition due to an intensive permafrost response. Standards for engineering and construction should be reviewed and updated to account for permafrost hazards caused by the

  13. A Linear Electromagnetic Piston Pump

    Science.gov (United States)

    Hogan, Paul H.

    Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also had good agreement with FEA; at zero piston displacement, the magnetic circuit calculates optimized motor force within 10% of FEA in less than 1/1000 the computational time. This makes it well suited to genetic optimization algorithms. The mechanical model agrees very well with the experimental piston pump position data when tuned for additional unmodeled mechanical friction. Optimized results suggest that an improvement of 400% of the state of the art power density is attainable with as high as 85% net efficiency. This demonstrates that a linear electromagnetic piston pump has potential to serve as a more compact and efficient supply of fluid power for the human scale.
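
    The quasi-static magnetic equivalent circuit step can be illustrated with a one-loop circuit: the coil mmf drives flux through a core reluctance in series with the air gap, and the force on the piston follows from the co-energy. This is a generic single-gap sketch, not the thesis geometry, and all parameter values below are invented:

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, H/m

def gap_reluctance(x, area):
    """Reluctance of an air gap of length x (m) and cross-section area (m^2)."""
    return x / (MU0 * area)

def coenergy(x, n_turns, current, area, r_core):
    """Co-energy of a magnetically linear circuit: W' = 0.5 * mmf^2 / R_total."""
    mmf = n_turns * current
    return 0.5 * mmf ** 2 / (r_core + gap_reluctance(x, area))

def force_on_plunger(x, n_turns, current, area, r_core):
    """Analytic force magnitude F = 0.5 * phi^2 * dR/dx (it pulls the gap closed)."""
    mmf = n_turns * current
    phi = mmf / (r_core + gap_reluctance(x, area))  # flux, Wb
    return 0.5 * phi ** 2 / (MU0 * area)            # dR/dx = 1/(MU0 * area)

# invented operating point
N, I, AREA, R_CORE, X = 200, 2.0, 1e-4, 1e5, 1e-3
force = force_on_plunger(X, N, I, AREA, R_CORE)
```

    Checking the analytic force against a numerical derivative of the co-energy is a cheap sanity check, analogous in spirit to the FEA comparison reported in the thesis.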

  14. Non-linearities in Theory-of-Mind Development.

    Science.gov (United States)

    Blijd-Hoogewys, Els M A; van Geert, Paul L C

    2016-01-01

    Research on Theory-of-Mind (ToM) has mainly focused on ages of core ToM development. This article follows a quantitative approach focusing on the level of ToM understanding on a measurement scale, the ToM Storybooks, in 324 typically developing children between 3 and 11 years of age. It deals with the possible occurrence of developmental non-linearities in ToM functioning, using smoothing techniques, dynamic growth model building and additional indicators, namely moving skewness, moving growth rate changes and moving variability. The ToM sum-scores showed an overall developmental trend that leveled off toward the age of 10 years. Within this overall trend two non-linearities in the group-based change pattern were found: a plateau at the age of around 56 months and a dip at the age of 72-78 months. These temporary regressions in ToM sum-score were accompanied by a decrease in growth rate and variability, and a change in skewness of the ToM data, all suggesting a developmental shift in ToM understanding. The temporary decreases also occurred in the different ToM sub-scores, and most clearly so in the core ToM component of beliefs. It was also found that girls had an earlier growth spurt than boys and that the underlying developmental path was more salient in girls than in boys. The consequences of these findings are discussed from various theoretical points of view, with an emphasis on a dynamic systems interpretation of the underlying developmental paths.
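
    Of the additional indicators listed above, moving skewness is the easiest to state: the third standardized moment computed over a sliding window of scores. A small sketch of that computation; the window size and example data are invented, not taken from the study:

```python
def skewness(xs):
    """Biased sample skewness: third standardized central moment."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def moving_skewness(series, window):
    """Skewness within each sliding window; changes in sign or magnitude
    can flag transitions such as the plateau and dip described above."""
    return [skewness(series[i:i + window]) for i in range(len(series) - window + 1)]
```

    A symmetric window of scores yields skewness near zero; a window where a few children lag (or leap ahead) yields a clearly non-zero value.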

  15. Modeling and simulation of nuclear fuel in scenarios with long time scales

    Energy Technology Data Exchange (ETDEWEB)

    Espinosa, Carlos E.; Bodmann, Bardo E.J., E-mail: eduardo.espinosa@ufrgs.br, E-mail: bardo.bodmann@ufrgs.br [Universidade Federal do Rio Grande do Sul (DENUC/PROMEC/UFRGS), Porto Alegre, RS (Brazil). Departamento de Engenharia Nuclear. Programa de Pos Graduacao em Engenharia Mecanica

    2015-07-01

    Nuclear reactors play a key role in defining the energy matrix. A study by the Fraunhofer Society shows, on different time scales and for long periods of time, the distribution of energy sources. Regardless of the scale, the use of nuclear energy is practically constant. In these scenarios, the behavior of the nuclear fuel over time is of interest. For kinetics on long time scales, the changing chemical composition of the fuel is significant. Thus, it is appropriate to consider the fission products called neutron poisons. Such products are of interest in the nuclear reactor, since they become parasitic neutron absorbers and act as long-lasting thermal heat sources. The objective of this work is to solve the kinetics system coupled to the neutron poison products. To solve this system, we use ideas similar to the Adomian decomposition method. Initially, one separates the system of equations into the sum of a linear part and a non-linear part in order to solve a recursive system. The nonlinearity is treated as an Adomian polynomial. We present numerical results of the effects of changing the power of a reactor in scenarios such as start-up and shut-down. For these results we consider time-dependent reactivity: linear, quadratic polynomial and oscillatory. With these results one can simulate the chemical composition of the fuel due to the reuse of the spent fuel in subsequent cycles. (author)
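
    The decomposition idea described above (linear part solved recursively, nonlinearity expanded in Adomian polynomials) can be shown on a textbook scalar problem, u' = -u², u(0) = 1, whose exact solution is 1/(1+t). This is a generic illustration of the method, not the authors' coupled poison-kinetics system:

```python
def adomian_series_coeffs(n_terms):
    """Coefficients c_k of u(t) = sum c_k t^k for u' = -u^2, u(0) = 1.
    For N(u) = u^2 the Adomian polynomial is A_n = sum_{i=0}^{n} u_i * u_{n-i};
    each component u_k here is the monomial c_k t^k, so A_n carries the power t^n
    and the recursive integration u_{n+1} = -int A_n dt gives
    c_{n+1} = -(coefficient of t^n in A_n) / (n + 1)."""
    c = [1.0]  # u_0 = u(0) = 1
    for n in range(n_terms - 1):
        a_n = sum(c[i] * c[n - i] for i in range(n + 1))  # coefficient of t^n in A_n
        c.append(-a_n / (n + 1))
    return c

coeffs = adomian_series_coeffs(6)   # alternating +-1: the series of 1/(1 + t)
t = 0.2
approx = sum(ck * t ** k for k, ck in enumerate(coeffs))
exact = 1.0 / (1.0 + t)
```

    The partial sums reproduce the geometric series of 1/(1+t), which is why a handful of recursive terms already gives an accurate solution near t = 0.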

  16. Modeling and simulation of nuclear fuel in scenarios with long time scales

    International Nuclear Information System (INIS)

    Espinosa, Carlos E.; Bodmann, Bardo E.J.

    2015-01-01

    Nuclear reactors play a key role in defining the energy matrix. A study by the Fraunhofer Society shows, on different time scales and for long periods of time, the distribution of energy sources. Regardless of the scale, the use of nuclear energy is practically constant. In these scenarios, the behavior of the nuclear fuel over time is of interest. For kinetics on long time scales, the changing chemical composition of the fuel is significant. Thus, it is appropriate to consider the fission products called neutron poisons. Such products are of interest in the nuclear reactor, since they become parasitic neutron absorbers and act as long-lasting thermal heat sources. The objective of this work is to solve the kinetics system coupled to the neutron poison products. To solve this system, we use ideas similar to the Adomian decomposition method. Initially, one separates the system of equations into the sum of a linear part and a non-linear part in order to solve a recursive system. The nonlinearity is treated as an Adomian polynomial. We present numerical results of the effects of changing the power of a reactor in scenarios such as start-up and shut-down. For these results we consider time-dependent reactivity: linear, quadratic polynomial and oscillatory. With these results one can simulate the chemical composition of the fuel due to the reuse of the spent fuel in subsequent cycles. (author)

  17. Large-Scale Ocean Circulation-Cloud Interactions Reduce the Pace of Transient Climate Change

    Science.gov (United States)

    Trossman, D. S.; Palter, J. B.; Merlis, T. M.; Huang, Y.; Xia, Y.

    2016-01-01

    Changes to the large scale oceanic circulation are thought to slow the pace of transient climate change due, in part, to their influence on radiative feedbacks. Here we evaluate the interactions between CO2-forced perturbations to the large-scale ocean circulation and the radiative cloud feedback in a climate model. Both the change of the ocean circulation and the radiative cloud feedback strongly influence the magnitude and spatial pattern of surface and ocean warming. Changes in the ocean circulation reduce the amount of transient global warming caused by the radiative cloud feedback by helping to maintain low cloud coverage in the face of global warming. The radiative cloud feedback is key in affecting atmospheric meridional heat transport changes and is the dominant radiative feedback mechanism that responds to ocean circulation change. Uncertainty in the simulated ocean circulation changes due to CO2 forcing may contribute a large share of the spread in the radiative cloud feedback among climate models.

  18. Large linear magnetoresistance from neutral defects in Bi$_2$Se$_3$

    OpenAIRE

    Kumar, Devendra; Lakhani, Archana

    2016-01-01

    The chalcogenide Bi$_2$Se$_3$ can attain the three-dimensional (3D) Dirac semimetal state under the influence of strain and microstrain. Here we report the presence of large linear magnetoresistance in such a Bi$_2$Se$_3$ crystal. The magnetoresistance has a quadratic form at low fields which crosses over to linear above 4 T. The temperature dependence of the magnetoresistance scales with carrier mobility, and the crossover field scales with the inverse of mobility. Our analysis suggests that the linear ma...

  19. Millennial-scale temperature change velocity in the continental northern Neotropics.

    Directory of Open Access Journals (Sweden)

    Alexander Correa-Metrio

    Climate has been inherently linked to global diversity patterns, and yet no empirical data are available to put modern climate change into a millennial-scale context. High tropical species diversity has been linked to slow rates of climate change during the Quaternary, an assumption that lacks an empirical foundation. Thus, there is the need to quantify the velocity at which the bioclimatic space changed during the Quaternary in the tropics. Here we present rates of climate change for the late Pleistocene and Holocene from Mexico and Guatemala. An extensive modern pollen survey and fossil pollen data from two long sedimentary records (30,000 and 86,000 years for highlands and lowlands, respectively) were used to estimate past temperatures. Derived temperature profiles show a parallel long-term trend and a similar cooling during the Last Glacial Maximum in the Guatemalan lowlands and the Mexican highlands. Temperature estimates and digital elevation models were used to calculate the velocity of isotherm displacement (temperature change velocity) for the time period contained in each record. Our analyses showed that temperature change velocities in Mesoamerica during the late Quaternary were at least four times slower than values reported for the last 50 years, but also at least twice as fast as those obtained from recent models. Our data demonstrate that, given extremely high temperature change velocities, species survival must have relied on either microrefugial populations or persistence of suppressed individuals. Contrary to the usual expectation of stable climates being associated with high diversity, our results suggest that Quaternary tropical diversity was probably maintained by centennial-scale oscillatory climatic variability that forestalled competitive exclusion. As humans have simplified modern landscapes, thereby removing potential microrefugia, and climate change is occurring monotonically at a very high velocity, extinction risk

  20. Millennial-scale temperature change velocity in the continental northern Neotropics.

    Science.gov (United States)

    Correa-Metrio, Alexander; Bush, Mark; Lozano-García, Socorro; Sosa-Nájera, Susana

    2013-01-01

    Climate has been inherently linked to global diversity patterns, and yet no empirical data are available to put modern climate change into a millennial-scale context. High tropical species diversity has been linked to slow rates of climate change during the Quaternary, an assumption that lacks an empirical foundation. Thus, there is the need for quantifying the velocity at which the bioclimatic space changed during the Quaternary in the tropics. Here we present rates of climate change for the late Pleistocene and Holocene from Mexico and Guatemala. An extensive modern pollen survey and fossil pollen data from two long sedimentary records (30,000 and 86,000 years for highlands and lowlands, respectively) were used to estimate past temperatures. Derived temperature profiles show a parallel long-term trend and a similar cooling during the Last Glacial Maximum in the Guatemalan lowlands and the Mexican highlands. Temperature estimates and digital elevation models were used to calculate the velocity of isotherm displacement (temperature change velocity) for the time period contained in each record. Our analyses showed that temperature change velocities in Mesoamerica during the late Quaternary were at least four times slower than values reported for the last 50 years, but also at least twice as fast as those obtained from recent models. Our data demonstrate that, given extremely high temperature change velocities, species survival must have relied on either microrefugial populations or persistence of suppressed individuals. Contrary to the usual expectation of stable climates being associated with high diversity, our results suggest that Quaternary tropical diversity was probably maintained by centennial-scale oscillatory climatic variability that forestalled competitive exclusion. As humans have simplified modern landscapes, thereby removing potential microrefugia, and climate change is occurring monotonically at a very high velocity, extinction risk for tropical
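
    The temperature change velocity used here is the ratio of the temporal temperature trend to the local spatial temperature gradient, i.e. the speed at which an isotherm must be displaced. A minimal one-dimensional transect sketch with invented numbers (the study used full digital elevation models, not a single transect):

```python
def change_velocity(trend, temps, dx):
    """Isotherm displacement speed along a transect.
    trend: warming rate (deg C per unit time); temps: temperatures sampled every
    dx distance units. Returns one velocity (distance per time) per interior point."""
    velocities = []
    for i in range(1, len(temps) - 1):
        slope = abs(temps[i + 1] - temps[i - 1]) / (2.0 * dx)  # centered difference
        velocities.append(trend / slope)
    return velocities

# invented example: a uniform spatial gradient of 0.01 deg C per km,
# warmed at 0.02 deg C per year -> isotherms must move 2 km per year
temps = [0.01 * i for i in range(5)]
v = change_velocity(trend=0.02, temps=temps, dx=1.0)
```

    The same ratio explains why mountainous terrain buffers warming: steep spatial gradients (large slope) yield small velocities, while flat lowlands yield very large ones.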

  1. THE STRUCTURE AND LINEAR POLARIZATION OF THE KILOPARSEC-SCALE JET OF THE QUASAR 3C 345

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, David H.; Wardle, John F. C.; Marchenko, Valerie V., E-mail: roberts@brandeis.edu [Department of Physics MS-057, Brandeis University, Waltham, MA 02454-0911 (United States)

    2013-02-01

    Deep Very Large Array imaging of the quasar 3C 345 at 4.86 and 8.44 GHz has been used to study the structure and linear polarization of its radio jet on scales ranging from 2 to 30 kpc. There is a 7-8 Jy unresolved core with spectral index α ≈ -0.24 (Iν ∝ ν^α). The jet (typical intensity 15 mJy beam⁻¹) consists of a 2.″5 straight section containing two knots, and two additional non-co-linear knots at the end. The jet's total projected length is about 27 kpc. The spectral index of the jet varies over -1.1 ≲ α ≲ -0.5. The jet diverges with a semi-opening angle of about 9°, and is nearly constant in integrated brightness over its length. A faint feature northeast of the core does not appear to be a true counter-jet, but rather an extended lobe of this FR-II radio source seen in projection. The absence of a counter-jet is sufficient to place modest constraints on the speed of the jet on these scales, requiring β ≳ 0.5. Despite the indication of jet precession in the total intensity structure, the polarization images suggest instead a jet re-directed at least twice by collisions with the external medium. Surprisingly, the electric vector position angles in the main body of the jet are neither longitudinal nor transverse, but make an angle of about 55° with the jet axis in the middle, while along the edges the vectors are transverse, suggesting a helical magnetic field. There is no significant Faraday rotation in the source, so that is not the cause of the twist. The fractional polarization in the jet averages 25% and is higher at the edges. In a companion paper, Roberts and Wardle show that differential Doppler boosting in a diverging relativistic velocity field can explain the electric vector pattern in the jet.

  2. Distinguishing globally-driven changes from regional- and local-scale impacts: The case for long-term and broad-scale studies of recovery from pollution.

    Science.gov (United States)

    Hawkins, S J; Evans, A J; Mieszkowska, N; Adams, L C; Bray, S; Burrows, M T; Firth, L B; Genner, M J; Leung, K M Y; Moore, P J; Pack, K; Schuster, H; Sims, D W; Whittington, M; Southward, E C

    2017-11-30

    Marine ecosystems are subject to anthropogenic change at global, regional and local scales. Global drivers interact with regional- and local-scale impacts of both a chronic and acute nature. Natural fluctuations and those driven by climate change need to be understood to diagnose local- and regional-scale impacts, and to inform assessments of recovery. Three case studies are used to illustrate the need for long-term studies: (i) separation of the influence of fishing pressure from climate change on bottom fish in the English Channel; (ii) recovery of rocky shore assemblages from the Torrey Canyon oil spill in the southwest of England; (iii) interaction of climate change and chronic Tributyltin pollution affecting recovery of rocky shore populations following the Torrey Canyon oil spill. We emphasize that "baselines" or "reference states" are better viewed as envelopes that are dependent on the time window of observation. Recommendations are made for adaptive management in a rapidly changing world. Copyright © 2017. Published by Elsevier Ltd.

  3. Metric preheating and limitations of linearized gravity

    International Nuclear Information System (INIS)

    Bassett, Bruce A.; Tamburini, Fabrizio; Kaiser, David I.; Maartens, Roy

    1999-01-01

    During the preheating era after inflation, resonant amplification of quantum field fluctuations takes place. Recently it has become clear that this must be accompanied by resonant amplification of scalar metric fluctuations, since the two are united by Einstein's equations. Furthermore, this 'metric preheating' enhances particle production, and leads to gravitational rescattering effects even at linear order. In multi-field models with strong preheating (q>>1), metric perturbations are driven non-linear, with the strongest amplification typically on super-Hubble scales (k→0). This amplification is causal, being due to the super-Hubble coherence of the inflaton condensate, and is accompanied by resonant growth of entropy perturbations. The amplification invalidates the use of the linearized Einstein field equations, irrespective of the amount of fine-tuning of the initial conditions. This has serious implications on all scales - from large-angle cosmic microwave background (CMB) anisotropies to primordial black holes. We investigate the (q,k) parameter space in a two-field model, and introduce the time to non-linearity, t_nl, as the timescale for the breakdown of the linearized Einstein equations. t_nl is a robust indicator of resonance behavior, showing the fine structure in q and k that one expects from a quasi-Floquet system, and we argue that t_nl is a suitable generalization of the static Floquet index in an expanding universe. Backreaction effects are expected to shut down the linear resonances, but cannot remove the existing amplification, which threatens the viability of strong preheating when confronted with the CMB. Mode-mode coupling and turbulence tend to re-establish scale invariance, but this process is limited by causality and for small k the primordial scale invariance of the spectrum may be destroyed. We discuss ways to escape the above conclusions, including secondary phases of inflation and preheating solely to fermions.

  4. Climate change impacts and adaptations on small-scale livestock production

    Directory of Open Access Journals (Sweden)

    Taruvinga, A.

    2013-06-01

    Full Text Available The paper estimated the impacts of climate change and adaptations on small-scale livestock production. The study is based on a survey of 1484 small-scale livestock rural farmers across the Eastern Cape Province of South Africa. Regression estimates find that with warming, the probability of choosing the following species increases: goats, dual purpose chicken (DPC), layers, donkeys and ducks. High precipitation increases the probability of choosing the following animals: beef, goats, DPC and donkeys. Further, socio-economic estimates indicate that livestock selection choices are also conditioned by gender, age, marital status, education and household size. The paper therefore concludes that as climate changes, rural farmers switch their livestock combinations as a coping strategy. Unfortunately, rural farmers face a limited pool of preferred livestock selections that are compatible with a harsh climate, which might translate to a bleak future for rural livestock farmers.

  5. Computerized implementation of higher-order electron-correlation methods and their linear-scaling divide-and-conquer extensions.

    Science.gov (United States)

    Nakano, Masahiko; Yoshikawa, Takeshi; Hirata, So; Seino, Junji; Nakai, Hiromi

    2017-11-05

    We have implemented linear-scaling divide-and-conquer (DC)-based higher-order coupled-cluster (CC) and Møller-Plesset perturbation theories (MPPT), as well as their combinations, automatically by means of the tensor contraction engine, which is a computerized symbolic algebra system. The DC-based energy expressions of the standard CC and MPPT methods and the CC methods augmented with a perturbation correction were proposed for up to high excitation orders [e.g., CCSDTQ, MP4, and CCSD(2)_TQ]. The numerical assessment for hydrogen halide chains, polyene chains, and the first coordination sphere (C1) model of photoactive yellow protein has revealed that the DC-based correlation methods provide reliable correlation energies with significantly less computational cost than that of the conventional implementations. © 2017 Wiley Periodicals, Inc.

  6. Modified ocean circulation, albedo instability and ice-flow instability. Risks of non-linear climate change

    Energy Technology Data Exchange (ETDEWEB)

    Ham, J. van; Beer, R.J. van; Builtjes, P.J.H.; Roemer, M.G.M. [TNO Inst. of Environmental Sciences, Delft (Netherlands); Koennen, G.P. [KNMI, Royal Netherlands Meteorological Inst., de Bilt (Netherlands); Oerlemans, J. [Utrecht Univ. (Netherlands). Inst. for Meteorological and Atmospheric Research

    1995-12-31

    This presentation describes part of an investigation into risks of climate change that are presently not adequately covered in General Circulation Models. In the concept of climate change as a result of the enhanced greenhouse effect it is generally assumed that the radiative forcings from increased concentrations of greenhouse gases (GHG) will result in a proportional or quasilinear global warming. Though correlations of this kind are known from palaeoclimate research, the variability of the climate seems to prevent the direct proof of a causal relation between recent greenhouse gas concentrations and temperature observations. In order to resolve the issue the use of General Circulation Models (GCMs), though still inadequate at present, is indispensable. Around the world some 10 leading GCMs exist which have been the subject of evaluation and intercomparison in a number of studies. Their results are regularly assessed in the IPCC process. A discussion on their performance in simulating present or past climates and the causes of their weak points shows that the depiction of clouds is a major weakness of GCMs. A second element which is virtually absent in GCMs is feedback from natural biogeochemical cycles. These cycles are influenced by man in a number of ways. GCMs have a limited performance in simulating regional effects on climate. Moreover, albedo instability, in part due to its interaction with cloudiness, is only roughly represented. Apparently, not all relevant processes have been included in the GCMs. That situation constitutes a risk, since it cannot be ruled out that a missing process could cause or trigger a non-linear climate change. In the study, non-linear climate change is connected with those processes which could provide feedbacks with a risk of non-monotonous or discontinuous behaviour of the climate system, or which are unpredictable or could cause rapid transitions.

  7. Modified ocean circulation, albedo instability and ice-flow instability. Risks of non-linear climate change

    Energy Technology Data Exchange (ETDEWEB)

    Ham, J. van; Beer, R.J. van; Builtjes, P.J.H.; Roemer, M.G.M. [TNO Inst. of Environmental Sciences, Delft (Netherlands); Koennen, G.P. [KNMI, Royal Netherlands Meteorological Inst., de Bilt (Netherlands); Oerlemans, J. [Utrecht Univ. (Netherlands). Inst. for Meteorological and Atmospheric Research

    1996-12-31

    This presentation describes part of an investigation into risks of climate change that are presently not adequately covered in General Circulation Models. In the concept of climate change as a result of the enhanced greenhouse effect it is generally assumed that the radiative forcings from increased concentrations of greenhouse gases (GHG) will result in a proportional or quasilinear global warming. Though correlations of this kind are known from palaeoclimate research, the variability of the climate seems to prevent the direct proof of a causal relation between recent greenhouse gas concentrations and temperature observations. In order to resolve the issue the use of General Circulation Models (GCMs), though still inadequate at present, is indispensable. Around the world some 10 leading GCMs exist which have been the subject of evaluation and intercomparison in a number of studies. Their results are regularly assessed in the IPCC process. A discussion on their performance in simulating present or past climates and the causes of their weak points shows that the depiction of clouds is a major weakness of GCMs. A second element which is virtually absent in GCMs is feedback from natural biogeochemical cycles. These cycles are influenced by man in a number of ways. GCMs have a limited performance in simulating regional effects on climate. Moreover, albedo instability, in part due to its interaction with cloudiness, is only roughly represented. Apparently, not all relevant processes have been included in the GCMs. That situation constitutes a risk, since it cannot be ruled out that a missing process could cause or trigger a non-linear climate change. In the study, non-linear climate change is connected with those processes which could provide feedbacks with a risk of non-monotonous or discontinuous behaviour of the climate system, or which are unpredictable or could cause rapid transitions.

  8. An experimental verification of the compensation of length change of line scales caused by ambient air pressure

    International Nuclear Information System (INIS)

    Takahashi, Akira; Miwa, Nobuharu

    2010-01-01

    Line scales are used as a working standard of length for the calibration of optical measuring instruments such as profile projectors, measuring microscopes and video measuring systems. The authors have developed a one-dimensional calibration system for line scales to obtain a lower uncertainty of measurement. The scale calibration system, named Standard Scale Calibrator SSC-05, employs a vacuum interferometer system for length measurement, a 633 nm iodine-stabilized He–Ne laser to calibrate the oscillating frequency of the interferometer laser light source and an Abbe error compensation structure. To reduce the uncertainty of measurement, the uncertainty factors of the line scale and ambient conditions should not be neglected. Using the length calibration system, the expansion and contraction of a line scale due to changes in ambient air pressure were observed and the measured scale length was corrected to the length under standard atmospheric pressure, 1013.25 hPa. Utilizing a natural rapid change in the air pressure caused by a tropical storm (typhoon), we carried out an experiment on the length measurement of a 1000 mm long line scale made of glass ceramic with a low coefficient of thermal expansion. Using a compensation formula for the length change caused by changes in ambient air pressure, the length change of the 1000 mm long line scale was compensated with a standard deviation of less than 1 nm.

  9. An alternative test for verifying electronic balance linearity

    International Nuclear Information System (INIS)

    Thomas, I.R.

    1998-02-01

    This paper presents an alternative method for verifying electronic balance linearity and accuracy. This method is being developed for safeguards weighings (weighings for the control and accountability of nuclear material) at the Idaho National Engineering and Environmental Laboratory (INEEL). With regard to balance linearity and accuracy, DOE Order 5633.3B, Control and Accountability of Nuclear Materials, Paragraph 2, 4, e, (1), (a) Scales and Balances Program, states: "All scales and balances used for accountability purposes shall be maintained in good working condition, recalibrated according to an established schedule, and checked for accuracy and linearity on each day that the scale or balance is used for accountability purposes." Various tests have been proposed for testing accuracy and linearity. At the 1991 Measurement Science Conference, Dr. Walter E. Kupper presented a paper entitled "Validation of High Accuracy Weighing Equipment." Dr. Kupper emphasized that tolerance checks for calibrated, state-of-the-art electronic equipment need not be complicated, and he presented four easy steps for verifying that a calibrated balance is operating correctly. These tests evaluate the standard deviation of successive weighings (of the same load), the off-center error, the calibration error, and the error due to nonlinearity. This method of balance validation is undoubtedly an authoritative means of ensuring balance operability, yet it could have two drawbacks: one, the test for linearity is not intuitively obvious, especially from a statistical viewpoint; and two, there is an absence of definitively defined testing limits. Hence, this paper describes an alternative means of verifying electronic balance linearity and accuracy that is being developed for safeguards measurements at the INEEL.
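
The linearity check discussed above can be sketched numerically: fit a least-squares line through readings of certified reference masses, take the worst-case residual as the linearity error, and use repeated weighings of one load for repeatability. The masses, readings, and implied tolerances below are hypothetical illustrations, not INEEL data or the paper's actual procedure.

```python
from statistics import mean, stdev

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    mx, my = mean(x), mean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

reference = [0.0, 50.0, 100.0, 150.0, 200.0]             # certified masses, g
readings = [0.001, 50.002, 100.001, 150.003, 200.002]    # balance indications, g

slope, intercept = fit_line(reference, readings)
# Linearity error: worst-case departure of a reading from the fitted line.
residuals = [b - (slope * a + intercept) for a, b in zip(reference, readings)]
max_dev = max(abs(r) for r in residuals)

repeats = [100.001, 100.002, 100.001, 100.000, 100.002]  # same load, g
print(f"slope = {slope:.6f}, max linearity deviation = {max_dev * 1000:.4f} mg, "
      f"repeatability = {stdev(repeats) * 1000:.4f} mg")
```

In practice the deviations would be compared against defined testing limits, which is exactly the gap in existing tests that the paper identifies.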

  10. The effect of changes in sea surface temperature on linear growth of Porites coral in Ambon Bay

    International Nuclear Information System (INIS)

    Corvianawatie, Corry; Putri, Mutiara R.; Cahyarini, Sri Y.

    2015-01-01

    Coral is one of the most important organisms in the coral reef ecosystem. There are several factors affecting coral growth, one of which is changes in sea surface temperature (SST). The purpose of this research is to understand the influence of SST variability on the annual linear growth of Porites coral taken from Ambon Bay. The annual coral linear growth was calculated and compared to the annual SST from the Extended Reconstructed Sea Surface Temperature version 3b (ERSST v3b) model. Coral growth was calculated by using Coral X-radiograph Density System (CoralXDS) software. Coral sample X-radiographs were used as input data. Chronology was developed by counting the coral’s annual growth bands. A pair of high and low density banding patterns observed in the coral’s X-radiograph represents one year of coral growth. The results of this study show that the Porites coral record extends from 2001 to 2009, with an average growth rate of 1.46 cm/year. Statistical analysis shows that the annual coral linear growth declined by 0.015 cm/year while the annual SST declined by 0.013°C/year. Annual SST and the annual linear growth of Porites coral in Ambon Bay are not significantly correlated (r=0.304, n=9, p>0.05). This indicates that annual SST variability does not significantly influence the linear growth of Porites coral from Ambon Bay. It is suggested that sedimentation load, salinity, pH or other environmental factors may affect annual linear coral growth.

  11. The effect of changes in sea surface temperature on linear growth of Porites coral in Ambon Bay

    Energy Technology Data Exchange (ETDEWEB)

    Corvianawatie, Corry, E-mail: corvianawatie@students.itb.ac.id; Putri, Mutiara R., E-mail: mutiara.putri@fitb.itb.ac.id [Oceanography Study Program, Bandung Institute of Technology (ITB), Jl. Ganesha 10 Bandung (Indonesia); Cahyarini, Sri Y., E-mail: yuda@geotek.lipi.go.id [Research Center for Geotechnology, Indonesian Institute of Sciences (LIPI), Bandung (Indonesia)

    2015-09-30

    Coral is one of the most important organisms in the coral reef ecosystem. There are several factors affecting coral growth, one of which is changes in sea surface temperature (SST). The purpose of this research is to understand the influence of SST variability on the annual linear growth of Porites coral taken from Ambon Bay. The annual coral linear growth was calculated and compared to the annual SST from the Extended Reconstructed Sea Surface Temperature version 3b (ERSST v3b) model. Coral growth was calculated by using Coral X-radiograph Density System (CoralXDS) software. Coral sample X-radiographs were used as input data. Chronology was developed by counting the coral’s annual growth bands. A pair of high and low density banding patterns observed in the coral’s X-radiograph represents one year of coral growth. The results of this study show that the Porites coral record extends from 2001 to 2009, with an average growth rate of 1.46 cm/year. Statistical analysis shows that the annual coral linear growth declined by 0.015 cm/year while the annual SST declined by 0.013°C/year. Annual SST and the annual linear growth of Porites coral in Ambon Bay are not significantly correlated (r=0.304, n=9, p>0.05). This indicates that annual SST variability does not significantly influence the linear growth of Porites coral from Ambon Bay. It is suggested that sedimentation load, salinity, pH or other environmental factors may affect annual linear coral growth.
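
The significance statement "r=0.304 (n=9, p>0.05)" above can be checked with a standard sketch: with n=9 annual pairs, r=0.304 gives t ≈ 0.84, well below the two-sided 5% critical value t(7) ≈ 2.36, hence no significant correlation. The helper below also shows how r itself would be computed from hypothetical annual SST and growth series.

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_stat(r, n):
    """t statistic for H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

t = t_stat(0.304, 9)   # the paper's reported r and n
print(f"t = {t:.2f} (df = 7)")
```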

  12. Crystallization characteristic and scaling behavior of germanium antimony thin films for phase change memory.

    Science.gov (United States)

    Wu, Weihua; Zhao, Zihan; Shen, Bo; Zhai, Jiwei; Song, Sannian; Song, Zhitang

    2018-04-19

    Amorphous Ge8Sb92 thin films with various thicknesses were deposited by magnetron sputtering. The crystallization kinetics and optical properties of the Ge8Sb92 thin films and related scaling effects were investigated by an in situ thermally induced method and an optical technique. With a decrease in film thickness, the crystallization temperature, crystallization activation energy and data retention ability increased significantly. The changed crystallization behavior may be ascribed to the smaller grain size and larger surface-to-volume ratio as the film thickness decreased. Regardless of whether the state was amorphous or crystalline, the film resistance increased remarkably as the film thickness decreased to 3 nm. The optical band gap calculated from the reflection spectra increases distinctly with a reduction in film thickness. X-ray diffraction patterns confirm that the scaling of the Ge8Sb92 thin film can inhibit the crystallization process and reduce the grain size. The values of exponent indices that were obtained indicate that the crystallization mechanism experiences a series of changes with scaling of the film thickness. The crystallization time was estimated to determine the scaling effect on the phase change speed. The scaling effect on the electrical switching performance of a phase change memory cell was also determined. The current-voltage and resistance-voltage characteristics indicate that phase change memory cells based on a thinner Ge8Sb92 film will exhibit a higher threshold voltage, lower RESET operational voltage and greater pulse width, which implies higher thermal stability, lower power consumption and relatively lower switching velocity.

  13. LANDIS PRO: a landscape model that predicts forest composition and structure changes at regional scales

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Jacob S. Fraser; Frank R. Thompson; Stephen R. Shifley; Martin A. Spetich

    2014-01-01

    LANDIS PRO predicts forest composition and structure changes, incorporating species-, stand-, and landscape-scale processes at regional scales. Species-scale processes include tree growth, establishment, and mortality. Stand-scale processes contain density- and size-related resource competition that regulates self-thinning and seedling establishment. Landscape-scale...

  14. Wind direction variations in the natural wind – A new length scale

    DEFF Research Database (Denmark)

    Johansson, Jens; Christensen, Silas Sverre

    2018-01-01

    During an observation period of e.g. 10 min, the wind direction will differ from its mean direction for short periods of time, and a body of air will pass by from that direction before the direction changes once again. The present paper introduces a new length scale which we have labeled the angular length scale. This length scale expresses the average size of the body of air passing by from any deviation of wind direction away from the mean direction. Using meteorological observations from two different sites under varying conditions we have shown that the size of the body of air relative to the mean size decreases linearly with the deviation from the mean wind direction when the deviation is normalized with the standard deviation of the wind direction. It is shown that this linear variation is independent of the standard deviation of the wind direction, and that the two full-scale data sets follow...

  15. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    NARCIS (Netherlands)

    R.A. Zuidwijk (Rob)

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an…

  16. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
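
As a minimal illustration of the first class above (an unweighted regression line with bootstrap resampling), the sketch below fits a slope to synthetic data and estimates its uncertainty by refitting resampled copies of the point set. The data and noise level are hypothetical, not from any astronomical catalog.

```python
import random
from statistics import mean, stdev

def ols(points):
    """Unweighted least-squares slope and intercept for (x, y) pairs."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in points) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

random.seed(0)
true_slope, true_intercept = 2.0, 1.0
data = [(float(x), true_slope * x + true_intercept + random.gauss(0, 0.5))
        for x in range(20)]

slope, intercept = ols(data)
# Bootstrap: refit on samples drawn with replacement from the original points.
boot_slopes = [ols([random.choice(data) for _ in data])[0] for _ in range(500)]
print(f"slope = {slope:.3f} +/- {stdev(boot_slopes):.3f}")
```

The spread of the bootstrap slopes plays the role of the slope error that would feed into a distance-scale calibration.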

  17. Exact classical scaling formalism for nonreactive processes

    International Nuclear Information System (INIS)

    DePristo, A.E.

    1981-01-01

    A general nonreactive collision system is considered with internal molecular variables (p, r) and/or (I, θ) of arbitrary dimensions and relative translational variables (P, R) of three or fewer dimensions. We derive an exact classical scaling formalism which relates the collisional change in any function of molecular variables directly to the initial values of these variables. The collision dynamics is then described by an explicit function of the initial point in the internal molecular phase space, for a fixed point in the relative translational phase space. In other words, the systematic variation of the internal molecular properties (e.g., actions and average internal kinetic energies) is given as a function of the initial internal action-angle variables. A simple three-term approximation to the exact formalism is derived, the natural variables of which are the internal action I and internal linear momenta p. For the final average internal kinetic energies T, the result is T − T^(0) = α + βp^(0) + γI^(0), where the superscript "0" indicates the initial value. The parameters α, β, and γ in this scaling theory are directly related to the moments of the change in average internal kinetic energy. Utilizing a very limited number of input moments generated from classical trajectory calculations, the scaling can be used to predict the entire distribution of final internal variables as a function of initial internal actions and linear momenta. Initial examples for atom-collinear harmonic oscillator collision systems are presented in detail, with the scaling predictions (e.g., moments and quasiclassical histogram transition probabilities) being generally very good to excellent quantitatively.
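
The three-term scaling law above is linear in its parameters, so α, β, and γ can be recovered from a handful of trajectory moments by ordinary least squares. The sketch below assumes hypothetical values (α=0.5, β=0.2, γ=-0.1) to generate sample (p0, I0, ΔT) points and then solves the normal equations directly; it is an illustration of the fitting step, not the paper's formalism.

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            m[r] = [a - f * c for a, c in zip(m[r], m[i])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return x

# Hypothetical samples obeying dT = alpha + beta*p0 + gamma*I0 exactly.
samples = [(p0, I0, 0.5 + 0.2 * p0 - 0.1 * I0)
           for p0 in (0.0, 1.0, 2.0) for I0 in (0.0, 0.5, 1.0)]

# Normal equations X^T X c = X^T y with design rows (1, p0, I0).
X = [(1.0, p0, I0) for p0, I0, _ in samples]
y = [dT for _, _, dT in samples]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
alpha, beta, gamma = solve3(XtX, Xty)
print(f"alpha={alpha:.3f}, beta={beta:.3f}, gamma={gamma:.3f}")
```

With noisy trajectory moments in place of the exact samples, the same fit would return the least-squares estimates of the three scaling parameters.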

  18. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was carried out in three study areas, both for the accuracy of imperviousness coverage estimates at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques.
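
The ensembling idea above (combining predictions from heterogeneous regressors) can be sketched with two hand-rolled members, a linear fit and a k-nearest-neighbour regressor, averaged with equal weights. The data, the choice of k, and the equal weighting are illustrative assumptions, not the paper's models or weighting scheme.

```python
from statistics import mean

# Hypothetical training pairs (x, y).
train = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.2), (3.0, 2.8), (4.0, 4.1), (5.0, 5.0)]

def ols_predict(x, data):
    """Prediction from a simple least-squares line fit."""
    xs = [a for a, _ in data]
    ys = [b for _, b in data]
    mx, my = mean(xs), mean(ys)
    slope = sum((a - mx) * (b - my) for a, b in data) / \
            sum((a - mx) ** 2 for a in xs)
    return my + slope * (x - mx)

def knn_predict(x, data, k=2):
    """Prediction from the mean of the k nearest training points."""
    nearest = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return mean(b for _, b in nearest)

def ensemble_predict(x, data):
    """Equal-weight average of the two heterogeneous members."""
    return 0.5 * ols_predict(x, data) + 0.5 * knn_predict(x, data)

pred = ensemble_predict(2.5, train)
print(f"ensemble prediction at x=2.5: {pred:.2f}")
```

Real ensembles of this kind typically weight members by cross-validated skill rather than equally.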

  19. The principles and construction of linear colliders

    International Nuclear Information System (INIS)

    Rees, J.

    1986-09-01

    The problems posed to the designers and builders of high-energy linear colliders are discussed. Scaling laws of linear colliders are considered. The problem of attainment of small interaction areas is addressed. The physics of damping rings, which are designed to condense beam bunches in phase space, is discussed. The effect of wake fields on a particle bunch in a linac, particularly the conventional disk-loaded microwave linac structures, are discussed, as well as ways of dealing with those effects. Finally, the SLAC Linear Collider is described. 18 refs., 17 figs

  20. Applications of Data Assimilation to Analysis of the Ocean on Large Scales

    Science.gov (United States)

    Miller, Robert N.; Busalacchi, Antonio J.; Hackert, Eric C.

    1997-01-01

    It is commonplace to begin talks on this topic by noting that oceanographic data are too scarce and sparse to provide complete initial and boundary conditions for large-scale ocean models. Even considering the availability of remotely-sensed data such as radar altimetry from the TOPEX and ERS-1 satellites, a glance at a map of available subsurface data should convince most observers that this is still the case. Data are still too sparse for comprehensive treatment of interannual to interdecadal climate change through the use of models, since the new data sets have not been around for very long. In view of the dearth of data, we must note that the overall picture is changing rapidly. Recently, there have been a number of large scale ocean analysis and prediction efforts, some of which now run on an operational or at least quasi-operational basis, most notably the model-based analyses of the tropical oceans. These programs are modeled on numerical weather prediction. Aside from the success of the global tide models, assimilation of data in the tropics, in support of prediction and analysis of seasonal to interannual climate change, is probably the area of large scale ocean modeling and data assimilation in which the most progress has been made. Climate change is a problem which is particularly suited to advanced data assimilation methods. Linear models are useful, and the linear theory can be exploited. For the most part, the data are sufficiently sparse that implementation of advanced methods is worthwhile. As an example of a large scale data assimilation experiment with a recent extensive data set, we present results of a tropical ocean experiment in which the Kalman filter was used to assimilate three years of altimetric data from Geosat into a coarsely resolved linearized long wave shallow water model. Since nonlinear processes dominate the local dynamic signal outside the tropics, subsurface dynamical quantities cannot be reliably inferred from surface height
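
A Kalman filter like the one used to assimilate the Geosat altimetry reduces, in the scalar case, to blending a forecast and an observation weighted by their error variances. The sketch below is a deliberately minimal one-dimensional analogue with hypothetical numbers and no model error, not the paper's shallow water model.

```python
import random

def kalman_update(x, P, obs, R):
    """One analysis step: blend forecast x (variance P) with an observation."""
    K = P / (P + R)                 # Kalman gain
    return x + K * (obs - x), (1 - K) * P

random.seed(1)
truth = 0.8                         # "true" anomaly (arbitrary units)
x, P = 0.0, 1.0                     # prior state estimate and its variance
R = 0.1                             # observation-error variance

# Assimilate 50 simulated altimeter data; a static state, so no forecast noise.
for _ in range(50):
    obs = truth + random.gauss(0, R ** 0.5)
    x, P = kalman_update(x, P, obs, R)

print(f"analysis = {x:.2f}, variance = {P:.4f} (truth {truth})")
```

The analysis variance P shrinks with each assimilated observation, which is the mechanism that lets sparse altimetry constrain the model state over time.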

  1. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    Science.gov (United States)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2017-09-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—'noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.

  2. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    Science.gov (United States)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2018-06-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—'noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.
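
The signal-versus-noise decomposition described above can be sketched for a single grid point: the forced response is estimated by the ensemble mean and internal variability by the spread across initial-condition members. The 16 synthetic members below, and the imposed change and spread they are drawn from, are hypothetical.

```python
import random
from statistics import mean, stdev

random.seed(7)
forced_change = 12.0   # imposed % change in heavy precipitation (assumed)
# Each member = forced change + a member-specific realization of internal noise.
members = [forced_change + random.gauss(0, 8.0) for _ in range(16)]

signal = mean(members)   # estimate of the forced response
noise = stdev(members)   # estimate of internal variability
print(f"signal = {signal:.1f}%, noise = {noise:.1f}%, S/N = {signal / noise:.2f}")
```

A single member corresponds to one draw from this distribution, which is why it carries only limited information about the forced response.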

  3. Changes in channel morphology over human time scales [Chapter 32]

    Science.gov (United States)

    John M. Buffington

    2012-01-01

    Rivers are exposed to changing environmental conditions over multiple spatial and temporal scales, with the imposed environmental conditions and response potential of the river modulated to varying degrees by human activity and our exploitation of natural resources. Watershed features that control river morphology include topography (valley slope and channel...

  4. Multiple linear regression to develop strength scaled equations for knee and elbow joints based on age, gender and segment mass

    DEFF Research Database (Denmark)

    D'Souza, Sonia; Rasmussen, John; Schwirtz, Ansgar

    2012-01-01

    and valuable ergonomic tool. Objective: To investigate age and gender effects on the torque-producing ability in the knee and elbow in older adults. To create strength scaled equations based on age, gender, upper/lower limb lengths and masses using multiple linear regression. To reduce the number of dependent...... flexors. Results: Males were significantly stronger than females across all age groups. Elbow peak torque (EPT) was better preserved from 60s to 70s whereas knee peak torque (KPT) reduced significantly (PGender, thigh mass and age best...... predicted KPT (R2=0.60). Gender, forearm mass and age best predicted EPT (R2=0.75). Good cross-validation was established for both elbow and knee models. Conclusion: This cross-sectional study of muscle strength created and validated strength scaled equations of EPT and KPT using only gender, segment mass......
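    The strength-scaling equations described above are ordinary multiple linear regressions. As a minimal sketch (synthetic data and hypothetical coefficients, not the study's measurements), a model of the form torque = b0 + b1·gender + b2·segment_mass + b3·age can be fitted with a pure-stdlib normal-equation solve:

```python
# Minimal sketch (synthetic data, hypothetical coefficients) of fitting a
# strength-scaling equation torque = b0 + b1*gender + b2*mass + b3*age
# by ordinary least squares, in the spirit of the abstract's regression.
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Solve the normal equations (X^T X) beta = X^T y."""
    n = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    return solve(XtX, Xty)

random.seed(0)
true_b = [40.0, 25.0, 8.0, -0.9]           # hypothetical coefficients
X = [[1.0, random.choice([0.0, 1.0]),       # intercept, gender (0=F, 1=M)
      random.uniform(5.0, 12.0),            # thigh mass, kg
      random.uniform(60.0, 85.0)]           # age, years
     for _ in range(200)]
y = [sum(b * xi for b, xi in zip(true_b, row)) + random.gauss(0.0, 2.0)
     for row in X]
beta = ols(X, y)
```

    With 200 synthetic subjects the fitted coefficients land close to the generating ones, which is all a scaling equation of this form needs to be useful.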

  5. Scaling Sparse Matrices for Optimization Algorithms

    OpenAIRE

    Gajulapalli Ravindra S; Lasdon Leon S

    2006-01-01

    To iteratively solve large scale optimization problems in various contexts like planning, operations, design etc., we need to generate descent directions that are based on linear system solutions. Irrespective of the optimization algorithm or the solution method employed for the linear systems, ill conditioning introduced by problem characteristics or the algorithm or both needs to be addressed. In [GL01] we used an intuitive heuristic approach in scaling linear systems that improved performan...

  6. Introduction to the Special Issue: Across the horizon: scale effects in global change research.

    Science.gov (United States)

    Gornish, Elise S; Leuzinger, Sebastian

    2015-01-01

    As a result of the increasing speed and magnitude in which habitats worldwide are experiencing environmental change, making accurate predictions of the effects of global change on ecosystems and the organisms that inhabit them has become an important goal for ecologists. Experimental and modelling approaches aimed at understanding the linkages between factors of global change and biotic responses have become numerous and increasingly complex in order to adequately capture the multifarious dynamics associated with these relationships. However, constrained by resources, experiments are often conducted at small spatiotemporal scales (e.g. looking at a plot of a few square metres over a few years) and at low organizational levels (looking at organisms rather than ecosystems) in spite of both theoretical and experimental work that suggests ecological dynamics across scales can be dissimilar. This phenomenon has been hypothesized to occur because the mechanisms that drive dynamics across scales differ. A good example is the effect of elevated CO2 on transpiration. While at the leaf level, transpiration can be reduced, at the stand level, transpiration can increase because leaf area per unit ground area increases. The reported net effect is then highly dependent on the spatiotemporal scale. This special issue considers the biological relevancy inherent in the patterns associated with the magnitude and type of response to changing environmental conditions, across scales. This collection of papers attempts to provide a comprehensive treatment of this phenomenon in order to help develop an understanding of the extent of, and mechanisms involved with, ecological response to global change. Published by Oxford University Press on behalf of the Annals of Botany Company.

  7. Age related neuromuscular changes in sEMG of m. Tibialis Anterior using higher order statistics (Gaussianity & linearity test).

    Science.gov (United States)

    Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K

    2016-08-01

    Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We have used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared it with the experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) adults performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum, Gaussianity and Linearity Test Statistics were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show the loss of motor units was distinguished by the Gaussianity and Linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of 40% loss of motor units with the number of fast fibers halved best correlated with the age-related change observed in the experimental sEMG higher-order statistical features. The simulated aging condition found by this study corresponds with the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.

  8. Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network

    Science.gov (United States)

    Sang, Nong; Zhang, Tianxu

    1997-12-01

    Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time with the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of the method to perform point pattern relaxation matching invariant to rotations and scale changes and the method to perform this matching by the Hopfield neural network. In addition, we show that the method presented can be tolerant to small random error.
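    The invariance property such matchers rely on can be seen in a few lines (a minimal illustration, not the paper's Hopfield relaxation itself): ratios of pairwise distances in a point pattern are unchanged by any rotation, translation, and uniform scaling.

```python
# Minimal sketch (not the paper's method): pairwise-distance *ratios* are
# invariant to rotation, translation and uniform scaling, the property a
# rotation/scale-invariant point-pattern matcher can exploit.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def ratio_signature(pts):
    """All pairwise distances divided by the largest one: a rotation- and
    scale-invariant description of the point pattern."""
    d = sorted(dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
    return [x / d[-1] for x in d]

def transform(pts, angle, scale, tx, ty):
    """Rotate, uniformly scale, and translate a 2-D point set."""
    c, s = math.cos(angle), math.sin(angle)
    return [(scale * (c * x - s * y) + tx, scale * (s * x + c * y) + ty)
            for x, y in pts]

pattern = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0), (4.0, 2.0)]
moved = transform(pattern, angle=0.7, scale=3.5, tx=10.0, ty=-4.0)
sig_a = ratio_signature(pattern)
sig_b = ratio_signature(moved)
```

    The two signatures agree to floating-point precision even though the second pattern has been rotated, scaled by 3.5×, and translated.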

  9. The application of two-step linear temperature program to thermal analysis for monitoring the lipid induction of Nostoc sp. KNUA003 in large scale cultivation.

    Science.gov (United States)

    Kang, Bongmun; Yoon, Ho-Sung

    2015-02-01

    Recently, microalgae have been considered as a renewable feedstock for fuel production because their production is nonseasonal and may take place on nonarable land. Despite all of these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algae productivity and in compositional analysis, especially of the total lipid content. Thus, there is considerable interest in accurate determination of total lipid content during the biotechnological process. For this reason, various high-throughput technologies have been suggested for accurate measurement of the total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of microalgae without a pretreatment. However, these methods have difficulty measuring the total lipid content of wet microalgae obtained from large-scale production. In the present study, thermal analysis performed with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C by Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved in 310-351 °C (HE) and the total lipid content of the wet Nostoc cells cultivated in a raceway. As a result, a linear relationship was found between the HE value and the total lipid content of Nostoc sp. KNUA003; in particular, the linear correlation between the HE value and the total lipid content of the tested microorganism was 98%. Based on this relationship, the total lipid content converted from the heat evolved of wet Nostoc sp. KNUA003 could be used for monitoring its lipid induction in large-scale cultivation. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. High-performance small-scale solvers for linear Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    , with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach...

  11. Leading Educational Change and Improvement at Scale: Some Inconvenient Truths about System Performance

    Science.gov (United States)

    Harris, Alma; Jones, Michelle

    2017-01-01

    The challenges of securing educational change and transformation, at scale, remain considerable. While sustained progress has been made in some education systems (Fullan, 2009; Hargreaves & Shirley, 2009), it generally remains the case that the pathway to large-scale system improvement is far from easy or straightforward. While large-scale…

  12. Identification of the Scale of Changes in Personnel Motivation Techniques at Mechanical-Engineering Enterprises

    Directory of Open Access Journals (Sweden)

    Melnyk Olga G.

    2016-02-01

    Full Text Available The method for identification of the scale of changes in personnel motivation techniques at mechanical-engineering enterprises is based on a structural and logical sequence of relevant stages (identification of the mission, strategy and objectives of the enterprise; forecasting the development of the enterprise business environment; SWOT-analysis of actual motivation techniques; deciding on the scale of changes in motivation techniques; choosing providers for changing personnel motivation techniques; choosing an alternative to changing motivation techniques; implementation of changes in motivation techniques; control over changes in motivation techniques). It has been substantiated that the improved method enables a systematic and analytical justification for management decision-making in this field and the choice of the scale and variant of changes in motivation techniques best suited to the mechanical-engineering enterprise. The method takes past, current and prospective considerations into account. Firstly, the approach is based on considering the past state of the motivational sphere of the mechanical-engineering enterprise; secondly, the method involves identifying the current state of personnel motivation techniques; thirdly, the method incorporates the future through a strategic vision of the enterprise's development as well as forecasts of the development of its business environment. The advantage of the proposed method is that its level of specification may vary depending on the set goals, resource constraints and necessity. Among other things, this method allows integrating various formalized and non-formalized causal relationships in the sphere of personnel motivation at machine-building enterprises and management of the relevant processes.
This creates preconditions for a

  13. Invariant relationships deriving from classical scaling transformations

    International Nuclear Information System (INIS)

    Bludman, Sidney; Kennedy, Dallas C.

    2011-01-01

    Because scaling symmetries of the Euler-Lagrange equations are generally not variational symmetries of the action, they do not lead to conservation laws. Instead, an extension of Noether's theorem reduces the equations of motion to evolutionary laws that prove useful, even if the transformations are not symmetries of the equations of motion. In the case of scaling, symmetry leads to a scaling evolutionary law, a first-order equation in terms of scale invariants, linearly relating kinematic and dynamic degrees of freedom. This scaling evolutionary law appears in dynamical and in static systems. Applied to dynamical central-force systems, the scaling evolutionary equation leads to generalized virial laws, which linearly connect the kinetic and potential energies. Applied to barotropic hydrostatic spheres, the scaling evolutionary equation linearly connects the gravitational and internal energy densities. This implies well-known properties of polytropes, describing degenerate stars and chemically homogeneous nondegenerate stellar cores.
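    The abstract's statement that the scaling evolutionary law "linearly connects the kinetic and potential energies" reduces, for the familiar special case of a power-law central potential, to the time-averaged virial theorem (a standard result, stated here for illustration rather than taken from the paper):

```latex
% Time-averaged virial theorem for a power-law central potential:
% a linear kinetic-potential relation of the kind the abstract describes.
V(r) = k\,r^{n}
\qquad\Longrightarrow\qquad
2\,\langle T \rangle = n\,\langle V \rangle
```

    For the Kepler case n = -1 this gives the familiar 2⟨T⟩ = -⟨V⟩.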

  14. Climatic changes on orbital and sub-orbital time scale recorded by the Guliya ice core in Tibetan Plateau

    Institute of Scientific and Technical Information of China (English)

    姚檀栋; 徐柏青; 蒲健辰

    2001-01-01

    Based on ice core records from the Tibetan Plateau and Greenland, the features and possible causes of climatic changes on orbital and sub-orbital time scales were discussed. Orbital-time-scale climatic change recorded in ice cores from the Tibetan Plateau typically leads that from polar regions, which indicates that climatic change in the Tibetan Plateau might occur earlier than in polar regions. The change in solar radiation is a major factor that dominates climatic change on the orbital time scale. However, climatic events on the sub-orbital time scale occurred later in the Tibetan Plateau than in the Arctic region, indicating a different mechanism. For example, the Younger Dryas and Heinrich events took place earlier in the Greenland ice core record than in the Guliya ice core record. It is reasonable to propose the hypothesis that these climatic events were possibly affected by the Laurentide Ice Sheet. Therefore, ice sheets are critically important to climatic change on the sub-orbital time scale during some ice ages.

  15. Ramp injector scale effects on supersonic combustion

    Science.gov (United States)

    Trebs, Adam

    The combustion field downstream of a 10 degree compression ramp injector has been studied experimentally using wall static pressure measurement, OH-PLIF, and 2 kHz intensified video filtered for OH emission at 320 nm. Nominal test section entrance conditions were Mach 2, 131 kPa static pressure, and 756 K stagnation temperature. The experiment was equipped with a variable-length inlet duct that facilitated varying the boundary layer development length while the injector shock structure in relation to the combustor geometry remained nearly fixed. As the boundary layer within an engine varies with flight condition and does not scale linearly with the physical scale of the engine, the boundary layer scale relative to the mixing structures of the engine becomes relevant to the problem of engine scaling and general engine performance. By varying the boundary layer thickness from 40% of the ramp height to 150% of the ramp height, changes in the combustion flowfield downstream of the injector could be diagnosed. It was found that the flame shape changed, the persistence of the vortex cores was reduced, and combustion efficiency rose as the incident boundary layer grew.

  16. Attaining high luminosity in linear e+e- colliders

    International Nuclear Information System (INIS)

    Palmer, R.B.

    1990-11-01

    The attainment of high luminosity in linear colliders is a complex problem because of the interdependence of the critical parameters. For instance, changing the number of particles per bunch affects the damping ring design and thus the emittance; it affects the wakefields in the linac and thus the momentum spread; the momentum spread affects the final focus design and thus the final β*; but the emittance change also affects the final focus design; and all these come together to determine the luminosity, disruption and beamstrahlung at the intersection. Changing the bunch length, or almost any other parameter, has a similar chain reaction. Dealing with this problem by simple scaling laws is very difficult because one does not know which parameter is going to be critical, and thus which should be held constant. One can only maximize the luminosity by a process of search and iteration. The process can be facilitated with the aid of a computer program. Examples can then be optimized for maximum luminosity, and compared to the optimized solutions with different approaches. This paper discusses these approaches
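    The "search and iteration" the text describes can be sketched with a toy optimizer (the objective below is an entirely hypothetical stand-in for luminosity, not a real collider model: it rewards more particles per bunch and a shorter bunch but penalizes both through a made-up beamstrahlung-like term, so the optimum is an interior trade-off between coupled parameters):

```python
# Toy sketch of iterative parameter search over coupled parameters.
# toy_luminosity is a hypothetical objective, NOT collider physics.
import math

def toy_luminosity(n_particles, sigma_z):
    penalty = math.exp(-n_particles / 5.0 - 1.0 / sigma_z)
    return n_particles / sigma_z * penalty

def coordinate_search(f, x0, y0, step=0.5, shrink=0.5, iters=60):
    """Greedy coordinate search: try +/- steps in each parameter, keep any
    improvement, shrink the step when stuck."""
    x, y, best = x0, y0, f(x0, y0)
    for _ in range(iters):
        improved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            nx, ny = x + dx, y + dy
            if nx > 0 and ny > 0 and f(nx, ny) > best:
                x, y, best, improved = nx, ny, f(nx, ny), True
        if not improved:
            step *= shrink
    return x, y, best

n_best, sz_best, lum_best = coordinate_search(toy_luminosity, 1.0, 3.0)
```

    For this separable toy objective the search converges to the analytic optimum at (5, 1); in a real design study the objective would itself be computed by chaining the damping-ring, linac and final-focus models the text describes.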

  17. Grassland/atmosphere response to changing climate: Coupling regional and local scales

    International Nuclear Information System (INIS)

    Coughenour, M.B.; Kittel, T.G.F.; Pielke, R.A.; Eastman, J.

    1993-10-01

    The objectives of the study were: to evaluate the response of grassland ecosystems to atmospheric change at regional and site scales, and to develop multiscaled modeling systems to relate ecological and atmospheric models with different spatial and temporal resolutions. A menu-driven shell was developed to facilitate use of models at different temporal scales and to facilitate exchange of information between models at different temporal scales. A detailed ecosystem model predicted that C3 temperate grasslands will respond more strongly to elevated CO2 than temperate C4 grasslands in the short term, while a large positive NPP response was predicted for a C4 Kenyan grassland. Long-term climate change scenarios produced either decreases or increases in Colorado plant productivity (NPP) depending on rainfall, but uniform increases in NPP were predicted in Kenya. Elevated CO2 is likely to have little effect on ecosystem carbon storage in Colorado while it will increase carbon storage in Kenya. A synoptic climate classification processor (SCP) was developed to evaluate results of GCM climate sensitivity experiments. Roughly 80% agreement was achieved with manual classifications. Comparison of 1x and 2xCO2 GCM simulations revealed relatively small differences

  18. EDITORIAL: Non-linear and non-Gaussian cosmological perturbations Non-linear and non-Gaussian cosmological perturbations

    Science.gov (United States)

    Sasaki, Misao; Wands, David

    2010-06-01

    In recent years there has been a resurgence of interest in the study of non-linear perturbations of cosmological models. This has been the result of both theoretical developments and observational advances. New theoretical challenges arise at second and higher order due to mode coupling and the need to develop new gauge-invariant variables beyond first order. In particular, non-linear interactions lead to deviations from a Gaussian distribution of primordial perturbations even if initial vacuum fluctuations are exactly Gaussian. These non-Gaussianities provide an important probe of models for the origin of structure in the very early universe. We now have a detailed picture of the primordial distribution of matter from surveys of the cosmic microwave background, notably NASA's WMAP satellite. The situation will continue to improve with future data from the ESA Planck satellite launched in 2009. To fully exploit these data cosmologists need to extend non-linear cosmological perturbation theory beyond the linear theory that has previously been sufficient on cosmological scales. Another recent development has been the realization that large-scale structure, revealed in high-redshift galaxy surveys, could also be sensitive to non-linearities in the primordial curvature perturbation. This focus section brings together a collection of invited papers which explore several topical issues in this subject. We hope it will be of interest to theoretical physicists and astrophysicists alike interested in understanding and interpreting recent developments in cosmological perturbation theory and models of the early universe. Of course it is only an incomplete snapshot of a rapidly developing field and we hope the reader will be inspired to read further work on the subject and, perhaps, fill in some of the missing pieces. This focus section is dedicated to the memory of Lev Kofman (1957-2009), an enthusiastic pioneer of inflationary cosmology and non-Gaussian perturbations.

  19. Effect of cellulosic fiber scale on linear and non-linear mechanical performance of starch-based composites.

    Science.gov (United States)

    Karimi, Samaneh; Abdulkhani, Ali; Tahir, Paridah Md; Dufresne, Alain

    2016-10-01

    Cellulosic nanofibers (NFs) from kenaf bast were used to reinforce glycerol-plasticized thermoplastic starch (TPS) matrices at varying contents (0-10 wt%). The composites were prepared by the casting/evaporation method. Raw fiber (RF) reinforced TPS films were prepared with the same contents and conditions. The aim of the study was to investigate the effects of filler dimension and loading on the linear and non-linear mechanical performance of the fabricated materials. The obtained results clearly demonstrated that the NF-reinforced composites had significantly greater mechanical performance than the RF-reinforced counterparts. This was attributed to the high aspect ratio and nanoscale dimensions of the reinforcing agents, as well as their compatibility with the TPS matrix, resulting in strong fiber/matrix interaction. Tensile strength and Young's modulus increased by 313% and 343%, respectively, with increasing NF content from 0 to 10 wt%. Dynamic mechanical analysis (DMA) revealed an upward trend in the glass transition temperature of amylopectin-rich domains in the composites. The most pronounced change was a +18.5 °C shift in the temperature position of the film reinforced with 8% NF. This finding implied efficient dispersion of the nanofibers in the matrix and their ability to form a network and restrict the mobility of the system. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A Differential Monolithically Integrated Inductive Linear Displacement Measurement Microsystem

    Directory of Open Access Journals (Sweden)

    Matija Podhraški

    2016-03-01

    Full Text Available An inductive linear displacement measurement microsystem realized as a monolithic Application-Specific Integrated Circuit (ASIC) is presented. The system comprises integrated microtransformers as sensing elements, and analog front-end electronics for signal processing and demodulation, both jointly fabricated in a conventional, commercially available four-metal 350-nm CMOS process. The key novelty of the presented system is its full integration, straightforward fabrication, and ease of application, requiring no external light or magnetic field source. Such systems therefore have the possibility of substituting certain conventional position encoder types. The microtransformers are excited by an AC signal in the MHz range. The displacement information is modulated onto the AC signal by a metal grating scale placed over the microsystem, employing a differential measurement principle. Homodyne mixing is used for the demodulation of the scale displacement information, returned by the ASIC as a DC signal in two quadrature channels allowing the determination of the linear position of the target scale. The microsystem design, simulations, and characterization are presented. Various system operating conditions such as frequency, phase, target scale material and distance have been experimentally evaluated. The best results have been achieved at 4 MHz, demonstrating a linear resolution of 20 µm with steel and copper scales, having respective sensitivities of 0.71 V/mm and 0.99 V/mm.
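    How two quadrature DC channels yield position can be sketched in a few lines (a minimal illustration with hypothetical numbers; the grating period and amplitude below are assumptions, not values from the paper): with channels I = A·cos(2πx/P) and Q = A·sin(2πx/P), `atan2` recovers the phase and hence the displacement within one period.

```python
# Minimal sketch (hypothetical period and amplitude): recovering linear
# position within one grating period from two quadrature channels.
import math

PERIOD_UM = 100.0  # assumed grating period, micrometres

def channels_for(x_um, amplitude=0.71, period=PERIOD_UM):
    """Ideal quadrature channel pair for a displacement x_um."""
    arg = 2.0 * math.pi * x_um / period
    return amplitude * math.cos(arg), amplitude * math.sin(arg)

def position_in_period(i_chan, q_chan, period=PERIOD_UM):
    """Phase-demodulate the channel pair back to an in-period position."""
    phase = math.atan2(q_chan, i_chan)          # range -pi..pi
    return (phase % (2.0 * math.pi)) / (2.0 * math.pi) * period

i_chan, q_chan = channels_for(37.0)
x_est = position_in_period(i_chan, q_chan)
```

    Note the channel amplitude cancels in `atan2`, which is why quadrature readout is robust to gain variations; absolute position additionally requires counting whole periods.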

  1. Linear independence of localized magnon states

    International Nuclear Information System (INIS)

    Schmidt, Heinz-Juergen; Richter, Johannes; Moessner, Roderich

    2006-01-01

    At the magnetic saturation field, certain frustrated lattices have a class of states known as 'localized multi-magnon states' as exact ground states. The number of these states scales exponentially with the number N of spins and hence they have a finite entropy also in the thermodynamic limit N → ∞ provided they are sufficiently linearly independent. In this paper, we present rigorous results concerning the linear dependence or independence of localized magnon states and investigate special examples. For large classes of spin lattices, including what we call the orthogonal type and the isolated type, as well as the kagome, the checkerboard and the star lattice, we have proven linear independence of all localized multi-magnon states. On the other hand, the pyrochlore lattice provides an example of a spin lattice having localized multi-magnon states with considerable linear dependence

  2. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
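    The scale-parameter component of the problem can be reproduced in a small Monte Carlo sketch (synthetic data under an assumed latent-index model, not the article's simulations): two groups share the same latent effect of x, but because their residual scales differ, the fitted linear probability model (LPM) slopes differ anyway.

```python
# Monte Carlo sketch (synthetic, assumed latent-index model) of how residual
# scale alone changes LPM coefficients across groups with identical effects.
import random

def lpm_slope(xs, ys):
    """OLS slope of a binary outcome on x: the LPM coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def simulate_group(n, beta, sigma, rng):
    """Binary outcomes from a latent index beta*x + noise(scale sigma)."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ys = [1.0 if beta * x + rng.gauss(0.0, sigma) > 0.0 else 0.0 for x in xs]
    return xs, ys

rng = random.Random(42)
slope_a = lpm_slope(*simulate_group(20000, beta=1.0, sigma=1.0, rng=rng))
slope_b = lpm_slope(*simulate_group(20000, beta=1.0, sigma=2.0, rng=rng))
```

    Both groups have the same latent beta, yet the group with the larger residual scale gets a markedly smaller LPM slope, which is exactly the confound the abstract warns about when comparing such coefficients across groups.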

  3. Non-linear statistical downscaling of present and LGM precipitation and temperatures over Europe

    Directory of Open Access Journals (Sweden)

    M. Vrac

    2007-12-01

    Full Text Available Local-scale climate information is increasingly needed for the study of past, present and future climate changes. In this study we develop a non-linear statistical downscaling method to generate local temperatures and precipitation values from large-scale variables of an Earth System Model of Intermediate Complexity (here CLIMBER). Our statistical downscaling scheme is based on the concept of Generalized Additive Models (GAMs), capturing non-linearities via non-parametric techniques. Our GAMs are calibrated on the present Western Europe climate. For this region, annual GAMs (i.e. models based on 12 monthly values per location) are fitted by combining two types of large-scale explanatory variables: geographical (e.g. topographical information) and physical (i.e. entirely simulated by the CLIMBER model).

    To evaluate the adequacy of the non-linear transfer functions fitted on the present Western European climate, they are applied to different spatial and temporal large-scale conditions. Local projections for present North America and Northern Europe climates are obtained and compared to local observations. This partially addresses the issue of spatial robustness of our transfer functions by answering the question "does our statistical model remain valid when applied to large-scale climate conditions from a region different from the one used for calibration?". To assess their temporal performance, local projections for the Last Glacial Maximum period are derived and compared to local reconstructions and General Circulation Model outputs.

    Our downscaling methodology performs adequately for the Western Europe climate. Concerning the spatial and temporal evaluations, it does not behave as well for the North American and Northern European climates, because the calibration domain may be too different from the targeted regions. The physical explanatory variables alone are not capable of downscaling realistic values. However, the inclusion of
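    The core idea of a non-parametric transfer function can be illustrated with a minimal stand-in for the paper's GAM smoothers (synthetic data and an assumed relationship, not CLIMBER output): calibrate a binned-mean mapping from a large-scale variable to a local one, then apply it to new large-scale values.

```python
# Minimal stand-in (synthetic data) for a non-parametric transfer function:
# a binned-mean (regressogram) smoother from a large-scale predictor to a
# local variable, in the spirit of a single GAM smoother term.
import random

def fit_binned(xs, ys, nbins=20):
    """Calibrate a piecewise-constant transfer function by binned means."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / nbins
    sums = [0.0] * nbins
    counts = [0] * nbins
    for x, y in zip(xs, ys):
        b = min(int((x - lo) / width), nbins - 1)
        sums[b] += y
        counts[b] += 1
    means = [s / c if c else None for s, c in zip(sums, counts)]
    def predict(x):
        b = min(max(int((x - lo) / width), 0), nbins - 1)
        return means[b]
    return predict

rng = random.Random(1)
# synthetic "large-scale temperature" and a non-linear local response
xs = [rng.uniform(-5.0, 25.0) for _ in range(5000)]
ys = [0.05 * x * x + 2.0 + rng.gauss(0.0, 0.3) for x in xs]
transfer = fit_binned(xs, ys)
```

    The calibrated function recovers the non-linear shape without assuming a parametric form; a real GAM replaces the crude bins with smooth spline terms and sums one such term per explanatory variable.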

  4. Linear and Differential Ion Mobility Separations of Middle-Down Proteoforms

    DEFF Research Database (Denmark)

    Garabedian, Alyssa; Baird, Matthew A; Porter, Jacob

    2018-01-01

    . Separations using traveling-wave (TWIMS) and/or involving various time scales and electrospray ionization source conditions are similar (with lower resolution for TWIMS), showing the transferability of results across linear IMS instruments. The linear IMS and FAIMS dimensions are substantially orthogonal...

  5. Vanilla Technicolor at Linear Colliders

    DEFF Research Database (Denmark)

    T. Frandsen, Mads; Jarvinen, Matti; Sannino, Francesco

    2011-01-01

    We analyze the reach of Linear Colliders (LCs) for models of dynamical electroweak symmetry breaking. We show that LCs can efficiently test the compositeness scale, identified with the mass of the new spin-one resonances, up to the maximum energy in the center of mass of the colliding leptons. In ...

  6. Comments on a time-dependent version of the linear-quadratic model

    International Nuclear Information System (INIS)

    Tucker, S.L.; Travis, E.L.

    1990-01-01

    The accuracy and interpretation of the 'LQ + time' model are discussed. Evidence is presented, based on data in the literature, that this model does not accurately describe the changes in isoeffect dose occurring with protraction of the overall treatment time during fractionated irradiation of the lung. This lack of fit of the model explains, in part, the surprisingly large values of γ/α that have been derived from experimental lung data. The large apparent time factors for lung suggested by the model are also partly explained by the fact that γT/α, despite having units of dose, actually measures the influence of treatment time on the effect scale, not the dose scale, and is shown to consistently overestimate the change in total dose. The unusually high values of α/β that have been derived for lung using the model are shown to be influenced by the method by which the model was fitted to data. Reanalyses of the data using a more statistically valid regression procedure produce estimates of α/β more typical of those usually cited for lung. Most importantly, published isoeffect data from lung indicate that the true deviation from the linear-quadratic (LQ) model is nonlinear in time, instead of linear, and also depends on other factors such as the effect level and the size of dose per fraction. Thus, the authors do not advocate the use of the 'LQ + time' expression as a general isoeffect model. (author). 32 refs.; 3 figs.; 1 tab

  7. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Science.gov (United States)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
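    The correlation measure underlying such matchers can be shown in a few lines (a minimal sketch of zero-mean normalized cross-correlation, not MOCC's multiscale oriented machinery): NCC is invariant to affine intensity changes (gain and offset), which is why correlation scores are a robust similarity measure; handling rotation and scale changes requires the additional structure the abstract describes.

```python
# Minimal sketch: zero-mean normalized cross-correlation (NCC) between two
# equal-size patches, invariant to intensity gain and offset. MOCC's
# multiscale, oriented machinery (not shown) adds rotation/scale handling.
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

patch = [12.0, 40.0, 7.0, 55.0, 23.0, 31.0]
brighter = [2.0 * v + 30.0 for v in patch]   # gain + offset change
```

    A gain-and-offset change leaves the NCC score at exactly 1, while genuinely different patches score lower.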

  8. Anaerobic degradation of linear alkylbenzene sulfonate

    DEFF Research Database (Denmark)

    Mogensen, Anders Skibsted; Haagensen, Frank; Ahring, Birgitte Kiær

    2003-01-01

    Linear alkylbenzene sulfonate (LAS) found in wastewater is removed in the wastewater treatment facilities by sorption and aerobic biodegradation. The anaerobic digestion of sewage sludge has not been shown to contribute to the removal. The concentration of LAS based on dry matter typically...... increases during anaerobic stabilization due to transformation of easily degradable organic matter. Hence, LAS is regarded as resistant to biodegradation under anaerobic conditions. We present data from a lab-scale semi-continuously stirred tank reactor (CSTR) spiked with linear dodecylbenzene sulfonate (C...

  9. A new type of change blindness: smooth, isoluminant color changes are monitored on a coarse spatial scale.

    Science.gov (United States)

    Goddard, Erin; Clifford, Colin W G

    2013-04-22

    Attending selectively to changes in our visual environment may help filter less important, unchanging information within a scene. Here, we demonstrate that color changes can go unnoticed even when they occur throughout an otherwise static image. The novelty of this demonstration is that it does not rely upon masking by a visual disruption or stimulus motion, nor does it require the change to be very gradual and restricted to a small section of the image. Using a two-interval, forced-choice change-detection task and an odd-one-out localization task, we showed that subjects were slowest to respond and least accurate (implying that change was hardest to detect) when the color changes were isoluminant, smoothly varying, and asynchronous with one another. This profound change blindness offers new constraints for theories of visual change detection, implying that, in the absence of transient signals, changes in color are typically monitored at a coarse spatial scale.

  10. Comparison of linear and non-linear models for the adsorption of fluoride onto geo-material: limonite.

    Science.gov (United States)

    Sahin, Rubina; Tapadia, Kavita

    2015-01-01

    The three widely used isotherms Langmuir, Freundlich and Temkin were examined in an experiment using fluoride (F⁻) ion adsorption on a geo-material (limonite) at four different temperatures by linear and non-linear models. Linear and non-linear regression models were compared in selecting the optimum isotherm for the experimental results. The coefficient of determination, r², was used to select the best theoretical isotherm. The four Langmuir linear equations (1, 2, 3, and 4) are discussed. Langmuir isotherm parameters obtained from the four Langmuir linear equations differed from one another, but they were identical when obtained with the non-linear model. Langmuir-2 had the highest coefficient of determination (r² = 0.99) compared to the other Langmuir linear equations (1, 3 and 4) in linear form, whereas, for the non-linear model, Langmuir-4 fitted best among all the isotherms, again with the highest coefficient of determination (r² = 0.99). The results showed that the non-linear model may be a better way to obtain the parameters. In the present work, the thermodynamic parameters show that the adsorption of fluoride onto limonite is spontaneous (ΔG < 0). Scanning electron microscope and X-ray diffraction images also confirm the adsorption of F⁻ ions onto limonite. The isotherm and kinetic study reveals that limonite can be used as an adsorbent for fluoride removal. In the future, large-scale fluoride-removal technology could be developed using limonite, which is cost-effective, eco-friendly and easily available in the study area.
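
    The contrast between linearized and direct fitting can be illustrated with a sketch (hypothetical parameter values; the Langmuir-1 linearization Ce/qe = Ce/qm + 1/(KL·qm), fitted by ordinary least squares):

```python
# Hypothetical Langmuir parameters for illustration: qm (capacity), KL (affinity).
qm_true, KL_true = 10.0, 0.5
Ce = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]                   # equilibrium concentrations
qe = [qm_true * KL_true * c / (1 + KL_true * c) for c in Ce]  # exact Langmuir data

# Langmuir-1 linearization: Ce/qe = (1/qm)*Ce + 1/(KL*qm), a straight line in Ce.
x, y = Ce, [c / q for c, q in zip(Ce, qe)]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

qm_est = 1.0 / slope
KL_est = slope / intercept
print(qm_est, KL_est)   # recovers approximately 10.0 and 0.5 on noise-free data
```

    On noisy data the four linearizations weight the errors differently, which is why their parameter estimates disagree while a direct non-linear fit yields a single answer, consistent with the abstract's conclusion.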

  11. Extreme daily precipitation in Western Europe with climate change at appropriate spatial scales

    NARCIS (Netherlands)

    Booij, Martijn J.

    2002-01-01

    Extreme daily precipitation for the current and changed climate at appropriate spatial scales is assessed. This is done in the context of the impact of climate change on flooding in the river Meuse in Western Europe. The objective is achieved by determining and comparing extreme precipitation from

  12. HESS Opinions: Linking Darcy's equation to the linear reservoir

    Science.gov (United States)

    Savenije, Hubert H. G.

    2018-03-01

    In groundwater hydrology, two simple linear equations exist describing the relation between groundwater flow and the gradient driving it: Darcy's equation and the linear reservoir. Both equations are empirical and straightforward, but work at different scales: Darcy's equation at the laboratory scale and the linear reservoir at the watershed scale. Although at first sight they appear similar, it is not trivial to upscale Darcy's equation to the watershed scale without detailed knowledge of the structure or shape of the underlying aquifers. This paper shows that these two equations, combined by the water balance, are indeed identical provided there is equal resistance in space for water entering the subsurface network. This implies that groundwater systems make use of an efficient drainage network, a mostly invisible pattern that has evolved over geological timescales. This drainage network provides equally distributed resistance for water to access the system, connecting the active groundwater body to the stream, much like a leaf is organized to provide all stomata access to moisture at equal resistance. As a result, the timescale of the linear reservoir appears to be inversely proportional to Darcy's conductance, the proportionality being the product of the porosity and the resistance to entering the drainage network. The main question remaining is which physical law lies behind pattern formation in groundwater systems, evolving in a way that resistance to drainage is constant in space. But that is a fundamental question that is equally relevant for understanding the hydraulic properties of leaf veins in plants or of blood veins in animals.
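
    Side by side, the two empirical laws discussed here are (simplified one-dimensional notation, not the paper's exact formulation):

```latex
q = -k\,\frac{\partial h}{\partial x} \quad \text{(Darcy, laboratory scale)}
\qquad
\frac{\mathrm{d}S}{\mathrm{d}t} = -\frac{S}{\tau} \quad \text{(linear reservoir, watershed scale)}
```

    with the paper's central result that τ ∝ 1/k, the proportionality factor being the product of the porosity and the resistance to entering the drainage network.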

  13. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and of Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
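
    One common way to realize such a smooth join is to replace each broken-stick covariate max(0, x − c) with a hyperbola that rounds off the kink; a sketch (illustrative form, the models cited in the abstract parameterize the hyperbola differently):

```python
import math

def hyperbolic_hinge(x, c, gamma):
    """Smooth stand-in for the broken-stick covariate max(0, x - c).

    The curvature parameter gamma controls how gradually the slope
    changes near the join point c; as gamma -> 0 the hyperbola
    collapses onto the sharp hinge.
    """
    t = x - c
    return 0.5 * (t + math.sqrt(t * t + gamma * gamma))

# Far from the join point the hyperbola matches the two linear segments:
print(hyperbolic_hinge(10.0, 5.0, 0.1))  # ~5.0 (slope-1 regime)
print(hyperbolic_hinge(0.0, 5.0, 0.1))   # ~0.0 (slope-0 regime)
```

    A regression spline then uses one such covariate per join point, with gamma estimated by non-linear least squares, which is the "degree of curvature" referred to above.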

  14. The Study of Non-Linear Acceleration of Particles during Substorms Using Multi-Scale Simulations

    International Nuclear Information System (INIS)

    Ashour-Abdalla, Maha

    2011-01-01

    To understand particle acceleration during magnetospheric substorms we must consider the problem on multiple scales, ranging from large-scale changes in the entire magnetosphere to the microphysics of wave-particle interactions. In this paper we present two examples that demonstrate the complexity of substorm particle acceleration and its multi-scale nature. The first substorm provided us with an excellent example of ion acceleration. On March 1, 2008, four THEMIS spacecraft were in a line extending from 8 R_E to 23 R_E in the magnetotail during a very large substorm during which ions were accelerated to >500 keV. We used a combination of global magnetohydrodynamic and large-scale kinetic simulations to model the ion acceleration and found that the ions gained energy by non-adiabatic trajectories across the substorm electric field in a narrow region extending across the magnetotail between x = -10 R_E and x = -15 R_E. In this strip, called the 'wall region', the ions move rapidly in azimuth and gain hundreds of keV. In the second example we studied the acceleration of electrons associated with a pair of dipolarization fronts during a substorm on February 15, 2008. During this substorm three THEMIS spacecraft were grouped in the near-Earth magnetotail (x ∼ -10 R_E) and observed electron acceleration of >100 keV accompanied by intense plasma waves. We used the MHD simulations and analytic theory to show that adiabatic motion (betatron and Fermi acceleration) was insufficient to account for the electron acceleration and that kinetic processes associated with the plasma waves were important.

  15. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C. [Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Hine, N. D. M. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Haynes, P. D. [Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Thomas Young Centre for Theory and Simulation of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent in the TDDFT calculations is highlighted, making it necessary to treat large systems, which are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
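
    The preconditioned conjugate gradient solver mentioned above shares its skeleton with textbook CG; a minimal, unpreconditioned sketch on a toy symmetric positive definite system (illustrative only, unrelated to the authors' implementation):

```python
def cg(matvec, b, x0, iters=50, tol=1e-12):
    """Minimal (unpreconditioned) conjugate gradient for Ax = b,
    with A symmetric positive definite supplied as a mat-vec."""
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(x))]   # initial residual
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
x = cg(lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A],
       b=[1.0, 2.0], x0=[0.0, 0.0])
print(x)  # approximately the solution of Ax = b, i.e. (1/11, 7/11)
```

    A preconditioner, as used in the work above, simply applies an approximate inverse to the residual at each step to accelerate this same iteration.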

  16. Study on TVD parameters sensitivity of a crankshaft using multiple scale and state space method considering quadratic and cubic non-linearities

    Directory of Open Access Journals (Sweden)

    R. Talebitooti

    In this paper the effect of quadratic and cubic non-linearities of the system consisting of the crankshaft and torsional vibration damper (TVD) is taken into account. The TVD consists of a non-linear elastomer material used for controlling the torsional vibration of the crankshaft. The method of multiple scales is used to solve the governing equations of the system. Meanwhile, the frequency response of the system for both harmonic and sub-harmonic resonances is extracted. In addition, the effects of detuning parameters and other dimensionless parameters for a case of harmonic resonance are investigated. Moreover, the external forces, including both inertia and gas forces, are simultaneously applied to the model. Finally, in order to study the effectiveness of the parameters, the dimensionless governing equations of the system are solved using the state space method. Then, the effects of the torsional damper as well as all corresponding parameters of the system are discussed.

  17. Photogeologic study of small-scale linear features near a potential nuclear-waste repository site at Yucca Mountain, southern Nye County, Nevada

    International Nuclear Information System (INIS)

    Throckmorton, C.K.

    1987-01-01

    Linear features were mapped from 1:2400-scale aerial photographs of the northern half of the potential underground nuclear-waste repository site at Yucca Mountain by means of a Kern PG 2 stereoplotter. These features were thought to be the expression of fractures at the ground surface (fracture traces), and were mapped in the caprock, upper lithophysal, undifferentiated lower lithophysal and hackly units of the Tiva Canyon Member of the Miocene Paintbrush Tuff. To determine if the linear features corresponded to fracture traces observed in the field, stations (areas) were selected on the map where the traces were both abundant and located solely within one unit. These areas were visited in the field, where fracture-trace bearings and fracture-trace lengths were recorded. Additional data on fracture-trace length and fracture abundance, obtained from ground-based studies of cleared pavements located within the study area were used to help evaluate data collected for this study. 16 refs., 4 figs., 2 tabs

  18. Formulating and testing a method for perturbing precipitation time series to reflect anticipated climatic changes

    DEFF Research Database (Denmark)

    Sørup, Hjalte Jomo Danielsen; Georgiadis, Stylianos; Gregersen, Ida Bülow

    2017-01-01

    Urban water infrastructure has very long planning horizons, and planning is thus very dependent on reliable estimates of the impacts of climate change. Many urban water systems are designed using time series with a high temporal resolution. To assess the impact of climate change on these systems......, similarly high-resolution precipitation time series for future climate are necessary. Climate models cannot at their current resolutions provide these time series at the relevant scales. Known methods for stochastic downscaling of climate change to urban hydrological scales have known shortcomings...... in constructing realistic climate-changed precipitation time series at the sub-hourly scale. In the present study we present a deterministic methodology to perturb historical precipitation time series at the minute scale to reflect non-linear expectations to climate change. The methodology shows good skill...

  19. The Non-linear Trajectory of Change in Play Profiles of Three Children in Psychodynamic Play Therapy.

    Science.gov (United States)

    Halfon, Sibel; Çavdar, Alev; Orsucci, Franco; Schiepek, Gunter K; Andreassi, Silvia; Giuliani, Alessandro; de Felice, Giulio

    2016-01-01

    Aim: Even though there is substantial evidence that play based therapies produce significant change, the specific play processes in treatment remain unexamined. For that purpose, processes of change in long-term psychodynamic play therapy are assessed through a repeated systematic assessment of three children's "play profiles," which reflect patterns of organization among play variables that contribute to play activity in therapy, indicative of the children's coping strategies, and an expression of their internal world. The main aims of the study are to investigate the kinds of play profiles expressed in treatment, and to test whether there is emergence of new and more adaptive play profiles using dynamic systems theory as a methodological framework. Methods and Procedures: Each session from the long-term psychodynamic treatment (mean number of sessions = 55) of three 6-year-old good outcome cases presenting with Separation Anxiety was recorded, transcribed and coded using items from the Children's Play Therapy Instrument (CPTI), created to assess the play activity of children in psychotherapy, generating discrete and measurable units of play activity arranged along a continuum of four play profiles: "Adaptive," "Inhibited," "Impulsive," and "Disorganized." The play profiles were clustered through the K-means algorithm, generating seven discrete states characterizing the course of treatment, and the transitions between these states were analyzed by Markov transition matrix, Recurrence Quantification Analysis (RQA) and odds ratios comparing the first and second halves of psychotherapy. Results: The Markov transitions between the states scaled almost perfectly and also showed the ergodicity of the system, meaning that the child can reach any state or shift to another one in play. The RQA and odds ratios showed two trends of change, first concerning the decrease in the use of "less adaptive" strategies, second regarding the reduction of play interruptions. Conclusion
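
    The first-order transition analysis described above can be sketched in a few lines (a toy illustration of estimating a Markov transition matrix from a coded state sequence; the state labels are hypothetical):

```python
from collections import Counter

def transition_matrix(states, n_states):
    """Row-stochastic Markov transition matrix estimated from a sequence
    of observed discrete states (a sketch of the kind of first-order
    analysis described in the abstract, not the authors' code)."""
    counts = Counter(zip(states, states[1:]))   # count consecutive pairs
    rows = []
    for i in range(n_states):
        total = sum(counts[(i, j)] for j in range(n_states))
        rows.append([counts[(i, j)] / total if total else 0.0
                     for j in range(n_states)])
    return rows

seq = [0, 0, 1, 2, 1, 0, 1, 1, 2, 0]   # toy sequence of play-profile states
P = transition_matrix(seq, 3)
print(P[0])  # transition probabilities out of state 0
```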

  20. The SLAC linear collider

    International Nuclear Information System (INIS)

    Phinney, N.

    1992-01-01

    The SLAC Linear Collider has begun a new era of operation with the SLD detector. During 1991 there was a first engineering run for the SLD in parallel with machine improvements to increase luminosity and reliability. For the 1992 run, a polarized electron source was added and more than 10,000 Zs with an average of 23% polarization have been logged by the SLD. This paper discusses the performance of the SLC in 1991 and 1992 and the technical advances that have produced higher luminosity. Emphasis will be placed on issues relevant to future linear colliders such as producing and maintaining high current, low emittance beams and focusing the beams to the micron scale for collisions. (Author) tab., 2 figs., 18 refs

  1. Non-linear, non-monotonic effect of nano-scale roughness on particle deposition in absence of an energy barrier: Experiments and modeling

    Science.gov (United States)

    Jin, Chao; Glawdel, Tomasz; Ren, Carolyn L.; Emelko, Monica B.

    2015-12-01

    Deposition of colloidal- and nano-scale particles on surfaces is critical to numerous natural and engineered environmental, health, and industrial applications ranging from drinking water treatment to semi-conductor manufacturing. Nano-scale surface roughness-induced hydrodynamic impacts on particle deposition were evaluated in the absence of an energy barrier to deposition in a parallel plate system. A non-linear, non-monotonic relationship between deposition surface roughness and particle deposition flux was observed and a critical roughness size associated with minimum deposition flux or “sag effect” was identified. This effect was more significant for nanoparticles (<1 μm) than for colloids and was numerically simulated using a Convective-Diffusion model and experimentally validated. Inclusion of flow field and hydrodynamic retardation effects explained particle deposition profiles better than when only the Derjaguin-Landau-Verwey-Overbeek (DLVO) force was considered. This work provides 1) a first comprehensive framework for describing the hydrodynamic impacts of nano-scale surface roughness on particle deposition by unifying hydrodynamic forces (using the most current approaches for describing flow field profiles and hydrodynamic retardation effects) with appropriately modified expressions for DLVO interaction energies, and gravity forces in one model and 2) a foundation for further describing the impacts of more complicated scales of deposition surface roughness on particle deposition.

  2. Strength and reversibility of stereotypes for a rotary control with linear scales.

    Science.gov (United States)

    Chan, Alan H S; Chan, W H

    2008-02-01

    Using real mechanical controls, this experiment studied strength and reversibility of direction-of-motion stereotypes and response times for a rotary control with horizontal and vertical scales. Thirty-eight engineering undergraduates (34 men and 4 women) ages 23 to 47 years (M=29.8, SD=7.7) took part in the experiment voluntarily. The effects of instruction of change of pointer position and control plane on movement compatibility were analyzed with precise quantitative measures of strength and a reversibility index of stereotype. Comparisons of the strength and reversibility values of these two configurations with those of rotary control-circular display, rotary control-digital counter, four-way lever-circular display, and four-way lever-digital counter were made. The results of this study provided significant implications for the industrial design of control panels for improved human performance.

  3. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings

    Energy Technology Data Exchange (ETDEWEB)

    Pavanello, Michele [Department of Chemistry, Rutgers University, Newark, New Jersey 07102-1811 (United States); Van Voorhis, Troy [Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139-4307 (United States); Visscher, Lucas [Amsterdam Center for Multiscale Modeling, VU University, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Neugebauer, Johannes [Theoretische Organische Chemie, Organisch-Chemisches Institut der Westfaelischen Wilhelms-Universitaet Muenster, Corrensstrasse 40, 48149 Muenster (Germany)

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  4. Fast Solvers for Dense Linear Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kauers, Manuel [Research Institute for Symbolic Computation (RISC), Altenbergerstrasse 69, A4040 Linz (Austria)

    2008-10-15

    It appears that large-scale calculations in particle physics often require solving systems of linear equations with rational number coefficients exactly. If classical Gaussian elimination is applied to a dense system, the time needed to solve such a system grows exponentially in the size of the system. In this tutorial paper, we present a standard technique from computer algebra that avoids this exponential growth: homomorphic images. Using this technique, big dense linear systems can be solved in a much more reasonable time than using Gaussian elimination over the rationals.
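
    The homomorphic-image idea can be sketched as follows: solve the system modulo a prime, where coefficients never grow, then recover the rational solution by rational reconstruction (a toy single-prime sketch; production codes combine several machine-word primes via the Chinese remainder theorem):

```python
def solve_mod(A, b, p):
    """Gauss-Jordan elimination over GF(p); A square with unique solution mod p."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [b[i] % p] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)          # modular inverse (Python 3.8+)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % p for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

def rational_reconstruct(a, p):
    """Recover num/den with |num|, den <= sqrt(p/2) from a mod p (Wang's method)."""
    bound = int((p / 2) ** 0.5)
    r0, r1, t0, t1 = p, a % p, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1

A = [[2, 1], [1, 3]]
b = [1, 2]              # exact solution: x = 1/5, y = 3/5
p = 10**9 + 7
xs = solve_mod(A, b, p)
print([rational_reconstruct(v, p) for v in xs])  # [(1, 5), (3, 5)]
```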

  5. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    Directory of Open Access Journals (Sweden)

    Wang Hao

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  6. Reflections on the nature of non-linear responses of the climate to forcing

    Science.gov (United States)

    Ditlevsen, Peter

    2017-04-01

    On centennial to multi-millennial time scales the paleoclimatic record shows that climate responds in a very non-linear way to the external forcing. Perhaps most puzzling is the change in glacial period duration at the Middle Pleistocene Transition. From a dynamical systems perspective, this could be a change in frequency locking between the orbital forcing and the climatic response or it could be a non-linear resonance phenomenon. In both cases the climate system shows a non-trivial oscillatory behaviour. From the records it seems that this behaviour can be described by an effective dynamics on a low-dimensional slow manifold. These different possible dynamical behaviours will be discussed. References: Arianna Marchionne, Peter Ditlevsen, and Sebastian Wieczorek, "Three types of nonlinear resonances", arXiv:1605.00858 Peter Ashwin and Peter Ditlevsen, "The middle Pleistocene transition as a generic bifurcation on a slow manifold", Climate Dynamics, 45, 2683, 2015. Peter D. Ditlevsen, "The bifurcation structure and noise assisted transitions in the Pleistocene glacial cycles", Paleoceanography, 24, PA3204, 2009

  7. A linear graph for digoxin radioimmunoassay

    International Nuclear Information System (INIS)

    Smith, S.E.; Richter, A.

    1975-01-01

    The determination of drug or hormone concentrations by radioimmunoassay involves interpolation of values for radioisotope counts within standard curves, a technique which requires some dexterity in curve drawing and which results in some inaccuracy in practice. Most of the procedures designed to overcome these difficulties are complex and time-consuming. In radioimmunoassays involving saturation of the antibody-binding sites a special case exists in that the bound radioactivity is directly proportional to the specific activity of the ligand in the system. Thus a graph of the ratio of radioactivity bound in the absence to that in the presence of added non-radioactive ligand is linear against the concentration of added ligand (Hales, C.N. and Randle, P.J., 1963, Biochem. J., vol. 88, 137). A description is given of a simple and convenient modification of their method, and its application to the routine clinical determination of digoxin using a commercial kit (Lanoxitest β digoxin radioimmunoassay kit, Wellcome Reagents Ltd.). Specially constructed graph paper, which yields linearity with standard solutions, was designed so that it could be used directly without data transformation. The specific activity function appears as the upper arithmetical horizontal scale; corresponding values of the concentration of non-radioactive ligand in the solution added were individually calculated and appear on the lower scale opposite the appropriate values of the upper scale. The linearity of the graphs obtained confirmed that binding of digoxin was approximately constant through the range of clinical concentrations tested (0.5 to 8 ng/ml), although binding declined slightly at higher concentrations. (U.K.)
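
    The proportionality argument above can be illustrated numerically (hypothetical assay constants chosen for the example; the point is that the ratio B0/B is linear in the added ligand concentration):

```python
# Hypothetical saturation assay: a fixed amount of tracer S is mixed with
# added non-radioactive ligand C; bound counts are proportional to the
# specific activity S / (S + C).
S = 2.0          # tracer amount (arbitrary units)
k = 1000.0       # proportionality constant (counts)

def bound_counts(C):
    return k * S / (S + C)

B0 = bound_counts(0.0)   # counts with no added cold ligand
ratios = [B0 / bound_counts(C) for C in (0.0, 1.0, 2.0, 4.0)]
print(ratios)   # approximately [1.0, 1.5, 2.0, 3.0]: B0/B = 1 + C/S, linear in C
```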

  8. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    International Nuclear Information System (INIS)

    Gene Golub; Kwok Ko

    2009-01-01

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of the previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
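
    A minimal sketch of the Hermitian/skew-Hermitian splitting (HSS) iteration on a toy 2×2 positive definite system (the splitting parameter alpha and the example matrix are illustrative choices, not taken from the cited work):

```python
def solve2(M, v):
    """Direct solve of a 2x2 system by Cramer's rule (illustration only)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * v[0] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

# A toy positive definite, non-symmetric system Ax = b.
A = [[2.0, 1.0], [-1.0, 2.0]]
b = [3.0, 1.0]                    # exact solution x = (1, 1)

# Split A = H + S into Hermitian and skew-Hermitian parts.
H = [[2.0, 0.0], [0.0, 2.0]]      # (A + A^T) / 2
S = [[0.0, 1.0], [-1.0, 0.0]]     # (A - A^T) / 2
alpha = 1.0                       # positive shift parameter

x = [0.0, 0.0]
for _ in range(60):
    # Half-step 1: (alpha*I + H) x_half = (alpha*I - S) x + b
    rhs = [alpha * x[0] - (S[0][0] * x[0] + S[0][1] * x[1]) + b[0],
           alpha * x[1] - (S[1][0] * x[0] + S[1][1] * x[1]) + b[1]]
    x_half = solve2([[alpha + H[0][0], H[0][1]],
                     [H[1][0], alpha + H[1][1]]], rhs)
    # Half-step 2: (alpha*I + S) x_new = (alpha*I - H) x_half + b
    rhs = [alpha * x_half[0] - (H[0][0] * x_half[0] + H[0][1] * x_half[1]) + b[0],
           alpha * x_half[1] - (H[1][0] * x_half[0] + H[1][1] * x_half[1]) + b[1]]
    x = solve2([[alpha + S[0][0], S[0][1]],
                [S[1][0], alpha + S[1][1]]], rhs)

print(x)  # converges to approximately [1.0, 1.0]
```

    Each half-step only requires solving a shifted Hermitian or shifted skew-Hermitian system, which is the practical appeal of the splitting for large sparse problems.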

  9. Linear Subspace Ranking Hashing for Cross-Modal Retrieval.

    Science.gov (United States)

    Li, Kai; Qi, Guo-Jun; Ye, Jun; Hua, Kien A

    2017-09-01

    Hashing has attracted a great deal of research in recent years due to its effectiveness for the retrieval and indexing of large-scale high-dimensional multimedia data. In this paper, we propose a novel ranking-based hashing framework that maps data from different modalities into a common Hamming space where the cross-modal similarity can be measured using Hamming distance. Unlike existing cross-modal hashing algorithms where the learned hash functions are binary space partitioning functions, such as the sign and threshold function, the proposed hashing scheme takes advantage of a new class of hash functions closely related to rank correlation measures which are known to be scale-invariant, numerically stable, and highly nonlinear. Specifically, we jointly learn two groups of linear subspaces, one for each modality, so that features' ranking orders in different linear subspaces maximally preserve the cross-modal similarities. We show that the ranking-based hash function has a natural probabilistic approximation which transforms the original highly discontinuous optimization problem into one that can be efficiently solved using simple gradient descent algorithms. The proposed hashing framework is also flexible in the sense that the optimization procedures are not tied to any specific form of loss function, which is typical for existing cross-modal hashing methods; rather, we can flexibly accommodate different loss functions with minimal changes to the learning steps. We demonstrate through extensive experiments on four widely used real-world multimodal datasets that the proposed cross-modal hashing method can achieve competitive performance against several state-of-the-art methods with only moderate training and testing time.
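
    The scale invariance of ranking-based hash functions can be illustrated with a toy sketch (hand-picked projection directions; the paper learns the subspaces jointly across modalities):

```python
# A toy ranking-based hash: project a feature vector onto the rows of a
# small linear subspace and keep only the index of the largest projection.
# Rank orders are unchanged by positive rescaling of the input, which
# illustrates the scale invariance the abstract refers to.
subspace = [[1.0, 0.0, -1.0],
            [0.5, 1.0, 0.5],
            [-1.0, 1.0, 0.0]]

def ranking_hash(x):
    projections = [sum(w * v for w, v in zip(row, x)) for row in subspace]
    return max(range(len(projections)), key=lambda i: projections[i])

x = [2.0, -1.0, 0.5]
print(ranking_hash(x), ranking_hash([10.0 * v for v in x]))  # same code both times
```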

  10. Large-scale genome-wide association studies and meta-analyses of longitudinal change in adult lung function.

    Directory of Open Access Journals (Sweden)

    Wenbo Tang

Genome-wide association studies (GWAS) have identified numerous loci influencing cross-sectional lung function, but less is known about genes influencing longitudinal change in lung function. We performed GWAS of the rate of change in forced expiratory volume in the first second (FEV1) in 14 longitudinal, population-based cohort studies comprising 27,249 adults of European ancestry using linear mixed-effects models, and combined cohort-specific results using fixed-effect meta-analysis to identify novel genetic loci associated with longitudinal change in lung function. Gene expression analyses were subsequently performed for identified genetic loci. As a secondary aim, we estimated the mean rate of decline in FEV1 by smoking pattern, irrespective of genotypes, across these 14 studies using meta-analysis. The overall meta-analysis produced suggestive evidence for association at the novel IL16/STARD5/TMC3 locus on chromosome 15 (P = 5.71 × 10^-7). In addition, meta-analysis using the five cohorts with ≥3 FEV1 measurements per participant identified the novel ME3 locus on chromosome 11 (P = 2.18 × 10^-8) at genome-wide significance. Neither locus was associated with FEV1 decline in two additional cohort studies. We confirmed gene expression of IL16, STARD5, and ME3 in multiple lung tissues. Publicly available microarray data confirmed differential expression of all three genes in lung samples from COPD patients compared with controls. Irrespective of genotypes, the combined estimate for FEV1 decline was 26.9, 29.2 and 35.7 mL/year in never, former, and persistent smokers, respectively. In this large-scale GWAS, we identified two novel genetic loci in association with the rate of change in FEV1 that harbor candidate genes with biologically plausible functional links to lung function.
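The fixed-effect combination step described above is standard inverse-variance weighting. A minimal sketch in Python; the per-cohort effect sizes below are made-up illustrative numbers, not values from the study:

```python
import math

def fixed_effect_meta(betas, ses):
    """Inverse-variance weighted fixed-effect meta-analysis.

    Each cohort's effect estimate is weighted by the inverse of its squared
    standard error; the combined SE is 1/sqrt(sum of weights).
    """
    weights = [1.0 / se ** 2 for se in ses]
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se

# Hypothetical per-cohort estimates of FEV1 decline per allele (mL/year):
betas = [-1.8, -2.4, -1.2, -2.0]
ses = [0.9, 1.1, 0.7, 1.0]
beta, se = fixed_effect_meta(betas, ses)
print(round(beta, 3), round(se, 3), round(beta / se, 2))
```

Note that the combined standard error is always smaller than the smallest cohort-level standard error, which is what lets a meta-analysis reach genome-wide significance when no single cohort can.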

  11. TOPOLOGY OF A LARGE-SCALE STRUCTURE AS A TEST OF MODIFIED GRAVITY

    International Nuclear Information System (INIS)

    Wang Xin; Chen Xuelei; Park, Changbom

    2012-01-01

    The genus of the isodensity contours is a robust measure of the topology of a large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the isodensity contours, an intrinsic measure of the topology of the large-scale structure, as a statistic to be used in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect that the genus-smoothing-scale relation is basically time independent. However, in some modified gravity models where structures grow with different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the cases of the f(R) theory, DGP braneworld theory as well as the parameterized post-Friedmann models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21 cm radio surveys in the near future.
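For reference, the genus statistic the authors build on has a closed form for a Gaussian random field: g(ν) ∝ (1 − ν²) e^(−ν²/2), where ν is the density threshold in units of the field's standard deviation. The amplitude depends on the power spectrum and smoothing scale, but the shape does not, which is what makes the genus-smoothing-scale relation a clean test. A small sketch (amplitude normalized to 1 for illustration):

```python
import math

def gaussian_genus(nu, amplitude=1.0):
    """Genus per unit volume of a Gaussian random field at threshold nu.

    The shape (1 - nu^2) * exp(-nu^2 / 2) is universal; only the amplitude
    depends on the power spectrum and the smoothing scale.
    """
    return amplitude * (1.0 - nu ** 2) * math.exp(-nu ** 2 / 2.0)

print(gaussian_genus(0.0) > 0)            # True: sponge-like at the median
print(gaussian_genus(2.0) < 0)            # True: isolated clusters in the tail
print(abs(gaussian_genus(1.0)) < 1e-12)   # True: zero crossing at nu = 1
```

Positive genus near the median density indicates a sponge-like topology (many tunnels); negative genus in the tails indicates isolated clusters and voids.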

  12. Land use change impacts on floods at the catchment scale: Challenges and opportunities for future research

    Science.gov (United States)

    Rogger, M.; Agnoletti, M.; Alaoui, A.; Bathurst, J. C.; Bodner, G.; Borga, M.; Chaplot, V.; Gallart, F.; Glatzel, G.; Hall, J.; Holden, J.; Holko, L.; Horn, R.; Kiss, A.; Kohnová, S.; Leitinger, G.; Lennartz, B.; Parajka, J.; Perdigão, R.; Peth, S.; Plavcová, L.; Quinton, J. N.; Robinson, M.; Salinas, J. L.; Santoro, A.; Szolgay, J.; Tron, S.; van den Akker, J. J. H.; Viglione, A.; Blöschl, G.

    2017-07-01

    Research gaps in understanding flood changes at the catchment scale caused by changes in forest management, agricultural practices, artificial drainage, and terracing are identified. Potential strategies in addressing these gaps are proposed, such as complex systems approaches to link processes across time scales, long-term experiments on physical-chemical-biological process interactions, and a focus on connectivity and patterns across spatial scales. It is suggested that these strategies will stimulate new research that coherently addresses the issues across hydrology, soil and agricultural sciences, forest engineering, forest ecology, and geomorphology.

  13. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resultant value of linear complexity; therefore, the linear complexity is generally given as an estimated value. Since the linearization method instead calculates from the algorithm of the PRNG, it can determine the lower bound of the linear complexity.
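For contrast, the sequence-based approach the paper compares against can be sketched compactly. Below is a standard Berlekamp-Massey implementation over GF(2): fed at least twice as many output bits as the register length, it recovers the linear complexity. The example sequence comes from an assumed degree-4 LFSR with connection polynomial 1 + x^3 + x^4 (chosen here for illustration):

```python
def berlekamp_massey(s):
    """Return the linear complexity of a binary sequence s (list of 0/1)."""
    n = len(s)
    c = [0] * n
    b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy between s[i] and the current LFSR's prediction.
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Degree-4 LFSR: s[i] = s[i-3] XOR s[i-4], connection polynomial 1+x^3+x^4.
seq = [0, 0, 0, 1]
for i in range(4, 30):
    seq.append(seq[i - 3] ^ seq[i - 4])
print(berlekamp_massey(seq))  # 4
```

This illustrates the point made in the abstract: the result depends on the observed output (and hence on the initial state), whereas the linearization method works from the generator's algebraic description.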

  14. Linear programming mathematics, theory and algorithms

    CERN Document Server

    1996-01-01

    Linear Programming provides an in-depth look at simplex based as well as the more recent interior point techniques for solving linear programming problems. Starting with a review of the mathematical underpinnings of these approaches, the text provides details of the primal and dual simplex methods with the primal-dual, composite, and steepest edge simplex algorithms. This then is followed by a discussion of interior point techniques, including projective and affine potential reduction, primal and dual affine scaling, and path following algorithms. Also covered is the theory and solution of the linear complementarity problem using both the complementary pivot algorithm and interior point routines. A feature of the book is its early and extensive development and use of duality theory. Audience: The book is written for students in the areas of mathematics, economics, engineering and management science, and professionals who need a sound foundation in the important and dynamic discipline of linear programming.
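The geometric fact underlying the simplex method discussed in the book is that an optimum of a linear program lies at a vertex of the feasible polytope. A brute-force sketch in Python for a tiny two-variable problem makes this concrete; enumerating all constraint intersections is purely illustrative, since real simplex implementations pivot from vertex to adjacent vertex rather than enumerating them all:

```python
from itertools import combinations

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0
# Each constraint is written as a*x + b*y <= c:
cons = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def feasible(p, tol=1e-9):
    x, y = p
    return all(a * x + b * y <= c + tol for a, b, c in cons)

# Candidate vertices: intersections of pairs of constraint boundaries.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundaries never intersect
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible((x, y)):
        vertices.append((x, y))

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])
```

Here the optimum is the vertex (4, 0) with objective value 12; duality theory, developed early in the book, certifies such a solution via a matching dual vertex.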

  15. Non-Linear Dynamics of Saturn's Rings

    Science.gov (United States)

    Esposito, L. W.

    2016-12-01

Non-linear processes can explain why Saturn's rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. Stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response that pushes the system across thresholds leading to persistent states. Some of this non-linearity is captured in a simple Predator-Prey Model: periodic forcing from the moon causes streamline crowding, which damps the relative velocity. About a quarter phase later, the aggregates stir the system to higher relative velocity, and the limit cycle repeats each orbit, with relative velocity ranging from nearly zero to a multiple of the orbit average. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like 'straw' that can explain the halo morphology and spectroscopy. Cyclic velocity changes cause perturbed regions to reach higher collision speeds at some orbital phases, which preferentially removes small regolith particles; surrounding particles diffuse back too slowly to erase the effect, which gives the halo morphology. This requires energetic collisions (v ≈ 10 m/s, with throw distances of about 200 km, implying objects of scale R ≈ 20 km). Transform to Duffing equation: With the coordinate transformation z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping explains both small and large particles at resonances. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating it as an asymmetric random walk with reflecting boundaries
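The limit-cycle behaviour described above can be illustrated with a generic periodically forced predator-prey pair. This is a stand-in sketch, not the paper's equations: aggregate mass M plays "prey", velocity dispersion v plays "predator", and all coefficients are arbitrary illustrative choices.

```python
import math

def step(M, v, t, dt, eps=0.3, omega=2.0 * math.pi):
    """One Euler step of a periodically forced Lotka-Volterra pair."""
    forcing = 1.0 + eps * math.cos(omega * t)  # stand-in for resonant forcing
    dM = M * (forcing - v)   # aggregates ("prey") grow until stirred apart
    dv = v * (M - 1.0)       # dispersion ("predator") pumped by collisions
    return M + dt * dM, v + dt * dv

M, v, t, dt = 1.2, 0.8, 0.0, 1e-3
Ms, vs = [], []
for _ in range(20000):   # integrate for 20 time units
    M, v = step(M, v, t, dt)
    t += dt
    Ms.append(M)
    vs.append(v)

# The pair cycles: M and v stay positive and oscillate out of phase.
print(min(Ms) > 0.0, min(vs) > 0.0, max(Ms) > min(Ms))
```

The multiplicative form keeps both quantities positive, and the quarter-phase lag between the peaks of M and v is the same structural feature the abstract describes for streamline crowding and stirring.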

  16. Large Scale Chromosome Folding Is Stable against Local Changes in Chromatin Structure.

    Directory of Open Access Journals (Sweden)

    Ana-Maria Florescu

    2016-06-01

Characterizing the link between small-scale chromatin structure and large-scale chromosome folding during interphase is a prerequisite for understanding transcription. Yet, this link remains poorly investigated. Here, we introduce a simple biophysical model where interphase chromosomes are described in terms of the folding of chromatin sequences composed of alternating blocks of fibers with different thicknesses and flexibilities, and we use it to study the influence of sequence disorder on chromosome behaviors in space and time. By employing extensive computer simulations, we thus demonstrate that chromosomes undergo noticeable conformational changes only on length-scales smaller than 10^5 basepairs and time-scales shorter than a few seconds, and we suggest there might exist effective upper bounds to the detection of chromosome reorganization in eukaryotes. We prove the relevance of our framework by modeling recent experimental FISH data on murine chromosomes.

  17. Evaluating Change in Behavioral Preferences: Multidimensional Scaling Single-Ideal Point Model

    Science.gov (United States)

    Ding, Cody

    2016-01-01

    The purpose of the article is to propose a multidimensional scaling single-ideal point model as a method to evaluate changes in individuals' preferences under the explicit methodological framework of behavioral preference assessment. One example is used to illustrate the approach for a clear idea of what this approach can accomplish.

  18. Shape shifting predicts ontogenetic changes in metabolic scaling in diverse aquatic invertebrates.

    Science.gov (United States)

    Glazier, Douglas S; Hirst, Andrew G; Atkinson, David

    2015-03-07

    Metabolism fuels all biological activities, and thus understanding its variation is fundamentally important. Much of this variation is related to body size, which is commonly believed to follow a 3/4-power scaling law. However, during ontogeny, many kinds of animals and plants show marked shifts in metabolic scaling that deviate from 3/4-power scaling predicted by general models. Here, we show that in diverse aquatic invertebrates, ontogenetic shifts in the scaling of routine metabolic rate from near isometry (bR = scaling exponent approx. 1) to negative allometry (bR < 1), or the reverse, are associated with significant changes in body shape (indexed by bL = the scaling exponent of the relationship between body mass and body length). The observed inverse correlations between bR and bL are predicted by metabolic scaling theory that emphasizes resource/waste fluxes across external body surfaces, but contradict theory that emphasizes resource transport through internal networks. Geometric estimates of the scaling of surface area (SA) with body mass (bA) further show that ontogenetic shifts in bR and bA are positively correlated. These results support new metabolic scaling theory based on SA influences that may be applied to ontogenetic shifts in bR shown by many kinds of animals and plants. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
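The scaling exponents bR and bL discussed above are slopes of log-log regressions of the form R = a·M^b. A minimal sketch in Python, recovering a known exponent from synthetic, noise-free mass-rate data (the data are fabricated for illustration only):

```python
import math

def fit_power_law(masses, rates):
    """Estimate b in R = a * M**b by least squares on log-log data."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# Synthetic organisms with a known exponent of 0.75 (no noise),
# so the fit recovers it exactly up to floating point:
masses = [0.1, 1.0, 10.0, 100.0, 1000.0]
rates = [2.0 * m ** 0.75 for m in masses]
print(round(fit_power_law(masses, rates), 6))  # 0.75
```

Ontogenetic shifts of the kind the paper reports would appear as a change in this fitted slope between developmental stages, e.g. from b ≈ 1 (isometry) to b < 1 (negative allometry).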

  19. The International Linear Collider Progress Report 2015

    Energy Technology Data Exchange (ETDEWEB)

    Evans, L. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Yamamoto, A. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2015-07-15

    The International Committee for Future Accelerators (ICFA) set up the Global Design Effort (GDE) for the design of the International Linear Collider (ILC) in 2005. Drawing on the resources of over 300 national laboratories, universities and institutes worldwide, the GDE produced a Reference Design Report in 2007, followed by a more detailed Technical Design Report (TDR) in 2013. Following this report, the GDE was disbanded. A compact core team, the Linear Collider Collaboration (LCC), replaced it. This is still under the auspices of ICFA and is directly overseen by the Linear Collider Board, which reports to ICFA. The LCC is charged with continuing the design effort on a much-reduced scale until the Project is approved for construction. An additional mandate of the LCC was to bring together all linear collider work, including the CERN-based Compact Linear Collider (CLIC) under one structure in order to exploit synergies between the two studies.

  20. Water limited agriculture in Africa: Climate change sensitivity of large scale land investments

    Science.gov (United States)

    Rulli, M. C.; D'Odorico, P.; Chiarelli, D. D.; Davis, K. F.

    2015-12-01

The past few decades have seen unprecedented changes in the global agricultural system, with a dramatic increase in the rates of food production fueled by an escalating demand for food calories as a result of demographic growth, dietary changes, and - more recently - new bioenergy policies. Food prices have become consistently higher and increasingly volatile, with dramatic spikes in 2007-08 and 2010-11. The confluence of these factors has heightened demand for land and brought a wave of land investment to the developing world: some of the more affluent countries are trying to secure land rights in areas suitable for agriculture. According to some estimates, to date, roughly 38 million hectares have been acquired worldwide by large scale investors, 16 million of which are in Africa. More than 85% of large scale land acquisitions in Africa are by foreign investors. Many land deals are motivated not only by the need for fertile land but also by the water resources required for crop production. Despite some recent assessments of the water appropriation associated with large scale land investments, their impact on the water resources of the target countries under present conditions and climate change scenarios remains poorly understood. Here we investigate irrigation water requirements of various crops planted in the acquired land as an indicator of the pressure likely placed by land investors on the ("blue") water resources of target regions in Africa, and evaluate the sensitivity to climate change scenarios.

  1. Linear and Non-linear Numerical Sea-keeping Evaluation of a Fast Monohull Ferry Compared to Full Scale Measurements

    DEFF Research Database (Denmark)

    Wang, Zhaohui; Folsø, Rasmus; Bondini, Francesca

    1999-01-01

    , full-scale measurements have been performed on board a 128 m monohull fast ferry. This paper deals with the results from these full-scale measurements. The primary results considered are pitch motion, midship vertical bending moment and vertical acceleration at the bow. Previous comparisons between...

  2. Resilience to climate change in a cross-scale tourism governance context: a combined quantitative-qualitative network analysis

    Directory of Open Access Journals (Sweden)

    Tobias Luthe

    2016-03-01

Social systems in mountain regions are exposed to a number of disturbances, such as climate change. Calls for conceptual and practical approaches on how to address climate change have been taken up in the literature. The resilience concept, as a comprehensive theory-driven approach to address climate change, has only recently increased in importance. Limited research has been undertaken concerning tourism and resilience from a network governance point of view. We analyze tourism supply chain networks with regard to resilience to climate change at the municipal governance scale of three Alpine villages. We compare these with a planned destination management organization (DMO) as a governance entity of the same three municipalities on the regional scale. Network measures are analyzed via a quantitative social network analysis (SNA) focusing on resilience from a tourism governance point of view. Results indicate higher resilience of the regional DMO because of a more flexible and diverse governance structure, more centralized steering of fast collective action, and improved innovative capacity resulting from higher modularity and better core-periphery integration. Interpretations of the quantitative results have been qualitatively validated by interviews and a workshop. We conclude that adaptation of tourism-dependent municipalities to gradual climate change should be dealt with at a regional governance scale, and adaptation to sudden changes at a municipal scale. Overall, DMO building at a regional scale may enhance the resilience of tourism destinations, if the municipalities are well integrated.
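Two of the SNA quantities such a comparison turns on, density and degree centralization, are easy to compute directly. A minimal Python sketch on two hypothetical toy networks (a chain-like municipal network versus a star-like DMO hub; these networks are illustrative, not the study's data):

```python
def density(adj):
    """Fraction of possible undirected edges that are present."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return edges / (n * (n - 1) / 2)

def degree_centralization(adj):
    """Freeman degree centralization: 0 (regular graph) to 1 (perfect star)."""
    n = len(adj)
    degs = [len(nbrs) for nbrs in adj.values()]
    dmax = max(degs)
    return sum(dmax - d for d in degs) / ((n - 1) * (n - 2))

chain = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # municipal-style chain
star = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}    # DMO-style hub
print(density(chain), round(degree_centralization(chain), 3))  # 0.5 0.333
print(density(star), degree_centralization(star))              # 0.5 1.0
```

Both toy networks have the same density, but the star is maximally centralized, which is the kind of structural difference behind the paper's finding of "more centralized steering of fast collective action" in the regional DMO.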

  3. Land-Use Scenarios: National-Scale Housing-Density Scenarios Consistent with Climate Change Storylines (Final Report)

    Science.gov (United States)

    EPA announced the availability of the final report, Land-Use Scenarios: National-Scale Housing-Density Scenarios Consistent with Climate Change Storylines. This report describes the scenarios and models used to generate national-scale housing density scenarios for the con...

  4. Large-scale impact of climate change vs. land-use change on future biome shifts in Latin America.

    Science.gov (United States)

    Boit, Alice; Sakschewski, Boris; Boysen, Lena; Cano-Crespo, Ana; Clement, Jan; Garcia-Alaniz, Nashieli; Kok, Kasper; Kolb, Melanie; Langerwisch, Fanny; Rammig, Anja; Sachse, René; van Eupen, Michiel; von Bloh, Werner; Clara Zemp, Delphine; Thonicke, Kirsten

    2016-11-01

    Climate change and land-use change are two major drivers of biome shifts causing habitat and biodiversity loss. What is missing is a continental-scale future projection of the estimated relative impacts of both drivers on biome shifts over the course of this century. Here, we provide such a projection for the biodiverse region of Latin America under four socio-economic development scenarios. We find that across all scenarios 5-6% of the total area will undergo biome shifts that can be attributed to climate change until 2099. The relative impact of climate change on biome shifts may overtake land-use change even under an optimistic climate scenario, if land-use expansion is halted by the mid-century. We suggest that constraining land-use change and preserving the remaining natural vegetation early during this century creates opportunities to mitigate climate-change impacts during the second half of this century. Our results may guide the evaluation of socio-economic scenarios in terms of their potential for biome conservation under global change. © 2016 John Wiley & Sons Ltd.

  5. Testing linear growth rate formulas of non-scale endogenous growth models

    NARCIS (Netherlands)

    Ziesemer, Thomas

    2017-01-01

    Endogenous growth theory has produced formulas for steady-state growth rates of income per capita which are linear in the growth rate of the population. Depending on the details of the models, slopes and intercepts are positive, zero or negative. Empirical tests have taken over the assumption of

  6. Direct linear driving systems; Les entrainements lineaires directs

    Energy Technology Data Exchange (ETDEWEB)

    Favre, E.; Brunner, C.; Piaget, D. [ETEL SA (France)

    1999-11-01

The linear motor is one of the most important developments in electrical drive technology. However, it only began to be adopted on a large scale at the beginning of the 1990s, and will not be considered a mature technology until well into the next millennium. Actuators based on linear motor technology have a number of technical advantages, including high speed, high positional accuracy and fine resolution. They also require fewer component parts. Some precautions are necessary when using linear motors: care must be taken to avoid overheating and excessive vibration, and the magnetic components must be protected.

  7. Quantifying feedforward control: a linear scaling model for fingertip forces and object weight.

    Science.gov (United States)

    Lu, Ying; Bilaloglu, Seda; Aluru, Viswanath; Raghavan, Preeti

    2015-07-01

    The ability to predict the optimal fingertip forces according to object properties before the object is lifted is known as feedforward control, and it is thought to occur due to the formation of internal representations of the object's properties. The control of fingertip forces to objects of different weights has been studied extensively by using a custom-made grip device instrumented with force sensors. Feedforward control is measured by the rate of change of the vertical (load) force before the object is lifted. However, the precise relationship between the rate of change of load force and object weight and how it varies across healthy individuals in a population is not clearly understood. Using sets of 10 different weights, we have shown that there is a log-linear relationship between the fingertip load force rates and weight among neurologically intact individuals. We found that after one practice lift, as the weight increased, the peak load force rate (PLFR) increased by a fixed percentage, and this proportionality was common among the healthy subjects. However, at any given weight, the level of PLFR varied across individuals and was related to the efficiency of the muscles involved in lifting the object, in this case the wrist and finger extensor muscles. These results quantify feedforward control during grasp and lift among healthy individuals and provide new benchmarks to interpret data from neurologically impaired populations as well as a means to assess the effect of interventions on restoration of feedforward control and its relationship to muscular control. Copyright © 2015 the American Physiological Society.
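The log-linear relationship reported above, where the peak load force rate grows by a fixed percentage per weight increment, corresponds to a semilog regression: log(PLFR) is linear in weight. A sketch in Python on fabricated, noise-free data for a hypothetical subject (5% growth per 100 g is an assumed value, not the paper's estimate):

```python
import math

def semilog_slope(weights, plfr):
    """Least-squares slope k in log(PLFR) = log(a) + k * weight."""
    ys = [math.log(p) for p in plfr]
    n = len(weights)
    mw, my = sum(weights) / n, sum(ys) / n
    return sum((w - mw) * (y - my) for w, y in zip(weights, ys)) / \
        sum((w - mw) ** 2 for w in weights)

# Hypothetical subject: PLFR grows 5% per 100 g of added weight (noise-free).
weights = [200, 300, 400, 500, 600, 700]                     # grams
plfr = [30.0 * 1.05 ** ((w - 200) / 100) for w in weights]   # N/s
k = semilog_slope(weights, plfr)
print(round(100 * (math.exp(100 * k) - 1), 2))  # 5.0  (percent per 100 g)
```

In the paper's framing, the fitted percentage is shared across healthy subjects while the intercept (overall PLFR level) varies with muscular efficiency, so the two parameters separate feedforward scaling from individual strength.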

  8. Surface changes of metal alloys and high-strength ceramics after ultrasonic scaling and intraoral polishing.

    Science.gov (United States)

    Yoon, Hyung-In; Noh, Hyo-Mi; Park, Eun-Jin

    2017-06-01

This study evaluated the effect of repeated ultrasonic scaling and surface polishing with intraoral polishing kits on the surface roughness of three different restorative materials. A total of 15 identical discs were fabricated with three different materials. The ultrasonic scaling was conducted for 20 seconds on the test surfaces. Subsequently, a multi-step polishing with the recommended intraoral polishing kit was performed for 30 seconds. A 3D profiler and scanning electron microscopy were used to investigate surface integrity before scaling (pristine), after scaling, and after surface polishing for each material. Non-parametric Friedman and Wilcoxon signed rank sum tests were employed to statistically evaluate surface roughness changes of the pristine, scaled, and polished specimens. The level of significance was set at 0.05. Surface roughness values before scaling (pristine), after scaling, and after polishing of the metal alloys were 3.02±0.34 µm, 2.44±0.72 µm, and 3.49±0.72 µm, respectively. Surface roughness of lithium disilicate increased from 2.35±1.05 µm (pristine) to 28.54±9.64 µm (scaling), and further increased after polishing (56.66±9.12 µm, P < .05). Surface roughness of zirconia increased sharply after scaling (from 1.65±0.42 µm to 101.37±18.75 µm), while it decreased after polishing (29.57±18.86 µm, P < .05). Repeated ultrasonic scaling significantly changed the surface integrities of lithium disilicate and zirconia. Surface polishing with the multi-step intraoral kit after repeated scaling was effective only for the zirconia, not for lithium disilicate.

  9. Adapting crop management practices to climate change: Modeling optimal solutions at the field scale

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.; Walter, A.

    2013-01-01

    Climate change will alter the environmental conditions for crop growth and require adjustments in management practices at the field scale. In this paper, we analyzed the impacts of two different climate change scenarios on optimal field management practices in winterwheat and grain maize production

  10. Spatial modeling of agricultural land use change at global scale

    Science.gov (United States)

    Meiyappan, P.; Dalton, M.; O'Neill, B. C.; Jain, A. K.

    2014-11-01

    Long-term modeling of agricultural land use is central in global scale assessments of climate change, food security, biodiversity, and climate adaptation and mitigation policies. We present a global-scale dynamic land use allocation model and show that it can reproduce the broad spatial features of the past 100 years of evolution of cropland and pastureland patterns. The modeling approach integrates economic theory, observed land use history, and data on both socioeconomic and biophysical determinants of land use change, and estimates relationships using long-term historical data, thereby making it suitable for long-term projections. The underlying economic motivation is maximization of expected profits by hypothesized landowners within each grid cell. The model predicts fractional land use for cropland and pastureland within each grid cell based on socioeconomic and biophysical driving factors that change with time. The model explicitly incorporates the following key features: (1) land use competition, (2) spatial heterogeneity in the nature of driving factors across geographic regions, (3) spatial heterogeneity in the relative importance of driving factors and previous land use patterns in determining land use allocation, and (4) spatial and temporal autocorrelation in land use patterns. We show that land use allocation approaches based solely on previous land use history (but disregarding the impact of driving factors), or those accounting for both land use history and driving factors by mechanistically fitting models for the spatial processes of land use change do not reproduce well long-term historical land use patterns. With an example application to the terrestrial carbon cycle, we show that such inaccuracies in land use allocation can translate into significant implications for global environmental assessments. 
The modeling approach and its evaluation provide an example that can be useful to the land use, Integrated Assessment, and the Earth system modeling
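The profit-maximizing allocation within a grid cell can be illustrated with a stylized logit share model, a common functional form for fractional land-use choice; this is a sketch under assumed functional form and made-up profits, not the paper's estimated model:

```python
import math

def allocate(profits, lam=1.0):
    """Stylized logit share model: land-use fractions rise with expected profit.

    lam sets how sharply allocation concentrates on the best use; as
    lam -> infinity this approaches all-or-nothing profit maximization.
    """
    expo = [math.exp(lam * p) for p in profits]
    total = sum(expo)
    return [e / total for e in expo]

# Hypothetical expected profits in one grid cell for
# [cropland, pastureland, natural vegetation]:
shares = allocate([1.2, 0.8, 0.3], lam=2.0)
print([round(s, 3) for s in shares])
```

Because the shares always sum to one and respond smoothly to the driving factors, a form like this naturally expresses the land-use competition and driver heterogeneity the paper emphasizes.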

  11. The economic impacts of climate change on the Chilean agricultural sector: A non-linear agricultural supply model

    Directory of Open Access Journals (Sweden)

    Roberto Ponce

    2014-12-01

Agriculture could be one of the economic sectors most vulnerable to the impacts of climate change in the coming decades, with impacts threatening agricultural production in general and food security in particular. Within this context, climate change will impose a challenge on policy makers, especially in those countries that base their development on primary sectors. In this paper we present a non-linear agricultural supply model for the analysis of the economic impacts of changes in crop yields due to climate change. The model accounts for uncertainty through the use of Monte Carlo simulations of crop yields. According to our results, climate change impacts on the Chilean agricultural sector are widespread, with considerable distributional consequences across regions, and with fruit producers being worse off than crop producers. In general, the results reported here are consistent with those reported by previous studies showing large economic impacts on the northern zone. However, our model does not simulate remarkable economic consequences at the country level as previous studies did.
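The Monte Carlo treatment of yield uncertainty can be sketched in a few lines: draw yields from an assumed distribution, push each draw through an economic outcome function, and summarize the resulting distribution. All numbers and the profit function below are illustrative assumptions, not the paper's calibration:

```python
import random

def farm_profit(yield_t_ha, price=180.0, cost_per_ha=700.0):
    """Hypothetical per-hectare profit (currency units per hectare)."""
    return yield_t_ha * price - cost_per_ha

random.seed(42)
# Climate-change yield uncertainty as a truncated normal: mean 5 t/ha,
# sd 1 t/ha, floored at zero (illustrative numbers only).
draws = [max(0.0, random.gauss(5.0, 1.0)) for _ in range(100000)]
profits = sorted(farm_profit(y) for y in draws)

mean = sum(profits) / len(profits)
p5 = profits[int(0.05 * len(profits))]   # 5th percentile: downside risk
print(round(mean, 1), round(p5, 1))
```

Reporting a low percentile alongside the mean is what lets this kind of analysis speak to distributional risk, not just expected impact.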

  12. Full scale experimental analysis of wind direction changes (EOD)

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose

    2007-01-01

A coherent wind speed and wind direction change (ECD) load case is defined in the wind turbine standard. This load case is an essential extreme load case that e.g. may be design driving for flap deflection of active stall controlled wind turbines. The present analysis identifies statistically the magnitudes of a joint gust event, defined by a simultaneous wind speed and direction change, in order to obtain an indication of the validity of the magnitudes specified in the IEC code. The analysis relates to pre-specified recurrence periods and is based on full-scale wind field measurements. The wind direction gust amplitudes associated with the investigated European sites are low compared to the recommended IEC values. However, these values, as a function of the mean wind speed, are difficult to validate thoroughly due to the limited number of fully correlated measurements.

  13. Linear Accelerator Stereotactic Radiosurgery of Central Nervous System Arteriovenous Malformations: A 15-Year Analysis of Outcome-Related Factors in a Single Tertiary Center.

    Science.gov (United States)

    Thenier-Villa, José Luis; Galárraga-Campoverde, Raúl Alejandro; Martínez Rolán, Rosa María; De La Lama Zaragoza, Adolfo Ramón; Martínez Cueto, Pedro; Muñoz Garzón, Víctor; Salgado Fernández, Manuel; Conde Alonso, Cesáreo

    2017-07-01

Linear accelerator stereotactic radiosurgery is one of the modalities available for the treatment of central nervous system arteriovenous malformations (AVMs). The aim of this study was to describe our 15-year experience with this technique in a single tertiary center and the analysis of outcome-related factors. From 1998 to 2013, 195 patients were treated with linear accelerator-based radiosurgery; we conducted a retrospective study collecting patient- and AVM-related variables. Treatment outcomes were obliteration, posttreatment hemorrhage, symptomatic radiation-induced changes, and 3-year neurologic status. We also analyzed prognostic factors of each outcome and the predictability of 5 scales: Spetzler-Martin grade, Lawton-Young supplementary and Lawton combined scores, radiosurgery-based AVM score, Virginia Radiosurgery AVM Scale, and Heidelberg score. The overall obliteration rate was 81%. Nidus diameter and venous drainage were predictive of obliteration (P < .05). Linear accelerator-based radiosurgery is a useful, valid, effective, and safe modality for treatment of brain AVMs. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching...
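The baseline similarity measure being improved on here is classic normalized cross-correlation (NCC). A minimal Python sketch on 1-D "patches" shows both its strength (invariance to linear intensity changes) and its weakness (no invariance to geometric changes, which is what multiscale oriented variants like MOCC address):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches (lists)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

patch = [10, 20, 30, 40, 50, 60]
brighter = [2 * p + 15 for p in patch]   # linear intensity (gain/offset) change
shuffled = [30, 10, 60, 20, 50, 40]      # geometric rearrangement

print(round(ncc(patch, brighter), 6))    # 1.0: invariant to gain and offset
print(round(ncc(patch, shuffled), 3))    # drops: geometry breaks correlation
```

NCC of a patch with any positive-gain linear transform of itself is exactly 1, but rotating or rescaling the sampling grid rearranges the values and destroys the score, motivating orientation- and scale-aware correlation.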

  15. Scaling up: Assessing social impacts at the macro-scale

    International Nuclear Information System (INIS)

    Schirmer, Jacki

    2011-01-01

    Social impacts occur at various scales, from the micro-scale of the individual to the macro-scale of the community. Identifying the macro-scale social changes that result from an impacting event is a common goal of social impact assessment (SIA), but it is challenging because multiple factors simultaneously influence social trends at any given time, and there are usually only a small number of cases available for examination. While some methods have been proposed for establishing the contribution of an impacting event to macro-scale social change, they remain relatively untested. This paper critically reviews methods recommended to assess macro-scale social impacts, and proposes and demonstrates a new approach. The 'scaling up' method involves developing a chain of logic linking change at the individual/site scale to the community scale. It enables a more problematised assessment of the likely contribution of an impacting event to macro-scale social change than previous approaches. The use of this approach in a recent study of change in dairy farming in south east Australia is described.

  16. A Design of Mechanical Frequency Converter Linear and Non-linear Spring Combination for Energy Harvesting

    International Nuclear Information System (INIS)

    Yamamoto, K; Fujita, T; Kanda, K; Maenaka, K; Badel, A; Formosa, F

    2014-01-01

    In this study, the improvement of energy harvesting from wideband vibration with random changes by using a combination of linear and non-linear springs is investigated. The system consists of a curved beam spring for non-linear buckling, which supports the linear mass-spring resonator. Applying a shock acceleration generates a snap-through action in the buckling spring. From the FEM analysis, we showed that the snap-through acceleration from the buckling action is independent of the applied shock amplitude and duration. We use this uniform acceleration as an impulse shock source for the linear resonator. The maximum shock response to the uniform snap-through acceleration is readily obtained using a shock response spectrum (SRS) analysis method. We first investigated the relationship between the snap-through behaviour and the initial curved deflection. Time-response results for the non-linear spring with snap-through, and the minimum force that produces buckling behaviour, were then obtained by FEM analysis. By obtaining the optimum SRS frequency for the linear resonator, we determined its resonant frequency with the MATLAB simulator
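    The SRS step described here can be sketched numerically: sweep a bank of damped single-degree-of-freedom oscillators over the base-acceleration record and record each oscillator's peak response. The stdlib-Python sketch below is illustrative; the time step, damping ratio, and frequencies are assumptions, not the authors' values.

    ```python
    import math

    def srs_peak(accel, dt, f_n, zeta=0.05):
        """Peak |displacement| of a base-excited SDOF oscillator with natural
        frequency f_n (Hz) driven by the acceleration record accel (m/s^2)."""
        wn = 2.0 * math.pi * f_n
        x, v, peak = 0.0, 0.0, 0.0
        for a in accel:
            # semi-implicit Euler on  x'' + 2*zeta*wn*x' + wn^2*x = -a
            acc = -a - 2.0 * zeta * wn * v - wn * wn * x
            v += acc * dt
            x += v * dt
            peak = max(peak, abs(x))
        return peak

    # Shock response spectrum: peak response versus oscillator frequency
    dt = 1e-4
    pulse = [100.0 * math.sin(math.pi * i / 50) for i in range(50)]  # half-sine shock
    record = pulse + [0.0] * 5000                                    # free decay
    srs = {f: srs_peak(record, dt, f) for f in (50, 100, 200, 400)}
    ```

    Picking the linear resonator's frequency then amounts to reading off the frequency at which this spectrum peaks for the (uniform) snap-through acceleration.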

  17. Effects of collisions on linear and non-linear spectroscopic line shapes

    International Nuclear Information System (INIS)

    Berman, P.R.

    1978-01-01

    A fundamental physical problem is the determination of atom-atom, atom-molecule and molecule-molecule differential and total scattering cross sections. In this work, a technique for studying atomic and molecular collisions using spectroscopic line shape analysis is discussed. Collisions occurring within an atomic or molecular sample influence the sample's absorptive or emissive properties. Consequently, the line shapes associated with the linear or non-linear absorption of external fields by an atomic system reflect the collisional processes occurring in the gas. Explicit line shape expressions are derived characterizing linear or saturated absorption by two- or three-level 'active' atoms which are undergoing collisions with perturber atoms. The line shapes may be broadened, shifted, narrowed, or distorted as a result of collisions which may be 'phase-interrupting' or 'velocity-changing' in nature. Systematic line shape studies can be used to obtain information on both the differential and total active atom-perturber scattering cross sections. (Auth.)

  18. Deriving Scaling Factors Using a Global Hydrological Model to Restore GRACE Total Water Storage Changes for China's Yangtze River Basin

    Science.gov (United States)

    Long, Di; Yang, Yuting; Yoshihide, Wada; Hong, Yang; Liang, Wei; Chen, Yaning; Yong, Bin; Hou, Aizhong; Wei, Jiangfeng; Chen, Lu

    2015-01-01

    This study used a global hydrological model (GHM), PCR-GLOBWB, which simulates surface water storage changes, natural and human induced groundwater storage changes, and the interactions between surface water and subsurface water, to generate scaling factors by mimicking low-pass filtering of GRACE signals. Signal losses in GRACE data were subsequently restored by the scaling factors from PCR-GLOBWB. Results indicate greater spatial heterogeneity in scaling factor from PCR-GLOBWB and CLM4.0 than that from GLDAS-1 Noah due to comprehensive simulation of surface and subsurface water storage changes for PCR-GLOBWB and CLM4.0. Filtered GRACE total water storage (TWS) changes applied with PCR-GLOBWB scaling factors show closer agreement with water budget estimates of TWS changes than those with scaling factors from other land surface models (LSMs) in China's Yangtze River basin. Results of this study develop a further understanding of the behavior of scaling factors from different LSMs or GHMs over hydrologically complex basins, and could be valuable in providing more accurate TWS changes for hydrological applications (e.g., monitoring drought and groundwater storage depletion) over regions where human-induced interactions between surface water and subsurface water are intensive.
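    The scaling-factor idea is essentially a least-squares regression: simulate "true" storage changes with the model, pass them through the same low-pass filtering applied to GRACE, and find the multiplier that best restores the filtered series. A hedged sketch of that step (the toy attenuation and numbers are illustrative, not PCR-GLOBWB output):

    ```python
    def scaling_factor(model_true, model_filtered):
        """Least-squares k minimizing sum((true - k * filtered)^2); the same k
        is then applied to filtered GRACE TWS to restore the lost signal."""
        num = sum(t * f for t, f in zip(model_true, model_filtered))
        den = sum(f * f for f in model_filtered)
        return num / den

    # Toy example: filtering attenuates the simulated storage anomaly by half
    true_tws = [3.0, -1.0, 4.0, -2.5, 1.5]       # cm equivalent water height
    filtered = [0.5 * t for t in true_tws]        # signal loss from smoothing
    k = scaling_factor(true_tws, filtered)        # recovers the factor 2
    restored = [k * g for g in filtered]
    ```

    The spatial heterogeneity discussed in the abstract arises because k is computed per grid cell, so it inherits whatever surface/subsurface storage dynamics the model resolves there.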

  19. Classification of acute stress using linear and non-linear heart rate variability analysis derived from sternal ECG

    DEFF Research Database (Denmark)

    Tanev, George; Saadi, Dorthe Bodholt; Hoppe, Karsten

    2014-01-01

    Chronic stress detection is an important factor in predicting and reducing the risk of cardiovascular disease. This work is a pilot study with a focus on developing a method for detecting short-term psychophysiological changes through heart rate variability (HRV) features. The purpose of this pilot study is to establish and to gain insight into a set of features that could be used to detect psychophysiological changes that occur during chronic stress. This study elicited four different types of arousal by images, sounds, mental tasks and rest, and classified them using linear and non-linear HRV...
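    Two of the standard linear (time-domain) HRV features used in studies of this kind, SDNN and RMSSD, can be computed directly from successive RR intervals. A minimal stdlib sketch; the interval values are made up, and this is only a generic illustration, not the feature set of this particular study:

    ```python
    import math
    import statistics

    def hrv_time_domain(rr_ms):
        """SDNN and RMSSD (ms) from a series of RR intervals in milliseconds."""
        sdnn = statistics.stdev(rr_ms)                      # overall variability
        diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]   # successive differences
        rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # beat-to-beat
        return sdnn, rmssd

    sdnn, rmssd = hrv_time_domain([812, 790, 804, 826, 798])
    ```

    Non-linear HRV features (e.g. sample entropy or Poincaré descriptors) would be layered on top of the same RR series before classification.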

  20. Cosmological perturbations beyond linear order

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Cosmological perturbation theory is the standard tool to understand the formation of large-scale structure in the Universe. However, its degree of applicability is limited by the growth of the amplitude of the matter perturbations with time. This problem can be tackled by using N-body simulations or analytical techniques that go beyond the linear calculation. In my talk, I'll summarise some recent efforts in the latter that ameliorate the bad convergence of the standard perturbative expansion. The new techniques allow better analytical control of observables (such as the matter power spectrum) over scales very relevant to understanding the expansion history and formation of structure in the Universe.

  1. Behaviour change counselling--how do I know if I am doing it well? The development of the Behaviour Change Counselling Scale (BCCS).

    Science.gov (United States)

    Vallis, Michael

    2013-02-01

    The purpose of this article is to operationalize behaviour change counselling skills (motivation enhancement, behaviour modification, emotion management) that facilitate self-management support activities and to evaluate the psychometric properties of an expert rater scale, the Behaviour Change Counselling Scale (BCCS). Twenty-one healthcare providers with varying levels of behaviour change counselling training interviewed a simulated patient. Videotapes were independently rated by 3 experts on 2 occasions over 6 months. Data on item/subscale characteristics, interrater and test-retest reliability, and preliminary data on construct reliability were reported. All items of the BCCS performed well with the exception of 3 that were dropped due to infrequent endorsement. Most subscales showed strong psychometric properties. Interrater and test-retest reliability coefficients were uniformly high. Competency scores improved significantly from pre- to posttraining. Behaviour change counselling skills to guide lifestyle interventions can be operationalized and assessed in a reliable and valid manner. The BCCS can be used to guide clinical training in lifestyle counselling by operationalizing the component skills and providing feedback on the skill achieved. Further research is needed to establish cut scores for competency and scale construct and criterion validity. Copyright © 2013 Canadian Diabetes Association. Published by Elsevier Inc. All rights reserved.

  2. Simplified Linear Equation Solvers users manual

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W. [Argonne National Lab., IL (United States); Smith, B. [California Univ., Los Angeles, CA (United States)

    1993-02-01

    The solution of large sparse systems of linear equations is at the heart of many algorithms in scientific computing. The SLES package is a set of easy-to-use yet powerful and extensible routines for solving large sparse linear systems. The design of the package allows new techniques to be used in existing applications without any source code changes in the applications.
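    The kind of solver a package like SLES wraps can be illustrated with a dictionary-keyed sparse matrix and the conjugate-gradient iteration. This is a hedged pedagogical sketch only: SLES itself is a compiled library with its own interfaces, not this Python code.

    ```python
    def conjugate_gradient(A, b, tol=1e-12, max_iter=200):
        """Solve A x = b for a symmetric positive-definite sparse matrix A,
        stored as a dict mapping (row, col) -> nonzero value."""
        n = len(b)

        def matvec(x):
            y = [0.0] * n
            for (i, j), v in A.items():
                y[i] += v * x[j]
            return y

        x = [0.0] * n
        r = list(b)                        # residual b - A x for x = 0
        p = list(r)
        rs = sum(ri * ri for ri in r)
        for _ in range(max_iter):
            Ap = matvec(p)
            alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, Ap)]
            rs_new = sum(ri * ri for ri in r)
            if rs_new < tol:
                break
            p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
            rs = rs_new
        return x

    A = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
    x = conjugate_gradient(A, [1.0, 2.0])   # exact solution is (1/11, 7/11)
    ```

    The design point the abstract makes is that an application calls an abstract "solve" like this one, so the library can swap in new Krylov methods or preconditioners without source changes in the application.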

  3. Climate change in Inner Mongolia from 1955 to 2005-trends at regional, biome and local scales

    Energy Technology Data Exchange (ETDEWEB)

    Lu, N; Wilske, B; John, R; Chen, J [Department of Environmental Sciences, University of Toledo, Toledo, OH 43606 (United States); Ni, J, E-mail: nan.lu@utoledo.ed, E-mail: burkhard.wilske@utoledo.ed, E-mail: jni@ibcas.ac.c, E-mail: ranjeet.john@utoledo.ed, E-mail: jiquan.chen@utoledo.ed [Alfred Wegener Institute for Polar and Marine Research, Telegrafenberg A43, D-14473 Potsdam (Germany)

    2009-10-15

    This study investigated the climate change in Inner Mongolia based on 51 meteorological stations from 1955 to 2005. The climate data was analyzed at the regional, biome (i.e. forest, grassland and desert) and station scales, with the biome scale as our primary focus. The climate records showed trends of warmer and drier conditions in the region. The annual daily mean, maximum and minimum temperature increased whereas the diurnal temperature range (DTR) decreased. The decreasing trend of annual precipitation was not significant. However, the vapor pressure deficit (VPD) increased significantly. On the decadal scale, the warming and drying trends were more significant in the last 30 years than the preceding 20 years. The climate change varied among biomes, with more pronounced changes in the grassland and the desert biomes than in the forest biome. DTR and VPD showed the clearest inter-biome gradient from the lowest rate of change in the forest biome to the highest rate of change in the desert biome. The rates of change also showed large variations among the individual stations. Our findings correspond with the IPCC predictions that the future climate will vary significantly by location and through time, suggesting that adaptation strategies also need to be spatially viable.
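    Trend detection of the sort reported here typically comes down to an ordinary-least-squares slope of an annual series against year. A minimal sketch with invented numbers (not the Inner Mongolia station data):

    ```python
    def ols_slope(years, values):
        """Least-squares linear trend (units of `values` per year)."""
        n = len(years)
        my = sum(years) / n
        mv = sum(values) / n
        num = sum((x - my) * (y - mv) for x, y in zip(years, values))
        den = sum((x - my) ** 2 for x in years)
        return num / den

    # Illustrative annual-mean temperatures (degC) with a warming trend:
    years = list(range(1955, 1960))
    temps = [2.1, 2.3, 2.2, 2.5, 2.6]
    trend_per_decade = 10.0 * ols_slope(years, temps)
    ```

    In practice the significance of such a slope (e.g. for the precipitation trend the abstract calls "not significant") is then assessed with a t-test or a non-parametric Mann-Kendall test.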

  4. Climate change in Inner Mongolia from 1955 to 2005-trends at regional, biome and local scales

    International Nuclear Information System (INIS)

    Lu, N; Wilske, B; John, R; Chen, J; Ni, J

    2009-01-01

    This study investigated the climate change in Inner Mongolia based on 51 meteorological stations from 1955 to 2005. The climate data was analyzed at the regional, biome (i.e. forest, grassland and desert) and station scales, with the biome scale as our primary focus. The climate records showed trends of warmer and drier conditions in the region. The annual daily mean, maximum and minimum temperature increased whereas the diurnal temperature range (DTR) decreased. The decreasing trend of annual precipitation was not significant. However, the vapor pressure deficit (VPD) increased significantly. On the decadal scale, the warming and drying trends were more significant in the last 30 years than the preceding 20 years. The climate change varied among biomes, with more pronounced changes in the grassland and the desert biomes than in the forest biome. DTR and VPD showed the clearest inter-biome gradient from the lowest rate of change in the forest biome to the highest rate of change in the desert biome. The rates of change also showed large variations among the individual stations. Our findings correspond with the IPCC predictions that the future climate will vary significantly by location and through time, suggesting that adaptation strategies also need to be spatially viable.

  5. Linear gate with prescaled window

    Energy Technology Data Exchange (ETDEWEB)

    Koch, J; Bissem, H H; Krause, H; Scobel, W [Hamburg Univ. (Germany, F.R.). 1. Inst. fuer Experimentalphysik

    1978-07-15

    An electronic circuit is described that combines the features of a linear gate, a single-channel analyzer and a prescaler. It allows selection of a pulse-height region between two adjustable thresholds and scales the intensity of the spectrum within this window down by a factor 2^N (0 ≤ N ≤ 9), whereas the complementary part of the spectrum is transmitted without being affected.
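    The circuit's logic is easy to state in software terms: pulses whose height falls inside the window are counted and only every 2^N-th one is passed, while pulses outside the window pass unchanged. A hedged behavioural sketch (a toy event-stream model, not a circuit simulation):

    ```python
    def prescaled_gate(pulse_heights, lower, upper, n):
        """Pass out-of-window pulses unchanged; inside the window,
        pass only every 2**n-th pulse (prescaling by 2**n, 0 <= n <= 9)."""
        out = []
        in_window_count = 0
        for h in pulse_heights:
            if lower <= h <= upper:
                in_window_count += 1
                if in_window_count % (2 ** n) == 0:
                    out.append(h)
            else:
                out.append(h)
        return out

    # An intense peak between 40 and 60 is scaled down by 2**2 = 4:
    spectrum = [10, 50, 50, 50, 50, 90, 50, 50, 50, 50]
    thinned = prescaled_gate(spectrum, 40, 60, 2)
    ```

    This is useful when one spectral region would otherwise dominate the counting rate: the dominant window is thinned by a known factor and can be rescaled in analysis.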

  6. CLIC e+e- Linear Collider Studies

    CERN Document Server

    Dannheim, Dominik; Linssen, Lucie; Schulte, Daniel; Simon, Frank; Stapnes, Steinar; Toge, Nobukazu; Weerts, Harry; Wells, James

    2012-01-01

    This document provides input from the CLIC e+e- linear collider studies to the update process of the European Strategy for Particle Physics. It is submitted on behalf of the CLIC/CTF3 collaboration and the CLIC physics and detector study. It describes the exploration of fundamental questions in particle physics at the energy frontier with a future TeV-scale e+e- linear collider based on the Compact Linear Collider (CLIC) two-beam acceleration technique. A high-luminosity high-energy e+e- collider allows for the exploration of Standard Model physics, such as precise measurements of the Higgs, top and gauge sectors, as well as for a multitude of searches for New Physics, either through direct discovery or indirectly, via high-precision observables. Given the current state of knowledge, following the observation of a ~125 GeV Higgs-like particle at the LHC, and pending further LHC results at 8 TeV and 14 TeV, a linear e+e- collider built and operated in centre-of-mass energy stages from a few-hundred GeV up t...

  7. Past and future changes in streamflow in the U.S. Midwest: Bridging across time scales

    Science.gov (United States)

    Villarini, G.; Slater, L. J.; Salvi, K. A.

    2017-12-01

    Streamflows have increased notably across the U.S. Midwest over the past century, principally due to changes in precipitation and land use / land cover. Improving our understanding of the physical drivers that are responsible for the observed changes in discharge may enhance our capability of predicting and projecting these changes, and may have large implications for water resources management over this area. This study will highlight our efforts towards the statistical attribution of changes in discharge across the U.S. Midwest, with analyses performed at the seasonal scale from low to high flows. The main drivers of changing streamflows that we focus on are: urbanization, agricultural land cover, basin-averaged temperature, basin-averaged precipitation, and antecedent soil moisture. Building on the insights from this attribution, we will examine the potential predictability of streamflow across different time scales, with lead times ranging from seasonal to decadal, and discuss a potential path forward for engineering design for future conditions.

  8. Dynamical symmetries of semi-linear Schrodinger and diffusion equations

    International Nuclear Information System (INIS)

    Stoimenov, Stoimen; Henkel, Malte

    2005-01-01

    Conditional and Lie symmetries of semi-linear 1D Schrodinger and diffusion equations are studied when the mass (or the diffusion constant) is considered as an additional variable. In this way, dynamical symmetries of semi-linear Schrodinger equations become related to the parabolic and almost-parabolic subalgebras of a three-dimensional conformal Lie algebra (conf_3)_C. We consider non-hermitian representations and also include a dimensionful coupling constant of the non-linearity. The corresponding representations of the parabolic and almost-parabolic subalgebras of (conf_3)_C are classified and the complete list of conditionally invariant semi-linear Schrodinger equations is obtained. Possible applications to the dynamical scaling behaviour of phase-ordering kinetics are discussed

  9. Impact of thermoelectric phenomena on phase-change memory performance metrics and scaling

    International Nuclear Information System (INIS)

    Lee, Jaeho; Asheghi, Mehdi; Goodson, Kenneth E

    2012-01-01

    The coupled transport of heat and electrical current, or thermoelectric phenomena, can strongly influence the temperature distribution and figures of merit for phase-change memory (PCM). This paper simulates PCM devices with careful attention to thermoelectric transport and the resulting impact on programming current during the reset operation. The electrothermal simulations consider Thomson heating within the phase-change material and Peltier heating at the electrode interface. Using representative values for the Thomson and Seebeck coefficients extracted from our past measurements of these properties, we predict a cell temperature increase of 44% and a decrease in the programming current of 16%. Scaling arguments indicate that the impact of thermoelectric phenomena becomes greater with smaller dimensions due to enhanced thermal confinement. This work estimates the scaling of this reduction in programming current as electrode contact areas are reduced down to 10 nm × 10 nm. Precise understanding of thermoelectric phenomena and their impact on device performance is a critical part of PCM design strategies. (paper)

  10. Effects of two-scale transverse crack systems on the non-linear behaviour of a 2D SiC-SiC composite

    Energy Technology Data Exchange (ETDEWEB)

    Morvan, J.-M.; Baste, S. [Bordeaux-1 Univ., 33 - Talence (France). Lab. de Mecanique Physique

    1998-07-31

    By using both an ultrasonic device and an extensometer, it is possible to know which stiffness coefficients change during the damage process of a material and which part of the global strain is elastic or inelastic. The influence of the two damage mechanisms is described for a woven 2D SiC-SiC composite. It appears that the two scales of this composite have a great influence on its behaviour. Two elementary mechanisms occur at the two scales of the material: at the mesostructure level, consisting of the bundles as well as the inter-bundle matrix, and at the microstructure level, made up of the fibres and the intra-bundle matrix. The inelastic strains are sensitive to this two-scale effect: an increment of strain at constant stress that comes to saturation, corresponding to the inter-bundle damage process, and a strain which needs an increase in stress as cracking occurs at the fibre scale. With the help of a model that predicts the compliance changes caused by a crack system in a solid, it is possible to predict the crack density variation at both scales, as well as the geometry of the various crack systems, during monotonic loading. Furthermore, when the crack opening is taken into account, it appears that the inelastic strain is governed by the transverse crack density. (orig.) 12 refs.

  11. Development of the Systems Thinking Scale for Adolescent Behavior Change.

    Science.gov (United States)

    Moore, Shirley M; Komton, Vilailert; Adegbite-Adeniyi, Clara; Dolansky, Mary A; Hardin, Heather K; Borawski, Elaine A

    2018-03-01

    This report describes the development and psychometric testing of the Systems Thinking Scale for Adolescent Behavior Change (STS-AB). Following item development, initial assessments of understandability and stability of the STS-AB were conducted in a sample of nine adolescents enrolled in a weight management program. Exploratory factor analysis of the 16-item STS-AB and internal consistency assessments were then done with 359 adolescents enrolled in a weight management program. Test-retest reliability of the STS-AB was .71, p = .03; internal consistency reliability was .87. Factor analysis of the 16-item STS-AB indicated a one-factor solution with good factor loadings, ranging from .40 to .67. Evidence of construct validity was supported by significant correlations with established measures of variables associated with health behavior change. We provide beginning evidence of the reliability and validity of the STS-AB to measure systems thinking for health behavior change in young adolescents.
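    The internal-consistency figure reported here (.87) is a Cronbach's alpha, which is straightforward to compute from an items-by-respondents score table. A hedged sketch with fabricated toy scores (not STS-AB data):

    ```python
    import statistics

    def cronbach_alpha(items):
        """Cronbach's alpha; items is one list of scores per scale item,
        aligned by respondent across the lists."""
        k = len(items)
        totals = [sum(scores) for scores in zip(*items)]   # per-respondent total
        item_var = sum(statistics.pvariance(it) for it in items)
        return (k / (k - 1)) * (1 - item_var / statistics.pvariance(totals))

    # Three respondents answering two strongly correlated items:
    alpha = cronbach_alpha([[1, 2, 3], [2, 4, 6]])   # 8/9, i.e. about 0.89
    ```

    Alpha rises as items co-vary relative to their individual variances, which is why dropping rarely endorsed items (as done here with 3 of the original 16) can sharpen a scale.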

  12. Do Quercus ilex woodlands undergo abrupt non-linear functional changes in response to human disturbance along a climatic gradient?

    Science.gov (United States)

    Bochet, Esther; García-Fayos, Patricio; José Molina, Maria; Moreno de las Heras, Mariano; Espigares, Tíscar; Nicolau, Jose Manuel; Monleon, Vicente

    2017-04-01

    Theoretical models predict that drylands are particularly prone to suffer critical transitions with abrupt non-linear changes in their structure and functions as a result of the existing complex interactions between climatic fluctuations and human disturbances. However, so far, few studies provide empirical data to validate these models. We aim at determining how holm oak (Quercus ilex) woodlands undergo changes in their functions in response to human disturbance along an aridity gradient (from semi-arid to sub-humid conditions), in eastern Spain. For that purpose, we used (a) remote-sensing estimations of precipitation-use-efficiency (PUE) from enhanced vegetation index (EVI) observations performed in 231x231 m plots of the Moderate Resolution Imaging Spectroradiometer (MODIS); (b) biological and chemical soil parameter determinations (extracellular soil enzyme activity, soil respiration, nutrient cycling processes) from soil sampled in the same plots; (c) vegetation parameter determinations (ratio of functional groups) from vegetation surveys performed in the same plots. We analyzed and compared the shape of the functional change (in terms of PUE and soil and vegetation parameters) in response to human disturbance intensity for our holm oak sites along the aridity gradient. Overall, our results evidenced important differences in the shape of the functional change in response to human disturbance between climatic conditions. Semi-arid areas experienced a more accelerated non-linear decrease with an increasing disturbance intensity than sub-humid ones. The proportion of functional groups (herbaceous vs. woody cover) played a relevant role in the shape of the functional response of the holm oak sites to human disturbance.

  13. Scaling and scale invariance of conservation laws in Reynolds transport theorem framework

    Science.gov (United States)

    Haltas, Ismail; Ulusoy, Suleyman

    2015-07-01

    Scale invariance is the case where the solution of a physical process at a specified time-space scale can be linearly related to the solution of the process at another time-space scale. Recent studies investigated the scale-invariance conditions of hydrodynamic processes by applying one-parameter Lie scaling transformations to the governing equations of the processes. Scale invariance of a physical process is usually achieved under certain conditions on the scaling ratios of the variables and parameters involved in the process. The foundational axioms of hydrodynamics are the conservation laws, namely conservation of mass, conservation of linear momentum, and conservation of energy from continuum mechanics. They are formulated using the Reynolds transport theorem. Conventionally, the Reynolds transport theorem formulates the conservation equations in integral form, yet the differential form of the conservation equations can also be derived for an infinitesimal control volume. In the formulation of the governing equation of a process, one or more of the conservation laws and, sometimes, a constitutive relation are combined together. Differential forms of the conservation equations are used in the governing partial differential equations of the processes; therefore, the differential conservation equations constitute the fundamentals of the governing equations of hydrodynamic processes. Applying the one-parameter Lie scaling transformation to the conservation laws in the Reynolds transport theorem framework, instead of to the governing partial differential equations, may lead to more fundamental conclusions on the scaling and scale invariance of hydrodynamic processes. This study investigates the scaling behaviour and scale-invariance conditions of hydrodynamic processes by applying the one-parameter Lie scaling transformation to the conservation laws in the Reynolds transport theorem framework.
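    As a hedged illustration of what a one-parameter Lie scaling looks like when applied to one of these conservation laws, take mass conservation in differential form and scale each variable by a power of the group parameter λ; the exponents α and β here are free parameters chosen for the sketch, not values from the study:

    ```latex
    % Continuity equation and a one-parameter Lie scaling of its variables
    \frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_i)}{\partial x_i} = 0,
    \qquad
    \tilde{x}_i = \lambda x_i, \quad
    \tilde{t} = \lambda^{\beta} t, \quad
    \tilde{\rho} = \lambda^{\alpha} \rho, \quad
    \tilde{u}_i = \lambda^{1-\beta} u_i ,
    % so that, substituting the scaled variables,
    \frac{\partial \tilde{\rho}}{\partial \tilde{t}}
    + \frac{\partial (\tilde{\rho}\,\tilde{u}_i)}{\partial \tilde{x}_i}
    = \lambda^{\alpha-\beta}
    \left[ \frac{\partial \rho}{\partial t}
    + \frac{\partial (\rho u_i)}{\partial x_i} \right] = 0 .
    ```

    Continuity alone is therefore invariant for any choice of exponents; the momentum and energy equations impose the additional conditions on the scaling ratios that the abstract refers to.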

  14. Safety Effect Analysis of the Large-Scale Design Changes in a Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun-Chan; Lee, Hyun-Gyo [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2015-05-15

    These activities were predominantly focused on replacing obsolete systems with new systems; these efforts were intended not only to prolong the plant life, but also to guarantee the safe operation of the units. This review demonstrates the safety effect evaluation, using the probabilistic safety assessment (PSA), of the design changes, system improvements, and Fukushima accident action items for Kori unit 1 (K1). For the large-scale system design changes at K1, the safety effects from the PSA perspective were reviewed using the risk quantification results before and after the system improvements. The evaluation considered seven significant design changes, including the replacement of the control building air conditioning system and the performance improvement of the containment sump using a new filtering system, as well as five other system design changes. The analysis results demonstrated that the CDF was reduced by 12% overall, from 1.62E-5/y to 1.43E-5/y. The CDF reduction was larger in the transient group than in the loss of coolant accident (LOCA) group. In conclusion, the analysis using the K1 PSA model supports that plant safety has been appropriately maintained after the large-scale design changes, in consideration of the changed operation factors and failure modes due to the system improvements.
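    The quoted 12% figure follows directly from the two core damage frequency (CDF) values; a one-line check:

    ```python
    cdf_before = 1.62e-5   # core damage frequency per year, before design changes
    cdf_after = 1.43e-5    # per year, after the seven design changes
    reduction = (cdf_before - cdf_after) / cdf_before   # ~0.117, i.e. about 12%
    ```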

  15. Nanometer-scale temperature measurements of phase change memory and carbon nanomaterials

    Science.gov (United States)

    Grosse, Kyle Lane

    This work investigates nanometer-scale thermometry and thermal transport in new electronic devices to mitigate future electronic energy consumption. Nanometer-scale thermal transport is integral to electronic energy consumption and limits current electronic performance. New electronic devices are required to improve future electronic performance and energy consumption, but heat generation is not well understood in these new technologies. Thermal transport deviates significantly at the nanometer-scale from macroscopic systems as low dimensional materials, grain structure, interfaces, and thermoelectric effects can dominate electronic performance. This work develops and implements an atomic force microscopy (AFM) based nanometer-scale thermometry technique, known as scanning Joule expansion microscopy (SJEM), to measure nanometer-scale heat generation in new graphene and phase change memory (PCM) devices, which have potential to improve performance and energy consumption of future electronics. Nanometer-scale thermometry of chemical vapor deposition (CVD) grown graphene measured the heat generation at graphene wrinkles and grain boundaries (GBs). Graphene is an atomically-thin, two dimensional (2D) carbon material with promising applications in new electronic devices. Comparing measurements and predictions of CVD graphene heating predicted the resistivity, voltage drop, and temperature rise across the one dimensional (1D) GB defects. This work measured the nanometer-scale temperature rise of thin film Ge2Sb2Te5 (GST) based PCM due to Joule, thermoelectric, interface, and grain structure effects. PCM has potential to reduce energy consumption and improve performance of future electronic memory. A new nanometer-scale thermometry technique is developed for independent and direct observation of Joule and thermoelectric effects at the nanometer-scale, and the technique is demonstrated by SJEM measurements of GST devices. Uniform heating and GST properties are observed for...

  16. Physics with e+e- Linear Colliders

    International Nuclear Information System (INIS)

    Barklow, Timothy L

    2003-01-01

    We describe the physics potential of e+e- linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model, the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and of the matter particles. High-precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e+e- linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics programme complementary to hadron machines

  17. Dual-range linearized transimpedance amplifier system

    Science.gov (United States)

    Wessendorf, Kurt O.

    2010-11-02

    A transimpedance amplifier system is disclosed which simultaneously generates a low-gain output signal and a high-gain output signal from an input current signal, using a single transimpedance amplifier with two different feedback loops whose different amplification factors generate two different output voltage signals. One of the feedback loops includes a resistor, and the other feedback loop includes another resistor in series with one or more diodes. The transimpedance amplifier system includes a signal linearizer to linearize one or both of the low- and high-gain output signals by scaling and adding the two output voltage signals from the transimpedance amplifier. The signal linearizer can be formed either as an analog device using one or two summing amplifiers, or alternatively as a digital device using two analog-to-digital converters and a digital signal processor (e.g. a microprocessor or a computer).

  18. Assimilating Non-linear Effects of Customized Large-Scale Climate Predictors on Downscaled Precipitation over the Tropical Andes

    Science.gov (United States)

    Molina, J. M.; Zaitchik, B. F.

    2016-12-01

    Recent findings considering high CO2 emission scenarios (RCP8.5) suggest that the tropical Andes may experience massive warming and a significant precipitation increase (decrease) during the wet (dry) seasons by the end of the 21st century. Variations in rainfall-streamflow relationships and seasonal crop yields significantly affect human development in this region and make local communities highly vulnerable to climate change and variability. We developed an expert-informed empirical statistical downscaling (ESD) algorithm to explore and construct robust global climate predictors to perform skillful RCP8.5 projections of in-situ March-May (MAM) precipitation required for impact modeling and adaptation studies. We applied our framework to a topographically complex region of the Colombian Andes where a number of previous studies have reported El Niño-Southern Oscillation (ENSO) as the main driver of climate variability. Supervised machine learning algorithms were trained with customized and bias-corrected predictors from NCEP reanalysis, and a cross-validation approach was implemented to assess both predictive skill and model selection. We found weak, statistically non-significant teleconnections between precipitation and lagged seasonal surface temperatures over the Niño 3.4 domain, which suggests that ENSO fails to explain MAM rainfall variability in the study region. In contrast, sea level pressure (SLP) series over American Samoa, likely associated with the South Pacific Convergence Zone (SPCZ), explain more than 65% of the precipitation variance. The best prediction skill was obtained with Selected Generalized Additive Models (SGAM), given their ability to capture the linear and nonlinear relationships present in the data. While SPCZ-related series exhibited a positive linear effect on the rainfall response, SLP predictors in the north Atlantic and central equatorial Pacific showed nonlinear effects. A multimodel (MIROC, CanESM2 and CCSM) ensemble of ESD projections revealed
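
The cross-validated model selection described above can be illustrated with a toy stand-in: synthetic data with a quadratic predictor-response relationship, and k-fold cross-validation choosing between a linear and a nonlinear (quadratic) least-squares model. This sketches only the selection logic; the study's SGAM models and Andes data are not reproduced here:

```python
import random

random.seed(0)

# Synthetic stand-in: one "SLP index" predictor and a nonlinear
# precipitation-like response (purely illustrative values).
xs = [random.uniform(-2, 2) for _ in range(200)]
ys = [1.0 + 0.5 * x + 0.8 * x * x + random.gauss(0, 0.2) for x in xs]

def fit_poly(px, py, deg):
    """Least-squares polynomial fit via the normal equations."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in px) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(px, py)) for i in range(n)]
    for col in range(n):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def predict(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

def cv_mse(deg, k=5):
    """k-fold cross-validated mean squared error for one candidate model."""
    folds = [list(range(i, len(xs), k)) for i in range(k)]
    total = 0.0
    for fold in folds:
        hold = set(fold)
        coef = fit_poly([x for i, x in enumerate(xs) if i not in hold],
                        [y for i, y in enumerate(ys) if i not in hold], deg)
        total += sum((predict(coef, xs[i]) - ys[i]) ** 2 for i in fold)
    return total / len(xs)

scores = {deg: cv_mse(deg) for deg in (1, 2)}
best = min(scores, key=scores.get)   # the nonlinear model should win here
```

Held-out error, rather than in-sample fit, does the model selection, which is the same safeguard the abstract's cross-validation provides against overfitting the downscaling model.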

  19. Designing for Change: Interoperability in a scaling and adapting environment

    Science.gov (United States)

    Yarmey, L.

    2015-12-01

    The Earth Science cyberinfrastructure landscape is constantly changing. Technologies advance and technical implementations are refined or replaced. Data types, volumes, packaging, and use cases evolve. Scientific requirements emerge and mature. Standards shift while systems scale and adapt. In this complex and dynamic environment, interoperability remains a critical component of successful cyberinfrastructure. Through the resource- and priority-driven iterations on systems, interfaces, and content, questions fundamental to stable and useful Earth Science cyberinfrastructure arise. For instance, how are sociotechnical changes planned, tracked, and communicated? How should operational stability balance against 'new and shiny'? How can ongoing maintenance and mitigation of technical debt be managed in an often short-term resource environment? The Arctic Data Explorer is a metadata brokering application developed to enable discovery of international, interdisciplinary Arctic data across distributed repositories. Completely dependent on interoperable third-party systems, the Arctic Data Explorer publicly launched in 2013 with an initial 3,000+ data records from four Arctic repositories. Since then the search has scaled to 25,000+ data records from thirteen repositories at the time of writing. In the final months of the original project funding, priorities shift to lean operations with a strategic eye on the future. Here we present lessons learned from four years of Arctic Data Explorer design, development, communication, and maintenance work, along with remaining questions and potential directions.

  20. Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale

    Science.gov (United States)

    Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru

    2013-04-01

    Basin-scale climate change impact studies mainly rely on general circulation models (GCMs) forced by the relevant emission scenarios. Realistic and reliable GCM data are crucial for national-scale or basin-scale impact and vulnerability assessments aimed at building a safe society under climate change. However, GCMs fail to simulate regional climate features due to imprecise parameterization schemes in atmospheric physics and their coarse resolution. This study describes how to exclude unsatisfactory GCMs with respect to the basin of interest, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection is based on the regional climate features of seasonal evolution as a benchmark and depends mainly on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis (JRA-25) are used as references for evaluating the spatial patterns and errors of the GCMs. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: low-intensity drizzle with no dry days, underestimation of heavy rainfall, and misrepresentation of the inter-annual variability of the local climate. Heavy-rainfall biases are corrected by fitting a generalized Pareto distribution (GPD) to a peak-over-threshold series. Rain-day frequency errors are fixed by rank-order statistics, and the seasonal variation problem is solved by fitting a gamma distribution in each month to in-situ stations versus the corresponding GCM grids. By applying the proposed bias-correction technique to all in-situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished. The applicability of the proposed method has been examined in several basins in various climate
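
The backbone shared by such bias-correction schemes is quantile mapping: replace each GCM value with the observed value at the same empirical quantile. The sketch below uses synthetic data and plain empirical quantiles; the abstract's method additionally fits a GPD to the heavy-rain tail and monthly gamma distributions, both omitted here:

```python
import bisect
import random

random.seed(1)

# Synthetic stand-ins: "observed" daily rainfall (with genuine dry days)
# and a GCM series showing the classic flaws -- persistent drizzle (no dry
# days) and damped variability. Values are illustrative, not from any model.
obs = [max(0.0, random.gauss(4.0, 6.0)) for _ in range(3000)]
gcm = [0.5 * max(0.0, random.gauss(4.0, 6.0)) + 1.0 for _ in range(3000)]

obs_sorted = sorted(obs)
gcm_sorted = sorted(gcm)

def quantile_map(value):
    """Map a GCM value to the observed value at the same empirical quantile."""
    q = bisect.bisect_left(gcm_sorted, value) / len(gcm_sorted)
    idx = min(int(q * len(obs_sorted)), len(obs_sorted) - 1)
    return obs_sorted[idx]

corrected = [quantile_map(v) for v in gcm]
```

After mapping, the drizzle floor of the raw GCM series becomes genuine dry days, and the corrected climatology matches the observed one by construction.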

  1. Quantifying the astronomical contribution to Pleistocene climate change: A non-linear, statistical approach

    Science.gov (United States)

    Crucifix, Michel; Wilkinson, Richard; Carson, Jake; Preston, Simon; Alemeida, Carlos; Rougier, Jonathan

    2013-04-01

    The existence of an action of astronomical forcing on the Pleistocene climate is almost undisputed. However, quantifying this action is not straightforward. In particular, the phenomenon of deglaciation is generally interpreted as a manifestation of instability, which is typical of non-linear systems. As a consequence, explaining the Pleistocene climate record as the addition of an astronomical contribution and noise, as is often done using harmonic analysis tools, is potentially deceptive. Rather, we advocate a methodology in which non-linear stochastic dynamical systems are calibrated on the Pleistocene climate record. The exercise, though, requires careful statistical reasoning and state-of-the-art techniques. In fact, the problem has been judged to be mathematically 'intractable and unsolved' and some pragmatism is justified. In order to illustrate the methodology we consider one dynamical system that potentially captures four dynamical features of the Pleistocene climate: the existence of a saddle-node bifurcation in at least one of its slow components, a time-scale separation between a slow and a fast component, the action of astronomical forcing, and the existence of a stochastic contribution to the system dynamics. This model is obviously not the only possible representation of Pleistocene dynamics, but it encapsulates well enough both our theoretical and empirical knowledge in a very simple form to constitute a valid starting point. The purpose of this poster is to outline the practical challenges in calibrating such a model on paleoclimate observations. Just as in time series analysis, there is no single, universal test or criterion that would demonstrate the validity of an approach. Several methods exist to calibrate the model, and judgement develops by confronting the results of the different methods. In particular, we consider here Kalman filter variants, particle Markov chain Monte Carlo, and two other variants of Sequential Monte
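
A minimal bootstrap particle filter, one of the sequential Monte Carlo building blocks mentioned above, can be sketched on a toy stochastic system with periodic forcing. The model and every parameter below are invented for illustration, not a calibrated Pleistocene model:

```python
import math
import random

random.seed(2)

# Toy stochastic "slow component" with periodic forcing.
T, N = 100, 500              # time steps, particles
PROC_SD, OBS_SD = 0.3, 0.5   # process and observation noise

def forcing(t):              # hypothetical periodic forcing term
    return 0.5 * math.sin(2 * math.pi * t / 40)

# Simulate a "true" trajectory and noisy observations of it.
truth, x = [], 0.0
for t in range(T):
    x = 0.9 * x + forcing(t) + random.gauss(0, PROC_SD)
    truth.append(x)
obs = [xt + random.gauss(0, OBS_SD) for xt in truth]

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = [0.0] * N
estimates = []
for t in range(T):
    particles = [0.9 * p + forcing(t) + random.gauss(0, PROC_SD) for p in particles]
    weights = [math.exp(-0.5 * ((obs[t] - p) / OBS_SD) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    particles = random.choices(particles, weights=weights, k=N)  # resample

rmse = math.sqrt(sum((e - xt) ** 2 for e, xt in zip(estimates, truth)) / T)
```

Because the filter exploits the dynamical model, its state estimate should be noticeably better than the raw observations; in a calibration setting the same machinery also supplies the likelihood of candidate parameters.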

  2. Development of non-linear vibration analysis code for CANDU fuelling machine

    International Nuclear Information System (INIS)

    Murakami, Hajime; Hirai, Takeshi; Horikoshi, Kiyomi; Mizukoshi, Kaoru; Takenaka, Yasuo; Suzuki, Norio.

    1988-01-01

    This paper describes the development of a non-linear, dynamic analysis code for the CANDU 600 fuelling machine (F-M), which includes a number of non-linearities such as gaps with or without Coulomb friction, special multi-linear spring connections, etc. The capabilities and features of the code and the mathematical treatment of the non-linearities are explained. The modeling and numerical methodology for the non-linearities employed in the code are verified experimentally. Finally, simulation analyses of the full-scale F-M vibration testing are carried out, and the applicability of the code to multi-degree-of-freedom systems such as the F-M is demonstrated. (author)

  3. Application of Satellite Solar-Induced Chlorophyll Fluorescence to Understanding Large-Scale Variations in Vegetation Phenology and Function Over Northern High Latitude Forests

    Science.gov (United States)

    Jeong, Su-Jong; Schimel, David; Frankenberg, Christian; Drewry, Darren T.; Fisher, Joshua B.; Verma, Manish; Berry, Joseph A.; Lee, Jung-Eun; Joiner, Joanna

    2016-01-01

    This study evaluates the large-scale seasonal phenology and physiology of vegetation over northern high latitude forests (40 deg - 55 deg N) during spring and fall by using remote sensing of solar-induced chlorophyll fluorescence (SIF), normalized difference vegetation index (NDVI) and observation-based estimates of gross primary productivity (GPP) from 2009 to 2011. Based on phenology estimated from GPP, the growing season determined from SIF time-series is shorter than the growing season determined solely from NDVI. This is mainly due to the period of high NDVI values extending about 46 days (+/-11 days) beyond that of SIF, indicating a large-scale seasonal decoupling of physiological activity and changes in greenness in the fall. In addition to phenological timing, mean seasonal NDVI and SIF have different responses to temperature changes throughout the growing season. We observed that both NDVI and SIF increased linearly with temperature throughout the spring. However, in the fall, although NDVI responded linearly to temperature increases, SIF and GPP did not increase linearly with temperature, implying a seasonal hysteresis of SIF and GPP in response to temperature changes across boreal ecosystems throughout their growing season. Seasonal hysteresis of vegetation at large scales is consistent with the known phenomenon that light limits boreal forest ecosystem productivity in the fall. Our results suggest that continuing measurements from satellite remote sensing of both SIF and NDVI can help to understand the differences between, and information carried by, seasonal variations in vegetation structure and greenness and in physiology at large scales across the critical boreal regions.

  4. Linear Algebra and Smarandache Linear Algebra

    OpenAIRE

    Vasantha, Kandasamy

    2003-01-01

    The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, but also aims to address the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...

  5. Linear polarization of BY Draconis

    International Nuclear Information System (INIS)

    Koch, R.H.; Pfeiffer, R.J.

    1976-01-01

    Linear polarization measurements are reported in four bandpasses for the flare star BY Dra. The red polarization is intrinsically variable at a confidence level greater than 99 percent. On a time scale of many months, the variability is not phase-locked to either a rotational or a Keplerian ephemeris. The observations of the three other bandpasses are useful principally to indicate a polarization spectrum rising toward shorter wavelengths

  6. The non-linear, interactive effects of population density and climate drive the geographical patterns of waterfowl survival

    Science.gov (United States)

    Zhao, Qing; Boomer, G. Scott; Kendall, William L.

    2018-01-01

    On-going climate change has major impacts on ecological processes and patterns. Understanding the impacts of climate on the geographical patterns of survival can provide insights to how population dynamics respond to climate change and provide important information for the development of appropriate conservation strategies at regional scales. It is challenging to understand the impacts of climate on survival, however, due to the fact that the non-linear relationship between survival and climate can be modified by density-dependent processes. In this study we extended the Brownie model to partition hunting and non-hunting mortalities and linked non-hunting survival to covariates. We applied this model to four decades (1972–2014) of waterfowl band-recovery, breeding population survey, and precipitation and temperature data covering multiple ecological regions to examine the non-linear, interactive effects of population density and climate on waterfowl non-hunting survival at a regional scale. Our results showed that the non-linear effect of temperature on waterfowl non-hunting survival was modified by breeding population density. The concave relationship between non-hunting survival and temperature suggested that the effects of warming on waterfowl survival might be multifaceted. Furthermore, the relationship between non-hunting survival and temperature was stronger when population density was higher, suggesting that high-density populations may be less buffered against warming than low-density populations. Our study revealed distinct relationships between waterfowl non-hunting survival and climate across and within ecological regions, highlighting the importance of considering different conservation strategies according to region-specific population and climate conditions. Our findings and associated novel modelling approach have wide implications in conservation practice.

  7. Two-scale modelling for hydro-mechanical damage

    International Nuclear Information System (INIS)

    Frey, J.; Chambon, R.; Dascalu, C.

    2010-01-01

    Document available in extended abstract form only. Excavation works for underground storage create a damage zone in the nearby rock and affect its hydraulic properties. This degradation, already observed in laboratory tests, can create a preferential path for fluids. The micro-fracture phenomena, which occur at a smaller scale and affect the rock permeability, must be fully understood to minimize the transfer process. Many methods can be used to take into account the microstructure of heterogeneous materials. Among them, one method has been developed recently. Instead of using a constitutive equation obtained from phenomenological considerations or from some homogenization technique, the representative elementary volume (R.E.V.) is modelled as a structure, and the links between a prescribed kinematics and the corresponding dual forces are deduced numerically. This yields the so-called Finite Element squared method (FE2). From a numerical point of view, a finite element model is used at the macroscopic level, and for each Gauss point, computations on the microstructure give the usual results of a constitutive law. This numerical approach is now classical for properly modelling materials such as composites; the efficiency of such a numerical homogenization process has been shown, and it allows numerical modelling of deformation processes associated with various micro-structural changes. The aim of this work is to describe, through such a method, rock damage with a two-scale hydro-mechanical model. Rock damage at the macroscopic scale is directly linked to an analysis of the microstructure. At the macroscopic scale a two-phase problem is studied: a solid skeleton is filled by an infiltrating fluid. It is necessary to enforce two balance equations and two mass conservation equations. A classical way to deal with such a problem is to work with the balance equation of the whole mixture, and the mass fluid conservation written in a weak form, the mass

  8. Core seismic behaviour: linear and non-linear models

    International Nuclear Information System (INIS)

    Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.

    1981-08-01

    The usual methodology for core seismic behaviour analysis leads to a double, complementary approach: to define a core model to be included in the reactor-block seismic response analysis, simple enough but representative of the basic movements (diagrid or slab); and to define a finer core model, with basic data issued from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two different ones: DLM 1 with independent movements of the fuel and radial blanket subassemblies, and DLM 2 with a combined core movement. The non-linear model (NLM) ''CORALIE'' uses the same basic modelization (finite element beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of one modal part of 2% and one part due to shocks. Finally, ''CORALIE'' yields the time-history of the displacements and efforts on the supports, but damping (probably greater than 2%) and fluid-structure interaction still need to be specified more precisely. The validation experiments were performed on a RAPSODIE core mock-up at scale 1, in 1/3 similitude with respect to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis and a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and efforts as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum corresponding to a 0.1 g group acceleration, has been carried out with these models; some aspects of these calculations are presented here

  9. Age-related changes in the plasticity and toughness of human cortical bone at multiple length-scales

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, Elizabeth A.; Schaible, Eric; Bale, Hrishikesh; Barth, Holly D.; Tang, Simon Y.; Reichert, Peter; Busse, Bjoern; Alliston, Tamara; Ager III, Joel W.; Ritchie, Robert O.

    2011-08-10

    The structure of human cortical bone evolves over multiple length-scales from its basic constituents of collagen and hydroxyapatite at the nanoscale to osteonal structures at near-millimeter dimensions, which all provide the basis for its mechanical properties. To resist fracture, bone’s toughness is derived intrinsically through plasticity (e.g., fibrillar sliding) at structural scales typically below a micron and extrinsically (i.e., during crack growth) through mechanisms (e.g., crack deflection/bridging) generated at larger structural scales. Biological factors such as aging lead to a markedly increased fracture risk, which is often associated with an age-related loss in bone mass (bone quantity). However, we find that age-related structural changes can significantly degrade the fracture resistance (bone quality) over multiple length-scales. Using in situ small-/wide-angle x-ray scattering/diffraction to characterize sub-micron structural changes, and synchrotron x-ray computed tomography and in situ fracture-toughness measurements in the scanning electron microscope to characterize effects at micron-scales, we show how these age-related structural changes at differing size-scales degrade both the intrinsic and extrinsic toughness of bone. Specifically, we attribute the loss in toughness to increased non-enzymatic collagen cross-linking, which suppresses plasticity at nanoscale dimensions, and to an increased osteonal density, which limits the potency of crack-bridging mechanisms at micron-scales. The link between these processes is that the increased stiffness of the cross-linked collagen requires energy to be absorbed by “plastic” deformation at higher structural levels, which occurs by the process of microcracking.

  10. Parallel beam dynamics simulation of linear accelerators

    International Nuclear Information System (INIS)

    Qiang, Ji; Ryne, Robert D.

    2002-01-01

    In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies

  11. Hierarchical linear modeling (HLM) of longitudinal brain structural and cognitive changes in alcohol-dependent individuals during sobriety

    DEFF Research Database (Denmark)

    Yeh, P.H.; Gazdzinski, S.; Durazzo, T.C.

    2007-01-01

    ... (MRI)-derived brain volume changes and cognitive changes in abstinent alcohol-dependent individuals as a function of smoking status, smoking severity, and drinking quantities. Methods: Twenty non-smoking recovering alcoholics (nsALC) and 30 age-matched smoking recovering alcoholics (sALC) underwent quantitative MRI ... time points. Using HLM, we modeled volumetric and cognitive outcome measures as a function of cigarette and alcohol use variables. Results: Different hierarchical linear models with unique model structures are presented and discussed. The results show that smaller brain volumes at baseline predict faster brain volume gains, which were also related to greater smoking and drinking severities. Over 7 months of abstinence from alcohol, sALC compared to nsALC showed less improvement in visuospatial learning and memory despite larger brain volume gains and ventricular shrinkage. Conclusions: Different ...

  12. Resistors Improve Ramp Linearity

    Science.gov (United States)

    Kleinberg, L. L.

    1982-01-01

    Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.

  13. A linearly-acting variable-reluctance generator for thermoacoustic engines

    International Nuclear Information System (INIS)

    Hail, Claudio U.; Knodel, Philip C.; Lang, Jeffrey H.; Brisson, John G.

    2015-01-01

    Highlights: • A new design for a linear alternator for thermoacoustic power converters is presented. • A theoretical and semi-empirical model of the generator is developed and validated. • The variable-reluctance generator’s performance is experimentally characterized. • Scaling to higher frequency suggests efficient operation with thermoacoustic engines. - Abstract: A crucial element in a thermoacoustic power converter for reliable small-scale power generation applications is an efficient acoustic-to-electric energy converter. In this work, an acoustic-to-electric transducer for application with a back-to-back standing wave thermoacoustic engine, based on a linearly-acting variable-reluctance generator is proposed, built and experimentally tested. Static and dynamic experiments are performed on one side of the generator on a shaker table at 60 Hz with 5 mm peak-to-peak displacement for performance characterization. A theoretical and empirical model of the variable-reluctance generator are presented and validated with experimental data. A frequency scaling based on the empirical model indicates that a maximum power output of 84 W at 78% generator efficiency is feasible at the thermoacoustic engine’s operating frequency of 250 Hz, not considering power electronic losses. This suggests that the linearly-acting variable-reluctance generator can efficiently convert high frequency small amplitude acoustic oscillations to useful electricity and thus enables its integration into a thermoacoustic power converter

  14. A characterization of scale invariant responses in enzymatic networks.

    Directory of Open Access Journals (Sweden)

    Maja Skataric

    Full Text Available A ubiquitous property of biological sensory systems is adaptation: a step increase in stimulus triggers an initial change in a biochemical or physiological response, followed by a more gradual relaxation toward a basal, pre-stimulus level. Adaptation helps maintain essential variables within acceptable bounds and allows organisms to readjust themselves to an optimum and non-saturating sensitivity range when faced with a prolonged change in their environment. Recently, it was shown theoretically and experimentally that many adapting systems, both at the organism and single-cell level, enjoy a remarkable additional feature: scale invariance, meaning that the initial, transient behavior remains (approximately) the same even when the background signal level is scaled. In this work, we set out to investigate under what conditions a broadly used model of biochemical enzymatic networks will exhibit scale-invariant behavior. An exhaustive computational study led us to discover a new property of surprising simplicity and generality, uniform linearizations with fast output (ULFO), whose validity we show is both necessary and sufficient for scale invariance of three-node enzymatic networks (and sufficient for any number of nodes). Based on this study, we go on to develop a mathematical explanation of how ULFO results in scale invariance. Our work provides a surprisingly consistent, simple, and general framework for understanding this phenomenon, and results in concrete experimental predictions.
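
A concrete toy instance of scale-invariant adaptation is a fold-change-detecting circuit in which the output responds to the ratio of the input to a slowly adapting internal variable. This generic two-variable model is our illustration, not one of the paper's ULFO networks:

```python
# x slowly tracks the input u; y responds to the ratio u/x, so rescaling
# the background level leaves the transient response unchanged.
def response(u0, fold, dt=0.002, steps=4000):
    x, y = u0, 1.0                # pre-stimulus steady state
    out = []
    for k in range(steps):
        u = u0 if k < steps // 4 else fold * u0   # step increase in input
        x += dt * (u - x)         # slow internal variable (forward Euler)
        y += dt * (u / x - y)     # output senses the fold change u/x
        out.append(y)
    return out

a = response(1.0, 2.0)   # background 1, step to 2
b = response(5.0, 2.0)   # background 5, step to 10: same fold change
mismatch = max(abs(p - q) for p, q in zip(a, b))
```

The output pulses above its basal level and then adapts back, and the two trajectories coincide because only the fold change u/x enters the output equation: exactly the scale invariance described above.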

  15. A review of model predictive control: moving from linear to nonlinear design methods

    International Nuclear Information System (INIS)

    Nandong, J.; Samyudia, Y.; Tade, M.O.

    2006-01-01

    Linear model predictive control (LMPC) has now been considered as an industrial control standard in process industry. Its extension to nonlinear cases however has not yet gained wide acceptance due to many reasons, e.g. excessively heavy computational load and effort, thus, preventing its practical implementation in real-time control. The application of nonlinear MPC (NMPC) is advantageous for processes with strong nonlinearity or when the operating points are frequently moved from one set point to another due to, for instance, changes in market demands. Much effort has been dedicated towards improving the computational efficiency of NMPC as well as its stability analysis. This paper provides a review on alternative ways of extending linear MPC to the nonlinear one. We also highlight the critical issues pertinent to the applications of NMPC and discuss possible solutions to address these issues. In addition, we outline the future research trend in the area of model predictive control by emphasizing on the potential applications of multi-scale process model within NMPC
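
The unconstrained core of linear MPC can be sketched for a scalar plant: at each step a finite-horizon quadratic cost is minimized (here via a backward Riccati recursion) and only the first control move is applied before the horizon recedes. Constraints, which drive much of the computational burden discussed above, are omitted, and all numbers are illustrative:

```python
# Receding-horizon LQ control of a scalar unstable plant x[k+1] = A*x[k] + B*u[k].
A, B = 1.2, 1.0        # open-loop unstable plant (illustrative values)
Q, RHO = 1.0, 0.1      # state and input weights in the quadratic cost
HORIZON = 20

def first_gain():
    """Backward Riccati recursion over the horizon; return the gain that
    produces the first (and only applied) control move."""
    p = Q
    k_gain = 0.0
    for _ in range(HORIZON):
        k_gain = A * B * p / (RHO + B * B * p)
        p = Q + A * p * (A - B * k_gain)
    return k_gain

x = 5.0
trajectory = [x]
for _ in range(30):                # receding-horizon loop
    u = -first_gain() * x          # re-solve, apply only the first move
    x = A * x + B * u
    trajectory.append(x)
```

For a time-invariant linear plant the recomputed gain is constant, but the re-solve-each-step structure is exactly what carries over to the constrained and nonlinear MPC variants the review discusses.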

  16. Finding Traps in Non-linear Spin Arrays

    OpenAIRE

    Wiesniak, Marcin; Markiewicz, Marcin

    2009-01-01

    Precise knowledge of the Hamiltonian of a system is a key to many of its applications. Tasks such as state transfer or quantum computation have been well studied for a linear chain, but hardly for systems that do not possess a linear structure. While this difference does not disturb the end-to-end dynamics of a single excitation, the evolution is significantly changed in other subspaces. Here we quantify the difference between a linear chain and a pseudo-chain, which have more than one spin ...

  17. Characterization of scale-free properties of human electrocorticography in awake and slow wave sleep states

    Directory of Open Access Journals (Sweden)

    John M Zempel

    2012-06-01

    Full Text Available Like many complex dynamic systems, the brain exhibits scale-free dynamics that follow power law scaling. Broadband power spectral density (PSD) of brain electrical activity exhibits state-dependent power law scaling with a log frequency exponent that varies across frequency ranges. Widely divergent naturally occurring neural states, awake and slow wave sleep (SWS) periods, were used to evaluate the nature of changes in scale-free indices. We demonstrate two analytic approaches to characterizing electrocorticographic (ECoG) data obtained during awake and SWS states. A data-driven approach was used, characterizing all available frequency ranges. Using an Equal Error State Discriminator (EESD), a single frequency range did not best characterize state across data from all six subjects, though the ability to distinguish awake and SWS states in individual subjects was excellent. Multisegment piecewise linear fits were used to characterize scale-free slopes across the entire frequency range (0.2-200 Hz). These scale-free slopes differed between awake and SWS states across subjects, particularly at frequencies below 10 Hz, and showed little difference at frequencies above 70 Hz. A Multivariate Maximum Likelihood Analysis (MMLA) method using the multisegment slope indices successfully categorized ECoG data in most subjects, though individual variation was seen. The ECoG spectrum is not well characterized by a single linear fit across a defined set of frequencies, but is best described by a set of discrete linear fits across the full range of available frequencies. With increasing computational tractability, the use of scale-free slope values to characterize EEG data will have practical value in clinical and research EEG studies.
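
Multisegment estimation of scale-free slopes can be sketched on a synthetic two-regime power-law spectrum; each segment's exponent is just the ordinary least-squares slope of log power against log frequency. The break at 10 Hz and the slope values are chosen arbitrarily for illustration, not taken from the study:

```python
import math

# Synthetic stand-in for a broadband PSD: slope -2 below 10 Hz,
# slope -0.5 above, sampled on a log-spaced 0.2-200 Hz grid.
freqs = [0.2 * 1.05 ** k for k in range(142)]

def psd(f):
    return f ** -2.0 if f < 10.0 else 1e-2 * (f / 10.0) ** -0.5

def loglog_slope(points):
    """Ordinary least-squares slope of log-power against log-frequency."""
    xs = [math.log(f) for f, _ in points]
    ys = [math.log(p) for _, p in points]
    n = len(points)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

data = [(f, psd(f)) for f in freqs]
slope_low = loglog_slope([(f, p) for f, p in data if f < 10.0])
slope_high = loglog_slope([(f, p) for f, p in data if f >= 10.0])
```

Fitting each segment separately recovers both exponents, whereas a single fit over the full band would return a meaningless compromise slope, which is the abstract's point about piecewise characterization.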

  18. Disappearing scales in carps: re-visiting Kirpichnikov's model on the genetics of scale pattern formation.

    Directory of Open Access Journals (Sweden)

    Laura Casas

    Full Text Available The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation.

  19. Disappearing scales in carps: Re-visiting Kirpichnikov's model on the genetics of scale pattern formation

    KAUST Repository

Casas, Laura; Szűcs, Réka; Vij, Shubha; Goh, Chin Heng; Kathiresan, Purushothaman; Németh, Sándor; Jeney, Zsigmond; Bercsényi, Miklós; Orbán, László

    2013-01-01

The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids was predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity remained unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene, called 'N', has not yet been identified. We re-visited the original model of Kirpichnikov, which proposed four major scale pattern types, and observed a high degree of variation within the so-called scattered phenotype, on account of which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or a hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) with the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes that were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation. © 2013 Casas et al.

  20. Disappearing scales in carps: Re-visiting Kirpichnikov's model on the genetics of scale pattern formation

    KAUST Repository

    Casas, Laura

    2013-12-30

The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids was predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity remained unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene, called 'N', has not yet been identified. We re-visited the original model of Kirpichnikov, which proposed four major scale pattern types, and observed a high degree of variation within the so-called scattered phenotype, on account of which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or a hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) with the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes that were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation. © 2013 Casas et al.

  1. The Stanford Linear Collider

    International Nuclear Information System (INIS)

    Emma, P.

    1995-01-01

The Stanford Linear Collider (SLC) is the first and only high-energy e⁺e⁻ linear collider in the world. Its most remarkable features are high-intensity, submicron-sized, polarized (e⁻) beams at a single interaction point. The main challenges posed by these unique characteristics include machine-wide emittance preservation, consistent high-intensity operation, polarized electron production and transport, and the achievement of a high degree of beam stability on all time scales. In addition to serving as an important machine for the study of Z⁰ boson production and decay using polarized beams, the SLC is also an indispensable source of hands-on experience for future linear colliders. Each new year of operation has been highlighted by a marked improvement in performance. The most significant improvements for the 1994-95 run include new low-impedance vacuum chambers for the damping rings, an upgrade to the optics and diagnostics of the final focus systems, and a higher degree of polarization from the electron source. As a result, the average luminosity has nearly doubled over the previous year, with peaks approaching 10³⁰ cm⁻² s⁻¹ and an 80% electron polarization at the interaction point. These developments as well as the remaining identifiable performance limitations will be discussed.

  2. Ecoregional-scale monitoring within conservation areas, in a rapidly changing climate

    Science.gov (United States)

    Beever, Erik A.; Woodward, Andrea

    2011-01-01

    Long-term monitoring of ecological systems can prove invaluable for resource management and conservation. Such monitoring can: (1) detect instances of long-term trend (either improvement or deterioration) in monitored resources, thus providing an early-warning indication of system change to resource managers; (2) inform management decisions and help assess the effects of management actions, as well as anthropogenic and natural disturbances; and (3) provide the grist for supplemental research on mechanisms of system dynamics and cause-effect relationships (Fancy et al., 2009). Such monitoring additionally provides a snapshot of the status of monitored resources during each sampling cycle, and helps assess whether legal standards and regulations are being met. Until the last 1-2 decades, tracking and understanding changes in condition of natural resources across broad spatial extents have been infrequently attempted. Several factors, however, are facilitating the achievement of such broad-scale investigation and monitoring. These include increasing awareness of the importance of landscape context, greater prevalence of regional and global environmental stressors, and the rise of landscape-scale programs designed to manage and monitor biological systems. Such programs include the US Forest Service's Forest Inventory and Analysis (FIA) Program (Moser et al., 2008), Canada's National Forest Inventory, the 3Q Programme for monitoring agricultural landscapes of Norway (Dramstad et al., 2002), and the emerging (US) Landscape Conservation Cooperatives (USDOI Secretarial Order 3289, 2009; Anonymous, 2011). This Special Section explores the underlying design considerations, as well as many pragmatic aspects associated with program implementation and interpretation of results from broad-scale monitoring systems, particularly within the constraints of high-latitude contexts (e.g., low road density, short field season, dramatic fluctuations in temperature). Although Alaska is

  3. Linear circuit theory matrices in computer applications

    CERN Document Server

    Vlach, Jiri

    2014-01-01

Basic Concepts; Nodal and Mesh Analysis; Matrix Methods; Dependent Sources; Network Transformations; Capacitors and Inductors; Networks with Capacitors and Inductors; Frequency Domain; Laplace Transformation; Time Domain; Network Functions; Active Networks; Two-Ports; Transformers; Modeling and Numerical Methods; Sensitivities; Modified Nodal Formulation; Fourier Series and Transformation; Appendix: Scaling of Linear Networks.
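As a flavour of the matrix methods this book covers, here is a minimal nodal-analysis sketch (circuit values invented for illustration, not taken from the book): a two-node resistive network reduces to solving the conductance system G·v = i.

```python
import numpy as np

# Two-node resistive circuit driven by a 1 A current source into node 1.
# R1 = 2 Ω from node 1 to ground, R2 = 4 Ω between nodes 1 and 2,
# R3 = 4 Ω from node 2 to ground.  Nodal analysis: G @ v = i.
G1, G2, G3 = 1 / 2, 1 / 4, 1 / 4         # conductances (siemens)
G = np.array([[G1 + G2, -G2],
              [-G2,      G2 + G3]])       # nodal conductance matrix
i = np.array([1.0, 0.0])                  # injected node currents (A)
v = np.linalg.solve(G, i)                 # node voltages -> [1.6, 0.8] V
```

Kirchhoff's current law can be verified by hand: 1.6 V × 0.5 S + (1.6 − 0.8) V × 0.25 S = 1 A at node 1.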

  4. The role of zonal flows in the saturation of multi-scale gyrokinetic turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Staebler, G. M.; Candy, J. [General Atomics, San Diego, California 92186 (United States); Howard, N. T. [Oak Ridge Institute for Science Education (ORISE), Oak Ridge, Tennessee 37831 (United States); Holland, C. [University of California San Diego, San Diego, California 92093 (United States)

    2016-06-15

The 2D spectrum of the saturated electric potential from gyrokinetic turbulence simulations that include both ion and electron scales (multi-scale) in axisymmetric tokamak geometry is analyzed. The paradigm that the turbulence is saturated when the zonal (axisymmetric) E×B flow shearing rate competes with linear growth is shown not to apply to the electron-scale turbulence. Instead, it is the mixing rate of the zonal E×B velocity spectrum with the turbulent distribution function that competes with linear growth. A model of this mechanism is shown to be able to capture the suppression of electron-scale turbulence by ion-scale turbulence and the threshold for the increase in electron-scale turbulence when the ion-scale turbulence is reduced. The model computes the strength of the zonal flow velocity and the saturated potential spectrum from the linear growth rate spectrum. The model for the saturated electric potential spectrum is applied to a quasilinear transport model and shown to accurately reproduce the electron and ion energy fluxes of the non-linear gyrokinetic multi-scale simulations. The zonal flow mixing saturation model is also shown to reproduce the non-linear upshift in the critical temperature gradient caused by zonal flows in ion-scale gyrokinetic simulations.

  5. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S′, with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S′ in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter.

  6. A Study of Joint Cost Inclusion in Linear Programming Optimization

    Directory of Open Access Journals (Sweden)

    P. Armaos

    2013-08-01

Full Text Available The concept of structural optimization has been a topic of research over the past century. Linear programming optimization has proved to be the most reliable method of structural optimization. Recent advances in linear programming optimization, driven by University of Sheffield researchers, include joint cost, self-weight and buckling considerations. Joint cost inclusion aims to reduce the number of joints in an optimized structural solution, transforming it into a practically viable solution. The topic of the current paper is to investigate the effects of joint cost inclusion as currently implemented in the optimization code. An extended literature review on this subject was conducted prior to familiarization with small-scale optimization software. Using IntelliFORM software, a structured series of problems was set up and analyzed. The joint cost tests examined benchmark problems and the consequent changes in member topology as the design domain expanded. The findings of the analyses were remarkable and are commented on further. The distinct topologies of solutions created by the optimization processes are also recognized. Finally, an alternative strategy of penalizing joints is presented.

  7. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300 TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70 TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters, and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons, and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  8. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  9. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, and their accuracy was assessed with the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when moving from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among elite-level performances.
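The accuracy metrics compared in this study can be sketched on invented numbers (not the study's data): mean absolute percentage error (MAPE) and mean absolute error (MAE) between observed 5 m start times and two hypothetical models' predictions.

```python
import numpy as np

# Invented start times (seconds) and predictions from two hypothetical models.
observed = np.array([1.90, 2.05, 1.98, 2.10])
pred_ann = np.array([1.91, 2.04, 1.99, 2.08])   # closer fit
pred_lin = np.array([1.95, 2.00, 2.05, 2.02])   # looser fit

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y - yhat) / y)) * 100)

def mae(y, yhat):
    """Mean absolute error, in the units of y."""
    return float(np.mean(np.abs(y - yhat)))
```

On these numbers the first model's MAPE (~0.6%) beats the second's (~3.1%), the same style of comparison reported in the abstract.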

  10. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
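The well-known forward direction of this equivalence (the paper proves the converse) can be sketched numerically: a Chebyshev (minimax) line fit becomes a linear program by adding a bound variable t. A minimal sketch using scipy.optimize.linprog on three invented points:

```python
import numpy as np
from scipy.optimize import linprog

# Minimax line fit: minimize  max_i |c0 + c1*x_i - y_i|.
# Classical LP reduction with bound variable t:
#   minimize t  subject to  A@c - t <= y  and  -A@c - t <= -y.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.0])
A = np.column_stack([np.ones_like(x), x])        # design matrix [1, x]
ones = np.ones((len(x), 1))
A_ub = np.vstack([np.hstack([A, -ones]),         #  (A@c - y) <= t
                  np.hstack([-A, -ones])])       # -(A@c - y) <= t
b_ub = np.concatenate([y, -y])
res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3)         # all variables free
c0, c1, t = res.x                                # best fit y = 0.5, error 0.5
```

For these points the equioscillation solution is the flat line y = 0.5 with minimax error 0.5, which the LP recovers.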

  11. Mechanical aspects of allotropic phase change at the mesoscopic scale

    International Nuclear Information System (INIS)

    Valance, St.

    2007-12-01

The prediction of the mechanical state of steel structures subjected to thermo-mechanical loading must take into account the consequences of allotropic phase change. Indeed, phase change induces, at least for steels, a mechanism of TRansformation Induced Plasticity (TRIP) leading to irreversible deformation even for loads below the elastic yield limit. Homogenized analytical models generally fail to achieve a correct prediction for complex loadings. In order to overcome these difficulties, we present a model achieving a sharper description of the phenomenon. The mesoscopic working scale we adopt here is that of the grain size. Hence, we consider that the behaviour of each phase is homogeneous in the sense of continuum mechanics, whereas the transformation front is explicitly described. We work both experimentally and numerically. Experimentally, we designed a test facility enabling thermo-mechanical loading of the sample under partial vacuum. Imaging of the sample surface during martensitic transformation leads, under some hypotheses and thanks to Digital Image Correlation, to a partial identification of the area affected by the transformation. Numerically, the eXtended Finite Element Method is applied to weakly discontinuous displacement fields. Use of this method requires numerically tracking the transformation front, which is the support of the discontinuity. To that end, based on the level set method, we develop an FEM numerical scheme enabling recognition and propagation of the discontinuity support. Finally, this work is completed by an approach to the driving forces, introduced through Eshelbian mechanics, which are dual to the front velocity. (author)

  12. Characterizing Hypervelocity Impact (HVI)-Induced Pitting Damage Using Active Guided Ultrasonic Waves: From Linear to Nonlinear

    Directory of Open Access Journals (Sweden)

    Menglong Liu

    2017-05-01

Full Text Available Hypervelocity impact (HVI), ubiquitous in low Earth orbit with impacting velocities in excess of 1 km/s, poses an immense threat to the safety of orbiting spacecraft. Upon penetration of the outer shielding layer of a typical two-layer shielding system, the shattered projectile, together with the jetted materials of the outer shielding material, subsequently impinges on the inner shielding layer, to which pitting damage is introduced. The pitting damage includes numerous craters and cracks disorderedly scattered over a wide region. Targeting the quantitative evaluation of this sort of damage (multitudinous damage within a single inspection region), a characterization strategy associating linear with nonlinear features of guided ultrasonic waves is developed. Linear-wise, changes in the signal features in the time domain (e.g., time-of-flight and energy dissipation) are extracted for detecting gross damage whose characteristic dimensions are comparable to the wavelength of the probing wave; nonlinear-wise, changes in the signal features in the frequency domain (e.g., second harmonic generation), which are proven to be more sensitive than their linear counterparts to small-scale damage, are explored to characterize HVI-induced pitting damage scattered in the inner layer. A numerical simulation, supplemented with experimental validation, quantitatively reveals the accumulation of nonlinearity of the guided waves when the waves traverse the pitting damage, based on which linear and nonlinear damage indices are proposed. A path-based rapid imaging algorithm, in conjunction with the use of the developed linear and nonlinear indices, is developed, whereby the HVI-induced pitting damage is characterized in images in terms of the probability of occurrence.

  13. A Novel Final Focus Design for Future Linear Colliders

    Energy Technology Data Exchange (ETDEWEB)

    Seryi, Andrei

    2000-05-30

    The length, complexity and cost of the present Final Focus designs for linear colliders grows very quickly with the beam energy. In this letter, a novel final focus system is presented and compared with the one proposed for NLC. This new design is simpler, shorter and cheaper, with comparable bandwidth, tolerances and tunability. Moreover, the length scales slower than linearly with energy allowing for a more flexible design which is applicable over a much larger energy range.

  14. Association between changes on the Negative Symptom Assessment scale (NSA-16) and measures of functional outcome in schizophrenia.

    Science.gov (United States)

    Velligan, Dawn I; Alphs, Larry; Lancaster, Scott; Morlock, Robert; Mintz, Jim

    2009-09-30

    We examined whether changes in negative symptoms, as measured by scores on the 16-item Negative Symptom Assessment scale (NSA-16), were associated with changes in functional outcome. A group of 125 stable outpatients with schizophrenia were assessed at baseline and at 6 months using the NSA-16, the Brief Psychiatric Rating Scale, and multiple measures of functional outcome. Baseline adjusted regression coefficients indicated moderate correlations between negative symptoms and functional outcomes when baseline values of both variables were controlled. Results were nearly identical when we controlled for positive symptoms. Cross-lag panel correlations and Structural Equation Modeling were used to examine whether changes in negative symptoms drove changes in functional outcomes over time. Results indicated that negative symptoms drove the changes in the Social and Occupational Functioning Scale (SOFAS) rather than the reverse. Measures of Quality of Life and measures of negative symptoms may be assessing overlapping constructs or changes in both may be driven by a third variable. Negative symptoms were unrelated over time to scores on a performance-based measure of functional capacity. This study indicates that the relationship between negative symptom change and the change in functional outcomes is complex, and points to potential issues in selection of assessments.

  15. Linear perspective and framing in the vista paradox

    DEFF Research Database (Denmark)

    Costa, Marco; Bonetti, Leonardo

    2017-01-01

The vista paradox is the illusion in which an object seen through a frame appears to shrink in apparent size as the observer approaches the frame. In four studies, we tested the effect of framing and fixating on the target object. The first two studies assessed the vista paradox in a large-scale … inserted within five frames differing in size. In the fourth study linear perspective was added to the images. The results showed that both frame size and linear perspective cues were critical factors for the vista paradox illusion.

  16. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
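The ordinal criterion underlying OCLO, Kendall's τ between predictions and observations, can be illustrated with a hand-rolled τ on invented data (a sketch of the fit measure only, not the published OCLO algorithm):

```python
import numpy as np

def kendall_tau(a, b):
    # Kendall's tau-a for small untied samples: (concordant - discordant)
    # pairs divided by the total number of pairs.
    n = len(a)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
    return s / (n * (n - 1) / 2)

# Invented data whose ordering an ordinary least-squares line preserves.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.0])
slope, intercept = np.polyfit(x, y, 1)
pred = slope * x + intercept
tau = kendall_tau(pred, y)   # 1.0: predictions reproduce the ranks exactly
```

OCLO's twist, per the abstract, is to maximize the least-squares fit *conditional* on this τ being maximal, which makes the estimate robust to a few extreme scores.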

  17. The causality analysis of climate change and large-scale human crisis.

    Science.gov (United States)

    Zhang, David D; Lee, Harry F; Wang, Cong; Li, Baosheng; Pei, Qing; Zhang, Jane; An, Yulun

    2011-10-18

    Recent studies have shown strong temporal correlations between past climate changes and societal crises. However, the specific causal mechanisms underlying this relation have not been addressed. We explored quantitative responses of 14 fine-grained agro-ecological, socioeconomic, and demographic variables to climate fluctuations from A.D. 1500-1800 in Europe. Results show that cooling from A.D. 1560-1660 caused successive agro-ecological, socioeconomic, and demographic catastrophes, leading to the General Crisis of the Seventeenth Century. We identified a set of causal linkages between climate change and human crisis. Using temperature data and climate-driven economic variables, we simulated the alternation of defined "golden" and "dark" ages in Europe and the Northern Hemisphere during the past millennium. Our findings indicate that climate change was the ultimate cause, and climate-driven economic downturn was the direct cause, of large-scale human crises in preindustrial Europe and the Northern Hemisphere.

  18. Tracking global change at local scales: Phenology for science, outreach, conservation

    Science.gov (United States)

    Sharron, Ed; Mitchell, Brian

    2011-06-01

A Workshop Exploring the Use of Phenology Studies for Public Engagement; New Orleans, Louisiana, 14 March 2011; During a George Wright Society Conference session that was led by the USA National Phenology Network (USANPN; http://www.usanpn.org) and the National Park Service (NPS), professionals from government organizations, nonprofits, and higher-education institutions came together to explore the possibilities of using phenology monitoring to engage the public. One of the most visible effects of global change on ecosystems is shifts in phenology: the timing of biological events such as leafing and flowering, maturation of agricultural plants, emergence of insects, and migration of birds. These shifts are already occurring and reflect biological responses to climate change at local to regional scales. Changes in phenology have important implications for species ecology and resource management and, because they are place-based and tangible, serve as an ideal platform for education, outreach, and citizen science.

  19. Linear response to long wavelength fluctuations using curvature simulations

    Energy Technology Data Exchange (ETDEWEB)

    Baldauf, Tobias; Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ (United States); Seljak, Uroš [Physics Department, Astronomy Department and Lawrence Berkeley National Laboratory, University of California, Berkeley, CA (United States); Senatore, Leonardo, E-mail: baldauf@ias.edu, E-mail: useljak@berkeley.edu, E-mail: senatore@stanford.edu, E-mail: matiasz@ias.edu [Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA (United States)

    2016-09-01

    We study the local response to long wavelength fluctuations in cosmological N -body simulations, focusing on the matter and halo power spectra, halo abundance and non-linear transformations of the density field. The long wavelength mode is implemented using an effective curved cosmology and a mapping of time and distances. The method provides an alternative, more direct, way to measure the isotropic halo biases. Limiting ourselves to the linear case, we find generally good agreement between the biases obtained from the curvature method and the traditional power spectrum method at the level of a few percent. We also study the response of halo counts to changes in the variance of the field and find that the slope of the relation between the responses to density and variance differs from the naïve derivation assuming a universal mass function by approximately 8–20%. This has implications for measurements of the amplitude of local non-Gaussianity using scale dependent bias. We also analyze the halo power spectrum and halo-dark matter cross-spectrum response to long wavelength fluctuations and derive second order halo bias from it, as well as the super-sample variance contribution to the galaxy power spectrum covariance matrix.

  20. LARGE-SCALE STRUCTURE OF THE UNIVERSE AS A COSMIC STANDARD RULER

    International Nuclear Information System (INIS)

    Park, Changbom; Kim, Young-Rae

    2010-01-01

    We propose to use the large-scale structure (LSS) of the universe as a cosmic standard ruler. This is possible because the pattern of large-scale distribution of matter is scale-dependent and does not change in comoving space during the linear-regime evolution of structure. By examining the pattern of LSS in several redshift intervals it is possible to reconstruct the expansion history of the universe, and thus to measure the cosmological parameters governing the expansion of the universe. The features of the large-scale matter distribution that can be used as standard rulers include the topology of LSS and the overall shapes of the power spectrum and correlation function. The genus, being an intrinsic topology measure, is insensitive to systematic effects such as the nonlinear gravitational evolution, galaxy biasing, and redshift-space distortion, and thus is an ideal cosmic ruler when galaxies in redshift space are used to trace the initial matter distribution. The genus remains unchanged as far as the rank order of density is conserved, which is true for linear and weakly nonlinear gravitational evolution, monotonic galaxy biasing, and mild redshift-space distortions. The expansion history of the universe can be constrained by comparing the theoretically predicted genus corresponding to an adopted set of cosmological parameters with the observed genus measured by using the redshift-comoving distance relation of the same cosmological model.

  1. Confirmation of linear system theory prediction: Rate of change of Herrnstein's κ as a function of response-force requirement

    Science.gov (United States)

    McDowell, J. J; Wood, Helena M.

    1985-01-01

    Four human subjects worked on all combinations of five variable-interval schedules and five reinforcer magnitudes (¢/reinforcer) in each of two phases of the experiment. In one phase the force requirement on the operandum was low (1 or 11 N) and in the other it was high (25 or 146 N). Estimates of Herrnstein's κ were obtained at each reinforcer magnitude. The results were: (1) response rate was more sensitive to changes in reinforcement rate at the high than at the low force requirement, (2) κ increased from the beginning to the end of the magnitude range for all subjects at both force requirements, (3) the reciprocal of κ was a linear function of the reciprocal of reinforcer magnitude for seven of the eight data sets, and (4) the rate of change of κ was greater at the high than at the low force requirement by an order of magnitude or more. The second and third findings confirm predictions made by linear system theory, and replicate the results of an earlier experiment (McDowell & Wood, 1984). The fourth finding confirms a further prediction of the theory and supports the theory's interpretation of conflicting data on the constancy of Herrnstein's κ. PMID:16812408

  3. Scale orientated analysis of river width changes due to extreme flood hazards

    Directory of Open Access Journals (Sweden)

    G. Krapesch

    2011-08-01

This paper analyses the morphological effects of extreme floods (recurrence interval >100 years) and examines which parameters best describe the width changes due to erosion, based on 5 affected alpine gravel-bed rivers in Austria. The research was based on vertical aerial photos of the rivers before and after extreme floods, hydrodynamic numerical models, and cross-sectional measurements supported by LiDAR data of the rivers. Average width ratios (width after/before the flood) were calculated and correlated with different hydraulic parameters (specific stream power, shear stress, flow area, specific discharge). Depending on the geomorphological boundary conditions of the different rivers, a mean width ratio between 1.12 (Lech River) and 3.45 (Trisanna River) was determined on the reach scale. The specific stream power (SSP) best predicted the mean width ratios of the rivers, especially on the reach scale and sub-reach scale. On the local scale more parameters have to be considered to define the "minimum morphological spatial demand of rivers", which is a crucial parameter for addressing and managing flood hazards and should be used in hazard zone plans and spatial planning.
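The reach-scale analysis described above amounts to computing a correlation between width ratios and hydraulic parameters. A minimal sketch with invented intermediate values (only the 1.12 and 3.45 endpoints echo the paper; the stream-power figures are hypothetical):

```python
import math

# Illustrative only: the intermediate width ratios and all specific stream
# power (SSP) values are invented, not the paper's data.
width_ratio = [1.12, 1.58, 2.10, 2.75, 3.45]   # width after / width before
ssp = [180.0, 260.0, 340.0, 470.0, 610.0]      # W/m^2, hypothetical

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(ssp, width_ratio)
assert r > 0.9  # strong positive association in this toy data
```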

  4. Statistical study of the non-linear propagation of a partially coherent laser beam

    International Nuclear Information System (INIS)

    Ayanides, J.P.

    2001-01-01

This research thesis is related to the LMJ project (Laser MegaJoule) and thus to the study and development of thermonuclear fusion. It reports the study of the propagation of a partially coherent laser beam using statistical modelling to obtain mean values of the field, thus bypassing a complex and costly calculation of deterministic quantities. The random fluctuations of the propagated field are assumed to follow Gaussian statistics; the laser central wavelength is assumed to be small with respect to the fluctuation magnitude; and a scale factor is introduced to clearly distinguish the scale of the fast random variations of the field fluctuations from the scale of the slow deterministic variations of the field envelopes. The author reports the study of propagation through a purely linear and non-dispersive medium; then through a slow non-dispersive and non-linear medium (in which the reaction time is large with respect to the grain correlation duration, but small with respect to the variation scale of the macroscopic field envelope); and thirdly through an instantaneous dispersive and non-linear medium (which reacts instantaneously to the field).

  5. Imprint of non-linear effects on HI intensity mapping on large scales

    Energy Technology Data Exchange (ETDEWEB)

    Umeh, Obinna, E-mail: umeobinna@gmail.com [Department of Physics and Astronomy, University of the Western Cape, Cape Town 7535 (South Africa)

    2017-06-01

Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature both in real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift-space distortion terms, modulates the power spectrum on large scales. This large-scale modulation may be understood as arising from an effective bias parameter and an effective shot noise.

  6. Scaling environmental change through the community-level: a trait-based response-and-effect framework for plants.

    NARCIS (Netherlands)

    Suding, K.N.; Lavorel, S.; Chapin III, F.S.; Cornelissen, J.H.C.; Diaz, S.; Garnier, E.; Goldberg, D.; Hooper, D.U.; Jackson, S.T.; Navas, M.-L.

    2008-01-01

    Predicting ecosystem responses to global change is a major challenge in ecology. A critical step in that challenge is to understand how changing environmental conditions influence processes across levels of ecological organization. While direct scaling from individual to ecosystem dynamics can lead

  7. Non-linear analytic and coanalytic problems (Lp-theory, Clifford analysis, examples)

    International Nuclear Information System (INIS)

    Dubinskii, Yu A; Osipenko, A S

    2000-01-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the 'orthogonal' sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented

  8. The impact of climate change on hydro-electricity generation

    Energy Technology Data Exchange (ETDEWEB)

    Musy, A.; Music, B.; Roy, R. [Ouranos, Montreal, PQ (Canada)

    2008-07-01

    Hydroelectricity is a clean and renewable energy source for many countries, and is expected to play an important role in future energy supplies. However, the impact of climatic change on hydroelectricity resources is not yet understood. This study provided a critical review of current methods used to determine the potential impacts of climatic change on hydroelectric power production. General circulation models (GCMs) are used to predict future climate conditions under various greenhouse gas (GHG) emissions scenarios. Statistical techniques are then used to down-scale GCM outputs to the appropriate scales needed for hydrological models, which are then used to simulate the effects of climatic change at regional and local scales. Outputs from the models are then used to develop water management models for hydroelectric power production. Observed linear trends in annual precipitation during the twentieth century were provided. The theoretical advantages and disadvantages of various modelling techniques were reviewed. Risk assessment strategies for Hydro-Quebec were also outlined and results of the study will be used to guide research programs for the hydroelectric power industry. refs., tabs., figs.

  10. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    Science.gov (United States)

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  11. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de [Max Planck Institut für Chemische Energiekonversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F. [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24014 (United States)

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed

  12. Academic Training: Physics at e+e- linear collider

    CERN Multimedia

    Françoise Benz

    2004-01-01

    15, 16, 17, 18, 19 November 2004-2005 ACADEMIC TRAINING PROGRAMME LECTURE SERIES from 11.00 to 12.00hrs - Main Auditorium, bldg. 500 Physics at e+e- linear collider K. DESCH / Desy, Hamburg, D Future e+e- Linear Colliders offer the potential to explore new physics at the TeV scale to very high precision. The lecture series introduces the possibilities of a TeV linear collider (the International Linear Collider, ILC) in the fields of Higgs physics, alternative Electro-weak Symmetry Breaking scenarios, Supersymmetry, Extra Dimensions, and more exotic models. Also the prospects for highly improved measurements of SM parameters such as the top quark mass and electro-weak gauge boson properties are discussed. The implications for the design of an appropriate detector are outlined and current R&D developments are explained. Particular emphasis will be given to the complementarity and intimate interplay of physics at the LHC and the ILC. The additional benefit of multi-TeV e+e- collisions as envisaged i...

  13. The magnitude of linear dichroism of biological tissues as a result of cancer changes

    Science.gov (United States)

    Bojchuk, T. M.; Yermolenko, S. B.; Fedonyuk, L. Y.; Petryshen, O. I.; Guminetsky, S. G.; Prydij, O. G.

    2011-09-01

Results are presented from studies of the magnitude of linear dichroism in different types of biological tissue (human prostate, esophageal epithelium, rat muscle tissue), both healthy and tumor-affected at different stages of development. Significant differences in the magnitude of linear dichroism and its spectral dependence in the range λ = 330 - 750 nm are established, both among the objects of study and between healthy tissues (or those affected by benign tumors) and cancerous ones. In all cases, cancer gives rise to linear dichroism in the tissue (prostate gland, esophagus, rat muscle tissue), with a magnitude that depends on the tissue type and the duration of the tumor process. Since linear dichroism is absent in healthy tissues, the results may have diagnostic value for detecting cancer and assessing its degree of development.

  14. A comparison of publicly available linear MRI stereotaxic registration techniques.

    Science.gov (United States)

    Dadar, Mahsa; Fonov, Vladimir S; Collins, D Louis

    2018-07-01

    Linear registration to a standard space is one of the major steps in processing and analyzing magnetic resonance images (MRIs) of the brain. Here we present an overview of linear stereotaxic MRI registration and compare the performance of 5 publicly available and extensively used linear registration techniques in medical image analysis. A set of 9693 T1-weighted MR images were obtained for testing from 4 datasets: ADNI, PREVENT-AD, PPMI, and HCP, two of which have multi-center and multi-scanner data and three of which have longitudinal data. Each individual native image was linearly registered to the MNI ICBM152 average template using five versions of MRITOTAL from MINC tools, FLIRT from FSL, two versions of Elastix, spm_affreg from SPM, and ANTs linear registration techniques. Quality control (QC) images were generated from the registered volumes and viewed by an expert rater to assess the quality of the registrations. The QC image contained 60 sub-images (20 of each of axial, sagittal, and coronal views at different levels throughout the brain) overlaid with contours of the ICBM152 template, enabling the expert rater to label the registration as acceptable or unacceptable. The performance of the registration techniques was then compared across different datasets. In addition, the effect of image noise, intensity non-uniformity, age, head size, and atrophy on the performance of the techniques was investigated by comparing differences between age, scaling factor, ventricle volume, brain volume, and white matter hyperintensity (WMH) volumes between passed and failed cases for each method. The average registration failure rate among all datasets was 27.41%, 27.14%, 12.74%, 13.03%, 0.44% for the five versions of MRITOTAL techniques, 8.87% for ANTs, 11.11% for FSL, 12.35% for Elastix Affine, 24.40% for Elastix Similarity, and 30.66% for SPM. 
There were significant effects of signal to noise ratio, image intensity non-uniformity estimates, as well as age, head size, and

  15. Linear Mapping of Numbers onto Space Requires Attention

    Science.gov (United States)

    Anobile, Giovanni; Cicchini, Guido Marco; Burr, David C.

    2012-01-01

    Mapping of number onto space is fundamental to mathematics and measurement. Previous research suggests that while typical adults with mathematical schooling map numbers veridically onto a linear scale, pre-school children and adults without formal mathematics training, as well as individuals with dyscalculia, show strong compressive,…

  16. Scale effect in fatigue resistance under complex stressed state

    International Nuclear Information System (INIS)

    Sosnovskij, L.A.

    1979-01-01

On the basis of the statistical theory of fatigue failure, a formula is derived for estimating the probability of failure under a complex stressed state from the partial probabilities of failure under linear stressed states, with allowance for the scale effect. A formula for calculating the equivalent stress is also obtained. Verification of both formulae against published experimental data for plane-stress torsion shows that the estimation error does not exceed 10% for materials with ultimate strength ranging from 61 to 124 kg/mm².
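The combination of partial failure probabilities described above is, in spirit, a weakest-link calculation; a hedged sketch (the independence assumption and the Weibull-style volume exponent are our own simplifications, not necessarily the paper's exact formula):

```python
# Weakest-link idea: under a complex stressed state the element survives
# only if it survives each (assumed independent) linear stress component,
# so P_fail = 1 - prod(1 - P_i). A scale effect can be modelled by raising
# the survival probability to a volume ratio, as in Weibull statistics.

def combined_failure_probability(partial_probs, volume_ratio=1.0):
    """P_fail for independent partial failure modes, with a Weibull-style
    scale correction: survival probability raised to the volume ratio."""
    survival = 1.0
    for p in partial_probs:
        survival *= (1.0 - p)
    return 1.0 - survival ** volume_ratio

p = combined_failure_probability([0.05, 0.10], volume_ratio=2.0)
# survival = 0.95 * 0.90 = 0.855; 0.855**2 = 0.731025 -> P_fail = 0.268975
assert abs(p - 0.268975) < 1e-12
```

A larger volume ratio (bigger specimen) raises the failure probability at a given stress, which is the scale effect the abstract refers to.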

  17. Experimental quantum computing to solve systems of linear equations.

    Science.gov (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
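For reference, the classical counterpart of the experiment's simplest instance is the direct solution of a 2×2 system; a sketch via Cramer's rule (the matrix and right-hand side are arbitrary illustrations, not the experiment's inputs):

```python
# Classical baseline for the quantum experiment above: solving A x = b for
# a 2x2 matrix A directly via Cramer's rule.

def solve_2x2(A, b):
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular matrix")
    x1 = (b[0] * a22 - a12 * b[1]) / det
    x2 = (a11 * b[1] - b[0] * a21) / det
    return (x1, x2)

x = solve_2x2(((2.0, 1.0), (1.0, 3.0)), (5.0, 10.0))
assert abs(x[0] - 1.0) < 1e-12 and abs(x[1] - 3.0) < 1e-12
```

The quantum algorithm's advantage appears only asymptotically in N; at N = 2 the point of the experiment is demonstrating the subroutines, not speed.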

  18. High-order quantum algorithm for solving linear differential equations

    International Nuclear Information System (INIS)

    Berry, Dominic W

    2014-01-01

Linear differential equations are ubiquitous in science and engineering. Quantum computers can simulate quantum systems, which are described by a restricted type of linear differential equations. Here we extend quantum simulation algorithms to general inhomogeneous sparse linear differential equations, which describe many classical physical systems. We examine the use of high-order methods (where the error over a time step is a high power of the size of the time step) to improve the efficiency. These provide scaling close to Δt² in the evolution time Δt. As with other algorithms of this type, the solution is encoded in amplitudes of the quantum state, and it is possible to extract global features of the solution. (paper)
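The classical notion of a high-order method that the abstract builds on is easy to illustrate outside the quantum setting: a method of order p has global error scaling like h^p, so halving the step divides the error by about 2^p. A minimal sketch on our own toy problem y' = y, y(0) = 1, over [0, 1]:

```python
import math

# Compare first-order (Euler) and fourth-order (classical RK4) integration
# of y' = y; the local error per step of a p-th order method scales like
# h**(p+1), hence the global error scales like h**p.

def euler(h):
    y, t = 1.0, 0.0
    while t < 1.0 - 1e-12:
        y += h * y
        t += h
    return y

def rk4(h):
    y, t = 1.0, 0.0
    while t < 1.0 - 1e-12:
        k1 = y
        k2 = y + 0.5 * h * k1
        k3 = y + 0.5 * h * k2
        k4 = y + h * k3
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += h
    return y

exact = math.e
e1 = abs(euler(0.01) - exact) / abs(euler(0.005) - exact)
e4 = abs(rk4(0.01) - exact) / abs(rk4(0.005) - exact)
assert 1.8 < e1 < 2.2     # first order: halving h roughly halves the error
assert 14.0 < e4 < 18.0   # fourth order: halving h divides the error by ~16
```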

  19. Linear collider research and development at SLAC, LBL and LLNL

    International Nuclear Information System (INIS)

    Mattison, T.S.

    1988-10-01

The study of electron-positron (e⁺e⁻) annihilation in storage-ring colliders has been very fruitful. It is by now well understood that the optimized cost and size of e⁺e⁻ storage rings scale as E_cm² due to the need to replace energy lost to synchrotron radiation in the ring bending magnets. Linear colliders, using the beams from linear accelerators, evade this scaling law. The study of e⁺e⁻ collisions at TeV energy will require linear colliders. The luminosity requirements for a TeV linear collider are set by the physics. Advanced accelerator research and development at SLAC is focused toward a TeV Linear Collider (TLC) of 0.5-1 TeV in the center of mass, with a luminosity of 10³³-10³⁴. The goal is a design for two linacs of less than 3 km each, requiring less than 100 MW of power each. With a 1 km final focus, the TLC could be fit on Stanford University land (although not entirely within the present SLAC site). The emphasis is on technologies feasible for a proposal to be framed in 1992. Linear collider development work is progressing on three fronts: delivering electrical energy to a beam, delivering a focused high-quality beam, and system optimization. Sources of high peak microwave radio-frequency (RF) power to drive the high-gradient linacs are being developed in collaboration with Lawrence Berkeley Laboratory (LBL) and Lawrence Livermore National Laboratory (LLNL). Beam generation, beam dynamics and final-focus work has been done at SLAC and in collaboration with KEK. Both the accelerator physics and the utilization of TeV linear colliders were topics at the 1988 Snowmass Summer Study. 14 refs., 4 figs., 1 tab

  20. EFFECTS OF PORE STRUCTURE CHANGE AND MULTI-SCALE HETEROGENEITY ON CONTAMINANT TRANSPORT AND REACTION RATE UPSCALING

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Catherine A [Princeton University

    2013-05-15

This project addressed the scaling of geochemical reactions to core and field scales, and the interrelationship between reaction rates and flow in porous media. We targeted reactive transport problems relevant to the Hanford site, specifically the reaction of highly caustic, radioactive waste solutions with subsurface sediments, and the immobilization of ⁹⁰Sr and ¹²⁹I through mineral incorporation and passive flow blockage, respectively. We addressed the correlation of results for pore-scale fluid-soil interaction with field-scale fluid flow, with the specific goals of (i) predicting attenuation of radionuclide concentration; (ii) estimating changes in flow rates through changes of soil permeabilities; and (iii) estimating effective reaction rates. In supplemental work, we also simulated reactive transport systems relevant to geologic carbon sequestration. As a whole, this research generated a better understanding of reactive transport in porous media, and resulted in more accurate methods for reaction rate upscaling and improved prediction of permeability evolution. These scientific advancements will ultimately lead to better tools for management and remediation of DOE legacy waste problems.

  1. Efficiency scale and technological change in credit unions and multiple banks using the COSIF

    Directory of Open Access Journals (Sweden)

    Wanderson Rocha Bittencourt

    2016-08-01

The modernization of the financial intermediation process and the adaptation to new technologies brought adjustments to operational processes, reducing information and borrowing costs, generating greater customer satisfaction through increased competitiveness, and producing long-run efficiency gains. In this context, this research analyzes the evolution of scale and technological efficiency of credit unions and multiple banks from 2009 to 2013. We used Data Envelopment Analysis (DEA), which allows the change in efficiency of institutions to be calculated through the Malmquist index. The results indicated that institutions that employ larger volumes of assets in the composition of their resources showed gains in scale and technological efficiency, influencing the change in total factor productivity. It should be noted that in some years the cooperatives' advances in technology and scale efficiency exceeded those of the banks. However, this result can be explained by the fact that the average efficiency of the credit unions was lower than that of the banks in the sample analyzed, indicating a greater need for the cooperatives to improve internal processes compared with the multiple banks surveyed.
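In the single-input, single-output special case, the DEA (CCR) efficiency score used in such studies reduces to each unit's output/input ratio normalised by the best observed ratio. A minimal sketch with invented institutions and figures (not the paper's data):

```python
# Single-input, single-output DEA sketch: efficiency = (output/input)
# normalised by the best ratio on the frontier. Names and numbers are
# hypothetical illustrations.

units = {
    "coop_A": (100.0, 40.0),   # (input: assets, output: loans)
    "coop_B": (200.0, 90.0),
    "bank_C": (150.0, 75.0),
}

ratios = {name: out / inp for name, (inp, out) in units.items()}
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}

assert efficiency["bank_C"] == 1.0               # the frontier unit
assert abs(efficiency["coop_A"] - 0.8) < 1e-12   # 20% below the frontier
```

A Malmquist index would then compare each unit's efficiency, and the position of the frontier itself, between two periods (e.g. 2009 vs. 2013); the general multi-input, multi-output case requires solving a linear program per unit.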

  2. Linear and nonlinear interactions in the dark sector

    International Nuclear Information System (INIS)

    Chimento, Luis P.

    2010-01-01

We investigate models of interacting dark matter and dark energy for the Universe in a spatially flat Friedmann-Robertson-Walker space-time. We find the 'source equation' for the total energy density and determine the energy density of each dark component. We introduce an effective one-fluid description to show that interacting and unified models are related to each other, analyze the effective model, and obtain the attractor solutions. We study linear and nonlinear interactions: the former comprises a linear combination of the dark matter and dark energy densities, their first derivatives, the total energy density, its first and second derivatives, and a function of the scale factor; the latter is a possible generalization of the linear interaction, consisting of the above linear combination plus a significant nonlinear term built from a rational function of the dark matter and dark energy densities, homogeneous of degree 1. We solve the evolution equations of the dark components for both interactions and examine several examples exhaustively. There exist cases where the effective one-fluid description produces different alternatives to the ΛCDM model and cases where the problem of coincidence is alleviated. In addition, we find that some nonlinear interactions yield an effective one-fluid model with a Chaplygin gas equation of state, whereas others generate cosmological models with de Sitter and power-law expansions. We show that a generic nonlinear interaction induces an effective equation of state which depends on the scale factor in the same way as the variable modified Chaplygin gas model, giving rise to the 'relaxed Chaplygin gas model'.

  3. SIGMA1-2007, Doppler Broadening ENDF Format Linear-Linear. Interpolated Point Cross Section

    International Nuclear Information System (INIS)

    2007-01-01

1 - Description of problem or function: SIGMA-1 Doppler broadens evaluated cross sections, given in the linear-linear interpolation form of the ENDF/B format, to one final temperature. The data is Doppler broadened, thinned, and output in the ENDF/B format. IAEA0854/15: This version includes the updates up to January 30, 2007. Changes in the ENDF/B-VII format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. 2 - Modifications from previous versions: Sigma-1 vers. 2007-1 (Jan. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 360,000 energy points. 3 - Method of solution: The energy grid is selected to ensure that the broadened data is linear-linear interpolable. SIGMA-1 starts from the free-atom Doppler broadening equations and adds the assumptions of linear data within the table and constant data outside the range of the table. If the original data is not at zero Kelvin, the data is broadened by the effective temperature difference to the final temperature. If the data is already at a temperature higher than the final temperature, Doppler broadening is not performed. 4 - Restrictions on the complexity of the problem: The input to SIGMA-1 must be data which vary linearly in energy and cross section between tabulated points. The LINEAR program provides such data. LINEAR uses only the ENDF/B BCD format tape and copies all sections except File 3 as read. Since File 3 data are in identical format for ENDF/B versions I through VI, the program can be used with all these versions. The present version Doppler broadens only to one final temperature.
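The grid-selection idea in "3 - Method of solution" can be sketched as recursive midpoint refinement: keep inserting energy points until linear interpolation between neighbours reproduces the underlying cross section within a tolerance. The 1/v toy cross section and the tolerance below are our own illustrative choices, not SIGMA-1 internals:

```python
import math

def sigma(E):
    """Toy 1/v capture cross section [barns] at energy E [eV]."""
    return 10.0 / math.sqrt(E)

def linearize(e_lo, e_hi, tol=1e-3):
    """Return an energy grid on which sigma is linear-linear interpolable
    to relative tolerance tol (recursive midpoint refinement)."""
    e_mid = 0.5 * (e_lo + e_hi)
    approx = 0.5 * (sigma(e_lo) + sigma(e_hi))   # linear interp at midpoint
    if abs(approx - sigma(e_mid)) <= tol * sigma(e_mid):
        return [e_lo, e_hi]
    left = linearize(e_lo, e_mid, tol)
    return left[:-1] + linearize(e_mid, e_hi, tol)

grid = linearize(1.0, 100.0)
# Every interior midpoint now meets the tolerance:
for a, b in zip(grid, grid[1:]):
    m = 0.5 * (a + b)
    assert abs(0.5 * (sigma(a) + sigma(b)) - sigma(m)) <= 1e-3 * sigma(m)
```

Checking only midpoints is a simplification; a production linearizer would test several points per interval and then thin the broadened result the same way.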

  4. Fine-scale ecological and economic assessment of climate change on olive in the Mediterranean Basin reveals winners and losers.

    Science.gov (United States)

    Ponti, Luigi; Gutierrez, Andrew Paul; Ruti, Paolo Michele; Dell'Aquila, Alessandro

    2014-04-15

    The Mediterranean Basin is a climate and biodiversity hot spot, and climate change threatens agro-ecosystems such as olive, an ancient drought-tolerant crop of considerable ecological and socioeconomic importance. Climate change will impact the interactions of olive and the obligate olive fruit fly (Bactrocera oleae), and alter the economics of olive culture across the Basin. We estimate the effects of climate change on the dynamics and interaction of olive and the fly using physiologically based demographic models in a geographic information system context as driven by daily climate change scenario weather. A regional climate model that includes fine-scale representation of the effects of topography and the influence of the Mediterranean Sea on regional climate was used to scale the global climate data. The system model for olive/olive fly was used as the production function in our economic analysis, replacing the commonly used production-damage control function. Climate warming will affect olive yield and fly infestation levels across the Basin, resulting in economic winners and losers at the local and regional scales. At the local scale, profitability of small olive farms in many marginal areas of Europe and elsewhere in the Basin will decrease, leading to increased abandonment. These marginal farms are critical to conserving soil, maintaining biodiversity, and reducing fire risk in these areas. Our fine-scale bioeconomic approach provides a realistic prototype for assessing climate change impacts in other Mediterranean agro-ecosystems facing extant and new invasive pests.

  5. Size effects in non-linear heat conduction with flux-limited behaviors

    Science.gov (United States)

    Li, Shu-Nan; Cao, Bing-Yang

    2017-11-01

    Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, a heat flux solution does not exist at sufficiently small scales: its existence requires the size of the heat conduction domain to exceed a corresponding critical size, which is determined by the physical properties and boundary temperatures. These critical sizes can be regarded as the theoretical limits of the applicable ranges of these non-linear heat conduction models with flux-limited behaviors. For sufficiently small-scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models also predict the theoretical possibility of violating the second law, as well as solution multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction of the fast-diffusion type, which can likewise predict flux-limited behaviors.

  6. Scaling Quelccaya: Using 3-D Animation and Satellite Data To Visualize Climate Change

    Science.gov (United States)

    Malone, A.; Leich, M.

    2017-12-01

    The near-global glacier retreat of recent decades is among the most convincing evidence for contemporary climate change. The epicenter of this action, however, is often far from population-dense centers. How can a glacier's scale, both physical and temporal, be communicated to those far away? This project, an artist-scientist collaboration, proposes an alternate system for presenting climate change data, designed to evoke a more visceral response through a visual, geospatial, poetic approach. Focusing on the Quelccaya Ice Cap, the world's largest tropical glaciated area, located in the Peruvian Andes, we integrate 30 years of satellite imagery and elevation models with 3D animation and gaming software to bring it into a virtual juxtaposition with a model of the city of Chicago. Using Chicago as a cosmopolitan North American "measuring stick," we apply glaciological models to determine, for instance, the amount of ice that has melted on Quelccaya over the last 30 years and the height to which an equivalent amount of snow would pile up on the city of Chicago (circa 600 feet, higher than the Willis Tower). Placing the two sites in a framework of intimate scale, we present a more imaginative and psychologically astute manner of portraying the sober facts of climate change, inviting viewers to learn and consider without inducing fear.

  7. Linear zonal atmospheric prediction for adaptive optics

    Science.gov (United States)

    McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael

    2000-07-01

    We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2-meter, 16-subaperture AO telescope with 5-millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back-Propagation training of non-linear multilayer neural networks are quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares (RLS) training of linear predictors is two orders of magnitude faster, and it also converges to the solution with the global minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the RLS predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global-minimum linear total phase error (approximately 0.18 rad²), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad²) and approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad²). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
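
    The recursive least squares (RLS) training that the abstract finds fastest can be sketched generically for a linear predictor. The following is a minimal illustration on synthetic data, not the paper's wavefront-sensor pipeline; the forgetting factor and the toy target are assumptions:

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive-least-squares step for a linear predictor.

    w   : weight vector
    P   : estimate of the inverse input-correlation matrix
    x   : input vector (e.g. stacked past slope vectors)
    d   : desired output (e.g. the next slope measurement)
    lam : forgetting factor (lam = 1 gives ordinary least squares)
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a-priori prediction error
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Px)) / lam  # update the inverse correlation estimate
    return w, P

# Toy usage: learn the noiseless linear map d = 2*x0 - x1
rng = np.random.default_rng(0)
w, P = np.zeros(2), np.eye(2) * 100.0
for _ in range(500):
    x = rng.standard_normal(2)
    d = 2.0 * x[0] - 1.0 * x[1]
    w, P = rls_update(w, P, x, d)
# w converges to approximately [2, -1]
```

    Unlike plain gradient (Delta-Rule) training, each RLS step uses the running inverse correlation matrix, which is what yields the much faster convergence reported above.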

  8. Multi-scale MHD analysis of heliotron plasma in change of background field

    International Nuclear Information System (INIS)

    Ichiguchi, K.; Sakakibara, S.; Ohdachi, S.; Carreras, B.A.

    2012-11-01

    A partial collapse observed in Large Helical Device (LHD) experiments, in which the magnetic axis is shifted inward under real-time control of the background field, is analyzed with a magnetohydrodynamic (MHD) numerical simulation. The simulation is carried out with a multi-scale simulation scheme, in which the equilibrium also evolves, including the changes of the pressure and the rotational transform due to the perturbation dynamics. The simulation result agrees with the experiments qualitatively, which shows that the mechanism is attributable to the destabilization of an infernal-like mode. The destabilization is caused by the change of the background field through the enhancement of the magnetic hill. (author)

  9. Confirmation of linear system theory prediction: Changes in Herrnstein's k as a function of changes in reinforcer magnitude.

    Science.gov (United States)

    McDowell, J J; Wood, H M

    1984-03-01

    Eight human subjects pressed a lever on a range of variable-interval schedules for 0.25 cent to 35.0 cent per reinforcement. Herrnstein's hyperbola described seven of the eight subjects' response-rate data well. For all subjects, the y-asymptote of the hyperbola increased with increasing reinforcer magnitude and its reciprocal was a linear function of the reciprocal of reinforcer magnitude. These results confirm predictions made by linear system theory; they contradict formal properties of Herrnstein's account and of six other mathematical accounts of single-alternative responding.
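
    The reciprocal analysis reported above relies on the fact that Herrnstein's hyperbola, R = k·r/(r + re), becomes linear in double-reciprocal form: 1/R = 1/k + (re/k)·(1/r). A brief sketch with arbitrary synthetic values (not the study's data):

```python
import numpy as np

def herrnstein(r, k, re):
    """Herrnstein's hyperbola: response rate as a function of reinforcement rate."""
    return k * r / (r + re)

# Arbitrary illustrative parameters and reinforcement rates
r = np.array([10.0, 30.0, 60.0, 120.0, 300.0])
k_true, re_true = 80.0, 50.0
R = herrnstein(r, k_true, re_true)

# Double-reciprocal form: 1/R = (1/k) + (re/k) * (1/r), linear in 1/r
slope, intercept = np.polyfit(1.0 / r, 1.0 / R, 1)
k_est = 1.0 / intercept      # recovers the y-asymptote k
re_est = slope * k_est       # recovers re
```

    The study's finding that the reciprocal of the y-asymptote is linear in the reciprocal of reinforcer magnitude is the analogous regression applied across fitted asymptotes.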

  10. Scale effects and morphological diversification in hindlimb segment mass proportions in neognath birds.

    Science.gov (United States)

    Kilbourne, Brandon M

    2014-01-01

    In spite of considerable work on the linear proportions of limbs in amniotes, it remains unknown whether differences in scale effects between proximal and distal limb segments have the potential to influence locomotor costs in amniote lineages, and how changes in the mass proportions of limbs have factored into amniote diversification. To broaden our understanding of how the mass proportions of limbs vary within amniote lineages, I collected data on hindlimb segment masses - thigh, shank, pes, tarsometatarsal segment, and digits - from 38 species of neognath birds, one of the most speciose amniote clades. I scaled each of these traits against measures of body size (body mass) and hindlimb size (hindlimb length) to test for departures from isometry. Additionally, I applied two parameters of trait evolution (Pagel's λ and δ) to understand patterns of diversification in hindlimb segment mass in neognaths. All segment masses are positively allometric with body mass. Segment masses are isometric with hindlimb length. When examining scale effects in the neognath subclade Land Birds, segment masses were again positively allometric with body mass; however, shank, pedal, and tarsometatarsal segment masses were also positively allometric with hindlimb length. Methods of branch length scaling to detect phylogenetic signal (i.e., Pagel's λ) and increasing or decreasing rates of trait change over time (i.e., Pagel's δ) suffer from wide confidence intervals, likely due to small sample size and deep divergence times. The scaling of segment masses appears to be more strongly related to the scaling of limb bone mass as opposed to length, and the scaling of hindlimb mass distribution is more a function of scale effects in limb posture than of proximo-distal differences in the scaling of limb segment mass. Though negative allometry of segment masses appears to be precluded by the need for mechanically sound limbs, the positive allometry of segment masses relative to body mass may

  11. Small-scale microwave background anisotropies implied by large-scale data

    Science.gov (United States)

    Kashlinsky, A.

    1993-01-01

    In the absence of reheating, microwave background radiation (MBR) anisotropies on arcminute scales depend uniquely on the amplitude and the coherence length of the primordial density fluctuations (PDFs). These can be determined from the recent data on galaxy correlations, xi(r), on linear scales (APM survey). We develop here expressions for the MBR angular correlation function, C(theta), on arcminute scales in terms of the power spectrum of PDFs and demonstrate their accuracy by comparing with detailed calculations of MBR anisotropies. We then show how to evaluate C(theta) directly in terms of the observed xi(r) and show that the APM data give information on the amplitude, C(0), and the coherence angle of MBR anisotropies on small scales.

  12. Nonoscillation of half-linear dynamic equations

    Czech Academy of Sciences Publication Activity Database

    Matucci, S.; Řehák, Pavel

    2010-01-01

    Roč. 60, č. 5 (2010), s. 1421-1429 ISSN 0898-1221 R&D Projects: GA AV ČR KJB100190701 Grant - others:GA ČR(CZ) GA201/07/0145 Institutional research plan: CEZ:AV0Z10190503 Keywords : half-linear dynamic equation * time scale * (non)oscillation * Riccati technique Subject RIV: BA - General Mathematics Impact factor: 1.472, year: 2010 http://www.sciencedirect.com/science/article/pii/S0898122110004384

  13. The Next Linear Collider: NLC2001

    International Nuclear Information System (INIS)

    Burke, D.

    2002-01-01

    Recent studies in elementary particle physics have made the need for an e⁺e⁻ linear collider able to reach energies of 500 GeV and above with high luminosity more compelling than ever [1]. Observations and measurements completed in the last five years at the SLC (SLAC), LEP (CERN), and the Tevatron (FNAL) can be explained only by the existence of at least one particle or interaction that has not yet been directly observed in experiment. The Higgs boson of the Standard Model could be that particle. The data point strongly to a mass for the Higgs boson that is just beyond the reach of existing colliders. This brings great urgency and excitement to the potential for discovery at the upgraded Tevatron early in this decade, and almost assures that later experiments at the LHC will find new physics. But the next generation of experiments to be mounted by the world-wide particle physics community must not only find this new physics, they must find out what it is. These experiments must also define the next important threshold in energy. The need is to understand physics at the TeV energy scale as well as the physics at the 100-GeV energy scale is now understood. This will require both the LHC and a companion linear electron-positron collider. A first Zeroth-Order Design Report (ZDR) [2] for a second-generation electron-positron linear collider, the Next Linear Collider (NLC), was published five years ago. The NLC design is based on a high-frequency room-temperature rf accelerator. Its goal is exploration of elementary particle physics at the TeV center-of-mass energy, while learning how to design and build colliders at still higher energies. Many advances in accelerator technologies and improvements in the design of the NLC have been made since 1996. This Report is a brief update of the ZDR.

  14. The Next Linear Collider: NLC2001

    Energy Technology Data Exchange (ETDEWEB)

    D. Burke et al.

    2002-01-14

    Recent studies in elementary particle physics have made the need for an e⁺e⁻ linear collider able to reach energies of 500 GeV and above with high luminosity more compelling than ever [1]. Observations and measurements completed in the last five years at the SLC (SLAC), LEP (CERN), and the Tevatron (FNAL) can be explained only by the existence of at least one particle or interaction that has not yet been directly observed in experiment. The Higgs boson of the Standard Model could be that particle. The data point strongly to a mass for the Higgs boson that is just beyond the reach of existing colliders. This brings great urgency and excitement to the potential for discovery at the upgraded Tevatron early in this decade, and almost assures that later experiments at the LHC will find new physics. But the next generation of experiments to be mounted by the world-wide particle physics community must not only find this new physics, they must find out what it is. These experiments must also define the next important threshold in energy. The need is to understand physics at the TeV energy scale as well as the physics at the 100-GeV energy scale is now understood. This will require both the LHC and a companion linear electron-positron collider. A first Zeroth-Order Design Report (ZDR) [2] for a second-generation electron-positron linear collider, the Next Linear Collider (NLC), was published five years ago. The NLC design is based on a high-frequency room-temperature rf accelerator. Its goal is exploration of elementary particle physics at the TeV center-of-mass energy, while learning how to design and build colliders at still higher energies. Many advances in accelerator technologies and improvements in the design of the NLC have been made since 1996. This Report is a brief update of the ZDR.

  15. Concordance between the Chang and the International Society of Pediatric Oncology (SIOP) ototoxicity grading scales in patients treated with cisplatin for medulloblastoma.

    Science.gov (United States)

    Bass, Johnnie K; Huang, Jie; Onar-Thomas, Arzu; Chang, Kay W; Bhagat, Shaum P; Chintagumpala, Murali; Bartels, Ute; Gururangan, Sridharan; Hassall, Tim; Heath, John A; McCowage, Geoffrey; Cohn, Richard J; Fisher, Michael J; Robinson, Giles; Broniscer, Alberto; Gajjar, Amar; Gurney, James G

    2014-04-01

    Reporting ototoxicity is frequently complicated by use of various ototoxicity criteria. The International Society of Pediatric Oncology (SIOP) ototoxicity grading scale was recently proposed for standardized use in reporting hearing loss outcomes across institutions. The aim of this study was to evaluate the concordance between the Chang and SIOP ototoxicity grading scales. Differences between the two scales were identified and the implications these differences may have in the clinical setting were discussed. Audiological evaluations were reviewed for 379 patients with newly diagnosed medulloblastoma (ages 3-21 years). Each patient was enrolled on one of two St. Jude clinical protocols that included craniospinal radiation therapy and four courses of 75 mg/m² cisplatin chemotherapy. The latest audiogram conducted 5.5-24.5 months post-protocol treatment initiation was graded using the Chang and SIOP ototoxicity criteria. Clinically significant hearing loss was defined as Chang grade ≥2a and SIOP ≥2. Hearing loss was considered serious (requiring a hearing aid) at the level of Chang grade ≥2b and SIOP ≥3. A strong concordance was observed between the Chang and SIOP ototoxicity scales (Stuart's tau-c statistic = 0.89, 95% CI: 0.86, 0.91). Among those patients diagnosed with serious hearing loss, the two scales were in good agreement. However, the scales deviated from one another in classifying patients with less serious or no hearing loss. Although discrepancies between the Chang and SIOP ototoxicity scales exist primarily for patients with no or minimal hearing loss, the scales share a strong concordance overall. © 2013 Wiley Periodicals, Inc.

  16. Modelling land change: the issue of use and cover in wide-scale applications

    NARCIS (Netherlands)

    Bakker, M.M.; Veldkamp, A.

    2008-01-01

    In this article, the underlying causes for the apparent mismatch between land cover and land use in the context of wide-scale land change modelling are explored. A land use-land cover (LU/LC) ratio is proposed as a relevant landscape characteristic. The one-to-one ratio between land use and land

  17. A climate-change adaptation framework to reduce continental-scale vulnerability across conservation reserves

    Science.gov (United States)

    D.R. Magness; J.M. Morton; F. Huettmann; F.S. Chapin; A.D. McGuire

    2011-01-01

    Rapid climate change, in conjunction with other anthropogenic drivers, has the potential to cause mass species extinction. To minimize this risk, conservation reserves need to be coordinated at multiple spatial scales because the climate envelopes of many species may shift rapidly across large geographic areas. In addition, novel species assemblages and ecological...

  18. CFD analysis of linear compressors considering load conditions

    Science.gov (United States)

    Bae, Sanghyun; Oh, Wonsik

    2017-08-01

    This paper presents a computational fluid dynamics (CFD) analysis of a linear compressor that takes the load condition into account. In conventional CFD analyses of linear compressors, the load condition was not considered in the piston behaviour; in some papers, the piston motion is assumed to be sinusoidal, prescribed through a user-defined function (UDF). In a reciprocating compressor the piston stroke is constrained by the connecting rod, whereas in a linear compressor the stroke is unconstrained and changes with the load condition: the greater the pressure difference between the discharge and suction refrigerant, the further the centre point of the stroke is pushed backward, and the piston motion is not a pure sine wave. Consequently, when the load condition changes in a conventional CFD analysis of a linear compressor, the ANSYS setup or even the model itself may have to be changed, and a separate analysis or calculation, which may introduce errors, is required to find a stroke that satisfies the load condition. In this study, the coupled mechanical and electrical equations are solved in the UDF, and the piston behaviour is computed taking into account the pressure difference across the piston. With this method, the piston stroke for a given motor specification can be calculated from the input voltage, and the piston behaviour is realized including the thrust due to the pressure difference.
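
    The coupled mechanical-electrical piston model described above can be sketched as a pair of ODEs: a force balance on the piston (motor force, spring, damping, gas pressure load) and a coil circuit equation with back-EMF. All parameter values below are illustrative assumptions, not taken from the paper, and the gas load is simplified to a constant discharge-suction pressure difference:

```python
import numpy as np

# Illustrative parameters only -- not taken from the paper
m, c, k = 0.5, 10.0, 2.0e4      # moving mass [kg], damping [N*s/m], spring [N/m]
alpha = 30.0                    # motor force constant [N/A]
L_coil, R_coil = 0.05, 2.0      # coil inductance [H] and resistance [ohm]
A_p, dp = 5e-4, 4e5             # piston area [m^2], discharge-suction dp [Pa]
V0, f = 120.0, 60.0             # supply voltage amplitude [V] and frequency [Hz]

def derivs(t, y):
    """Coupled piston and coil equations:
    m*x'' + c*x' + k*x = alpha*i - A_p*dp    (gas load pushes the piston back)
    L*di/dt = V(t) - R*i - alpha*x'          (back-EMF couples the equations)
    """
    x, v, i = y
    a = (alpha * i - A_p * dp - c * v - k * x) / m
    di = (V0 * np.sin(2 * np.pi * f * t) - R_coil * i - alpha * v) / L_coil
    return np.array([v, a, di])

# Fixed-step RK4 integration to an approximate steady state
y, t, dt = np.array([0.0, 0.0, 0.0]), 0.0, 1e-5
for _ in range(int(0.3 / dt)):
    k1 = derivs(t, y)
    k2 = derivs(t + dt / 2, y + dt / 2 * k1)
    k3 = derivs(t + dt / 2, y + dt / 2 * k2)
    k4 = derivs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
# The piston position y[0] oscillates about a negative offset: the stroke
# centre is pushed backward by the pressure difference, as described above.
```

    Solving the piston motion from this force balance, rather than prescribing a sine wave, is what lets the stroke and its centre offset follow the load condition automatically.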

  19. Summary scores captured changes in subjects' QoL as measured by the multiple scales of the EORTC QLQ-C30.

    Science.gov (United States)

    Phillips, Rachel; Gandhi, Mihir; Cheung, Yin Bun; Findlay, Michael P; Win, Khin Maung; Hai, Hoang Hoa; Yang, Jin Mo; Lobo, Rolley Rey; Soo, Khee Chee; Chow, Pierce K H

    2015-08-01

    To examine the performance of the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30) global health status/quality of life (QoL) scale and two summary scores to detect changes in the QoL profile over time, according to changes in the individual scales. Data came from 167 clinical trial patients with unresectable (advanced) hepatocellular carcinoma. The global health status/QoL scale of the questionnaire contained two items: overall health and overall QoL. Nordin and Hinz proposed summary scores for the questionnaire. A mixed-effect model was fitted to estimate trends in scores over time. Predominantly the individual scale scores declined over time; however, the global health status/QoL score was stable [rate of change = -0.3 per month; 95% confidence interval (CI): -1.2, 0.6]. Nordin's summary score, which gave equal weight to the 15 questionnaire scales, and Hinz's summary score, which gave equal weight to the 30 questionnaire items, showed a statistically significant decline over time, 3.4 (95% CI: -4.5, -2.4) and 4.2 (95% CI: -5.3, -3.0) points per month, respectively. In contrast to the global health status/QoL scale, the summary scores proposed by Nordin and Hinz detected changes in subjects' QoL profile described by the EORTC QLQ-C30 individual scales. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Climate Change Impacts on Runoff Regimes at a River Basin Scale in Central Vietnam

    Directory of Open Access Journals (Sweden)

    Do Hoai Nam

    2012-01-01

    Global warming has resulted in significant variability of the global climate, especially with regard to temperature and precipitation, and river flow regimes are expected to vary accordingly. This study presents a preliminary projection of medium-term and long-term runoff variation caused by climate change at the river basin scale. The large-scale precipitation projection for the middle and the end of the 21st century under the A1B scenario, simulated by the CGCM model (MRI & JMA, 300 km resolution), is statistically downscaled to the basin scale and then used as input to the super-tank model for runoff analysis of the upper Thu Bon River basin in Central Vietnam. Results show that by the middle and the end of this century annual rainfall will increase slightly; together with rising temperature, potential evapotranspiration is projected to increase as well. The total annual runoff, as a result, does not vary distinctly relative to the baseline period 1981-2000; however, runoff will decrease in the dry season and increase in the rainy season. The results also indicate a tendency for the high-flow period to shift from Sep-Dec at present to Oct-Jan in the future. The present study demonstrates potential impacts of climate change on streamflow regimes, with a view to proposing appropriate adaptation measures and responses at the river basin scale.

  1. Downscaling modelling system for multi-scale air quality forecasting

    Science.gov (United States)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations generally considers only a small part of the urban area in a micro-meteorological model, while urban heterogeneities outside the modelling domain affect the micro-scale processes. It is therefore important to build a chain of models of different scales, with higher-resolution models nested into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models use parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider the detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. The first is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions: for the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building-effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. a k-ε linear eddy-viscosity model, a k-ε non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding mass-conserving interpolation. For the boundaries a

  2. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation, ANOVA, linear regression, factor analysis, and linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, understate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement error, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to progress in theory development and cumulative knowledge in the ergonomics field.
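
    The attenuation of correlation by random measurement error can be demonstrated in a few lines using the classical result r_observed = r_true · sqrt(rel_x · rel_y); the reliability and correlation values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# True scores with a known population correlation of 0.8
true_r = 0.8
x_true = rng.standard_normal(n)
y_true = true_r * x_true + np.sqrt(1 - true_r ** 2) * rng.standard_normal(n)

# Add random measurement error so that each scale has reliability 0.6
rel = 0.6
noise_sd = np.sqrt((1 - rel) / rel)   # error variance relative to Var(true) = 1
x_obs = x_true + noise_sd * rng.standard_normal(n)
y_obs = y_true + noise_sd * rng.standard_normal(n)

observed_r = np.corrcoef(x_obs, y_obs)[0, 1]
expected_r = true_r * np.sqrt(rel * rel)   # classical attenuation formula
# observed_r comes out near 0.48, far below the true 0.8
```

    With reliabilities of 0.6 on both scales, a true correlation of 0.8 is observed as roughly 0.48, which is exactly the kind of distortion the paper warns about.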

  3. Statistical monitoring of linear antenna arrays

    KAUST Repository

    Harrou, Fouzi

    2016-11-03

    The paper concerns the problem of monitoring linear antenna arrays using the generalized likelihood ratio (GLR) test. When an abnormal event (fault) affects an array of antenna elements, the radiation pattern changes, and significant deviation from the desired design performance specifications can result. In this paper, the detection of faults is addressed from a statistical point of view as a fault detection problem. Specifically, a statistical method based on the GLR principle is used to detect potential faults in linear arrays. To assess the strength of the GLR-based monitoring scheme, three case studies involving different types of faults were performed. Simulation results clearly show the effectiveness of the GLR-based fault-detection method for monitoring the performance of linear antenna arrays.
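
    As a generic illustration of the GLR principle (a sketch, not the paper's array model), the statistic for an unknown mean shift in i.i.d. Gaussian residuals has a closed form, and a fault shows up as a large value of the statistic; the fault model below is an assumption:

```python
import numpy as np

def glr_mean_shift(residuals, sigma=1.0):
    """GLR statistic for an unknown mean shift in i.i.d. Gaussian residuals.

    H0: mean = 0 versus H1: mean = theta (theta unknown). Maximizing the
    likelihood over theta gives  n * mean(r)**2 / (2 * sigma**2).
    """
    r = np.asarray(residuals, dtype=float)
    return len(r) * r.mean() ** 2 / (2.0 * sigma ** 2)

rng = np.random.default_rng(1)
healthy = rng.standard_normal(100)   # nominal residuals (no fault)
faulty = healthy + 1.0               # a fault introduces a bias
g0 = glr_mean_shift(healthy)
g1 = glr_mean_shift(faulty)
# g1 greatly exceeds g0; comparing the statistic with a threshold flags the fault
```

    In the paper's setting the residuals would come from the measured radiation pattern, but the decision logic (statistic versus threshold) is the same.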

  4. Statistical monitoring of linear antenna arrays

    KAUST Repository

    Harrou, Fouzi; Sun, Ying

    2016-01-01

    The paper concerns the problem of monitoring linear antenna arrays using the generalized likelihood ratio (GLR) test. When an abnormal event (fault) affects an array of antenna elements, the radiation pattern changes and significant deviation from

  5. Interior-Point Methods for Linear Programming: A Review

    Science.gov (United States)

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most interior-point methods belong to one of three categories: affine-scaling methods, potential reduction methods, and central path methods. These methods are discussed together with…
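
    Of the three categories, the affine-scaling family admits a particularly compact statement. The following is a minimal sketch for a standard-form LP (min cᵀx subject to Ax = b, x > 0); the example problem, the starting point, and the step parameter are illustrative assumptions:

```python
import numpy as np

def affine_scaling(A, b, c, x0, gamma=0.5, iters=100):
    """Primal affine-scaling iteration for min c@x subject to A@x = b, x > 0.

    x0 must be strictly feasible; gamma < 1 keeps the iterates interior.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        X2 = np.diag(x ** 2)
        w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)  # dual estimate
        s = c - A.T @ w                                # reduced costs
        dx = -X2 @ s                                   # steepest descent in scaled space
        neg = dx < 0
        if not neg.any():                              # no variable decreases: stop
            break
        step = gamma * np.min(-x[neg] / dx[neg])       # stay strictly inside x > 0
        x = x + step * dx
    return x

# Tiny example: min -x1 - 2*x2 subject to x1 + x2 + x3 = 1 (x3 is a slack);
# the optimum is x = (0, 1, 0) with objective value -2
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -2.0, 0.0])
x = affine_scaling(A, b, c, np.array([0.3, 0.3, 0.4]))
```

    Each iteration rescales the problem so the current iterate sits at the centre of the positive orthant, then takes a damped steepest-descent step, which is the defining idea of the affine-scaling category.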

  6. Climate change impacts on risks of groundwater pollution by herbicides: a regional scale assessment

    Science.gov (United States)

    Steffens, Karin; Moeys, Julien; Lindström, Bodil; Kreuger, Jenny; Lewan, Elisabet; Jarvis, Nick

    2014-05-01

    Groundwater contributes nearly half of the Swedish drinking water supply, which therefore needs to be protected both under present and future climate conditions. Pesticides are sometimes found in Swedish groundwater in concentrations exceeding the EU-drinking water limit and thus constitute a threat. The aim of this study was to assess the present and future risks of groundwater pollution at the regional scale by currently approved herbicides. We identified representative combinations of major crop types and their specific herbicide usage (product, dose and application timing) based on long-term monitoring data from two agricultural catchments in the South-West of Sweden. All these combinations were simulated with the regional version of the pesticide fate model MACRO (called MACRO-SE) for the periods 1970-1999 and 2070-2099 for a major crop production region in South West Sweden. To represent the uncertainty in future climate data, we applied a five-member ensemble based on different climate model projections downscaled with the RCA3-model (Swedish Meteorological and Hydrological Institute). In addition to the direct impacts of changes in the climate, the risks of herbicide leaching in the future will also be affected by likely changes in weed pressure and land use and management practices (e.g. changes in crop rotations and application timings). To assess the relative importance of such factors we performed a preliminary sensitivity analysis which provided us with a hierarchical structure for constructing future herbicide use scenarios for the regional scale model runs. The regional scale analysis gave average concentrations of herbicides leaching to groundwater for a large number of combinations of soils, crops and compounds. The results showed that future scenarios for herbicide use (more autumn-sown crops, more frequent multiple applications on one crop, and a shift from grassland to arable crops such as maize) imply significantly greater risks of herbicide

  7. A framework for the quantitative assessment of climate change impacts on water-related activities at the basin scale

    OpenAIRE

    Anghileri, D.; Pianosi, F.; Soncini-Sessa, R.

    2011-01-01

    While quantitative assessment of the climate change impact on hydrology at the basin scale has been widely addressed in the literature, the extension of quantitative analysis to impacts on the ecological, economic and social spheres is still limited, although it is well recognized as a key issue for supporting water resource planning and promoting public participation. In this paper we propose a framework for assessing climate change impact on water-related activities at the basin scale. The specific features of our ...

  8. Cross-Cultural Validation of Stages of Exercise Change Scale among Chinese College Students

    Science.gov (United States)

    Keating, Xiaofen D.; Guan, Jianmin; Huang, Yong; Deng, Mingying; Wu, Yifeng; Qu, Shuhua

    2005-01-01

    The purpose of the study was to test the cross-cultural concurrent validity of the stages of exercise change scale (SECS) in Chinese college students. The original SECS was translated into Chinese (C-SECS). Students from four Chinese universities (N = 1843) participated in the study. The leisure-time exercise (LTE) questionnaire was used to…

  9. Investigations of linear contraction and shrinkage stresses development in hypereutectic al-si binary alloys

    Directory of Open Access Journals (Sweden)

    J. Mutwil

    2009-07-01

    Full Text Available Shrinkage phenomena during solidification and cooling of hypereutectic aluminium-silicon alloys (AlSi18, AlSi21) have been examined. A vertical shrinkage rod casting with a circular cross-section (constant or tapered) was used as the test sample. Two types of experiments were conducted: 1) on the development of linear dimension changes of the test sample (linear expansion/contraction), and 2) on the development of shrinkage stresses in the test sample. In the linear contraction experiments, the linear dimension changes of the test sample and the metal test mould, as well as the temperature at six points of the test sample, were registered. In the shrinkage stress experiments, the shrinkage tension force, the linear dimension changes of the test sample, and the temperature at three points of the test sample were registered. The registered time dependences of the linear dimension changes of the test bar and the test mould showed that the so-called pre-shrinkage expansion was caused mainly by thermal expansion of the mould. The results showed that both the linear contraction and the development of shrinkage stresses clearly depend on the metal temperature in the warmest region of the sample (the thermal centre).

  10. A biopsychosocial investigation of changes in self-concept on the Head Injury Semantic Differential Scale.

    Science.gov (United States)

    Reddy, Avneel; Ownsworth, Tamara; King, Joshua; Shields, Cassandra

    2017-12-01

    This study aimed to investigate the influence of the "good-old-days" bias, neuropsychological functioning and cued recall of life events on self-concept change. Forty-seven adults with TBI (70% male, 1-5 years post-injury) and 47 matched controls rated their past and present self-concept on the Head Injury Semantic Differential Scale (HISD) III. TBI participants also completed a battery of neuropsychological tests. The matched control group of 47 was drawn from a sample of 78 uninjured participants who were randomised to complete either the Social Readjustment Rating Scale-Revised (cued recall) or the HISD (non-cued recall) first. Consistent with the good-old-days bias, participants with TBI rated their pre-injury self-concept as significantly more positive than their present self-concept and than the present self-concept of controls. More negative self-concept ratings were related to lower estimated premorbid IQ and poorer verbal fluency and delayed memory. The cued-recall group rated their past self-concept as significantly more negative than the non-cued group, indicating that contextual cues can influence perceptions of self-concept change by affecting retrospective ratings of past self-concept. Further research is needed to investigate the impact of contextual cues on self-concept change after TBI.

  11. Relevance of Linear Stability Results to Enhanced Oil Recovery

    Science.gov (United States)

    Ding, Xueru; Daripa, Prabir

    2012-11-01

    How relevant are results based on linear stability theory to full-scale simulation results? Put differently, is the optimal design of a system based on linear stability results optimal, or even near-optimal, for the complex nonlinear system with certain objectives of interest in mind? We address these issues in the context of enhanced oil recovery by chemical flooding, based on ongoing work. Supported by the Qatar National Research Fund (a member of the Qatar Foundation).

  12. Linear versus non-linear supersymmetry, in general

    Energy Technology Data Exchange (ETDEWEB)

    Ferrara, Sergio [Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati,Via Enrico Fermi 40, I-00044 Frascati (Italy); Department of Physics and Astronomy, UniversityC.L.A.,Los Angeles, CA 90095-1547 (United States); Kallosh, Renata [SITP and Department of Physics, Stanford University,Stanford, California 94305 (United States); Proeyen, Antoine Van [Institute for Theoretical Physics, Katholieke Universiteit Leuven,Celestijnenlaan 200D, B-3001 Leuven (Belgium); Wrase, Timm [Institute for Theoretical Physics, Technische Universität Wien,Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)

    2016-04-12

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models, with all superfields unconstrained and supersymmetry linearly realized, is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, and its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM's: chiral superfields, linear superfields and general complex superfields, some of which are multiplets with spin.

  13. Linear versus non-linear supersymmetry, in general

    International Nuclear Information System (INIS)

    Ferrara, Sergio; Kallosh, Renata; Proeyen, Antoine Van; Wrase, Timm

    2016-01-01

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models, with all superfields unconstrained and supersymmetry linearly realized, is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, and its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM's: chiral superfields, linear superfields and general complex superfields, some of which are multiplets with spin.

  14. Introduction to linear systems of differential equations

    CERN Document Server

    Adrianova, L Ya

    1995-01-01

    The theory of linear systems of differential equations is one of the cornerstones of the whole theory of differential equations. At its root is the concept of the Lyapunov characteristic exponent. In this book, Adrianova presents introductory material and further detailed discussions of Lyapunov exponents. She also discusses the structure of the space of solutions of linear systems. Classes of linear systems examined range from the narrowest to the widest: 1) autonomous, 2) periodic, 3) reducible to autonomous, 4) nearly reducible to autonomous, 5) regular. In addition, Adrianova considers the following: stability of linear systems and the influence of perturbations of the coefficients on the stability; the criteria of uniform stability and of uniform asymptotic stability in terms of properties of the solutions; several estimates of the growth rate of solutions of a linear system in terms of its coefficients. How perturbations of the coefficients change all the elements of the spectrum of the system is defin...
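    For readers new to the topic, the central object of this record can be stated briefly (a standard definition, not a quote from the book; conventions may differ slightly from Adrianova's): for a nonzero solution x(t) of x' = A(t)x,

    ```latex
    \lambda[x] \;=\; \limsup_{t \to \infty} \frac{1}{t}\,\ln \lVert x(t) \rVert ,
    \qquad
    \text{(Lyapunov) regularity:}\quad
    \sum_{i=1}^{n} \lambda_i \;=\; \lim_{t \to \infty} \frac{1}{t} \int_0^t \operatorname{tr} A(s)\, ds .
    ```

    Autonomous and periodic systems are always regular, which is why the classes listed above are ordered from the narrowest to the widest.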

  15. Comparison of height-diameter models based on geographically weighted regressions and linear mixed modelling applied to large scale forest inventory data

    Energy Technology Data Exchange (ETDEWEB)

    Quirós Segovia, M.; Condés Ruiz, S.; Drápela, K.

    2016-07-01

    Aim of the study: The main objective of this study was to test Geographically Weighted Regression (GWR) for developing height-diameter curves for forests on a large scale and to compare it with Linear Mixed Models (LMM). Area of study: Monospecific stands of Pinus halepensis Mill. located in the region of Murcia (Southeast Spain). Materials and Methods: The dataset consisted of 230 sample plots (2582 trees) from the Third Spanish National Forest Inventory (SNFI) randomly split into training data (152 plots) and validation data (78 plots). Two different methodologies were used for modelling local (Petterson) and generalized height-diameter relationships (Cañadas I): GWR, with different bandwidths, and linear mixed models. Finally, the quality of the estimated models was compared through statistical analysis. Main results: In general, both LMM and GWR provide better prediction capability when applied to a generalized height-diameter function than when applied to a local one, with R2 values increasing from around 0.6 to 0.7 in the model validation. Bias and RMSE were also lower for the generalized function. However, error analysis showed that there were no large differences between these two methodologies, evidencing that GWR provides results which are as good as the more frequently used LMM methodology, at least when no additional measurements are available for calibrating. Research highlights: GWR is a type of spatial analysis for exploring spatially heterogeneous processes. GWR can model spatial variation in the tree height-diameter relationship and its regression quality is comparable to LMM. The advantage of GWR over LMM is the possibility of determining the spatial location of every parameter without additional measurements. Abbreviations: GWR (Geographically Weighted Regression); LMM (Linear Mixed Model); SNFI (Spanish National Forest Inventory). (Author)
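    As a concrete illustration of the local height-diameter fitting step, the sketch below fits a Näslund-type curve h = 1.3 + d²/(a + b·d)² by ordinary least squares on its standard linearization. The data and parameters are invented for illustration; the Petterson and Cañadas I formulations actually used in the study may differ.

    ```python
    import numpy as np

    # Synthetic height-diameter data (hypothetical parameters, not from the study)
    rng = np.random.default_rng(0)
    a_true, b_true = 1.5, 0.32           # assumed curve parameters
    d = rng.uniform(8, 45, 200)          # diameters at breast height, cm
    h = 1.3 + d**2 / (a_true + b_true * d)**2
    h += rng.normal(0, 0.3, d.size)      # measurement noise, m

    # Naslund's model h = 1.3 + d^2 / (a + b d)^2 linearizes to
    #   d / sqrt(h - 1.3) = a + b d,
    # so ordinary least squares recovers (a, b) directly.
    y = d / np.sqrt(h - 1.3)
    b_hat, a_hat = np.polyfit(d, y, 1)   # slope = b, intercept = a

    print(round(a_hat, 2), round(b_hat, 2))
    ```

    GWR generalizes this step by refitting the regression at each location with distance-based weights, whereas LMM keeps one fixed curve and adds plot-level random effects.
    
    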

  16. Organizational capacity for change in health care: Development and validation of a scale.

    Science.gov (United States)

    Spaulding, Aaron; Kash, Bita A; Johnson, Christopher E; Gamm, Larry

    We do not have a strong understanding of a health care organization's capacity for attempting and completing multiple and sometimes competing change initiatives. Capacity for change implementation is a critical success factor as the health care industry is faced with ongoing demands for change and transformation because of technological advances, market forces, and regulatory environment. The aim of this study was to develop and validate a tool to measure health care organizations' capacity to change by building upon previous conceptualizations of absorptive capacity and organizational readiness for change. A multistep process was used to develop the organizational capacity for change survey. The survey was sent to two populations requesting answers to questions about the organization's leadership, culture, and technologies in use throughout the organization. Exploratory and confirmatory factor analyses were conducted to validate the survey as a measurement tool for organizational capacity for change in the health care setting. The resulting organizational capacity for change measurement tool proves to be a valid and reliable method of evaluating a hospital's capacity for change through the measurement of the population's perceptions related to leadership, culture, and organizational technologies. The organizational capacity for change measurement tool can help health care managers and leaders evaluate the capacity of employees, departments, and teams for change before large-scale implementation.

  17. Large scale structure from viscous dark matter

    CERN Document Server

    Blas, Diego; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-01-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale $k_m$ for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale $k_m$, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with $N$-body simulations up to scales $k=0.2\,h/$Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to varia...

  18. Cross-beam energy transfer: On the accuracy of linear stationary models in the linear kinetic regime

    Science.gov (United States)

    Debayle, A.; Masson-Laborde, P.-E.; Ruyer, C.; Casanova, M.; Loiseau, P.

    2018-05-01

    We present an extensive numerical study by means of particle-in-cell simulations of the energy transfer that occurs during the crossing of two laser beams. In the linear regime, when ions are not trapped in the potential well induced by the laser interference pattern, a very good agreement is obtained with a simple linear stationary model, provided the laser intensity is sufficiently smooth. These comparisons include different plasma compositions to cover the strong and weak Landau damping regimes as well as the multispecies case. The correct evaluation of the linear Landau damping at the phase velocity imposed by the laser interference pattern is essential to estimate the energy transfer rate between the laser beams, once the stationary regime is reached. The transient evolution obtained in kinetic simulations is also analysed by means of a full analytical formula that includes 3D beam energy exchange coupled with the ion acoustic wave response. Specific attention is paid to the energy transfer when the laser presents small-scale inhomogeneities. In particular, the energy transfer is reduced when the laser inhomogeneities are comparable with the Landau damping characteristic length of the ion acoustic wave.

  19. Convergent evolution and mimicry of protein linear motifs in host-pathogen interactions.

    Science.gov (United States)

    Chemes, Lucía Beatriz; de Prat-Gay, Gonzalo; Sánchez, Ignacio Enrique

    2015-06-01

    Pathogen linear motif mimics are highly evolvable elements that facilitate rewiring of host protein interaction networks. Host linear motifs and pathogen mimics differ in sequence, leading to thermodynamic and structural differences in the resulting protein-protein interactions. Moreover, the functional output of a mimic depends on the motif and domain repertoire of the pathogen protein. Regulatory evolution mediated by linear motifs can be understood by measuring evolutionary rates, quantifying positive and negative selection and performing phylogenetic reconstructions of linear motif natural history. Convergent evolution of linear motif mimics is widespread among unrelated proteins from viral, prokaryotic and eukaryotic pathogens and can also take place within individual protein phylogenies. Statistics, biochemistry and laboratory models of infection link pathogen linear motifs to phenotypic traits such as tropism, virulence and oncogenicity. In vitro evolution experiments and analysis of natural sequences suggest that changes in linear motif composition underlie pathogen adaptation to a changing environment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Modification of cementitious binder characteristics following a change in manufacturing scale

    International Nuclear Information System (INIS)

    Coninck, P. de; Ferre, B.; Moinard, M.; Tronche, E.

    2015-01-01

    CEA is developing conditioning processes for the disposal of legacy nuclear waste. One of the waste materials is magnesium cladding removed from fuel elements irradiated in nuclear reactors. The final specifications that must be met by the packages mainly include mechanical strength, cracking, waste immobilization, and H2 release. A matrix material that complies with the requirements has been selected: a geopolymer mortar. The purpose of this study was to measure the impact of a change in scale on the matrix material characteristics, with the objective of industrializing a solid magnesium waste retrieval process. The process parameters tested were different production volumes (0.7, 210 and 1000 litre packages) and process temperatures (10, 22 and 40 °C). Three types of mixers were used to scale up the production volume. The results show that the process temperature has a significant impact on the viscosity, workability time and temperature of the matrix. The size of the mixers did not significantly influence the material characteristics.

  1. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    Science.gov (United States)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  2. Scaling environmental change through the community level: a trait-based response-and-effect framework for plants

    Science.gov (United States)

    Katharine N. Suding; Sandra Lavorel; F. Stuart Chapin; Johannes H.C. Cornelissen; Sandra Diaz; Eric Garnier; Deborah Goldberg; David U. Hooper; Stephen T. Jackson; Marie-Laure. Navas

    2008-01-01

    Predicting ecosystem responses to global change is a major challenge in ecology. A critical step in that challenge is to understand how changing environmental conditions influence processes across levels of ecological organization. While direct scaling from individual to ecosystem dynamics can lead to robust and mechanistic predictions, new approaches are needed to...

  3. A Scale-Explicit Framework for Conceptualizing the Environmental Impacts of Agricultural Land Use Changes

    Directory of Open Access Journals (Sweden)

    Iago Lowe Hale

    2014-11-01

    Full Text Available Demand for locally-produced food is growing in areas outside traditionally dominant agricultural regions due to concerns over food safety, quality, and sovereignty; rural livelihoods; and environmental integrity. Strategies for meeting this demand rely upon agricultural land use change, in various forms of either intensification or extensification (converting non-agricultural land, including native landforms, to agricultural use. The nature and extent of the impacts of these changes on non-food-provisioning ecosystem services are determined by a complex suite of scale-dependent interactions among farming practices, site-specific characteristics, and the ecosystem services under consideration. Ecosystem modeling strategies which honor such complexity are often impenetrable by non-experts, resulting in a prevalent conceptual gap between ecosystem sciences and the field of sustainable agriculture. Referencing heavily forested New England as an example, we present a conceptual framework designed to synthesize and convey understanding of the scale- and landscape-dependent nature of the relationship between agriculture and various ecosystem services. By accounting for the total impact of multiple disturbances across a landscape while considering the effects of scale, the framework is intended to stimulate and support the collaborative efforts of land managers, scientists, citizen stakeholders, and policy makers as they address the challenges of expanding local agriculture.

  4. The effect of millennial-scale changes in Arabian Sea denitrification on atmospheric CO2

    International Nuclear Information System (INIS)

    Altabet, M.A.; Higginson, M.J.; Murray, D.W.

    2002-01-01

    Most global biogeochemical processes are known to respond to climate change, some of which have the capacity to produce feedbacks through the regulation of atmospheric greenhouse gases. Marine denitrification - the reduction of nitrate to gaseous nitrogen - is an important process in this regard, affecting greenhouse gas concentrations directly through the incidental production of nitrous oxide, and indirectly through modification of the marine nitrogen inventory and hence the biological pump for CO2. Although denitrification has been shown to vary with glacial-interglacial cycles, its response to more rapid climate change has not yet been well characterized. Here we present nitrogen isotope ratio, nitrogen content and chlorin abundance data from sediment cores with high accumulation rates on the Oman continental margin that reveal substantial millennial-scale variability in Arabian Sea denitrification and productivity during the last glacial period. The detailed correspondence of these changes with Dansgaard-Oeschger events recorded in Greenland ice cores indicates rapid, century-scale reorganization of the Arabian Sea ecosystem in response to climate excursions, mediated through the intensity of summer monsoonal upwelling. Considering the several-thousand-year residence time of fixed nitrogen in the ocean, the response of global marine productivity to changes in denitrification would have occurred at lower frequency and appears to be related to climatic and atmospheric CO2 oscillations observed in Antarctic ice cores between 20 and 60 kyr ago. (author)

  5. Observations and 3D hydrodynamics-based modeling of decadal-scale shoreline change along the Outer Banks, North Carolina

    Science.gov (United States)

    Safak, Ilgar; List, Jeffrey; Warner, John C.; Kumar, Nirnimesh

    2017-01-01

    Long-term decadal-scale shoreline change is an important parameter for quantifying the stability of coastal systems. The decadal-scale coastal change is controlled by processes that occur on short time scales (such as storms) and long-term processes (such as prevailing waves). The ability to predict decadal-scale shoreline change is not well established and the fundamental physical processes controlling this change are not well understood. Here we investigate the processes that create large-scale long-term shoreline change along the Outer Banks of North Carolina, an uninterrupted 60 km stretch of coastline, using both observations and a numerical modeling approach. Shoreline positions for a 24-yr period were derived from aerial photographs of the Outer Banks. Analysis of the shoreline position data showed that, although variable, the shoreline eroded an average of 1.5 m/yr throughout this period. The modeling approach uses a three-dimensional hydrodynamics-based numerical model coupled to a spectral wave model and simulates the full 24-yr time period on a spatial grid running on a short (second-scale) time-step to compute the sediment transport patterns. The observations and the model results show similar magnitudes (O(10^5 m^3/yr)) and patterns of alongshore sediment fluxes. Both the observed and the modeled alongshore sediment transport rates change more rapidly in the north of our section, due to the continuously curving coastline and possible effects of alongshore variations in shelf bathymetry. The southern section, with its relatively uniform orientation, on the other hand, shows less rapid changes in transport rates. Alongshore gradients of the modeled sediment fluxes are translated into shoreline change rates that agree with observations in some locations but differ in others. Differences between observations and model results are potentially influenced by geologic framework processes not included in the model. Both the observations and the model results show higher rates of
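    The step from alongshore flux gradients to shoreline change rates can be sketched with the standard one-line relation dy/dt = -(1/D_c) dQ/dx, where D_c is the depth of closure. The grid, closure depth and flux profile below are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    # One-line shoreline-change sketch: erosion/accretion follows the
    # alongshore gradient of the sediment flux Q (all numbers hypothetical).
    dx = 1000.0                          # alongshore grid spacing, m
    x = np.arange(0.0, 60_000.0, dx)     # a 60 km stretch, as in the study area
    Dc = 10.0                            # assumed depth of closure, m
    Q = 1.0e5 * (1 + 0.5 * x / x[-1])    # flux increasing downdrift, m^3/yr

    dQdx = np.gradient(Q, dx)            # m^3/yr per m of coast
    dydt = -dQdx / Dc                    # shoreline change rate, m/yr

    print(round(dydt.mean(), 3))         # negative values indicate erosion
    ```

    A flux that increases downdrift (divergent transport) yields uniform retreat here; in the real model runs, the spatially varying gradients produce the alongshore pattern of erosion and accretion described above.
    
    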

  6. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere

    OpenAIRE

    Rossi, Sergio; Anfodillo, Tommaso; Čufar, Katarina; Cuny, Henri E.; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gričar, Jožica; Gruber, Andreas; King, Gregory M.; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B. K.

    2017-01-01

    Background and Aims Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-l...

  7. Gravitational field of static p -branes in linearized ghost-free gravity

    Science.gov (United States)

    Boos, Jens; Frolov, Valeri P.; Zelnikov, Andrei

    2018-04-01

    We study the gravitational field of static p-branes in D-dimensional Minkowski space in the framework of linearized ghost-free (GF) gravity. The concrete models of GF gravity we consider are parametrized by the nonlocal form factors exp(-□/μ²) and exp(□²/μ⁴), where μ⁻¹ is the scale of nonlocality. We show that the singular behavior of the gravitational field of p-branes in general relativity is cured by short-range modifications introduced by the nonlocalities, and we derive exact expressions for the regularized gravitational fields, whose geometry can be written as a warped metric. For distances large compared to the scale of nonlocality, μr → ∞, our solutions approach those found in linearized general relativity.
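    The flavour of this regularization can be seen in the simplest case, a point mass (0-brane) in four dimensions with the first form factor. A standard linearized ghost-free result (conventions may differ from the paper) is

    ```latex
    \varphi(r) \;=\; -\,\frac{G M}{r}\,\operatorname{erf}\!\left(\frac{\mu r}{2}\right),
    \qquad
    \varphi(r) \;\to\; -\frac{G M}{r} \;\; (\mu r \to \infty),
    \qquad
    \varphi(0) \;=\; -\,\frac{G M \mu}{\sqrt{\pi}} \;\;\text{(finite)} .
    ```

    The error function suppresses the Newtonian singularity at r = 0 while leaving the field unmodified far beyond the nonlocality scale, exactly the short-range behaviour the abstract describes.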

  8. National Scale Prediction of Soil Carbon Sequestration under Scenarios of Climate Change

    Science.gov (United States)

    Izaurralde, R. C.; Thomson, A. M.; Potter, S. R.; Atwood, J. D.; Williams, J. R.

    2006-12-01

    Carbon sequestration in agricultural soils is gaining momentum as a tool to mitigate the rate of increase of atmospheric CO2. Researchers from the Pacific Northwest National Laboratory, Texas A&M University, and USDA-NRCS used the EPIC model to develop national-scale predictions of soil carbon sequestration with adoption of no till (NT) under scenarios of climate change. In its current form, the EPIC model simulates soil C changes resulting from heterotrophic respiration and wind / water erosion. Representative modeling units were created to capture the climate, soil, and management variability at the 8-digit hydrologic unit (USGS classification) watershed scale. The soils selected represented at least 70% of the variability within each watershed. This resulted in 7,540 representative modeling units for 1,412 watersheds. Each watershed was assigned a major crop system: corn, soybean, spring wheat, winter wheat, cotton, hay, alfalfa, corn-soybean rotation or wheat-fallow rotation based on information from the National Resource Inventory. Each representative farm was simulated with conventional tillage and no tillage, and with and without irrigation. Climate change scenarios for two future periods (2015-2045 and 2045-2075) were selected from GCM model runs using the IPCC SRES scenarios of A2 and B2 from the UK Hadley Center (HadCM3) and US DOE PCM (PCM) models. Changes in mean and standard deviation of monthly temperature and precipitation were extracted from gridded files and applied to baseline climate (1960-1990) for each of the 1,412 modeled watersheds. Modeled crop yields were validated against historical USDA NASS county yields (1960-1990). The HadCM3 model predicted the most severe changes in climate parameters. Overall, there would be little difference between the A2 and B2 scenarios. Carbon offsets were calculated as the difference in soil C change between conventional and no till. 
Overall, C offsets during the first 30-y period (513 Tg C) are predicted to

  9. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    Science.gov (United States)

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Scaling of divertor heat flux profile widths in DIII-D

    International Nuclear Information System (INIS)

    Lasnier, C.J.; Makowski, M.A.; Boedo, J.A.; Allen, S.L.; Brooks, N.H.; Hill, D.N.; Leonard, A.W.; Watkins, J.G.; West, W.P.

    2011-01-01

    New scalings of the dependence of divertor heat flux peak and profile width, important parameters for the design of future large tokamaks, have been obtained from recent DIII-D experiments. We find the peak heat flux depends linearly on input power, decreases linearly with increasing density, and increases linearly with plasma current. The profile width has a weak dependence on input power, is independent of density up to the onset of detachment, and is inversely proportional to the plasma current. We compare these results with previously published scalings, and present mathematical expressions incorporating these results.
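    The kind of multi-linear scaling described here (peak heat flux linear in input power, density and plasma current) can be illustrated with an ordinary least-squares fit on a design matrix. The coefficients and data ranges below are invented for illustration and are not DIII-D values.

    ```python
    import numpy as np

    # Hypothetical extraction of a scaling q_peak = c0 + c1*P - c2*n_e + c3*I_p
    # from experiment-like data (all coefficients and ranges invented).
    rng = np.random.default_rng(1)
    n_obs = 300
    P = rng.uniform(2, 15, n_obs)       # input power, MW
    ne = rng.uniform(2, 8, n_obs)       # line-averaged density, 10^19 m^-3
    Ip = rng.uniform(0.8, 2.0, n_obs)   # plasma current, MA
    q = 0.5 + 0.4 * P - 0.3 * ne + 1.2 * Ip + rng.normal(0, 0.1, n_obs)

    # Ordinary least squares on the design matrix [1, P, n_e, I_p]
    A = np.column_stack([np.ones(n_obs), P, ne, Ip])
    coef, *_ = np.linalg.lstsq(A, q, rcond=None)

    print(np.round(coef, 2))
    ```

    The signs of the recovered coefficients mirror the qualitative scaling in the abstract: positive for power and current, negative for density.
    
    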

  11. Non-linear Matter Spectra in Coupled Quintessence

    CERN Document Server

    Saracco, F; Tetradis, N; Pettorino, V; Robbers, G

    2010-01-01

    We consider cosmologies in which a dark-energy scalar field interacts with cold dark matter. The growth of perturbations is followed beyond the linear level by means of the time-renormalization-group method, which is extended to describe a multi-component matter sector. Even in the absence of the extra interaction, a scale-dependent bias is generated as a consequence of the different initial conditions for baryons and dark matter after decoupling. The effect is greatly enhanced by the extra coupling and can be at the percent level in the range of scales of baryonic acoustic oscillations. We compare our results with N-body simulations, finding very good agreement.

  12. ACADEMIC TRAINING Progress on e+e- Linear Colliders

    CERN Multimedia

    Françoise Benz

    2002-01-01

    27, 28, 29, 30, 31 May LECTURE SERIES from 11.00 to 12.00 hrs - Auditorium, bldg. 500 Progress on e+e- Linear Colliders by P. Zerwas / DESY (D) and R. Siemann / SLAC (USA). Physics issues (P. Zerwas - 27, 28 May): The physics program will be reviewed for e+e- linear colliders in the TeV energy range. At these prospective facilities central issues of particle physics can be addressed: the problem of mass, unification, and the structure of space-time. In this context the two lectures will focus on analyses of the Higgs mechanism, supersymmetry and extra space dimensions. Moreover, high-precision studies of the top-quark and the gauge boson sector will be discussed. Combined with LHC results, a comprehensive picture can be developed of physics at the electroweak scale and beyond. Designs and technologies (R. Siemann - 29, 30, 31 May): The physics and technologies of high energy linear colliders will be reviewed. Fundamental concepts of linear colliders will be introduced. They will be discussed in: the context of the Sta...

  13. Groundwater decline and tree change in floodplain landscapes: Identifying non-linear threshold responses in canopy condition

    Directory of Open Access Journals (Sweden)

    J. Kath

    2014-12-01

    Groundwater decline is widespread, yet its implications for natural systems are poorly understood. Previous research has revealed links between groundwater depth and tree condition; however, critical thresholds which might indicate ecological ‘tipping points’ associated with rapid and potentially irreversible change have been difficult to quantify. This study collated data for two dominant floodplain species, Eucalyptus camaldulensis (river red gum) and E. populnea (poplar box), from 118 sites in eastern Australia where significant groundwater decline has occurred. Boosted regression trees, quantile regression and Threshold Indicator Taxa Analysis were used to investigate the relationship between tree condition and groundwater depth. Distinct non-linear responses were found, with groundwater depth thresholds identified in the range from 12.1 m to 22.6 m for E. camaldulensis and 12.6 m to 26.6 m for E. populnea, beyond which canopy condition declined abruptly. Non-linear threshold responses in canopy condition in these species may be linked to rooting depth, with chronic groundwater decline decoupling trees from deep soil moisture resources. The quantification of groundwater depth thresholds is likely to be critical for management aimed at conserving groundwater dependent biodiversity. Identifying thresholds will be important in regions where water extraction and drying climates may contribute to further groundwater decline. Keywords: Canopy condition, Dieback, Drought, Tipping point, Ecological threshold, Groundwater dependent ecosystems
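    The study above identifies thresholds with boosted regression trees, quantile regression, and Threshold Indicator Taxa Analysis; none of those methods is reproduced here. As a much simpler sketch of the underlying idea, a single breakpoint in a depth-response relationship can be located by scanning candidate split points and minimizing the within-segment squared error (the data values in the test are invented, not from the study):

```python
def find_threshold(depth, condition):
    """Toy breakpoint scan (not the TITAN/quantile-regression methods of
    the study): try each candidate groundwater depth as a split, fit a
    separate mean on each side, and return the split that minimizes the
    total within-segment squared error."""
    pairs = sorted(zip(depth, condition))
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]

    def sse(segment):
        # Squared error of a segment around its own mean.
        if not segment:
            return 0.0
        m = sum(segment) / len(segment)
        return sum((v - m) ** 2 for v in segment)

    best = min(range(1, len(xs)),
               key=lambda i: sse(ys[:i]) + sse(ys[i:]))
    # Report the midpoint between the two observations straddling the split.
    return (xs[best - 1] + xs[best]) / 2.0
```

    With canopy condition dropping sharply between shallow and deep groundwater sites, the scan recovers a threshold between the two groups.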

  14. Effects of Grafting Density on Block Polymer Self-Assembly: From Linear to Bottlebrush.

    Science.gov (United States)

    Lin, Tzu-Pin; Chang, Alice B; Luo, Shao-Xiong; Chen, Hsiang-Yun; Lee, Byeongdu; Grubbs, Robert H

    2017-11-28

    Grafting density is an important structural parameter that exerts significant influences over the physical properties of architecturally complex polymers. In this report, the physical consequences of varying the grafting density (z) were studied in the context of block polymer self-assembly. Well-defined block polymers spanning the linear, comb, and bottlebrush regimes (0 ≤ z ≤ 1) were prepared via grafting-through ring-opening metathesis polymerization. ω-Norbornenyl poly(d,l-lactide) and polystyrene macromonomers were copolymerized with discrete comonomers in different feed ratios, enabling precise control over both the grafting density and molecular weight. Small-angle X-ray scattering experiments demonstrate that these graft block polymers self-assemble into long-range-ordered lamellar structures. For 17 series of block polymers with variable z, the scaling of the lamellar period with the total backbone degree of polymerization (d* ∼ N_bb^α) was studied. The scaling exponent α monotonically decreases with decreasing z and exhibits an apparent transition at z ≈ 0.2, suggesting significant changes in the chain conformations. Comparison of two block polymer systems, one that is strongly segregated for all z (System I) and one that experiences weak segregation at low z (System II), indicates that the observed trends are primarily caused by the polymer architectures, not segregation effects. A model is proposed in which the characteristic ratio (C_∞), a proxy for the backbone stiffness, scales with N_bb as a function of the grafting density: C_∞ ∼ N_bb^f(z). The scaling behavior disclosed herein provides valuable insights into conformational changes with grafting density, thus introducing opportunities for block polymer and material design.
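    A scaling exponent such as α in d* ∼ N_bb^α is conventionally estimated by ordinary least squares in log-log space. A minimal sketch (the numbers in the test are synthetic, not data from the report):

```python
import math

def fit_power_law(n_bb, d_star):
    """Least-squares fit of log(d*) = log(k) + alpha * log(N_bb),
    returning the scaling exponent alpha and prefactor k."""
    xs = [math.log(n) for n in n_bb]
    ys = [math.log(d) for d in d_star]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    alpha = sxy / sxx                  # slope in log-log space = exponent
    k = math.exp(ybar - alpha * xbar)  # intercept back-transformed
    return alpha, k
```

    Exponents fitted this way for each grafting-density series can then be compared to reveal the transition in α described above.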

  15. Landscape-and regional-scale shifts in forest composition under climate change in the Central Hardwood Region of the United States

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Frank R. Thompson; Jacob S. Fraser; William D. Dijak

    2016-01-01

    Tree species distribution and abundance are affected by forces operating at multiple scales. Niche and biophysical process models have been commonly used to predict climate change effects at regional scales; however, these models have limited capability to include site-scale population dynamics and landscape-scale disturbance and dispersal. We applied a landscape...

  16. A non-linear kinematic hardening function

    International Nuclear Information System (INIS)

    Ottosen, N.S.

    1977-05-01

    Based on the classical theory of plasticity, and accepting the von Mises criterion as the initial yield criterion, a non-linear kinematic hardening function applicable both to Melan-Prager's and to Ziegler's hardening rule is proposed. This non-linear hardening function is determined by means of the uniaxial stress-strain curve, and any such curve is applicable. The proposed hardening function considers the problem of general reversed loading, and a smooth change in the behaviour from one plastic state to another nearby plastic state is obtained. A review of both the kinematic hardening theory and the corresponding non-linear hardening assumptions is given, and it is shown that material behaviour is identical whether Melan-Prager's or Ziegler's hardening rule is applied, provided that the von Mises yield criterion is adopted. (author)

  17. Hydrological Impacts of Land Use Change in the Central Appalachian Mountains, U.S.: A Multi-Scale Analysis

    Science.gov (United States)

    Eshleman, K. N.; Negley, T. L.; Townsend, P. A.

    2003-12-01

    Quantifying, understanding, and predicting the hydrological impacts of land use changes and land management practices are important objectives of both the academic hydrologist and the civil engineer. Relationships between stormflow response and land use have been most readily observed at small spatial scales (e.g., hillslopes, small experimental watersheds), but have proved difficult to establish in larger basins where (1) high-resolution precipitation data are usually unavailable, (2) land use patterns are often exceedingly complex, and (3) land use changes are essentially uncontrolled. In the Central Appalachian Mountains of the U.S., conversion of forests to mined lands (through devegetation, excavation of overburden and coal deposits, and subsequent reclamation) is the dominant land use change presently occurring. In the Georges Creek basin in western Maryland, for example, the portion of the watershed classified as mined (including active, reclaimed, and abandoned surface mines) increased from 3.8 to 15.5% from 1962 to 1997; modest urbanization of the basin (2.4 to 4.7%) also occurred during this period. In 1999, we initiated a comparative field study to determine if surface coal-mining and subsequent land reclamation practices affect stormflow responses at multiple spatial scales: (1) plot, (2) small watershed, and (3) river basin scales. Results from the plot-scale experiments suggested that soil infiltration capacity is grossly reduced during mining and reclamation, apparently due to loss of forest litter and soil compaction by heavy machinery. At the small watershed (<25 ha) scale, a comparative analysis of a pair of gaged watersheds indicated that conventional methods of surface mining and reclamation can increase peak stormflow, total storm runoff, and storm runoff coefficient by about 250% relative to similar forested watersheds in the same region. Finally, frequency analysis of long-term runoff data from the larger, extensively-mined Georges Creek

  18. Measurement of changes in linear accelerator photon energy through flatness variation using an ion chamber array

    International Nuclear Information System (INIS)

    Gao Song; Balter, Peter A.; Rose, Mark; Simon, William E.

    2013-01-01

    Purpose: To compare the use of flatness versus percent depth dose (PDD) for determining changes in photon beam energy for a megavoltage linear accelerator. Methods: Energy changes were accomplished by adjusting the bending magnet current by up to ±15% in 5% increments away from the value used clinically. Two metrics for flatness, relative flatness in the central 80% of the field (Flat) and average maximum dose along the diagonals normalized by central axis dose (F_DN), were measured using a commercially available planar ionization chamber array. PDD was measured in water at depths of 5 and 10 cm in 3 × 3 cm² and 10 × 10 cm² fields using a cylindrical chamber. Results: PDD was more sensitive to changes in energy when the beam energy was increased than when it was decreased. For the 18-MV beam in particular, PDD was not sensitive to energy reductions below the nominal energy. The value of Flat was found to be more sensitive to decreases in energy than to increases, with little sensitivity to energy increases above the nominal energy for 18-MV beams. F_DN was the only metric that was found to be sensitive to both increases and reductions of energy for both the 6- and 18-MV beams. Conclusions: Flatness-based metrics were found to be more sensitive to energy changes than PDD. In particular, F_DN was found to be the most sensitive metric to energy changes for photon beams of 6 and 18 MV. The ionization chamber array allows this metric to be conveniently measured as part of routine accelerator quality assurance.
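    The abstract does not give the exact formula used for its flatness metrics; the sketch below assumes the common definition of relative flatness over the central 80% of a profile, 100·(Dmax − Dmin)/(Dmax + Dmin), and should be treated as illustrative only (F_DN is paper-specific and is not reproduced):

```python
def relative_flatness(profile):
    """Relative flatness over the central 80% of a 1-D dose profile,
    using the assumed common definition:
    100 * (Dmax - Dmin) / (Dmax + Dmin)."""
    n = len(profile)
    lo, hi = round(0.1 * n), round(0.9 * n)  # keep central 80% of samples
    central = profile[lo:hi]
    dmax, dmin = max(central), min(central)
    return 100.0 * (dmax - dmin) / (dmax + dmin)
```

    Applied to an array measurement at two energies, a shift in this number between routine QA sessions would flag a possible energy change, in the spirit of the study.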

  19. Universal scaling and nonlinearity of aggregate price impact in financial markets

    Science.gov (United States)

    Patzelt, Felix; Bouchaud, Jean-Philippe

    2018-01-01

    How and why stock prices move is a centuries-old question still not answered conclusively. More recently, attention shifted to higher frequencies, where trades are processed piecewise across different time scales. Here we reveal that price impact has a universal nonlinear shape for trades aggregated on any intraday scale. Its shape varies little across instruments, but drastically different master curves are obtained for order-volume and -sign impact. The scaling is largely determined by the relevant Hurst exponents. We further show that extreme order-flow imbalance is not associated with large returns. To the contrary, it is observed when the price is pinned to a particular level. Prices move only when there is sufficient balance in the local order flow. In fact, the probability that a trade changes the midprice falls to zero with increasing (absolute) order-sign bias along an arc-shaped curve for all intraday scales. Our findings challenge the widespread assumption of linear aggregate impact. They imply that market dynamics on all intraday time scales are shaped by correlations and bilateral adaptation in the flows of liquidity provision and taking.

  20. Defining the minimal detectable change in scores on the eight-item Morisky Medication Adherence Scale.

    Science.gov (United States)

    Muntner, Paul; Joyce, Cara; Holt, Elizabeth; He, Jiang; Morisky, Donald; Webber, Larry S; Krousel-Wood, Marie

    2011-05-01

    Self-report scales are used to assess medication adherence. Data on how to discriminate change in self-reported adherence over time from random variability are limited. To determine the minimal detectable change for scores on the 8-item Morisky Medication Adherence Scale (MMAS-8). The MMAS-8 was administered twice, using a standard telephone script, with administration separated by 14-22 days, to 210 participants taking antihypertensive medication in the CoSMO (Cohort Study of Medication Adherence among Older Adults). MMAS-8 scores were calculated and participants were grouped into previously defined categories (<6, 6 to <8, and 8 for low, medium, and high adherence). The mean (SD) age of participants was 78.1 (5.8) years, 43.8% were black, and 68.1% were women. Overall, 8.1% (17/210), 16.2% (34/210), and 51.0% (107/210) of participants had low, medium, and high MMAS-8 scores, respectively, at both survey administrations (overall agreement 75.2%; 158/210). The weighted κ statistic was 0.63 (95% CI 0.53 to 0.72). The intraclass correlation coefficient was 0.78. The within-person standard error of the mean for change in MMAS-8 scores was 0.81, which equated to a minimal detectable change of 1.98 points. Only 4.3% (9/210) of the participants had a change in MMAS-8 of 2 or more points between survey administrations. Within-person changes in MMAS-8 scores of 2 or more points over time may represent a real change in antihypertensive medication adherence.
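    Conventions for computing the minimal detectable change vary, and the textbook formula will not necessarily reproduce the abstract's exact figures (SEM 0.81, MDC 1.98); the sketch below shows the widely used ICC-based form, with all constants and inputs illustrative:

```python
import math

def sem_from_reliability(sd, icc):
    """Standard error of measurement from the scale's standard deviation
    and its test-retest reliability (e.g., an intraclass correlation)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at 95% confidence for a test-retest
    design: the smallest observed change exceeding the measurement noise
    of two administrations (MDC95 = 1.96 * sqrt(2) * SEM)."""
    return 1.96 * math.sqrt(2.0) * sem
```

    A change smaller than the MDC computed this way cannot be distinguished from random test-retest variability.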

  1. Structure formation with massive neutrinos. Going beyond linear theory

    International Nuclear Information System (INIS)

    Blas, Diego; Garny, Mathias; Konstandin, Thomas; Lesgourgues, Julien; Institut de Theorie Phenomenes Physiques EPFL, Lausanne; Savoie Univ., CNRS, Annecy-le-Vieux

    2014-08-01

    We compute non-linear corrections to the matter power spectrum taking the time- and scale-dependent free-streaming length of neutrinos into account. We adopt a hybrid scheme that matches the full Boltzmann hierarchy to an effective two-fluid description at an intermediate redshift. The non-linearities in the neutrino component are taken into account by using an extension of the time-flow framework. We point out that this remedies a spurious behaviour that occurs when neglecting non-linear terms for neutrinos. This behaviour is related to how efficiently short modes decouple from long modes and can be traced back to the violation of momentum conservation if neutrinos are treated linearly. Furthermore, we compare our results at next to leading order to various other methods and quantify the accuracy of the fluid description. Due to the correct decoupling behaviour of short modes, the two-fluid scheme is a suitable starting point to compute higher orders in perturbations or for resummation methods.

  2. Structure formation with massive neutrinos: going beyond linear theory

    CERN Document Server

    Blas, Diego; Konstandin, Thomas; Lesgourgues, Julien

    2014-01-01

    We compute non-linear corrections to the matter power spectrum taking the time- and scale-dependent free-streaming length of neutrinos into account. We adopt a hybrid scheme that matches the full Boltzmann hierarchy to an effective two-fluid description at an intermediate redshift. The non-linearities in the neutrino component are taken into account by using an extension of the time-flow framework. We point out that this remedies a spurious behaviour that occurs when neglecting non-linear terms for neutrinos. This behaviour is related to how efficiently short modes decouple from long modes and can be traced back to the violation of momentum conservation if neutrinos are treated linearly. Furthermore, we compare our results at next to leading order to various other methods and quantify the accuracy of the fluid description. Due to the correct decoupling behaviour of short modes, the two-fluid scheme is a suitable starting point to compute higher orders in perturbations or for resummation methods.

  3. On non-linear dynamics of a coupled electro-mechanical system

    DEFF Research Database (Denmark)

    Darula, Radoslav; Sorokin, Sergey

    2012-01-01

    Electro-mechanical devices are an example of coupled multi-disciplinary weakly non-linear systems. The dynamics of such systems is described in this paper by means of two mutually coupled differential equations. The first one, describing the electrical system, is of the first order and the second one, for the mechanical system, is of the second order. The governing equations are coupled via linear and weakly non-linear terms. A classical perturbation method, the method of multiple scales, is used to find a steady-state response of the electro-mechanical system exposed to a harmonic close-resonance mechanical excitation. The results are verified using a numerical model created in the MATLAB Simulink environment. The effect of non-linear terms on the dynamical response of the coupled system is investigated; the backbone and envelope curves are analyzed. The two phenomena which exist in the electro-mechanical system: (a)...

  4. Solution of generalized shifted linear systems with complex symmetric matrices

    International Nuclear Information System (INIS)

    Sogabe, Tomohiro; Hoshi, Takeo; Zhang, Shao-Liang; Fujiwara, Takeo

    2012-01-01

    We develop the shifted COCG method [R. Takayama, T. Hoshi, T. Sogabe, S.-L. Zhang, T. Fujiwara, Linear algebraic calculation of Green’s function for large-scale electronic structure theory, Phys. Rev. B 73 (165108) (2006) 1–9] and the shifted WQMR method [T. Sogabe, T. Hoshi, S.-L. Zhang, T. Fujiwara, On a weighted quasi-residual minimization strategy of the QMR method for solving complex symmetric shifted linear systems, Electron. Trans. Numer. Anal. 31 (2008) 126–140] for solving generalized shifted linear systems with complex symmetric matrices that arise from the electronic structure theory. The complex symmetric Lanczos process with a suitable bilinear form plays an important role in the development of the methods. The numerical examples indicate that the methods are highly attractive when the inner linear systems can efficiently be solved.
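    The point of shifted Krylov methods such as shifted COCG is that the whole family of systems (A + σI)x = b can be solved from a single Krylov subspace built for one seed shift. The sketch below is only a dense-matrix correctness baseline for that problem family, not an implementation of the shifted COCG or WQMR algorithms:

```python
import numpy as np

def solve_shifted_systems(A, b, sigmas):
    """Direct (reference) solution of the shifted family (A + sigma*I) x = b
    for each shift sigma. Shifted Krylov solvers obtain all of these from
    one Krylov subspace; this dense version serves only as a small-scale
    baseline for checking such solvers."""
    n = A.shape[0]
    return {s: np.linalg.solve(A + s * np.eye(n), b) for s in sigmas}
```

    For the large sparse matrices of electronic structure theory, the dense solve is of course infeasible, which is exactly what motivates the shifted iterative methods of the paper.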

  5. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package 'ATLAS' has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines: basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large-scale plasma-fluid simulations. (auth.)

  6. The scaling structure of the global road network.

    Science.gov (United States)

    Strano, Emanuele; Giometto, Andrea; Shai, Saray; Bertuzzo, Enrico; Mucha, Peter J; Rinaldo, Andrea

    2017-10-01

    Because of increasing global urbanization and its immediate consequences, including changes in patterns of food demand, circulation and land use, the next century will witness a major increase in the extent of paved roads built worldwide. To model the effects of this increase, it is crucial to understand whether possible self-organized patterns are inherent in the global road network structure. Here, we use the largest updated database comprising all major roads on the Earth, together with global urban and cropland inventories, to suggest that road length distributions within croplands are indistinguishable from urban ones, once rescaled to account for the difference in mean road length. Such similarity extends to road length distributions within urban or agricultural domains of a given area. We find two distinct regimes for the scaling of the mean road length with the associated area, holding in general at small and at large values of the latter. In suitably large urban and cropland domains, we find that mean and total road lengths increase linearly with their domain area, differently from earlier suggestions. Scaling regimes suggest that simple and universal mechanisms regulate urban and cropland road expansion at the global scale. As such, our findings bear implications for global road infrastructure growth based on land-use change and for planning policies sustaining urban expansions.

  7. Transverse beam dynamics in non-linear Fixed Field Alternating Gradient accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Haj, Tahar M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Meot, F. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2016-03-02

    In this paper, we present some aspects of the transverse beam dynamics in Fixed Field Ring Accelerators (FFRA): we start from the basic principles in order to derive the linearized transverse particle equations of motion for FFRAs (essentially FFAGs and cyclotrons are considered here). This is a simple extension of a previous work valid for linear lattices, which we generalized by including the bending terms to ensure its correctness for FFAG lattices. The space charge term (the contribution of the internal Coulomb forces of the beam) is included as well, although it is not discussed here. The emphasis is on the scaling FFAG type: a collaboration work is undertaken in view of better understanding the properties of the 150 MeV scaling FFAG at KURRI in Japan, and progress towards high-intensity operation. Some results of the benchmarking work between different codes are presented. Analysis of certain types of field imperfections revealed some interesting features about this machine that explain some of the experimental results and generalize the concept of a scaling FFAG to a non-scaling one for which the tune variations obey a well-defined law.

  8. Laboratory beam-plasma interactions linear and nonlinear

    International Nuclear Information System (INIS)

    Christiansen, P.J.; Bond, J.W.; Jain, V.K.

    1982-01-01

    This chapter attempts to demonstrate that despite unavoidable scaling limitations, laboratory experiments can uncover details of beam plasma interaction processes which could never be revealed through space plasma physics. Topics covered include linear theory, low frequency couplings, indirect effects, nonlinear effects, quasi-linear effects, trapping effects, nonlinear wave-wave interactions, and self modulation and cavitation. Unstable electrostatic waves arising from an exchange of energy with the ''free energy'' beam features are considered as kinetic and as hydrodynamic, or fluid, instabilities. The consequences of such instabilities (e.g. when the waves have grown to a finite level) are examined and some studies are reviewed which have attempted to understand how the free energy originally available in the beam is redistributed to produce a final state of equilibrium turbulence

  9. Analysis of chromosome aberration data by hybrid-scale models

    International Nuclear Information System (INIS)

    Indrawati, Iwiq; Kumazawa, Shigeru

    2000-02-01

    This paper presents a new methodology for analyzing data of chromosome aberrations, which is useful to understand the characteristics of dose-response relationships and to construct the calibration curves for the biological dosimetry. The hybrid scale of linear and logarithmic scales gives rise to a particular plotting paper, where the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, and these are conveniently called hybrid scale models. One can systematically select the best-fit model among the nine models by examining the conditions for a straight line of data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of the increased number of model parameters. We showed that the hybrid-hybrid model (both variables of dose and response using the hybrid scale) provides the best-fit straight lines to be used as the reliable and readable calibration curves of chromosome aberrations. (author)
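    The paper's scheme selects among nine hybrid-scale models; a reduced sketch of the same idea fits a straight line under the four pure axis-scale combinations and keeps the one with the smallest residual on the transformed scales (note that comparing residuals across different y-transforms is only a heuristic, and the test data are invented):

```python
import math

def best_scale_model(dose, response):
    """Illustrative reduced version of hybrid-scale model selection:
    fit y' = a + b * x' under four axis-scale pairs and return the pair
    with the smallest residual sum of squares on the transformed scales.
    (RSS values on differently transformed y-axes are not strictly
    comparable; this is a heuristic sketch, not the paper's method.)"""
    transforms = {
        "lin-lin": (lambda v: v, lambda v: v),
        "lin-log": (lambda v: v, math.log),
        "log-lin": (math.log, lambda v: v),
        "log-log": (math.log, math.log),
    }

    def fit(xs, ys):
        n = len(xs)
        xb, yb = sum(xs) / n, sum(ys) / n
        b = (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
             / sum((x - xb) ** 2 for x in xs))
        a = yb - b * xb
        rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
        return a, b, rss

    return min(((name, *fit([fx(d) for d in dose], [fy(r) for r in response]))
                for name, (fx, fy) in transforms.items()),
               key=lambda t: t[3])  # (name, intercept, slope, rss)
```

    A pure power-law dose-response, for instance, comes out exactly straight on the log-log pair.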

  10. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature space, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods that can take advantage of additional textural or other parameters need to be explored.

  11. Cross-scale intercomparison of climate change impacts simulated by regional and global hydrological models in eleven large river basins

    Energy Technology Data Exchange (ETDEWEB)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Flörke, M.; Huang, S.; Motovilov, Y.; Buda, S.; Yang, T.; Müller, C.; Leng, G.; Tang, Q.; Portmann, F. T.; Hagemann, S.; Gerten, D.; Wada, Y.; Masaki, Y.; Alemayehu, T.; Satoh, Y.; Samaniego, L.

    2017-01-04

    Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.

  12. Quantifying streamflow change caused by forest disturbance at a large spatial scale: A single watershed study

    Science.gov (United States)

    Wei, Xiaohua; Zhang, Mingfang

    2010-12-01

    Climatic variability and forest disturbance are commonly recognized as two major drivers influencing streamflow change in large-scale forested watersheds. The greatest challenge in evaluating quantitative hydrological effects of forest disturbance is the removal of climatic effect on hydrology. In this paper, a method was designed to quantify respective contributions of large-scale forest disturbance and climatic variability on streamflow using the Willow River watershed (2860 km2) located in the central part of British Columbia, Canada. Long-term (>50 years) data on hydrology, climate, and timber harvesting history represented by equivalent clear-cutting area (ECA) were available to discern climatic and forestry influences on streamflow in three steps. First, effective precipitation, an integrated climatic index, was generated by subtracting evapotranspiration from precipitation. Second, modified double mass curves were developed by plotting accumulated annual streamflow against accumulated annual effective precipitation, which presented a much clearer picture of the cumulative effects of forest disturbance on streamflow following removal of climatic influence. The average annual streamflow changes that were attributed to forest disturbances and climatic variability were then estimated to be +58.7 and -72.4 mm, respectively. The positive (increasing) and negative (decreasing) values in streamflow change indicated opposite change directions, which suggest an offsetting effect between forest disturbance and climatic variability in the study watershed. Finally, a multivariate Autoregressive Integrated Moving Average (ARIMA) model was generated to establish quantitative relationships between accumulated annual streamflow deviation attributed to forest disturbances and annual ECA. The model was then used to project streamflow change under various timber harvesting scenarios. The methodology can be effectively applied to any large-scale single watershed where long-term data (>50
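    A minimal sketch of the double-mass-curve step, under the assumed reading that a line is fitted through the pre-disturbance (reference) years and the later deviation of accumulated streamflow from that line is attributed to forest disturbance (all test data are invented, not from the Willow River watershed):

```python
def streamflow_deviation(eff_precip, streamflow, n_reference):
    """Modified double-mass-curve sketch: accumulate annual effective
    precipitation and streamflow, fit a straight line (through the
    origin) over the first n_reference years, and return each year's
    deviation of accumulated streamflow from that reference line."""
    cum_p, cum_q = [], []
    tp = tq = 0.0
    for p, q in zip(eff_precip, streamflow):
        tp += p
        tq += q
        cum_p.append(tp)
        cum_q.append(tq)
    ref = range(n_reference)
    # Least-squares slope through the origin over the reference period.
    slope = (sum(cum_p[i] * cum_q[i] for i in ref)
             / sum(cum_p[i] ** 2 for i in ref))
    return [cum_q[i] - slope * cum_p[i] for i in range(len(cum_q))]
```

    A persistent positive deviation after the reference period indicates increased runoff beyond what the climatic index explains.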

  13. A search for time variability and its possible regularities in linear polarization of Be stars

    International Nuclear Information System (INIS)

    Huang, L.; Guo, Z.H.; Hsu, J.C.; Huang, L.

    1989-01-01

    Linear polarization measurements are presented for 14 Be stars obtained at McDonald Observatory during four observing runs from June to November of 1983. Methods of observation and data reduction are described. Seven of eight program stars which were observed on six or more nights exhibited obvious polarimetric variations on time-scales of days or months. The incidence is estimated as 50% and may be as high as 93%. No connection can be found between polarimetric variability and rapid periodic light or spectroscopic variability for our stars. Ultra-rapid variability on time-scales of minutes was searched for, with negative results. In all cases the position angles also show variations, indicating that the axis of symmetry of the circumstellar envelope changes its orientation in space. For the Be binary CX Dra the variations in polarization seem to have a period which is just half of the orbital period.

  14. Data adaptive control parameter estimation for scaling laws

    Energy Technology Data Exchange (ETDEWEB)

    Dinklage, Andreas [Max-Planck-Institut fuer Plasmaphysik, Teilinstitut Greifswald, Wendelsteinstrasse 1, D-17491 Greifswald (Germany); Dose, Volker [Max-Planck-Institut fuer Plasmaphysik, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2007-07-01

    Bayesian experimental design quantifies the utility of data by the expected information gain. Data-adaptive exploration determines the expected utility of a single new measurement from the existing data and a data-descriptive model; in other words, the method can be used for experimental planning. As an example of a multivariate linear case, we apply this method to constructing scaling laws for fusion devices. In detail, the scaling of the stellarator W7-AS is examined for a subset of iota=1/3 data. The impact of the existing data on the scaling exponents is presented. Furthermore, regions of high utility in control parameter space are identified which improve the accuracy of the scaling law. This approach is not restricted to the presented example, but can also be extended to non-linear models.
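
    For a linear scaling law with Gaussian noise and a Gaussian prior on the exponents, the expected information gain of one new measurement has a closed form, 0.5*ln(1 + x' C x / sigma^2), where C is the current posterior covariance of the exponents. A minimal sketch of scanning control-parameter candidates this way; all numbers and names are illustrative assumptions, not the W7-AS analysis:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Existing design matrix: log of two control parameters plus an
    # intercept column.  Purely synthetic "already measured" points.
    X = np.column_stack([np.ones(20), rng.uniform(0.0, 1.0, (20, 2))])
    sigma2 = 0.05 ** 2        # assumed measurement noise variance
    prior_cov = np.eye(3)     # broad Gaussian prior on the exponents

    # Posterior covariance of the scaling exponents given existing data.
    post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + X.T @ X / sigma2)

    def expected_information_gain(x):
        """Expected KL gain (nats) from one new measurement at point x."""
        return 0.5 * np.log1p(x @ post_cov @ x / sigma2)

    # Scan candidate control points; high-utility regions are those that
    # would shrink the exponent uncertainty the most.
    candidates = [np.array([1.0, a, b]) for a in (0.0, 0.5, 1.0, 1.5)
                  for b in (0.0, 0.5, 1.0, 1.5)]
    utilities = [expected_information_gain(c) for c in candidates]
    best = candidates[int(np.argmax(utilities))]
    ```

    Unsurprisingly, the highest-utility candidates lie outside the region already covered by the data, where the exponent uncertainty projects most strongly.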

  15. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis. Written by a highly experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  16. STABILITY OF LINEAR SYSTEMS WITH MARKOVIAN JUMPS

    Directory of Open Access Journals (Sweden)

    Jorge Enrique Mayta Guillermo

    2016-12-01

    In this work we analyze the stability of linear systems governed by a Markov chain, a family known in the specialized literature as linear systems with Markov jumps, or by its English acronym MJLS, as denoted in [1]. Linear systems governed by a Markov chain are dynamic systems with abrupt changes. We give some definitions of stability for the MJLS system; these types of stability are equivalent as long as the state space of the Markov chain is finite. Finally, we present a theorem that characterizes stochastic stability by means of an equation of the Lyapunov type. The result is a generalization of a theorem in classical theory.
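
    A standard numerical companion to the Lyapunov-type characterization: for a discrete-time MJLS x(k+1) = A_{theta(k)} x(k), mean-square (stochastic) stability holds iff the spectral radius of an augmented matrix built from Kronecker squares of the mode matrices is below 1 (a known criterion from the MJLS literature). The mode matrices below are toy examples, not from the paper:

    ```python
    import numpy as np

    def ms_stable(modes, P):
        """Mean-square stability test for x(k+1) = A_{theta(k)} x(k).

        modes: list of N mode matrices A_i (each n x n).
        P: Markov transition matrix, P[i, j] = Prob(theta(k+1)=j | theta(k)=i).
        The system is mean-square stable iff the spectral radius of
        (P^T kron I) @ blockdiag(A_i kron A_i) is below 1.
        """
        n = modes[0].shape[0]
        N = len(modes)
        m = n * n
        D = np.zeros((N * m, N * m))
        for i, A in enumerate(modes):
            D[i*m:(i+1)*m, i*m:(i+1)*m] = np.kron(A, A)
        aug = np.kron(P.T, np.eye(m)) @ D
        return max(abs(np.linalg.eigvals(aug))) < 1.0

    P = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
    A_stable   = [np.array([[0.5]]), np.array([[1.0]])]   # one marginal mode
    A_unstable = [np.array([[0.5]]), np.array([[1.2]])]   # one expanding mode
    ```

    The first pair is mean-square stable even though one mode is only marginally stable, because the chain spends enough time in the contracting mode; raising the second mode's gain to 1.2 destroys stability.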

  17. Pattern formation due to non-linear vortex diffusion

    Science.gov (United States)

    Wijngaarden, Rinke J.; Surdeanu, R.; Huijbregtse, J. M.; Rector, J. H.; Dam, B.; Einfeld, J.; Wördenweber, R.; Griessen, R.

    Penetration of magnetic flux in YBa2Cu3O7 superconducting thin films in an external magnetic field is visualized using a magneto-optic technique. A variety of flux patterns due to non-linear vortex diffusion is observed: (1) Roughening of the flux front with scaling exponents identical to those observed in burning paper, including two distinct regimes where respectively spatial disorder and temporal disorder dominate. In the latter regime Kardar-Parisi-Zhang behavior is found. (2) Fractal penetration of flux with Hausdorff dimension depending on the critical current anisotropy. (3) Penetration as 'flux-rivers'. (4) The occurrence of commensurate and incommensurate channels in films with anti-dots as predicted in numerical simulations by Reichhardt, Olson and Nori. It is shown that most of the observed behavior is related to the non-linear diffusion of vortices by comparison with simulations of the non-linear diffusion equation appropriate for vortices.
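
    The diffusion referred to is non-linear because the vortex diffusivity grows with the local flux density. A minimal 1-D finite-difference sketch of dB/dt = d/dx( D(B) dB/dx ) with D proportional to |B| (grid, parameters, and initial profile are illustrative assumptions, not the simulations of the paper):

    ```python
    import numpy as np

    # 1-D flux-density profile: a compact bump of vortices in the film centre.
    x = np.linspace(-50.0, 50.0, 201)
    dx = x[1] - x[0]
    B = np.clip(1.0 - (x / 10.0) ** 2, 0.0, None)
    initial_total = B.sum()

    dt = 0.1  # satisfies dt <= dx**2 / (2 * max D) for B <= 1
    for _ in range(500):
        # Diffusivity D(B) ~ |B| evaluated on cell faces (conservative scheme).
        D_face = 0.5 * (B[1:] + B[:-1])
        flux = -D_face * np.diff(B) / dx
        B[1:-1] -= dt * np.diff(flux) / dx

    # Because D vanishes where B does, the flux front stays sharp and
    # advances at finite speed -- unlike linear diffusion, whose Gaussian
    # tails would be nonzero everywhere immediately.
    ```

    This finite front speed is what makes the magneto-optically imaged flux fronts well defined and lets their roughening exponents be measured.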

  18. Contact kinematics of biomimetic scales

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Ranajay; Ebrahimi, Hamid; Vaziri, Ashkan, E-mail: vaziri@coe.neu.edu [Department of Mechanical and Industrial Engineering, Northeastern University, Boston, Massachusetts 02115 (United States)

    2014-12-08

    Dermal scales, prevalent across biological groups, considerably boost survival by providing multifunctional advantages. Here, we investigate the nonlinear mechanical effects of biomimetic scale-like attachments on the behavior of an elastic substrate, brought about by the contact interaction of scales in pure bending, using qualitative experiments, analytical models, and detailed finite element (FE) analysis. Our results reveal the existence of three distinct kinematic phases of operation spanning linear, nonlinear, and rigid behavior driven by kinematic interactions of scales. The response of the modified elastic beam strongly depends on the size and spatial overlap of rigid scales. The nonlinearity is perceptible even in the relatively small strain regime and without invoking material-level complexities of either the scales or the substrate.

  19. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Science.gov (United States)

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
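
    The piecewise Weibull baseline hazard underlying the preferred PWPH models can be written in a few lines: within each interval between change points the hazard follows a Weibull form, with an interval-specific scale. The change points and parameter values below are invented for illustration (they are not the Ripollesa estimates):

    ```python
    import numpy as np

    def pwph_hazard(t, change_points, scales, shape):
        """Piecewise Weibull baseline hazard h0(t).

        Within each interval defined by change_points the hazard is
        scales[j] * shape * t**(shape - 1); the interval-specific scale
        lets the baseline shift at each change point.
        """
        j = np.searchsorted(change_points, t, side="right")
        return scales[j] * shape * np.maximum(t, 1e-12) ** (shape - 1)

    def pwph_survival(t_grid, change_points, scales, shape):
        """Survival S(t) = exp(-cumulative hazard), by trapezoidal quadrature."""
        h = pwph_hazard(t_grid, change_points, scales, shape)
        H = np.concatenate([[0.0],
                            np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(t_grid))])
        return np.exp(-H)

    # Two change points on the lambing-interval axis (days) -- invented values.
    cps = np.array([150.0, 250.0])
    lams = np.array([1e-4, 5e-4, 2e-4])   # one scale per interval
    t = np.linspace(0.0, 400.0, 801)
    S = pwph_survival(t, cps, lams, shape=1.5)
    ```

    Adding or moving change points reshapes the baseline without touching the proportional covariate effects, which is why the number of change points could be tuned flock by flock via DIC.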

  20. The reduction method of statistic scale applied to study of climatic change

    International Nuclear Information System (INIS)

    Bernal Suarez, Nestor Ricardo; Molina Lizcano, Alicia; Martinez Collantes, Jorge; Pabon Jose Daniel

    2000-01-01

    In climate change studies, global circulation models of the atmosphere (GCMAs) make it possible to simulate the global climate, with the field variables represented on grid points 300 km apart. One particular interest concerns the simulation of possible changes in rainfall and surface air temperature due to an assumed increase of greenhouse gases. However, the models yield climatic projections on grid points that in most cases do not correspond to the sites of major interest. To obtain local estimates of the climatological variables, methods like the one known as statistical downscaling are applied. In this article we show a case in point by applying canonical correlation analysis (CCA) to the Guajira Region in the northeast of Colombia.
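
    Canonical correlation analysis finds paired linear combinations of the coarse-grid field and the local station records whose correlation is maximal; the canonical correlations are the singular values of the whitened cross-covariance. A minimal numpy sketch on synthetic data (the dimensions and variable names are illustrative, not the Guajira dataset):

    ```python
    import numpy as np

    def cca(X, Y):
        """Canonical correlations between data matrices X (n x p) and Y (n x q)."""
        Xc = X - X.mean(axis=0)
        Yc = Y - Y.mean(axis=0)
        n = X.shape[0]
        Sxx = Xc.T @ Xc / (n - 1)
        Syy = Yc.T @ Yc / (n - 1)
        Sxy = Xc.T @ Yc / (n - 1)

        def inv_sqrt(S):
            w, V = np.linalg.eigh(S)
            return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

        # Singular values of the whitened cross-covariance are the
        # canonical correlations.
        K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
        return np.linalg.svd(K, compute_uv=False)

    rng = np.random.default_rng(2)
    # Coarse-grid predictor field (e.g. model rainfall at 5 grid points) and
    # local station series driven by it plus noise -- synthetic stand-ins.
    grid = rng.normal(size=(200, 5))
    stations = grid @ rng.normal(size=(5, 3)) + 0.1 * rng.normal(size=(200, 3))
    rho = cca(grid, stations)
    ```

    In a downscaling application, the leading canonical pairs fitted on a historical period serve as the regression map that translates GCMA grid-point projections into site-level estimates.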