WorldWideScience

Sample records for average

  1. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  2. Quaternion Averaging

    Science.gov (United States)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, earlier work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, a related approach computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
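    As a hedged illustration of the kind of computation involved (the standard eigenvector formulation of scalar-weighted quaternion averaging, not necessarily the exact algorithm derived in this Note), the sketch below computes the average quaternion as the dominant eigenvector of the weighted outer-product matrix; numpy and the function name are assumptions for illustration.

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Average unit quaternions (rows of `quats`, any consistent component
    convention) as the eigenvector of M = sum_i w_i q_i q_i^T belonging to
    the largest eigenvalue. This is insensitive to the sign ambiguity q ~ -q."""
    quats = np.asarray(quats, dtype=float)
    if weights is None:
        weights = np.ones(len(quats))
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = q / np.linalg.norm(q)            # enforce unit norm
        M += w * np.outer(q, q)
    eigvals, eigvecs = np.linalg.eigh(M)     # M is symmetric
    return eigvecs[:, np.argmax(eigvals)]    # dominant eigenvector

# Example: two rotations about the z-axis (~5.7 deg and ~11.5 deg)
q1 = np.array([np.cos(0.05), 0.0, 0.0, np.sin(0.05)])
q2 = np.array([np.cos(0.10), 0.0, 0.0, np.sin(0.10)])
print(average_quaternions([q1, q2]))         # roughly an 8.6 deg rotation
```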

  3. Neutron resonance averaging

    International Nuclear Information System (INIS)

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  4. Averaging anisotropic cosmologies

    International Nuclear Information System (INIS)

    We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of anisotropic pressure-free models. Adopting the Buchert scheme, we recast the averaged scalar equations in Bianchi-type form and close the standard system by introducing a propagation formula for the average shear magnitude. We then investigate the evolution of anisotropic average vacuum models and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. The presence of nonzero average shear in our equations also allows us to examine the constraints that a phase of backreaction-driven accelerated expansion might put on the anisotropy of the averaged domain. We close by assessing the status of these and other attempts to define and calculate 'average' spacetime behaviour in general relativity
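    For reference, the Buchert averaged scalar equations mentioned in the abstract are commonly written, for an irrotational dust domain $\mathcal D$ with volume scale factor $a_{\mathcal D}$, expansion $\theta$, shear $\sigma$ and averaged spatial curvature $\langle\mathcal R\rangle_{\mathcal D}$, in the following standard textbook form (not necessarily the paper's own notation):

```latex
% Buchert's averaged Raychaudhuri equation, Hamiltonian constraint,
% and kinematic backreaction for irrotational dust
3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}}
   = -4\pi G\,\langle\rho\rangle_{\mathcal D} + \mathcal Q_{\mathcal D},
\qquad
3\left(\frac{\dot a_{\mathcal D}}{a_{\mathcal D}}\right)^{\!2}
   = 8\pi G\,\langle\rho\rangle_{\mathcal D}
     - \tfrac{1}{2}\langle\mathcal R\rangle_{\mathcal D}
     - \tfrac{1}{2}\mathcal Q_{\mathcal D},
\qquad
\mathcal Q_{\mathcal D}
   = \tfrac{2}{3}\!\left(\langle\theta^{2}\rangle_{\mathcal D}
       - \langle\theta\rangle_{\mathcal D}^{2}\right)
     - 2\,\langle\sigma^{2}\rangle_{\mathcal D}.
```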

  5. Average-energy games

    OpenAIRE

    Bouyer, Patricia; Markey, Nicolas; Randour, Mickael; Larsen, Kim G.; Laursen, Simon

    2015-01-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this ...

  6. Averaged extreme regression quantile

    OpenAIRE

    Jureckova, Jana

    2015-01-01

    Various events in nature, economics, and other areas force us to combine the study of extremes with regression and other methods. A useful tool for reducing the role of nuisance regression, while we are interested in the shape or tails of the basic distribution, is provided by the averaged regression quantile and, in particular, by the averaged extreme regression quantile. Both are weighted means of regression quantile components, with weights depending on the regressors. Our primary interest is ...

  7. On the Averaging Principle

    OpenAIRE

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and interchangeability is $O(\epsilon^2)$ equivalent to the outcome of the corresponding homogeneous model, where $\epsilon$ is the level of heterogeneity. We then use this averaging pr...

  8. Average Angular Velocity

    OpenAIRE

    Van Essen, H.

    2004-01-01

    This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to th...

  9. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
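    As a hedged sketch of one of the "common approaches" mentioned here, the snippet below takes the barycenter (arithmetic mean) of rotation matrices and projects it back onto SO(3) with an SVD, which approximates the Riemannian mean for nearby rotations; numpy and all names are illustrative assumptions, not the article's own code.

```python
import numpy as np

def project_to_so3(M):
    """Project a 3x3 matrix onto SO(3) via the SVD (the closest rotation
    in the Frobenius/chordal sense)."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # keep det = +1
    return U @ D @ Vt

def barycenter_rotation_mean(rotations):
    """Arithmetic mean of rotation matrices projected back onto SO(3);
    a good approximation to the Riemannian mean for nearby rotations."""
    return project_to_so3(np.mean(rotations, axis=0))

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Average two small rotations about the z-axis (0.1 rad and 0.3 rad)
R_mean = barycenter_rotation_mean([rot_z(0.1), rot_z(0.3)])
print(np.degrees(np.arctan2(R_mean[1, 0], R_mean[0, 0])))  # roughly 11.5 degrees
```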

  10. Averaging anisotropic cosmologies

    CERN Document Server

    Barrow, J D; Barrow, John D.; Tsagas, Christos G.

    2006-01-01

    We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of pressure-free Bianchi-type models. Adopting the Buchert averaging scheme, we identify the kinematic backreaction effects by focussing on spacetimes with zero or isotropic spatial curvature. This allows us to close the system of the standard scalar formulae with a propagation equation for the shear magnitude. We find no change in the already known conditions for accelerated expansion. The backreaction terms are expressed as algebraic relations between the mean-square fluctuations of the models' irreducible kinematical variables. Based on these we investigate the early evolution of averaged vacuum Bianchi type $I$ universes and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. We also discuss the possibility of accelerated expansion due to ...

  11. Average Angular Velocity

    CERN Document Server

    Essén, H

    2003-01-01

    This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to three parts: center of mass, rotational, plus the remaining internal energy relative to an optimally translating and rotating frame.
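    A minimal numerical sketch of the moment-of-inertia-weighted average angular velocity, assuming the standard relation $\omega = I^{-1}L$ about the centre of mass (numpy and the function name are illustrative; this is not the paper's code):

```python
import numpy as np

def average_angular_velocity(masses, positions, velocities):
    """Moment-of-inertia-weighted average angular velocity of an N-particle
    system about its centre of mass: omega = I^+ L, where I is the inertia
    tensor, L the angular momentum relative to the centre of mass, and the
    pseudo-inverse handles degenerate (e.g. collinear) configurations."""
    m = np.asarray(masses, dtype=float)
    r = np.asarray(positions, dtype=float)
    v = np.asarray(velocities, dtype=float)
    r = r - np.average(r, axis=0, weights=m)          # positions relative to CM
    v = v - np.average(v, axis=0, weights=m)          # velocities relative to CM
    L = np.sum(m[:, None] * np.cross(r, v), axis=0)   # total angular momentum
    I = np.zeros((3, 3))
    for mi, ri in zip(m, r):
        I += mi * (np.dot(ri, ri) * np.eye(3) - np.outer(ri, ri))
    return np.linalg.pinv(I) @ L

# Two equal masses rotating rigidly about the z-axis at 1 rad/s
pos = [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]
vel = [[0.0, 1.0, 0.0], [0.0, -1.0, 0.0]]
print(average_angular_velocity([1.0, 1.0], pos, vel))   # approximately [0, 0, 1]
```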

  12. On sparsity averaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2013-01-01

    Recent developments in Carrillo et al. (2012) and Carrillo et al. (2013) introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend associated simulations establishing the superiority of SARA to regularization methods based on sparsity in a single frame, for a generic spread spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.

  13. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...

  14. The averaging principle

    OpenAIRE

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of \emph{differentiability} and \emph{interchangeability} is $O(\epsilon^2)$ equivalent to the outcome of the corresponding homogeneous model, where $\epsilon$ is the level of heterogeneity. We then us...

  15. Negative Average Preference Utilitarianism

    Directory of Open Access Journals (Sweden)

    Roger Chao

    2012-03-01

    Full Text Available For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, it seems that a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).

  16. Dissociating Averageness and Attractiveness: Attractive Faces Are Not Always Average

    Science.gov (United States)

    DeBruine, Lisa M.; Jones, Benedict C.; Unger, Layla; Little, Anthony C.; Feinberg, David R.

    2007-01-01

    Although the averageness hypothesis of facial attractiveness proposes that the attractiveness of faces is mostly a consequence of their averageness, 1 study has shown that caricaturing highly attractive faces makes them mathematically less average but more attractive. Here the authors systematically test the averageness hypothesis in 5 experiments…

  17. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    OpenAIRE

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

    Purpose: To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods: Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to the nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to...

  18. Average Convexity in Communication Situations

    NARCIS (Netherlands)

    Slikker, M.

    1998-01-01

    In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graphrestricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlyin

  19. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
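    Written out, the relationship stated in the RESULTS item takes the following standard form (notation assumed here, not taken from the paper): with $\bar v_w$ and $\bar v_u$ the averages of a variable $v$ under weighting functions $w$ and $u$, and $\operatorname{E}_u$, $\operatorname{Cov}_u$ the mean and covariance taken with the normalized weight $u$,

```latex
% Difference between two weighted averages of v (weights w and u)
\bar v_w - \bar v_u
  = \frac{\operatorname{Cov}_u\!\left(v,\, w/u\right)}{\operatorname{E}_u\!\left[\,w/u\,\right]},
\qquad\text{since}\qquad
\bar v_w = \frac{\operatorname{E}_u\!\left[v\,(w/u)\right]}{\operatorname{E}_u\!\left[w/u\right]}.
```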

  20. Physical Theories with Average Symmetry

    CERN Document Server

    Alamino, Roberto C

    2013-01-01

    This Letter probes the existence of physical laws invariant only on average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise, and average symmetry is introduced by considering functions which are invariant only on average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this to possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented on.

  1. "Pricing Average Options on Commodities"

    OpenAIRE

    Kenichiro Shiraya; Akihiko Takahashi

    2010-01-01

    This paper proposes a new approximation formula for pricing average options on commodities under a stochastic volatility environment. In particular, it derives an option pricing formula under the Heston and an extended lambda-SABR stochastic volatility models (the latter includes an extended SABR model as a special case). Moreover, numerical examples support the accuracy of the proposed average option pricing formula.
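    For readers unfamiliar with average (Asian) options, a hedged Monte Carlo sketch of an arithmetic-average call under plain geometric Brownian motion is given below; it illustrates only what the payoff averages, not the paper's approximation formula or its stochastic volatility models, and all parameter values are made up.

```python
import numpy as np

def asian_call_mc(s0, strike, r, sigma, maturity, n_steps, n_paths, seed=0):
    """Monte Carlo price of an arithmetic-average (Asian) call under
    geometric Brownian motion: the payoff depends on the average price
    over the monitoring dates rather than on the terminal price."""
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    prices = s0 * np.exp(log_paths)                  # simulated price paths
    averages = prices.mean(axis=1)                   # arithmetic average per path
    payoffs = np.maximum(averages - strike, 0.0)     # call on the average
    return np.exp(-r * maturity) * payoffs.mean()

# Illustrative parameters (assumed, not taken from the paper)
print(asian_call_mc(s0=100.0, strike=100.0, r=0.02, sigma=0.25,
                    maturity=1.0, n_steps=252, n_paths=50_000))
```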

  2. Quantized average consensus with delay

    NARCIS (Netherlands)

    Jafarian, Matin; De Persis, Claudio

    2012-01-01

    The average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues of large-scale networks is the cost of co
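    A minimal sketch of the underlying idea, ignoring the quantization and delay studied in the paper: with a doubly stochastic weight matrix on a connected graph, the linear iteration x(k+1) = W x(k) drives every agent's state to the network average. The ring topology, weights and function name below are illustrative assumptions.

```python
import numpy as np

def average_consensus(x0, W, iterations=200):
    """Run the linear consensus iteration x(k+1) = W x(k). For a doubly
    stochastic W on a connected graph, every entry of x converges to the
    average of the initial states."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        x = W @ x
    return x

# 4 agents on a ring, each averaging itself with its two neighbours
n = 4
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x0 = [1.0, 5.0, 3.0, 7.0]
print(average_consensus(x0, W))      # all entries close to 4.0
print(np.mean(x0))                   # true average: 4.0
```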

  3. Gaussian moving averages and semimartingales

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2008-01-01

    In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results...... are constructive - meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...
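    For intuition about the objects studied, a hedged discretization of a moving average $X_t=\int_0^t \varphi(t-s)\,dW_s$ driven by a Wiener process: the kernel values are convolved with independent Wiener increments. The exponential kernel, step size and names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def simulate_moving_average(kernel, n_steps, dt, seed=0):
    """Discretize X_t = int_0^t phi(t - s) dW_s as a convolution of the
    kernel values phi(k * dt) with independent Wiener increments
    dW ~ N(0, dt)."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(scale=np.sqrt(dt), size=n_steps)
    phi = kernel(np.arange(n_steps) * dt)
    # full convolution truncated to the simulation window
    return np.convolve(phi, dW)[:n_steps]

# Exponential kernel phi(u) = exp(-u): an Ornstein-Uhlenbeck-type moving average
X = simulate_moving_average(lambda u: np.exp(-u), n_steps=1000, dt=0.01)
print(X[:5])
```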

  4. Power convergence of Abel averages

    OpenAIRE

    Kozitsky, Yuri; Shoikhet, David; Zemanek, Jaroslav

    2012-01-01

    Necessary and sufficient conditions are presented for the Abel averages of discrete and strongly continuous semigroups, $T^k$ and $T_t$, to be power convergent in the operator norm in a complex Banach space. These results cover also the case where $T$ is unbounded and the corresponding Abel average is defined by means of the resolvent of $T$. They complement the classical results by Michael Lin establishing sufficient conditions for the corresponding convergence for a bounded $T$.
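    For reference, the Abel averages of a discrete and a strongly continuous semigroup are usually defined as follows (standard definitions under assumed notation; for unbounded $T$ the second expression is the resolvent form mentioned in the abstract):

```latex
% Abel averages of the discrete semigroup (T^k) and the C_0-semigroup (T_t)
A_r(T) = (1-r)\sum_{k=0}^{\infty} r^{k}\,T^{k}, \qquad 0<r<1,
\qquad\qquad
A_\lambda = \lambda\int_{0}^{\infty} e^{-\lambda t}\,T_t\,\mathrm{d}t
          = \lambda\,(\lambda I - A)^{-1}, \qquad \lambda>0,
```

    where $A$ is the generator of $(T_t)$; power convergence then asks when the powers $A_r(T)^n$ (respectively $A_\lambda^{\,n}$) converge in operator norm as $n\to\infty$.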

  5. Vocal attractiveness increases by averaging.

    Science.gov (United States)

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.

  6. Vocal attractiveness increases by averaging.

    Science.gov (United States)

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

    Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  7. Sparsity Averaging for Compressive Imaging

    CERN Document Server

    Carrillo, Rafael E; Van De Ville, Dimitri; Thiran, Jean-Philippe; Wiaux, Yves

    2012-01-01

    We propose a novel regularization method for sparse image reconstruction from compressive measurements. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We test our prior and the associated algorithm through extensive numerical simulations for spread spectrum and Gaussian acquisition schemes suggested by the recent theory of compressed sensing with coherent and redundant dictionaries. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. We also illustrate the performance of SARA in the context of Fourier imaging, for particular applications in astronomy and medicine.

  8. Dependability in Aggregation by Averaging

    CERN Document Server

    Jesus, Paulo; Almeida, Paulo Sérgio

    2010-01-01

    Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues, when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...

  9. Stochastic Approximation with Averaging Innovation

    CERN Document Server

    Laruelle, Sophie

    2010-01-01

    The aim of the paper is to establish a convergence theorem for multi-dimensional stochastic approximation in a setting with innovations satisfying some averaging properties, and to study some applications. The averaging assumptions allow us to unify the framework where the innovations are generated (to solve problems from numerical probability) and the one with exogenous innovations (market data, output of a "device", e.g. an Euler scheme) with stationary or ergodic properties. We propose several fields of applications with random innovations or quasi-random numbers. In particular we provide in both settings a rule to tune the step of the algorithm. Finally we illustrate our results on five examples, notably in finance.

  10. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    J C Travers

    2010-11-01

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems, with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.

  11. Michel Parameters averages and interpretation

    International Nuclear Information System (INIS)

    The new measurements of Michel parameters in τ decays are combined to world averages. From these measurements model independent limits on non-standard model couplings are derived and interpretations in the framework of specific models are given. A lower limit of 2.5 tan β GeV on the mass of a charged Higgs boson in models with two Higgs doublets can be set and a 229 GeV limit on a right-handed W-boson in left-right symmetric models (95 % c.l.)

  12. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to different extents. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the computational efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction of rotating machinery.
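    For contrast with the FTDA described above, a sketch of conventional time domain (synchronous) averaging: the signal is cut into segments of one nominal period and the segments are averaged, which reinforces the periodic part and suppresses noise. When the true period is not an integer number of samples, the period cutting error mentioned in the abstract appears. The code below is an illustrative sketch with made-up signal parameters, not the authors' implementation.

```python
import numpy as np

def time_domain_average(signal, samples_per_period):
    """Classical time domain averaging (TDA): reshape the signal into
    whole periods and average them. Assumes the period is an integer
    number of samples; otherwise period cutting error (PCE) appears."""
    n_periods = len(signal) // samples_per_period
    segments = np.reshape(signal[:n_periods * samples_per_period],
                          (n_periods, samples_per_period))
    return segments.mean(axis=0)

# Noisy periodic signal: 50 periods of 100 samples each
rng = np.random.default_rng(0)
t = np.arange(100) / 100.0
clean = np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)
noisy = np.tile(clean, 50) + rng.normal(scale=1.0, size=50 * 100)
averaged = time_domain_average(noisy, samples_per_period=100)
print(np.max(np.abs(averaged - clean)))   # residual noise reduced roughly by sqrt(50)
```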

  13. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    7 CFR 1209.12, Agriculture Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements): On average means a rolling average of production or imports during the last two...

  14. Averages of Values of L-Series

    OpenAIRE

    Alkan, Emre; Ono, Ken

    2013-01-01

    We obtain an exact formula for the average of values of L-series over two independent odd characters. The average of any positive moment of values at s = 1 is then expressed in terms of finite cotangent sums subject to congruence conditions. As consequences, bounds on such cotangent sums, limit points for the average of first moment of L-series at s = 1 and the average size of positive moments of character sums related to the class number are deduced.

  15. Spectral averaging techniques for Jacobi matrices

    CERN Document Server

    del Rio, Rafael; Schulz-Baldes, Hermann

    2008-01-01

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  16. Average-cost based robust structural control

    Science.gov (United States)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  17. MEASUREMENT AND MODELLING AVERAGE PHOTOSYNTHESIS OF MAIZE

    OpenAIRE

    ZS LÕKE

    2005-01-01

    The photosynthesis of fully developed maize was investigated at the Agrometeorological Research Station in Keszthely in 2000. We used LI-6400 measurement equipment to locate measurement points where the intensity of photosynthesis is closest to the average, so that average photosynthetic activity characterizing the crop could later be obtained with only one measurement. To check the average photosynthesis of maize we also used Goudriaan's simulation model (CMSM) to calculate values on cloudless sampl...

  18. WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES

    Institute of Scientific and Technical Information of China (English)

    刘永平; 许贵桥

    2003-01-01

    This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of the periodic multivariate functions in the space Lp(Td), and the average Bernstein σ-width, average Kolmogorov σ-widths, and the average linear σ-widths of Sobolev classes of the multivariate quantities.

  19. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  20. Stochastic averaging of quasi-Hamiltonian systems

    Institute of Scientific and Technical Information of China (English)

    朱位秋

    1996-01-01

    A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light dampings subject to weakly stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems and that the results obtained by this method for several examples prove its effectiveness.

  1. Average Transmission Probability of a Random Stack

    Science.gov (United States)

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…

  2. Average sampling theorems for shift invariant subspaces

    Institute of Scientific and Technical Information of China (English)

    孙文昌; 周性伟

    2000-01-01

    The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.

  3. Average excitation potentials of air and aluminium

    NARCIS (Netherlands)

    Bogaardt, M.; Koudijs, B.

    1951-01-01

    By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for Iair and IAl have been obtained. It is shown that in representing range/energy relations by means of Bethe's well known formula, I has to be taken as a continuously changing fu

  4. New results on averaging theory and applications

    Science.gov (United States)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
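    For orientation, the classical first-order periodic averaging setup that the abstract refers to can be summarised as follows (standard statement under assumed notation, not the authors' own formulation): consider

```latex
% First-order averaging for a T-periodic perturbation
\dot x = \varepsilon F(t,x) + \varepsilon^{2} R(t,x,\varepsilon),
\qquad
f(z) = \frac{1}{T}\int_{0}^{T} F(t,z)\,\mathrm{d}t .
```

    Each simple zero $z_0$ of the averaged function $f$ (that is, $f(z_0)=0$ with $\det Df(z_0)\neq 0$) gives rise, for $\varepsilon>0$ sufficiently small, to a $T$-periodic solution of the full system that tends to $z_0$ as $\varepsilon\to 0$; the contribution described in the abstract is to handle zeros for which $\det Df(z_0)=0$.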

  5. Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.

    Science.gov (United States)

    Caruk, Joan Marie

    To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…

  6. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    Science.gov (United States)

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178

  7. Clarifying the relationship between average excesses and average effects of allele substitutions

    Directory of Open Access Journals (Sweden)

    Jose M eÁlvarez-Castro

    2012-03-01

    Full Text Available Fisher’s concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one-locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance.

  8. Small scale magnetic flux-averaged magnetohydrodynamics

    International Nuclear Information System (INIS)

    By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends for solar coronal loops is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not for times less than τ. These modified boundary contributions correspond also to the existence of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and in one-dimensional geometry predicts solitons and shocks in different limits.

  9. Averaged Lema\\^itre-Tolman-Bondi dynamics

    CERN Document Server

    Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried

    2016-01-01

    We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.

  10. Average-passage flow model development

    Science.gov (United States)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model described the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

  11. Average Shape of Transport-Limited Aggregates

    Science.gov (United States)

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.

    2005-08-01

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  12. Averaging of Backscatter Intensities in Compounds

    Science.gov (United States)

    Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.

    2002-01-01

    Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based of the number of electrons or protons, termed “electron fraction,” which predicts backscatter yield better than mass fraction averaging. PMID:27446752

  13. Experimental Demonstration of Squeezed State Quantum Averaging

    CERN Document Server

    Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
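    To make the comparison concrete at the level of plain arithmetic (this says nothing about the optical implementation), the harmonic mean of a set of variances never exceeds their arithmetic mean, which is the sense in which the harmonic-mean strategy can outperform the arithmetic-mean one for fluctuating noise levels. The numbers below are purely illustrative.

```python
import numpy as np

def arithmetic_mean(values):
    return float(np.mean(values))

def harmonic_mean(values):
    # Harmonic mean: n / sum(1 / v_i); dominated by the smallest values.
    values = np.asarray(values, dtype=float)
    return len(values) / float(np.sum(1.0 / values))

# Fluctuating quadrature variances of two noise sources (illustrative numbers)
variances = [0.5, 2.0]
print(arithmetic_mean(variances))  # 1.25
print(harmonic_mean(variances))    # 0.8, always <= the arithmetic mean
```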

  14. Self-averaging characteristics of spectral fluctuations

    OpenAIRE

    Braun, Petr; Haake, Fritz

    2014-01-01

    The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second a small imaginary part of the quasi-energy. Self-averaging universal (like the CUE average) behavior is found f...

  15. Changing mortality and average cohort life expectancy

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2005-01-01

    of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL) has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure......, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...

  16. Average Vegetation Growth 1992 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  17. Average Vegetation Growth 1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  18. Average Vegetation Growth 1991 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1991 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  19. Average Vegetation Growth 1993 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  20. Average Vegetation Growth 1998 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  1. Average Vegetation Growth 1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1999 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  2. Average Vegetation Growth 1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  3. Average Vegetation Growth 2003 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2003 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  4. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using...

  5. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  6. Average Vegetation Growth 2002 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2002 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  7. Average Vegetation Growth 1997 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  8. Spacetime Average Density (SAD) Cosmological Measures

    CERN Document Server

    Page, Don N

    2014-01-01

    The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...

  9. Averaging procedure in variable-G cosmologies

    CERN Document Server

    Cardone, Vincenzo F

    2008-01-01

    Previous work in the literature had built a formalism for spatially averaged equations for the scale factor, giving rise to an averaged Raychaudhuri equation and averaged Hamiltonian constraint, which involve a backreaction source term. The present paper extends these equations to include models with variable Newton parameter and variable cosmological term, motivated by the non-perturbative renormalization program for quantum gravity based upon the Einstein--Hilbert action. The coupling between backreaction and spatially averaged three-dimensional scalar curvature is found to survive, and all equations involving contributions of a variable Newton parameter are worked out in detail. Interestingly, under suitable assumptions, an approximate solution can be found where the universe tends to a FLRW model, while keeping track of the original inhomogeneities through two effective fluids.

  10. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
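    One family of averaging aggregation functions treated in such monographs is the ordered weighted average (OWA), where the inputs are sorted before being combined so that the weights attach to positions rather than to particular arguments. A short sketch under assumed names:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted average: sort the inputs in non-increasing order
    and take the dot product with a weight vector summing to 1.
    Special cases: w = (1,0,...,0) gives max, (0,...,0,1) gives min,
    and uniform weights give the arithmetic mean."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert len(weights) == len(values) and np.isclose(weights.sum(), 1.0)
    return float(np.dot(values, weights))

x = [0.3, 0.9, 0.6]
print(owa(x, [1.0, 0.0, 0.0]))        # 0.9 (maximum)
print(owa(x, [1/3, 1/3, 1/3]))        # 0.6 (arithmetic mean)
print(owa(x, [0.0, 0.0, 1.0]))        # 0.3 (minimum)
```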

  11. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  12. Average Vegetation Growth 2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  13. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets...

  14. Modeling and Instability of Average Current Control

    OpenAIRE

    Fang, Chung-Chieh

    2012-01-01

    Dynamics and stability of average current control of DC-DC converters are analyzed by sampled-data modeling. Orbital stability is studied and it is found unrelated to the ripple size of the orbit. Compared with the averaged modeling, the sampled-data modeling is more accurate and systematic. An unstable range of compensator pole is found by simulations, and is predicted by sampled-data modeling and harmonic balance modeling.

  15. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  16. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model outcome with examples and simulation results obtained using the NS2 simulator.
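    For intuition about the quantity being modelled, a simplified steady-state view of weighted fair sharing (not the paper's iterative method): each backlogged flow ideally receives link capacity in proportion to its weight, and flows needing less than their share return the surplus to the others. The flow names, weights and demands below are made-up illustrations.

```python
def wfq_fair_shares(link_capacity, weights, demands):
    """Per-flow average bandwidth under idealized weighted fair sharing:
    capacity is split in proportion to weights, and surplus from flows
    demanding less than their share is redistributed among the remaining
    flows (progressive filling)."""
    allocation = {f: 0.0 for f in weights}
    active = set(weights)
    capacity = float(link_capacity)
    while active and capacity > 1e-12:
        total_w = sum(weights[f] for f in active)
        satisfied = {f for f in active
                     if demands[f] - allocation[f] <= capacity * weights[f] / total_w}
        if not satisfied:
            for f in active:                       # all remaining flows are capped
                allocation[f] += capacity * weights[f] / total_w
            capacity = 0.0
        else:
            for f in satisfied:                    # fully satisfy these flows
                capacity -= demands[f] - allocation[f]
                allocation[f] = demands[f]
            active -= satisfied
    return allocation

# Three flows on a 10 Mbit/s link (weights and demands are made-up values)
print(wfq_fair_shares(10.0, {"a": 1, "b": 1, "c": 2}, {"a": 1.0, "b": 6.0, "c": 6.0}))
# {'a': 1.0, 'b': 3.0, 'c': 6.0}
```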

  17. Disk-averaged synthetic spectra of Mars

    OpenAIRE

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a f...

  18. Comparison of Mouse Brain DTI Maps Using K-space Average, Image-space Average, or No Average Approach

    OpenAIRE

    Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan

    2013-01-01

    Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in the k-space (k-avg) or in the image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data was collected from five ...
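    A hedged toy comparison of the two averaging strategies on complex data (1-D signals standing in for images, numpy only, made-up noise levels): k-space averaging (k-avg) averages the complex raw data before the inverse Fourier transform, whereas image-space averaging (m-avg) averages the magnitudes of individually reconstructed repetitions, which behaves differently when the phase drifts between repetitions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, repeats = 64, 8
truth = np.zeros(n); truth[24:40] = 1.0              # simple 1-D "object"
kspace_clean = np.fft.fft(truth)

acquisitions = []
for _ in range(repeats):
    noise = rng.normal(scale=2.0, size=n) + 1j * rng.normal(scale=2.0, size=n)
    phase = np.exp(1j * rng.normal(scale=0.1))        # small global phase drift
    acquisitions.append(phase * kspace_clean + noise)
acquisitions = np.array(acquisitions)

k_avg = np.abs(np.fft.ifft(acquisitions.mean(axis=0)))          # average in k-space
m_avg = np.abs(np.fft.ifft(acquisitions, axis=1)).mean(axis=0)  # average magnitudes

print(np.abs(k_avg - truth).mean(), np.abs(m_avg - truth).mean())
```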

  19. Basics of averaging of the Maxwell equations

    CERN Document Server

    Chipouline, A; Tretyakov, S

    2011-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and the respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective of what type of material is studied) which have to be satisfied. Any homogenization model has to be consistent with these basic principles; if a particular model does not conform to them, it cannot be accepted as credible. Another goal of this paper is to establish the averaging procedure for metamaterials, which is rather close to the case of compound materials but should include magnetic response of the inclusi...

  20. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
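    The two ingredients of the comparison, the economic misery index and its decade-scale trailing moving average, are simple to reproduce in outline (illustrative numbers, not the study's data):

```python
import numpy as np

def misery_index(inflation, unemployment):
    """Economic misery index: sum of inflation and unemployment rates."""
    return np.asarray(inflation, dtype=float) + np.asarray(unemployment, dtype=float)

def moving_average(series, window):
    """Moving average over `window` consecutive values, as in the 11-year
    window reported in the abstract."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(series, dtype=float), kernel, mode="valid")

# Made-up annual rates, in percent
inflation = [3.0, 2.5, 1.8, 2.2, 4.1, 3.3, 2.9, 1.5, 1.7, 2.8, 3.6, 2.4]
unemployment = [5.0, 5.5, 6.1, 5.8, 7.2, 8.0, 7.1, 6.0, 5.4, 5.1, 4.9, 4.7]
mi = misery_index(inflation, unemployment)
print(moving_average(mi, window=11))   # one value per complete 11-year window
```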

  1. Cosmic structure, averaging and dark energy

    CERN Document Server

    Wiltshire, David L

    2013-01-01

    These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...

  2. Books average previous decade of economic misery.

    Directory of Open Access Journals (Sweden)

    R Alexander Bentley

    Full Text Available For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  3. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  4. Matrix averages relating to Ginibre ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Forrester, Peter J [Department of Mathematics and Statistics, University of Melbourne, Victoria 3010 (Australia); Rains, Eric M [Department of Mathematics, California Institute of Technology, Pasadena, CA 91125 (United States)], E-mail: p.forrester@ms.unimelb.edu.au

    2009-09-25

    The theory of zonal polynomials is used to compute the average of a Schur polynomial of argument AX, where A is a fixed matrix and X is from the real Ginibre ensemble. This generalizes a recent result of Sommers and Khoruzhenko (2009 J. Phys. A: Math. Theor. 42 222002), and furthermore allows analogous results to be obtained for the complex and real quaternion Ginibre ensembles. As applications, the positive integer moments of the general variance Ginibre ensembles are computed in terms of generalized hypergeometric functions; these are written in terms of averages over matrices of the same size as the moment to give duality formulas, and the averages of the power sums of the eigenvalues are expressed as finite sums of zonal polynomials.

  5. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  6. Average Cycle Period in Asymmetrical Flashing Ratchet

    Institute of Scientific and Technical Information of China (English)

    WANG Hai-Yan; HE Hou-Sheng; BAO Jing-Dong

    2005-01-01

    The directed motion of a Brownian particle in a flashing potential with various transition probabilities and waiting times in one of two states is studied. An expression for the average cycle period is proposed and the steady current J of the particle is calculated via Langevin simulation. The results show that the optimal cycle period rm, which maximizes J, is shifted to a smaller value when the transition probability λ from the potential-on to the potential-off state decreases; the maximal current appears when the average waiting time in the potential-on state is longer than that in the potential-off state; and the direction of the current depends on the ratio of the average waiting times in the two states.
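
    A toy Euler-Maruyama (overdamped Langevin) sketch of a flashing asymmetric sawtooth ratchet, illustrating how an average current can be estimated from simulation. The potential shape, waiting times, and all parameter values are illustrative assumptions, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(42)
      L, a, V0 = 1.0, 0.3, 5.0          # period, asymmetry point, barrier height (assumed)
      D, dt = 0.1, 1e-3                 # diffusion constant and time step (assumed)
      tau_on, tau_off = 1.0, 0.5        # average waiting times in the two states (assumed)

      def force(x, on):
          """Force from an asymmetric sawtooth potential; zero when the potential is off."""
          if not on:
              return 0.0
          xm = x % L
          return -V0 / a if xm < a else V0 / (L - a)

      def simulate(T=50.0):
          x, t, on = 0.0, 0.0, True
          t_switch = rng.exponential(tau_on)
          while t < T:
              if t >= t_switch:                       # flash the potential on/off
                  on = not on
                  t_switch = t + rng.exponential(tau_on if on else tau_off)
              x += force(x, on) * dt + np.sqrt(2 * D * dt) * rng.normal()
              t += dt
          return x / T                                # average drift velocity (current)

      J = np.mean([simulate() for _ in range(10)])
      print(f"estimated average current J = {J:.4f}")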

  7. Average Cross Section Evaluation - Room for Improvement

    Energy Technology Data Exchange (ETDEWEB)

    Frohner, G.H. [Forschungszentrum Karlsruhe Institut fur Kern- und Energietechnik, Karlsruhe (Germany)

    2006-07-01

    Full text of publication follows: Techniques for the evaluation of average nuclear cross sections are well established. Nevertheless, there seems to be room for improvement. Heuristic expressions for average partial cross sections of the Hauser-Feshbach type with width-fluctuation corrections could be replaced by the correct GOE triple integral. Transmission coefficients derived from macroscopic models (optical, single- and double-hump fission barrier, etc.) lead to better descriptions of cross-section behaviour over wide energy ranges. At higher energies, (n,γn') reactions compete with radiative capture (Moldauer effect). In all cross-section modelling one must distinguish properly between average S- and R-matrix parameters. The exact relationship between them is given, as well as the connection to ENDF format rules. Fitting codes (e.g. FITACS) should be able to digest observed data directly, instead of only reduced data already corrected for self-shielding and multiple scattering (e.g. with SESH). (author)

  8. Model averaging and muddled multimodel inferences.

    Science.gov (United States)

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t
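
    A small sketch of AIC-weight computation for a set of candidate regression models, together with a model-averaged prediction of the response (averaging predictions rather than coefficients, in line with the cautions above). The data and candidate models are invented for illustration, and the sketch does not reproduce the partial-standard-deviation standardization discussed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100
      x1, x2 = rng.normal(size=n), rng.normal(size=n)
      y = 1.0 + 0.5 * x1 + 0.3 * x2 + rng.normal(scale=1.0, size=n)

      def fit_ols(X, y):
          """Least-squares fit; returns coefficients and AIC (Gaussian likelihood)."""
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = np.sum((y - X @ beta) ** 2)
          k = X.shape[1] + 1                      # coefficients + error variance
          aic = n * np.log(rss / n) + 2 * k
          return beta, aic

      ones = np.ones(n)
      models = {
          "x1":    np.column_stack([ones, x1]),
          "x2":    np.column_stack([ones, x2]),
          "x1+x2": np.column_stack([ones, x1, x2]),
      }
      fits = {name: fit_ols(X, y) for name, X in models.items()}
      aics = np.array([aic for _, aic in fits.values()])
      w = np.exp(-0.5 * (aics - aics.min()))
      w /= w.sum()                                 # Akaike weights

      # Model-averaged *prediction* at a new point (not averaging the coefficients).
      xnew = {"x1": 1.0, "x2": -0.5}
      preds = [fits["x1"][0] @ [1, xnew["x1"]],
               fits["x2"][0] @ [1, xnew["x2"]],
               fits["x1+x2"][0] @ [1, xnew["x1"], xnew["x2"]]]
      print("AIC weights:", dict(zip(models, np.round(w, 3))))
      print("model-averaged prediction:", float(np.dot(w, preds)))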

  9. Books Average Previous Decade of Economic Misery

    OpenAIRE

    R Alexander Bentley; Alberto Acerbi; Paul Ormerod; Vasileios Lampos

    2014-01-01

    For the 20(th) century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is signific...

  10. An improved moving average technical trading rule

    Science.gov (United States)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
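
    A simplified sketch of a long-only moving-average cross-over rule combined with a dynamic trailing stop, in the spirit of the modification described above. The stop rule (exit when the price falls below a fraction of its running maximum since entry) and all parameters are assumptions for illustration, not the authors' exact specification.

      import numpy as np

      def ma_crossover_with_trailing_stop(prices, window=50, stop_frac=0.95):
          """Return a 0/1 position series: enter when price > moving average,
          exit when price drops below stop_frac * (max price since entry)."""
          prices = np.asarray(prices, dtype=float)
          position = np.zeros(prices.size)
          in_trade, peak = False, 0.0
          for t in range(window, prices.size):
              ma = prices[t - window:t].mean()
              if not in_trade and prices[t] > ma:          # cross-over 'buy' signal
                  in_trade, peak = True, prices[t]
              elif in_trade:
                  peak = max(peak, prices[t])
                  if prices[t] < stop_frac * peak:         # dynamic trailing stop
                      in_trade = False
              position[t] = 1.0 if in_trade else 0.0
          return position

      rng = np.random.default_rng(3)
      prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 1000)))  # toy price path
      pos = ma_crossover_with_trailing_stop(prices)
      rets = np.diff(np.log(prices))
      strat = pos[:-1] * rets                               # position held over the next return
      print("cumulative log return:", strat.sum(), "vs buy-and-hold:", rets.sum())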

  11. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    J M M Senovilla

    2007-07-01

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear decisive difference between singular and non-singular cosmologies.

  12. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  13. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  14. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, ΒΘ, is derived. A method for unobtrusively measuring the quantities used to evaluate ΒΘ in Extrap T1 is described. The results of a series of measurements yielding ΒΘ as a function of externally applied toroidal field are presented. (author)

  15. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...

  16. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...

  17. Discontinuities and hysteresis in quantized average consensus

    NARCIS (Netherlands)

    Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo

    2011-01-01

    We consider continuous-time average consensus dynamics in which the agents’ states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of ‘‘practical consensus’’. To cope with undesired chattering

  18. Error estimates on averages of correlated data

    International Nuclear Information System (INIS)

    We describe how the true statistical error on an average of correlated data can be obtained with ease and efficiency by a renormalization group method. The method is illustrated with numerical and analytical examples, having finite as well as infinite range correlations. (orig.)

  19. Average utility maximization: A preference foundation

    NARCIS (Netherlands)

    A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)

    2014-01-01

    textabstractThis paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen

  20. High average-power induction linacs

    Energy Technology Data Exchange (ETDEWEB)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.

    1989-03-15

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.

  1. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  2. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation...

  3. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  4. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...

  5. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...

  6. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    C. Chiarella; X.Z. He; C.H. Hommes

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type use

  7. Averaged Extended Tree Augmented Naive Classifier

    Directory of Open Access Journals (Sweden)

    Aaron Meehan

    2015-07-01

    Full Text Available This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN, which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN and Averaged One-Dependence Estimator (AODE classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

  8. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can ... can be efficiently computed, we immediately gain scalability. GA is inherently more robust than PCA, but we show that they coincide for Gaussian data. We exploit that averages can be made robust to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. Robustness can be with respect ..., making it scalable to “big noisy data.” We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie.

  9. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the factors underlying average labour productivity in agriculture, forestry and fishing. The analysis takes into account data on the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The breakdown of average labour productivity by the factors affecting it is carried out by means of the u-substitution method.

  10. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
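
    A bare-bones sketch of trajectory (Polyak-Ruppert) averaging for a plain Robbins-Monro stochastic approximation iteration. This only illustrates the general idea of averaging the iterates; it is not the SAMCMC algorithm analysed in the paper, and the target function, noise model, and step sizes are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      theta_star = 2.0

      def noisy_gradient(theta):
          """Unbiased noisy observation of the gradient of 0.5 * (theta - theta_star)**2."""
          return (theta - theta_star) + rng.normal(scale=1.0)

      n_iter = 20000
      theta = 0.0
      running_sum = 0.0
      for k in range(1, n_iter + 1):
          gamma = 1.0 / k ** 0.6                 # slowly decreasing step size (assumed)
          theta -= gamma * noisy_gradient(theta)
          running_sum += theta

      theta_avg = running_sum / n_iter           # trajectory-averaged estimator
      print(f"last iterate: {theta:.4f}, trajectory average: {theta_avg:.4f}")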

  11. Average Regression-Adjusted Controlled Regenerative Estimates

    OpenAIRE

    Lewis, Peter A.W.; Ressler, Richard

    1991-01-01

    Proceedings of the 1991 Winter Simulation Conference, Barry L. Nelson, W. David Kelton, Gordon M. Clark (eds.). One often uses computer simulations of queueing systems to generate estimates of system characteristics along with estimates of their precision. Obtaining precise estimates, especially for high traffic intensities, can require large amounts of computer time. Average regression-adjusted controlled regenerative estimates result from combining the two techniques ...

  12. Endogenous average cost based access pricing

    OpenAIRE

    Fjell, Kenneth; Foros, Øystein; Pal, Debashis

    2006-01-01

    We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...

  13. Average Drift Analysis and Population Scalability

    OpenAIRE

    He, Jun; Yao, Xin

    2013-01-01

    This paper aims to study how the population size affects the computation time of evolutionary algorithms in a rigorous way. The computation time of an evolutionary algorithm can be measured by either the expected number of generations (hitting time) or the expected number of fitness evaluations (running time) to find an optimal solution. Population scalability is the ratio of the expected hitting time between a benchmark algorithm and an algorithm using a larger population size. Average drift...

  14. On Heroes and Average Moral Human Beings

    OpenAIRE

    Kirchgässner, Gebhard

    2001-01-01

    After discussing various approaches about heroic behaviour in the literature, we first give a definition and classification of moral behaviour, in distinction to intrinsically motivated and 'prudent' behaviour. Then, we present some arguments on the function of moral behaviour according to 'minimal' standards of the average individual in a modern democratic society, before we turn to heroic behaviour. We conclude with some remarks on methodological as well as social problems which arise or ma...

  15. Time-dependent angularly averaged inverse transport

    OpenAIRE

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured al...

  16. A Visibility Graph Averaging Aggregation Operator

    OpenAIRE

    Chen, Shiyu; Hu, Yong; Mahadevan, Sankaran; Deng, Yong

    2013-01-01

    The problem of aggregation is of considerable importance in many disciplines. In this paper, a new type of operator called the visibility graph averaging (VGA) aggregation operator is proposed. This proposed operator is based on the visibility graph, which can convert a time series into a graph. The weights are obtained according to the importance of the data in the visibility graph. Finally, the VGA operator is used in the analysis of the TAIEX database to illustrate that it is practical and compare...
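
    A small sketch of the natural visibility graph construction followed by a degree-weighted average of a time series. Using node degree as the importance weight is an assumption made for illustration and may differ from the exact weighting used in the paper.

      import numpy as np

      def visibility_graph(y):
          """Natural visibility graph: nodes a < b are linked if every intermediate
          point lies strictly below the line connecting (a, y[a]) and (b, y[b])."""
          n = len(y)
          adj = np.zeros((n, n), dtype=bool)
          for a in range(n):
              for b in range(a + 1, n):
                  visible = all(
                      y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                      for c in range(a + 1, b)
                  )
                  if visible:
                      adj[a, b] = adj[b, a] = True
          return adj

      def vga_aggregate(y):
          """Degree-weighted average of the series (illustrative choice of weights)."""
          adj = visibility_graph(y)
          degree = adj.sum(axis=1).astype(float)
          weights = degree / degree.sum()
          return float(np.dot(weights, y))

      series = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.0, 6.0]
      print("plain mean:", np.mean(series), "VGA-style aggregate:", vga_aggregate(series))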

  17. Dollar-Cost Averaging: An Investigation

    OpenAIRE

    Fang, Wei

    2007-01-01

    Dollar-cost Averaging (DCA) is a common and useful systematic investment strategy for mutual fund managers, private investors, financial analysts and retirement planners. The performance effectiveness of DCA is highly controversial among academics and professionals. As a popularly recommended investment strategy, DCA is recognized as a risk reduction strategy; however, this advantage is claimed to come at the expense of generating higher returns. The dissertation is to intensively inves...
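
    A quick illustrative comparison of dollar-cost averaging against a lump-sum purchase on a synthetic price path; prices, budget, and horizon are made up, and the sketch only shows how the average purchase price under DCA is computed.

      import numpy as np

      rng = np.random.default_rng(7)
      months = 12
      prices = 50 * np.exp(np.cumsum(rng.normal(0.0, 0.05, months)))  # toy monthly prices
      budget = 1200.0

      # Dollar-cost averaging: invest the same amount each month.
      monthly = budget / months
      dca_shares = np.sum(monthly / prices)
      dca_avg_cost = budget / dca_shares            # harmonic-mean-like average price

      # Lump sum: invest everything at the first month's price.
      lump_shares = budget / prices[0]

      final = prices[-1]
      print(f"DCA average cost/share: {dca_avg_cost:.2f}, final value: {dca_shares * final:.2f}")
      print(f"Lump-sum cost/share:    {prices[0]:.2f}, final value: {lump_shares * final:.2f}")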

  18. Average Annual Rainfall over the Globe

    Science.gov (United States)

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…

  19. Geomagnetic effects on the average surface temperature

    Science.gov (United States)

    Ballatore, P.

    Several results have previously shown that solar activity can be related to cloudiness and the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and the solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database), which represent the averaged surface temperature with a spatial resolution of 0.5°x0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.

  20. On Backus average for generally anisotropic layers

    CERN Document Server

    Bos, Len; Slawinski, Michael A; Stanoev, Theodore

    2016-01-01

    In this paper, following the Backus (1962) approach, we examine expressions for elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In over half-a-century since the publications of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of mathematical underpinnings of the original formulation; hence, this paper. We prove that---within the long-wave approximation---if the thin layers obey stability conditions then so does the equivalent medium. We examine---within the Backus-average context---the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...

  1. Spatial averaging infiltration model for layered soil

    Institute of Scientific and Technical Information of China (English)

    HU HePing; YANG ZhiYong; TIAN FuQiang

    2009-01-01

    To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.
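
    A toy Monte Carlo sketch of the effect described above: averaging point-scale infiltration over a horizontally heterogeneous (lognormal) saturated conductivity generally gives a lower rate than evaluating infiltration at the mean conductivity when rainfall limits the actual rate. The Green-Ampt form, the rainfall cap, and all parameter values are illustrative assumptions, not the SAI model itself.

      import numpy as np

      rng = np.random.default_rng(11)

      def infiltration_rate(Ks, rain=5e-5, psi=0.1, dtheta=0.3, F=0.02):
          """Actual infiltration rate = min(rainfall intensity, Green-Ampt capacity
          Ks * (1 + psi * dtheta / F)); units and parameter values are illustrative."""
          capacity = Ks * (1.0 + psi * dtheta / F)
          return np.minimum(rain, capacity)

      # Lognormal saturated conductivity to mimic heterogeneity along the horizontal direction.
      mean_Ks, cv = 1e-5, 1.0
      sigma = np.sqrt(np.log(1.0 + cv ** 2))
      mu = np.log(mean_Ks) - 0.5 * sigma ** 2
      Ks_samples = rng.lognormal(mu, sigma, size=100_000)

      f_spatial_avg = infiltration_rate(Ks_samples).mean()   # average of point-scale rates
      f_homogeneous = infiltration_rate(mean_Ks)              # rate computed at the mean Ks

      print(f"spatially averaged rate: {f_spatial_avg:.3e}")
      print(f"rate assuming homogeneous Ks: {f_homogeneous:.3e}   (overestimate)")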

  2. Spatial averaging infiltration model for layered soil

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.

  3. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  4. Disk-averaged synthetic spectra of Mars

    CERN Document Server

    Tinetti, G; Fong, W; Meadows, V S; Snively, H; Velusamy, T; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Tinetti, Giovanna; Velusamy, Thangasamy

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronograph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk averaged synthetic spectra, light-cur...

  5. Bayesian Model Averaging and Weighted Average Least Squares : Equivariance, Stability, and Numerical Issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squa

  6. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  7. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  8. Time-dependent angularly averaged inverse transport

    CERN Document Server

    Bal, Guillaume

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.

  9. PROFILE OF HIRED FARMWORKERS, 1998 ANNUAL AVERAGES

    OpenAIRE

    Runyan, Jack L.

    2000-01-01

    An average of 875,000 persons 15 years of age and older did hired farmwork each week as their primary job in 1998. An additional 63,000 people did hired farmwork each week as their secondary job. Hired farmworkers were more likely than the typical U.S. wage and salary worker to be male, Hispanic, younger, less educated, never married, and not U.S. citizens. The West (42 percent) and South (31.4 percent) census regions accounted for almost three-fourths of the hired farmworkers. The rate of un...

  10. The average free volume model for liquids

    CERN Document Server

    Yu, Yang

    2014-01-01

    In this work, the molar volume thermal expansion coefficient of 59 room-temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with an attractive force, is proposed to explain the correlation in this study. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical prediction and experimental data is obtained.

  11. Fluctuations of wavefunctions about their classical average

    Energy Technology Data Exchange (ETDEWEB)

    Benet, L [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Flores, J [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Hernandez-Saldana, H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Izrailev, F M [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Leyvraz, F [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Seligman, T H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico)

    2003-02-07

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  12. Fluctuations of wavefunctions about their classical average

    CERN Document Server

    Bénet, L; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.

  13. Sparsity averaging for radio-interferometric imaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2014-01-01

    We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.

  14. MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert

    2003-05-01

    A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
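
    A trivial sketch of the kind of mode-matrix lookup described above: the machine mode (beam path) and beam mode index into a table of allowable average-power limits, and any combination not explicitly permitted inhibits the beam. The mode indices and power values here are placeholders, not the actual Jefferson Lab FEL settings.

      # Placeholder 8x8 table: POWER_LIMIT_KW[machine_mode][beam_mode]; None = beam inhibited.
      POWER_LIMIT_KW = [[None] * 8 for _ in range(8)]
      POWER_LIMIT_KW[0][0] = 0.0      # hypothetical "no beam" mode
      POWER_LIMIT_KW[1][2] = 0.25     # hypothetical tune-up path, low power
      POWER_LIMIT_KW[3][7] = 2000.0   # hypothetical full-power path, 2 MW electron beam power

      def allowed_average_power(machine_mode: int, beam_mode: int) -> float:
          """Return the allowable average beam power in kW; 0.0 means the beam is inhibited."""
          limit = POWER_LIMIT_KW[machine_mode][beam_mode]
          return 0.0 if limit is None else limit

      print(allowed_average_power(3, 7))   # 2000.0
      print(allowed_average_power(5, 5))   # 0.0 -> combination outside the matrix, beam inhibited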

  15. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  16. Rademacher averages on noncommutative symmetric spaces

    CERN Document Server

    Merdy, Christian Le

    2008-01-01

    Let E be a separable (or the dual of a separable) symmetric function space, let M be a semifinite von Neumann algebra and let E(M) be the associated noncommutative function space. Let $(\\epsilon_k)_k$ be a Rademacher sequence, on some probability space $\\Omega$. For finite sequences $(x_k)_k$ of E(M), we consider the Rademacher averages $\\sum_k \\epsilon_k\\otimes x_k$ as elements of the noncommutative function space $E(L^\\infty(\\Omega)\\otimes M)$ and study estimates for their norms $\\Vert \\sum_k \\epsilon_k \\otimes x_k\\Vert_E$ calculated in that space. We establish general Khintchine type inequalities in this context. Then we show that if E is 2-concave, the latter norm is equivalent to the infimum of $\\Vert (\\sum y_k^*y_k)^{{1/2}}\\Vert + \\Vert (\\sum z_k z_k^*)^{{1/2}}\\Vert$ over all $y_k,z_k$ in E(M) such that $x_k=y_k+z_k$ for any k. Dual estimates are given when E is 2-convex and has a non trivial upper Boyd index. We also study Rademacher averages for doubly indexed families of E(M).

  17. Motional averaging in a superconducting qubit.

    Science.gov (United States)

    Li, Jian; Silveri, M P; Kumar, K S; Pirkkalainen, J-M; Vepsäläinen, A; Chien, W C; Tuorila, J; Sillanpää, M A; Hakonen, P J; Thuneberg, E V; Paraoanu, G S

    2013-01-01

    Superconducting circuits with Josephson junctions are promising candidates for developing future quantum technologies. Of particular interest is to use these circuits to study effects that typically occur in complex condensed-matter systems. Here we employ a superconducting quantum bit--a transmon--to perform an analogue simulation of motional averaging, a phenomenon initially observed in nuclear magnetic resonance spectroscopy. By modulating the flux bias of a transmon with controllable pseudo-random telegraph noise we create a stochastic jump of its energy level separation between two discrete values. When the jumping is faster than a dynamical threshold set by the frequency displacement of the levels, the initially separate spectral lines merge into a single, narrow, motional-averaged line. With sinusoidal modulation a complex pattern of additional sidebands is observed. We show that the modulated system remains quantum coherent, with modified transition frequencies, Rabi couplings, and dephasing rates. These results represent the first steps towards more advanced quantum simulations using artificial atoms. PMID:23361011

  18. Intensity contrast of the average supergranule

    CERN Document Server

    Langfellner, J; Gizon, L

    2016-01-01

    While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities like temperature or density at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ${\\sim}10^4$ outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is $(7.8\\pm0.6)\\times10^{-4}$ (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about $1.1\\pm0.1$ K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...

  19. Average prime-pair counting formula

    Science.gov (United States)

    Korevaar, Jaap; Riele, Herman Te

    2010-04-01

    Taking r > 0, let π_{2r}(x) denote the number of prime pairs (p, p+2r) with p ≤ x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ~ 2C_{2r} li_2(x) with an explicit constant C_{2r} > 0. There seems to be no good conjecture for the remainders ω_{2r}(x) = π_{2r}(x) - 2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x) - li(x). However, there is a heuristic approximate formula for averages of the remainders ω_{2r}(x) which is supported by numerical results.
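
    A short numerical check of the kind of comparison mentioned above for r = 1 (twin primes): count π_2(x) with a sieve and compare it with the Hardy-Littlewood prediction 2 C_2 li_2(x), where C_2 ≈ 0.6601618 is the twin-prime constant and li_2(x) is the integral of dt / (ln t)^2 from 2 to x, evaluated numerically. Purely illustrative.

      import numpy as np

      def primes_up_to(n):
          """Simple sieve of Eratosthenes; returns a boolean primality array."""
          sieve = np.ones(n + 1, dtype=bool)
          sieve[:2] = False
          for p in range(2, int(n ** 0.5) + 1):
              if sieve[p]:
                  sieve[p * p :: p] = False
          return sieve

      def li2(x, steps=200_000):
          """Numerical integral of dt / (ln t)^2 from 2 to x (midpoint rule)."""
          t = np.linspace(2.0, x, steps)
          dt = t[1] - t[0]
          mid = 0.5 * (t[:-1] + t[1:])
          return float(np.sum(dt / np.log(mid) ** 2))

      x = 1_000_000
      sieve = primes_up_to(x)
      pi2 = int(np.count_nonzero(sieve[:-2] & sieve[2:]))   # pairs (p, p+2) with p + 2 <= x

      C2 = 0.6601618158                                     # twin-prime constant
      prediction = 2 * C2 * li2(x)
      print(f"pi_2({x}) = {pi2}, Hardy-Littlewood prediction ~ {prediction:.0f}")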

  20. Averaged Null Energy Condition from Causality

    CERN Document Server

    Hartman, Thomas; Tajdini, Amirhossein

    2016-01-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, $\\int du T_{uu}$, must be positive. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to $n$-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form $\\int du X_{uuu\\cdots u} \\geq 0$. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment ...

  1. Hedge algorithm and Dual Averaging schemes

    CERN Document Server

    Baes, Michel

    2011-01-01

    We show that the Hedge algorithm, a method that is widely used in Machine Learning, can be interpreted as a particular instance of Dual Averaging schemes, which have recently been introduced by Nesterov for regret minimization. Based on this interpretation, we establish three alternative methods of the Hedge algorithm: one in the form of the original method, but with optimal parameters, one that requires less a priori information, and one that is better adapted to the context of the Hedge algorithm. All our modified methods have convergence results that are better or at least as good as the performance guarantees of the vanilla method. In numerical experiments, our methods significantly outperform the original scheme.

  2. The Lang-Trotter Conjecture on Average

    OpenAIRE

    Baier, Stephan

    2006-01-01

    For an elliptic curve $E$ over $\\ratq$ and an integer $r$ let $\\pi_E^r(x)$ be the number of primes $p\\le x$ of good reduction such that the trace of the Frobenius morphism of $E/\\fie_p$ equals $r$. We consider the quantity $\\pi_E^r(x)$ on average over certain sets of elliptic curves. More in particular, we establish the following: If $A,B>x^{1/2+\\epsilon}$ and $AB>x^{3/2+\\epsilon}$, then the arithmetic mean of $\\pi_E^r(x)$ over all elliptic curves $E$ : $y^2=x^3+ax+b$ with $a,b\\in \\intz$, $|a...

  3. Averaging lifetimes for B hadron species

    International Nuclear Information System (INIS)

    Measurements of the lifetimes of the individual B species are of great interest. Many of these measurements are well below the 10% level of precision. However, in order to reach the precision necessary to test the current theoretical predictions, the results from different experiments need to be averaged together. Therefore, the relevant systematic uncertainties of each measurement need to be well defined in order to understand the correlations between the results from different experiments. In this paper we discuss the dominant sources of systematic error which lead to correlations between the different measurements. We point out problems connected with the conventional approach of combining lifetime data and discuss methods which overcome these problems. (orig.)

  4. Scaling crossover for the average avalanche shape

    Science.gov (United States)

    Papanikolaou, Stefanos; Bohn, Felipe; Sommer, Rubem L.; Durin, Gianfranco; Zapperi, Stefano; Sethna, James P.

    2010-03-01

    Universality and the renormalization group claim to predict all behavior on long length and time scales asymptotically close to critical points. In practice, large simulations and heroic experiments have been needed to unambiguously test and measure the critical exponents and scaling functions. We announce here the measurement and prediction of universal corrections to scaling, applied to the temporal average shape of Barkhausen noise avalanches. We bypass the confounding factors of time-retarded interactions (eddy currents) by measuring thin permalloy films, and bypass thresholding effects and amplifier distortions by applying Wiener deconvolution. We show experimental shapes that are approximately symmetric, and measure the leading corrections to scaling. We solve a mean-field theory for the magnetization dynamics and calculate the relevant demagnetizing-field correction to scaling, showing qualitative agreement with the experiment. In this way, we move toward a quantitative theory useful at smaller time and length scales and farther from the critical point.

  5. Average transverse momentum quantities approaching the lightfront

    CERN Document Server

    Boer, Daniel

    2014-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of such integrated quantities, using Bessel-weighting and rapidity cut-offs, with the conventional definitions as limiting cases. The regularized quantities are given in terms of integrals over the TMDs of interest that are well-defined and moreover have the advantage of being amenable to lattice evaluations.

  6. Bivariate phase-rectified signal averaging

    CERN Document Server

    Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg

    2008-01-01

    Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times where the other signal is in a certain phase or state; it is stable in the presence of noise and impassible to non-stationarities.
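
    A compact sketch of bivariate phase-rectified signal averaging: anchor points are chosen where a trigger signal increases, and windows of a second signal centred on those anchors are averaged. The window length, the anchor criterion, and the synthetic data are assumptions for illustration, not the authors' exact procedure.

      import numpy as np

      def bivariate_prsa(trigger, target, half_window=20):
          """Average windows of `target` centred on anchor points of `trigger`
          (anchors here: samples where the trigger increases)."""
          anchors = [i for i in range(1, len(trigger)) if trigger[i] > trigger[i - 1]]
          windows = [
              target[i - half_window : i + half_window]
              for i in anchors
              if i - half_window >= 0 and i + half_window <= len(target)
          ]
          return np.mean(windows, axis=0)

      rng = np.random.default_rng(5)
      n = 5000
      phase = np.cumsum(rng.normal(0.05, 0.01, n))
      x = np.sin(phase) + rng.normal(0, 0.5, n)          # trigger signal
      y = np.cos(phase) + rng.normal(0, 0.5, n)          # coupled second signal

      prsa_curve = bivariate_prsa(x, y)
      print("PRSA curve length:", prsa_curve.shape[0], "centre value:", prsa_curve[20])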

  7. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.

  8. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-01-01

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214

  9. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that the BMA weights must add to one, and then use a limited-memory quasi-Newton algorithm to solve the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
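    A minimal sketch of the BMA-BFGS idea, assuming a Gaussian mixture likelihood, a squared-weight parameterization as one way of dropping the sum-to-one constraint, and SciPy's L-BFGS-B optimizer; the exact parameterization used in the paper may differ.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def fit_bma_bfgs(forecasts, obs):
        """Fit BMA mixture weights and a common spread by maximizing a Gaussian
        mixture likelihood with L-BFGS-B.  `forecasts` has shape
        (n_times, n_models); `obs` has shape (n_times,)."""
        forecasts = np.asarray(forecasts, dtype=float)
        obs = np.asarray(obs, dtype=float)
        n_times, n_models = forecasts.shape

        def neg_log_lik(theta):
            w = theta[:n_models] ** 2              # nonnegative, unconstrained sum
            w = w / w.sum()                        # normalized when evaluating the mixture
            sigma = np.exp(theta[n_models])        # positive spread parameter
            dens = norm.pdf(obs[:, None], loc=forecasts, scale=sigma)
            return -np.sum(np.log(dens @ w + 1e-300))

        theta0 = np.concatenate([np.ones(n_models), [0.0]])
        result = minimize(neg_log_lik, theta0, method="L-BFGS-B")
        weights = result.x[:n_models] ** 2
        return weights / weights.sum(), np.exp(result.x[n_models])
    ```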

  10. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
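    A minimal numerical sketch of the unconstrained variant described above: bin samples of the instantaneous force along the chosen coordinate, average within bins, and integrate the negative mean force to obtain a free-energy profile (function and variable names are illustrative).

    ```python
    import numpy as np

    def free_energy_from_mean_force(coord_samples, force_samples, n_bins=50):
        """Bin the instantaneous force along the coordinate, average it per bin,
        and integrate dA/dxi = -<F>_xi to get a free-energy profile A(xi)."""
        coord_samples = np.asarray(coord_samples, dtype=float)
        force_samples = np.asarray(force_samples, dtype=float)
        edges = np.linspace(coord_samples.min(), coord_samples.max(), n_bins + 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        idx = np.clip(np.digitize(coord_samples, edges) - 1, 0, n_bins - 1)
        mean_force = np.array([force_samples[idx == b].mean() for b in range(n_bins)])
        # trapezoidal integration of -<F> along the coordinate
        dA = -0.5 * (mean_force[:-1] + mean_force[1:]) * np.diff(centers)
        return centers, np.concatenate([[0.0], np.cumsum(dA)])
    ```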

  11. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m2) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power ~ 1 kW output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~ 1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  12. Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport

    Science.gov (United States)

    Parker, J. C.; van Genuchten, M. Th.

    1984-07-01

    Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition are shown to yield flux concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations for laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
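    A small sketch of converting a resident (volume-averaged) concentration profile to a flux-averaged one, assuming the commonly quoted transformation C_f = C_r - (D/v) dC_r/dx; treat the exact sign convention as an assumption to be checked against the reference in use.

    ```python
    import numpy as np

    def flux_from_resident(x, c_resident, velocity, dispersion):
        """Flux-averaged concentration from a resident concentration profile,
        using C_f = C_r - (D / v) * dC_r/dx (assumed sign convention)."""
        c_resident = np.asarray(c_resident, dtype=float)
        return c_resident - (dispersion / velocity) * np.gradient(c_resident, x)
    ```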

  13. Global Average Brightness Temperature for April 2003

    Science.gov (United States)

    2003-01-01

    This image shows average temperatures in April 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The Atmospheric Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  14. Average stress-average strain tension-stiffening relationships based on provisions of design codes

    Institute of Scientific and Technical Information of China (English)

    Gintaris KAKLAUSKAS; Viktor GRIBNIAK; Rokas GIRDZIUS

    2011-01-01

    This research was aimed at deriving average stress-average strain tension-stiffening relationships in accordance with the provisions of design codes for reinforced concrete (RC) members. Using a proposed inverse technique, the tension-stiffening relationships were derived from moment-curvature diagrams of RC beams calculated by different code methods, namely Eurocode 2, ACI 318, and the Chinese standard GB 50010-2002. The derived tension-stiffening laws were applied in a numerical study using the nonlinear finite element software ATENA. The curvatures calculated by ATENA and the code methods were in good agreement.

  15. Interpreting Sky-Averaged 21-cm Measurements

    Science.gov (United States)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  16. Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data

    Science.gov (United States)

    Kristoffersen, Anders

    2007-08-01

    The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high resolution diffusion tensor imaging (DTI) and may therefore help solving the problem of crossing fibers encountered in white matter tractography.
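    A minimal sketch of one bias-corrected fit of the type discussed above, namely the high-SNR approximation to the Rician mean, assuming a known (or pre-estimated) noise level sigma; the MD, MP and ML estimators follow the same pattern with different model functions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fit_adc_high_snr(b_values, magnitudes, sigma):
        """Fit S0 and the diffusion coefficient D to magnitude data using a
        high-SNR approximation of the Rician mean,
            E[M] ~ sqrt((S0 * exp(-b * D))**2 + sigma**2),
        which lifts the fitted curve by the noise floor instead of ignoring it."""
        def model(b, s0, d):
            return np.sqrt((s0 * np.exp(-b * d)) ** 2 + sigma ** 2)

        p0 = (float(np.max(magnitudes)), 1e-3)
        (s0, d), _ = curve_fit(model, b_values, magnitudes, p0=p0)
        return s0, d
    ```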

  17. Hearing Office Average Processing Time Ranking Report, April 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  18. Hearing Office Average Processing Time Ranking Report, February 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  19. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...

  20. The monthly-averaged and yearly-averaged cosine effect factor of a heliostat field

    Energy Technology Data Exchange (ETDEWEB)

    Al-Rabghi, O.M.; Elsayed, M.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia). Dept. of Thermal Engineering)

    1992-01-01

    Calculations are carried out to determine the dependence of the monthly-averaged and the yearly-averaged daily cosine effect factor on the pertinent parameters. The results are plotted on charts for each month and for the full year. These results cover latitude angles between 0 and 45°N, for fields with radii up to 50 tower heights. In addition, the results are expressed in mathematical correlations to facilitate using them in computer applications. A procedure is outlined to use the present results to produce a preliminary layout of the heliostat field, and to predict the rated MW(th) reflected by the heliostat field during a period of a month, several months, or a year. (author)

  1. 40 CFR 1033.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more...

  2. 7 CFR 51.577 - Average midrib length.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  3. 7 CFR 760.640 - National average market price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average... average quality loss factors that are reflected in the market by county or part of a county. (c)...

  4. 40 CFR 80.67 - Compliance on average.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Compliance on average. 80.67 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.67 Compliance on average. The requirements... with one or more of the requirements of § 80.41 is determined on average (“averaged gasoline”)....

  5. Kinetic energy equations for the average-passage equation system

    Science.gov (United States)

    Johnson, Richard W.; Adamczyk, John J.

    1989-01-01

    Important kinetic energy equations derived from the average-passage equation sets are documented, with a view to their interrelationships. These kinetic equations may be used for closing the average-passage equations. The turbulent kinetic energy transport equation used is formed by subtracting the mean kinetic energy equation from the averaged total instantaneous kinetic energy equation. The aperiodic kinetic energy equation, averaged steady kinetic energy equation, averaged unsteady kinetic energy equation, and periodic kinetic energy equation, are also treated.

  6. Average annual runoff in the United States, 1951-80

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is a line coverage of average annual runoff in the conterminous United States, 1951-1980. Surface runoff Average runoff Surface waters United States

  7. Seasonal Sea Surface Temperature Averages, 1985-2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of four images showing seasonal sea surface temperature (SST) averages for the entire earth. Data for the years 1985-2001 are averaged to...

  8. Average American 15 Pounds Heavier Than 20 Years Ago

    Science.gov (United States)

    ... page: https://medlineplus.gov/news/fullstory_160233.html Average American 15 Pounds Heavier Than 20 Years Ago ... since the late 1980s and early 1990s, the average American has put on 15 or more additional ...

  9. The SU(N) Wilson Loop Average in 2 Dimensions

    OpenAIRE

    Karjalainen, Esa

    1993-01-01

    We solve explicitly a closed, linear loop equation for the SU(2) Wilson loop average on a two-dimensional plane and generalize the solution to the case of the SU(N) Wilson loop average with an arbitrary closed contour. Furthermore, the flat space solution is generalized to any two-dimensional manifold for the SU(2) Wilson loop average and to any two-dimensional manifold of genus 0 for the SU(N) Wilson loop average.

  10. Average of Distribution and Remarks on Box-Splines

    Institute of Scientific and Technical Information of China (English)

    LI Yue-sheng

    2001-01-01

    A class of generalized moving average operators is introduced, and the integral representations of an average function are provided. It has been shown that the average of the Dirac δ-distribution is just the well-known box-spline. Some remarks on box-splines, such as their smoothness and the corresponding partition of unity, are made. The factorization of average operators is derived. Then, the subdivision algorithm for efficient computation of box-splines and their linear combinations follows.

  11. Investigating Averaging Effect by Using Three Dimension Spectrum

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The eddy current displacement sensor's averaging effect has been investigated in this paper, and the frequency spectrum property of the averaging effect was also deduced. It indicates that the averaging effect has no influence on measuring a rotor's rotating error, but it has a visible influence on measuring the rotor's profile error. According to the frequency spectrum of the averaging effect, the actual sampling data can be adjusted reasonably, thus improving the measuring precision.

  12. 40 CFR 1042.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1042.710..., Banking, and Trading for Certification § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families...

  13. Spectral averaging techniques for Jacobi matrices with matrix entries

    CERN Document Server

    Sadel, Christian

    2009-01-01

    A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure which can potentially be useful for spectral analysis. Furthermore another variant of spectral averaging over coupling constants for these operators is presented.

  14. Evaluation of the average ion approximation for a tokamak plasma

    International Nuclear Information System (INIS)

    The average ion approximation, sometimes used to calculate atomic processes in plasmas, is assessed by computing deviations in various rates over a set of conditions representative of tokamak edge plasmas. Conditions are identified under which the rates are primarily a function of the average ion charge and plasma parameters, as assumed in the average ion approximation. (Author) 19 refs., tab., 5 figs

  15. 20 CFR 226.62 - Computing average monthly compensation.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is computed by first determining the employee's highest 60 months of railroad compensation...

  16. 27 CFR 19.37 - Average effective tax rate.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate..., DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Taxes Effective Tax Rates § 19.37 Average effective tax rate. (a) The proprietor may establish an average effective tax rate for any...

  17. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...

  18. 20 CFR 404.220 - Average-monthly-wage method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...

  19. 7 CFR 1410.44 - Average adjusted gross income.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average adjusted gross income. 1410.44 Section 1410... Average adjusted gross income. (a) Benefits under this part will not be available to persons or legal entities whose average adjusted gross income exceeds $1,000,000 or as further specified in part...

  20. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...

  1. 47 CFR 80.759 - Average terrain elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth.... (d) Average the values by adding them and dividing by the number of readings along each radial....

  2. 34 CFR 668.196 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.196 Section 668.196....196 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under... calculated as an average rate under § 668.183(d)(2). (2) You may appeal a notice of a loss of...

  3. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...

  4. 34 CFR 668.215 - Average rates appeals.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.215 Section 668.215... Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under § 668... as an average rate under § 668.202(d)(2). (2) You may appeal a notice of a loss of eligibility...

  5. 7 CFR 51.2548 - Average moisture content determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548..., AND STANDARDS) United States Standards for Grades of Pistachio Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average moisture content of the lot is not a requirement...

  6. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    Science.gov (United States)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
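    A minimal sketch of the two simplest averaging techniques mentioned above, applied to rake or traverse data; the inputs are assumed to be local values, densities, velocities and sector areas at the measurement stations.

    ```python
    import numpy as np

    def area_average(values, areas):
        """Area-weighted average of a quantity measured over sectors of a plane."""
        return np.sum(values * areas) / np.sum(areas)

    def mass_average(values, densities, velocities, areas):
        """Mass-flux-weighted average; rho * V * A is the local mass flow."""
        mdot = densities * velocities * areas
        return np.sum(values * mdot) / np.sum(mdot)

    # e.g. a mass-averaged total pressure ratio across the compressor:
    # PR = mass_average(Pt_exit, rho_e, V_e, A_e) / mass_average(Pt_in, rho_i, V_i, A_i)
    ```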

  7. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
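    A small sketch of how the two standard hourly products compared above are formed from 1-min data; taking the spot sample at the start of each hour is an illustrative convention.

    ```python
    import numpy as np

    def hourly_spot_and_boxcar(one_minute_values):
        """Hourly 'spot' samples (one instantaneous value per hour) and simple
        1-h 'boxcar' averages built from 1-min data."""
        x = np.asarray(one_minute_values, dtype=float)
        n_hours = len(x) // 60
        x = x[:n_hours * 60]
        spot = x[::60]                                # value at the top of each hour
        boxcar = x.reshape(n_hours, 60).mean(axis=1)  # mean of the 60 minute values
        return spot, boxcar
    ```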

  8. Averaging and exact perturbations in LTB dust models

    CERN Document Server

    Sussman, Roberto A

    2012-01-01

    We introduce a scalar weighted average ("q-average") acting on concentric comoving domains in spherically symmetric Lemaitre-Tolman-Bondi (LTB) dust models. The resulting averaging formalism allows for an elegant coordinate independent dynamical study of the models, providing as well a valuable theoretical insight on the properties of scalar averaging in inhomogeneous spacetimes. The q-averages of those covariant scalars common to FLRW models (the "q-scalars") identically satisfy FLRW evolution laws and determine for every domain a unique FLRW background state. All curvature and kinematic proper tensors and their invariant contractions are expressible in terms of the q-scalars and their linear and quadratic local fluctuations, which convey the effects of inhomogeneity through the ratio of Weyl to Ricci curvature invariants and the magnitude of radial gradients. We define also non-local fluctuations associated with the intuitive notion of a "contrast" with respect to FLRW reference averaged values assigned to a...

  9. Basics of averaging of the Maxwell equations for bulk materials

    OpenAIRE

    Chipouline, A.; Simovski, C.; Tretyakov, S.

    2012-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some b...

  10. Average-Consensus Algorithms in a Deterministic Framework

    OpenAIRE

    Topley, Kevin; Krishnamurthy, Vikram

    2011-01-01

    We consider the average-consensus problem in a multi-node network of finite size. Communication between nodes is modeled by a sequence of directed signals with arbitrary communication delays. Four distributed algorithms that achieve average-consensus are proposed. Necessary and sufficient communication conditions are given for each algorithm to achieve average-consensus. Resource costs for each algorithm are derived based on the number of scalar values that are required for communication and ...

  11. On the average crosscap number Ⅱ: Bounds for a graph

    Institute of Scientific and Technical Information of China (English)

    Yi-chao CHEN; Yan-pei LIU

    2007-01-01

    The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than $\frac{2^{\beta(G)-1}}{2^{\beta(G)}-1}\beta(G)$ and not larger than $\beta(G)$. Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.

  12. On the average crosscap number II: Bounds for a graph

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than $\frac{2^{\beta(G)-1}}{2^{\beta(G)}-1}\beta(G)$ and not larger than $\beta(G)$. Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.

  13. Orbit-averaged Guiding-center Fokker-Planck Operator

    CERN Document Server

    Brizard, A J; Decker, J; Duthoit, F -X

    2009-01-01

    A general orbit-averaged guiding-center Fokker-Planck operator suitable for the numerical analysis of transport processes in axisymmetric magnetized plasmas is presented. The orbit-averaged guiding-center operator describes transport processes in a three-dimensional guiding-center invariant space: the orbit-averaged magnetic-flux invariant $\overline{\psi}$, the minimum-B pitch-angle coordinate $\xi_{0}$, and the momentum magnitude $p$.

  14. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  15. Practical definition of averages of tensors in general relativity

    CERN Document Server

    Boero, Ezequiel F

    2016-01-01

    We present a definition of tensor fields which are average of tensors over a manifold, with a straightforward and natural definition of derivative for the averaged fields; which in turn makes a suitable and practical construction for the study of averages of tensor fields that satisfy differential equations. Although we have in mind applications to general relativity, our presentation is applicable to a general n-dimensional manifold. The definition is based on the integration of scalars constructed from a physically motivated basis, making use of the least amount of geometrical structure. We also present definitions of covariant derivative of the averaged tensors and Lie derivative.

  16. Thermodynamic properties of average-atom interatomic potentials for alloys

    Science.gov (United States)

    Nöhring, Wolfram Georg; Curtin, William Arthur

    2016-05-01

    The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine if the average-atom potential can represent the true random alloy Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 kB/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well-predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average atom and true random alloy are found in the zero temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.

  17. Averaged EMG profiles in jogging and running at different speeds

    NARCIS (Netherlands)

    Gazendam, Marnix G. J.; Hof, At L.

    2007-01-01

    EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking 1.25-2.25 m s(-1) and running 1.25-4.5 m s(-1). The EMGs were rectified, interpolated in 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed in

  18. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…

  19. A Characterization of the average tree solution for tree games

    OpenAIRE

    Debasis Mishra; Dolf Talman

    2009-01-01

    For the class of tree games, a new solution called the average tree solution has been proposed recently. We provide a characterization of this solution. This characterization underlines an important difference, in terms of symmetric treatment of the agents, between the average tree solution and the Myerson value for the class of tree games.

  20. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages ob...

  1. A procedure to average 3D anatomical structures.

    Science.gov (United States)

    Subramanya, K; Dean, D

    2000-12-01

    Creating a feature-preserving average of three dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface delineating wires represent high curvature crestlines. By adding tile boundaries in flatter areas the 3D image surface is parametrized into anatomically labeled (homology mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that well represents the source images and may be useful clinically as a deformable model or for animation.

  2. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper concerns the problem of average σ-K width and average σ-L width of some anisotropic Besov-Wiener classes $S^r_{pq\theta}b(R^d)$ and $S^r_{pq\theta}B(R^d)$ in $L_q(R^d)$ (1≤q≤p<∞). The weak asymptotic behavior is established for the corresponding quantities.

  3. 7 CFR 701.17 - Average adjusted gross income limitation.

    Science.gov (United States)

    2010-01-01

    ... 9003), each applicant must meet the provisions of the Adjusted Gross Income Limitations at 7 CFR part... 7 Agriculture 7 2010-01-01 2010-01-01 false Average adjusted gross income limitation. 701.17... RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART § 701.17 Average adjusted gross income...

  4. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...

  5. (Average-) convexity of common pool and oligopoly TU-games

    NARCIS (Netherlands)

    Driessen, T.S.H.; Meinhardt, H.

    2000-01-01

    The paper studies both the convexity and average-convexity properties for a particular class of cooperative TU-games called common pool games. The common pool situation involves a cost function as well as a (weakly decreasing) average joint production function. Firstly, it is shown that, if the rele

  6. Average widths of anisotropic Besov-Wiener classes

    Institute of Scientific and Technical Information of China (English)

    蒋艳杰

    2000-01-01

    This paper concerns the problem of average σ-K width and average σ-L width of some anisotropic Besov-Wiener classes $S^r_{pq\theta}b(R^d)$ and $S^r_{pq\theta}B(R^d)$ in $L_q(R^d)$ (1≤q≤p<∞). The weak asymptotic behavior is established for the corresponding quantities.

  7. Remarks on the Lower Bounds for the Average Genus

    Institute of Scientific and Technical Information of China (English)

    Yi-chao Chen

    2011-01-01

    Let G be a graph of maximum degree at most four. By using the overlap matrix method which is introduced by B. Mohar, we show that the average genus of G is not less than 1/3 of its maximum genus, and the bound is best possible. Also, a new lower bound of average genus in terms of girth is derived.

  8. Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?

    Energy Technology Data Exchange (ETDEWEB)

    Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.

    2013-06-17

    Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.

  9. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that...
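    A minimal sketch of pre-averaged returns over all overlapping blocks, assuming the common weight function g(x) = min(x, 1 - x); bias-correction constants and the bootstrap itself are omitted.

    ```python
    import numpy as np

    def pre_averaged_returns(prices, kn):
        """Pre-averaged returns over all overlapping blocks of length kn, with
        weight g(x) = min(x, 1 - x) applied to the high-frequency returns in
        each block."""
        r = np.diff(np.asarray(prices, dtype=float))   # high-frequency returns
        j = np.arange(1, kn)
        g = np.minimum(j / kn, 1.0 - j / kn)
        n_blocks = len(r) - kn + 2
        return np.array([g @ r[i:i + kn - 1] for i in range(n_blocks)])
    ```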

  10. On the extremal properties of the average eccentricity

    CERN Document Server

    Ilic, Aleksandar

    2011-01-01

    The eccentricity of a vertex is the maximum distance from it to another vertex and the average eccentricity $ecc (G)$ of a graph $G$ is the mean value of eccentricities of all vertices of $G$. The average eccentricity is deeply connected with a topological descriptor called the eccentric connectivity index, defined as a sum of products of vertex degrees and eccentricities. In this paper we analyze extremal properties of the average eccentricity, introducing two graph transformations that increase or decrease $ecc (G)$. Furthermore, we resolve four conjectures, obtained by the system AutoGraphiX, about the average eccentricity and other graph parameters (the clique number, the Randić index and the independence number), refute one AutoGraphiX conjecture about the average eccentricity and the minimum vertex degree, and correct one AutoGraphiX conjecture about the domination number.
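    A small self-contained sketch of the average eccentricity of an unweighted, connected graph, using breadth-first-search distances.

    ```python
    from collections import deque

    def average_eccentricity(adj):
        """Average eccentricity of a connected graph given as an adjacency dict
        {vertex: iterable of neighbours}."""
        def eccentricity(source):
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        queue.append(w)
            return max(dist.values())

        return sum(eccentricity(v) for v in adj) / len(adj)

    # a path on 4 vertices has eccentricities 3, 2, 2, 3, so ecc(G) = 2.5
    print(average_eccentricity({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))
    ```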

  11. Average cross-responses in correlated financial markets

    Science.gov (United States)

    Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas

    2016-09-01

    There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.
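    A minimal sketch of an average (cross-)response function of the kind studied above, assuming log prices for stock i and trade signs (+1/-1) for stock j on a common time grid; the exact normalization used in the paper may differ.

    ```python
    import numpy as np

    def cross_response(log_price_i, trade_sign_j, max_lag):
        """Average cross-response R_ij(tau) = < (p_i(t+tau) - p_i(t)) * eps_j(t) >,
        i.e. the mean log-price change of stock i a lag tau after a trade in
        stock j with sign eps_j (+1 buyer-initiated, -1 seller-initiated)."""
        p = np.asarray(log_price_i, dtype=float)
        eps = np.asarray(trade_sign_j, dtype=float)
        lags = np.arange(1, max_lag + 1)
        resp = [np.mean((p[tau:] - p[:-tau]) * eps[:-tau]) for tau in lags]
        return lags, np.array(resp)
    ```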

  12. Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.

    Science.gov (United States)

    Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu

    2010-05-01

    Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERD/ERS). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0 so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset. PMID:20172813
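    A schematic reading of the phase-compensated averaging idea in Python: estimate instantaneous phase with the Hilbert transform, rotate each epoch so the trigger channel's phase is zero at the trigger sample, and average the complex signals; this is an illustration, not the authors' exact implementation.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def phase_compensated_average(epochs, trigger_ch, target_ch, t0):
        """Complex-weighted average of `target_ch` across epochs, with each epoch
        rotated so the instantaneous phase of `trigger_ch` at sample `t0` is zero.
        `epochs` has shape (n_epochs, n_channels, n_samples)."""
        analytic_trig = hilbert(epochs[:, trigger_ch, :], axis=-1)
        analytic_targ = hilbert(epochs[:, target_ch, :], axis=-1)
        phase_at_t0 = np.angle(analytic_trig[:, t0])          # one phase per epoch
        rotated = analytic_targ * np.exp(-1j * phase_at_t0)[:, None]
        return rotated.mean(axis=0)   # complex average; take .real for a waveform
    ```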

  13. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of decision tree is bounded from below by the entropy of probability distribution (with a multiplier 1/log_2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.

  14. Inversion of the circular averages transform using the Funk transform

    International Nuclear Information System (INIS)

    The integral of a function defined on the half-plane along the semi-circles centered on the boundary of the half-plane is known as the circular averages transform. Circular averages transform arises in many tomographic image reconstruction problems. In particular, in synthetic aperture radar (SAR) when the transmitting and receiving antennas are colocated, the received signal is modeled as the integral of the ground reflectivity function of the illuminated scene over the intersection of spheres centered at the antenna location and the surface topography. When the surface topography is flat the received signal becomes the circular averages transform of the ground reflectivity function. Thus, SAR image formation requires inversion of the circular averages transform. Apart from SAR, circular averages transform also arises in thermo-acoustic tomography and sonar inverse problems. In this paper, we present a new inversion method for the circular averages transform using the Funk transform. For a function defined on the unit sphere, its Funk transform is given by the integrals of the function along the great circles. We used hyperbolic geometry to establish a diffeomorphism between the circular averages transform, hyperbolic x-ray and Funk transforms. The method is exact and numerically efficient when fast Fourier transforms over the sphere are used. We present numerical simulations to demonstrate the performance of the inversion method. Dedicated to Dennis Healy, a friend of Applied Mathematics and Engineering

  15. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... must meet the minimum driving range requirements established by the Secretary of Transportation (49 CFR... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of...

  16. A space-averaged model of branched structures

    CERN Document Server

    Lopez, Diego; Michelin, Sébastien

    2014-01-01

    Many biological systems and artificial structures are ramified, and present a high geometric complexity. In this work, we propose a space-averaged model of branched systems for conservation laws. From a one-dimensional description of the system, we show that the space-averaged problem is also one-dimensional, represented by characteristic curves, defined as streamlines of the space-averaged branch directions. The geometric complexity is then captured firstly by the characteristic curves, and secondly by an additional forcing term in the equations. This model is then applied to mass balance in a pipe network and momentum balance in a tree under wind loading.

  17. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family

  18. Average-Case Analysis of Algorithms Using Kolmogorov Complexity

    Institute of Scientific and Technical Information of China (English)

    姜涛; 李明

    2000-01-01

    Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms. We have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting sequential or parallel Queuesort or Stacksort.

  19. United States Average Annual Precipitation, 1990-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-2009. Parameter-elevation...

  20. United States Average Annual Precipitation, 1961-1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...

  1. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Full Text Available Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models that are also used in the US Environmental Protection Agency benchmark dose software suite, and combines them into a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are generated. The software fulfills a need for risk assessors, allowing them to go beyond a single model in risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
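
    The MADr-BMD code itself is not reproduced here, but the general mechanics of averaging quantal dose-response models can be sketched in a few lines: fit each candidate model by maximum likelihood, convert AIC differences into Akaike weights, and form the weighted response curve. The data, the two candidate models (logistic and probit) and all names below are illustrative assumptions, not the software's actual API.

```python
# Minimal sketch of AIC-weighted model averaging for quantal dose-response data
# (illustrative data and models, not the MADr-BMD implementation).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

dose = np.array([0.0, 10.0, 50.0, 150.0, 400.0])   # hypothetical dose groups
n    = np.array([50, 50, 50, 50, 50])               # animals per group
k    = np.array([1, 2, 6, 15, 38])                  # responders per group

logistic = lambda a, b, d: 1.0 / (1.0 + np.exp(-(a + b * d)))
probit   = lambda a, b, d: norm.cdf(a + b * d)

def nll(theta, model):
    # binomial negative log-likelihood of a quantal dose-response model
    p = np.clip(model(theta[0], theta[1], dose), 1e-10, 1 - 1e-10)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

grid, aics, curves = np.linspace(0.0, 400.0, 201), [], []
for model in (logistic, probit):
    fit = minimize(nll, x0=[-2.0, 0.01], args=(model,), method="Nelder-Mead")
    aics.append(2.0 * fit.fun + 2 * len(fit.x))      # AIC = 2*NLL + 2*(number of parameters)
    curves.append(model(fit.x[0], fit.x[1], grid))

delta = np.array(aics) - min(aics)
w = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()   # Akaike weights
averaged = np.tensordot(w, np.array(curves), axes=1)    # model-averaged response curve
```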

  2. On the average exponent of elliptic curves modulo $p$

    CERN Document Server

    Freiberg, Tristan

    2012-01-01

    Given an elliptic curve $E$ defined over $\\mathbb{Q}$ and a prime $p$ of good reduction, let $\\tilde{E}(\\mathbb{F}_p)$ denote the group of $\\mathbb{F}_p$-points of the reduction of $E$ modulo $p$, and let $e_p$ denote the exponent of said group. Assuming a certain form of the Generalized Riemann Hypothesis (GRH), we study the average of $e_p$ as $p \\le X$ ranges over primes of good reduction, and find that the average exponent essentially equals $p\\cdot c_{E}$, where the constant $c_{E} > 0$ depends on $E$. For $E$ without complex multiplication (CM), $c_{E}$ can be written as a rational number (depending on $E$) times a universal constant. Without assuming GRH, we can determine the average exponent when $E$ has CM, as well as give an upper bound on the average in the non-CM case.

  3. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
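
    As a rough illustration of the water-balance step described above, the sketch below applies one common form of the Hargreaves equation to hypothetical monthly temperature and radiation grids and subtracts the resulting evaporative demand from precipitation; the coefficients and grids are assumptions for illustration, not the values or processing chain used in the study.

```python
# Sketch of a per-cell monthly water balance using the Hargreaves equation
# for atmospheric evaporative demand (illustrative inputs).
import numpy as np

def hargreaves_et0(tmax, tmin, ra_mm_day, days_in_month):
    """Monthly ET0 (mm) from monthly-average Tmax/Tmin (deg C) and extraterrestrial
    radiation Ra expressed as mm/day of equivalent evaporation."""
    tmean = 0.5 * (tmax + tmin)
    et0_daily = 0.0023 * ra_mm_day * np.sqrt(np.maximum(tmax - tmin, 0.0)) * (tmean + 17.8)
    return np.maximum(et0_daily, 0.0) * days_in_month

# Hypothetical grids (3x3 cells standing in for 1 km cells) for one month
tmax = np.array([[22.0, 24.0, 26.0], [18.0, 20.0, 22.0], [14.0, 16.0, 18.0]])
tmin = np.array([[ 8.0, 10.0, 12.0], [ 4.0,  6.0,  8.0], [ 0.0,  2.0,  4.0]])
ra   = np.full((3, 3), 12.5)            # clear-sky extraterrestrial radiation, mm/day
prcp = np.array([[90.0, 60.0, 30.0], [120.0, 80.0, 40.0], [150.0, 100.0, 50.0]])

et0 = hargreaves_et0(tmax, tmin, ra, days_in_month=31)
water_balance = prcp - et0              # monthly climatic water balance per cell
```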

  4. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  5. Ensemble vs. time averages in financial time series analysis

    Science.gov (United States)

    Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2012-12-01

    Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-averaging approaches will yield new insight into the study of financial markets’ dynamics.
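
    The contrast between the two averaging strategies can be illustrated on synthetic data with a built-in intraday volatility pattern: the ensemble statistic at fixed intraday time recovers the pattern, while a window slid along the concatenated series averages it away. The toy model below is an assumption for illustration, not the paper's diffusion model.

```python
# Ensemble averaging vs. sliding-interval averaging on synthetic intraday data.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_ticks = 250, 390                                   # trading days x minutes per day
t = np.arange(n_ticks)
intraday_vol = 1.0 + 0.8 * np.cos(2 * np.pi * t / n_ticks)   # assumed intraday volatility pattern

# increments: non-stationary within the day, independent across days
x = rng.normal(0.0, 1.0, (n_days, n_ticks)) * intraday_vol

# Ensemble average: statistics at fixed intraday time, averaged over days.
ensemble_var = x.var(axis=0)                                 # recovers intraday_vol**2

# Sliding-interval (time) average: one long series, window moved along it.
series, window = x.ravel(), 390
time_var = np.array([series[i:i + window].var()
                     for i in range(0, series.size - window, 30)])
# time_var is roughly flat: the sliding window washes out the intraday pattern.
```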

  6. United States Average Annual Precipitation, 1995-1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1995-1999. Parameter-elevation...

  7. United States Average Annual Precipitation, 2005-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2005-2009. Parameter-elevation...

  8. United States Average Annual Precipitation, 2000-2004 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2000-2004. Parameter-elevation...

  9. United States Average Annual Precipitation, 1990-1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-1994. Parameter-elevation...

  10. Homogeneous conformal averaging operators on semisimple Lie algebras

    OpenAIRE

    Kolesnikov, Pavel

    2014-01-01

    In this note we show a close relation between the following objects: Classical Yang---Baxter equation (CYBE), conformal algebras (also known as vertex Lie algebras), and averaging operators on Lie algebras. It turns out that the singular part of a solution of CYBE (in the operator form) on a Lie algebra $\\mathfrak g$ determines an averaging operator on the corresponding current conformal algebra $\\mathrm{Cur} \\mathfrak g$. For a finite-dimensional semisimple Lie algebra $\\mathfrak g$, we desc...

  11. Estimating PIGLOG Demands Using Representative versus Average Expenditure

    OpenAIRE

    Hahn, William F.; Taha, Fawzi A.; Davis, Christopher G.

    2013-01-01

    Economists often use aggregate time series data to estimate consumer demand functions. Some of the popular applied demand systems have a PIGLOG form. In the most general PIGLOG cases the “average” demand for a good is a function of the representative consumer expenditure, not the average consumer expenditure. We would need detailed information on each period’s expenditure distribution to calculate the representative expenditure. This information is generally unavailable, so average expenditure...

  12. Average resonance parameters of zirconium and molybdenum nuclei

    International Nuclear Information System (INIS)

    Full sets of the average resonance parameters S0, S1, R0', R1', S1,3/2 for zirconium and molybdenum nuclei with a natural mixture of isotopes are determined by means of a method designed by the authors. The determination is based on an analysis of the average experimental differential cross sections for neutron elastic scattering in the energy range below 440 keV. An analysis of the recommended parameters and of some literature data is also performed.

  13. Average resonance parameters of ruthenium and palladium nuclei

    International Nuclear Information System (INIS)

    Full sets of the average resonance parameters S0, S1, R0', R1', S1,3/2 for ruthenium and palladium nuclei with a natural mixture of isotopes are determined by means of a method designed by the authors. The determination is based on an analysis of the average experimental differential cross sections for neutron elastic scattering in the energy range below 440 keV. An analysis of the recommended parameters and of some literature data is also performed.

  14. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 \\pm 0.013 \\pm 0.022 ps.

  15. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  16. On the convergence time of asynchronous distributed quantized averaging algorithms

    OpenAIRE

    ZHU, MINGHUI; Martinez, Sonia

    2010-01-01

    We propose a class of distributed quantized averaging algorithms on asynchronous communication networks with fixed, switching and random topologies. The implementation of these algorithms is subject to the realistic constraint that the communication rate, the memory capacities of the agents and the computation precision are finite. The focus of this paper is the convergence time of the proposed quantized averaging algorithms. By appealing to random walks on graphs, we derive ...
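
    A minimal sketch of the flavour of such algorithms is given below: agents hold integer-valued states on a fixed graph and asynchronously exchange single quanta so that the sum is conserved and all states end up within one quantization step of the average. This generic quantized-gossip update is an assumption for illustration; it is not the specific class of algorithms analysed in the paper.

```python
# Generic quantized gossip averaging on a fixed graph (illustrative only).
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # fixed communication topology
state = [7, 2, 11, 4]                              # integer initial values
target_avg = sum(state) / len(state)

random.seed(1)
for _ in range(200):
    i, j = random.choice(edges)                    # asynchronous pairwise update
    if state[i] > state[j] + 1:
        state[i] -= 1; state[j] += 1               # move one quantum toward balance
    elif state[j] > state[i] + 1:
        state[j] -= 1; state[i] += 1

# The sum is invariant, so states converge to integers within one quantum of the average.
print(state, target_avg)
```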

  17. Average life of oxygen vacancies of quartz in sediments

    Institute of Scientific and Technical Information of China (English)

    刁少波; 业渝光

    2002-01-01

    The average life of oxygen vacancies of quartz in sediments is estimated using the ESR (electron spin resonance) signals of E′ centers obtained with the thermal activation technique. The experimental results show that the second-order kinetics equation is more applicable to the life estimation than the first-order equation. The average life of oxygen vacancies of quartz from sediments at 4895 to 4908 m depth in the Tarim Basin is about 10^18 a at 27℃.

  18. On the relativistic mass function and averaging in cosmology

    CERN Document Server

    Ostrowski, Jan J; Roukema, Boudewijn F

    2016-01-01

    The general relativistic description of cosmological structure formation is an important challenge from both the theoretical and the numerical points of view. In this paper we present a brief prescription for a general relativistic treatment of structure formation and a resulting mass function on galaxy cluster scales in a highly generic scenario. To obtain this we use an exact scalar averaging scheme together with the relativistic generalization of Zel'dovich's approximation (RZA) that serves as a closure condition for the averaged equations.

  19. Cycle Average Peak Fuel Temperature Prediction Using CAPP/GAMMA+

    Energy Technology Data Exchange (ETDEWEB)

    Tak, Nam-il; Lee, Hyun Chul; Lim, Hong Sik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In order to obtain a cycle-averaged maximum fuel temperature without rigorous effort, a neutronics/thermo-fluid coupled calculation with depletion capability is needed. Recently, a CAPP/GAMMA+ coupled code system has been developed and the initial core of PMR200 was analyzed using the CAPP/GAMMA+ code system. The GAMMA+ code is a system thermo-fluid analysis code and the CAPP code is a neutronics code. General Atomics proposed that the design limit of the fuel temperature under normal operating conditions should be a cycle-averaged maximum value. Nonetheless, the existing work of the Korea Atomic Energy Research Institute (KAERI) only calculated the maximum fuel temperature at a fixed time point, e.g., the beginning of cycle (BOC), because the calculation capability was not ready for a cycle-averaged value. In this work, a cycle average maximum fuel temperature has been calculated using the CAPP/GAMMA+ code system for the equilibrium core of PMR200. The CAPP/GAMMA+ coupled calculation was carried out for the equilibrium core of PMR200 from BOC to the end of cycle (EOC) to obtain a cycle average peak fuel temperature. The peak fuel temperature was predicted to be 1372 °C near the middle of cycle (MOC). However, the cycle average peak fuel temperature was calculated as 1181 °C, which is below the design target of 1250 °C.

  20. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents, the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
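
    The pixel-wise conversion from transmitted intensity to water content described above follows Beer-Lambert's law; a minimal sketch is given below, with the attenuation coefficient, the images and the normalization step all being illustrative assumptions rather than the actual calibration used at the NCNR BT-2 beam line.

```python
# Sketch of pixel-wise Beer-Lambert conversion and saturation normalization
# for neutron radiographs (illustrative values, no beam-hardening correction).
import numpy as np

mu_w = 0.35      # assumed effective attenuation coefficient of water (1/cm)

def water_thickness(I, I_dry):
    """Equivalent water thickness per pixel from Beer-Lambert: I = I_dry * exp(-mu_w * x)."""
    return -np.log(np.clip(I / I_dry, 1e-6, None)) / mu_w

# Hypothetical flat-field-corrected radiographs at one matric potential
I_dry = np.full((4, 4), 1000.0)                   # dry (reference) image
I_sat = I_dry * np.exp(-mu_w * 0.8)               # fully saturated column
I_h   = I_dry * np.exp(-mu_w * 0.3)               # partially drained column

relative_saturation = water_thickness(I_h, I_dry) / water_thickness(I_sat, I_dry)
avg_saturation = relative_saturation.mean()       # one average point on the retention curve
```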

  1. Simple Moving Average: A Method of Reporting Evolving Complication Rates.

    Science.gov (United States)

    Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J

    2016-09-01

    Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics.2016; 39(5):e869-e876.].
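
    The reporting method itself is straightforward to reproduce; the sketch below computes a simple moving average of a binary complication indicator with the 75-case lag mentioned above, on a simulated case series (the data are assumptions, not the authors' series).

```python
# Simple moving average of a complication indicator over consecutive cases.
import numpy as np

rng = np.random.default_rng(42)
# Simulated 297 consecutive cases whose underlying complication risk declines over time
true_risk = np.linspace(0.20, 0.08, 297)
complication = (rng.random(297) < true_risk).astype(float)   # 1 = complication occurred

lag = 75
sma = np.convolve(complication, np.ones(lag) / lag, mode="valid")
# sma[i] is the complication rate over cases i .. i+lag-1; the last value reflects
# the most recent experience rather than the pooled historical rate.
print(f"overall rate {complication.mean():.3f}, latest moving average {sma[-1]:.3f}")
```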

  2. Basics of averaging of the Maxwell equations for bulk materials

    CERN Document Server

    Chipouline, A; Tretyakov, S

    2012-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from the microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is rarely discussed properly in university courses and the respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective of the type of material studied) which have to be satisfied. Any homogenization model has to be consistent with these basic principles. If a particular model is not consistent with the basic principles, it cannot be accepted as credible. Another goal of this paper is to establish the averaging procedure for bulk MM, which is rather close to the case of compound materials but should include magnetic response of the inclusions an...

  3. Averaged universe confronted to cosmological observations: a fully covariant approach

    CERN Document Server

    Wijenayake, Tharake; Ishak, Mustapha

    2016-01-01

    One of the outstanding problems in general relativistic cosmology is that of averaging. That is, how the lumpy universe that we observe at small scales averages out to a smooth Friedmann-Lemaitre-Robertson-Walker (FLRW) model. The root of the problem is that averaging does not commute with the Einstein equations that govern the dynamics of the model. This leads to the well-known question of backreaction in cosmology. In this work, we approach the problem using the covariant framework of Macroscopic Gravity (MG). We use its cosmological solution with a flat FLRW macroscopic background where the result of averaging cosmic inhomogeneities has been encapsulated into a backreaction density parameter denoted $\\Omega_\\mathcal{A}$. We constrain this averaged universe using available cosmological data sets of expansion and growth including, for the first time, a full CMB analysis from Planck temperature anisotropy and polarization data, the supernovae data from Union 2.1, the galaxy power spectrum from WiggleZ, the...

  4. Motion artifacts reduction from PPG using cyclic moving average filter.

    Science.gov (United States)

    Lee, Junyeon

    2014-01-01

    The photoplethysmogram (PPG) is an extremely useful medical diagnostic tool. However, PPG signals are highly susceptible to motion artifacts. In this paper, we propose a cyclic moving average filter that exploits the cycle-to-cycle similarity of the photoplethysmogram. The method separates the continuous PPG signal into individual cycles, resamples the cycles so that they contain the same number of samples, arranges them in two dimensions, and then averages each sample position across the cycles collected so far. In this way, motion artifacts can be suppressed without damaging the PPG signal. PMID:24704660
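
    A minimal sketch of this kind of cyclic averaging is shown below: the signal is split into beats, each beat is resampled to a common length, and the beats are averaged sample-by-sample, so an artifact confined to a few beats is suppressed. Beat detection and the exact update rule are simplified assumptions, not the paper's implementation.

```python
# Cyclic averaging of a PPG-like signal (illustrative, simplified).
import numpy as np

def cyclic_moving_average(ppg, beat_starts, n_samples=100):
    """Average the beats delimited by beat_starts after resampling each to
    n_samples points; returns the template (averaged) cycle."""
    cycles = []
    for a, b in zip(beat_starts[:-1], beat_starts[1:]):
        beat = ppg[a:b]
        grid = np.linspace(0, len(beat) - 1, n_samples)      # common time base
        cycles.append(np.interp(grid, np.arange(len(beat)), beat))
    return np.mean(cycles, axis=0)

# Synthetic PPG: 8 beats of 80 samples with noise and one motion-corrupted beat
rng = np.random.default_rng(0)
beat = np.sin(np.linspace(0, np.pi, 80)) ** 2
ppg = np.concatenate([beat + 0.05 * rng.normal(size=80) for _ in range(8)])
ppg[240:320] += 0.8 * rng.normal(size=80)                    # motion artifact in beat 4
starts = list(range(0, 8 * 80 + 1, 80))

template = cyclic_moving_average(ppg, starts)                # artifact largely averaged out
```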

  5. Modified Adaptive Weighted Averaging Filtering Algorithm for Noisy Image Sequences

    Institute of Scientific and Technical Information of China (English)

    LI Weifeng; YU Daoyin; CHEN Xiaodong

    2007-01-01

    In order to avoid the influence of noise variance on the filtering performance, a modified adaptive weighted averaging (MAWA) filtering algorithm is proposed for noisy image sequences. Based upon adaptive weighted averaging of pixel values in consecutive frames, this algorithm achieves the filtering goal by assigning smaller weights to pixels whose estimated motion trajectory is inappropriate for noise. It only utilizes the intensity of pixels to suppress noise and accordingly is independent of the noise variance. To evaluate the performance of the proposed filtering algorithm, its mean square error and percentage of preserved edge points were compared with those of traditional adaptive weighted averaging and non-adaptive mean filtering algorithms under different noise variances. Relevant results show that the MAWA filtering algorithm can preserve image structures and edges under motion after attenuating noise, and thus may be used in image sequence filtering.
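
    The sketch below shows a generic adaptive weighted averaging step of this kind: co-located pixels in neighbouring frames receive weights that shrink with their squared intensity difference from the current frame, so frames that do not follow the motion contribute little. The weight law, constants and data are illustrative assumptions; the MAWA modification itself is not reproduced.

```python
# Generic adaptive weighted averaging (AWA) temporal filter (illustrative).
import numpy as np

def awa_temporal_filter(frames, a=0.5, eps=5.0):
    """frames: (T, H, W) noisy image sequence; returns a denoised middle frame."""
    frames = np.asarray(frames, dtype=float)
    ref = frames[len(frames) // 2]                     # frame to be denoised
    diff2 = np.maximum((frames - ref) ** 2, eps ** 2)  # guard against zero differences
    w = 1.0 / (1.0 + a * diff2)                        # adaptive weight per pixel and frame
    return (w * frames).sum(axis=0) / w.sum(axis=0)

# Tiny synthetic example: 5 frames of a static scene plus Gaussian noise
rng = np.random.default_rng(3)
clean = np.tile(np.linspace(0, 255, 16), (16, 1))
frames = clean + rng.normal(0, 10, (5, 16, 16))
denoised = awa_temporal_filter(frames)
```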

  6. How do children form impressions of persons? They average.

    Science.gov (United States)

    Hendrick, C; Franz, C M; Hoving, K L

    1975-05-01

    The experiment reported was concerned with impression formation in children. Twelve subjects in each of Grades K, 2, 4, and 6 rated several sets of single trait words and trait pairs. The response scale consisted of a graded series of seven schematic faces which ranged from a deep frown to a happy smile. A basic question was whether children use an orderly integration rule in forming impressions of trait pairs. The answer was clear. At all grade levels a simple averaging model adequately accounted for pair ratings. A second question concerned how children resolve semantic inconsistencies. Responses to two highly inconsistent trait pairs suggested that subjects responded in the same fashion, essentially averaging the two traits in a pair. Overall, the data strongly supported an averaging model, and indicated that impression formation of children is similar to previous results obtained from adults.

  7. Genuine non-self-averaging and ultraslow convergence in gelation.

    Science.gov (United States)

    Cho, Y S; Mazza, M G; Kahng, B; Nagler, J

    2016-08-01

    In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation. PMID:27627355

  8. Despeckling vs averaging of retinal UHROCT tomograms: advantages and limitations

    Science.gov (United States)

    Eichel, Justin A.; Lee, Donghyun D.; Wong, Alexander; Fieguth, Paul W.; Clausi, David A.; Bizheva, Kostadinka K.

    2011-03-01

    Imaging time can be reduced using despeckled tomograms, which have similar image metrics to those obtained by averaging several low speed tomograms or many high speed tomograms. Quantitative analysis was used to compare the performance of two speckle denoising approaches, algorithmic despeckling and frame averaging, as applied to retinal OCT images. Human retinal tomograms were acquired from healthy subjects with a research grade 1060nm spectral domain UHROCT system with 5μm axial resolution in the retina. Single cross-sectional retinal tomograms were processed with a novel speckle denoising algorithm and compared with frame averaged retinal images acquired at the same location. Image quality metrics such as the image SNR and contrast-to-noise ratio (CNR) were evaluated for both cases.

  9. How do children form impressions of persons? They average.

    Science.gov (United States)

    Hendrick, C; Franz, C M; Hoving, K L

    1975-05-01

    The experiment reported was concerned with impression formation in children. Twelve subjects in each of Grades K, 2, 4, and 6 rated several sets of single trait words and trait pairs. The response scale consisted of a graded series of seven schematic faces which ranged from a deep frown to a happy smile. A basic question was whether children use an orderly integration rule in forming impressions of trait pairs. The answer was clear. At all grade levels a simple averaging model adequately accounted for pair ratings. A second question concerned how children resolve semantic inconsistencies. Responses to two highly inconsistent trait pairs suggested that subjects responded in the same fashion, essentially averaging the two traits in a pair. Overall, the data strongly supported an averaging model, and indicated that impression formation of children is similar to previous results obtained from adults. PMID:21287081

  10. Time-averaged photon-counting digital holography.

    Science.gov (United States)

    Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario

    2015-09-15

    Time-averaged holography has been using photo-emulsions (early stage) and digital photo-sensitive arrays (later) to record holograms. We extend the recording possibilities by utilizing a photon-counting camera, and we further investigate the possibility of obtaining accurate hologram reconstructions in rather severe experimental conditions. To achieve this, we derived an expression for fringe function comprising the main parameters affecting the hologram recording. Influence of the main parameters, namely the exposure time and the number of averaged holograms, is analyzed by simulations and experiments. It is demonstrated that taking long exposure times can be avoided by averaging over many holograms with the exposure times much shorter than the vibration cycle. Conditions in which signal-to-noise ratio in reconstructed holograms can be substantially increased are provided. PMID:26371907

  11. Optimum orientation versus orientation averaging description of cluster radioactivity

    CERN Document Server

    Seif, W M; Refaie, A I; Amer, L H

    2016-01-01

    Background: The deformation of the nuclei involved in the cluster decay of heavy nuclei seriously affects their half-lives against the decay. Purpose: We investigate the description of the different decay stages in both the optimum orientation and the orientation-averaged pictures of the cluster decay process. Method: We consider the decays of $^{232,233,234}$U and $^{236,238}$Pu isotopes. The quantum mechanical knocking frequency and penetration probability based on the Wentzel-Kramers-Brillouin approximation are used to find the decay width. Results: We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. The difference between the two values increases with decreasing mass number of the emitted cluster. Correspondingly, the extracted preformation probability based on the averaged decay width increases with the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformati...

  12. Gauge-Invariant Average of Einstein Equations for finite Volumes

    CERN Document Server

    Smirnov, Juri

    2014-01-01

    For the study of cosmological backreaction an averaging procedure is required. In this work a covariant and gauge invariant averaging formalism for finite volumes will be developed. This averaging will be applied to the scalar parts of Einstein's equations. For this purpose dust as a physical laboratory will be coupled to the gravitating system. The goal is to study the deviation from the homogeneous universe and the impact of this deviation on the dynamics of our universe. Fields of physical observers are included in the studied system and used to construct a reference frame to perform the averaging without a formal gauge fixing. The derived equations resolve the question whether backreaction is gauge dependent.

  13. The Role of the Harmonic Vector Average in Motion Integration

    Directory of Open Access Journals (Sweden)

    Alan eJohnston

    2013-10-01

    Full Text Available The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The harmonic vector average, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the intersection of constraints direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the harmonic vector average.
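
    A compact way to state the HVA computation described above is: invert each local velocity through the unit circle (v → v/|v|²), take the ordinary vector mean, and invert the result back. The sketch below does exactly this for local normal velocities generated by a known global motion and compares it with the plain vector average; the numbers are illustrative assumptions.

```python
# Harmonic vector average vs. vector average of local normal velocities.
import numpy as np

def harmonic_vector_average(velocities):
    v = np.asarray(velocities, dtype=float)            # shape (N, 2)
    inv = v / np.sum(v ** 2, axis=1, keepdims=True)    # v -> v / |v|^2
    m = inv.mean(axis=0)                               # ordinary mean in inverted space
    return m / np.sum(m ** 2)                          # invert the mean back

# Local normal velocities generated by a global motion of (3, 0) deg/s:
global_v = np.array([3.0, 0.0])
angles = np.deg2rad([-60, -30, 0, 30, 60])             # unbiased orientation sample
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
local_v = (normals @ global_v)[:, None] * normals      # component (normal) velocities

print(harmonic_vector_average(local_v))                # ~ [3, 0], the true global velocity
print(local_v.mean(axis=0))                            # vector average underestimates the speed
```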

  14. Optimum orientation versus orientation averaging description of cluster radioactivity

    Science.gov (United States)

    Seif, W. M.; Ismail, M.; Refaie, A. I.; Amer, Laila H.

    2016-07-01

    While the optimum-orientation concept is frequently used in studies on cluster decays involving deformed nuclei, the orientation-averaging concept is used in most alpha decay studies. We investigate the different decay stages in both the optimum-orientation and the orientation-averaging pictures of the cluster decay process. For decays of $^{232,233,234}$U and $^{236,238}$Pu isotopes, the quantum knocking frequency and penetration probability based on the Wentzel–Kramers–Brillouin approximation are used to find the decay width. The obtained decay width and the experimental half-life are employed to estimate the cluster preformation probability. We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. Correspondingly, the extracted preformation probability based on the averaged decay width increases with the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformation probabilities estimated by the two considered schemes are in more or less comparable agreement with the Blendowske–Walliser (BW) formula based on the α preformation probability $S_{\alpha}^{\mathrm{ave}}$ obtained from the orientation-averaging scheme. All the results, including the optimum-orientation ones, deviate substantially from the BW law based on $S_{\alpha}^{\mathrm{opt}}$ that was estimated from the optimum-orientation scheme. To account for the nuclear deformations, it is more relevant to calculate the decay width by averaging over the different possible orientations of the participating deformed nuclei, rather than considering the corresponding non-compact optimum orientation.

  15. Light shift averaging in paraffin-coated alkali vapor cells

    CERN Document Server

    Zhivun, Elena; Sudyka, Julia; Pustelny, Szymon; Patton, Brian; Budker, Dmitry

    2015-01-01

    Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin coherence time in paraffin-coated cells leads to spatial averaging of the light shifts over the entire cell volume. This renders the averaged light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. These results and the underlying mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times.

  16. Method of Best Representation for Averages in Data Evaluation

    International Nuclear Information System (INIS)

    A new method for averaging data for which incomplete information is available is presented. For example, this method would be applicable during data evaluation where only the final outcomes of the experiments and the associated uncertainties are known. This method is based on using the measurements to construct a mean probability density for the data set. This “expected value method” (EVM) is designed to treat asymmetric uncertainties and has distinct advantages over other methods of averaging, including giving a more realistic uncertainty, being robust to outliers and consistent under various representations of the same quantity

  17. Modification of averaging process in GR: Case study flat LTB

    CERN Document Server

    Khosravi, Shahram; Mansouri, Reza

    2007-01-01

    We study the volume averaging of inhomogeneous metrics within GR and discuss its shortcomings, such as gauge dependence, singular behavior as a result of caustics, and causality violations. To remedy these shortcomings, we suggest some modifications to this method. As a case study we focus on the inhomogeneous model of structured FRW based on a flat LTB metric. The effect of averaging is then studied in terms of an effective backreaction fluid. This backreaction fluid turns out to behave like a dark matter component, instead of dark energy as claimed in the literature.

  18. Average Fidelity of Teleportation in Quantum Noise Channel

    Institute of Scientific and Technical Information of China (English)

    HAO Xiang; ZHANG Rong; ZHU Shi-Qun

    2006-01-01

    The effects of amplitude damping in quantum noise channels on the average fidelity of quantum teleportation are analyzed in the Bloch sphere representation for every stage of teleportation. When the quantum channels are varied from maximally entangled states to non-maximally entangled states, it is found that the effects of noise channels on the fidelity are nearly equivalent to each other for strong quantum noise. The degree of damage to the fidelity of non-maximally entangled channels is smaller than that of maximally entangled channels. An average fidelity larger than 2/3 may serve as one indirect indication of how large the unavoidable quantum noise is.

  19. Comparison of peak and average nitrogen dioxide concentrations inside homes

    Science.gov (United States)

    Franklin, Peter; Runnion, Tina; Farrar, Drew; Dingle, Peter

    Most health studies measuring indoor nitrogen dioxide (NO₂) concentrations have utilised long-term passive monitors. However, this method may not provide adequate information on short-term peaks, which may be important when examining health effects of this pollutant. The aims of this study were to investigate the relationship between short-term peak (peak) and long-term average (average) NO₂ concentrations in kitchens and the effect of gas cookers on this relationship. Both peak and average NO₂ levels were measured simultaneously in the kitchens of 53 homes using passive sampling techniques. All homes were non-smoking and sampling was conducted in the summer months. Geometric mean (95% confidence interval (CI)) average NO₂ concentrations for all homes were 16.2 μg m⁻³ (12.7-20.6 μg m⁻³). There was no difference between homes with and without gas cookers (p=0.40). Geometric mean (95%CI) peak NO₂ concentrations were 45.3 μg m⁻³ (36.0-57.1 μg m⁻³). Unlike average concentrations, peak concentrations were significantly higher in homes with gas cookers (64.0 μg m⁻³, 48.5-82.0 μg m⁻³) compared to non-gas homes (25.1 μg m⁻³, 18.3-35.5 μg m⁻³) (p<0.001). There was only a moderate correlation between the peak and average concentrations measured in all homes (r=0.39, p=0.004). However, when the data were analysed separately based on the presence of gas cookers, the correlation between peak and average NO₂ concentrations was improved in non-gas homes (r=0.59, p=0.005) but was not significant in homes with gas cookers (r=0.19, p=0.33). These results suggest that average NO₂ concentrations do not adequately identify exposure to short-term peaks of NO₂ that may be caused by gas cookers. The lack of peak exposure data in many epidemiological studies may explain some of the inconsistent findings.

  20. Average Lorentz self-force from electric field lines

    International Nuclear Information System (INIS)

    We generalize the derivation of the electromagnetic fields of a charged particle moving with constant acceleration (Singal 2011 Am. J. Phys. 79 1036) to a variable acceleration (piecewise constant) over a small finite time interval, using Coulomb's law, relativistic transformations of electromagnetic fields and Thomson's construction (Thomson 1904 Electricity and Matter (New York: Charles Scribners) ch 3). We derive the average Lorentz self-force for a charged particle in arbitrary non-relativistic motion by averaging the fields at the retarded time. (paper)

  1. HAT AVERAGE MULTIRESOLUTION WITH ERROR CONTROL IN 2-D

    Institute of Scientific and Technical Information of China (English)

    Sergio Amat

    2004-01-01

    Multiresolution representations of data are a powerful tool in data compression. For a proper adaptation to singularities, it is crucial to develop nonlinear methods which are not based on tensor products. The hat average framework permits the development of adapted schemes for all types of singularities. In contrast with the wavelet framework, these representations cannot be considered as a change of basis, and the stability theory requires different considerations. In this paper, nonseparable two-dimensional hat average multiresolution processing algorithms that ensure stability are introduced. Explicit error bounds are presented.

  2. Generalized Sampling Series Approximation of Random Signals from Local Averages

    Institute of Scientific and Technical Information of China (English)

    SONG Zhanjie; HE Gaiyun; YE Peixin; YANG Deyun

    2007-01-01

    Signals are often of random character: since they cannot bear any information if they are predictable for all time t, they are usually modelled as stationary random processes. On the other hand, because of the inertia of the measurement apparatus, the sampled values obtained in practice may not be the precise values of the signal X(t) at the times tk (k ∈ Z), but only local averages of X(t) near tk. In this paper, it is shown that a wide-sense (or weak-sense) stationary stochastic process can be approximated by generalized sampling series with local average samples.

  3. Quantum state discrimination using the minimum average number of copies

    CERN Document Server

    Slussarenko, Sergei; Li, Jun-Gang; Campbell, Nicholas; Wiseman, Howard M; Pryde, Geoff J

    2016-01-01

    In the task of discriminating between nonorthogonal quantum states from multiple copies, the key parameters are the error probability and the resources (number of copies) used. Previous studies have considered the task of minimizing the average error probability for fixed resources. Here we consider minimizing the average resources for a fixed admissible error probability. We derive a detection scheme optimized for the latter task, and experimentally test it, along with schemes previously considered for the former task. We show that, for our new task, our new scheme outperforms all previously considered schemes.

  4. THEORETICAL CALCULATION OF THE RELATIVISTIC SUBCONFIGURATION-AVERAGED TRANSITION ENERGIES

    Institute of Scientific and Technical Information of China (English)

    张继彦; 杨向东; 杨国洪; 张保汉; 雷安乐; 刘宏杰; 李军

    2001-01-01

    A method for calculating the average energies of relativistic subconfigurations in highly ionized heavy atoms has been developed in the framework of the multiconfigurational Dirac-Fock theory. The method is then used to calculate the average transition energies of the spin-orbit-split 3d-4p transition of Co-like tungsten, the 3d-5f transition of Cu-like tantalum, and the 3d-5f transitions of Cu-like and Zn-like gold samples. The calculated results are in good agreement with those calculated with the relativistic parametric potential method and also with the experimental results.

  5. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which is usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists of local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  6. Bayes model averaging of cyclical decompositions in economic time series

    NARCIS (Netherlands)

    R.H. Kleijn (Richard); H.K. van Dijk (Herman)

    2003-01-01

    textabstractA flexible decomposition of a time series into stochastic cycles under possible non-stationarity is specified, providing both a useful data analysis tool and a very wide model class. A Bayes procedure using Markov Chain Monte Carlo (MCMC) is introduced with a model averaging approach whi

  7. Group Averaging and Refined Algebraic Quantization: Where are we now?

    OpenAIRE

    Marolf, D.

    2000-01-01

    Refined Algebraic Quantization and Group Averaging are powerful methods for quantizing constrained systems. They give constructive algorithms for generating observables and the physical inner product. This work outlines the current status of these ideas with an eye toward quantum gravity. The main goal is to provide a description of outstanding problems and possible research topics in the field.

  8. The effect of cosmic inhomogeneities on the average cosmological dynamics

    CERN Document Server

    Singh, T P

    2011-01-01

    It is generally assumed that on sufficiently large scales the Universe is well-described as a homogeneous, isotropic FRW cosmology with a dark energy. Does the formation of nonlinear cosmic inhomogeneities produce a significant effect on the average large-scale FLRW dynamics? As an answer, we suggest that if the length scale at which homogeneity sets in is much smaller than the Hubble length scale, the back-reaction due to averaging over inhomogeneities is negligible. This result is supported by more than one approach to the study of averaging in cosmology. Even if no single approach is sufficiently rigorous and compelling, they are all in agreement that the effect of averaging in the real Universe is small. On the other hand, it is perhaps fair to say that there is no definitive observational evidence yet that there indeed is a homogeneity scale which is much smaller than the Hubble scale, or for that matter, if today's Universe is indeed homogeneous on large scales. If the Copernican principle can be observatio...

  9. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-01-01

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580

  10. Light-cone averages in a swiss-cheese universe

    CERN Document Server

    Marra, Valerio; Matarrese, Sabino

    2007-01-01

    We analyze a toy swiss-cheese cosmological model to study the averaging problem. In our model, the cheese is the EdS model and the holes are constructed from an LTB solution. We study the propagation of photons in the swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities. This is because of spherical symmetry. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the concordance model. Although the sole source in the swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we ...

  11. Grade Point Average and Changes in (Great) Grade Expectations.

    Science.gov (United States)

    Wendorf, Craig A.

    2002-01-01

    Examines student grade expectations throughout a semester in which students offered their expectations three times during the course: (1) within the first week; (2) midway through the semester; and (3) the week before the final examination. Finds that their expectations decreased stating that their cumulative grade point average was related to the…

  12. Crystallographic extraction and averaging of data from small image areas

    NARCIS (Netherlands)

    Perkins, GA; Downing, KH; Glaeser, RM

    1995-01-01

    The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that Fouri

  13. Modelling spatial heteroskedasticity by volatility modulated moving averages

    OpenAIRE

    Nguyen, Michele; Veraart, Almut E. D.

    2016-01-01

    Spatial heteroskedasticity refers to stochastically changing variances and covariances in space. Such features have been observed in, for example, air pollution and vegetation data. We study how volatility modulated moving averages can model this by developing theory, simulation and statistical inference methods. For illustration, we also apply our procedure to sea surface temperature anomaly data from the International Research Institute for Climate and Society.

  14. Empirical Bayes in-season prediction of baseball batting averages

    OpenAIRE

    Jiang, Wenhua; Zhang, Cun-Hui

    2010-01-01

    The performance of a number of empirical Bayes methods are examined for the in-season prediction of batting averages with the 2005 Major League baseball data. Among the methodologies considered are new general empirical Bayes estimators in homoscedastic and heteroscedastic partial linear models.

  15. HIGH AVERAGE POWER UV FREE ELECTRON LASER EXPERIMENTS AT JLAB

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle D; Tennant, Christopher

    2012-07-01

    Having produced 14 kW of average power at ~2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  16. Average Error Bounds of Trigonometric Approximation on Periodic Wiener Spaces

    Institute of Scientific and Technical Information of China (English)

    Cheng Yong WANG; Rui Min WANG

    2013-01-01

    In this paper, we study the approximation of the identity operator and the convolution integral operator Bm by Fourier partial sum operators, Fejér operators, Vallée-Poussin operators, Cesàro operators and Abel mean operators, respectively, on the periodic Wiener space (C1(R), W°) and obtain the average error estimations.

  17. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... computing income tax liability. The regulations reflect changes made by the American Jobs Creation Act of 2004 and the Tax Extenders and Alternative Minimum Tax Relief Act of 2008. The regulations provide...) relating to the averaging of farm and fishing income in computing tax liability. A notice of...

  18. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
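
    A minimal sketch of the averaging principle described in this record, using synthetic data rather than the physiological recordings of the authors: averaging N repetitions of a repeatable signal buried in Gaussian noise improves the signal-to-noise ratio roughly as the square root of N.

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 500)
      signal = np.sin(2 * np.pi * 5 * t)           # repeatable deterministic component (synthetic)

      def snr_db_after_averaging(n_trials, noise_sd=2.0):
          trials = signal + rng.normal(0.0, noise_sd, size=(n_trials, t.size))
          avg = trials.mean(axis=0)                # synchronous average over trials
          noise_power = np.mean((avg - signal) ** 2)
          return 10.0 * np.log10(np.mean(signal ** 2) / noise_power)

      for n in (1, 4, 16, 64):                     # each 4x in trials gains roughly 6 dB
          print(n, round(snr_db_after_averaging(n), 1), "dB")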

  19. Discrete Averaging Relations for Micro to Macro Transition

    Science.gov (United States)

    Liu, Chenchen; Reina, Celia

    2016-05-01

    The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE$^2$). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.

  20. Precalculating the average luminance of road surface in public lighting.

    NARCIS (Netherlands)

    Schreuder, D.A.

    1967-01-01

    The influence of the reflection properties of the road surface on the aspect of the street lighting and the importance of the use of luminance has been shown. A method is described with which the value to be expected of the average road surface luminance can be easily found.

  1. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
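
    The following sketch shows one concrete model-averaging scheme of the kind compared in this record, namely Akaike (AIC) weights for two nested linear regressions on simulated data. The data-generating process and the choice of AIC weights are illustrative assumptions, not the paper's setup.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100
      x1, x2 = rng.normal(size=n), rng.normal(size=n)
      y = 1.0 + 0.5 * x1 + rng.normal(size=n)      # assumed toy data: x2 is actually irrelevant

      def fit_ols(X, y):
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          rss = float(((y - X @ beta) ** 2).sum())
          aic = n * np.log(rss / n) + 2 * X.shape[1]
          return beta, aic

      ones = np.ones(n)
      models = {"M1": np.column_stack([ones, x1]),
                "M2": np.column_stack([ones, x1, x2])}
      fits = {name: fit_ols(X, y) for name, X in models.items()}

      # Akaike weights: one common choice of model-averaging weights.
      aics = np.array([aic for _, aic in fits.values()])
      w = np.exp(-0.5 * (aics - aics.min()))
      w /= w.sum()

      # Model-averaged point prediction at x1 = x2 = 1.
      preds = np.array([fits["M1"][0] @ [1.0, 1.0], fits["M2"][0] @ [1.0, 1.0, 1.0]])
      print(dict(zip(models, w.round(3))), "averaged prediction:", round(float(w @ preds), 3))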

  2. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of input space has the relatively highest output where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.

  3. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron capture nuclides, requires a fine averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs

  4. Average Number of Coherent Modes for Pulse Random Fields

    CERN Document Server

    Lazaruk, A M; Lazaruk, Alexander M.; Karelin, Nikolay V.

    1997-01-01

    Some consequences of spatio-temporal symmetry for the deterministic decomposition of complex light fields into factorized components are considered. This makes it possible to reveal interrelations between the spatial and temporal coherence properties of the wave. An estimate of the average number of decomposition terms is obtained for the case of a statistical ensemble of light pulses.

  5. 40 CFR 63.503 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.503 Emissions averaging... limited to twenty. This number may be increased by up to five additional points if pollution prevention... pollution prevention measures are used to control five or more of the emission points included in...

  6. 40 CFR 63.1332 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group IV Polymers and Resins § 63.1332 Emissions averaging... if pollution prevention measures are used to control five or more of the emission points included in... additional emission points if pollution prevention measures are used to control five or more of the...

  7. Characteristics of phase-averaged equations for modulated wave groups

    NARCIS (Netherlands)

    Klopman, G.; Petit, H.A.H.; Battjes, J.A.

    2000-01-01

    The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993). Th

  8. Multiscale correlations and conditional averages in numerical turbulence

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef; Reeh, Achim

    2000-01-01

    The equations of motion for the nth order velocity differences raise the interest in correlation functions containing both large and small scales simultaneously. We consider the scaling of such objects and also their conditional average representation with emphasis on the question of whether they be

  9. Average position in quantum walks with a U(2) coin

    Institute of Scientific and Technical Information of China (English)

    Li Min; Zhang Yong-Sheng; Guo Guang-Can

    2013-01-01

    We investigated discrete-time quantum walks with an arbitrary unitary coin. Here we discover that the average position <x> = <x>max sin(α + γ), while the initial state is (1/√2)(|0L> + i|0R>). We verify the result, and obtain some symmetry properties of quantum walks with a U(2) coin with |0L> and |0R> as the initial state.

  10. Relaxing monotonicity in the identification of local average treatment effects

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    In heterogeneous treatment effect models with endogeneity, the identification of the local average treatment effect (LATE) typically relies on an instrument that satisfies two conditions: (i) joint independence of the potential post-instrument variables and the instrument and (ii) monotonicity...

  11. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...

  12. Truncated cross-sectional average length of life

    DEFF Research Database (Denmark)

    Canudas-Romo, Vladimir; Guillot, Michel

    2015-01-01

    of developed countries. The truncated cross-sectional average length of life (TCAL) is a new measure that captures historical information about all cohorts present at a given moment and is not limited to countries with complete cohort mortality data. The value of TCAL depends on the rates used to complete...

  13. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available The definition of wages in Poland did not include the value of social security contributions before 1998. The changed definition produces a higher level of reported wages, but was expected not to influence take-home pay. Nevertheless, the trend of average wages, after a short period, has returned to its previous line. Such an effect is explained in terms of money illusion.

  14. Fuel optimum low-thrust elliptic transfer using numerical averaging

    Science.gov (United States)

    Tarzi, Zahi; Speyer, Jason; Wirz, Richard

    2013-05-01

    Low-thrust electric propulsion is increasingly being used for spacecraft missions primarily due to its high propellant efficiency. As a result, a simple and fast method for low-thrust trajectory optimization is of great value for preliminary mission planning. However, few low-thrust trajectory tools are appropriate for preliminary mission design studies. The method presented in this paper provides quick and accurate solutions for a wide range of transfers by using numerical orbital averaging to improve solution convergence and include orbital perturbations. Thus, preliminary trajectories can be obtained for transfers which involve many revolutions about the primary body. This method considers minimum fuel transfers using first-order averaging to obtain the fuel optimum rates of change of the equinoctial orbital elements in terms of each other and the Lagrange multipliers. Constraints on thrust and power, as well as minimum periapsis, are implemented and the equations are averaged numerically using Gaussian quadrature. The use of numerical averaging allows for more complex orbital perturbations to be added in the future without great difficulty. The effects of zonal gravity harmonics, solar radiation pressure, and thrust limitations due to shadowing are included in this study. The solution to a transfer which minimizes the square of the thrust magnitude is used as a preliminary guess for the minimum fuel problem, thus allowing for faster convergence to a wider range of problems. Results from this model are shown to provide a reduction in propellant mass required over previous minimum fuel solutions.

  15. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
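
    A minimal sketch of the Box-Jenkins estimation and forecasting steps using the statsmodels ARIMA implementation; the simulated GPA-like series and the (1,1,1) order are assumptions made only for illustration, not the study's data or identified model.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(2)
      # Simulated sequential GPA-like observations (assumed data, for illustration only).
      y = 3.0 + np.cumsum(rng.normal(0.0, 0.05, size=60))

      # Estimation step: fit a candidate ARIMA(p, d, q) order chosen at identification.
      res = ARIMA(y, order=(1, 1, 1)).fit()
      print(res.params)                            # AR, MA and innovation-variance estimates

      # Diagnosis/forecasting step: forecast the next three observations.
      print(res.forecast(steps=3))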

  16. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    International Nuclear Information System (INIS)

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ūp), the average (Ū), the effective (Ueff) or the maximum peak (Up) tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average (Ū) or the average peak (Ūp) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak (kPPV,kVp) and the average (kPPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated - according to the proposed method - PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ūp and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.

  17. Construction of average adult Japanese voxel phantoms for dose assessment

    International Nuclear Information System (INIS)

    The International Commission on Radiological Protection (ICRP) adopted the adult reference voxel phantoms based on the physiological and anatomical reference data of Caucasians in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, the dose coefficients for internal dose and dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in the radiation protection fields. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are some cases in which the anatomical characteristics such as body sizes, organ masses and postures of subjects influence the organ doses in dose assessment for medical treatments and radiation accidents. Therefore, it was necessary to use human phantoms with the average anatomical characteristics of Japanese. The authors constructed the averaged adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms. They were modified in the following three aspects: (1) the heights and weights were adjusted to agree with the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues, which were newly added for evaluation of the effective dose in ICRP Publication 103, were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the averaged adult Japanese male and female voxel phantoms developed as reference phantoms of adult Japanese. (author)

  18. Pulsar average waveforms and hollow cone beam models

    Science.gov (United States)

    Backer, D. C.

    1975-01-01

    An analysis of pulsar average waveforms at radio frequencies from 40 MHz to 15 GHz is presented. The analysis is based on the hypothesis that the observer sees one cut of a hollow-cone beam pattern and that stationary properties of the emission vary over the cone. The distributions of apparent cone widths for different observed forms of the average pulse profiles (single, double/unresolved, double/resolved, triple and multiple) are in modest agreement with a model of a circular hollow-cone beam with random observer-spin axis orientation, a random cone axis-spin axis alignment, and a small range of physical hollow-cone parameters for all objects.

  19. Detrending moving average algorithm: Frequency response and scaling performances

    Science.gov (United States)

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed.
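
    A simplified first-order version of the centered DMA procedure discussed in this record, applied to a synthetic test signal: integrate the signal, subtract a centered moving average, and read the scaling exponent from the log-log slope of the fluctuation function. The window choices and the white-noise test signal are assumptions; the higher-order variants analyzed above are not reproduced.

      import numpy as np

      def dma_fluctuation(x, window):
          """Centered first-order DMA fluctuation for one (odd) window size."""
          y = np.cumsum(x - np.mean(x))                  # integrated profile
          kernel = np.ones(window) / window
          trend = np.convolve(y, kernel, mode="same")    # centered moving average
          half = window // 2
          resid = y[half:-half] - trend[half:-half]      # discard window edges
          return np.sqrt(np.mean(resid ** 2))

      rng = np.random.default_rng(3)
      x = rng.normal(size=2 ** 14)                       # synthetic white noise: expect exponent ~0.5
      windows = np.array([9, 17, 33, 65, 129, 257])
      f = np.array([dma_fluctuation(x, w) for w in windows])
      slope = np.polyfit(np.log(windows), np.log(f), 1)[0]
      print("estimated scaling exponent:", round(float(slope), 2))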

  20. Evolutionary Prisoner's Dilemma Game Based on Pursuing Higher Average Payoff

    Institute of Scientific and Technical Information of China (English)

    LI Yu-Jian; WANG Bing-Hong; YANG Han-Xin; LING Xiang; CHEN Xiao-Jie; JIANG Rui

    2009-01-01

    We investigate the prisoner's dilemma game based on a new rule: players will change their current strategies to the opposite strategies with some probability if their neighbours' average payoffs are higher than theirs. Compared with the cases on regular lattices (RL) and the Newman-Watts small world network (NW), cooperation can be best enhanced on the scale-free Barabasi-Albert network (BA). It is found that cooperators are dispersed on the RL network, which is different from previously reported results that cooperators will form large clusters to resist the invasion of defectors. Cooperative behaviours on the BA network are discussed in detail. It is found that large-degree individuals have a lower cooperation level and gain higher average payoffs than small-degree individuals. In addition, we find that small-degree individuals change strategies more frequently than do large-degree individuals.
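
    A rough sketch of the update rule described in this record, on a square lattice: a player switches to the opposite strategy with a probability that grows with how much its neighbours' average payoff exceeds its own. The payoff matrix, the normalisation of the switching probability, and the lattice (rather than scale-free) topology are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      L, b = 20, 1.2                                   # lattice size, temptation parameter (assumed)
      strat = rng.integers(0, 2, size=(L, L))          # 1 = cooperate, 0 = defect

      def payoffs(s):
          """Accumulated payoff against the four neighbours (weak PD: R = 1, T = b, S = P = 0)."""
          total = np.zeros_like(s, dtype=float)
          for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              nb = np.roll(s, shift, axis=(0, 1))
              total += np.where(s == 1, nb * 1.0, nb * b)
          return total

      def neighbour_average(a):
          return sum(np.roll(a, sh, axis=(0, 1)) for sh in ((1, 0), (-1, 0), (0, 1), (0, -1))) / 4.0

      for _ in range(200):
          p = payoffs(strat)
          gap = neighbour_average(p) - p
          # Assumed switching rule: flip to the opposite strategy with a probability that
          # grows with how much the neighbours' average payoff exceeds one's own.
          switch = rng.random((L, L)) < np.clip(gap / (4.0 * b), 0.0, 1.0)
          strat = np.where(switch, 1 - strat, strat)

      print("cooperator fraction:", round(float(strat.mean()), 3))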

  1. SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN

    Directory of Open Access Journals (Sweden)

    VIGH MELINDA

    2015-03-01

    Full Text Available The Râul Negru hydrographic basin is a well individualised physical-geographical unit inside the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main river and on two important tributaries. The data base for the seasonal flow analysis contains the discharges from 1950-2012. The results of the data analysis show that there are significant space-time differences between the multiannual seasonal averages. Some interesting conclusions can be obtained by comparing abundant and scarce periods. The flow analysis was made using seasonal charts Q = f(T). The similarities come from the basin's relative homogeneity, and the differences from the flow's evolution and trend. Flow variation is analysed using the variation coefficient. In some cases, significant differences in Cv values appear. Also, trends in Cv values are analysed according to the basins' average altitude.

  2. Disk-averaged Spectra & light-curves of Earth

    CERN Document Server

    Tinetti, G; Crisp, D; Fong, W; Kiang, N; Fishbein, E; Velusamy, T; Bosc, E; Turnbull, M

    2005-01-01

    We are using computer models to explore the observational sensitivity to changes in atmospheric and surface properties, and the detectability of biosignatures, in the globally averaged spectra and light-curves of the Earth. Using AIRS (Atmospheric Infrared Sounder) data, as input for atmospheric and surface properties, we have generated spatially resolved high-resolution synthetic spectra using the SMART radiative transfer model, for a variety of conditions, from the UV to the far-IR (beyond the range of current Earth-based satellite data). We have then averaged over the visible disk for a number of different viewing geometries to quantify the sensitivity to surface types and atmospheric features as a function of viewing geometry, and spatial and spectral resolution. These results have been processed with an instrument simulator to improve our understanding of the detectable characteristics of Earth-like planets as viewed by the first generation extrasolar terrestrial planet detection and characterization mis...

  3. The stability of a zonally averaged thermohaline circulation model

    CERN Document Server

    Schmidt, G A

    1995-01-01

    A combination of analytical and numerical techniques is used to efficiently determine the qualitative and quantitative behaviour of a one-basin zonally averaged thermohaline circulation ocean model. In contrast to earlier studies which use time stepping to find the steady solutions, the steady state equations are first solved directly to obtain the multiple equilibria under identical mixed boundary conditions. This approach is based on the differentiability of the governing equations and especially the convection scheme. A linear stability analysis is then performed, in which the normal modes and corresponding eigenvalues are found for the various equilibrium states. Resonant periodic solutions superimposed on these states are predicted for various types of forcing. The results are used to gain insight into the solutions obtained by Mysak, Stocker and Huang in a previous numerical study in which the eddy diffusivities were varied in a randomly forced one-basin zonally averaged model. Resonant stable oscillat...

  4. Detrending Moving Average Algorithm: Frequency Response and Scaling Performances

    CERN Document Server

    Carbone, Anna

    2016-01-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) either over time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent and finite scale range behavior will be discussed.

  5. High average power supercontinuum generation in a fluoroindate fiber

    Science.gov (United States)

    Swiderski, J.; Théberge, F.; Michalska, M.; Mathieu, P.; Vincent, D.

    2014-01-01

    We report the first demonstration of Watt-level supercontinuum (SC) generation in a step-index fluoroindate (InF3) fiber pumped by a 1.55 μm fiber master-oscillator power amplifier (MOPA) system. The SC is generated in two steps: first ˜1 ns amplified laser diode pulses are broken up into soliton-like sub-pulses leading to initial spectrum extension and then launched into a fluoride fiber to obtain further spectral broadening. The pump MOPA system can operate at a changeable repetition frequency delivering up to 19.2 W of average power at 2 MHz. When the 8-m long InF3 fiber was pumped with 7.54 W at 420 kHz, output average SC power as high as 2.09 W with 27.8% of slope efficiency was recorded. The achieved SC spectrum spread from 1 to 3.05 μm.

  6. Estimation of Otoacoustic Emision Signals by Using Synchroneous Averaging Method

    Directory of Open Access Journals (Sweden)

    Linas Sankauskas

    2011-08-01

    Full Text Available The study presents the results of an investigation of the synchronous averaging method and its application to the estimation of impulse evoked otoacoustic emission (IEOAE) signals. The method was analyzed using synthetic and real signals. Synthetic signals were modeled as mixtures of a deterministic component with noise realizations. Two types of noise were used: normal (Gaussian) and transient-impulse dominated (Laplacian). Signal-to-noise ratio was used as the signal quality measure after processing. In order to account for the varying amplitude of the deterministic component in the realizations, a weighted averaging method was investigated. Results show that the performance of the synchronous averaging method is very similar for both types of noise, Gaussian and Laplacian. The weighted averaging method helps to cope with a varying deterministic component or noise level in the case of non-homogeneous ensembles, as is the case for the IEOAE signal. Article in Lithuanian.

  7. Ampere Average Current Photoinjector and Energy Recovery Linac

    CERN Document Server

    Ben-Zvi, Ilan; Calaga, R; Cameron, P; Chang, X; Gassner, D M; Hahn, H; Hershcovitch, A; Hseuh, H C; Johnson, P; Kayran, D; Kewisch, J; Lambiase, R F; Litvinenko, Vladimir N; McIntyre, G; Nicoletti, A; Rank, J; Roser, T; Scaduto, J; Smith, K; Srinivasan-Rao, T; Wu, K C; Zaltsman, A; Zhao, Y

    2004-01-01

    High-power Free-Electron Lasers were made possible by advances in superconducting linac operated in an energy-recovery mode, as demonstrated by the spectacular success of the Jefferson Laboratory IR-Demo. In order to get to much higher power levels, say a fraction of a megawatt average power, many technological barriers are yet to be broken. BNL’s Collider-Accelerator Department is pursuing some of these technologies for a different application, that of electron cooling of high-energy hadron beams. I will describe work on CW, high-current and high-brightness electron beams. This will include a description of a superconducting, laser-photocathode RF gun employing a new secondary-emission multiplying cathode and an accelerator cavity, both capable of producing of the order of one ampere average current.

  8. STRONG APPROXIMATION FOR MOVING AVERAGE PROCESSES UNDER DEPENDENCE ASSUMPTIONS

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Let {X_t, t ≥ 1} be a moving average process defined by X_t = Σ_{k=0}^∞ a_k ξ_{t-k}, where {a_k, k ≥ 0} is a sequence of real numbers and {ξ_t, -∞ < t < ∞} is a doubly infinite sequence of strictly stationary dependent random variables. Under conditions on {a_k, k ≥ 0} which entail that {X_t, t ≥ 1} is either a long memory process or a linear process, the strong approximation of {X_t, t ≥ 1} to a Gaussian process is studied. Finally, the results are applied to obtain the strong approximation of a long memory process to a fractional Brownian motion and the laws of the iterated logarithm for moving average processes.
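
    A small simulation sketch of the moving average process defined above, X_t = Σ_{k≥0} a_k ξ_{t-k}, with slowly decaying coefficients so that the series exhibits long memory. The truncation of the sum, the coefficient decay, and the use of i.i.d. Gaussian innovations in place of a dependent sequence are all assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      T, K = 5000, 2000                            # series length, truncation of the sum (assumed)
      d = 0.3                                      # memory parameter (assumed)
      k = np.arange(1, K + 1)
      a = np.concatenate(([1.0], k ** (d - 1.0)))  # slowly decaying a_k gives long memory

      # i.i.d. Gaussian innovations stand in for the stationary dependent sequence {xi_t}.
      xi = rng.normal(size=T + K)

      # X_t = sum_{k=0}^{K} a_k * xi_{t-k}, evaluated for t = 1..T by convolution.
      X = np.convolve(xi, a, mode="valid")

      def acf(x, lag):
          x = x - x.mean()
          return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

      # The sample autocorrelation decays slowly, as expected for a long-memory process.
      print([round(acf(X, lag), 3) for lag in (1, 10, 100, 500)])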

  9. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    We have completed a database of average-power, laser-induced damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm2 for some metals to > 46 J/cm2 for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  10. Yearly-averaged daily usefulness efficiency of heliostat surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Elsayed, M.M.; Habeebuallah, M.B.; Al-Rabghi, O.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia))

    1992-08-01

    An analytical expression for estimating the instantaneous usefulness efficiency of a heliostat surface is obtained. A systematic procedure is then introduced to calculate the usefulness efficiency even when overlapping of blocking and shadowing on a heliostat surface exists. For possible estimation of the reflected energy from a given field, the local yearly-averaged daily usefulness efficiency is calculated. This efficiency is found to depend on the site latitude angle, the radial distance from the tower measured in tower heights, the heliostat position azimuth angle and the radial spacing between heliostats. Charts for the local yearly-averaged daily usefulness efficiency are presented for φ = 0°, 15°, 30°, and 45° N. These charts can be used in calculating the reflected radiation from a given cell. Utilization of these charts is demonstrated.

  11. Detrending moving average algorithm: Frequency response and scaling performances.

    Science.gov (United States)

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed. PMID:27415389

  12. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given. PMID:23455291

  13. Gaze-direction-based MEG averaging during audiovisual speech perception

    Directory of Open Access Journals (Sweden)

    Lotta Hirvenkari

    2010-03-01

    Full Text Available To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG signals and subject’s gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent and /aka/ (incongruent in synchrony, repeated once every 3 s. Subjects (N = 10 were free to decide which face they viewed, and responses were averaged to two categories according to the gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m’ was a fifth smaller to incongruent than congruent stimuli. The results demonstrate the feasibility of realistic viewing conditions with gaze-based averaging of MEG signals.

  14. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and is not dependent on the Earth magnetic model; but it is dependent on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. Also, the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
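
    For context, a minimal sketch of the classical b-dot detumbling law whose damping behaviour this record exploits: the commanded magnetic dipole opposes the measured rate of change of the body-frame field. The gain, torquer limit, and sample values are assumptions; this is not the paper's field-averaging technique itself.

      import numpy as np

      def bdot_dipole_command(b_now, b_prev, dt, k=5.0e4, m_max=0.2):
          """Classical b-dot law: command a magnetic dipole opposing the measured field rate.

          b_now, b_prev are body-frame magnetometer samples in tesla; the gain k and the
          torquer limit m_max (A*m^2) are assumed values for illustration.
          """
          b_dot = (np.asarray(b_now, float) - np.asarray(b_prev, float)) / dt
          return np.clip(-k * b_dot, -m_max, m_max)

      # Two consecutive made-up magnetometer samples, 1 s apart.
      print(bdot_dipole_command([22e-6, -3e-6, 41e-6], [21e-6, -2e-6, 40e-6], dt=1.0))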

  15. Averaged null energy condition in Loop Quantum Cosmology

    CERN Document Server

    Li, Li-Fang

    2008-01-01

    Wormholes and time machines are very interesting objects in general relativity. However, they need exotic matter, which is impossible at the classical level, to support them. But if we introduce the quantum effects of gravity into the stress-energy tensor, these peculiar objects can be constructed self-consistently. Fortunately, loop quantum cosmology (LQC) has the potential to serve as a bridge connecting the classical theory and quantum gravity. Therefore it provides a simple way to study quantum effects in the semiclassical case. As is well known, loop quantum cosmology is very successful in dealing with the behavior of the early universe. In the early stage, if the quantum effect is taken into consideration, inflation is natural because of the violation of every kind of local energy condition. Similar to the inflationary universe, the violation of the averaged null energy condition is a necessary condition for traversable wormholes. In this paper, we investigate the averaged null energy condition in LQC in ...

  16. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  17. Refined similarity hypothesis using 3D local averages

    CERN Document Server

    Iyer, Kartik P; Yeung, P K

    2015-01-01

    The refined similarity hypotheses of Kolmogorov, regarded as an important ingredient of intermittent turbulence, have been tested in the past using one-dimensional data and plausible surrogates of energy dissipation. We employ data from direct numerical simulations, at the microscale Reynolds number $R_\lambda \sim 650$, on a periodic box of $4096^3$ grid points to test the hypotheses using 3D averages. In particular, we study the small-scale properties of the stochastic variable $V = \Delta u(r)/(r \epsilon_r)^{1/3}$, where $\Delta u(r)$ is the longitudinal velocity increment and $\epsilon_r$ is the dissipation rate averaged over a three-dimensional volume of linear size $r$. We show that $V$ is universal in the inertial subrange. In the dissipation range, the statistics of $V$ are shown to depend solely on a local Reynolds number.
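
    A toy sketch of forming the stochastic variable V = Δu(r)/(r ε_r)^{1/3} with three-dimensional local averages of the dissipation, here on synthetic random fields rather than DNS data; the field construction and the periodic (wrap) averaging are assumptions.

      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(6)
      N, r = 64, 8                                  # grid size and averaging scale (assumed)

      u = rng.normal(size=(N, N, N))                # synthetic stand-in for the velocity field
      eps = rng.lognormal(mean=0.0, sigma=1.0, size=(N, N, N))  # synthetic dissipation field

      # Dissipation averaged over a cube of linear size r centred on each grid point.
      eps_r = uniform_filter(eps, size=r, mode="wrap")

      # Longitudinal velocity increment over separation r along the first axis.
      du = np.roll(u, -r, axis=0) - u

      V = du / (r * eps_r) ** (1.0 / 3.0)
      print("standard deviation of V:", round(float(V.std()), 3))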

  18. Fastest Distributed Consensus Averaging Problem on Chain of Rhombus Networks

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Distributed consensus has appeared as one of the most important and primary problems in the context of distributed computation, and it has received renewed interest in the field of sensor networks (due to recent advances in wireless communications), where solving the fastest distributed consensus averaging problem over networks with different topologies is one of the primary problems in this area. Here an analytical solution for the problem of the fastest distributed consensus averaging algorithm over chains of rhombus networks is provided, where the solution procedure consists of stratification of the associated connectivity graph of the network and semidefinite programming, in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. Also the characteristic polynomial, together with its roots corresponding to the eigenvalues of the weight matrix, including the SLEM of the network, is determined inductively. Moreover t...
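
    A generic sketch of the distributed consensus averaging iteration underlying this record: repeated multiplication by a symmetric, stochastic weight matrix drives all nodes to the average, and the second largest eigenvalue modulus (SLEM) sets the asymptotic rate. The small example graph and the Metropolis-Hastings weights are assumptions; they are not the optimal weights derived for chains of rhombus networks.

      import numpy as np

      # Small example graph (assumed), given as an adjacency matrix.
      A = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 1],
                    [1, 1, 0, 1],
                    [0, 1, 1, 0]], dtype=float)
      n = len(A)
      deg = A.sum(axis=1)

      # Metropolis-Hastings weights: a standard, though not necessarily fastest, choice.
      W = np.zeros((n, n))
      for i in range(n):
          for j in range(n):
              if A[i, j]:
                  W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
      np.fill_diagonal(W, 1.0 - W.sum(axis=1))

      # The SLEM (second largest eigenvalue modulus) sets the asymptotic convergence rate.
      slem = np.sort(np.abs(np.linalg.eigvals(W)))[-2]
      print("SLEM:", round(float(slem), 4))

      x = np.array([4.0, 8.0, 15.0, 16.0])
      for _ in range(50):
          x = W @ x                                 # each node re-averages with its neighbours
      print("consensus state:", x.round(4), " true average:", np.mean([4.0, 8.0, 15.0, 16.0]))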

  19. FUNDAMENTALS OF TRANSMISSION FLUCTUATION SPECTROMETRY WITH VARIABLE SPATIAL AVERAGING

    Institute of Scientific and Technical Information of China (English)

    Jianqi Shen; Ulrich Riebel; Marcus Breitenstein; Udo Kr(a)uter

    2003-01-01

    The transmission signal of radiation in a suspension of particles, measured with high spatial and temporal resolution, shows significant fluctuations, which are related to the physical properties of the particles and to the process of spatial and temporal averaging. Exploiting this connection, it is possible to calculate the particle size distribution (PSD) and the particle concentration. This paper provides an approach to transmission fluctuation spectrometry (TFS) with variable spatial averaging. The transmission fluctuations are expressed in terms of the expectancy of the transmission square (ETS) and are obtained as a spectrum, which is a function of the variable beam diameter. The reversal point and the depth of the spectrum contain the information on particle size and particle concentration, respectively.

  20. A database of age-appropriate average MRI templates.

    Science.gov (United States)

    Richards, John E; Sanchez, Carmen; Phillips-Meek, Michelle; Xie, Wanze

    2016-01-01

    This article summarizes a life-span neurodevelopmental MRI database. The study of neurostructural development or neurofunctional development has been hampered by the lack of age-appropriate MRI reference volumes. This causes misspecification of segmented data, irregular registrations, and the absence of appropriate stereotaxic volumes. We have created the "Neurodevelopmental MRI Database" that provides age-specific reference data from 2 weeks through 89 years of age. The data are presented in fine-grained ages (e.g., 3-month intervals through 1 year; 6-month intervals through 19.5 years; 5-year intervals from 20 through 89 years). The base component of the database at each age is an age-specific average MRI template. The average MRI templates are accompanied by segmented partial volume estimates for segmenting priors, and a common stereotaxic atlas for infant, pediatric, and adult participants. The database is available online (http://jerlab.psych.sc.edu/NeurodevelopmentalMRIDatabase/).

  1. Inferring average generation via division-linked labeling.

    Science.gov (United States)

    Weber, Tom S; Perié, Leïla; Duffy, Ken R

    2016-08-01

    For proliferating cells subject to both division and death, how can one estimate the average generation number of the living population without continuous observation or a division-diluting dye? In this paper we provide a method for cell systems such that at each division there is an unlikely, heritable one-way label change that has no impact other than to serve as a distinguishing marker. If the probability of label change per cell generation can be determined and the proportion of labeled cells at a given time point can be measured, we establish that the average generation number of living cells can be estimated. Crucially, the estimator does not depend on knowledge of the statistics of cell cycle, death rates or total cell numbers. We explore the estimator's features through comparison with physiologically parameterized stochastic simulations and extrapolations from published data, using it to suggest new experimental designs.
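
    A rough simulation sketch of the idea in this record: with a rare, heritable, one-way label change occurring at division with probability p, the labelled fraction grows approximately as p times the mean generation number, so labelled fraction / p gives a first-order estimate. The division and death probabilities and the first-order form of the estimator are assumptions; the paper's estimator may differ in detail.

      import numpy as np

      rng = np.random.default_rng(7)
      p = 0.01                      # per-division label-change probability (assumed known)

      def simulate_population(n_founders=300, n_rounds=12, divide_prob=0.6, death_prob=0.1):
          gen = np.zeros(n_founders, dtype=int)
          lab = np.zeros(n_founders, dtype=bool)
          for _ in range(n_rounds):
              alive = rng.random(gen.size) > death_prob
              gen, lab = gen[alive], lab[alive]
              divides = rng.random(gen.size) < divide_prob
              d_gen = np.repeat(gen[divides] + 1, 2)            # two daughters, next generation
              d_lab = np.repeat(lab[divides], 2)
              d_lab |= rng.random(d_lab.size) < p               # rare one-way label change at division
              gen = np.concatenate([gen[~divides], d_gen])
              lab = np.concatenate([lab[~divides], d_lab])
          return gen, lab

      gen, lab = simulate_population()
      print("true mean generation:", round(gen.mean(), 2),
            "  estimate (labelled fraction / p):", round(lab.mean() / p, 2))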

  2. Bayesian Model Averaging in the Instrumental Variable Regression Model

    OpenAIRE

    Gary Koop; Robert Leon Gonzalez; Rodney Strachan

    2011-01-01

    This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very fl...

  3. Crime pays if you are just an average hacker

    OpenAIRE

    Shim, Woohyun; Allodi, Luca; Massacci, Fabio

    2013-01-01

    This study investigates the effects of incentive and deterrence strategies that might turn a security researcher into a malware writer, or vice versa. By using a simple game theoretic model, we illustrate how hackers maximize their expected utility. Furthermore, our simulation models show how hackers' malicious activities are affected by changes in strategies employed by defenders. Our results indicate that, despite the manipulation of strategies, average-skilled hackers have incentives to part...

  4. Path Dependent Option Pricing: the path integral partial averaging method

    OpenAIRE

    Andrew Matacz

    2000-01-01

    In this paper I develop a new computational method for pricing path dependent options. Using the path integral representation of the option price, I show that in general it is possible to perform analytically a partial averaging over the underlying risk-neutral diffusion process. This result greatly eases the computational burden placed on the subsequent numerical evaluation. For short-medium term options it leads to a general approximation formula that only requires the evaluation of a one d...

  5. Marginal versus average beta of equity under corporate taxation

    OpenAIRE

    Lund, Diderik

    2009-01-01

    Even for fully equity-financed firms there may be substantial effects of taxation on the after-tax cost of capital. Among the few studies of these effects, even fewer identify all effects correctly. When marginal investment is taxed together with inframarginal, marginal beta differs from average if there are investment-related deductions like depreciation. To calculate asset betas, one should not only "unlever" observed equity betas, but "untax" and "unaverage" them. Risky tax claims are value...

  6. Average resonance parameters of germanium and selenium nuclei

    International Nuclear Information System (INIS)

    Full sets of average resonance parameters S0, S1, R0', R1', and S1,3/2 for germanium and selenium nuclei with natural isotope content are determined. The parameters are obtained from an analysis of experimental neutron elastic scattering cross sections in the energy region up to 440 keV, using a method developed by the authors. An analysis of the recommended parameters and some literature data is also carried out.

  7. Average resonance parameters of tellurium and neodymium nuclei

    International Nuclear Information System (INIS)

    Complete sets of average resonance parameters S0, S1, R0', R1', and S1,3/2 for tellurium and neodymium nuclei with natural isotope contents have been determined by analyzing the experimental differential cross-sections of neutron elastic scattering in the energy range below 440 keV. The data obtained, the recommended parameter values, and some literature data have been analyzed.

  8. Finding large average submatrices in high dimensional data

    OpenAIRE

    Shabalin, Andrey A.; Weigman, Victor J.; Perou, Charles M.; Nobel, Andrew B

    2009-01-01

    The search for sample-variable associations is an important problem in the exploratory analysis of high dimensional data. Biclustering methods search for sample-variable associations in the form of distinguished submatrices of the data matrix. (The rows and columns of a submatrix need not be contiguous.) In this paper we propose and evaluate a statistically motivated biclustering procedure (LAS) that finds large average submatrices within a given real-valued data matrix. ...

  9. Average saturated fatty acids daily intake in Sarajevo University students

    Directory of Open Access Journals (Sweden)

    Amra Catovic

    2014-12-01

    Full Text Available Introduction: There are wide variations in diet patterns among population subgroups. Macronutrient content analyses have become necessary in dietary assessment. The purpose of this study is to analyze dietary saturated fatty acid intake in students, detect differences between men and women, and compare the results with nutritional status and nutrition recommendations. Methods: A cross-sectional survey of 60 graduate students was performed during the spring of 2013 at the Sarajevo University. A food-frequency questionnaire was administered over seven days. Body mass index was used to assess the students' nutritional status. Statistical analyses were performed using the Statistical Package for Social Sciences software (version 13.0). Results: The mean age of males was 26.00±2.72 years, and of females 27.01±3.93 years. The prevalence of overweight was more common among males compared to females (55.56% vs. 6.06%). The median of average total fat intake for men and women was 76.32 (70.15; 114.41) and 69.41 (63.23; 86.94) g/d, respectively. The median of average saturated fatty acid intake for men and women was 28.86 (22.41; 36.42) and 24.29 (20.53; 31.60) g/d, respectively. There was a significant difference in the average intake of total fat between genders (Mann-Whitney U test: p=0.04). Macronutrient data were related to the requirements of a reference person. Total fat intake was beyond the recommended limits in 37.04% of males and 54.55% of females. Saturated fatty acid intake was beyond the upper limit in 55.56% of males and 51.52% of females. Conclusion: The diet pattern of the average student is not in accordance with the recommendations for the saturated fatty acid contribution as a percentage of energy.

  10. Characterizing individual painDETECT symptoms by average pain severity

    Science.gov (United States)

    Sadosky, Alesia; Koduru, Vijaya; Bienen, E Jay; Cappelleri, Joseph C

    2016-01-01

    Background painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain-severity levels can serve as proxies to determine treatment effects, thus indicating probabilities for more favorable outcomes on pain symptoms.

  11. Average dynamics of a finite set of coupled phase oscillators

    Energy Technology Data Exchange (ETDEWEB)

    Dima, Germán C., E-mail: gdima@df.uba.ar; Mindlin, Gabriel B. [Laboratorio de Sistemas Dinámicos, IFIBA y Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellón 1, Ciudad Universitaria, Buenos Aires (Argentina)

    2014-06-15

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compared their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of its infinitely large surrogate.

  12. The inhomogeneous Universe : its average expansion and cosmic variance

    OpenAIRE

    Wiegand, Alexander

    2012-01-01

    Despite its global homogeneity and isotropy, the local matter distribution in the late Universe is manifestly inhomogeneous. Understanding the various effects resulting from these inhomogeneities is one of the most important tasks of modern cosmology. In this thesis, we investigate two aspects of the influence of local structure: firstly, to what extent do local structures modify the average expansion of spatial regions with a given size, and secondly, how strongly does the presence of struct...

  13. Average Contrastive Divergence for Training Restricted Boltzmann Machines

    OpenAIRE

    Xuesi Ma; Xiaojie Wang

    2016-01-01

    This paper studies the contrastive divergence (CD) learning algorithm and proposes a new algorithm for training restricted Boltzmann machines (RBMs). We show that CD is a biased estimator of the log-likelihood gradient and we analyze the bias. Meanwhile, we propose a new learning algorithm called average contrastive divergence (ACD) for training RBMs. It is an improved CD algorithm, and it is different from the traditional CD algorithm. Finally, we obtain some experimental resul...
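
    For context, a minimal CD-1 update for a Bernoulli restricted Boltzmann machine, the baseline that this record's average contrastive divergence (ACD) modifies. The ACD variant itself is not reproduced here, and the toy data, sizes, and learning rate are assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      n_vis, n_hid, lr = 6, 4, 0.1                      # toy sizes and learning rate (assumed)
      W = 0.01 * rng.normal(size=(n_vis, n_hid))
      b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def cd1_update(v0):
          """One CD-1 step for a Bernoulli RBM on a batch v0 of shape (batch, n_vis)."""
          global W, b_v, b_h
          ph0 = sigmoid(v0 @ W + b_h)                   # positive phase
          h0 = (rng.random(ph0.shape) < ph0).astype(float)
          pv1 = sigmoid(h0 @ W.T + b_v)                 # one Gibbs step back to the visibles
          v1 = (rng.random(pv1.shape) < pv1).astype(float)
          ph1 = sigmoid(v1 @ W + b_h)                   # negative phase
          m = v0.shape[0]
          W += lr * (v0.T @ ph0 - v1.T @ ph1) / m       # biased estimate of the gradient
          b_v += lr * (v0 - v1).mean(axis=0)
          b_h += lr * (ph0 - ph1).mean(axis=0)

      data = (rng.random((32, n_vis)) < 0.3).astype(float)   # toy binary training data (assumed)
      for _ in range(100):
          cd1_update(data)
      print("weight norm after training:", round(float(np.linalg.norm(W)), 3))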

  14. 3-Paths in Graphs with Bounded Average Degree

    Directory of Open Access Journals (Sweden)

    Jendrol′ Stanislav

    2016-05-01

    Full Text Available In this paper we study the existence of unavoidable paths on three vertices in sparse graphs. A path uvw on three vertices u, v, and w is of type (i, j, k) if the degree of u (respectively v, w) is at most i (respectively j, k). We prove that every graph with minimum degree at least 2 and average degree strictly less than m contains a path of one of the types

  15. Snapshots of Anderson localization beyond the ensemble average

    Science.gov (United States)

    El-Dardiry, Ramy G. S.; Faez, Sanli; Lagendijk, Ad

    2012-09-01

    We study (1+1)D transverse localization of electromagnetic radiation at microwave frequencies directly by two-dimensional spatial scans. Since the longitudinal direction can be mapped onto time, our experiments provide unique snapshots of the buildup of localized waves. The evolution of the wave functions is compared with semianalytical calculations. Studies beyond ensemble averages reveal counterintuitive surprises. Oscillations of the wave functions are observed in space and explained in terms of a beating between the eigenstates.

  16. Model characteristics of average skill boxers’ competition functioning

    OpenAIRE

    Martsiv V.P.

    2015-01-01

    Purpose: to analyze the competition functioning of average-skill boxers. Material: 28 fights of student boxers were analyzed. The following coefficients were determined: effectiveness of punches and reliability of defense. The fights were conducted in the format of 3 rounds of 3 minutes each. Results: model characteristics of boxers at the stage of specialized basic training were worked out. Correlations between indicators of specialized and general exercises were determined. ...

  17. State-space average modelling of 18-pulse diode rectifier

    OpenAIRE

    Griffo, Antonio; Wang, J B; Howe, D.

    2008-01-01

    The paper presents an averaged-value model of the direct symmetric topology of 18-pulse autotransformer AC-DC rectifiers. The model captures the key features of the dynamic characteristics of the rectifiers, while being time invariant and computationally efficient. The developed models, validated by comparison of the resultant transient and steady-state behaviours with those obtained from detailed simulations, can therefore be used for stability assessment of electric power syste...

  18. Targeted Cancer Screening in Average-Risk Individuals.

    Science.gov (United States)

    Marcus, Pamela M; Freedman, Andrew N; Khoury, Muin J

    2015-11-01

    Targeted cancer screening refers to use of disease risk information to identify those most likely to benefit from screening. Researchers have begun to explore the possibility of refining screening regimens for average-risk individuals using genetic and non-genetic risk factors and previous screening experience. Average-risk individuals are those not known to be at substantially elevated risk, including those without known inherited predisposition, without comorbidities known to increase cancer risk, and without previous diagnosis of cancer or pre-cancer. In this paper, we describe the goals of targeted cancer screening in average-risk individuals, present factors on which cancer screening has been targeted, discuss inclusion of targeting in screening guidelines issued by major U.S. professional organizations, and present evidence to support or question such inclusion. Screening guidelines for average-risk individuals currently target age; smoking (lung cancer only); and, in some instances, race; family history of cancer; and previous negative screening history (cervical cancer only). No guidelines include common genomic polymorphisms. RCTs suggest that targeting certain ages and smoking histories reduces disease-specific cancer mortality, although some guidelines extend ages and smoking histories based on statistical modeling. Guidelines that are based on modestly elevated disease risk typically have either no or little evidence of an ability to affect a mortality benefit. In time, targeted cancer screening is likely to include genetic factors and past screening experience as well as non-genetic factors other than age, smoking, and race, but it is of utmost importance that clinical implementation be evidence-based.

  19. Averaging cross section data so we can fit it

    International Nuclear Information System (INIS)

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE is a Hauser-Feshbach theory based nuclear reaction code and requires cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).

  20. Averaging cross section data so we can fit it

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D. [Brookhaven National Lab. (BNL), Upton, NY (United States). NNDC

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE is a Hauser-Feshbach theory based nuclear reaction code and requires cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say above 500 keV).
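
    To make the Lorentzian smoothing step concrete, the sketch below averages a fluctuating cross section with a Lorentzian kernel before fitting. It is not the EMPIRE workflow itself; the energy grid, half-width, and data are illustrative assumptions.

        import numpy as np

        def lorentzian_average(energies, sigma, gamma=10.0):
            """Smooth a pointwise cross section with a Lorentzian profile.

            energies : 1-D array of energies (keV), need not be uniform
            sigma    : cross section values at those energies (barns)
            gamma    : half-width of the Lorentzian (keV); illustrative value
            """
            smoothed = np.empty_like(sigma)
            for i, e0 in enumerate(energies):
                # Lorentzian weights centred on e0
                w = (gamma / np.pi) / ((energies - e0) ** 2 + gamma ** 2)
                smoothed[i] = np.trapz(w * sigma, energies) / np.trapz(w, energies)
            return smoothed

        # Illustrative fluctuating "cross section" above 500 keV
        e = np.linspace(500.0, 2000.0, 2000)
        xs = 1.0 + 0.3 * np.sin(e / 7.0) + 0.1 * np.random.default_rng(0).normal(size=e.size)
        xs_avg = lorentzian_average(e, xs, gamma=25.0)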

  1. The role of the harmonic vector average in motion integration

    OpenAIRE

    Alan eJohnston; Peter eScarfe

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition t...

  2. Cortical evoked potentials recorded from the guinea pig without averaging.

    Science.gov (United States)

    Walloch, R A

    1975-01-01

    Potentials evoked by tonal pulses and recorded with a monopolar electrode on the pial surface over the auditory cortex of the guinea pig are presented. These potentials are compared with average potentials recorded in previous studies with an electrode on the dura. The potentials recorded by these two techniques have similar waveforms, peak latencies and thresholds. They appear to be generated within the same region of the cerebral cortex. As can be expected, the amplitude of the evoked potentials recorded from the pial surface is larger than that recorded from the dura. Consequently, averaging is not needed to extract the evoked potential once the dura is removed. The thresholds for the evoked cortical potential are similar to behavioral thresholds for the guinea pig at high frequencies; however, evoked potential thresholds are elevated over behavioral thresholds at low frequencies. The removal of the dura and the direct recording of the evoked potential appears most appropriate for acute experiments. The recording of an evoked potential with dura electrodes employing averaging procedures appears most appropriate for chronic studies.

  3. Role of spatial averaging in multicellular gradient sensing

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation–global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation–global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
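
    The covariance argument can be summarized in one line. In schematic notation, with c_1 and c_2 the two (spatially averaged) concentration readings whose subtraction yields the gradient estimate:

        \mathrm{Var}(c_1 - c_2) \;=\; \mathrm{Var}(c_1) + \mathrm{Var}(c_2) - 2\,\mathrm{Cov}(c_1, c_2)

    Transverse averaging lowers the two variances, but in the local excitation-global inhibition model it lowers the covariance even faster, so the variance of the difference, and hence the gradient-sensing error, can grow.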

  4. Aerodynamic surface stress intermittency and conditionally averaged turbulence statistics

    Science.gov (United States)

    Anderson, William; Lanigan, David

    2015-11-01

    Aeolian erosion is induced by aerodynamic stress imposed by atmospheric winds. Erosion models prescribe that sediment flux, q, scales with aerodynamic stress raised to exponent n, where n > 1. Since stress (in fully rough, inertia-dominated flows) scales with incoming velocity squared, u^2, it follows that q ~ u^2n (where u is some relevant component of the flow). Thus, even small (turbulent) deviations of u from its time-mean may be important for aeolian activity. This rationale is augmented given that surface layer turbulence exhibits maximum Reynolds stresses in the fluid immediately above the landscape. To illustrate the importance of stress intermittency, we have used conditional averaging predicated on stress during large-eddy simulation of atmospheric boundary layer flow over an arid, bare landscape. Conditional averaging provides an ensemble-mean visualization of flow structures responsible for erosion `events'. Preliminary evidence indicates that surface stress peaks are associated with the passage of inclined, high-momentum regions flanked by adjacent low-momentum regions. We characterize geometric attributes of such structures and explore streamwise and vertical vorticity distribution within the conditionally averaged flow field. This work was supported by the National Sci. Foundation, Phys. and Dynamic Meteorology Program (PM: Drs. N. Anderson, C. Lu, and E. Bensman) under Grant # 1500224. Computational resources were provided by the Texas Adv. Comp. Center at the Univ. of Texas.

  5. Probability density function transformation using seeded localized averaging

    International Nuclear Information System (INIS)

    Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: 1) for symmetric distributions, SLA preserves both the mean and symmetry; 2) for uni-modal symmetric distributions, SLA reduces variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with Gaussian PDF the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)

  6. Human facial beauty : Averageness, symmetry, and parasite resistance.

    Science.gov (United States)

    Thornhill, R; Gangestad, S W

    1993-09-01

    It is hypothesized that human faces judged to be attractive by people possess two features-averageness and symmetry-that promoted adaptive mate selection in human evolutionary history by way of production of offspring with parasite resistance. Facial composites made by combining individual faces are judged to be attractive, and more attractive than the majority of individual faces. The composites possess both symmetry and averageness of features. Facial averageness may reflect high individual protein heterozygosity and thus an array of proteins to which parasites must adapt. Heterozygosity may be an important defense of long-lived hosts against parasites when it occurs in portions of the genome that do not code for the essential features of complex adaptations. In this case heterozygosity can create a hostile microenvironment for parasites without disrupting adaptation. Facial bilateral symmetry is hypothesized to affect positive beauty judgments because symmetry is a certification of overall phenotypic quality and developmental health, which may be importantly influenced by parasites. Certain secondary sexual traits are influenced by testosterone, a hormone that reduces immunocompetence. Symmetry and size of the secondary sexual traits of the face (e.g., cheek bones) are expected to correlate positively and advertise immunocompetence honestly and therefore to affect positive beauty judgments. Facial attractiveness is predicted to correlate with attractive, nonfacial secondary sexual traits; other predictions from the view that parasite-driven selection led to the evolution of psychological adaptations of human beauty perception are discussed. The view that human physical attractiveness and judgments about human physical attractiveness evolved in the context of parasite-driven selection leads to the hypothesis that both adults and children have a species-typical adaptation to the problem of identifying and favoring healthy individuals and avoiding parasite

  7. Resonance averaged channel radiative neutron capture cross sections

    International Nuclear Information System (INIS)

    In order to apply Lane and Lynn's channel capture model in calculations with a realistic optical model potential, we have derived an approximate wave function for the entrance channel in the neutron-nucleus reaction, based on the intermediate interaction model. It is valid in the exterior region as well as the region near the nuclear surface, and is expressed in terms of the wave function and reactance matrix of the optical model and of the near-resonance parameters. With this formalism the averaged channel radiative neutron capture cross section in the resonance region is written as the sum of three terms. The first two terms correspond to contribution of the optical model real and imaginary parts respectively, and together can be regarded as the radiative capture of the shape elastic wave. The third term is a fluctuation term, corresponding to the radiative capture of the compound elastic wave in the exterior region. On applying this theory in the resonance region, we obtain an expression for the average valence radiative width similar to that of Lane and Mughabghab. We have investigated the magnitude and energy dependence of the three terms as a function of the neutron incident energy. Calculated results for 98Mo and 55Mn show that the averaged channel radiative capture cross section in the giant resonance region of the neutron strength function may account for a considerable fraction of the total (n, γ) cross section; at lower neutron energies a large part of this channel capture arises from the fluctuation term. We have also calculated the partial capture cross section in 98Mo and 55Mn at 2.4 keV and 24 keV, respectively, and compared the 98Mo results with the experimental data. (orig.)

  8. Aerodynamic Surface Stress Intermittency and Conditionally Averaged Turbulence Statistics

    Science.gov (United States)

    Anderson, W.

    2015-12-01

    Aeolian erosion of dry, flat, semi-arid landscapes is induced (and sustained) by kinetic energy fluxes in the aloft atmospheric surface layer. During saltation -- the mechanism responsible for surface fluxes of dust and sediment -- briefly suspended sediment grains undergo a ballistic trajectory before impacting and `splashing' smaller-diameter (dust) particles vertically. Conceptual models typically indicate that sediment flux, q (via saltation or drift), scales with imposed aerodynamic (basal) stress raised to some exponent, n, where n > 1. Since basal stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that q ~ u^2n (where u is some relevant component of the above flow field, u(x,t)). Thus, even small (turbulent) deviations of u from its time-averaged value may play an enormously important role in aeolian activity on flat, dry landscapes. The importance of this argument is further augmented given that turbulence in the atmospheric surface layer exhibits maximum Reynolds stresses in the fluid immediately above the landscape. In order to illustrate the importance of surface stress intermittency, we have used conditional averaging predicated on aerodynamic surface stress during large-eddy simulation of atmospheric boundary layer flow over a flat landscape with momentum roughness length appropriate for the Llano Estacado in west Texas (a flat agricultural region that is notorious for dust transport). By using data from a field campaign to measure diurnal variability of aeolian activity and prevailing winds on the Llano Estacado, we have retrieved the threshold friction velocity (which can be used to compute threshold surface stress under the geostrophic balance with the Monin-Obukhov similarity theory). This averaging procedure provides an ensemble-mean visualization of flow structures responsible for erosion `events'. Preliminary evidence indicates that surface stress peaks are associated with the passage of

  9. RELATIONSHIP BETWEEN J-INTEGRAL AND FRACTURE SURFACE AVERAGE PROFILE

    Institute of Scientific and Technical Information of China (English)

    Y.G. Cao; S.F. Xue; K.Tanaka

    2007-01-01

    To investigate the causes that led to the formation of cracks in materials, a novel method that only considered the fracture surfaces for determining the fracture toughness parameter of J-integral for plane strain was proposed. The principle of the fracture-surface topography analysis (FRASTA) was used. In FRASTA, the fracture surfaces were scanned by laser microscope and the elevation data was recorded for analysis. The relationship between J-integral and fracture surface average profile for plane strain was deduced. It was also verified that the J-integral determined by the novel method and by the compliance method match each other well.

  10. Concentration fluctuations and averaging time in vapor clouds

    CERN Document Server

    Wilson, David J

    2010-01-01

    This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind where dispersion is dominated by atmospheric t

  11. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    Science.gov (United States)

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
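
    As a toy illustration of the periodic-parameter idea only (not the exact-likelihood algorithm of the paper), the sketch below simulates a periodic AR(1) process whose autoregressive coefficient and noise scale cycle with a season of length 12; all coefficient values are made up.

        import numpy as np

        def simulate_par1(n, phi_season, sigma_season, seed=0):
            """Simulate a periodic AR(1): x_t = phi_{t mod s} * x_{t-1} + sigma_{t mod s} * eps_t."""
            rng = np.random.default_rng(seed)
            s = len(phi_season)
            x = np.zeros(n)
            for t in range(1, n):
                m = t % s
                x[t] = phi_season[m] * x[t - 1] + sigma_season[m] * rng.normal()
            return x

        phi = [0.8, 0.6, 0.3, 0.7, 0.5, 0.2, 0.4, 0.6, 0.8, 0.7, 0.5, 0.3]   # one AR coefficient per month
        sig = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 0.9, 1.3, 1.0, 1.1, 0.9, 1.0]   # one noise scale per month
        series = simulate_par1(600, phi, sig)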

  12. Optical Parametric Amplification for High Peak and Average Power

    Energy Technology Data Exchange (ETDEWEB)

    Jovanovic, I

    2001-11-26

    Optical parametric amplification is an established broadband amplification technology based on a second-order nonlinear process of difference-frequency generation (DFG). When used in chirped pulse amplification (CPA), the technology has been termed optical parametric chirped pulse amplification (OPCPA). OPCPA holds a potential for producing unprecedented levels of peak and average power in optical pulses through its scalable ultrashort pulse amplification capability and the absence of quantum defect, respectively. The theory of three-wave parametric interactions is presented, followed by a description of the numerical model developed for nanosecond pulses. Spectral, temperature and angular characteristics of OPCPA are calculated, with an estimate of pulse contrast. An OPCPA system centered at 1054 nm, based on a commercial tabletop Q-switched pump laser, was developed as the front end for a large Nd-glass petawatt-class short-pulse laser. The system does not utilize electro-optic modulators or multi-pass amplification. The obtained overall 6% efficiency is the highest to date in OPCPA that uses a tabletop commercial pump laser. The first compression of pulses amplified in highly nondegenerate OPCPA is reported, with the obtained pulse width of 60 fs. This represents the shortest pulse to date produced in OPCPA. Optical parametric amplification in {beta}-barium borate was combined with laser amplification in Ti:sapphire to produce the first hybrid CPA system, with an overall conversion efficiency of 15%. Hybrid CPA combines the benefits of high gain in OPCPA with high conversion efficiency in Ti:sapphire to allow significant simplification of future tabletop multi-terawatt sources. Preliminary modeling of average power limits in OPCPA and pump laser design are presented, and an approach based on cascaded DFG is proposed to increase the average power beyond the single-crystal limit. Angular and beam quality effects in optical parametric amplification are modeled

  13. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Alesia Sadosky (Pfizer Inc, New York, NY, USA), Vijaya Koduru (Eliassen Group, New London, CT, USA), E Jay Bienen (Outcomes Research Consultant, New York, NY, USA), Joseph C Cappelleri (Pfizer Inc, Groton, CT, USA). Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness) for mild vs moderate pain and the highest probability was 76.4% (on cold/heat) for mild vs severe pain. The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain

  14. Averaged hole mobility model of biaxially strained Si

    Institute of Scientific and Technical Information of China (English)

    Song Jianjun; Zhu He; Yang Jinyong; Zhang Heming; Xuan Rongxi; Hu Huiyong

    2013-01-01

    We aim to establish a model of the averaged hole mobility of strained Si grown on (001), (101), and (111) relaxed Si1-xGex substrates. The results obtained from our calculation show that the hole mobility values corresponding to strained Si (001), (101) and (111) increase by at most about three, two and one times, respectively, in comparison with unstrained Si. The results can provide a valuable reference to the understanding and design of strained Si-based device physics.

  15. Weighted Average Consensus-Based Unscented Kalman Filtering.

    Science.gov (United States)

    Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong

    2016-02-01

    In this paper, we investigate the consensus-based distributed state estimation problem for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. Moreover, a weighted average consensus-based UKF algorithm is developed for the purpose of estimating the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example. PMID:26168453
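
    A minimal sketch of the weighted-average consensus step on an undirected graph (the UKF machinery itself is omitted); the graph, weights, and local estimates below are invented for illustration and are not from the paper.

        import numpy as np

        def consensus_step(estimates, weights, neighbors):
            """One weighted-average consensus iteration.

            estimates : dict node -> local state estimate (1-D array)
            weights   : dict node -> information weight (e.g. an inverse-covariance proxy)
            neighbors : dict node -> list of neighboring nodes (undirected graph)
            """
            updated = {}
            for i in estimates:
                nbrs = [i] + list(neighbors[i])
                w = np.array([weights[j] for j in nbrs])
                x = np.stack([estimates[j] for j in nbrs])
                updated[i] = (w[:, None] * x).sum(axis=0) / w.sum()
            return updated

        # Three-node chain 0 - 1 - 2 with scalar-weighted 2-D estimates
        est = {0: np.array([1.0, 0.0]), 1: np.array([1.2, 0.1]), 2: np.array([0.9, -0.1])}
        wts = {0: 2.0, 1: 1.0, 2: 1.5}
        nbr = {0: [1], 1: [0, 2], 2: [1]}
        for _ in range(10):
            est = consensus_step(est, wts, nbr)   # estimates drift toward a common weighted average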

  16. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated due to modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced for the aperture distribution due to modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. Separable Taylor distribution with nbar=4 and 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  17. Using Averaged Modeling for Capacitors Voltages Observer in NPC Inverter

    Directory of Open Access Journals (Sweden)

    Bassem Omri

    2012-01-01

    This paper develops an adaptive observer to estimate the capacitor voltages of a three-level neutral-point-clamped (NPC) inverter. A robust estimation method using one parameter is proposed, which eliminates the voltage sensors. An averaged model of the inverter was used to develop the observer. This kind of modeling allows a good trade-off between simulation cost and precision. A circuit model of the inverter (implemented in the SimPower Matlab simulator) associated with the observer algorithm was used to validate the proposed algorithm.

  18. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and model testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs
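
    As a concrete (linear) starting point, an ARMA model can be fitted to measured response data with statsmodels; the nonlinear ARMA extensions discussed in the paper are not shown here, and the signal below is synthetic.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # Synthetic vibration-like signal: damped oscillation plus noise (illustrative only)
        rng = np.random.default_rng(1)
        t = np.arange(2000)
        y = np.exp(-t / 800.0) * np.sin(0.2 * t) + 0.05 * rng.normal(size=t.size)

        # ARMA(2, 1) is ARIMA with no differencing (d = 0)
        model = ARIMA(y, order=(2, 0, 1))
        result = model.fit()
        print(result.summary())
        one_step_forecast = result.forecast(steps=10)   # short-horizon response prediction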

  19. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.

  20. Analytical network-averaging of the tube model:. Rubber elasticity

    Science.gov (United States)

    Khiêm, Vu Ngoc; Itskov, Mikhail

    2016-10-01

    In this paper, a micromechanical model for rubber elasticity is proposed on the basis of analytical network-averaging of the tube model and by applying a closed-form of the Rayleigh exact distribution function for non-Gaussian chains. This closed-form is derived by considering the polymer chain as a coarse-grained model on the basis of the quantum mechanical solution for finitely extensible dumbbells (Ilg et al., 2000). The proposed model includes very few physically motivated material constants and demonstrates good agreement with experimental data on biaxial tension as well as simple shear tests.

  1. Control of average spacing of OMCVD grown gold nanoparticles

    Science.gov (United States)

    Rezaee, Asad

    Metallic nanostructures and their applications are a rapidly expanding field. Noble metals such as silver and gold have historically been used to demonstrate plasmon effects due to their strong resonances, which occur in the visible part of the electromagnetic spectrum. Localized surface plasmon resonance (LSPR) produces an enhanced electromagnetic field at the interface between a gold nanoparticle (Au NP) and the surrounding dielectric. This enhanced field can be used for metal-dielectric interface-sensitive optical interactions that form a powerful basis for optical sensing. In addition to the surrounding material, the LSPR spectral position and width depend on the size, shape, and average spacing between these particles. Au NP LSPR based sensors exhibit their highest sensitivity with optimized parameters and usually operate by investigating absorption peak shifts. The absorption peak of randomly deposited Au NPs on surfaces is mostly broad. As a result, the absorption peak shifts upon binding of a material onto Au NPs might not be very clear for further analysis. Therefore, novel methods based on three well-known techniques, self-assembly, ion irradiation, and organo-metallic chemical vapour deposition (OMCVD), are introduced to control the average spacing between Au NPs. In addition to covalent binding and other advantages of OMCVD grown Au NPs, interesting optical features due to their non-spherical shapes are presented. The first step towards the average-spacing control is to uniformly form self-assembled monolayers (SAMs) of octadecyltrichlorosilane (OTS) as resists for OMCVD Au NPs. The formation and optimization of the OTS SAMs are extensively studied. The optimized resist SAMs are ion-irradiated by a focused ion beam (FIB) and ions generated by a Tandem accelerator. The irradiated areas are refilled with 3-mercaptopropyl-trimethoxysilane (MPTS) to provide nucleation sites for the OMCVD Au NP growth. Each step during sample preparation is monitored by

  2. Model averaging for semiparametric additive partial linear models

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    To improve the prediction accuracy of semiparametric additive partial linear models (APLM) and the coverage probability of confidence intervals of the parameters of interest, we explore a focused information criterion for model selection among APLM after we estimate the nonparametric functions by polynomial spline smoothing, and introduce a general model average estimator. The major advantage of the proposed procedures is that iterative backfitting implementation is avoided, which thus results in gains in computational simplicity. The resulting estimators are shown to be asymptotically normal. A simulation study and a real data analysis are presented for illustrations.

  3. Colorectal Cancer Screening in Average Risk Populations: Evidence Summary.

    Science.gov (United States)

    Tinmouth, Jill; Vella, Emily T; Baxter, Nancy N; Dubé, Catherine; Gould, Michael; Hey, Amanda; Ismaila, Nofisat; McCurdy, Bronwen R; Paszat, Lawrence

    2016-01-01

    Introduction. The objectives of this systematic review were to evaluate the evidence for different CRC screening tests and to determine the most appropriate ages of initiation and cessation for CRC screening and the most appropriate screening intervals for selected CRC screening tests in people at average risk for CRC. Methods. Electronic databases were searched for studies that addressed the research objectives. Meta-analyses were conducted with clinically homogenous trials. A working group reviewed the evidence to develop conclusions. Results. Thirty RCTs and 29 observational studies were included. Flexible sigmoidoscopy (FS) prevented CRC and led to the largest reduction in CRC mortality with a smaller but significant reduction in CRC mortality with the use of guaiac fecal occult blood tests (gFOBTs). There was insufficient or low quality evidence to support the use of other screening tests, including colonoscopy, as well as changing the ages of initiation and cessation for CRC screening with gFOBTs in Ontario. Either annual or biennial screening using gFOBT reduces CRC-related mortality. Conclusion. The evidentiary base supports the use of FS or FOBT (either annual or biennial) to screen patients at average risk for CRC. This work will guide the development of the provincial CRC screening program. PMID:27597935

  4. Prompt fission neutron spectra and average prompt neutron multiplicities

    International Nuclear Information System (INIS)

    We present a new method for calculating the prompt fission neutron spectrum N(E) and average prompt neutron multiplicity ν̄_p as functions of the fissioning nucleus and its excitation energy. The method is based on standard nuclear evaporation theory and takes into account (1) the motion of the fission fragments, (2) the distribution of fission-fragment residual nuclear temperature, (3) the energy dependence of the cross section σ_c for the inverse process of compound-nucleus formation, and (4) the possibility of multiple-chance fission. We use a triangular distribution in residual nuclear temperature based on the Fermi-gas model. This leads to closed expressions for N(E) and ν̄_p when σ_c is assumed constant and readily computed quadratures when the energy dependence of σ_c is determined from an optical model. Neutron spectra and average multiplicities calculated with an energy-dependent cross section agree well with experimental data for the neutron-induced fission of 235U and the spontaneous fission of 252Cf. For the latter case, there are some significant inconsistencies between the experimental spectra that need to be resolved. 29 references

  5. BeppoSAX average spectra of Seyfert galaxies

    CERN Document Server

    Malizia, A; Stephen, J B; Cocco, G D; Fiore, F; Dean, A J

    2003-01-01

    We have studied the average 3-200 keV spectra of Seyfert galaxies of type 1 and 2, using data obtained with BeppoSAX. The average Seyfert 1 spectrum is well-fitted by a power law continuum with photon spectral index Gamma~1.9, a Compton reflection component R~0.6-1 (depending on the inclination angle between the line of sight and the reflecting material) and a high-energy cutoff at around 200 keV; there is also an iron line at 6.4 keV characterized by an equivalent width of 120 eV. Seyfert 2's on the other hand show stronger neutral absorption (NH=3-4 x 10^{22} atoms cm-2) as expected but are also characterized by an X-ray power law which is substantially harder (Gamma~1.75) and with a cut-off at lower energies (E_c~130 keV); the iron line parameters are instead substantially similar to those measured in type 1 objects. There are only two possible solutions to this problem: to assume more reflection in Seyfert 2 galaxies than observed in Seyfert 1 or more complex absorption than estimated in the first instanc...

  6. Seismicity and average velocities beneath the Argentine Puna Plateau

    Science.gov (United States)

    Schurr, B.; Asch, G.; Rietbrock, A.; Kind, R.; Pardo, M.; Heit, B.; Monfret, T.

    A network of 60 seismographs was deployed across the Andes at ∼23.5°S. The array was centered in the backarc, atop the Puna high plateau in NW Argentina. P and S arrival times of 426 intermediate depth earthquakes were inverted for 1-D velocity structure and hypocentral coordinates. Average velocities and v_p/v_s in the crust are low. Average mantle velocities are high but difficult to interpret because of the presence of a fast velocity slab at depth. Although the hypocenters sharply define a 35° dipping Benioff zone, seismicity in the slab is not continuous. The spatial clustering of earthquakes is thought to reflect inherited heterogeneities of the subducted oceanic lithosphere. Additionally, 57 crustal earthquakes were located. Seismicity concentrates in the fold and thrust belt of the foreland and Eastern Cordillera, and along and south of the El Toro-Olacapato-Calama Lineament (TOCL). Focal mechanisms of two earthquakes at this structure exhibit left lateral strike-slip mechanisms similar to the suggested kinematics of the TOCL. We believe that the Puna north of the TOCL behaves like a rigid block with little internal deformation, whereas the area south of the TOCL is weaker and currently deforming.

  7. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Different ranges of size of interfaces and eddies are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of size. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in the single-phase Large Eddy Simulation, a filtering process is used to point out Large Interface (LI) simulation and Small Interface (SI) modelling. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer terms modelling and the LI recognition are validated on analytical and experimental tests. A square base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact produced regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author)

  8. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q^2 terms by the renormalization group, in excellent agreement with the present world average.

  9. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251
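
    At its simplest, a 'face-average' of this kind can be thought of as a pixel-wise mean over aligned images of the same person. The sketch below assumes the images have already been registered to a common shape and size, which is the hard part in practice; the data are fake.

        import numpy as np

        def face_average(aligned_images):
            """Pixel-wise mean of pre-aligned face images (all the same H x W shape)."""
            stack = np.stack([img.astype(np.float64) for img in aligned_images])
            return stack.mean(axis=0)

        # Illustrative: five fake 64x64 grayscale "images" of one user
        rng = np.random.default_rng(2)
        images = [rng.uniform(0, 255, size=(64, 64)) for _ in range(5)]
        template = face_average(images)   # would be enrolled instead of a single image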

  10. Face averages enhance user recognition for smartphone security.

    Directory of Open Access Journals (Sweden)

    David J Robertson

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.

  11. Average radiation widths and the giant dipole resonance width

    Energy Technology Data Exchange (ETDEWEB)

    Arnould, M.; Thielemann, F.K.

    1982-11-01

    The average E1 radiation width can be calculated in terms of the energy E_G and width Γ_G of the Giant Dipole Resonance (GDR). While various models can predict E_G quite reliably, the theoretical situation regarding Γ_G is much less satisfactory. We propose a simple phenomenological model which is able to provide Γ_G values in good agreement with experimental data for spherical or deformed intermediate and heavy nuclei. In particular, this model can account for shell effects in Γ_G, and can be used in conjunction with the droplet model. The Γ_G values derived in such a way are used to compute average E1 radiation widths which are quite close to the experimental values. The method proposed for the calculation of Γ_G also appears to be well suited when the GDR characteristics of extended sets of nuclei are required, as is namely the case in nuclear astrophysics.

  12. Two-Stage Bayesian Model Averaging in Endogenous Variable Models.

    Science.gov (United States)

    Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E

    2014-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
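
    For orientation, the classical 2SLS estimator that 2SBMA extends can be written in a few lines of numpy; the variable names and toy data are illustrative, and the Bayesian model-averaging layer itself is not reproduced here.

        import numpy as np

        def two_stage_least_squares(y, X, Z):
            """Classical 2SLS: regress X on the instruments Z, then y on the fitted X."""
            # First stage: projection of the (endogenous) regressors onto the instrument space
            P = Z @ np.linalg.solve(Z.T @ Z, Z.T)        # projection matrix Z (Z'Z)^-1 Z'
            X_hat = P @ X
            # Second stage: OLS of y on the fitted regressors
            beta = np.linalg.solve(X_hat.T @ X, X_hat.T @ y)
            return beta

        # Toy data: one endogenous regressor, two instruments
        rng = np.random.default_rng(3)
        n = 500
        z = rng.normal(size=(n, 2))
        u = rng.normal(size=n)
        x = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(size=n)   # endogenous through u
        y = 2.0 * x + u + rng.normal(size=n)
        beta_hat = two_stage_least_squares(y, x[:, None], z)            # close to 2.0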

  13. Loss of lifetime due to radiation exposure-averaging problems.

    Science.gov (United States)

    Raicević, J J; Merkle, M; Ehrhardt, J; Ninković, M M

    1997-04-01

    A new method is presented for assessing the years of life lost (YLL) due to stochastic effects caused by exposure to ionizing radiation. The widely accepted method from the literature uses a ratio of means of two quantities, defining in fact the loss of life as a derived quantity. We start from the real stochastic nature of the quantity (YLL), which enables us to obtain its mean values in a consistent way, using the standard averaging procedures based on the corresponding joint probability density functions needed in this problem. Our method is mathematically different and produces lower values of average YLL. In this paper we also find certain similarities with the concept of loss of life expectancy among exposure induced deaths (LLE-EID), which is accepted in the recently published UNSCEAR report, where the same quantity is defined as years of life lost per radiation induced case (YLC). Using the same data base, the YLL and the LLE-EID are calculated and compared for the simplest exposure case: discrete exposure at age a. It is found that LLE-EID overestimates the YLL, and that the magnitude of this overestimation reaches more than 15%, depending on the effect under consideration. PMID:9119679

  14. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have a lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique reduces color intensity levels using the moving average histogram technique, followed by correction of the color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low resolution images for testing purposes, but the proposed method has been tested on various image resolutions to give a clear assessment of the technique. The proposed method has been tested on 35 images of varying resolution and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), and computational complexity. The outcome shows that the proposed methodology is a better trade-off in terms of compression ratio, PSNR (which determines the quality of the image), and computational complexity. (author)
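
    A hedged sketch of the first stage only (reducing intensity levels via a moving-average-smoothed histogram); the window length, level count, and the exact reduction rule are assumptions rather than the authors' procedure, and the RBF correction stage is omitted.

        import numpy as np

        def moving_average_histogram_levels(image, n_levels=32, window=7):
            """Pick representative gray levels from a moving-average-smoothed histogram."""
            hist, _ = np.histogram(image, bins=256, range=(0, 256))
            kernel = np.ones(window) / window
            smooth = np.convolve(hist, kernel, mode="same")      # moving average of the histogram
            # Keep the n_levels most populated smoothed bins as representative intensities
            levels = np.sort(np.argsort(smooth)[-n_levels:])
            # Map every pixel to the nearest retained level
            idx = np.abs(image.astype(np.int32)[..., None] - levels[None, None, :]).argmin(axis=-1)
            return levels[idx].astype(np.uint8)

        rng = np.random.default_rng(4)
        img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
        compressed = moving_average_histogram_levels(img)          # 32 intensity levels instead of 256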

  15. Average density and porosity of high-strength lightweight concrete

    Directory of Open Access Journals (Sweden)

    A.S. Inozemtcev

    2014-11-01

    The analysis results of the high-strength lightweight concrete (HSLWC) structure are presented in this paper. X-ray tomography, optical microscopy and other methods are used to study average density and porosity. It has been revealed that mixtures of HSLWC with density 1300…1500 kg/m3 have a homogeneous structure. The developed concrete has a uniform distribution of the hollow filler and a uniform layer of cement-mineral matrix. The highly saturated gas phase, which is divided by denser large particles of quartz sand and products of cement hydration in the contact area, allows forming a composite material with low average density, high porosity (up to 40%) and high strength (compressive strength is more than 40 MPa). Special modifiers increase adhesion, compact the structure in the contact area, decrease water absorption of the high-strength lightweight concrete (up to 1%) and ensure its high water resistance (water resistance coefficient is more than 0.95).

  16. Measurement properties of painDETECT by average pain severity

    Directory of Open Access Journals (Sweden)

    Cappelleri JC

    2014-11-01

    Joseph C Cappelleri (Pfizer, Groton, CT, USA), E Jay Bienen (Outcomes research consultant, New York, NY, USA), Vijaya Koduru (Eliassen Group, New London, CT, USA), Alesia Sadosky (Pfizer, New York, NY, USA). Background: Since the burden of neuropathic pain (NeP) increases with pain severity, it is important to characterize and quantify pain severity when identifying NeP patients. This study evaluated whether painDETECT, a screening questionnaire to identify patients with NeP, can distinguish pain severity. Materials and methods: Subjects (n=614, 55.4% male, 71.8% white, mean age 55.5 years) with confirmed NeP were identified during office visits to US community-based physicians. The Brief Pain Inventory – Short Form stratified subjects by mild (score 0–3, n=110), moderate (score 4–6, n=297), and severe (score 7–10, n=207) average pain. Scores on the nine-item painDETECT (seven pain-symptom items, one pain-course item, one pain-irradiation item) range from -1 to 38 (worst NeP); the seven-item painDETECT scores (only pain symptoms) range from 0 to 35. The ability of painDETECT to discriminate average pain-severity levels, based on the average pain item from the Brief Pain Inventory – Short Form (0–10 scale), was evaluated using analysis of variance or covariance models to obtain unadjusted and adjusted (age, sex, race, ethnicity, time since NeP diagnosis, number of comorbidities) mean painDETECT scores. Cumulative distribution functions on painDETECT scores by average pain severity were compared (Kolmogorov–Smirnov test). Cronbach's alpha assessed internal consistency reliability. Results: Unadjusted mean scores were 15.2 for mild, 19.8 for moderate, and 24.0 for severe pain for the nine items, and 14.3, 18.6, and 22.7, respectively, for the seven items. Adjusted nine-item mean scores for mild, moderate, and severe pain were 17.3, 21.3, and 25.3, respectively; adjusted seven-item mean scores were 16.4, 20.1, and 24.0, respectively. All pair

  17. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
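
    The core BMA idea, a weighted mixture of member PDFs with weights learned over a training period, can be sketched as follows. Gaussian member PDFs and hand-set weights are simplifying assumptions here, not the exact procedure of [1] or [3].

        import numpy as np
        from scipy.stats import norm

        def bma_predictive_pdf(y_grid, member_forecasts, weights, sigma):
            """BMA predictive density: a weighted mixture of member PDFs.

            Gaussian member PDFs centred on each ensemble forecast are an assumption;
            the weights would normally be fitted (e.g. by EM) over a training period.
            """
            pdf = np.zeros_like(y_grid, dtype=float)
            for f, w in zip(member_forecasts, weights):
                pdf += w * norm.pdf(y_grid, loc=f, scale=sigma)
            return pdf

        forecasts = np.array([6.2, 7.0, 5.5, 6.8])     # ensemble member wind speeds (m/s), illustrative
        weights = np.array([0.4, 0.3, 0.2, 0.1])       # posterior model weights, sum to 1
        grid = np.linspace(0, 15, 301)
        density = bma_predictive_pdf(grid, forecasts, weights, sigma=1.2)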

  18. Runoff and leaching of metolachlor from Mississippi River alluvial soil during seasons of average and below-average rainfall.

    Science.gov (United States)

    Southwick, Lloyd M; Appelboom, Timothy W; Fouss, James L

    2009-02-25

    The movement of the herbicide metolachlor [2-chloro-N-(2-ethyl-6-methylphenyl)-N-(2-methoxy-1-methylethyl)acetamide] via runoff and leaching from 0.21 ha plots planted to corn on Mississippi River alluvial soil (Commerce silt loam) was measured for a 6-year period, 1995-2000. The first three years received normal rainfall (30 year average); the second three years experienced reduced rainfall. The 4-month periods prior to application plus the following 4 months after application were characterized by 1039 +/- 148 mm of rainfall for 1995-1997 and by 674 +/- 108 mm for 1998-2000. During the normal rainfall years 216 +/- 150 mm of runoff occurred during the study seasons (4 months following herbicide application), accompanied by 76.9 +/- 38.9 mm of leachate. For the low-rainfall years these amounts were 16.2 +/- 18.2 mm of runoff (92% less than the normal years) and 45.1 +/- 25.5 mm of leachate (41% less than the normal seasons). Runoff of metolachlor during the normal-rainfall seasons was 4.5-6.1% of application, whereas leaching was 0.10-0.18%. For the below-normal periods, these losses were 0.07-0.37% of application in runoff and 0.22-0.27% in leachate. When averages over the three normal and the three less-than-normal seasons were taken, a 35% reduction in rainfall was characterized by a 97% reduction in runoff loss and a 71% increase in leachate loss of metolachlor on a percent of application basis. The data indicate an increase in preferential flow in the leaching movement of metolachlor from the surface soil layer during the reduced rainfall periods. Even with increased preferential flow through the soil during the below-average rainfall seasons, leachate loss (percent of application) of the herbicide remained below 0.3%. Compared to the average rainfall seasons of 1995-1997, the below-normal seasons of 1998-2000 were characterized by a 79% reduction in total runoff and leachate flow and by a 93% reduction in corresponding metolachlor movement via these routes

  19. Average of peak-to-average ratio (PAR) of IS95 and CDMA2000 systems-single carrier

    OpenAIRE

    Lau, VKN

    2001-01-01

    Peak-to-average ratio (PAR) of a signal is an important parameter. It determines the input backoff factor of the amplifier needed to avoid clipping and spectral regrowth. We analyze and compare the PAR of the downlink signal for IS95 and the CDMA2000 single-carrier systems. It is found that the PAR of the transmitted signal depends on the Walsh code assignment. Furthermore, we found that the PAR of the CDMA2000 signal is always lower than that of the IS95 signal. Finally, PAR control by Walsh code selection is ...

  20. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small (<3 μVrms, 700 Hz-7 kHz), thereby requiring a baseline noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operational transconductance amplifiers, leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating <1.5 μVrms total recording baseline noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the
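
    The 1/√N argument behind hardware averaging can be checked numerically with a toy model in which each of N parallel amplifier channels sees the same signal plus independent noise; the amplitudes and noise levels are illustrative, and source-resistance effects (which limit the gain in practice) are ignored.

        import numpy as np

        rng = np.random.default_rng(5)
        fs, dur = 30_000, 1.0                          # 30 kHz sampling, 1 s record
        t = np.arange(int(fs * dur)) / fs
        signal = 2e-6 * np.sin(2 * np.pi * 1000 * t)   # 2 uV, 1 kHz "neural" signal

        def averaged_noise_rms(n_channels, amp_noise_rms=2e-6):
            """RMS of residual amplifier noise after averaging N parallel channels."""
            noise = amp_noise_rms * rng.normal(size=(n_channels, t.size))
            recorded = signal[None, :] + noise         # same signal, independent noise per channel
            averaged = recorded.mean(axis=0)
            return np.sqrt(np.mean((averaged - signal) ** 2))

        for n in (1, 2, 4, 8):
            print(n, averaged_noise_rms(n))            # falls roughly as 1/sqrt(n)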

  1. Forecasts of time averages with a numerical weather prediction model

    Science.gov (United States)

    Roads, J. O.

    1986-01-01

    Forecasts of time averages of 1-10 days in duration by an operational numerical weather prediction model are documented for the global 500 mb height field in spectral space. Error growth in very idealized models is described in order to anticipate various features of these forecasts and in order to anticipate what the results might be if forecasts longer than 10 days were carried out by present day numerical weather prediction models. The data set for this study is described, and the equilibrium spectra and error spectra are documented; then, the total error is documented. It is shown how forecasts can immediately be improved by removing the systematic error, by using statistical filters, and by ignoring forecasts beyond about a week. Temporal variations in the error field are also documented.

  2. Maximum Average SAR Measurement Procedure for Multi-Antenna Transmitters

    Science.gov (United States)

    Iyama, Takahiro; Onishi, Teruo

    This paper proposes and verifies a specific absorption rate (SAR) measurement procedure for multi-antenna transmitters. The procedure requires measuring the two-dimensional electric field distribution for each of the antennas and calculating the three-dimensional SAR distribution for arbitrary weighting coefficients of the antennas prior to determining the average SAR. The proposed procedure is verified based on Finite-Difference Time-Domain (FDTD) calculation and measurement using electro-optic (EO) probes. For two reference dipoles, the differences in the 10 g SAR obtained with the proposed procedure, compared numerically and experimentally to that based on the original calculated three-dimensional SAR distribution, are at most 4.8% and 3.6%, respectively, at 1950 MHz. At 3500 MHz, this difference is at most 5.2% in the numerical verification.

  3. A note on computing average state occupation times

    Directory of Open Access Journals (Sweden)

    Jan Beyersmann

    2014-05-01

    Objective: This review discusses how biometricians would probably compute or estimate expected waiting times, if they had the data. Methods: Our framework is a time-inhomogeneous Markov multistate model, where all transition hazards are allowed to be time-varying. We assume that the cumulative transition hazards are given. That is, they are either known, as in a simulation, determined by expert guesses, or obtained via some method of statistical estimation. Our basic tool is product integration, which transforms the transition hazards into the matrix of transition probabilities. Product integration enjoys a rich mathematical theory, which has successfully been used to study probabilistic and statistical aspects of multistate models. Our emphasis will be on practical implementation of product integration, which allows us to numerically approximate the transition probabilities. Average state occupation times and other quantities of interest may then be derived from the transition probabilities.
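
    As a concrete illustration of the product-integration step described above, the sketch below builds the transition probability matrix of a simple time-inhomogeneous illness-death model by multiplying (I + dA(u)) over a fine time grid, and then integrates the state-occupation probabilities to obtain average occupation times over [0, tau]. The three-state layout, the hazard functions and the time horizon are assumptions made for this example, not values from the article.

```python
import numpy as np

# States: 0 = healthy, 1 = ill, 2 = dead (absorbing). Hazards are assumed
# time-varying functions chosen only for illustration.
def intensities(t):
    a01 = 0.10 + 0.02 * t      # healthy -> ill
    a02 = 0.02 + 0.01 * t      # healthy -> dead
    a12 = 0.15 + 0.03 * t      # ill -> dead
    return np.array([[-(a01 + a02), a01, a02],
                     [0.0, -a12, a12],
                     [0.0, 0.0, 0.0]])   # rows sum to zero

tau, n_steps = 10.0, 10_000
dt = tau / n_steps
I = np.eye(3)

P = I.copy()                    # P(0, t), built up as a product integral
occupation_time = np.zeros(3)   # expected time spent in each state, starting in state 0
for k in range(n_steps):
    t = k * dt
    occupation_time += P[0, :] * dt       # integrate the occupation probabilities
    P = P @ (I + intensities(t) * dt)     # product-integration step

print("P(0, tau) from state 'healthy':", np.round(P[0], 4))
print("Average state occupation times on [0, tau]:", np.round(occupation_time, 3))
```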

  4. Data Point Averaging for Computational Fluid Dynamics Data

    Science.gov (United States)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
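
    A minimal sketch of the grouping-and-averaging step described in the patent abstract: CFD points are binned into user-defined rectangular sub-areas and a single averaged parameter value (here a generic heat-transfer-like scalar) is reported per sub-area. The data structures and the rectangular sub-area definition are assumptions made for illustration, not the patented implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class CfdPoint:
    x: float
    y: float
    value: float   # e.g. a heat-transfer coefficient at this surface point

@dataclass
class SubArea:
    name: str
    xmin: float
    xmax: float
    ymin: float
    ymax: float

    def contains(self, p: CfdPoint) -> bool:
        return self.xmin <= p.x <= self.xmax and self.ymin <= p.y <= self.ymax

def average_per_subarea(points, subareas):
    """Return {sub-area name: average of CFD values for the points inside it}."""
    result = {}
    for area in subareas:
        inside = [p.value for p in points if area.contains(p)]
        result[area.name] = mean(inside) if inside else float("nan")
    return result

if __name__ == "__main__":
    pts = [CfdPoint(0.1, 0.2, 10.0), CfdPoint(0.4, 0.3, 14.0), CfdPoint(0.9, 0.8, 30.0)]
    areas = [SubArea("panel_A", 0.0, 0.5, 0.0, 0.5), SubArea("panel_B", 0.5, 1.0, 0.5, 1.0)]
    print(average_per_subarea(pts, areas))   # {'panel_A': 12.0, 'panel_B': 30.0}
```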

  5. Ocean tides in GRACE monthly averaged gravity fields

    DEFF Research Database (Denmark)

    Knudsen, Per

    2003-01-01

    The GRACE mission will map the Earth's gravity fields and its variations with unprecedented accuracy during its 5-year lifetime. Unless ocean tide signals and their load upon the solid earth are removed from the GRACE data, their long-period aliases obscure the more subtle climate signals that GRACE aims at. In this analysis the results of Knudsen and Andersen (2002) have been verified using actual post-launch orbit parameters of the GRACE mission. The current ocean tide models are not accurate enough to correct GRACE data at harmonic degrees lower than 47. The accumulated tidal errors may affect the GRACE data up to harmonic degree 60. A study of the revised alias frequencies confirms that the ocean tide errors will not cancel in the GRACE monthly averaged temporal gravity fields. The S-2 and the K-2 terms have alias frequencies much longer than 30 days, so they remain almost unreduced.

  6. Voter dynamics on an adaptive network with finite average connectivity

    Science.gov (United States)

    Mukhopadhyay, Abhishek; Schmittmann, Beate

    2009-03-01

    We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity (``degree'') of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.

  7. Effects of polynomial trends on detrending moving average analysis

    CERN Document Server

    Shao, Ying-Hui; Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2015-01-01

    The detrending moving average (DMA) algorithm is one of the best performing methods to quantify the long-term correlations in nonstationary time series. Many long-term correlated time series in real systems contain various trends. We investigate the effects of polynomial trends on the scaling behaviors and the performances of three widely used DMA methods including backward algorithm (BDMA), centered algorithm (CDMA) and forward algorithm (FDMA). We derive a general framework for polynomial trends and obtain analytical results for constant shifts and linear trends. We find that the behavior of the CDMA method is not influenced by constant shifts. In contrast, linear trends cause a crossover in the CDMA fluctuation functions. We also find that constant shifts and linear trends cause crossovers in the fluctuation functions obtained from the BDMA and FDMA methods. When a crossover exists, the scaling behavior at small scales comes from the intrinsic time series while that at large scales is dominated by the cons...
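
    For readers unfamiliar with the DMA family, the sketch below computes the centered variant (CDMA) fluctuation function F(n) for a synthetic signal: integrate the series into a profile, subtract a centered moving average of window n, and take the root-mean-square of the residual; the scaling of F(n) with n then gives the exponent discussed above. The window sizes and the white-noise test signal are arbitrary choices for illustration, not the configurations studied in the paper.

```python
import numpy as np

def cdma_fluctuation(x, window):
    """Centered detrending moving average fluctuation F(n) for an odd window n."""
    y = np.cumsum(x - np.mean(x))                    # profile of the series
    kernel = np.ones(window) / window
    trend = np.convolve(y, kernel, mode="valid")     # centered moving average
    half = window // 2
    residual = y[half:len(y) - half] - trend         # align profile with trend
    return np.sqrt(np.mean(residual ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)                 # uncorrelated noise, exponent ~ 0.5
    scales = [11, 21, 41, 81, 161, 321]
    F = [cdma_fluctuation(x, n) for n in scales]
    # Slope of log F versus log n estimates the scaling exponent (about 0.5 here).
    slope = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print("Estimated scaling exponent:", round(slope, 3))
```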

  8. Average weighted receiving time in recursive weighted Koch networks

    Indian Academy of Sciences (India)

    DAI MEIFENG; YE DANDAN; LI XINGYI; HOU JIE

    2016-06-01

    Motivated by the empirical observation in airport networks and metabolic networks, we introduce the model of recursive weighted Koch networks created by the recursive division method. As a fundamental dynamical process, random walks have received considerable interest in the scientific community. We then study random walks on the recursive weighted Koch networks, i.e., at each step the walker moves from its current node uniformly to any of its neighbours. In order to study the model more conveniently, we use the recursive division method again to calculate the sum of the mean weighted first-passage times for all nodes to absorption at the trap located in the merging node. It is shown that in a large network, the average weighted receiving time grows sublinearly with the network order.

  9. ORDERED WEIGHTED AVERAGING AGGREGATION METHOD FOR PORTFOLIO SELECTION

    Institute of Scientific and Technical Information of China (English)

    LIU Shancun; QIU Wanhua

    2004-01-01

    Portfolio management is a typical decision making problem under incomplete, sometimes unknown, information. This paper considers the portfolio selection problem under a general setting of uncertain states without probability. The investor's preference is based on his optimum degree about the nature, and his attitude can be described by an Ordered Weighted Averaging Aggregation function. We construct the OWA portfolio selection model, which is a nonlinear programming problem. The problem can be equivalently transformed into a mixed integer linear programming problem. A numerical example is given, and the solutions imply that the investor's strategies depend not only on his optimum degree but also on his preference weight vector. The general game-theoretical portfolio selection method, the max-min method and the competitive ratio method are all special settings of this model.
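
    For reference, an ordered weighted averaging (OWA) operator sorts its arguments in descending order before applying the weights, so the weight vector encodes the decision maker's degree of optimism. The sketch below is a generic OWA aggregation applied to a toy payoff vector, not the paper's full mixed-integer portfolio model; the payoffs and weight vectors are invented for illustration.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted average: weights are applied to the values sorted in descending order."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0) and len(values) == len(weights)
    return float(values @ weights)

if __name__ == "__main__":
    # Hypothetical payoffs of one asset under four uncertain states (no probabilities).
    payoffs = [0.08, -0.02, 0.15, 0.01]
    optimistic = [0.7, 0.2, 0.1, 0.0]    # weight mass on the best outcomes
    pessimistic = [0.0, 0.1, 0.2, 0.7]   # weight mass on the worst outcomes
    print("OWA (optimistic): ", round(owa(payoffs, optimistic), 4))
    print("OWA (pessimistic):", round(owa(payoffs, pessimistic), 4))
```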

  10. Suicide attempts, platelet monoamine oxidase and the average evoked response

    International Nuclear Information System (INIS)

    The relationship between suicides and suicide attempts and two biological measures, platelet monoamine oxidase levels (MAO) and average evoked response (AER) augmenting was examined in 79 off-medication psychiatric patients and in 68 college student volunteers chosen from the upper and lower deciles of MAO activity levels. In the patient sample, male individuals with low MAO and AER augmenting, a pattern previously associated with bipolar affective disorders, showed a significantly increased incidence of suicide attempts in comparison with either non-augmenting low MAO or high MAO patients. Within the normal volunteer group, all male low MAO probands with a family history of suicide or suicide attempts were AER augmenters themselves. Four completed suicides were found among relatives of low MAO probands whereas no high MAO proband had a relative who committed suicide. These findings suggest that the combination of low platelet MAO activity and AER augmenting may be associated with a possible genetic vulnerability to psychiatric disorders. (author)

  11. Average dimension of fixed point spaces with applications

    CERN Document Server

    Guralnick, Robert M

    2010-01-01

    Let $G$ be a finite group, $F$ a field, and $V$ a finite dimensional $FG$-module such that $G$ has no trivial composition factor on $V$. Then the arithmetic average dimension of the fixed point spaces of elements of $G$ on $V$ is at most $(1/p) \dim V$, where $p$ is the smallest prime divisor of the order of $G$. This answers and generalizes a 1966 conjecture of Neumann which also appeared in a paper of Neumann and Vaughan-Lee and also as a problem in The Kourovka Notebook posted by Vaughan-Lee. Our result also generalizes a recent theorem of Isaacs, Keller, Meierfrankenfeld, and Moretó. Various applications are given. For example, another conjecture of Neumann and Vaughan-Lee is proven and some results of Segal and Shalev are improved and/or generalized concerning BFC groups.
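
    A small numerical check of the bound for a concrete case (not taken from the paper): S3 acting on its 2-dimensional standard representation over the rationals. This module has no trivial composition factor, and the smallest prime divisor of |S3| = 6 is p = 2, so the theorem predicts an average fixed-space dimension of at most (1/2)·2 = 1; the computed average is 5/6.

```python
import itertools
import numpy as np

n = 3
perms = list(itertools.permutations(range(n)))   # the 6 elements of S3

# Basis of the standard representation: the sum-zero subspace of R^3.
B = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, -1.0]])
B_pinv = np.linalg.pinv(B)

dims = []
for perm in perms:
    P = np.eye(n)[list(perm)]          # 3x3 permutation matrix
    M = B_pinv @ P @ B                 # action restricted to the invariant subspace
    fixed_dim = 2 - np.linalg.matrix_rank(M - np.eye(2))
    dims.append(int(fixed_dim))

average = sum(dims) / len(dims)
print("fixed-space dimensions:", dims)             # [2, 1, 1, 0, 0, 1]
print("average =", average, "<= bound", 0.5 * 2)   # 5/6 <= 1
```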

  12. Spatial Games Based on Pursuing the Highest Average Payoff

    Institute of Scientific and Technical Information of China (English)

    YANG Han-Xin; WANG Bing-Hong; WANG Wen-Xu; RONG Zhi-Hai

    2008-01-01

    We propose a strategy updating mechanism based on pursuing the highest average payoff to investigate the prisoner's dilemma game and the snowdrift game. We apply the new rule to investigate cooperative behaviours on regular, small-world and scale-free networks, and find that spatial structure can maintain cooperation for the prisoner's dilemma game. In the snowdrift game, spatial structure can inhibit or promote cooperative behaviour, depending on the payoff parameter. We further study cooperative behaviour on scale-free networks in detail. Interestingly, non-monotonic behaviour is observed on scale-free networks, where middle-degree individuals have the lowest cooperation level. We also find that large-degree individuals change their strategies more frequently for both games.

  13. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to apply a mean-reverting Ornstein-Uhlenbeck process describing the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city, and slow convergence of the HDD call price is observed even with 100,000 simulations. The methods of this research provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
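
    A compressed sketch of the modelling chain described above: simulate a mean-reverting (Ornstein-Uhlenbeck-type) daily temperature around a seasonal mean, accumulate heating degree days over a contract period, and price an HDD call by Monte Carlo. All parameter values below (seasonal mean, mean-reversion speed, volatility, strike, tick size, discount rate, contract length) are placeholders for illustration, not the fitted Zhengzhou estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- assumed model parameters (placeholders, not fitted values) ---
THETA = 0.25        # mean-reversion speed (1/day)
SIGMA = 3.0         # daily volatility (deg C)
T_REF = 18.0        # HDD reference temperature (deg C)
STRIKE = 400.0      # strike in HDDs
TICK = 100.0        # payoff per HDD above strike (currency units)
R = 0.03            # annual risk-free rate
DAYS = 90           # contract length (winter period)
N_PATHS = 100_000

def seasonal_mean(day):
    """Assumed sinusoidal seasonal mean temperature over the contract window."""
    return 3.0 + 8.0 * np.sin(2.0 * np.pi * (day + 300) / 365.0)

def simulate_hdd(n_paths):
    hdd = np.zeros(n_paths)
    temp = np.full(n_paths, seasonal_mean(0))
    for day in range(DAYS):
        # Euler step of the mean-reverting temperature dynamics
        temp += THETA * (seasonal_mean(day) - temp) + SIGMA * rng.standard_normal(n_paths)
        hdd += np.maximum(T_REF - temp, 0.0)
    return hdd

hdd = simulate_hdd(N_PATHS)
payoff = TICK * np.maximum(hdd - STRIKE, 0.0)
discount = np.exp(-R * DAYS / 365.0)
price = discount * payoff.mean()
stderr = discount * payoff.std(ddof=1) / np.sqrt(N_PATHS)
print(f"HDD call price ~ {price:,.0f} +/- {stderr:,.0f} (Monte Carlo std. error)")
```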

  14. Angular averaged consistency relations of large-scale structures

    CERN Document Server

    Valageas, Patrick

    2013-01-01

    The cosmological dynamics of gravitational clustering satisfies an approximate invariance with respect to the cosmological parameters that is often used to simplify analytical computations. We describe how this approximate symmetry gives rise to angular averaged consistency relations for the matter density correlations. This allows one to write the $(\\ell+n)$ density correlation, with $\\ell$ large-scale linear wave numbers that are integrated over angles, and $n$ fixed small-scale nonlinear wave numbers, in terms of the small-scale $n$-point density correlation and $\\ell$ prefactors that involve the linear power spectra at the large-scale wave numbers. These relations, which do not vanish for equal-time statistics, go beyond the already known kinematic consistency relations. They could be used to detect primordial non-Gaussianities, modifications of gravity, limitations of galaxy biasing schemes, or to help designing analytical models of gravitational clustering.

  15. Quantum gravity unification via transfinite arithmetic and geometrical averaging

    International Nuclear Information System (INIS)

    In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε(∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical values of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε(∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68

  16. A coefficient average approximation towards Gutzwiller wavefunction formalism

    Science.gov (United States)

    Liu, Jun; Yao, Yongxin; Wang, Cai-Zhuang; Ho, Kai-Ming

    2015-06-01

    Gutzwiller wavefunction is a physically well-motivated trial wavefunction for describing correlated electron systems. In this work, a new approximation is introduced to facilitate the evaluation of the expectation value of any operator within the Gutzwiller wavefunction formalism. The basic idea is to make use of a specially designed average over Gutzwiller wavefunction coefficients expanded in the many-body Fock space to approximate the ratio of expectation values between a Gutzwiller wavefunction and its underlying noninteracting wavefunction. To compare with the standard Gutzwiller approximation (GA), we test its performance on single-band systems and find quite interesting properties. On finite systems, we notice that it gives superior performance over GA, while on infinite systems it asymptotically approaches GA. Analytic analysis together with numerical tests is provided to support this claimed asymptotic behavior. Finally, possible improvements on the approximation and its generalization towards multiband systems are illustrated and discussed.

  17. Risk-sensitive reinforcement learning algorithms with generalized average criterion

    Institute of Scientific and Technical Information of China (English)

    YIN Chang-ming; WANG Han-xing; ZHAO Fei

    2007-01-01

    A new algorithm is proposed, which potentially sacrifices the optimality of control policies in order to obtain robustness of the solutions. Robustness of solutions may become a very important property for a learning system when there is a mismatch between theoretical models and the practical physical system, when the practical system is not static, or when the availability of a control action changes over time. The main contribution is that a set of approximation algorithms and their convergence results are given. A generalized average operator, instead of the usual optimal operator max (or min), is applied to study a class of important learning algorithms, dynamic programming algorithms, and to discuss their convergence from a theoretical point of view. The purpose of this research is to improve the robustness of reinforcement learning algorithms theoretically.

  18. Rapidity dependence of the average transverse momentum in hadronic collisions

    Science.gov (United States)

    Durães, F. O.; Giannini, A. V.; Gonçalves, V. P.; Navarra, F. S.

    2016-08-01

    The energy and rapidity dependence of the average transverse momentum ⟨pT⟩ in pp and pA collisions at energies currently available at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC) are estimated using the color glass condensate (CGC) formalism. We update previous predictions for the pT spectra using the hybrid formalism of the CGC approach and two phenomenological models for the dipole-target scattering amplitude. We demonstrate that these models are able to describe the RHIC and LHC data for hadron production in pp, dAu, and pPb collisions at pT ≤ 20 GeV. Moreover, we present our predictions for ⟨pT⟩ and demonstrate that its ratio to the midrapidity value decreases with rapidity and has a behavior similar to that predicted by hydrodynamical calculations.

  19. Quantitative metagenomic analyses based on average genome size normalization

    DEFF Research Database (Denmark)

    Frank, Jeremy Alexander; Sørensen, Søren Johannes

    2011-01-01

    Over the past quarter-century, microbiologists have used DNA sequence information to aid in the characterization of microbial communities. During the last decade, this has expanded from single genes to microbial community genomics, or metagenomics, in which the gene content of an environment can provide not just a census of the community members but direct information on metabolic capabilities and potential interactions among community members. Here we introduce a method for the quantitative characterization and comparison of microbial communities based on the normalization of metagenomic data by estimating average genome sizes. This normalization can relieve comparative biases introduced by differences in community structure, number of sequencing reads, and sequencing read lengths between different metagenomes. We demonstrate the utility of this approach by comparing metagenomes from two different...
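
    The normalization idea can be written down in a few lines: convert each metagenome's raw gene hits into hits per genome equivalent, where the number of genome equivalents is the total sequenced base pairs divided by the estimated average genome size of that community. The sketch below uses invented numbers and a hypothetical gene family purely to show the arithmetic; it is not the authors' pipeline.

```python
# Hypothetical metagenomes: total sequenced bases, estimated average genome size (bp),
# and raw read hits to some gene family of interest (all values invented).
metagenomes = {
    "sample_A": {"total_bp": 2.0e9, "avg_genome_size_bp": 3.0e6, "gene_hits": 480},
    "sample_B": {"total_bp": 1.2e9, "avg_genome_size_bp": 6.0e6, "gene_hits": 310},
}

def hits_per_genome_equivalent(m):
    """Normalize raw gene counts by the estimated number of genomes sequenced."""
    genome_equivalents = m["total_bp"] / m["avg_genome_size_bp"]
    return m["gene_hits"] / genome_equivalents

for name, m in metagenomes.items():
    genome_equivalents = m["total_bp"] / m["avg_genome_size_bp"]
    # Note how the ranking can flip relative to the raw hit counts once the
    # differing genome sizes and sequencing depths are accounted for.
    print(f"{name}: {genome_equivalents:,.0f} genome equivalents, "
          f"{hits_per_genome_equivalent(m):.3f} hits per genome")
```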

  20. An Exponentially Weighted Moving Average Control Chart for Bernoulli Data

    DEFF Research Database (Denmark)

    Spliid, Henrik

    2010-01-01

    We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services, provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures in combination with a variance-stabilizing transformation. The distribution function ... are described and a practical example is given. It is demonstrated how the method communicates the current failure probability in a direct and interpretable way, which makes it well suited for surveillance of a great variety of activities in industry or in the service sector such as in hospitals, for example...
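
    To make the construction concrete, the sketch below turns a 0/1 failure stream into counts of non-failures between consecutive failures, applies a variance-stabilizing transform, and tracks the transformed counts with the standard EWMA recursion z_t = λ x_t + (1 − λ) z_{t−1}. The log transform, the value of λ and the choice of starting value are illustrative assumptions; the article's specific transformation and control limits may differ.

```python
import numpy as np

def counts_between_failures(units):
    """units: iterable of 0 (success) / 1 (failure). Returns the number of
    successes observed before each failure."""
    counts, run = [], 0
    for u in units:
        if u == 1:
            counts.append(run)
            run = 0
        else:
            run += 1
    return counts

def ewma(values, lam=0.1, z0=None):
    """Standard EWMA recursion z_t = lam * x_t + (1 - lam) * z_{t-1}."""
    z = values[0] if z0 is None else z0
    out = []
    for x in values:
        z = lam * x + (1.0 - lam) * z
        out.append(z)
    return np.array(out)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    p_in, p_out = 0.01, 0.03   # in-control and shifted failure probabilities (assumed)
    stream = np.concatenate([rng.random(20_000) < p_in,
                             rng.random(20_000) < p_out]).astype(int)
    counts = np.array(counts_between_failures(stream))
    x = np.log1p(counts)       # illustrative variance-stabilizing transform
    z = ewma(x, lam=0.1, z0=np.log1p(1 / p_in))
    # A drop of the EWMA below its in-control level signals a higher failure rate.
    print("EWMA near start / at end:", round(z[50], 2), "/", round(z[-1], 2))
```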

  1. Estimation of the average visibility in central Europe

    Science.gov (United States)

    Horvath, Helmuth

    Visibility has been obtained from spectral extinction coefficients measured with the University of Vienna Telephotometer or from size distributions determined with an Aerosol Spectrometer. By measuring the extinction coefficient in different directions, possible influences of local sources could be determined easily. A region undisturbed by local sources usually had a variation of the extinction coefficient of less than 10% in different directions. Generally good visibility outside population centers in Europe is considered to be 40-50 km. These values have been found to be independent of the location in central Europe; thus they represent the average European "clean" air. On rare occasions (normally a rapid change of air mass) the visibility can be 100-150 km. In towns, the visibility is a factor of approximately 2 lower. In comparison to this, the visibility in remote regions of North and South America is larger by a factor of 2-4. Obviously the lower visibility in Europe is caused by its higher population density. Since the majority of visibility-reducing particulate emissions come from small sources such as cars or heating, the emissions per unit area can be considered proportional to the population density. Using a simple box model and the visibility measured in central Europe and in Vienna, the difference in visibility inside and outside the town can be explained quantitatively. It is thus confirmed that the generally low visibility in central Europe is a consequence of the emissions connected with human activities, and that the low visibility (compared, e.g., to North or South America) in remote locations such as the Alps is caused by the average European pollution.

  2. The imprint of stratospheric transport on column-averaged methane

    Directory of Open Access Journals (Sweden)

    A. Ostler

    2015-07-01

    Model simulations of column-averaged methane mixing ratios (XCH4) are extensively used for inverse estimates of methane (CH4) emissions from atmospheric measurements. Our study shows that virtually all chemical transport models (CTMs) used for this purpose are affected by stratospheric model-transport errors. We quantify the impact of such model-transport errors on the simulation of stratospheric CH4 concentrations via an a posteriori correction method. This approach compares measurements of the mean age of air with modeled age and expresses the difference in terms of a correction to modeled stratospheric CH4 mixing ratios. We find that age differences of up to ~3 years yield a bias in simulated CH4 of up to 250 parts per billion (ppb). Comparisons between model simulations and ground-based XCH4 observations from the Total Carbon Column Observing Network (TCCON) reveal that stratospheric model-transport errors cause biases in XCH4 of ~20 ppb in the midlatitudes and ~27 ppb in the Arctic region. Improved overall as well as seasonal model-observation agreement in XCH4 suggests that the proposed age-of-air-based stratospheric correction is reasonable. The latitudinal model bias in XCH4 is expected to reduce the accuracy of inverse estimates using satellite-derived XCH4 data. Therefore, we provide an estimate of the impact of stratospheric model-transport errors in terms of CH4 flux errors. Using a one-box approximation, we show that average model errors in stratospheric transport correspond to an overestimation of CH4 emissions by ~40% (~7 Tg yr−1) for the Arctic, ~5% (~7 Tg yr−1) for the northern, and ~60% (~7 Tg yr−1) for the southern hemispheric midlatitude region. We conclude that improved modeling of stratospheric transport is highly desirable for the joint use with atmospheric XCH4 observations in atmospheric inversions.

  3. Ultrafast green laser exceeding 400 W of average power

    Science.gov (United States)

    Gronloh, Bastian; Russbueldt, Peter; Jungbluth, Bernd; Hoffmann, Hans-Dieter

    2014-05-01

    We present the world's first laser at 515 nm with sub-picosecond pulses and an average power of 445 W. To realize this beam source we utilize an Yb:YAG-based infrared laser consisting of a fiber MOPA system as a seed source, a rod-type pre-amplifier and two Innoslab power amplifier stages. The infrared system delivers up to 930 W of average power at repetition rates between 10 and 50 MHz and with pulse durations around 800 fs. The beam quality in the infrared is M2 = 1.1 and 1.5 in fast and slow axis. As a frequency doubler we chose a Type-I critically phase-matched Lithium Triborate (LBO) crystal in a single-pass configuration. To preserve the infrared beam quality and pulse duration, the conversion was carefully modeled using numerical calculations. These take dispersion-related and thermal effects into account, thus enabling us to provide precise predictions of the properties of the frequency-doubled beam. To be able to model the influence of thermal dephasing correctly and to choose appropriate crystals accordingly, we performed extensive absorption measurements of all crystals used for conversion experiments. These measurements provide the input data for the thermal FEM analysis and calculation. We used a Photothermal Commonpath Interferometer (PCI) to obtain space-resolved absorption data in the bulk and at the surfaces of the LBO crystals. The absorption was measured at 1030 nm as well as at 515 nm in order to take into account the different absorption behavior at both occurring wavelengths.

  4. The average crossing number of equilateral random polygons

    Science.gov (United States)

    Diao, Y.; Dobay, A.; Kusner, R. B.; Millett, K.; Stasiak, A.

    2003-11-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16) n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN(K)⟩ for each knot type K can be described by a function of the form ⟨ACN(K)⟩ = a(n − n0) ln(n − n0) + b(n − n0) + c, where a, b and c are constants depending on K and n0 is the minimal number of segments required to form K. The ⟨ACN(K)⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN(K)⟩ than less complex knots. Moreover, the ⟨ACN(K)⟩ profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K′) does not show a tendency to increase or decrease ⟨ACN(K′)⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.
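
    The asymptotic form above can be probed with a crude Monte Carlo: generate equilateral random walks, project them onto a few random planes, count pairwise segment crossings in each projection, and compare the average with (3/16) n ln n. The sketch below ignores the O(n) correction, uses only a handful of samples and projections, and tests non-adjacent segments only, so the comparison should be read as order-of-magnitude rather than a precise check of the theorem.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_equilateral_walk(n):
    """n unit steps with directions uniform on the sphere."""
    v = rng.standard_normal((n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return np.vstack([np.zeros(3), np.cumsum(v, axis=0)])

def crossings_in_projection(points2d):
    """Count crossings between non-adjacent segments of a planar polyline."""
    def ccw(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    segs = list(zip(points2d[:-1], points2d[1:]))
    count = 0
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):          # skip adjacent segments
            a, b = segs[i]
            c, d = segs[j]
            if ccw(a, b, c) * ccw(a, b, d) < 0 and ccw(c, d, a) * ccw(c, d, b) < 0:
                count += 1
    return count

def estimated_acn(n, walks=20, projections=10):
    total = 0
    for _ in range(walks):
        pts = random_equilateral_walk(n)
        for _ in range(projections):
            # a random orthonormal pair of directions defines the projection plane
            q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
            total += crossings_in_projection(pts @ q[:, :2])
    return total / (walks * projections)

n = 100
print("Monte Carlo ACN estimate :", round(estimated_acn(n), 1))
print("(3/16) n ln n            :", round(3 / 16 * n * np.log(n), 1))
```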

  5. Average glandular dose in digital mammography and breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie

    2012-10-15

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2D imaging mode) and in breast tomosynthesis (3D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2D mammograms and 984 mammograms in 3D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0% and 17.4% for RCC and LCC projections, respectively. The mean AGD values in 2D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant correlation was found between the AGD and CBT in 2D imaging mode, whereas a good correlation (coefficient of 0.98) was found in 3D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3D imaging mode was on average 34% higher than in 2D imaging mode for patients examined with the same CBT.

  6. Perceptual learning in Williams syndrome: looking beyond averages.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline) and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of the poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between the factors determining initial performance and perceptual learning.

  7. Spatially-Averaged Diffusivities for Pollutant Transport in Vegetated Flows

    Science.gov (United States)

    Huang, Jun; Zhang, Xiaofeng; Chua, Vivien P.

    2016-06-01

    Vegetation in wetlands can create complicated flow patterns and may provide many environmental benefits including water purification, flood protection and shoreline stabilization. The interaction between vegetation and flow has significant impacts on the transport of pollutants, nutrients and sediments. In this paper, we investigate pollutant transport in vegetated flows using the Delft3D-FLOW hydrodynamic software. The model simulates the transport of pollutants with the continuous release of a passive tracer at mid-depth and mid-width in the region where the flow is fully developed. The theoretical Gaussian plume profile is fitted to experimental data, and the lateral and vertical diffusivities are computed using the least squares method. In previous tracer studies conducted in the laboratory, the measurements were obtained at a single cross-section as experimental data is typically collected at one location. These diffusivities are then used to represent spatially-averaged values. With the numerical model, sensitivity analysis of lateral and vertical diffusivities along the longitudinal direction was performed at 8 cross-sections. Our results show that the lateral and vertical diffusivities increase with longitudinal distance from the injection point, due to the larger size of the dye cloud further downstream. A new method is proposed to compute diffusivities using a global minimum least squares method, which provides a more reliable estimate than the values obtained using the conventional method.
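
    The fitting step can be illustrated with synthetic data: sample a lateral Gaussian concentration profile at one cross-section, fit it by least squares, and recover the lateral diffusivity from the fitted variance via sigma_y^2 = 2 D_y x / u. The plume model, flow speed, distance and noise level below are assumptions for the demonstration; the paper's "global" variant pools several cross-sections into a single minimization rather than fitting each section separately.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

U, X, DY_TRUE = 0.05, 4.0, 2.0e-4   # mean velocity (m/s), distance (m), true Dy (m^2/s)
sigma_true = np.sqrt(2.0 * DY_TRUE * X / U)

def gaussian(y, amp, y0, sigma):
    return amp * np.exp(-((y - y0) ** 2) / (2.0 * sigma ** 2))

# Synthetic lateral concentration profile with measurement noise
y = np.linspace(-1.0, 1.0, 41)
c = gaussian(y, 1.0, 0.0, sigma_true) + 0.02 * rng.standard_normal(y.size)

# Least-squares fit of the Gaussian plume profile at this cross-section
popt, _ = curve_fit(gaussian, y, c, p0=(1.0, 0.0, 0.1))
sigma_fit = abs(popt[2])
dy_fit = sigma_fit ** 2 * U / (2.0 * X)

print(f"true Dy = {DY_TRUE:.2e} m^2/s, fitted Dy = {dy_fit:.2e} m^2/s")
```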

  8. Orientation-averaged optical properties of natural aerosol aggregates

    International Nuclear Information System (INIS)

    Orientation-averaged optical properties of natural aerosol aggregates were analyzed by using the discrete dipole approximation (DDA) for effective radii in the range of 0.01 to 2 μm, with corresponding size parameters from 0.1 to 23 at a wavelength of 0.55 μm. Effects of the composition and morphology on the optical properties were also investigated. The composition shows a small influence on the extinction-efficiency factor in the Mie scattering region and on the scattering- and backscattering-efficiency factors. The extinction-efficiency factor for size parameters from 9 to 23 and the asymmetry factor for size parameters below 2.3 are almost independent of the natural aerosol composition. The extinction-, absorption-, scattering-, and backscattering-efficiency factors for size parameters below 0.7 are independent of the aggregate morphology. The intrinsic symmetry and the discontinuity of the normal direction of the particle surface have obvious effects on the scattering properties for size parameters above 4.6. Furthermore, the scattering phase functions of natural aerosol aggregates are enhanced in the backscattering direction (opposition effect) for large size parameters in the range of Mie scattering. (authors)

  9. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
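
    As a reference point for the comparison above, the sketch below implements the textbook EM iteration for a Gaussian BMA mixture: each member forecast defines a Gaussian kernel, the E-step computes the responsibility of each member for each verifying observation, and the M-step updates the weights and a common kernel variance. The synthetic forecasts and observations are invented, and this is the plain EM variant, not the DREAM-based MCMC scheme the paper proposes.

```python
import numpy as np

def bma_em(forecasts, obs, n_iter=200):
    """forecasts: (n_obs, n_models) bias-corrected member forecasts; obs: (n_obs,).
    Returns BMA weights and the common Gaussian kernel variance."""
    n_obs, n_models = forecasts.shape
    w = np.full(n_models, 1.0 / n_models)
    var = np.var(obs - forecasts.mean(axis=1))
    for _ in range(n_iter):
        # E-step: responsibility of each member for each observation
        dens = np.exp(-0.5 * (obs[:, None] - forecasts) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights and the common variance
        w = resp.mean(axis=0)
        var = np.sum(resp * (obs[:, None] - forecasts) ** 2) / n_obs
    return w, var

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    truth = rng.normal(15.0, 4.0, size=500)                        # verifying observations
    members = np.column_stack([truth + rng.normal(0.0, 1.0, 500),  # sharp member
                               truth + rng.normal(0.0, 2.5, 500),  # mediocre member
                               rng.normal(15.0, 4.0, 500)])        # uninformative member
    w, var = bma_em(members, truth)
    print("BMA weights:", np.round(w, 3), " kernel variance:", round(float(var), 2))
```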

  10. Understanding Stokes forces in the wave-averaged equations

    Science.gov (United States)

    Suzuki, Nobuhiro; Fox-Kemper, Baylor

    2016-05-01

    The wave-averaged, or Craik-Leibovich, equations describe the dynamics of upper ocean flow interacting with nonbreaking, not steep, surface gravity waves. This paper formulates the wave effects in these equations in terms of three contributions to momentum: Stokes advection, Stokes Coriolis force, and Stokes shear force. Each contribution scales with a distinctive parameter. Moreover, these contributions affect the turbulence energetics differently from each other such that the classification of instabilities is possible accordingly. Stokes advection transfers energy between turbulence and Eulerian mean-flow kinetic energy, and its form also parallels the advection of tracers such as salinity, buoyancy, and potential vorticity. Stokes shear force transfers energy between turbulence and surface waves. The Stokes Coriolis force can also transfer energy between turbulence and waves, but this occurs only if the Stokes drift fluctuates. Furthermore, this formulation elucidates the unique nature of Stokes shear force and also allows direct comparison of Stokes shear force with buoyancy. As a result, the classic Langmuir instabilities of Craik and Leibovich, wave-balanced fronts and filaments, Stokes perturbations of symmetric and geostrophic instabilities, the wavy Ekman layer, and the wavy hydrostatic balance are framed in terms of intuitive physical balances.

  11. Average snowcover density values in Eastern Alps mountain

    Science.gov (United States)

    Valt, M.; Moro, D.

    2009-04-01

    The Italian Avalanche Warning Services monitor the snow cover characteristics through networks evenly distributed all over the alpine chain. Measurements of snow stratigraphy and density are performed frequently, with sampling rates of 1-2 times per week. Snow cover density values are used to compute the dimensions of building roofs as well as to design avalanche barriers. Based on the measured snow densities, the Electricity Board can predict the amount of water resources deriving from snow melt in high-relief drainage basins. In this work it was possible to compute characteristic density values of the snow cover in the Eastern Alps using the information contained in the database of the ARPA (Agenzia Regionale Protezione Ambiente) - Centro Valanghe di Arabba and the Ufficio Valanghe - Udine. Among other things, this database includes 15 years of stratigraphic measurements. More than 6,000 snow stratigraphic logs were analysed in order to derive typical values by geographical area, altitude, exposure, snow cover thickness and season. Computed values were compared to those established by the current Italian laws. Finally, experts identified and evaluated the correlations between the seasonal variations of the average snow density and the variations related to the snowfall rate in the period 1994-2008 in the Eastern Alps mountain range.

  12. Potential of high-average-power solid state lasers

    International Nuclear Information System (INIS)

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels

  13. Analytic continuation by averaging Padé approximants

    Science.gov (United States)

    Schött, Johan; Locht, Inka L. M.; Lundin, Elin; Grânäs, Oscar; Eriksson, Olle; Di Marco, Igor

    2016-02-01

    The ill-posed analytic continuation problem for Green's functions and self-energies is investigated by revisiting the Padé approximants technique. We propose to remedy the well-known problems of the Padé approximants by performing an average of several continuations, obtained by varying the number of fitted input points and Padé coefficients independently. The suggested approach is then applied to several test cases, including Sm and Pr atomic self-energies, the Green's functions of the Hubbard model for a Bethe lattice and of the Haldane model for a nanoribbon, as well as two special test functions. The sensitivity to numerical noise and the dependence on the precision of the numerical libraries are analyzed in detail. The present approach is compared to a number of other techniques, i.e., the nonnegative least-squares method, the nonnegative Tikhonov method, and the maximum entropy method, and is shown to perform well for the chosen test cases. This conclusion holds even when the noise on the input data is increased to reach values typical for quantum Monte Carlo simulations. The ability of the algorithm to resolve fine structures is finally illustrated for two relevant test functions.
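
    A bare-bones version of the averaging idea, using the commonly cited continued-fraction (Vidberg-Serene) construction of the Padé approximant from Matsubara-frequency data: build one approximant per choice of the number of fitted input points, evaluate each just above the real axis, and average the results. The test Green's function (Bethe-lattice semicircle), the frequency grids, the small broadening and the set of point counts are choices made for this sketch and may differ from the paper's implementation; in double precision the recursion is numerically delicate, and production codes often use higher-precision arithmetic.

```python
import numpy as np

def pade_coefficients(z, u):
    """Continued-fraction coefficients (Vidberg-Serene recursion) for data u(z)."""
    n = len(z)
    g = np.zeros((n, n), dtype=complex)
    g[0, :] = u
    for p in range(1, n):
        g[p, p:] = (g[p - 1, p - 1] - g[p - 1, p:]) / ((z[p:] - z[p - 1]) * g[p - 1, p:])
    return np.diag(g).copy()

def pade_eval(z_in, a, zz):
    """Evaluate the continued fraction with coefficients a at the points zz."""
    A_prev, A = np.zeros_like(zz), np.full_like(zz, a[0])
    B_prev, B = np.ones_like(zz), np.ones_like(zz)
    for i in range(1, len(a)):
        A, A_prev = A + (zz - z_in[i - 1]) * a[i] * A_prev, A
        B, B_prev = B + (zz - z_in[i - 1]) * a[i] * B_prev, B
    return A / B

# Test input: Bethe-lattice (semicircular) Green's function on Matsubara frequencies.
beta = 10.0
iw = 1j * np.pi * (2 * np.arange(40) + 1) / beta
G_iw = 2.0 * (iw - np.sqrt(iw ** 2 - 1.0))

# One continuation per choice of the number of fitted input points, then average.
w = np.linspace(-2.0, 2.0, 401) + 1e-2j
continuations = [pade_eval(iw[:n], pade_coefficients(iw[:n], G_iw[:n]), w)
                 for n in (12, 16, 20, 24)]
G_avg = np.mean(continuations, axis=0)

# For this test the spectral function -Im G / pi should be close to the semicircle,
# i.e. roughly 2/pi ~ 0.64 at w = 0.
print("averaged spectral function at w = 0:", round(float(-G_avg.imag[200] / np.pi), 3))
```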

  14. Determination of the average lifetime of b-baryons

    CERN Document Server

    Abreu, P; Adye, T; Agasi, E; Ajinenko, I; Aleksan, Roy; Alekseev, G D; Alemany, R; Allport, P P; Almehed, S; Amaldi, Ugo; Amato, S; Andreazza, A; Andrieux, M L; Antilogus, P; Apel, W D; Arnoud, Y; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Bambade, P; Barão, F; Barate, R; Barbi, M S; Bardin, Dimitri Yuri; Baroncelli, A; Bärring, O; Barrio, J A; Bartl, Walter; Bates, M J; Battaglia, Marco; Baubillier, M; Baudot, J; Becks, K H; Begalli, M; Beillière, P; Belokopytov, Yu A; Belous, K S; Benvenuti, Alberto C; Berggren, M; Bertrand, D; Bianchi, F; Bigi, M; Bilenky, S M; Billoir, P; Bloch, D; Blume, M; Blyth, S; Bolognese, T; Bonesini, M; Bonivento, W; Booth, P S L; Borisov, G; Bosio, C; Bosworth, S; Botner, O; Boudinov, E; Bouquet, B; Bourdarios, C; Bowcock, T J V; Bozzo, M; Branchini, P; Brand, K D; Brenke, T; Brenner, R A; Bricman, C; Brillault, L; Brown, R C A; Brückman, P; Brunet, J M; Bugge, L; Buran, T; Burgsmüller, T; Buschmann, P; Buys, A; Cabrera, S; Caccia, M; Calvi, M; Camacho-Rozas, A J; Camporesi, T; Canale, V; Canepa, M; Cankocak, K; Cao, F; Carena, F; Carroll, L; Caso, Carlo; Castillo-Gimenez, M V; Cattai, A; Cavallo, F R; Cerrito, L; Chabaud, V; Chapkin, M M; Charpentier, P; Chaussard, L; Chauveau, J; Checchia, P; Chelkov, G A; Chen, M; Chierici, R; Chliapnikov, P V; Chochula, P; Chorowicz, V; Chudoba, J; Cindro, V; Collins, P; Contreras, J L; Contri, R; Cortina, E; Cosme, G; Cossutti, F; Crawley, H B; Crennell, D J; Crosetti, G; Cuevas-Maestro, J; Czellar, S; Dahl-Jensen, Erik; Dahm, J; D'Almagne, B; Dam, M; Damgaard, G; Dauncey, P D; Davenport, Martyn; Da Silva, W; Defoix, C; Deghorain, A; Della Ricca, G; Delpierre, P A; Demaria, N; De Angelis, A; de Boer, Wim; De Brabandere, S; De Clercq, C; La Vaissière, C de; De Lotto, B; De Min, A; De Paula, L S; De Saint-Jean, C; Dijkstra, H; Di Ciaccio, Lucia; Djama, F; Dolbeau, J; Dönszelmann, M; Doroba, K; Dracos, M; Drees, J; Drees, K A; Dris, M; Dufour, Y; Edsall, D M; Ehret, R; Eigen, G; Ekelöf, T J C; Ekspong, Gösta; Elsing, M; Engel, J P; Ershaidat, N; Erzen, B; Falk, E; Fassouliotis, D; Feindt, Michael; Ferrer, A; Filippas-Tassos, A; Firestone, A; Fischer, P A; Föth, H; Fokitis, E; Fontanelli, F; Formenti, F; Franek, B J; Frenkiel, P; Fries, D E C; Frodesen, A G; Frühwirth, R; Fulda-Quenzer, F; Fuster, J A; Galloni, A; Gamba, D; Gandelman, M; García, C; García, J; Gaspar, C; Gasparini, U; Gavillet, P; Gazis, E N; Gelé, D; Gerber, J P; Gerdyukov, L N; Gibbs, M; Gokieli, R; Golob, B; Gopal, Gian P; Gorn, L; Górski, M; Guz, Yu; Gracco, Valerio; Graziani, E; Grosdidier, G; Grzelak, K; Gumenyuk, S A; Gunnarsson, P; Günther, M; Guy, J; Hahn, F; Hahn, S; Hajduk, Z; Hallgren, A; Hamacher, K; Hao, W; Harris, F J; Hedberg, V; Henriques, R P; Hernández, J J; Herquet, P; Herr, H; Hessing, T L; Higón, E; Hilke, Hans Jürgen; Hill, T S; Holmgren, S O; Holt, P J; Holthuizen, D J; Hoorelbeke, S; Houlden, M A; Hrubec, Josef; Huet, K; Hultqvist, K; Jackson, J N; Jacobsson, R; Jalocha, P; Janik, R; Jarlskog, C; Jarlskog, G; Jarry, P; Jean-Marie, B; Johansson, E K; Jönsson, L B; Jönsson, P E; Joram, Christian; Juillot, P; Kaiser, M; Kapusta, F; Karafasoulis, K; Karlsson, M; Karvelas, E; Katsanevas, S; Katsoufis, E C; Keränen, R; Khokhlov, Yu A; Khomenko, B A; Khovanskii, N N; King, B J; Kjaer, N J; Klein, H; Klovning, A; Kluit, P M; Köne, B; Kokkinias, P; Koratzinos, M; Korcyl, K; Kourkoumelis, C; Kuznetsov, O; Kramer, P H; Krammer, Manfred; Kreuter, C; Kronkvist, I J; Krumshtein, Z; Krupinski, W; Kubinec, P; Kucewicz, W; Kurvinen, K 
L; Lacasta, C; Laktineh, I; Lamblot, S; Lamsa, J; Lanceri, L; Lane, D W; Langefeld, P; Last, I; Laugier, J P; Lauhakangas, R; Leder, Gerhard; Ledroit, F; Lefébure, V; Legan, C K; Leitner, R; Lemoigne, Y; Lemonne, J; Lenzen, Georg; Lepeltier, V; Lesiak, T; Liko, D; Lindner, R; Lipniacka, A; Lippi, I; Lörstad, B; Loken, J G; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J N; Maehlum, G; Maio, A; Malychev, V; Mandl, F; Marco, J; Marco, R P; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Maron, T; Martínez-Rivero, C; Martínez-Vidal, F; Martí i García, S; Masik, J; Matorras, F; Matteuzzi, C; Matthiae, Giorgio; Mazzucato, M; McCubbin, M L; McKay, R; McNulty, R; Medbo, J; Merk, M; Meroni, C; Meyer, S; Meyer, W T; Michelotto, M; Migliore, E; Mirabito, L; Mitaroff, Winfried A; Mjörnmark, U; Moa, T; Møller, R; Mönig, K; Monge, M R; Morettini, P; Müller, H; Mundim, L M; Murray, W J; Muryn, B; Myatt, Gerald; Naraghi, F; Navarria, Francesco Luigi; Navas, S; Nawrocki, K; Negri, P; Némécek, S; Neumann, W; Neumeister, N; Nicolaidou, R; Nielsen, B S; Nieuwenhuizen, M; Nikolaenko, V; Niss, P; Nomerotski, A; Normand, Ainsley; Oberschulte-Beckmann, W; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, Risto; Österberg, K; Ouraou, A; Paganini, P; Paganoni, M; Pagès, P; Palka, H; Papadopoulou, T D; Papageorgiou, K; Pape, L; Parkes, C; Parodi, F; Passeri, A; Pegoraro, M; Peralta, L; Pernegger, H; Perrotta, A; Petridou, C; Petrolini, A; Petrovykh, M; Phillips, H T; Piana, G; Pierre, F; Pimenta, M; Pindo, M; Plaszczynski, S; Podobrin, O; Pol, M E; Polok, G; Poropat, P; Pozdnyakov, V; Prest, M; Privitera, P; Pukhaeva, N; Pullia, Antonio; Radojicic, D; Ragazzi, S; Rahmani, H; Ratoff, P N; Read, A L; Reale, M; Rebecchi, P; Redaelli, N G; Regler, Meinhard; Reid, D; Renton, P B; Resvanis, L K; Richard, F; Richardson, J; Rídky, J; Rinaudo, G; Ripp, I; Romero, A; Roncagliolo, I; Ronchese, P; Roos, L; Rosenberg, E I; Rosso, E; Roudeau, Patrick; Rovelli, T; Rückstuhl, W; Ruhlmann-Kleider, V; Ruiz, A; Rybicki, K; Saarikko, H; Sacquin, Yu; Sadovskii, A; Sajot, G; Salt, J; Sánchez, J; Sannino, M; Schimmelpfennig, M; Schneider, H; Schwickerath, U; Schyns, M A E; Sciolla, G; Scuri, F; Seager, P; Sedykh, Yu; Segar, A M; Seitz, A; Sekulin, R L; Shellard, R C; Siccama, I; Siegrist, P; Simonetti, S; Simonetto, F; Sissakian, A N; Sitár, B; Skaali, T B; Smadja, G; Smirnov, N; Smirnova, O G; Smith, G R; Solovyanov, O; Sosnowski, R; Souza-Santos, D; Spassoff, Tz; Spiriti, E; Sponholz, P; Squarcia, S; Stanescu, C; Stapnes, Steinar; Stavitski, I; Stichelbaut, F; Stocchi, A; Strauss, J; Strub, R; Stugu, B; Szczekowski, M; Szeptycka, M; Tabarelli de Fatis, T; Tavernet, J P; Chikilev, O G; Tilquin, A; Timmermans, J; Tkatchev, L G; Todorov, T; Toet, D Z; Tomaradze, A G; Tomé, B; Tonazzo, A; Tortora, L; Tranströmer, G; Treille, D; Trischuk, W; Tristram, G; Trombini, A; Troncon, C; Tsirou, A L; Turluer, M L; Tyapkin, I A; Tyndel, M; Tzamarias, S; Überschär, B; Ullaland, O; Uvarov, V; Valenti, G; Vallazza, E; Van der Velde, C; van Apeldoorn, G W; van Dam, P; Van Doninck, W K; Van Eldik, J; Vassilopoulos, N; Vegni, G; Ventura, L; Venus, W A; Verbeure, F; Verlato, M; Vertogradov, L S; Vilanova, D; Vincent, P; Vitale, L; Vlasov, E; Vodopyanov, A S; Vrba, V; Wahlen, H; Walck, C; Waldner, F; Weierstall, M; Weilhammer, Peter; Weiser, C; Wetherell, Alan M; Wicke, D; Wickens, J H; Wielers, M; Wilkinson, G R; Williams, W S C; Winter, M; Witek, M; Woschnagg, K; Yip, K; Yushchenko, O P; Zach, F; Zaitsev, A; Zalewska-Bak, A; 
Zalewski, Piotr; Zavrtanik, D; Zevgolatakos, E; Zimin, N I; Zito, M; Zontar, D; Zuberi, R; Zucchelli, G C; Zumerle, G; Belokopytov, Yu; Charpentier, Ph; Gavillet, Ph; Gouz, Yu; Jarlskog, Ch; Khokhlov, Yu; Papadopoulou, Th D

    1996-01-01

    The average lifetime of b-baryons has been studied using 3 × 10^6 hadronic Z^0 decays collected by the DELPHI detector at LEP. Three methods have been used, based on the measurement of different observables: the proper decay time distribution of 206 vertices reconstructed with a Λ, a lepton and an oppositely charged pion; the impact parameter distribution of 441 muons with high transverse momentum accompanied by a Λ in the same jet; and the proper decay time distribution of 125 Λc-lepton decay vertices with the Λc exclusively reconstructed through its pKπ, pK^0 and Λ3π decay modes. The combined result is: τ(b-baryon) = (1.25 +0.13/-0.11 ± 0.04 (syst) +0.03/-0.05 (syst)) ps, where the first systematic error is due to experimental uncertainties and the second to the uncertainties in the modelling of the b-baryon production and semi-leptonic decay. Including the measurement recently published by DELPHI based on a sample of proton-m...

  15. A simple depth-averaged model for dry granular flow

    Science.gov (United States)

    Hung, Chi-Yao; Stark, Colin P.; Capart, Herve

    Granular flow over an erodible bed is an important phenomenon in both industrial and geophysical settings. Here we develop a depth-averaged theory for dry erosive flows using balance equations for mass, momentum and (crucially) kinetic energy. We assume a linearized GDR-Midi rheology for granular deformation and Coulomb friction along the sidewalls. The theory predicts the kinematic behavior of channelized flows under a variety of conditions, which we test in two sets of experiments: (1) a linear chute, where abrupt changes in tilt drive unsteady uniform flows; (2) a rotating drum, to explore steady non-uniform flow. The theoretical predictions match the experimental results well in all cases, without the need to tune parameters or invoke an ad hoc equation for entrainment at the base of the flow. Here we focus on the drum problem. A dimensionless rotation rate (related to Froude number) characterizes flow geometry and accounts not just for spin rate, drum radius and gravity, but also for grain size, wall friction and channel width. By incorporating Coriolis force the theory can treat behavior under centrifuge-induced enhanced gravity. We identify asymptotic flow regimes at low and high dimensionless rotation rates that exhibit distinct power-law scaling behaviors.

  16. High Brightness, High Average Current Injector Development at Cornell

    CERN Document Server

    Sinclair, C K

    2005-01-01

    Cornell University is constructing a 100 mA average current, high brightness electron injector for a planned Energy Recovery Linac (ERL) hard X-ray synchrotron radiation source. This injector will employ a very high voltage DC gun with a negative electron affinity photoemission cathode. Relatively long duration electron pulses from the photocathode will be drift bunched, and accelerated to 5-15 MeV with five two-cell, 1300 MHz superconducting cavities. The total beam power will be limited to 575 kW by the DC and RF power sources. A genetic algorithm based computational optimization of this injector has resulted in simulated rms normalized emittances of 0.1 mm-mrad at 80 pC/bunch, and 0.7 mm-mrad at 1 nC/bunch. The many technical issues and their design solutions will be discussed. Construction of the gun and the SRF cavities is well underway. The schedule for completion, and the planned measurements, will be presented.

  17. An evaluation of the average DMF in hemodialyzed patients

    Directory of Open Access Journals (Sweden)

    Arami S. Assistant Professor

    2003-07-01

    Statement of Problem: Rapid increases in the population of hemodialyzed patients require dentists to acquire a complete understanding of the special therapeutic considerations for such patients. Purpose: The goal of this research was to study the DMF index in hemodialyzed patients aged 12-20 years in the city of Tehran. Materials and Methods: In this cross-sectional and analytic-descriptive study, 50 kidney patients (27 males and 23 females) with an age range of 12-20 years were selected. They had been referred for hemodialysis to one of the following hospitals: Imam Khomeini, Children Medical Center, Fayyazbakhsh, Haft-e-Tir, Ashrafi Esfahani, Labafinejad and Hasheminejad. The data, based on clinical examination, patients' answers, patients' medical files and parents' replies, were collected and analyzed with the Chi-square test. Results: The average DMF for the patients under study was 2.46; compared to normal subjects of the society, no significant difference was observed. Factors such as sex, mother's education, oral hygiene and the number of daily brushings did not show any statistically significant difference for this index. The results also showed a 38% prevalence of severe gingivitis and 32% of moderate gingivitis. Conclusion: This limited study emphasizes the necessity of using proper preventive methods and of improving the patients' and parents' knowledge about oral and dental health.

  18. Model characteristics of average skill boxers’ competition functioning

    Directory of Open Access Journals (Sweden)

    Martsiv V.P.

    2015-08-01

    Purpose: to analyze the competition functioning of average-skill boxers. Material: 28 fights of student boxers were analyzed. The following coefficients were determined: effectiveness of punches and reliability of defense. The fights were conducted by the formula: 3 rounds of 3 minutes each. Results: model characteristics of boxers for the stage of specialized basic training were worked out. Correlations between indicators of specialized and general exercises were determined. It was established that boxers' sportsmanship manifests itself as an increase of punch density in a fight. It was also found that an increase of the punch-effectiveness coefficient results in an expansion of the arsenal of technical-tactical actions. The importance of considering standard specialized loads was confirmed. Conclusions: we recommend training means to be applied at this stage of training. On the basis of our previous research we give recommendations on the complex assessment of student athletes' skillfulness. Besides, we show approaches to improving different aspects of athletes' fitness.

  19. Multifractal detrended moving average analysis of global temperature records

    CERN Document Server

    Mali, Provash

    2015-01-01

    Long-range correlation and the multifractal nature of the global monthly mean temperature anomaly time series over the period 1850-2012 are studied in terms of the multifractal detrended moving average (MFDMA) method. We try to address the source(s) of multifractality in the time series by comparing the results derived from the actual series with those from a set of shuffled and surrogate series. It is seen that the newly developed MFDMA method predicts a multifractal structure of the temperature anomaly time series that is more or less similar to that observed by other multifractal methods. In our analysis the major contribution to multifractality in the temperature records is found to stem from long-range temporal correlation among the measurements; however, the contribution of the fat-tailed distribution of the records is not negligible. The results of the MFDMA analysis, which are found to depend upon the location of the detrending window, tend towards the observations of the multifractal detrended fl...
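
    The record describes the detrending-by-moving-average idea only at a high level; as a purely illustrative aid (not the authors' code), a minimal numpy sketch of the backward (theta = 0) MFDMA fluctuation function and the generalized Hurst exponents h(q) could look as follows. The scale and q grids are arbitrary choices of the caller.

        import numpy as np

        def mfdma_hurst(x, scales, q_values):
            """Backward (theta = 0) MFDMA sketch: returns h(q) for each q."""
            y = np.cumsum(x - np.mean(x))                  # profile of the series
            logF = np.zeros((len(q_values), len(scales)))
            for j, n in enumerate(scales):
                kernel = np.ones(n) / n                    # moving-average window
                trend = np.convolve(y, kernel, mode="valid")
                resid = y[n - 1:] - trend                  # detrended residual
                m = len(resid) // n
                seg = resid[: m * n].reshape(m, n)
                Fv = np.sqrt(np.mean(seg ** 2, axis=1))    # segment-wise RMS fluctuation
                for i, q in enumerate(q_values):
                    logF[i, j] = (np.mean(np.log(Fv)) if q == 0
                                  else np.log(np.mean(Fv ** q)) / q)
            # h(q) is the log-log slope of F_q(n) versus the scale n
            return [np.polyfit(np.log(scales), logF[i], 1)[0]
                    for i in range(len(q_values))]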

  20. The Average Size and Temperature Profile of Quasar Accretion Disks

    CERN Document Server

    Jiménez-Vicente, J; Kochanek, C S; Muñoz, J A; Motta, V; Falco, E; Mosquera, A M

    2014-01-01

    We use multi-wavelength microlensing measurements of a sample of 10 image pairs from 8 lensed quasars to study the structure of their accretion disks. By using spectroscopy or narrow band photometry we have been able to remove contamination from the weakly microlensed broad emission lines, extinction and any uncertainties in the large-scale macro magnification of the lens model. We determine a maximum likelihood estimate for the exponent of the size versus wavelength scaling ($r_s\\propto \\lambda^p$ corresponding to a disk temperature profile of $T\\propto r^{-1/p}$) of $p=0.75^{+0.2}_{-0.2}$, and a Bayesian estimate of $p=0.8\\pm0.2$, which are significantly smaller than the prediction of thin disk theory ($p=4/3$). We have also obtained a maximum likelihood estimate for the average quasar accretion disk size of $r_s=4.5^{+1.5}_{-1.2} $ lt-day at a rest frame wavelength of $\\lambda = 1026~{\\mathrm \\AA}$ for microlenses with a mean mass of $M=1 M_\\sun$, in agreement with previous results, and larger than expecte...

  1. MHD stability of torsatrons using the average method

    International Nuclear Information System (INIS)

    The stability of torsatrons is studied using the average method, or stellarator expansion. Attention is focused upon the Advanced Toroidal Fusion Device (ATF), an l = 2, 12 field period, moderate aspect ratio configuration which, through a combination of shear and toroidally induced magnetic well, is stable to ideal modes. Using the vertical field (VF) coil system of ATF it is possible to enhance this stability by shaping the plasma to control the rotational transform. The VF coils are also useful tools for exploring the stability boundaries of ATF. By shifting the plasma inward along the major radius, the magnetic well can be removed, leading to three types of long wavelength instabilities: (1) A free boundary ''edge mode'' occurs when the rotational transform at the plasma edge is just less than unity. This mode is stabilized by the placement of a conducting wall at 1.5 times the plasma radius. (2) A free boundary global kink mode is observed at high β. When either β is lowered or a conducting wall is placed at the plasma boundary, the global mode is suppressed, and (3) an interchange mode is observed instead. For this interchange mode, calculations of the second, third, etc., most unstable modes are used to understand the nature of the degeneracy breaking induced by toroidal effects. Thus, the ATF configuration is well chosen for the study of torsatron stability limits

  2. Declining average daily census. Part 2: Possible solutions.

    Science.gov (United States)

    Weil, T P

    1986-01-01

    Several possible solutions are available to hospitals experiencing a declining average daily census, including: Closure of some U.S. hospitals; Joint ventures between physicians and hospitals; Development of integrated and coordinated medical-fiscal-management information systems; Improvements in the hospital's short-term marketing strategy; Reduction of the facility's internal operation expenses; Vertical more than horizontal diversification to develop a multilevel (acute through home care) regional health care system with an alternative health care payment system that is a joint venture with the medical staff(s); Acquisition or management by a not-for-profit or investor-owned multihospital system (emphasis on horizontal versus vertical integration). Many reasons exist for an institution to choose the solution of developing a regional multilevel health care system rather than being part of a large, geographically scattered, multihospital system. Geographic proximity, lenders' preferences, service integration, management recruitment, and local remedies to a declining census all favor the regional system. More answers lie in emphasizing the basics of health care regionalization and focusing on vertical integration, including a prepayment plan, rather than stressing large multihospital systems with institutions in several states or selling out to the investor-owned groups.

  3. Tortuosity and the Averaging of Microvelocity Fields in Poroelasticity.

    Science.gov (United States)

    Souzanchi, M F; Cardoso, L; Cowin, S C

    2013-03-01

    The relationship between the macro- and microvelocity fields in a poroelastic representative volume element (RVE) has not been fully investigated. This relationship is considered to be a function of the tortuosity: a quantitative measure of the effect of the deviation of the pore fluid streamlines from straight (not tortuous) paths in fluid-saturated porous media. There are different expressions for tortuosity based on the deviation from straight pores, harmonic wave excitation, or from a kinetic energy loss analysis. The objective of the work presented is to determine the best expression for tortuosity of a multiply interconnected open pore architecture in anisotropic porous media. The procedures for averaging the pore microvelocity over the RVE of poroelastic media by Coussy and by Biot were reviewed as part of this study, and the significant connection between these two procedures was established. Success was achieved in identifying the Coussy kinetic energy loss in the pore fluid approach as the most attractive expression for the tortuosity of porous media based on pore fluid viscosity, porosity, and the pore architecture. The fabric tensor, a 3D measure of the architecture of pore structure, was introduced in the expression of the tortuosity tensor for anisotropic porous media. Practical considerations for the measurement of the key parameters in the models of Coussy and Biot are discussed. In this study, we used cancellous bone as an example of interconnected pores and as a motivator for this study, but the results achieved are much more general and have a far broader application than just to cancellous bone. PMID:24891725

  4. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.

    2015-08-01

    © 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rate of tip cells, mesenchyme cells, or component cell population exit rate). Our results predict that the developing kidney responds differently to loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.
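
    To make the branching rule concrete, here is a deliberately simplified toy sketch of such a spatially-averaged model. All growth rates, thresholds and initial values below are invented for illustration only; they are not the parameters fitted to the C57Bl6 data in the paper.

        def simulate_branching(tip_growth=0.05, mes_net_growth=-0.01,
                               cells_per_tip_threshold=200.0, mes_critical=1e3,
                               dt=0.1, t_max=600.0):
            """Toy spatially-averaged branching model: exponential growth of tip
            (epithelial) cells, net loss of mesenchyme, symmetric branching when
            the cell count per tip reaches a threshold, and cessation once the
            mesenchyme falls below a critical value. Illustrative rates only."""
            tips, tip_cells, mes, t = 1, 100.0, 1e5, 0.0
            while t < t_max and mes > mes_critical:
                tip_cells += dt * tip_growth * tip_cells   # epithelial tip growth
                mes += dt * mes_net_growth * mes           # mesenchyme turnover
                if tip_cells / tips >= cells_per_tip_threshold:
                    tips *= 2                              # symmetric branching event
                t += dt
            return tips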

  5. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression of the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
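
    The additivity requirement discussed above can be stated schematically. The display below is a plain volume-average rendering added only for orientation; the paper derives its upscaled heads from conservation of energy during volume integration, so its exact weighting may differ from this simple form:

        $\langle h\rangle_V = \frac{1}{V}\int_V \Big(\frac{p}{\rho g} + z\Big)\,\mathrm{d}V = \langle h_p\rangle_V + \langle z\rangle_V$

    i.e. the upscaled hydraulic head must remain the sum of the upscaled pressure head and the upscaled gravitational head if the superposition is to survive the change of scale.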

  6. Side chain conformational averaging in human dihydrofolate reductase.

    Science.gov (United States)

    Tuttle, Lisa M; Dyson, H Jane; Wright, Peter E

    2014-02-25

    The three-dimensional structures of the dihydrofolate reductase enzymes from Escherichia coli (ecDHFR or ecE) and Homo sapiens (hDHFR or hE) are very similar, despite a rather low level of sequence identity. Whereas the active site loops of ecDHFR undergo major conformational rearrangements during progression through the reaction cycle, hDHFR remains fixed in a closed loop conformation in all of its catalytic intermediates. To elucidate the structural and dynamic differences between the human and E. coli enzymes, we conducted a comprehensive analysis of side chain flexibility and dynamics in complexes of hDHFR that represent intermediates in the major catalytic cycle. Nuclear magnetic resonance relaxation dispersion experiments show that, in marked contrast to the functionally important motions that feature prominently in the catalytic intermediates of ecDHFR, millisecond time scale fluctuations cannot be detected for hDHFR side chains. Ligand flux in hDHFR is thought to be mediated by conformational changes between a hinge-open state when the substrate/product-binding pocket is vacant and a hinge-closed state when this pocket is occupied. Comparison of X-ray structures of hinge-open and hinge-closed states shows that helix αF changes position by sliding between the two states. Analysis of χ1 rotamer populations derived from measurements of (3)JCγCO and (3)JCγN couplings indicates that many of the side chains that contact helix αF exhibit rotamer averaging that may facilitate the conformational change. The χ1 rotamer adopted by the Phe31 side chain depends upon whether the active site contains the substrate or product. In the holoenzyme (the binary complex of hDHFR with reduced nicotinamide adenine dinucleotide phosphate), a combination of hinge opening and a change in the Phe31 χ1 rotamer opens the active site to facilitate entry of the substrate. Overall, the data suggest that, unlike ecDHFR, hDHFR requires minimal backbone conformational rearrangement as

  7. LENOS and BELINA Facilities for Measuring Maxwellian Averaged Cross Section

    International Nuclear Information System (INIS)

    Full text: The Laboratori Nazionali di Legnaro is one of the 5 laboratories of Instituto Nazionale di Fisica Nucleare (INFN), Italy; the one devoted to nuclear physics. The Lab has 4 accelerators: a 14 MV tandem, a 7 MV Van de Graaff, a 2 MV electrostatic and a superconductive linac. The electrostatic accelerators are able to accelerate ions up to Li, while the tandem and linac can accelerate heavy ions up to Ni. The lower-energy machines, namely the 7 MV (CN accelerator) and the 2 MV (AN2000 accelerator), are mostly devoted to nuclear physics applications. Within the CN accelerator, the neutron beam line for astrophysics (BELINA) is under development. The BELINA beam line will be devoted to the measurement of Maxwellian averaged cross sections at several stellar temperatures, using a new method to generate the Maxwell-Boltzmann neutron spectra developed within the framework of the LENOS project. The well-characterized BELINA neutron spectra can also be used for validation of evaluated data as requested by the IRDFF CRP. The proposed new method deals with the shaping of the proton beam energy distribution by inserting a thin layer of material between the beam line and the lithium target. The thickness of the foil, the foil material and the proton energy are chosen in order to produce quasi-Gaussian spectra of protons that, impinging directly on the lithium target, produce the desired MBNS (Maxwell Boltzmann Neutron Spectra). The lithium target is a low mass target cooled by a thin layer of forced water all around the beam spot, necessary to sustain the high specific power delivered to the target in CW (activation measurements). The LENOS method is able to produce MBNS with tuneable neutron energy ranging from 25 to 60 keV with high accuracy. Higher neutron energies up to 100 keV can be achieved if some deviation from MBNS is accepted. Recently, we have developed an upgrade of the pulsing system of the CN accelerator. The system has been tested already and works well
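
    For reference, the Maxwellian-averaged cross section that such facilities aim to measure is conventionally defined as (this standard textbook form is not quoted in the record and is added here for clarity):

        $\langle\sigma\rangle_{kT} = \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^{2}} \int_{0}^{\infty} \sigma(E)\, E\, e^{-E/kT}\, \mathrm{d}E$

    so that producing a neutron field whose energy distribution already follows $E\,e^{-E/kT}$ (the MBNS above) allows the average to be obtained directly from an activation measurement.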

  8. The Dow Jones Industrial Average: Issues of Downward Bias and Increased Volatility

    OpenAIRE

    Mueller, Paul A.; Raj A. Padmaraj; Ralph C. St. John

    1999-01-01

    Does the method of divisor adjustment used for stock splits in the Dow Jones Industrial Average (DJIA) cause a downward bias in the average's level and does this method of adjustment cause increased volatility in the average? To investigate these issues, two averages are created using DJIA stocks. One average is adjusted for stock splits through adjustment in the divisor. This method is identical to the DJIA method of adjustment. The other average makes adjustment for stock splits by adjustin...
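
    To make the divisor mechanism under discussion concrete, here is a small illustrative sketch (hypothetical prices, not DJIA constituents) of how the divisor is reset after a 2-for-1 split so that the average is unchanged at the instant of the split:

        def price_weighted_average(prices, divisor):
            return sum(prices) / divisor

        prices, divisor = [100.0, 50.0, 30.0], 3.0
        before = price_weighted_average(prices, divisor)        # 60.0

        # A 2-for-1 split of the first stock halves its price; the divisor is
        # then chosen so the average is unchanged at the moment of the split.
        split_prices = [50.0, 50.0, 30.0]
        new_divisor = sum(split_prices) / before                # ~2.1667
        after = price_weighted_average(split_prices, new_divisor)
        assert abs(before - after) < 1e-9

    The questions raised in the record concern what happens after such a reset, since the split stock subsequently contributes a smaller share of the average's movements.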

  9. Accurate prediction of unsteady and time-averaged pressure loads using a hybrid Reynolds-Averaged/large-eddy simulation technique

    Science.gov (United States)

    Bozinoski, Radoslav

    Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have on the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 is presented and shows efficiencies of 90% and higher for processes of no less than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, the validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented. Results for the NACA 0012 showed significant improvement in flow predictions

  10. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Science.gov (United States)

    2010-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or imported...: ER26FE07.012 Where: Bavg = Average benzene concentration for the applicable averaging period...

  11. 40 CFR 600.510-86 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy § 600.510-86 Calculation of average fuel economy. (a) Average fuel economy will be calculated to the...
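
    Paragraph (a) above is cut off, but the core of the calculation is a production-weighted harmonic mean of model-type fuel economies. The sketch below shows only that core; credits and the other adjustments in the full rule are not modeled, and the example numbers are made up:

        def average_fuel_economy(production_volumes, fuel_economies_mpg):
            """Production-weighted harmonic mean of model-type fuel economies."""
            total = sum(production_volumes)
            return total / sum(n / mpg for n, mpg in
                               zip(production_volumes, fuel_economies_mpg))

        # e.g. 60,000 units at 30 mpg and 40,000 units at 20 mpg -> 25.0 mpg
        print(average_fuel_economy([60_000, 40_000], [30.0, 20.0]))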

  12. 78 FR 35054 - All Items Consumer Price Index for All Urban Consumers United States City Average

    Science.gov (United States)

    2013-06-11

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers United States City Average... Commission and publishes this notice in the Federal Register that the United States City Average All Items... average of 147.7 to its 2012 annual average of 687.761 and that it increased 29.7 percent from its...

  13. 77 FR 23282 - All Items Consumer Price Index for All Urban Consumers; United States City Average

    Science.gov (United States)

    2012-04-18

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers; United States City Average... Election Commission and publishes this notice in the Federal Register that the United States City Average... 1974 annual average of 147.7 to its 2011 annual average of 673.818 and that it increased 27.0...

  14. 47 CFR 24.53 - Calculation of height above average terrain (HAAT).

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Calculation of height above average terrain... average terrain (HAAT). (a) HAAT is determined by subtracting average terrain elevation from antenna height above mean sea level. (b) Average terrain elevation shall be calculated using elevation data...
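
    The subtraction described in paragraph (a) is straightforward; a minimal sketch is given below. The radial-averaging details of paragraph (b), which are truncated above, are not reproduced, so the simple mean of sampled elevations here stands in for whatever sampling the rule prescribes:

        def haat(antenna_height_amsl_m, terrain_elevations_m):
            """Height above average terrain = antenna height above mean sea level
            minus the average terrain elevation (here a plain mean of sampled
            terrain elevations along the radials)."""
            avg_terrain = sum(terrain_elevations_m) / len(terrain_elevations_m)
            return antenna_height_amsl_m - avg_terrain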

  15. 26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to a qualified source...

  16. 7 CFR 1437.11 - Average market price and payment factors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Average market price and payment factors. 1437.11... ASSISTANCE PROGRAM General Provisions § 1437.11 Average market price and payment factors. (a) An average... Animal-unit-days (AUD) value will be based on the national average price of corn and the...

  17. 75 FR 22164 - All Items Consumer Price Index for All Urban Consumers United States City Average

    Science.gov (United States)

    2010-04-27

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers United States City Average... Commission and publishes this notice in the Federal Register that the United States City Average All Items... average of 147.7 to its 2009 annual average of 642.658 and that it increased 21.2 percent from its...

  18. 47 CFR 65.305 - Calculation of the weighted average cost of capital.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Calculation of the weighted average cost of... Carriers § 65.305 Calculation of the weighted average cost of capital. (a) The composite weighted average... Commission determines to the contrary in a prescription proceeding, the composite weighted average cost...

  19. Gender Differences in Gifted and Average-Ability Students: Comparing Girls' and Boys' Achievement, Self-Concept, Interest, and Motivation in Mathematics

    Science.gov (United States)

    Preckel, Franzis; Goetz, Thomas; Pekrun, Reinhard; Kleine, Michael

    2008-01-01

    This article investigates gender differences in 181 gifted and 181 average-ability sixth graders in achievement, academic self-concept, interest, and motivation in mathematics. Giftedness was conceptualized as nonverbal reasoning ability and defined by a rank of at least 95% on a nonverbal reasoning subscale of the German Cognitive Abilities Test.…

  20. Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns

    DEFF Research Database (Denmark)

    Gonçalves, Sílvia; Hounyo, Ulrich; Meddahi, Nour

    The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias correction term) as the (scaled) sum of squared pre-averaged returns, where the pre-averaging is done over all possible non-overlapping blocks of consecutive observations. Pre-averaging reduces the influence of the noise and allows for realized volatility estimation on the pre-averaged returns. The non...
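
    As a rough illustration of the statistic being bootstrapped, the following sketch forms pre-averaged returns on non-overlapping blocks with the commonly used weight g(x) = min(x, 1 - x) and sums their squares. The scaling is a simplification and the noise bias-correction term mentioned above is omitted, so this is not the exact Podolskij and Vetter (2009) estimator:

        import numpy as np

        def preaveraged_rv(returns, k):
            """Scaled sum of squared pre-averaged returns over non-overlapping
            blocks of length k (k >= 2); simplified scaling, no bias correction."""
            g = np.minimum(np.arange(1, k) / k, 1 - np.arange(1, k) / k)
            n_blocks = len(returns) // k
            pre = np.array([np.dot(g, returns[j * k : j * k + k - 1])
                            for j in range(n_blocks)])
            return k * np.sum(pre ** 2) / np.sum(g ** 2)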

  1. What's Average?

    Science.gov (United States)

    Stack, Sue; Watson, Jane; Hindley, Sue; Samson, Pauline; Devlin, Robyn

    2010-01-01

    This paper reports on the experiences of a group of teachers engaged in an action research project to develop critical numeracy classrooms. The teachers initially explored how contexts in the media could be used as bases for activities to encourage student discernment and critical thinking about the appropriate use of the underlying mathematical…

  2. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Science.gov (United States)

    2010-10-01

    ... such services in compliance with its geographic rate averaging and rate integration obligations... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED)...

  3. Multiplicative and implicative importance weighted averaging aggregation operators with accurate andness direction

    DEFF Research Database (Denmark)

    Larsen, Henrik Legind

    2009-01-01

    Weighted averaging aggregation plays a key role in utilizations of electronic data and information resources for retrieving, fusing, and extracting information and knowledge, as needed for decision making. Of particular interest for such utilizations are the weighted averaging aggregation operato...

  4. Conceptual difficulties with the q-averages in non-extensive statistical mechanics

    Science.gov (United States)

    Abe, Sumiyoshi

    2012-11-01

    The q-average formalism of nonextensive statistical mechanics proposed in the literature is critically examined by considerations of several pedagogical examples. It is shown that there exist a number of difficulties with the concept of q-averages.

  5. Quick-Determination of the Average Atomic Number Z by X-Ray Scattering

    DEFF Research Database (Denmark)

    Kunzendorf, Helmar

    1972-01-01

    X-ray scattering ratio measurements are proposed for a quick determination of the average atomic number of rock powders.

  6. Why one-dimensional models fail in the diagnosis of average spectra from inhomogeneous stellar atmospheres

    OpenAIRE

    Uitenbroek, Han; Criscuoli, Serena

    2011-01-01

    We investigate the feasibility of representing a structured multi-dimensional stellar atmosphere with a single one-dimensional average stratification for the purpose of spectral diagnosis of the atmosphere's average spectrum. In particular we construct four different one-dimensional stratifications from a single snapshot of a magneto-hydrodynamic simulation of solar convection: one by averaging its properties over surfaces of constant height, and three different ones by averaging over surface...

  7. Analysis on Change Characteristics of the Average Temperature in Sichuan in 50 Years

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The research aimed to analyze the change characteristics of the average temperature in Sichuan over 50 years. [Method] Using average temperature data from 156 stations in Sichuan from 1961 to 2010, the interannual and interdecadal evolution characteristics and the regional and seasonal differences of the average temperature in Sichuan over 50 years were analyzed. [Result] The average temperatures of the whole province and of each climatic region all showed rising trends over the 50 years. Rise amplitude of the a...

  8. 47 CFR 64.1801 - Geographic rate averaging and rate integration.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration... CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Geographic Rate Averaging and Rate Integration § 64.1801 Geographic rate averaging and rate integration. (a) The rates charged...

  9. Isotropic averaging for cell-dynamical-system simulation of spinodal decomposition

    Indian Academy of Sciences (India)

    Anand Kumar

    2003-07-01

    Formulae have been developed for the isotropic averagings in two and three dimensions. Averagings are employed in the cell-dynamical-system simulation of spinodal decomposition for inter-cell coupling. The averagings used in earlier works on spinodal decomposition have been discussed.
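
    The record does not reproduce its formulae, but for orientation the isotropic neighbourhood average most often used in earlier two-dimensional cell-dynamical-system work (the Oono-Puri weights, 1/6 for nearest and 1/12 for next-nearest neighbours) can be sketched as follows; this is background on the earlier averagings the record mentions, not the new formulae derived in the paper:

        import numpy as np

        def isotropic_average_2d(field):
            """2D isotropic neighbourhood average with weights 1/6 (nearest) and
            1/12 (next-nearest), assuming periodic boundary conditions."""
            nn = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                  np.roll(field, 1, 1) + np.roll(field, -1, 1))
            nnn = (np.roll(np.roll(field, 1, 0), 1, 1) +
                   np.roll(np.roll(field, 1, 0), -1, 1) +
                   np.roll(np.roll(field, -1, 0), 1, 1) +
                   np.roll(np.roll(field, -1, 0), -1, 1))
            return nn / 6.0 + nnn / 12.0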

  10. 75 FR 54073 - Medicaid Program; Withdrawal of Determination of Average Manufacturer Price, Multiple Source Drug...

    Science.gov (United States)

    2010-09-03

    ...; Withdrawal of Determination of Average Manufacturer Price, Multiple Source Drug Definition, and Upper Limits... of `average manufacturer price' or the statutory definition of `multiple source drug' as stated by... determination of average manufacturer price (AMP), and the Federal upper limits (FULs) for multiple source...

  11. Numerical examination of commutativity between Backus and Gazis et al. averages

    CERN Document Server

    Dalton, David R

    2016-01-01

    Dalton and Slawinski (2016) show that, in general, the Backus (1962) average and the Gazis et al. (1963) average do not commute. Herein, we examine the extent of this noncommutativity. We illustrate numerically that the extent of noncommutativity is a function of the strength of anisotropy. The averages nearly commute in the case of weak anisotropy.

  12. 40 CFR 91.103 - Averaging, banking, and trading of exhaust emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging, banking, and trading of... Standards and Certification Provisions § 91.103 Averaging, banking, and trading of exhaust emission credits. Regulations regarding averaging, banking, and trading provisions along with applicable...

  13. 40 CFR 89.111 - Averaging, banking, and trading of exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging, banking, and trading of... ENGINES Emission Standards and Certification Provisions § 89.111 Averaging, banking, and trading of exhaust emissions. Regulations regarding the availability of an averaging, banking, and trading...

  14. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Science.gov (United States)

    2010-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year...

  15. 20 CFR 404.210 - Average-indexed-monthly-earnings method.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-indexed-monthly-earnings method. 404... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Indexed-Monthly-Earnings Method of Computing Primary Insurance Amounts § 404.210 Average-indexed-monthly-earnings method. (a) Who is...

  16. 29 CFR 548.306 - Average earnings for year or quarter year preceding the current quarter.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Average earnings for year or quarter year preceding the... PAY Interpretations Authorized Basic Rates § 548.306 Average earnings for year or quarter year... hour for each workweek equal to the average hourly remuneration of the employee for employment...

  17. 19 CFR 141.112 - Liens for freight, charges, or contribution in general average.

    Science.gov (United States)

    2010-04-01

    ... general average. 141.112 Section 141.112 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF....112 Liens for freight, charges, or contribution in general average. (a) Definitions. The following are... connected with the transportation of the goods. (3) General average. “General average” means the...

  18. 26 CFR 1.410(b)-5 - Average benefit percentage test.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 5 2010-04-01 2010-04-01 false Average benefit percentage test. 1.410(b)-5...) INCOME TAX (CONTINUED) INCOME TAXES Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.410(b)-5 Average benefit percentage test. (a) General rule. A plan satisfies the average benefit percentage test of...

  19. 77 FR 23283 - All Items Consumer Price Index for All Urban Consumers; United States City Average

    Science.gov (United States)

    2012-04-18

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers; United States City Average... this notice in the Federal Register that the United States City Average All Items Consumer Price Index for All Urban Consumers (1967 = 100) increased 116.6 percent from its 1984 annual average of 311.1...

  20. 42 CFR 100.2 - Average cost of a health insurance policy.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Average cost of a health insurance policy. 100.2... VACCINE INJURY COMPENSATION § 100.2 Average cost of a health insurance policy. For purposes of determining..., less certain deductions. One of the deductions is the average cost of a health insurance policy,...

  1. 26 CFR 31.3402(h)(1)-1 - Withholding on basis of average wages.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 15 2010-04-01 2010-04-01 false Withholding on basis of average wages. 31.3402... of average wages. (a) In general. An employer may determine the amount of tax to be deducted and withheld upon a payment of wages to an employee on the basis of the employee's average estimated...

  2. 25 CFR 700.173 - Average net earnings of business or farm.

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings...

  3. 76 FR 31991 - All Items Consumer Price Index for All Urban Consumers; United States City Average

    Science.gov (United States)

    2011-06-02

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers; United States City Average... this notice in the Federal Register that the United States City Average All Items Consumer Price Index for All Urban Consumers (1967 = 100) increased 110.0 percent from its 1984 annual average of 311.1...

  4. 20 CFR 404.1574a - When and how we will average your earnings.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false When and how we will average your earnings... Activity § 404.1574a When and how we will average your earnings. (a) If your work as an employee or as a... has been no change in the substantial gainful activity earnings levels, we will average your...

  5. 20 CFR 416.974a - When and how we will average your earnings.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false When and how we will average your earnings... Activity § 416.974a When and how we will average your earnings. (a) To determine your initial eligibility for benefits, we will average any earnings you make during the month you file for benefits and...

  6. 78 FR 35054 - All Items Consumer Price Index for All Urban Consumers; United States City Average

    Science.gov (United States)

    2013-06-11

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers; United States City Average... in the Federal Register that the United States City Average All Items Consumer Price Index for All Urban Consumers (1967=100) increased 121.1 percent from its 1984 annual average of 311.1 to its...

  7. 29 CFR 548.303 - Average earnings for each type of work.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Average earnings for each type of work. 548.303 Section 548... Basic Rates § 548.303 Average earnings for each type of work. (a) Section 548.3(c) authorizes as an... such average is regularly computed under the agreement or understanding. Such a rate may be used...

  8. 75 FR 22164 - All Items Consumer Price Index for All Urban Consumers; United States City Average

    Science.gov (United States)

    2010-04-27

    ... of the Secretary All Items Consumer Price Index for All Urban Consumers; United States City Average... this notice in the Federal Register that the United States City Average All Items Consumer Price Index for All Urban Consumers (1967=100) increased 106.6 percent from its 1984 annual average of 311.1...

  9. 77 FR 24940 - Energy Conservation Program for Consumer Products: Representative Average Unit Costs of Energy

    Science.gov (United States)

    2012-04-26

    ...: Representative Average Unit Costs of Energy'', dated March 10, 2011, 76 FR 13168. May 29, 2012, the cost figures...: Representative Average Unit Costs of Energy AGENCY: Office of Energy Efficiency and Renewable Energy, Department... forecasting the representative average unit costs of five residential energy sources for the year...

  10. 29 CFR 548.302 - Average earnings for period other than a workweek.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Average earnings for period other than a workweek. 548.302... Authorized Basic Rates § 548.302 Average earnings for period other than a workweek. (a) Section 548.3(b... days for which such average is regularly computed under the agreement or understanding. Such a rate...

  11. 42 CFR 414.904 - Average sales price as the basis for payment.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Average sales price as the basis for payment. 414... for Drugs and Biologicals Under Part B § 414.904 Average sales price as the basis for payment. (a...) The actual charge on the claim for program benefits; or (2) 106 percent of the average sales...

  12. Gender differences in gifted and average-ability students : Comparing girls' and boys' achievement, self-concept, interest, and motivation in mathematics

    OpenAIRE

    Preckel, Franzis; Götz, Thomas; Pekrun, Reinhard; Kleine, Michael

    2008-01-01

    This article investigates gender differences in 181 gifted and 181 average-ability sixth graders in achievement, academic self-concept, interest, and motivation in mathematics. Giftedness was conceptualized as nonverbal reasoning ability and defined by a rank of at least 95% on a nonverbal reasoning subscale of the German Cognitive Abilities Test. Mathematical achievement was measured by teacher-assigned grades and a standardized mathematics test. Self-concept, interest, and motivation were a...

  13. From moving averages to anomalous diffusion: a Rényi-entropy approach

    International Nuclear Information System (INIS)

    Moving averages, also termed convolution filters, are widely applied in science and engineering at large. As moving averages transform inputs to outputs by convolution, they induce correlation. In effect, moving averages are perhaps the most fundamental and ubiquitous mechanism of transforming uncorrelated inputs to correlated outputs. In this paper we study the correlation structure of general moving averages, unveil the Rényi-entropy meaning of a moving-average's overall correlation, address the maximization of this overall correlation, and apply this overall correlation to the dispersion-measurement and to the classification of regular and anomalous diffusion transport processes. (fast track communication)
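
    A minimal numerical illustration of the paper's starting point, namely that convolution with a moving-average kernel turns an uncorrelated input into a correlated output (the Rényi-entropy machinery itself is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(100_000)        # uncorrelated input
        w = np.ones(5) / 5                      # length-5 moving-average kernel
        y = np.convolve(x, w, mode="valid")     # correlated output

        def lag1_autocorr(z):
            z = z - z.mean()
            return float(np.dot(z[:-1], z[1:]) / np.dot(z, z))

        print(lag1_autocorr(x))   # close to 0
        print(lag1_autocorr(y))   # close to (5 - 1) / 5 = 0.8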

  14. Average blood flow and oxygen uptake in the human brain during resting wakefulness

    DEFF Research Database (Denmark)

    Madsen, P L; Holm, S; Herning, M;

    1993-01-01

    The Kety-Schmidt technique can be regarded as the reference method for measurement of global average cerebral blood flow (average CBF) and global average cerebral metabolic rate of oxygen (average CMRO2). However, in the practical application of the method, diffusion equilibrium for inert gas...... the measured data, we find that the true average values for CBF and CMRO2 in the healthy young adult are approximately 46 ml 100 g-1 min-1 and approximately 3.0 ml 100 g-1 min-1. Previous studies have suggested that some of the variation in CMRO2 values could be ascribed to differences in cerebral venous...
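
    As a back-of-the-envelope consistency check (not a result reported in the study), the two global averages quoted above are linked by the Fick principle, CMRO2 = CBF x AVDO2, so the implied global cerebral arteriovenous oxygen difference is roughly

        $\mathrm{AVDO_2} \approx \frac{3.0\ \mathrm{ml\ O_2\,(100\,g)^{-1}\,min^{-1}}}{46\ \mathrm{ml\,(100\,g)^{-1}\,min^{-1}}} \approx 6.5\ \mathrm{ml\ O_2\ per\ 100\ ml\ blood}.$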

  15. Study about thoracic perimeter average performances in Romanian Hucul horse breed – Prislop bloodline

    Directory of Open Access Journals (Sweden)

    Marius Maftei

    2015-10-01

    Full Text Available The study of average performances in a population is of great importance because, for a population, the average phenotypic value is equal to the average genotypic value. Thus, studies of the average values of characters give an idea of the population's genetic level. The biological material is represented by 93 Hucul horses from the Prislop bloodline, owned by the Lucina Hucul stud farm, divided into 3 stallion families and analyzed at 18, 30 and 42 months of age. The average thoracic perimeter was 148.55 cm at 18 months, 160.44 cm at 30 months and 167.77 cm at 42 months. We can observe a good growth rate from one age to another and small differences between sexes. The average performances for this character are within the characteristic limits of the breed.

  16. Scalability of components for kW-level average power few-cycle lasers.

    Science.gov (United States)

    Hädrich, Steffen; Rothhardt, Jan; Demmler, Stefan; Tschernajew, Maxim; Hoffmann, Armin; Krebs, Manuel; Liem, Andreas; de Vries, Oliver; Plötner, Marco; Fabian, Simone; Schreiber, Thomas; Limpert, Jens; Tünnermann, Andreas

    2016-03-01

    In this paper, the average power scalability of components that can be used for intense few-cycle lasers based on nonlinear compression of modern femtosecond solid-state lasers is investigated. The key components of such a setup, namely, the gas-filled waveguides, laser windows, chirped mirrors for pulse compression and low dispersion mirrors for beam collimation, focusing, and beam steering are tested under high-average-power operation using a kilowatt cw laser. We demonstrate the long-term stable transmission of kW-level average power through a hollow capillary and a Kagome-type photonic crystal fiber. In addition, we show that sapphire substrates significantly improve the average power capability of metal-coated mirrors. Ultimately, ultrabroadband dielectric mirrors show negligible heating up to 1 kW of average power. In summary, a technology for scaling of few-cycle lasers up to 1 kW of average power and beyond is presented.

  17. How do partly omitted control variables influence the averages used in meta-analysis in economics?

    DEFF Research Database (Denmark)

    Paldam, Martin

    Meta regression analysis is used to extract the best average from a set of N primary studies of one economic parameter. Three averages of the N-set are discussed: the mean, the PET meta-average and the augmented meta-average. They are affected by control variables that are used in some of the primary studies. They are the POCs, partly omitted controls, of the meta-study. Some POCs are ceteris paribus controls chosen to make results from different data samples comparable. They should differ. Others are model variables. They may be true and should always be included, while others are false and should always be excluded, if only we knew. If POCs are systematically included for their effect on the estimate of the parameter, it gives publication bias. It is corrected by the meta-average. If a POC is randomly included, it gives a bias, which is corrected by the augmented meta-average. With many...

  19. On Adequacy of Two-point Averaging Schemes for Composites with Nonlinear Viscoelastic Phases

    OpenAIRE

    Zeman, J.; Valenta, R.; M. Šejnoha

    2004-01-01

    Finite element simulations on fibrous composites with a nonlinear viscoelastic response of the matrix phase are performed to explain why so-called two-point averaging schemes may fail to deliver a realistic macroscopic response. Nevertheless, the potential of two-point averaging schemes (the overall response estimated in terms of localized averages of a two-phase composite medium) has been put forward in a number of studies, either in its original format or modified to overcome the inherited stiff...

  20. Minimum wage and the average wage in France: a circular relationship?

    OpenAIRE

    Cette, Gilbert; Chouard, Valérie; Verdugo, Gregory

    2013-01-01

    This paper investigates whether increases in the minimum wage in France have the same impact on the average wage when intended to preserve the purchasing power of the minimum wage as when intended to raise it. We find that the impact of the minimum wage on the average wage is strong, but differs depending on the indexation factor. We also find some empirical evidence of circularity between the average wage and the minimum wage.

  1. A Comparison Between Two Average Modelling Techniques of AC-AC Power Converters

    OpenAIRE

    Pawel Szczesniak

    2015-01-01

    In this paper, a comparative evaluation of two modelling tools for switching AC-AC power converters is presented. Both of them are based on average modelling techniques. The first approach is based on the circuit averaging technique and consists of topological manipulations applied to the converter's states. The second approach makes use of a state-space averaged model of the converter and is based on analytical manipulations using the different state representations of a converter. The two m...

  2. Influence on natural circulation nuclear thermal coupling average power under rolling motion

    International Nuclear Information System (INIS)

    By performing simulation computations of single-phase natural circulation with nuclear thermal coupling under rolling motion conditions, the factors that strongly affect the average heating power of this circulation system were studied. The analysis results indicate that under rolling motion conditions, the average heating power that accounts for the nuclear thermal coupling effect is directly proportional to the average flow rate and the average heat transfer coefficient, while it is inversely related to the ratio of the moderator temperature-feedback coefficient to that of the fuel. The effect of the rolling parameters on the average heating power also depends on this ratio. When the variation of the average heat transfer coefficient plays the leading role in the reactivity feedback, the stronger the rolling motion is, the higher the average heating power is. However, when the variation of the average friction coefficient takes the lead, the stronger the rolling motion is, the lower the average heating power is. (authors)

  3. On the average capacity and bit error probability of wireless communication systems

    KAUST Repository

    Yilmaz, Ferkan

    2011-12-01

    Analyses of the average binary error probabilities and average capacity of wireless communications systems over generalized fading channels have been considered separately in the past. This paper introduces a novel moment generating function-based unified expression for both the average binary error probabilities and the average capacity of single and multiple link communication with maximal ratio combining. It is worth noting that the generic unified expression offered in this paper can be easily calculated and is applicable to a wide variety of fading scenarios; the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of our newly derived results. © 2011 IEEE.

  4. Semiclassical vibration-rotation transition probabilities for motion in molecular state averaged potentials.

    Science.gov (United States)

    Stallcop, J. R.

    1971-01-01

    Collision-induced vibration-rotation transition probabilities are calculated from a semiclassical three-dimensional model, in which the collision trajectory is determined by the classical motion in the interaction potential that is averaged over the molecular rotational state, and compared with those for which the motion is governed by a spherically averaged potential. For molecules that are in highly excited rotational states, thus dominating the vibrational relaxation rate at high temperature, it is found that the transition probability for rotational state averaging is smaller than that for spherical averaging. For typical collisions, the transition cross section is decreased by a factor of about 1.5 to 2.

  5. Impulsive synchronization schemes of stochastic complex networks with switching topology: average time approach.

    Science.gov (United States)

    Li, Chaojie; Yu, Wenwu; Huang, Tingwen

    2014-06-01

    In this paper, a novel impulsive control law is proposed for synchronization of stochastic discrete complex networks with time delays and switching topologies, where average dwell time and average impulsive interval are taken into account. The side effect of time delays is estimated by the Lyapunov-Razumikhin technique, which quantitatively bounds the rate at which the Lyapunov function can increase. By considering the compensation of the decreasing interval, an improved impulsive control law is recast in terms of average dwell time and average impulsive interval. Detailed results from a numerical illustrative example are presented and discussed. Finally, some relevant conclusions are drawn.
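
    For readers unfamiliar with the terminology, the average impulsive interval used in results of this kind is commonly defined by requiring, for the number of impulses N(T, t) on the interval (t, T), that

        $\frac{T-t}{T_a} - N_0 \;\le\; N(T,t) \;\le\; \frac{T-t}{T_a} + N_0, \qquad T \ge t \ge 0,$

    where $T_a$ is the average impulsive interval and $N_0$ is a chattering bound; average dwell time for the switching signal is defined analogously. This is the standard definition assumed here and may differ in detail from the one used in the paper.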

  6. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  7. Comparison of conventional averaged and rapid averaged, autoregressive-based extracted auditory evoked potentials for monitoring the hypnotic level during propofol induction

    DEFF Research Database (Denmark)

    Litvan, Héctor; Jensen, Erik W; Galan, Josefina;

    2002-01-01

    The extraction of the middle latency auditory evoked potentials (MLAEP) is usually done by moving time averaging (MTA) over many sweeps (often 250-1,000), which could produce a delay of more than 1 min. This problem was addressed by applying an autoregressive model with exogenous input (ARX) that...

  8. A constant travel time budget? In search for explanations for an increase in average travel time

    NARCIS (Netherlands)

    Rietveld, P.; Wee, van B.

    2002-01-01

    Recent research suggests that during the past decades the average travel time of the Dutch population has probably increased. However, different data sources show different levels of increase. Possible causes of the increase in average travel time are presented here. Increased incomes have probably re...

  9. 47 CFR 69.606 - Computation of average schedule company payments.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Computation of average schedule company... schedule company payments. (a) Payments shall be made in accordance with a formula approved or modified by the Commission. Such formula shall be designed to produce disbursements to an average schedule...

  10. Multi-objective calibration of forecast ensembles using Bayesian model averaging

    NARCIS (Netherlands)

    J.A. Vrugt; M.P. Clark; C.G.H. Diks; Q. Duan; B.A. Robinson

    2006-01-01

    Bayesian Model Averaging (BMA) has recently been proposed as a method for statistical postprocessing of forecast ensembles from numerical weather prediction models. The BMA predictive probability density function (PDF) of any weather quantity of interest is a weighted average of PDFs centered on the
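
    A compact sketch of the BMA predictive density described above, for the common case of Gaussian member PDFs; the estimation of the weights and spreads (typically done by EM in this literature) is not shown, and the example numbers are hypothetical:

        import numpy as np
        from scipy.stats import norm

        def bma_predictive_pdf(y, member_forecasts, weights, sigmas):
            """Weighted average of member PDFs, each centered on its
            (bias-corrected) forecast; weights are nonnegative and sum to 1."""
            return sum(w * norm.pdf(y, loc=f, scale=s)
                       for w, f, s in zip(weights, member_forecasts, sigmas))

        # e.g. three ensemble members forecasting a temperature of about 21 degrees
        print(bma_predictive_pdf(21.0, [20.0, 22.0, 21.5],
                                 [0.5, 0.3, 0.2], [1.2, 1.0, 1.1]))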

  11. An Approach to Average Modeling and Simulation of Switch-Mode Systems

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of average modeling of PWM switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The paper discusses the derivation of PSPICE/ORCAD-compatible average models of the switch-mode power stages, their software implementation, and…

  12. 40 CFR Figure 1 to Subpart Qqq of... - Data Summary Sheet for Determination of Average Opacity

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Data Summary Sheet for Determination of.... 63, Subpt. QQQ, Fig. 1 Figure 1 to Subpart QQQ of Part 63—Data Summary Sheet for Determination of Average Opacity Clock time Number of converters blowing Converter aisle activity Average opacity for...

  13. Computation of the Metric Average of 2D Sets with Piecewise Linear Boundaries

    OpenAIRE

    Kels, Shay; Dyn, Nira; Lipovetsky, Evgeny

    2010-01-01

    The metric average is a binary operation between sets in Rn which is used in the approximation of set-valued functions. We introduce an algorithm that applies tools of computational geometry to the computation of the metric average of 2D sets with piecewise linear boundaries.

  14. 76 FR 5518 - Antidumping Proceedings: Calculation of the Weighted Average Dumping Margin and Assessment Rate...

    Science.gov (United States)

    2011-02-01

    This Federal Register notice concerns the calculation of the weighted average dumping margin and antidumping duty assessment rate in certain antidumping duty proceedings (AGENCY: Import Administration). The underlying proposed rule and proposed modification were published at 75 FR 81533...

  15. A mass-spectroscopic analysis of the sulfurous compounds in average fractions of Romashkin oils

    Energy Technology Data Exchange (ETDEWEB)

    Kubas, M.; Kubelka, V.; Mostecky, J.; Vodicka, L.

    1982-01-01

    Results are presented of a mass-spectrometric analysis of the sulfurous compounds in an average fraction of Romashkin (USSR) oils. Some individual compounds of the benzothiophene type were identified. It is shown that benzothiophenes dominate in the average distillate of Romashkin petroleum.

  16. A Characterization of the Average Tree Solution for Cycle-Free Graph Games

    NARCIS (Netherlands)

    Mishra, D.; Talman, A.J.J.

    2009-01-01

    Herings et al. (2008) proposed a solution concept called the average tree solution for cycle-free graph games. We provide a characterization of the average tree solution for cycle-free graph games. The characterization underlines an important difference, in terms of symmetric treatment of agents, between...
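    As a quick illustration of the underlying concept (not of the characterization itself), the average tree solution is commonly described as the average, over all choices of root, of "hierarchical outcomes" in which each player receives the worth of the subtree of his followers minus the worths of his successors' subtrees. That description, and the three-player line-graph game below, are assumptions made for illustration.

```python
import collections

# Toy average tree (AT) solution on a connected cycle-free graph game.
# Players 1 - 2 - 3 on a line graph; v(S) = |S|^2 is an arbitrary example game.
players = [1, 2, 3]
edges = [(1, 2), (2, 3)]

def v(S):
    return len(S) ** 2

adj = collections.defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def followers(i, parent):
    """Players in the subtree rooted at i when edges are oriented away from parent."""
    out = {i}
    for j in adj[i] - {parent}:
        out |= followers(j, i)
    return out

def hierarchical_outcome(root):
    """Each player gets v(own subtree) minus the worths of the successors' subtrees."""
    payoff, stack = {}, [(root, None)]
    while stack:
        i, parent = stack.pop()
        succ = adj[i] - {parent}
        payoff[i] = v(followers(i, parent)) - sum(v(followers(j, i)) for j in succ)
        stack.extend((j, i) for j in succ)
    return payoff

at = {i: sum(hierarchical_outcome(r)[i] for r in players) / len(players) for i in players}
print(at)   # each hierarchical outcome is efficient, so the AT payoffs sum to v({1,2,3}) = 9
```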

  17. Age-specific average head template for typically developing 6-month-old infants.

    Directory of Open Access Journals (Sweden)

    Lisa F Akiyama

    Full Text Available: Due to the rapid anatomical changes that occur within the brain structure in early human development and the significant differences between infant brains and the widely used standard adult templates, it becomes increasingly important to utilize appropriate age- and population-specific average templates when analyzing infant neuroimaging data. In this study we created a new and highly detailed age-specific unbiased average head template in a standard MNI152-like infant coordinate system for healthy, typically developing 6-month-old infants by performing linear normalization, diffeomorphic normalization, and iterative averaging on 60 subjects' structural images. The resulting age-specific average templates demonstrate sharper anatomical detail and clarity compared to existing infant average templates and successfully retain the average head size of 6-month-old infants. As an example usage, the average infant templates are used to transform magnetoencephalography (MEG)-estimated activity locations from MEG's subject-specific head coordinate space to the standard MNI152-like infant coordinate space. We also created a new atlas that reflects the true 6-month-old infant brain anatomy. The average templates and atlas are publicly available on our website (http://ilabs.washington.edu/6-m-templates-atlas).
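    The template-building loop described above (linear normalization, diffeomorphic normalization, iterative averaging) has roughly the structure sketched below. The registration calls are hypothetical placeholders standing in for a real registration toolkit; only the iteration pattern is illustrated.

```python
import numpy as np

def affine_register(img, ref):          # placeholder for a linear normalization step
    return img                          # a real pipeline would return an affinely resampled image

def diffeomorphic_register(img, ref):   # placeholder for a nonlinear (diffeomorphic) step
    return img

def build_template(images, n_iters=4):
    """Iteratively re-register all images to the current average and re-average."""
    template = np.mean(images, axis=0)              # crude initial average
    for _ in range(n_iters):
        warped = [diffeomorphic_register(affine_register(img, template), template)
                  for img in images]
        template = np.mean(warped, axis=0)          # new, sharper average in template space
    return template

# Example with random arrays standing in for structural images:
subjects = [np.random.rand(16, 16, 16) for _ in range(5)]
template = build_template(subjects)
```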

  18. Analysis of litter size and average litter weight in pigs using a recursive model

    DEFF Research Database (Denmark)

    Varona, Luis; Sorensen, Daniel; Thompson, Robin

    2007-01-01

    An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one...
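    A toy sketch of the one-way (recursive) structure: litter size influences average piglet weight but not the reverse. The simulated parameters and the simple least-squares recovery of the structural coefficient are illustrative assumptions; the paper itself fits full Bayesian mixed models (SMM and RMM).

```python
import numpy as np

# Simulate the recursive structure: litter size -> average piglet weight.
rng = np.random.default_rng(1)
n = 2000
litter_size = rng.poisson(lam=12, size=n).astype(float)
lam_true = -0.04                               # structural effect of size on weight (kg per piglet)
avg_weight = 1.9 + lam_true * litter_size + rng.normal(scale=0.15, size=n)

# Recover the structural coefficient by regressing weight on size.
X = np.column_stack([np.ones(n), litter_size])
beta, *_ = np.linalg.lstsq(X, avg_weight, rcond=None)
print(f"estimated lambda = {beta[1]:.3f} (true {lam_true})")
```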

  19. The Dopaminergic Midbrain Mediates an Effect of Average Reward on Pavlovian Vigor.

    Science.gov (United States)

    Rigoli, Francesco; Chew, Benjamin; Dayan, Peter; Dolan, Raymond J

    2016-09-01

    Dopamine plays a key role in motivation. Phasic dopamine response reflects a reinforcement prediction error (RPE), whereas tonic dopamine activity is postulated to represent an average reward that mediates motivational vigor. However, it has been hard to find evidence concerning the neural encoding of average reward that is uncorrupted by influences of RPEs. We circumvented this difficulty in a novel visual search task where we measured participants' button pressing vigor in a context where information (underlying an RPE) about future average reward was provided well before the average reward itself. Despite no instrumental consequence, participants' pressing force increased for greater current average reward, consistent with a form of Pavlovian effect on motivational vigor. We recorded participants' brain activity during task performance with fMRI. Greater average reward was associated with enhanced activity in dopaminergic midbrain to a degree that correlated with the relationship between average reward and pressing vigor. Interestingly, an opposite pattern was observed in subgenual cingulate cortex, a region implicated in negative mood and motivational inhibition. These findings highlight a crucial role for dopaminergic midbrain in representing aspects of average reward and motivational vigor. PMID:27082045

  20. 40 CFR 86.1817-08 - Complete heavy-duty vehicle averaging, trading, and banking program.

    Science.gov (United States)

    2010-07-01

    40 CFR 86.1817-08 (Protection of Environment): Complete heavy-duty vehicle averaging, trading, and banking program. Section 86.1817-08 includes text that... -cycle vehicles may participate in an NMHC averaging, banking, and trading program to show compliance...