WorldWideScience

Sample records for average

  1. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  2. Quaternion Averaging

    Science.gov (United States)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
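
    The optimal average described in this Note is commonly computed as the dominant eigenvector of the weighted outer-product matrix of the quaternions. A minimal numpy sketch (function name and weight convention are illustrative, not taken from the Note):

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Average unit quaternions as the eigenvector of
    M = sum_i w_i * q_i q_i^T with the largest eigenvalue."""
    quats = np.asarray(quats, dtype=float)
    if weights is None:
        weights = np.ones(len(quats))
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = q / np.linalg.norm(q)    # enforce unit norm
        M += w * np.outer(q, q)      # q and -q contribute identically
    _, vecs = np.linalg.eigh(M)      # eigenvalues in ascending order
    return vecs[:, -1]               # dominant eigenvector

# identical attitudes (up to quaternion sign) average to themselves
q = np.array([1.0, 0.0, 0.0, 0.0])
q_avg = average_quaternions([q, -q, q])
```

    Because the cost is quadratic in the outer product q q^T, the sign ambiguity of unit quaternions drops out automatically, which is the main advantage over naive component-wise averaging.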

  3. Average Interest

    OpenAIRE

    George Chacko; Sanjiv Ranjan Das

    1997-01-01

We develop analytic pricing models for options on averages by means of a state-space expansion method. These models extend the class of Asian options to markets where the underlying traded variable follows a mean-reverting process. The approach builds from the digital Asian option on the average and enables pricing of standard Asian calls and puts, caps and floors, as well as other exotica. The models may be used (i) to hedge long period interest rate risk cheaply, (ii) to hedge event risk (...

  4. Neutron resonance averaging

    International Nuclear Information System (INIS)

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  5. Averaging anisotropic cosmologies

    International Nuclear Information System (INIS)

    We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of anisotropic pressure-free models. Adopting the Buchert scheme, we recast the averaged scalar equations in Bianchi-type form and close the standard system by introducing a propagation formula for the average shear magnitude. We then investigate the evolution of anisotropic average vacuum models and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. The presence of nonzero average shear in our equations also allows us to examine the constraints that a phase of backreaction-driven accelerated expansion might put on the anisotropy of the averaged domain. We close by assessing the status of these and other attempts to define and calculate 'average' spacetime behaviour in general relativity

  6. Average-energy games

    OpenAIRE

    Bouyer, Patricia; Markey, Nicolas; Randour, Mickael; Larsen, Kim G.; Laursen, Simon

    2015-01-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this ...

  7. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...... natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...

  8. Average Angular Velocity

    OpenAIRE

    Van Essen, H.

    2004-01-01

    This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to th...

  9. On the Averaging Principle

    OpenAIRE

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and interchangeability is O(\epsilon^2) equivalent to the outcome of the corresponding homogeneous model, where \epsilon is the level of heterogeneity. We then use this averaging pr...

  10. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong...... natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...

  11. Averaged extreme regression quantile

    OpenAIRE

    Jureckova, Jana

    2015-01-01

Various events in nature, economics, and other areas force us to combine the study of extremes with regression and other methods. A useful tool for reducing the role of nuisance regression, while we are interested in the shape or tails of the basic distribution, is provided by the averaged regression quantile, and namely by the averaged extreme regression quantile. Both are weighted means of regression quantile components, with weights depending on the regressors. Our primary interest is ...

  12. Averaging anisotropic cosmologies

    CERN Document Server

Barrow, John D.; Tsagas, Christos G.

    2006-01-01

    We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of pressure-free Bianchi-type models. Adopting the Buchert averaging scheme, we identify the kinematic backreaction effects by focussing on spacetimes with zero or isotropic spatial curvature. This allows us to close the system of the standard scalar formulae with a propagation equation for the shear magnitude. We find no change in the already known conditions for accelerated expansion. The backreaction terms are expressed as algebraic relations between the mean-square fluctuations of the models' irreducible kinematical variables. Based on these we investigate the early evolution of averaged vacuum Bianchi type $I$ universes and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. We also discuss the possibility of accelerated expansion due to ...

  13. Average Angular Velocity

    CERN Document Server

    Essén, H

    2003-01-01

    This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to three parts: center of mass, rotational, plus the remaining internal energy relative to an optimally translating and rotating frame.
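
    The moment-of-inertia-weighted average angular velocity sketched above amounts to solving I w = L about the center of mass; for a rigidly rotating particle set it recovers the rotation vector exactly. A hedged numpy sketch (all names and the test configuration are invented):

```python
import numpy as np

def average_angular_velocity(m, r, v):
    """Solve I w = L about the center of mass: the moment-of-inertia
    weighted average of the particle angular velocities."""
    m, r, v = (np.asarray(a, dtype=float) for a in (m, r, v))
    com = (m[:, None] * r).sum(axis=0) / m.sum()
    v_com = (m[:, None] * v).sum(axis=0) / m.sum()
    rp, vp = r - com, v - v_com                       # center-of-mass frame
    L = (m[:, None] * np.cross(rp, vp)).sum(axis=0)   # angular momentum
    I = sum(mi * ((ri @ ri) * np.eye(3) - np.outer(ri, ri))
            for mi, ri in zip(m, rp))                 # inertia tensor
    return np.linalg.solve(I, L)

# a rigidly rotating set of particles recovers its rotation vector exactly
omega = np.array([0.0, 0.0, 2.0])
r = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0], [0.0, -1.0, 0.5]])
m = np.array([1.0, 2.0, 1.5, 1.0])
v = np.cross(omega, r)                                # v_i = w x r_i
w_avg = average_angular_velocity(m, r, v)
```

    For non-rigid motion the same formula still applies, and w_avg then plays the role of the overall rotation rate that separates rotational from internal kinetic energy.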

  14. On sparsity averaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2013-01-01

Recent developments in Carrillo et al. (2012) and Carrillo et al. (2013) introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend associated simulations establishing the superiority of SARA to regularization methods based on sparsity in a single frame, for a generic spread spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.

  15. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong to...

  16. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
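
    The error-reduction idea, stripped of the lattice-QCD machinery, is that a cheap approximation averaged over many samples can be bias-corrected with a few exact evaluations. A toy numerical sketch with a constant-bias approximation (all numbers synthetic; the real method's lack of bias rests on covariant symmetry arguments not modeled here):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
truth = rng.normal(5.0, 1.0, N)                   # expensive "exact" values
approx = truth + 0.3 + rng.normal(0, 0.01, N)     # cheap, correlated, biased

def ama_estimate(exact_vals, approx_vals, n_exact):
    """AMA-style estimator (sketch): cheap mean over all samples plus a
    bias correction from a handful of exact evaluations."""
    correction = np.mean(exact_vals[:n_exact] - approx_vals[:n_exact])
    return approx_vals.mean() + correction

est = ama_estimate(truth, approx, n_exact=100)
```

    The correction term plays the role of the exact-minus-approximate difference in AMA; its cost scales with n_exact only, while the variance reduction comes from the full-sample cheap average.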

  17. The averaging principle

    OpenAIRE

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of \emph{differentiability} and \emph{interchangeability} is $O(\epsilon^2)$ equivalent to the outcome of the corresponding homogeneous model, where $\epsilon$ is the level of heterogeneity. We then us...

  18. Robust Averaging Level Control

    OpenAIRE

    Rosander, Peter; Isaksson, Alf; Löfberg, Johan; Forsman, Krister

    2011-01-01

    Frequent inlet flow changes typically cause problems for averaging level controllers. For a frequently changing inlet flow the upsets do not occur when the system is in steady state and the tank level at its set-point. For this reason the tuning of the level controller gets quite complicated, since not only the size of the upsets but also the time in between them relative to the hold up of the tank have to be considered. One way to obtain optimal flow filtering while directly accounting for futur...

  19. Negative Average Preference Utilitarianism

    Directory of Open Access Journals (Sweden)

    Roger Chao

    2012-03-01

For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the “harmful” event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current “positive” forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, it seems that a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).

  20. Average nuclear surface properties

    International Nuclear Information System (INIS)

The definition of the nuclear surface energy is discussed for semi-infinite matter. The definition is also extended to the case in which there is a neutron gas, instead of vacuum, on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction in this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry, past the neutron-drip line, to the point where the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)

  1. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    OpenAIRE

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

Purpose: To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods: Twenty-one eyes of 21 healthy volunteers were scanned with a non-eye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to...

  2. Average Range and Network Synchronizability

    International Nuclear Information System (INIS)

    The influence of structural properties of a network on the network synchronizability is studied by introducing a new concept of average range of edges. For both small-world and scale-free networks, the effect of average range on the synchronizability of networks with bounded or unbounded synchronization regions is illustrated through numerical simulations. The relations between average range, range distribution, average distance, and maximum betweenness are also explored, revealing the effects of these factors on the network synchronizability of the small-world and scale-free networks, respectively. (general)
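
    The range of an edge is usually defined as the shortest-path distance between its endpoints once that edge is removed. A small BFS-based sketch of the average range (adjacency-list representation assumed):

```python
from collections import deque
import math

def edge_range(adj, u, v):
    """Range of edge (u, v): shortest u-v distance with the edge removed."""
    seen, queue = {u}, deque([(u, 0)])
    while queue:
        x, d = queue.popleft()
        for y in adj[x]:
            if (x, y) in ((u, v), (v, u)):
                continue                 # skip the removed edge itself
            if y == v:
                return d + 1
            if y not in seen:
                seen.add(y)
                queue.append((y, d + 1))
    return math.inf                      # endpoints disconnected without the edge

def average_range(adj):
    """Mean range over all undirected edges of the graph."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    return sum(edge_range(adj, *tuple(e)) for e in edges) / len(edges)

# every edge of a 4-cycle has range 3; every edge of a triangle has range 2
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

    Long-range "shortcut" edges in small-world networks have large range, which is how this quantity ties into the synchronizability comparisons discussed above.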

  3. Physical Theories with Average Symmetry

    CERN Document Server

    Alamino, Roberto C

    2013-01-01

    This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.

  4. "Pricing Average Options on Commodities"

    OpenAIRE

    Kenichiro Shiraya; Akihiko Takahashi

    2010-01-01

    This paper proposes a new approximation formula for pricing average options on commodities under a stochastic volatility environment. In particular, it derives an option pricing formula under Heston and an extended lambda-SABR stochastic volatility models (which includes an extended SABR model as a special case). Moreover, numerical examples support the accuracy of the proposed average option pricing formula.

  5. Power convergence of Abel averages

    OpenAIRE

    Kozitsky, Yuri; Shoikhet, David; Zemanek, Jaroslav

    2012-01-01

Necessary and sufficient conditions are presented for the Abel averages of discrete and strongly continuous semigroups, $T^k$ and $T_t$, to be power convergent in the operator norm in a complex Banach space. These results also cover the case where $T$ is unbounded and the corresponding Abel average is defined by means of the resolvent of $T$. They complement the classical results by Michael Lin establishing sufficient conditions for the corresponding convergence for a bounded $T$.
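
    For a bounded operator the discrete Abel average is A_r = (1 - r) * sum_{k>=0} r^k T^k = (1 - r)(I - rT)^{-1}. A quick numerical check with a matrix of spectral radius below one (truncation length is an arbitrary choice):

```python
import numpy as np

def abel_average(T, r, terms=500):
    """Truncated discrete Abel average A_r = (1 - r) * sum_k r^k T^k."""
    A, P = np.zeros_like(T), np.eye(T.shape[0])
    for k in range(terms):
        A += (r ** k) * P
        P = P @ T
    return (1 - r) * A

T = np.array([[0.5, 0.2],
              [0.1, 0.4]])                              # spectral radius < 1
r = 0.9
A = abel_average(T, r)
A_closed = (1 - r) * np.linalg.inv(np.eye(2) - r * T)   # resolvent form
```

    The resolvent form on the last line is the definition that survives when $T$ is unbounded, which is the case the paper's results also cover.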

  6. High-average-power lasers

    International Nuclear Information System (INIS)

The goals of the High-Average-Power Laser Program at LLNL are to develop a broad technology base for solid state lasers and to demonstrate high-average-power laser operation with greater efficiency and higher beam quality than has been possible with current technology. Major activities are the zig-zag laser testbed and the gas-cooled-slab laser testbed. This section describes these activities, as well as material development, nonlinear optics, laser materials, and applications

  7. Vocal attractiveness increases by averaging.

    Science.gov (United States)

    Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal

    2010-01-26

Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increase by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047

  8. Sparsity Averaging for Compressive Imaging

    CERN Document Server

    Carrillo, Rafael E; Van De Ville, Dimitri; Thiran, Jean-Philippe; Wiaux, Yves

    2012-01-01

We propose a novel regularization method for sparse image reconstruction from compressive measurements. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We test our prior and the associated algorithm through extensive numerical simulations for spread spectrum and Gaussian acquisition schemes suggested by the recent theory of compressed sensing with coherent and redundant dictionaries. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. We also illustrate the performance of SARA in the context of Fourier imaging, for particular applications in astronomy and medicine.

  9. On generalized averaged Gaussian formulas

    Science.gov (United States)

    Spalevic, Miodrag M.

    2007-09-01

We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas, which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions $w(x)\equiv w^{(\alpha,\beta)}(x)=(1-x)^\alpha(1+x)^\beta$ ($\alpha,\beta>-1$) we give a necessary and sufficient condition on the parameters $\alpha$ and $\beta$ such that the optimal averaged Gaussian quadrature formulas are internal.

  10. On T-matrix averaging

    International Nuclear Information System (INIS)

    The T-matrix averaging procedure advocated by Burke, Berrington and Sukumar [1981, J. Phys. B. At. Mol. Phys. 14, 289] is demonstrated to hold in a class of soluble models for two different L2 basis expansions. The convergence rates as the bases are extended to completeness are determined. (author)

  11. Stochastic Approximation with Averaging Innovation

    CERN Document Server

    Laruelle, Sophie

    2010-01-01

The aim of the paper is to establish a convergence theorem for multi-dimensional stochastic approximation in a setting with innovations satisfying some averaging properties, and to study some applications. The averaging assumptions allow us to unify the framework where the innovations are generated (to solve problems from Numerical Probability) and the one with exogenous innovations (market data, output of a "device", e.g. an Euler scheme) with stationary or ergodic properties. We propose several fields of applications with random innovations or quasi-random numbers. In particular we provide in both settings a rule to tune the step of the algorithm. Finally we illustrate our results on five examples, notably in Finance.

  12. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    J C Travers

    2010-11-01

The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems, with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.

  13. Michel Parameters averages and interpretation

    International Nuclear Information System (INIS)

    The new measurements of Michel parameters in τ decays are combined to world averages. From these measurements model independent limits on non-standard model couplings are derived and interpretations in the framework of specific models are given. A lower limit of 2.5 tan β GeV on the mass of a charged Higgs boson in models with two Higgs doublets can be set and a 229 GeV limit on a right-handed W-boson in left-right symmetric models (95 % c.l.)

  14. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

Time domain averaging (TDA) is essentially a comb filter; it cannot extract the specified harmonics which may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to different extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal through adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the algorithm of FTDA, which can improve the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional ones. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the eccentric gear, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for the fault symptom extraction of rotating machinery.
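
    Conventional TDA, which the flexible method generalizes, simply averages an integer number of signal periods, attenuating noise and any component not synchronous with that period. A minimal sketch (signal, period and noise level invented):

```python
import numpy as np

def time_domain_average(x, period):
    """Classical TDA: slice the signal into whole periods and average them."""
    n = len(x) // period
    return x[: n * period].reshape(n, period).mean(axis=0)

rng = np.random.default_rng(0)
period, cycles = 100, 200
t = np.arange(period * cycles)
clean = np.sin(2 * np.pi * t[:period] / period)        # one noise-free cycle
x = np.sin(2 * np.pi * t / period) + rng.normal(0, 1.0, t.size)
avg = time_domain_average(x, period)                   # noise std shrinks ~1/sqrt(200)
```

    This sketch assumes the period is an exact integer number of samples; the period cutting error (PCE) that FTDA removes arises precisely when it is not.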

  15. Averaging along Uniform Random Integers

    CERN Document Server

    Janvresse, Élise

    2011-01-01

    Motivated by giving a meaning to "The probability that a random integer has initial digit d", we define a URI-set as a random set E of natural integers such that each n>0 belongs to E with probability 1/n, independently of other integers. This enables us to introduce two notions of densities on natural numbers: The URI-density, obtained by averaging along the elements of E, and the local URI-density, which we get by considering the k-th element of E and letting k go to infinity. We prove that the elements of E satisfy Benford's law, both in the sense of URI-density and in the sense of local URI-density. Moreover, if b_1 and b_2 are two multiplicatively independent integers, then the mantissae of a natural number in base b_1 and in base b_2 are independent. Connections of URI-density and local URI-density with other well-known notions of densities are established: Both are stronger than the natural density, and URI-density is equivalent to log-density. We also give a stochastic interpretation, in terms of URI-...

  16. Averages of Values of L-Series

    OpenAIRE

    Alkan, Emre; Ono, Ken

    2013-01-01

    We obtain an exact formula for the average of values of L-series over two independent odd characters. The average of any positive moment of values at s = 1 is then expressed in terms of finite cotangent sums subject to congruence conditions. As consequences, bounds on such cotangent sums, limit points for the average of first moment of L-series at s = 1 and the average size of positive moments of character sums related to the class number are deduced.

  17. Spectral averaging techniques for Jacobi matrices

    CERN Document Server

    del Rio, Rafael; Schulz-Baldes, Hermann

    2008-01-01

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  18. Average-cost based robust structural control

    Science.gov (United States)

    Hagood, Nesbitt W.

    1993-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.

  19. Coherent ensemble averaging techniques for impedance cardiography

    OpenAIRE

    Hurwitz, Barry E.; Shyu, Liang-Yu; Reddy, Sridhar P; Schneiderman, Neil; Nagel, Joachim H.

    1990-01-01

    EKG synchronized ensemble averaging of the impedance cardiogram tends to blur or suppress signal events due to signal jitter or event latency variability. Although ensemble averaging provides some improvement in the stability of the signal and signal to noise ratio under conditions of nonperiodic influences of respiration and motion, coherent averaging techniques were developed to determine whether further enhancement of the impedance cardiogram could be obtained. Physiological signals were o...
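
    The motivation for coherent averaging can be seen in a toy example: aligning each sweep to a reference by its cross-correlation lag before averaging preserves the peak amplitude that plain EKG-triggered ensemble averaging blurs under latency jitter (all signal parameters invented):

```python
import numpy as np

def coherent_ensemble_average(beats, ref=None):
    """Align each beat to a reference via its cross-correlation lag,
    then average; reduces blurring from trigger jitter (sketch)."""
    beats = [np.asarray(b, dtype=float) for b in beats]
    if ref is None:
        ref = beats[0]
    n = len(ref)
    aligned = []
    for b in beats:
        lag = int(np.argmax(np.correlate(b, ref, mode="full"))) - (n - 1)
        aligned.append(np.roll(b, -lag))        # undo the estimated latency
    return np.mean(aligned, axis=0)

x = np.arange(200)
template = np.exp(-0.5 * ((x - 100) / 3.0) ** 2)   # synthetic "beat" waveform
jitter = [-5, -3, -1, 0, 1, 3, 5]                  # trigger latency variability
beats = [np.roll(template, d) for d in jitter]
naive = np.mean(beats, axis=0)                     # plain ensemble average
coherent = coherent_ensemble_average(beats)
```

    The naive average smears the pulse and lowers its peak, while the latency-corrected average keeps the waveform sharp, which is the effect the coherent techniques above aim to exploit.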

  20. MEASUREMENT AND MODELLING AVERAGE PHOTOSYNTHESIS OF MAIZE

    OpenAIRE

Zs. Lőke

    2005-01-01

The photosynthesis of fully developed maize was investigated at the Agrometeorological Research Station Keszthely in 2000. We used LI-6400 measurement equipment to locate measurement points where the intensity of photosynthesis most closely approaches the average, so that average photosynthetic activities characterizing the crop could later be obtained with only one measurement. To check the average photosynthesis of maize we also used Goudriaan’s simulation model (CMSM) to calculate values on cloudless sampl...

  1. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  2. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA 1, our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction 2, which

  3. A note on generalized averaged Gaussian formulas

    Science.gov (United States)

    Spalevic, Miodrag

    2007-11-01

    We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss-Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss-Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.

  4. Average Transmission Probability of a Random Stack

    Science.gov (United States)

    Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg

    2010-01-01

    The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
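    The distinction drawn here, between averaging ln T and averaging T itself, can be illustrated with a toy Monte Carlo model (hypothetical uniform per-slab transmission factors, not the paper's physics): by the arithmetic-geometric mean inequality, the sample average of T always exceeds the "typical" value exp⟨ln T⟩.

    ```python
    import math
    import random

    def sample_transmission(n_slabs, rng):
        # Toy model, not the paper's physics: each slab-plus-gap contributes
        # an independent transmission factor drawn uniformly from (0.5, 1.0).
        t = 1.0
        for _ in range(n_slabs):
            t *= rng.uniform(0.5, 1.0)
        return t

    rng = random.Random(0)
    samples = [sample_transmission(20, rng) for _ in range(20000)]

    avg_T = sum(samples) / len(samples)                           # <T>
    typ_T = math.exp(sum(map(math.log, samples)) / len(samples))  # exp(<ln T>)
    print(avg_T > typ_T)  # True: <T> always exceeds the geometric mean exp(<ln T>)
    ```

    The gap between the two averages is exactly why the two averaging conventions give different answers for random stacks.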

  5. Labour Turnover Costs and Average Labour Demand

    OpenAIRE

    Bertola, Giuseppe

    1991-01-01

    The effect of labour turnover costs on average employment in a partial equilibrium model of labour demand, depends on the form of the revenue function, on the rates of discount and labour attrition, and on the relative size of hiring and firing costs. If discount and attrition rates are strictly positive, firing costs may well increase average employment even when hiring costs reduce it.

  6. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the applicable emission limitation in § 76.5, 76.6, or 76.7,...

  7. New results on averaging theory and applications

    Science.gov (United States)

    Cândido, Murilo R.; Llibre, Jaume

    2016-08-01

    The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function vanishes at it, the classical averaging theory does not provide information about the associated periodic solution. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we give two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.

  8. The Hubble rate in averaged cosmology

    International Nuclear Information System (INIS)

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions

  9. Average luminosity distance in inhomogeneous universes

    CERN Document Server

    Kostov, Valentin

    2010-01-01

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A form...

  10. Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.

    Science.gov (United States)

    Caruk, Joan Marie

    To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…

  11. Time averaging of instantaneous quantities in HYDRA

    Energy Technology Data Exchange (ETDEWEB)

    McCallen, R.C.

    1996-09-01

    For turbulent flow, the evaluation of direct numerical simulations (DNS), where all scales are resolved, and large-eddy simulations (LES), where only large scales are resolved, is difficult because the results are three-dimensional and transient. To simplify the analysis, the instantaneous flow field can be averaged in time for evaluation and comparison to experimental results. The incompressible Navier-Stokes flow code HYDRA has been modified for calculation of time-averaged quantities for both DNS and LES. This report describes how time averages of instantaneous quantities are generated during program execution (i.e., while generating the instantaneous quantities, instead of as a postprocessing operation). The calculations are performed during program execution to avoid storing values at each time step and thus to reduce storage requirements. The methods used in calculating the time-averaged velocities, the turbulent intensities ⟨u″²⟩, ⟨v″²⟩, and ⟨w″²⟩, and the turbulent shear ⟨u″v″⟩ are outlined; the brackets ⟨⟩ used here represent a time average. The described averaging methods were implemented in the HYDRA code for three-dimensional problem solutions. Also presented is a method for taking the time averages for a number of consecutive intervals and calculating the time average for the sum of the intervals. This method could be used for code restarts or further postprocessing of the time averages from consecutive intervals. This method was not used in the HYDRA implementation, but is included here for completeness. In HYDRA, the running sums needed for time averaging are simply written to the restart dump.
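    The running-sum scheme described above (accumulate during execution rather than post-process, and merge averages from consecutive intervals, e.g. across restarts) can be sketched as follows; the class and names are illustrative, not HYDRA's actual code:

    ```python
    class RunningTimeAverage:
        """Accumulate a time average during execution: keep only the running
        time-weighted sum and total time, never the per-step history."""
        def __init__(self):
            self.weighted_sum = 0.0
            self.total_time = 0.0

        def add(self, value, dt):
            # Called once per time step with the instantaneous quantity.
            self.weighted_sum += value * dt
            self.total_time += dt

        def average(self):
            return self.weighted_sum / self.total_time

        def merge(self, other):
            # Combine averages from two consecutive intervals (the restart /
            # postprocessing case): the running sums simply add.
            merged = RunningTimeAverage()
            merged.weighted_sum = self.weighted_sum + other.weighted_sum
            merged.total_time = self.total_time + other.total_time
            return merged

    first, second = RunningTimeAverage(), RunningTimeAverage()
    for t in range(5):
        first.add(float(t), 0.1)       # interval 1: samples 0..4
    for t in range(5, 10):
        second.add(float(t), 0.1)      # interval 2: samples 5..9
    print(first.merge(second).average())  # time average over both intervals, ~4.5
    ```

    Writing the two running sums to a restart dump, as the report describes, is exactly what makes `merge` possible after a restart.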

  12. Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.

    Science.gov (United States)

    Alvarez-Castro, José M; Yang, Rong-Cai

    2012-01-01

    Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one-locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help to understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178

  13. Clarifying the relationship between average excesses and average effects of allele substitutions

    Directory of Open Access Journals (Sweden)

    Jose M eÁlvarez-Castro

    2012-03-01

    Full Text Available Fisher’s concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one-locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance.
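    As a concrete illustration of the one-locus two-allele setting discussed above, the standard textbook formula (not taken from this paper) for Fisher's average effect of an allele substitution, with genotypic values -a, d, +a and allele frequency p under Hardy-Weinberg equilibrium, is alpha = a + d(q - p):

    ```python
    def average_effect(a, d, p):
        """Textbook average effect of an allele substitution at one locus
        with two alleles: genotypic values -a, d, +a, allele frequency p of
        A1 (q = 1 - p), assuming Hardy-Weinberg equilibrium. Under random
        mating the average excess equals this average effect."""
        q = 1.0 - p
        return a + d * (q - p)

    print(average_effect(a=1.0, d=0.5, p=0.3))  # alpha = 1.0 + 0.5*(0.7 - 0.3) ≈ 1.2
    ```

    The paper's contribution is a generalized orthogonal-contrast formulation; this sketch only shows the classical biallelic special case that the contrasts reduce to.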

  14. Small scale magnetic flux-averaged magnetohydrodynamics

    International Nuclear Information System (INIS)

    By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends for solar coronal loops is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not over times shorter than τ. These modified boundary contributions correspond to the existence, also, of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and in one-dimensional geometry predicts solitons and shocks in different limits

  15. Self-averaging characteristics of spectral fluctuations

    OpenAIRE

    Braun, Petr; Haake, Fritz

    2014-01-01

    The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second a small imaginary part of the quasi-energy. Self-averaging universal (like the CUE average) behavior is found f...

  16. Averaged Lemaître-Tolman-Bondi dynamics

    CERN Document Server

    Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried

    2016-01-01

    We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.

  17. Experimental Demonstration of Squeezed State Quantum Averaging

    CERN Document Server

    Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
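    The classical counterpart of the comparison made in this experiment is easy to check numerically: the harmonic mean of a set of variances never exceeds their arithmetic mean (toy numbers below, unrelated to the experiment's data):

    ```python
    def harmonic_mean(values):
        return len(values) / sum(1.0 / v for v in values)

    def arithmetic_mean(values):
        return sum(values) / len(values)

    # Two sources with fluctuating noise variances (toy numbers):
    variances = [0.5, 2.0]
    h = harmonic_mean(variances)    # 2 / (1/0.5 + 1/2.0) = 0.8
    m = arithmetic_mean(variances)  # 1.25
    print(h, m)  # 0.8 1.25; the harmonic mean never exceeds the arithmetic mean
    ```

    This ordering is why a harmonic-mean protocol can outperform the arithmetic-mean strategy when stabilizing fluctuating noise levels.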

  18. Average Shape of Transport-Limited Aggregates

    Science.gov (United States)

    Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.

    2005-08-01

    We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.

  19. Average Vegetation Growth 1992 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  20. Average Vegetation Growth 1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  1. Average Vegetation Growth 1991 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1991 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  2. Average Vegetation Growth 1993 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  3. Average Vegetation Growth 1998 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  4. Average Vegetation Growth 1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1999 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  5. Average Vegetation Growth 1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  6. Average Vegetation Growth 2003 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2003 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  7. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  8. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using...

  9. Averaging procedure in variable-G cosmologies

    CERN Document Server

    Cardone, Vincenzo F

    2008-01-01

    Previous work in the literature had built a formalism for spatially averaged equations for the scale factor, giving rise to an averaged Raychaudhuri equation and averaged Hamiltonian constraint, which involve a backreaction source term. The present paper extends these equations to include models with variable Newton parameter and variable cosmological term, motivated by the non-perturbative renormalization program for quantum gravity based upon the Einstein--Hilbert action. The coupling between backreaction and spatially averaged three-dimensional scalar curvature is found to survive, and all equations involving contributions of a variable Newton parameter are worked out in detail. Interestingly, under suitable assumptions, an approximate solution can be found where the universe tends to a FLRW model, while keeping track of the original inhomogeneities through two effective fluids.

  10. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  11. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets...

  12. Average Vegetation Growth 1997 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  13. Average Vegetation Growth 2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...

  14. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  15. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results using the NS2 simulator.
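    The weighted fair sharing underlying such WFQ bandwidth models can be sketched as an iterative weighted max-min computation; this is a generic illustration with assumed inputs, not the authors' exact iterative method:

    ```python
    def wfq_average_bandwidth(link_rate, weights, demands):
        """Iterative weighted max-min fair share. Flows whose average input
        rate is below their weighted share of the remaining capacity simply
        get their demand; the leftover capacity is then redistributed among
        the remaining flows in proportion to their weights."""
        alloc = {}
        active = set(weights)
        capacity = link_rate
        while active:
            total_w = sum(weights[f] for f in active)
            share = {f: capacity * weights[f] / total_w for f in active}
            satisfied = {f for f in active if demands[f] <= share[f]}
            if not satisfied:
                alloc.update(share)  # every remaining flow is link-limited
                break
            for f in satisfied:
                alloc[f] = demands[f]
                capacity -= demands[f]
            active -= satisfied
        return alloc

    # Three flows on a 10 Mbit/s link (all numbers hypothetical):
    alloc = wfq_average_bandwidth(
        link_rate=10.0,
        weights={"a": 1, "b": 1, "c": 2},
        demands={"a": 1.0, "b": 8.0, "c": 8.0},
    )
    print(sorted(alloc.items()))  # [('a', 1.0), ('b', 3.0), ('c', 6.0)]
    ```

    Flow "a" is satisfied below its fair share, so its unused capacity is split between "b" and "c" according to their 1:2 weights.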

  16. Development of average wages in CR regions

    OpenAIRE

    Bejvlová, Jana

    2013-01-01

    The purpose of this study is to analyse trends in average gross monthly earnings of employees (individuals) in particular regions of the Czech Republic. The analysed time series begin in 2000, as the regions were decisively established on 1st January 2000 and their self-governing competencies were introduced by the Act No. 129/2000 Coll., on Regions (Establishment of Regions). The researched period ends in 2010. Based on model construction of referential sets, the study predicts average ...

  17. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements, making it scalable to “big noisy data.” We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie.
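    The per-element trimming that makes TGA robust to pixel outliers can be illustrated with a one-dimensional trimmed mean (illustrative parameters; TGA itself averages on the Grassmannian and applies such trimming per element):

    ```python
    def trimmed_average(values, trim_fraction=0.2):
        # Sort, drop the lowest and highest trim_fraction of the samples,
        # and average what remains; this bounds the influence of outliers.
        v = sorted(values)
        k = int(len(v) * trim_fraction)
        kept = v[k:len(v) - k] if k > 0 else v
        return sum(kept) / len(kept)

    pixels = [0.9, 1.0, 1.1, 1.0, 50.0]          # one gross pixel outlier
    plain = sum(pixels) / len(pixels)            # 10.8, ruined by the outlier
    robust = trimmed_average(pixels, 0.2)        # about 1.03
    print(plain, robust)
    ```

    The plain mean is dragged to 10.8 by a single corrupted pixel, while the trimmed average stays near the true value, the same effect TGA exploits at scale.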

  18. Hyperplane Arrangements with Large Average Diameter

    OpenAIRE

    Deza, Antoine; Xie, Feng

    2007-01-01

    The largest possible average diameter of a bounded cell of a simple hyperplane arrangement is conjectured to be not greater than the dimension. We prove that this conjecture holds in dimension 2, and is asymptotically tight in fixed dimension. We give the exact value of the largest possible average diameter for all simple arrangements in dimension 2, for arrangements having at most the dimension plus 2 hyperplanes, and for arrangements having 6 hyperplanes in dimension 3. In dimension 3, we g...

  19. The Hubble rate in averaged cosmology

    OpenAIRE

    Umeh, Obinna; Larena, Julien; Clarkson, Chris

    2010-01-01

    The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaitre-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate ...

  20. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed to computation of time-dependent statistical average gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles

  1. Averaging Problem in Cosmology and Macroscopic Gravity

    OpenAIRE

    Zalaletdinov, Roustam

    2007-01-01

    The Averaging problem in general relativity and cosmology is discussed. The approach of macroscopic gravity to resolve the problem is presented. An exact cosmological solution to the equations of macroscopic gravity is given and its properties are discussed. Contents: 1. Introduction to General Relativity 2. General Relativity -> Relativistic Cosmology 3. Introduction to Relativistic Cosmology 4. Relativistic Cosmology -> Mathematical Cosmology 5. Averaging Problem in Relativistic Cosmology 6...

  2. Method of averaging in Clifford algebras

    OpenAIRE

    Shirokov, D. S.

    2014-01-01

    In this paper we consider different operators acting on Clifford algebras. We consider the Reynolds operator of Salingaros' vee group. This operator "averages" an action of Salingaros' vee group on the Clifford algebra. We also consider the conjugate action on the Clifford algebra. We present a relation between these operators and projection operators onto fixed subspaces of Clifford algebras. Using the method of averaging we present solutions of a system of commutator equations.

  3. Modeling and Instability of Average Current Control

    OpenAIRE

    Fang, Chung-Chieh

    2012-01-01

    Dynamics and stability of average current control of DC-DC converters are analyzed by sampled-data modeling. Orbital stability is studied and it is found unrelated to the ripple size of the orbit. Compared with the averaged modeling, the sampled-data modeling is more accurate and systematic. An unstable range of compensator pole is found by simulations, and is predicted by sampled-data modeling and harmonic balance modeling.

  4. Disk-averaged synthetic spectra of Mars

    OpenAIRE

    Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a f...

  5. Self-averaging characteristics of spectral fluctuations

    Science.gov (United States)

    Braun, Petr; Haake, Fritz

    2015-04-01

    The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second, a small imaginary part of the quasi-energy. Self-averaging universal (like the circular unitary ensemble (CUE) average) behavior is found for the smoothed correlator, apart from noise which shrinks like 1/√N as the dimension N of the quantum Hilbert space grows. There are periodically repeated quasi-energy windows of correlation decay and revival wherein the smoothed correlation remains finite as N → ∞ such that the noise is negligible. In between those windows (where the CUE averaged correlator takes on values of the order 1/N²) the noise becomes dominant and self-averaging is lost. We conclude that the noise forbids distinction of CUE and GUE-type behavior. Surprisingly, the underlying smoothed generating function does not enjoy any self-averaging outside the range of its variables relevant for determining the two-point correlator (and certain higher-order ones). We corroborate our numerical findings for the noise by analytically determining the CUE variance of the smoothed single-matrix correlator.

  6. Comparison of Mouse Brain DTI Maps Using K-space Average, Image-space Average, or No Average Approach

    OpenAIRE

    Sun, Shu-Wei; Mei, Jennifer; Tuel, Keelan

    2013-01-01

    Diffusion tensor imaging (DTI) is achieved by collecting a series of diffusion-weighted images (DWIs). Signal averaging of multiple repetitions can be performed in the k-space (k-avg) or in the image space (m-avg) to improve the image quality. Alternatively, one can treat each acquisition as an independent image and use all of the data to reconstruct the DTI without doing any signal averaging (no-avg). To compare these three approaches, in this study, in vivo DTI data was collected from five ...
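    One property worth keeping in mind when comparing these approaches: for complex data, averaging in k-space and averaging reconstructed complex images are mathematically identical, because the Fourier transform is linear; differences between k-avg and m-avg arise only once nonlinear steps (such as taking magnitudes) intervene. A minimal numpy sketch with synthetic data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    truth = rng.normal(size=64) + 1j * rng.normal(size=64)       # synthetic k-space line
    reps = [truth + 0.1 * (rng.normal(size=64) + 1j * rng.normal(size=64))
            for _ in range(4)]                                   # four noisy repetitions

    # k-avg: average the complex k-space data, then reconstruct once.
    k_avg_img = np.fft.ifft(np.mean(reps, axis=0))
    # m-avg on complex images: reconstruct each repetition, then average.
    m_avg_img = np.mean([np.fft.ifft(r) for r in reps], axis=0)

    print(np.allclose(k_avg_img, m_avg_img))  # True: the Fourier transform is linear
    ```

    In practice m-avg is usually applied to magnitude images, where this equivalence breaks down, which is what makes the three approaches compared in the study genuinely different.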

  7. Basics of averaging of the Maxwell equations

    CERN Document Server

    Chipouline, A; Tretyakov, S

    2011-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and the respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective of the type of material studied) which have to be satisfied. Any homogenization model has to be consistent with these basic principles; a model that does not conform to them cannot be accepted as credible. Another goal of this paper is to establish the averaging procedure for metamaterials, which is rather close to the case of compound materials but should include the magnetic response of the inclusi...

  8. Books average previous decade of economic misery.

    Directory of Open Access Journals (Sweden)

    R Alexander Bentley

    Full Text Available For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
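    The "moving average of the previous decade" is a trailing window: the literary index in year t is compared with the mean of the economic index over the preceding window of years. A minimal sketch with made-up numbers:

    ```python
    def trailing_moving_average(series, window):
        # The value assigned to year t is the mean of the `window` years
        # strictly before t, i.e. a "previous decade" style average.
        out = {}
        years = sorted(series)
        for i, year in enumerate(years):
            if i >= window:
                prev = [series[y] for y in years[i - window:i]]
                out[year] = sum(prev) / window
        return out

    # Hypothetical misery-index values (inflation + unemployment, made up):
    misery = {1970: 10.0, 1971: 9.0, 1972: 8.0, 1973: 11.0, 1974: 16.0}
    print(trailing_moving_average(misery, 3))  # {1973: 9.0, 1974: 9.333333333333334}
    ```

    The study's window of 11 years plays the role of `window` here; the first `window` years get no value, which is why the fitted series starts a decade after the data do.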

  9. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of the inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that the millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  10. Cosmic structure, averaging and dark energy

    CERN Document Server

    Wiltshire, David L

    2013-01-01

    These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...

  11. Average Cycle Period in Asymmetrical Flashing Ratchet

    Institute of Scientific and Technical Information of China (English)

    WANG Hai-Yan; HE Hou-Sheng; BAO Jing-Dong

    2005-01-01

    The directed motion of a Brownian particle in a flashing potential with various transition probabilities and waiting times in one of two states is studied. An expression for the average cycle period is proposed and the steady current J of the particle is calculated via Langevin simulation. The results show that the optimal cycle period rm, at which J is maximal, shifts to smaller values when the transition probability λ from the potential-on to the potential-off state decreases; the maximal current appears when the average waiting time in the potential-on state is longer than in the potential-off state; and the direction of the current depends on the ratio of the average waiting times in the two states.

  12. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
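The gradient-based stochastic extremum seeking idea can be illustrated with a minimal sketch: perturb the input of an unknown map with a random dither, form a gradient estimate from the observed output change, and ascend it. The map, gains, and dither below are illustrative assumptions, not the book's algorithms.

```python
import random

def stochastic_extremum_seek(J, theta, gain=0.05, amp=0.1, steps=500, seed=0):
    """Model-free maximization of an unknown map J using a random dither.

    Each step draws a +/-1 perturbation d, estimates the gradient from the
    finite difference (J(theta + amp*d) - J(theta)) / amp * d, and takes a
    gradient-ascent step. Gains and dither size are illustrative, not tuned.
    """
    rng = random.Random(seed)
    for _ in range(steps):
        d = rng.choice((-1.0, 1.0))              # stochastic perturbation
        grad_est = (J(theta + amp * d) - J(theta)) / amp * d
        theta += gain * grad_est                 # gradient-ascent update
    return theta

# Unknown map with its maximum at theta = 2 (hypothetical example)
theta_star = stochastic_extremum_seek(lambda t: -(t - 2.0) ** 2, theta=0.0)
```

The iterate settles near the maximizer with a residual wobble of order `gain * amp`, which is the basic trade-off stochastic averaging theory makes precise.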

  13. Matrix averages relating to Ginibre ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Forrester, Peter J [Department of Mathematics and Statistics, University of Melbourne, Victoria 3010 (Australia); Rains, Eric M [Department of Mathematics, California Institute of Technology, Pasadena, CA 91125 (United States)], E-mail: p.forrester@ms.unimelb.edu.au

    2009-09-25

    The theory of zonal polynomials is used to compute the average of a Schur polynomial of argument AX, where A is a fixed matrix and X is from the real Ginibre ensemble. This generalizes a recent result of Sommers and Khoruzhenko (2009 J. Phys. A: Math. Theor. 42 222002), and furthermore allows analogous results to be obtained for the complex and real quaternion Ginibre ensembles. As applications, the positive integer moments of the general variance Ginibre ensembles are computed in terms of generalized hypergeometric functions; these are written in terms of averages over matrices of the same size as the moment to give duality formulas, and the averages of the power sums of the eigenvalues are expressed as finite sums of zonal polynomials.

  14. High Average Power Yb:YAG Laser

    Energy Technology Data Exchange (ETDEWEB)

    Zapata, L E; Beach, R J; Payne, S A

    2001-05-23

    We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.

  15. Books Average Previous Decade of Economic Misery

    OpenAIRE

    R Alexander Bentley; Alberto Acerbi; Paul Ormerod; Vasileios Lampos

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of the inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is signific...

  16. On the average pairing energy in nuclei

    International Nuclear Information System (INIS)

    The macroscopic-microscopic method is applied to calculate the nuclear energies, especially the microscopic shell and pairing corrections. The single-particle levels are obtained with the Yukawa folded mean-field potential. The macroscopic energy is evaluated using the Lublin-Strasbourg Drop model. The shell corrections are obtained using the Strutinsky method with smoothing in nucleon number space. The average nuclear pairing energy is also determined by folding the BCS sums in nucleon number space. The average pairing energy dependence on the nuclear elongation is investigated. (author)

  17. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    J M M Senovilla

    2007-07-01

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompletness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear decisive difference between singular and non-singular cosmologies.

  18. Changing mortality and average cohort life expectancy

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2005-01-01

    Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of......, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four...

  19. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  20. An improved moving average technical trading rule

    Science.gov (United States)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
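A minimal 'long only' sketch of a moving-average cross-over entry combined with a dynamic trailing-stop exit is shown below. The window, stop fraction, and price series are illustrative assumptions; the paper's actual dynamic threshold rule is not reproduced here.

```python
def crossover_with_trailing_stop(prices, window=5, stop_frac=0.05):
    """Long-only moving-average cross-over with a trailing-stop exit.

    Enter when price closes above its trailing moving average; exit when
    price falls more than `stop_frac` below the highest price seen since
    entry. Returns a list of (entry_index, exit_index) pairs.
    """
    trades, in_pos, peak, entry = [], False, 0.0, None
    for i in range(window, len(prices)):
        ma = sum(prices[i - window:i]) / window
        p = prices[i]
        if not in_pos:
            if p > ma:                            # cross-over 'buy' signal
                in_pos, entry, peak = True, i, p
        else:
            peak = max(peak, p)
            if p < peak * (1.0 - stop_frac):      # dynamic trailing stop
                trades.append((entry, i))
                in_pos = False
    if in_pos:                                    # close any open position
        trades.append((entry, len(prices) - 1))
    return trades

prices = [10, 10, 10, 10, 10, 11, 12, 13, 12.0, 11.5, 10.0, 10.0, 10.0]
trades = crossover_with_trailing_stop(prices, window=5, stop_frac=0.05)
```

Here the strategy enters on the first close above the moving average and exits when the price drops 5% below the running peak, which caps the drawdown of each trade.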

  1. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with average powers of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam than a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  2. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  3. Average utility maximization: A preference foundation

    NARCIS (Netherlands)

    A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)

    2014-01-01

    This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the rich structure naturally provided by the variable length of the sequences.

  4. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, βΘ, is derived. A method for unobtrusively measuring the quantities used to evaluate βΘ in Extrap T1 is described. The results of a series of measurements yielding βΘ as a function of the externally applied toroidal field are presented. (author)

  5. A Gaussian Average Property for Banach Spaces

    OpenAIRE

    Casazza, Peter G.; Nielsen, Niels Jorgen

    1996-01-01

    In this paper we investigate a Gaussian average property of Banach spaces. This property is weaker than the Gordon Lewis property but closely related to this and other unconditional structures. It is also shown that this property implies that certain Hilbert space valued operators defined on subspaces of the given space can be extended.

  6. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l...

  7. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single...

  8. A Functional Measurement Study on Averaging Numerosity

    Science.gov (United States)

    Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio

    2014-01-01

    In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…

  9. Reformulation of Ensemble Averages via Coordinate Mapping.

    Science.gov (United States)

    Schultz, Andrew J; Moustafa, Sabry G; Lin, Weisong; Weinstein, Steven J; Kofke, David A

    2016-04-12

    A general framework is established for reformulation of the ensemble averages commonly encountered in statistical mechanics. This "mapped-averaging" scheme allows approximate theoretical results that have been derived from statistical mechanics to be reintroduced into the underlying formalism, yielding new ensemble averages that represent exactly the error in the theory. The result represents a distinct alternative to perturbation theory for methodically employing tractable systems as a starting point for describing complex systems. Molecular simulation is shown to provide one appealing route to exploit this advance. Calculation of the reformulated averages by molecular simulation can proceed without contamination by noise produced by behavior that has already been captured by the approximate theory. Consequently, accurate and precise values of properties can be obtained while using less computational effort, in favorable cases, many orders of magnitude less. The treatment is demonstrated using three examples: (1) calculation of the heat capacity of an embedded-atom model of iron, (2) calculation of the dielectric constant of the Stockmayer model of dipolar molecules, and (3) calculation of the pressure of a Lennard-Jones fluid. It is observed that improvement in computational efficiency is related to the appropriateness of the underlying theory for the condition being simulated; the accuracy of the result is however not impacted by this. The framework opens many avenues for further development, both as a means to improve simulation methodology and as a new basis to develop theories for thermophysical properties. PMID:26950263

  10. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...

  11. A Measure of the Average Intercorrelation

    Science.gov (United States)

    Meyer, Edward P.

    1975-01-01

    Bounds are obtained for a coefficient proposed by Kaiser as a measure of average correlation and the coefficient is given an interpretation in the context of reliability theory. It is suggested that the root-mean-square intercorrelation may be a more appropriate measure of degree of relationships among a group of variables. (Author)

  12. Full averaging of fuzzy impulsive differential inclusions

    Directory of Open Access Journals (Sweden)

    Natalia V. Skripnik

    2010-09-01

    In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend similar results for impulsive differential inclusions with the Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).

  13. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for appr...

  14. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...

  15. High average-power induction linacs

    International Nuclear Information System (INIS)

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  16. Error estimates on averages of correlated data

    International Nuclear Information System (INIS)

    We describe how the true statistical error on an average of correlated data can be obtained with ease and efficiency by a renormalization group method. The method is illustrated with numerical and analytical examples, having finite as well as infinite range correlations. (orig.)

  17. Average Equivalent Diameter of A Particulate Material

    OpenAIRE

    AL-MAGHRABI, Mohammed-Noor N. H.

    2010-01-01

    In the field of mineral processing, it is important to determine the size of a particle. A method of defining an average diameter for a collection of particles is presented. The theoretical basis developed for the purpose is verified by a specially designed experimental technique.  Key words: mineral processing, particle size, equivalent diameter

  19. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...

  20. From cellular doses to average lung dose

    International Nuclear Information System (INIS)

    Sensitive basal and secretory cells receive a wide range of doses in human bronchial and bronchiolar airways. Variations of cellular doses arise from the location of target cells in the bronchial epithelium of a given airway and the asymmetry and variability of airway dimensions of the lung among airways in a given airway generation and among bronchial and bronchiolar airway generations. To derive a single value for the average lung dose which can be related to epidemiologically observed lung cancer risk, appropriate weighting scenarios have to be applied. Potential biological weighting parameters are the relative frequency of target cells, the number of progenitor cells, the contribution of dose enhancement at airway bifurcations, the promotional effect of cigarette smoking and, finally, the application of appropriate regional apportionment factors. Depending on the choice of weighting parameters, detriment-weighted average lung doses can vary by a factor of up to 4 for given radon progeny exposure conditions. (authors)

  1. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and the interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity into the factors affecting it is carried out by means of the u-substitution method.

  2. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ2 distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
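The exponential averaging of successive periodograms referred to above can be written as a one-line recursion; the smoothing constant alpha below is an illustrative stand-in for the time constant discussed in the abstract.

```python
def exponential_average(periodograms, alpha):
    """Recursive PSD estimate: S_k = (1 - alpha) * S_{k-1} + alpha * P_k.

    `periodograms` is a sequence of periodograms (lists of equal length,
    one value per frequency bin); S_0 is taken as the first periodogram.
    Smaller alpha (longer time constant) gives a smoother estimate.
    """
    psd = list(periodograms[0])
    for pgram in periodograms[1:]:
        psd = [(1.0 - alpha) * s + alpha * p for s, p in zip(psd, pgram)]
    return psd

# A constant input is a fixed point of the recursion
est = exponential_average([[2.0, 4.0]] * 8, alpha=0.25)
```

For an uncorrelated input, the steady-state variance of each bin is reduced by roughly a factor alpha / (2 - alpha) relative to a single periodogram, which is one way to see the Gaussian limit for small alpha.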

  3. Endogenous average cost based access pricing

    OpenAIRE

    Fjell, Kenneth; Foros, Øystein; Pal, Debashis

    2006-01-01

    We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...

  4. Extended Bidirectional Texture Function Moving Average Model

    Czech Academy of Sciences Publication Activity Database

    Havlíček, Michal

    Praha: České vysoké učení technické v Praze, 2015 - (Ambrož, P.; Masáková, Z.), s. 1-7 [Doktorandské dny 2015. Praha (CZ), 20.11.2015,27.11.2015] Institutional support: RVO:67985556 Keywords : Bidirectional texture function * moving average random field model Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2016/RO/havlicek-0455325.pdf

  5. Average Drift Analysis and Population Scalability

    OpenAIRE

    He, Jun; Yao, Xin

    2013-01-01

    This paper aims to study how the population size affects the computation time of evolutionary algorithms in a rigorous way. The computation time of an evolutionary algorithm can be measured by either the expected number of generations (hitting time) or the expected number of fitness evaluations (running time) to find an optimal solution. Population scalability is the ratio of the expected hitting time between a benchmark algorithm and an algorithm using a larger population size. Average drift...

  6. Average Regression-Adjusted Controlled Regenerative Estimates

    OpenAIRE

    Lewis, Peter A.W.; Ressler, Richard

    1991-01-01

    Proceedings of the 1991 Winter Simulation Conference, Barry L. Nelson, W. David Kelton, Gordon M. Clark (eds.) One often uses computer simulations of queueing systems to generate estimates of system characteristics along with estimates of their precision. Obtaining precise estimates, especially for high traffic intensities, can require large amounts of computer time. Average regression-adjusted controlled regenerative estimates result from combining the two techniques ...

  7. Time-dependent angularly averaged inverse transport

    OpenAIRE

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured al...

  8. Average Light Intensity Inside a Photobioreactor

    Directory of Open Access Journals (Sweden)

    Herby Jean

    2011-01-01

    For energy production, microalgae are one of the few alternatives with high potential. Similar to plants, algae require energy acquired from light to grow. This project uses calculus to determine the light intensity inside a photobioreactor filled with algae. Under preset conditions along with estimated values, we applied Lambert-Beer's law to formulate an equation for how much light intensity escapes the photobioreactor and to determine the average light intensity present inside the reactor.
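The averaging the abstract describes amounts to averaging the Lambert-Beer profile I(x) = I0·e^(−kx) over the reactor depth L, which has the closed form I0·(1 − e^(−kL))/(kL). The sketch below checks that closed form against a numerical average; the values of I0, k, and L are illustrative, not the paper's.

```python
import math

def average_intensity(i0, k, length, steps=10000):
    """Numerically average I(x) = i0 * exp(-k x) over 0 <= x <= length
    using the midpoint rule, as for light decaying through a reactor."""
    dx = length / steps
    total = sum(i0 * math.exp(-k * (j + 0.5) * dx) for j in range(steps))
    return total * dx / length

def average_intensity_closed(i0, k, length):
    """Closed form: (1/L) * integral of i0*e^(-kx) = i0*(1 - e^(-kL))/(kL)."""
    return i0 * (1.0 - math.exp(-k * length)) / (k * length)

# Illustrative values: incident intensity 100, attenuation k = 2 per unit depth
num = average_intensity(100.0, 2.0, 1.0)
exact = average_intensity_closed(100.0, 2.0, 1.0)
```

The two agree to numerical precision, confirming that the average intensity is well below the incident value even for a moderately absorbing culture.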

  9. A Visibility Graph Averaging Aggregation Operator

    OpenAIRE

    Chen, Shiyu; Hu, Yong; Mahadevan, Sankaran; Deng, Yong

    2013-01-01

    The problem of aggregation is of considerable importance in many disciplines. In this paper, a new type of operator called the visibility graph averaging (VGA) aggregation operator is proposed. The proposed operator is based on the visibility graph, which can convert a time series into a graph. The weights are obtained according to the importance of the data in the visibility graph. Finally, the VGA operator is used in the analysis of the TAIEX database to illustrate that it is practical and compare...

  10. On Heroes and Average Moral Human Beings

    OpenAIRE

    Kirchgässner, Gebhard

    2001-01-01

    After discussing various approaches to heroic behaviour in the literature, we first give a definition and classification of moral behaviour, in distinction to intrinsically motivated and 'prudent' behaviour. Then, we present some arguments on the function of moral behaviour according to 'minimal' standards of the average individual in a modern democratic society, before we turn to heroic behaviour. We conclude with some remarks on methodological as well as social problems which arise or ma...

  11. Dollar-Cost Averaging: An Investigation

    OpenAIRE

    Fang, Wei

    2007-01-01

    Dollar-cost averaging (DCA) is a common and useful systematic investment strategy for mutual fund managers, private investors, financial analysts and retirement planners. The performance effectiveness of DCA is highly controversial among academics and professionals. As a popularly recommended investment strategy, DCA is recognized as a risk-reduction strategy; however, this advantage is claimed to come at the expense of generating higher returns. This dissertation intensively inves...

  12. Spatial averaging infiltration model for layered soil

    Institute of Scientific and Technical Information of China (English)

    HU HePing; YANG ZhiYong; TIAN FuQiang

    2009-01-01

    To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.

  13. Average neutron detection efficiency for DEMON detectors

    International Nuclear Information System (INIS)

    The neutron detection efficiency of a DEMON detector, averaged over the whole volume, was calculated using GEANT and applied to determine neutron multiplicities in an intermediate-energy heavy ion reaction. When a neutron source is set at a distance of about 1 m from the front surface of the detector, the average efficiency, ϵav, is found to be significantly lower (20–30%) than the efficiency measured at the center of the detector, ϵ0. In the GEANT simulation the ratio R=ϵav/ϵ0 was calculated as a function of neutron energy. The experimental central efficiency multiplied by R was then used to determine the average efficiency. The results were applied to a study of the 64Zn+112Sn reaction at 40 A MeV which employed 16 DEMON detectors. The neutron multiplicity was extracted using a moving source fit. The derived multiplicities compare well with those determined using the neutron ball in the NIMROD detector array in a separate experiment. Both are in good agreement with multiplicities predicted by a transport model calculation using an antisymmetrized molecular dynamics (AMD) model code.

  14. Modern average global sea-surface temperature

    Science.gov (United States)

    Schweitzer, Peter N.

    1993-01-01

    The data contained in this data set are derived from the NOAA Advanced Very High Resolution Radiometer Multichannel Sea Surface Temperature data (AVHRR MCSST), which are obtainable from the Distributed Active Archive Center at the Jet Propulsion Laboratory (JPL) in Pasadena, Calif. The JPL tapes contain weekly images of SST from October 1981 through December 1990 in nine regions of the world ocean: North Atlantic, Eastern North Atlantic, South Atlantic, Agulhas, Indian, Southeast Pacific, Southwest Pacific, Northeast Pacific, and Northwest Pacific. This data set represents the results of calculations carried out on the NOAA data and also contains the source code of the programs that made the calculations. The objective was to derive the average sea-surface temperature of each month and week throughout the whole 10-year series, meaning, for example, that data from January of each year would be averaged together. The result is 12 monthly and 52 weekly images for each of the oceanic regions. Averaging the images in this way tends to reduce the number of grid cells that lack valid data and to suppress interannual variability.
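    The month-by-month averaging described above can be sketched as follows, assuming a tiny hypothetical grid in which `None` marks cells without valid data (the real AVHRR MCSST images are of course much larger):

```python
# Sketch of multi-year monthly averaging: all "January" images are
# combined cell by cell, skipping cells without valid data. The grid
# and temperatures are illustrative, not actual AVHRR MCSST values.

def monthly_climatology(images):
    """images: list of 2-D grids (lists of lists), None = missing cell."""
    ny, nx = len(images[0]), len(images[0][0])
    out = [[None] * nx for _ in range(ny)]
    for i in range(ny):
        for j in range(nx):
            vals = [img[i][j] for img in images if img[i][j] is not None]
            # A cell is filled whenever at least one year has valid data,
            # which is why averaging reduces the number of empty cells.
            out[i][j] = sum(vals) / len(vals) if vals else None
    return out

# Three years of "January" SST for a 2x2 grid; one cell missing in year 2.
jan = [
    [[20.0, 21.0], [19.0, 18.0]],
    [[22.0, None], [19.0, 20.0]],
    [[21.0, 23.0], [19.0, 19.0]],
]
clim = monthly_climatology(jan)
print(clim)
```

    The cell that is missing in one year is still defined in the climatology, computed from the two valid years, which illustrates why this averaging both fills data gaps and suppresses interannual variability.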

  15. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer's disregard of marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  16. On Backus average for generally anisotropic layers

    CERN Document Server

    Bos, Len; Slawinski, Michael A; Stanoev, Theodore

    2016-01-01

    In this paper, following the Backus (1962) approach, we examine expressions for elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In over half-a-century since the publications of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of mathematical underpinnings of the original formulation; hence, this paper. We prove that---within the long-wave approximation---if the thin layers obey stability conditions then so does the equivalent medium. We examine---within the Backus-average context---the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...

  17. Disk-averaged synthetic spectra of Mars

    CERN Document Server

    Tinetti, G; Fong, W; Meadows, V S; Snively, H; Velusamy, T; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Tinetti, Giovanna; Velusamy, Thangasamy

    2004-01-01

    The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk averaged synthetic spectra, light-cur...

  18. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  19. The average free volume model for liquids

    CERN Document Server

    Yu, Yang

    2014-01-01

    In this work, the molar volume thermal expansion coefficient of 59 room-temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with attractive forces, is proposed to explain the correlation in this study. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.

  20. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  1. Sparsity averaging for radio-interferometric imaging

    CERN Document Server

    Carrillo, Rafael E; Wiaux, Yves

    2014-01-01

    We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.

  2. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;

    2011-01-01

      We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1) / 7.

  3. PROFILE OF HIRED FARMWORKERS, 1998 ANNUAL AVERAGES

    OpenAIRE

    Runyan, Jack L.

    2000-01-01

    An average of 875,000 persons 15 years of age and older did hired farmwork each week as their primary job in 1998. An additional 63,000 people did hired farmwork each week as their secondary job. Hired farmworkers were more likely than the typical U.S. wage and salary worker to be male, Hispanic, younger, less educated, never married, and not U.S. citizens. The West (42 percent) and South (31.4 percent) census regions accounted for almost three-fourths of the hired farmworkers. The rate of un...

  4. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  5. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  6. Time-dependent angularly averaged inverse transport

    CERN Document Server

    Bal, Guillaume

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain.

  7. High average power laser for EUV lithography

    Energy Technology Data Exchange (ETDEWEB)

    Kania, D.R.; Gaines, D.P.; Hermann, M.; Honig, J.; Hostetler, R.; Levesque, R.; Sommargren, G.E.; Spitzer, R.C.; Vernon, S.P.

    1995-01-19

    We have demonstrated the operation of a high average power, all solid state laser and target system for EUV lithography. The laser operates at 1.06 {mu}m with a pulse repetition rate of 200 Hz. Each pulse contains up to 400 mJ of energy and is less than 10 ns in duration. The EUV conversion efficiency measured with the laser is independent of the laser repetition rate. Operating at 200 Hz, the laser has been used for lithography using a 3 bounce Kohler illuminator.

  8. Some averaging functions in image reduction

    Czech Academy of Sciences Publication Activity Database

    Paternain, D.; Bustince, H.; Fernández, J.; Beliakov, G.; Mesiar, Radko

    Berlin: Springer, 2010 - (García-Pedrajas, N.; Herrera, F.; Benítez, J.), s. 399-408. (Lecture Notes in Artificial Intelligence . 6098). ISBN 978-3-642-13032-8. ISSN 0302-9743. [IEA/AIE 2010. Cordoba (ES), 01.06.2010-04.06.2010] Institutional research plan: CEZ:AV0Z10750506 Keywords : image reduction * local reduction operators * aggregation functions Subject RIV: BA - General Mathematics http://library.utia.cas.cz/separaty/2010/E/mesiar-some averaging functions in image reduction.pdf

  9. Rademacher averages on noncommutative symmetric spaces

    CERN Document Server

    Merdy, Christian Le

    2008-01-01

    Let E be a separable (or the dual of a separable) symmetric function space, let M be a semifinite von Neumann algebra and let E(M) be the associated noncommutative function space. Let $(\\epsilon_k)_k$ be a Rademacher sequence, on some probability space $\\Omega$. For finite sequences $(x_k)_k$ of E(M), we consider the Rademacher averages $\\sum_k \\epsilon_k\\otimes x_k$ as elements of the noncommutative function space $E(L^\\infty(\\Omega)\\otimes M)$ and study estimates for their norms $\\Vert \\sum_k \\epsilon_k \\otimes x_k\\Vert_E$ calculated in that space. We establish general Khintchine type inequalities in this context. Then we show that if E is 2-concave, the latter norm is equivalent to the infimum of $\\Vert (\\sum y_k^*y_k)^{{1/2}}\\Vert + \\Vert (\\sum z_k z_k^*)^{{1/2}}\\Vert$ over all $y_k,z_k$ in E(M) such that $x_k=y_k+z_k$ for any k. Dual estimates are given when E is 2-convex and has a nontrivial upper Boyd index. We also study Rademacher averages for doubly indexed families of E(M).

  10. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
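    A minimal sketch of the bias and its iterative cure, under the simplifying assumption that each experiment's error is a fixed fraction of its own measured value (the 10% figures and function names are illustrative, not from the paper):

```python
# Sketch of averaging with "sliding" errors. If experiment i reports
# sigma_i = r_i * x_i (a relative error evaluated at its own measurement),
# weighting by the reported errors biases the average low, because low
# measurements get artificially small errors and hence large weights.

def naive_average(values, rel_errors):
    """Weighted mean using each experiment's self-reported error."""
    w = [1.0 / (r * x) ** 2 for x, r in zip(values, rel_errors)]
    return sum(wi * x for wi, x in zip(w, values)) / sum(w)

def reweighted_average(values, rel_errors, n_iter=20):
    """Re-evaluate every error at the current average and iterate."""
    mu = naive_average(values, rel_errors)
    for _ in range(n_iter):
        # Errors evaluated at the common estimate, not at the measurements.
        w = [1.0 / (r * mu) ** 2 for r in rel_errors]
        mu = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    return mu

values = [9.0, 11.0]        # two measurements of the same quantity
rel_errors = [0.10, 0.10]   # equal 10% relative errors

# With equal relative errors the unbiased answer is the plain mean, 10.0,
# but naive reported-error weighting pulls the result below it.
print(naive_average(values, rel_errors), reweighted_average(values, rel_errors))
```

    In this toy case the naive weighted mean comes out near 9.8, while re-evaluating the errors at the common estimate restores equal weights and returns 10.0 after a single pass.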

  11. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  12. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.

  13. A Moving Average Bidirectional Texture Function Model

    Czech Academy of Sciences Publication Activity Database

    Havlíček, Michal; Haindl, Michal

    Vol. II. Heidelberg: Springer, 2013 - (Wilson, R.; Bors, A.; Hancock, E.; Smith, W.), s. 338-345. (Lecture Notes in Computer Science. 8048). ISBN 978-3-642-40245-6. ISSN 0302-9743. [International Conference on Computer Analysis of Images and Patterns (CAIP 2013) /15./. York (GB), 27.08.2013-29.08.2013] R&D Projects: GA ČR GA102/08/0593; GA ČR GAP103/11/0335 Institutional support: RVO:67985556 Keywords : BTF * texture analysis * texture synthesis * data compression Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2013/RO/havlicek-a moving average bidirectional texture function model.pdf

  14. Averaging lifetimes for B hadron species

    International Nuclear Information System (INIS)

    The measurements of the lifetimes of the individual B species are of great interest. Many of these measurements are well below the 10% level of precision. However, in order to reach the precision necessary to test the current theoretical predictions, the results from different experiments need to be averaged together. Therefore, the relevant systematic uncertainties of each measurement need to be well defined in order to understand the correlations between the results from different experiments. In this paper we discuss the dominant sources of systematic errors which lead to correlations between the different measurements. We point out problems connected with the conventional approach of combining lifetime data and discuss methods which overcome these problems. (orig.)

  15. The Lang-Trotter Conjecture on Average

    OpenAIRE

    Baier, Stephan

    2006-01-01

    For an elliptic curve $E$ over $\\ratq$ and an integer $r$ let $\\pi_E^r(x)$ be the number of primes $p\\le x$ of good reduction such that the trace of the Frobenius morphism of $E/\\fie_p$ equals $r$. We consider the quantity $\\pi_E^r(x)$ on average over certain sets of elliptic curves. More in particular, we establish the following: If $A,B>x^{1/2+\\epsilon}$ and $AB>x^{3/2+\\epsilon}$, then the arithmetic mean of $\\pi_E^r(x)$ over all elliptic curves $E$ : $y^2=x^3+ax+b$ with $a,b\\in \\intz$, $|a...

  16. Electromagnetic modes induced by averaged geodesic curvature

    International Nuclear Information System (INIS)

    Full text: A kinetic theory of geodesic acoustic and related modes is developed with emphasis on the electromagnetic effects due to electron parallel motion, higher order dispersion and drift effects. In general, the dispersion of GAMs is determined by the ion sound Larmor radius, the ion Larmor radius, and electron inertia. The relative contribution of these effects depends on the particular regime and mode localization. It is shown that there exists a new type of electromagnetic (Alfven) mode induced by averaged geodesic curvature. It is shown that the extended MHD (Grad hydrodynamics) exactly recovers the kinetic dispersion relation for geodesic acoustic modes (GAMs) in the fluid limit. The coupling of modes of different polarization is investigated within the extended MHD and kinetic models. The role of drift effects, in particular the electron temperature gradient, on GAMs and related modes is investigated. (author)

  17. Average transverse momentum quantities approaching the lightfront

    CERN Document Server

    Boer, Daniel

    2014-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of such integrated quantities, using Bessel-weighting and rapidity cut-offs, with the conventional definitions as limiting cases. The regularized quantities are given in terms of integrals over the TMDs of interest that are well-defined and moreover have the advantage of being amenable to lattice evaluations.

  18. Average prime-pair counting formula

    Science.gov (United States)

    Korevaar, Jaap; Riele, Herman Te

    2010-04-01

    Taking r>0, let π_{2r}(x) denote the number of prime pairs (p, p+2r) with p≤x. The prime-pair conjecture of Hardy and Littlewood (1923) asserts that π_{2r}(x) ∼ 2C_{2r} li_2(x) with an explicit constant C_{2r}>0. There seems to be no good conjecture for the remainders ω_{2r}(x)=π_{2r}(x)−2C_{2r} li_2(x) that corresponds to Riemann's formula for π(x)−li(x). However, there is a heuristic approximate formula for averages of the remainders ω_{2r}(x) which is supported by numerical results.
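    The counting function π_{2r}(x) itself is straightforward to evaluate for small x with a sieve; the sketch below is illustrative and not the authors' numerical machinery:

```python
# Sketch of the prime-pair counting function pi_{2r}(x): the number of
# primes p <= x such that p + 2r is also prime.

def primes_up_to(n):
    """Boolean sieve of Eratosthenes: sieve[k] is True iff k is prime."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sieve

def prime_pair_count(x, r):
    """pi_{2r}(x): count primes p <= x with p + 2r also prime."""
    sieve = primes_up_to(x + 2 * r)   # need primality up to x + 2r
    return sum(1 for p in range(2, x + 1) if sieve[p] and sieve[p + 2 * r])

# Twin primes (r = 1) up to 100: (3,5), (5,7), (11,13), (17,19),
# (29,31), (41,43), (59,61), (71,73).
print(prime_pair_count(100, 1))  # -> 8
```

    The conjectured asymptotic 2C_{2r} li_2(x) can then be compared against such counts, which is essentially what the numerical support for the average remainder formula amounts to.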

  19. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-01-01

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214

  20. Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport

    Science.gov (United States)

    Parker, J. C.; van Genuchten, M. Th.

    1984-07-01

    Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition are shown to yield flux concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations to laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
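    A sketch of converting between the two modes, assuming the standard relation c_f = c_r − (D/v)·dc_r/dx between flux-averaged and resident concentrations (the values of D, v, and the profile below are made up for illustration):

```python
# Sketch: flux-averaged concentration from a tabulated resident
# (volume-averaged) profile, using c_f = c_r - (D/v) * dc_r/dx with
# finite-difference derivatives. D, v, and the profile are illustrative.

def flux_from_resident(x, c_r, D, v):
    """Assumed relation c_f = c_r - (D/v) dc_r/dx; central differences
    in the interior, one-sided differences at the ends."""
    n = len(x)
    c_f = []
    for i in range(n):
        if i == 0:
            dcdx = (c_r[1] - c_r[0]) / (x[1] - x[0])
        elif i == n - 1:
            dcdx = (c_r[-1] - c_r[-2]) / (x[-1] - x[-2])
        else:
            dcdx = (c_r[i + 1] - c_r[i - 1]) / (x[i + 1] - x[i - 1])
        c_f.append(c_r[i] - (D / v) * dcdx)
    return c_f

# A linearly decreasing resident profile: the two modes then differ by
# the constant (D/v) * |slope|.
x = [0.0, 1.0, 2.0, 3.0]
c_r = [1.0, 0.8, 0.6, 0.4]
print(flux_from_resident(x, c_r, D=0.5, v=1.0))
```

    For this linear profile the flux concentration exceeds the resident one by a constant 0.1, which makes concrete why interpreting a breakthrough curve requires knowing which mode the measurement delivers.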

  1. Dynamic speckle texture processing using averaged dimensions

    Science.gov (United States)

    Rabal, Héctor; Arizaga, Ricardo; Cap, Nelly; Trivi, Marcelo; Mavilio Nuñez, Adriana; Fernandez Limia, Margarita

    2006-08-01

    Dynamic speckle or biospeckle is a phenomenon generated by laser light scattering in biological tissues. It is also present in some industrial processes where the surfaces exhibit some kind of activity. There are several methods to characterize the dynamic speckle pattern activity. For quantitative measurements, the Inertia Moment of the co-occurrence matrix of the temporal history of the speckle pattern (THSP) is usually used. In this work we propose the use of average dimensions (AD) for quantitative classification of textures of THSP images corresponding to different stages of the sample. The AD method was tested in an experiment with the drying of paint, a non-biological phenomenon that we usually use as an initial test for dynamic speckle. We have chosen this phenomenon because its activity can be followed in a relatively simple way by gravimetric measurements and because its behaviour is rather predictable. The AD method was also applied to numerically simulated THSP images and its performance was compared with another quantitative method. Experiments with biological samples are currently under development.

  2. Average path length for Sierpinski pentagon

    CERN Document Server

    Peng, Junhao

    2011-01-01

    In this paper, we investigate the diameter and average path length (APL) of the Sierpinski pentagon based on its recursive construction and self-similar structure. We find that the diameter of the Sierpinski pentagon is just the shortest path length between two nodes of generation 0. Deriving and solving the linear homogeneous recurrence relation the diameter satisfies, we obtain a rigorous solution for the diameter. We also obtain an approximate solution for the APL of the Sierpinski pentagon; both diameter and APL grow approximately as a power-law function of network order $N(t)$, with exponent $\\frac{\\ln(1+\\sqrt{3})}{\\ln(5)}$. Although the solution for APL is approximate, it can be trusted because we have calculated all items of APL accurately except for the compensation ($\\Delta_{t}$) of total distances between non-adjacent branches ($\\Lambda_t^{1...

  3. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m2) processing of materials requires the economical production of laser powers in the tens of kilowatts; such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  4. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
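    The step common to all three methods, recovering the free energy profile by integrating the average force along the coordinate (dF/dξ = −⟨F⟩), can be sketched as follows; the tabulated mean forces below are hypothetical, not values from the paper:

```python
# Sketch of free energy from average force: given the mean force <F>
# tabulated along a coordinate xi, integrate dF = -<F> dxi with the
# trapezoid rule. The force values here are made up for illustration.

def free_energy_profile(xi, mean_force):
    """Trapezoid integration of dF/dxi = -<F>, anchored at F(xi[0]) = 0."""
    F = [0.0]
    for i in range(1, len(xi)):
        dxi = xi[i] - xi[i - 1]
        F.append(F[-1] - 0.5 * (mean_force[i] + mean_force[i - 1]) * dxi)
    return F

# Hypothetical mean forces at five points along the coordinate; the force
# pushes away from xi = 0.5 on both sides, i.e. there is a barrier there.
xi = [0.0, 0.25, 0.5, 0.75, 1.0]
mean_force = [0.0, -2.0, 0.0, 2.0, 0.0]

profile = free_energy_profile(xi, mean_force)
print(profile)  # -> [0.0, 0.25, 0.5, 0.25, 0.0]
```

    The resulting profile has a barrier of height 0.5 at ξ = 0.5; in the constrained-simulation method of the abstract, the `mean_force` table would come from the averaged constraint forces at each fixed ξ.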

  5. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    In 1987, a broad-based, aggressive R&D program was initiated, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wavefront aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  6. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that the BMA weights add to one, and then use a limited-memory quasi-Newton algorithm to solve the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than that of MCMC and is almost equivalent to that of EM.
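
    A minimal sketch of the idea behind removing the sum-to-one constraint, assuming a two-member ensemble with Gaussian component densities: here a softmax reparameterization of the weights (one possible choice; the paper modifies the log-likelihood differently in detail) makes the problem unconstrained, so an off-the-shelf quasi-Newton optimizer (SciPy's L-BFGS-B) applies directly. All names and data below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
# Two hypothetical "model" forecasts; observations drawn from a known
# 0.7/0.3 mixture around them with noise sigma = 0.5.
f = np.column_stack([rng.normal(0, 1, n), rng.normal(3, 1, n)])
comp = rng.random(n) < 0.7
y = np.where(comp, f[:, 0], f[:, 1]) + rng.normal(0, 0.5, n)

def neg_log_lik(params):
    # Softmax removes the sum-to-one constraint on the BMA weights,
    # so a quasi-Newton method can optimize freely over R^2.
    theta, log_sigma = params[:2], params[2]
    w = np.exp(theta) / np.exp(theta).sum()
    sigma = np.exp(log_sigma)
    dens = w * norm.pdf(y[:, None], loc=f, scale=sigma)
    return -np.log(dens.sum(axis=1)).sum()

res = minimize(neg_log_lik, x0=np.zeros(3), method="L-BFGS-B")
w_hat = np.exp(res.x[:2]) / np.exp(res.x[:2]).sum()
```

    The recovered `w_hat` sums to one by construction and should be close to the generating weights (0.7, 0.3).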

  7. A New CFAR Detector Based on Automatic Censoring Cell Averaging and Cell Averaging

    Directory of Open Access Journals (Sweden)

    Yuhua Qin

    2013-06-01

    In order to improve the interference immunity of the detector, a new CFAR detector (ACGCA-CFAR) based on automatic censoring cell averaging (ACCA) and cell averaging (CA) is presented in this paper. It takes the greater of the ACCA and CA local estimates as the noise power estimate. Under the Swerling II assumption, the corresponding analytic expressions in a homogeneous background are derived. In contrast to other detectors, the ACGCA-CFAR detector has higher detection performance in both homogeneous and nonhomogeneous backgrounds, while the sample sorting time of ACGCA is only a quarter of that of OS and ACCA.
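
    For context, a plain cell-averaging (CA) CFAR detector, the simpler of the two estimators the ACGCA scheme combines, can be sketched as follows. The window sizes, false-alarm probability, and test signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ca_cfar(x, n_train=16, n_guard=2, pfa=1e-3):
    """Cell-averaging CFAR on a 1-D power signal (illustrative sketch)."""
    n = len(x)
    # Threshold scale factor for exponentially distributed (Swerling II)
    # noise power: Pfa = (1 + alpha/N)^(-N).
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    half = n_train // 2
    for i in range(half + n_guard, n - half - n_guard):
        lead = x[i - n_guard - half : i - n_guard]
        lag = x[i + n_guard + 1 : i + n_guard + 1 + half]
        noise_est = (lead.sum() + lag.sum()) / n_train
        detections[i] = x[i] > alpha * noise_est
    return detections

rng = np.random.default_rng(1)
signal = rng.exponential(1.0, 512)  # unit-power exponential clutter
signal[256] += 50.0                 # strong target in one cell
hits = ca_cfar(signal)
```

    The ACGCA idea is then to form both the CA estimate and an automatically censored estimate and take the greater of the two before thresholding.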

  8. Global Average Brightness Temperature for April 2003

    Science.gov (United States)

    2003-01-01

    Figure 1: This image shows average temperatures in April 2003, observed by AIRS at an infrared wavelength that senses either the Earth's surface or any intervening cloud. Similar to a photograph of the planet taken with the camera shutter held open for a month, stationary features are captured while those obscured by moving clouds are blurred. Many continental features stand out boldly, such as our planet's vast deserts, and India, now at the end of its long, clear dry season. Also obvious are the high, cold Tibetan plateau to the north of India, and the mountains of North America. The band of yellow encircling the planet's equator is the Intertropical Convergence Zone (ITCZ), a region of persistent thunderstorms and associated high, cold clouds. The ITCZ merges with the monsoon systems of Africa and South America. Higher latitudes are increasingly obscured by clouds, though some features like the Great Lakes, the British Isles and Korea are apparent. The highest latitudes of Europe and Eurasia are completely obscured by clouds, while Antarctica stands out cold and clear at the bottom of the image. The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  9. Optimal estimation of the diffusion coefficient from non-averaged and averaged noisy magnitude data

    Science.gov (United States)

    Kristoffersen, Anders

    2007-08-01

    The magnitude operation changes the signal distribution in MRI images from Gaussian to Rician. This introduces a bias that must be taken into account when estimating the apparent diffusion coefficient. Several estimators are known in the literature. In the present paper, two novel schemes are proposed. Both are based on simple least squares fitting of the measured signal, either to the median (MD) or to the maximum probability (MP) value of the Probability Density Function (PDF). Fitting to the mean (MN) or a high signal-to-noise ratio approximation to the mean (HS) is also possible. Special attention is paid to the case of averaged magnitude images. The PDF, which cannot be expressed in closed form, is analyzed numerically. A scheme for performing maximum likelihood (ML) estimation from averaged magnitude images is proposed. The performance of several estimators is evaluated by Monte Carlo (MC) simulations. We focus on typical clinical situations, where the number of acquisitions is limited. For non-averaged data the optimal choice is found to be MP or HS, whereas uncorrected schemes and the power image (PI) method should be avoided. For averaged data MD and ML perform equally well, whereas uncorrected schemes and HS are inadequate. MD provides easier implementation and higher computational efficiency than ML. Unbiased estimation of the diffusion coefficient allows high resolution diffusion tensor imaging (DTI) and may therefore help solving the problem of crossing fibers encountered in white matter tractography.
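
    The Rician bias the abstract refers to can be illustrated with a simple sketch (not one of the paper's estimators): the naive average of magnitude data overestimates the underlying amplitude, while a second-moment correction using E[M²] = A² + 2σ² removes the bias when the noise level σ is known.

```python
import numpy as np

rng = np.random.default_rng(2)
A, sigma, n = 2.0, 1.0, 200000
# Complex Gaussian noise on a signal of amplitude A; the magnitude
# then follows a Rician distribution.
m = np.abs(A + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))

naive = m.mean()  # biased upward relative to the true amplitude A
# Unbiased via the second moment: E[M^2] = A^2 + 2*sigma^2.
corrected = np.sqrt(max((m**2).mean() - 2 * sigma**2, 0.0))
```

    With A = 2 and σ = 1 the naive average lands well above 2, while the corrected estimate recovers A.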

  10. Hearing Office Average Processing Time Ranking Report, April 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  11. Hearing Office Average Processing Time Ranking Report, February 2016

    Data.gov (United States)

    Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...

  12. Correlation of average scaling coefficient with asymmetric parameter and average power index with quadrupole deformation parameter

    International Nuclear Information System (INIS)

    The nuclear structure of even-even nuclei in the ground-state band and other excited bands with non-zero band heads is collectively built. The level energy in the medium-mass region deviates below the ideal rotor energy formula E_I = AI(I+1). The average scaling coefficient b_AV, together with the asymmetric parameter, rises for Er-Os nuclei when N increases from 88 to 104

  13. Forecasting Equity Premium: Global Historical Average versus Local Historical Average and Constraints

    OpenAIRE

    Tae-Hwy Lee; Yundong Tu; Aman Ullah

    2014-01-01

    The equity premium, the return on equity minus the return on the risk-free asset, is expected to be positive. We consider imposing such a positivity constraint on the local historical average (LHA) in a nonparametric kernel regression framework. It is also extended to the semiparametric single index model when multiple predictors are used. We construct the constrained LHA estimator via an indicator function which operates as model selection between the unconstrained LHA and the bound of the constraint (zero fo...

  14. 40 CFR 1033.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more...

  15. On the Individual Expectations of Non-Average Investors

    OpenAIRE

    Lucia Del Chicca; Gerhard Larcher

    2011-01-01

    An “average investor” is an investor who has “average risk aversion”, “average expectations” of market returns, and should invest in the “market portfolio” (this is, according to the Capital Asset Pricing Model, the best possible portfolio for such an investor). He is compared with a “non-average investor”. This, in our setting, is an investor who has the same “average risk aversion” but invests in other investment strategies, for example options. Such a “non-average investor” must cons...

  16. Seasonal Sea Surface Temperature Averages, 1985-2001 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set consists of four images showing seasonal sea surface temperature (SST) averages for the entire earth. Data for the years 1985-2001 are averaged to...

  17. Cost averaging techniques for robust control of flexible structural systems

    Science.gov (United States)

    Hagood, Nesbitt W.; Crawley, Edward F.

    1991-01-01

    Viewgraphs on cost averaging techniques for robust control of flexible structural systems are presented. Topics covered include: modeling of parameterized systems; average cost analysis; reduction of parameterized systems; and static and dynamic controller synthesis.

  18. Average annual runoff in the United States, 1951-80

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This is a line coverage of average annual runoff in the conterminous United States, 1951-1980. Surface runoff Average runoff Surface waters United States

  19. The SU(N) Wilson Loop Average in 2 Dimensions

    OpenAIRE

    Karjalainen, Esa

    1993-01-01

    We solve explicitly a closed, linear loop equation for the SU(2) Wilson loop average on a two-dimensional plane and generalize the solution to the case of the SU(N) Wilson loop average with an arbitrary closed contour. Furthermore, the flat space solution is generalized to any two-dimensional manifold for the SU(2) Wilson loop average and to any two-dimensional manifold of genus 0 for the SU(N) Wilson loop average.

  20. Spectral averaging techniques for Jacobi matrices with matrix entries

    CERN Document Server

    Sadel, Christian

    2009-01-01

    A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure which can potentially be useful for spectral analysis. Furthermore another variant of spectral averaging over coupling constants for these operators is presented.

  1. 40 CFR 1042.710 - Averaging emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1042.710..., Banking, and Trading for Certification § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families...

  2. Sample Size Bias in Judgments of Perceptual Averages

    Science.gov (United States)

    Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.

    2014-01-01

    Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…

  3. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  4. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  5. Evaluation of the average ion approximation for a tokamak plasma

    International Nuclear Information System (INIS)

    The average ion approximation, sometimes used to calculate atomic processes in plasmas, is assessed by computing deviations in various rates over a set of conditions representative of tokamak edge plasmas. Conditions are identified under which the rates are primarily a function of the average ion charge and plasma parameters, as assumed in the average ion approximation. (Author) 19 refs., tab., 5 figs

  6. Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements

    Science.gov (United States)

    Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.

    2012-12-01

    To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
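
    A minimal sketch of the difference between area averaging and mass averaging of rake data, with hypothetical probe values (the facility's actual instrumentation and data reduction are not reproduced here):

```python
import numpy as np

# Hypothetical radial rake: total pressure [kPa], velocity [m/s], and
# the annulus area [m^2] associated with each probe.
p_t = np.array([101.0, 108.0, 112.0, 110.0, 104.0])
vel = np.array([60.0, 95.0, 110.0, 100.0, 70.0])
area = np.array([0.012, 0.010, 0.009, 0.010, 0.012])
rho = 1.2  # uniform density assumed for the sketch

# Area average: each probe weighted by the area it represents.
area_avg = (p_t * area).sum() / area.sum()

# Mass average: each probe weighted by the mass flow through its area,
# which emphasizes the high-velocity core of the passage.
mdot = rho * vel * area
mass_avg = (p_t * mdot).sum() / mdot.sum()
```

    Because high total pressure coincides with high velocity here, the mass average sits above the area average; the abstract's point is that such differences are small for the ratio and efficiency but matter for the uncertainty.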

  7. 40 CFR 62.15210 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Science.gov (United States)

    2010-07-01

    ... of 40 CFR part 60, section 4.3, to calculate the daily geometric average concentrations of sulfur... 40 Protection of Environment 8 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic... convert my 1-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation...

  8. 40 CFR 60.1755 - How do I convert my 1-hour arithmetic averages into appropriate averaging times and units?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false How do I convert my 1-hour arithmetic...-hour arithmetic averages into appropriate averaging times and units? (a) Use the equation in § 60.1935... calculate the 4-hour or 24-hour daily block averages (as applicable) for concentrations of carbon monoxide....

  9. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
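
    The distinction between spot and boxcar hourly values can be sketched with synthetic 1-min data; the 0.7 h oscillation below stands in for geomagnetic variation too fast for hourly sampling to resolve (illustrative values, not observatory data):

```python
import numpy as np

# One day of synthetic 1-min data: a fast oscillation (0.7 h period)
# standing in for continuous field variation.
t = np.arange(1440) / 60.0                 # time in hours
fast = 5.0 * np.sin(2 * np.pi * t / 0.7)

by_hour = fast.reshape(24, 60)
spot = by_hour[:, 0]            # instantaneous "spot" value each hour
boxcar = by_hour.mean(axis=1)   # simple 1-h "boxcar" average

# The 1-h average attenuates the unresolvable component (less aliased
# power), while spot samples pass it through at full amplitude.
```

    The boxcar values show much less scatter than the spot values, which fold the fast oscillation to a spurious low frequency, i.e. the aliasing discussed above.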

  10. On the incentive effects of damage averaging in tort law

    OpenAIRE

    Tim Friehe

    2007-01-01

    It has been generally accepted for unilateral-care models that care incentives are not affected by the use of either accurate damages or average damages if injurers lack knowledge of the precise damage level they might cause. This paper shows that in bilateral-care models with heterogeneous victims, the consequences of using averages as the damage measure depend critically on the weighting of the respective harm levels. Importantly, we establish that there is an average measure which allows the attainmen...

  11. Basics of averaging of the Maxwell equations for bulk materials

    OpenAIRE

    Chipouline, A.; Simovski, C.; Tretyakov, S.

    2012-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some b...

  12. Iterative Correction of Measurement with Averaging of Dithered Samples

    Directory of Open Access Journals (Sweden)

    Miroslav Kamensky

    2008-01-01

    Self-calibration techniques can eliminate measurement errors caused by drift over time and component aging. Averaging is also necessary for ADC performance enhancement. In the paper, an iterative measurement-error correction method is presented in combination with averaging. Dither theory for Gaussian noise is used to demonstrate the ability of averaging to improve the ADC transfer characteristic. The experimental ENOB improvement is more than 1.5 bit.
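
    The role of dither in averaging can be sketched as follows: a DC input sitting between two quantizer codes is unrecoverable by averaging alone, but adding Gaussian dither of about 0.5 LSB randomizes the quantization error so the average converges near the true value. The step size and input level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
lsb = 1.0                    # quantization step (1 LSB)
true_value = 0.3 * lsb       # DC input between two code levels

def quantize(x):
    """Ideal mid-tread quantizer with step `lsb`."""
    return np.round(x / lsb) * lsb

n = 20000
# Without dither, every sample quantizes to the same code (0 here):
# averaging cannot recover the 0.3 LSB offset.
plain = quantize(np.full(n, true_value)).mean()

# Gaussian dither (sigma = 0.5 LSB) linearizes the averaged transfer
# characteristic, so the mean of many samples approaches the input.
dithered = quantize(true_value + rng.normal(0.0, 0.5 * lsb, n)).mean()
```

    The undithered average stays pinned at a code level, while the dithered average lands close to 0.3 LSB.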

  13. Thomson scattering in the average-atom approximation

    OpenAIRE

    Johnson, W. R.; Nilsen, J.; Cheng, K. T.

    2012-01-01

    The average-atom model is applied to study Thomson scattering of x-rays from warm-dense matter with emphasis on scattering by bound electrons. Parameters needed to evaluate the dynamic structure function (chemical potential, average ionic charge, free electron density, bound and continuum wave-functions and occupation numbers) are obtained from the average-atom model. The resulting analysis provides a relatively simple diagnostic for use in connection with x-ray scattering measurements. Appli...

  14. Average-Consensus Algorithms in a Deterministic Framework

    OpenAIRE

    Topley, Kevin; Krishnamurthy, Vikram

    2011-01-01

    We consider the average-consensus problem in a multi-node network of finite size. Communication between nodes is modeled by a sequence of directed signals with arbitrary communication delays. Four distributed algorithms that achieve average-consensus are proposed. Necessary and sufficient communication conditions are given for each algorithm to achieve average-consensus. Resource costs for each algorithm are derived based on the number of scalar values that are required for communication and ...
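
    A minimal synchronous, delay-free instance of average-consensus (far simpler than the delayed directed-signal setting considered above): with a doubly stochastic weight matrix, every iteration preserves the network average and drives all node states toward it.

```python
import numpy as np

# Ring of 5 nodes with uniform weights on self and both neighbours.
# The resulting W is doubly stochastic, so x.mean() is invariant.
n = 5
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

x = np.array([1.0, 4.0, 2.0, 8.0, 5.0])
target = x.mean()
for _ in range(200):
    x = W @ x   # each node averages itself with its two neighbours
```

    After enough iterations all entries of `x` agree with the initial average, 4.0.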

  15. Orbit-averaged Guiding-center Fokker-Planck Operator

    CERN Document Server

    Brizard, A J; Decker, J; Duthoit, F -X

    2009-01-01

    A general orbit-averaged guiding-center Fokker-Planck operator suitable for the numerical analysis of transport processes in axisymmetric magnetized plasmas is presented. The orbit-averaged guiding-center operator describes transport processes in a three-dimensional guiding-center invariant space: the orbit-averaged magnetic-flux invariant $\overline{\psi}$, the minimum-B pitch-angle coordinate $\xi_{0}$, and the momentum magnitude $p$.

  16. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  17. Thermodynamic properties of average-atom interatomic potentials for alloys

    Science.gov (United States)

    Nöhring, Wolfram Georg; Curtin, William Arthur

    2016-05-01

    The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine if the average-atom potential can represent the true random alloy Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 k_B/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well-predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average atom and true random alloy are found in the zero temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.

  18. A Characterization of the average tree solution for tree games

    OpenAIRE

    Debasis Mishra; Dolf Talman

    2009-01-01

    For the class of tree games, a new solution called the average tree solution has been proposed recently. We provide a characterization of this solution. This characterization underlines an important difference, in terms of symmetric treatment of the agents, between the average tree solution and the Myerson value for the class of tree games.

  19. UNEMPLOYMENT BENEFIT, MINIMUM WAGE AND AVERAGE SALARY EARNINGS IN ROMANIA

    OpenAIRE

    2012-01-01

    The existence of a long-run equilibrium between average salary earnings and labour market public institutions, such as unemployment benefit and minimum wage, is checked using ARDL bounds testing procedure. The results pointed out that long-run causality runs from average salary earnings to labour market public institutions and not vice versa. The short-run dynamics are depicted as well.

  20. 40 CFR 63.652 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... appendix A. Mj=Molecular weight of organic HAP j, gram per gram-mole. n=Number of organic HAP's in the... at the rack during the month, kilopascals. M=Weighted average molecular weight of organic HAP's... rack i to calculate the weighted average rack molecular weight: ER18AU95.008 where: Mj=Molecular...

  1. 40 CFR 63.150 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ..., appendix A. Mj=Molecular weight of organic HAP j, gram per gram-mole. n=Number of organic HAP's. (A) The..., kilopascals. M = Weighted average molecular weight of organic HAP's transferred at the transfer rack during... transfer rack i to calculate the weighted average rack molecular weight: ER22AP94.267 where: Mj =......

  2. 40 CFR 63.503 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... Method 18 or Method 25A of 40 CFR part 60, appendix A. Mj=Molecular weight of organic HAP j, gram per... demonstrate compliance, the number of emission points allowed to be included in the emission average is... demonstrate compliance, the number of emission points allowed in the emissions average for those...

  3. Averaged EMG profiles in jogging and running at different speeds

    NARCIS (Netherlands)

    Gazendam, Marnix G. J.; Hof, At L.

    2007-01-01

    EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking at 1.25-2.25 m s^-1 and running at 1.25-4.5 m s^-1. The EMGs were rectified, interpolated over 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed in
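
    The rectify-interpolate-average recipe can be sketched as follows, with synthetic strides of varying length standing in for the recorded EMG bursts:

```python
import numpy as np

def average_profile(strides, n_points=100):
    """Rectify each stride, resample it to 100% of the stride, average.

    `strides` is a list of 1-D EMG arrays of varying length, one per
    stride; the result is a profile on a common 0-100% stride axis.
    """
    resampled = []
    for s in strides:
        s = np.abs(s)                          # full-wave rectification
        xp = np.linspace(0.0, 1.0, len(s))
        x = np.linspace(0.0, 1.0, n_points)
        resampled.append(np.interp(x, xp, s))  # normalize to % of stride
    return np.mean(resampled, axis=0)

rng = np.random.default_rng(3)
strides = []
for _ in range(20):
    n = rng.integers(80, 120)                  # strides differ in length
    strides.append(np.sin(np.linspace(0, np.pi, n))
                   + 0.1 * rng.standard_normal(n))
profile = average_profile(strides)
```

    The averaged profile peaks mid-stride, mirroring the burst shared across the synthetic strides despite their different durations.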

  4. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  5. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
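
    The note's point can be illustrated in a constant-only regression: fitting y, log y, or 1/y on an intercept alone and transforming back yields the arithmetic, geometric, and harmonic means respectively (a sketch of the idea, not the authors' exact framework):

```python
import numpy as np

y = np.array([2.0, 4.0, 8.0])
ones = np.ones_like(y)[:, None]   # constant-only design matrix

# Arithmetic mean: intercept of a least-squares fit of y on a constant.
arith = np.linalg.lstsq(ones, y, rcond=None)[0][0]

# Geometric mean: regress log(y) on a constant, then exponentiate.
geom = np.exp(np.linalg.lstsq(ones, np.log(y), rcond=None)[0][0])

# Harmonic mean: regress 1/y on a constant, then invert.
harm = 1.0 / np.linalg.lstsq(ones, 1.0 / y, rcond=None)[0][0]
```

    For y = (2, 4, 8) these give 14/3, 4, and 24/7; weighted averages follow by applying weighted least squares in the same framework.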

  6. Partial Averaging Near a Resonance in Planetary Dynamics

    CERN Document Server

    Haghighipour, N

    1999-01-01

    Following the general numerical analysis of Melita and Woolfson (1996), I showed in a recent paper that a restricted, planar, circular planetary system consisting of Sun, Jupiter and Saturn would be captured in a near (2:1) resonance when one would allow for frictional dissipation due to interplanetary medium (Haghighipour, 1998). In order to analytically explain this resonance phenomenon, the method of partial averaging near a resonance was utilized and the dynamics of the first-order partially averaged system at resonance was studied. Although in this manner, the finding that resonance lock occurs for all initial relative positions of Jupiter and Saturn was confirmed, the first-order partially averaged system at resonance did not provide a complete picture of the evolutionary dynamics of the system and the similarity between the dynamical behavior of the averaged system and the main planetary system held only for short time intervals. To overcome these limitations, the method of partial averaging near a res...

  7. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are k_n-dependent with k_n growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that

  8. On the extremal properties of the average eccentricity

    CERN Document Server

    Ilic, Aleksandar

    2011-01-01

    The eccentricity of a vertex is the maximum distance from it to another vertex, and the average eccentricity $ecc(G)$ of a graph $G$ is the mean value of the eccentricities of all vertices of $G$. The average eccentricity is deeply connected with a topological descriptor called the eccentric connectivity index, defined as a sum of products of vertex degrees and eccentricities. In this paper we analyze extremal properties of the average eccentricity, introducing two graph transformations that increase or decrease $ecc(G)$. Furthermore, we resolve four conjectures, obtained by the system AutoGraphiX, about the average eccentricity and other graph parameters (the clique number, the Randić index and the independence number), refute one AutoGraphiX conjecture about the average eccentricity and the minimum vertex degree, and correct one AutoGraphiX conjecture about the domination number.
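
    The average eccentricity itself is straightforward to compute by breadth-first search from every vertex; a sketch for small unweighted connected graphs:

```python
from collections import deque

def average_eccentricity(adj):
    """Mean over vertices of the maximum BFS distance to any other vertex.

    `adj` maps each vertex to its list of neighbours; the graph is
    assumed connected and unweighted.
    """
    def ecc(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return sum(ecc(v) for v in adj) / len(adj)

# Path on 4 vertices: eccentricities are 3, 2, 2, 3, so ecc(G) = 2.5.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```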

  9. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of decision tree is bounded from below by the entropy of probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and kn pairwise different rows in the decision table and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
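    The entropy lower bound mentioned above is straightforward to evaluate. A small sketch, assuming a probability distribution over the cases of a diagnostic problem and k-valued attributes:

```python
# Entropy lower bound on the minimum average depth of a decision tree:
# for a problem over a k-valued information system,
#   average depth >= H(p) / log2(k),
# where H is the Shannon entropy of the case distribution.
import math

def entropy_lower_bound(probs, k=2):
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(k)

# Uniform distribution over 8 equally likely cases with binary attributes:
# the bound is log2(8) = 3 tests on average, matching optimal prefix codes.
print(entropy_lower_bound([1 / 8] * 8))  # 3.0
```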

  10. Inversion of the circular averages transform using the Funk transform

    International Nuclear Information System (INIS)

    The integral of a function defined on the half-plane along the semi-circles centered on the boundary of the half-plane is known as the circular averages transform. The circular averages transform arises in many tomographic image reconstruction problems. In particular, in synthetic aperture radar (SAR) when the transmitting and receiving antennas are colocated, the received signal is modeled as the integral of the ground reflectivity function of the illuminated scene over the intersection of spheres centered at the antenna location and the surface topography. When the surface topography is flat, the received signal becomes the circular averages transform of the ground reflectivity function. Thus, SAR image formation requires inversion of the circular averages transform. Apart from SAR, the circular averages transform also arises in thermo-acoustic tomography and sonar inverse problems. In this paper, we present a new inversion method for the circular averages transform using the Funk transform. For a function defined on the unit sphere, its Funk transform is given by the integrals of the function along the great circles. We used hyperbolic geometry to establish a diffeomorphism between the circular averages transform, hyperbolic x-ray and Funk transforms. The method is exact and numerically efficient when fast Fourier transforms over the sphere are used. We present numerical simulations to demonstrate the performance of the inversion method. Dedicated to Dennis Healy, a friend of Applied Mathematics and Engineering.
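    The forward transform described above is simple to evaluate numerically: for a center x0 on the boundary and radius r, integrate f along the semicircle in the upper half-plane. A midpoint-rule sketch (the inversion via the Funk transform is the paper's contribution and is not reproduced here):

```python
# Numerical forward circular averages transform: integrate f over the
# semicircle of radius r centered at (x0, 0) on the boundary of the
# upper half-plane, with the arc-length element r dtheta.
import math

def circular_average(f, x0, r, n=2000):
    total = 0.0
    for i in range(n):
        theta = math.pi * (i + 0.5) / n   # midpoint rule on (0, pi)
        total += f(x0 + r * math.cos(theta), r * math.sin(theta))
    return total * math.pi * r / n

# Sanity check: for f == 1 the transform is the arc length pi * r.
print(circular_average(lambda x, y: 1.0, 0.0, 2.0))  # 2*pi
```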

  11. Phase-compensated averaging for analyzing electroencephalography and magnetoencephalography epochs.

    Science.gov (United States)

    Matani, Ayumu; Naruse, Yasushi; Terazono, Yasushi; Iwasaki, Taro; Fujimaki, Norio; Murata, Tsutomu

    2010-05-01

    Stimulus-locked averaging for electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERD/ERS). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0 so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset. PMID:20172813
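    The core idea of the complex-weighted average can be sketched in a few lines. This is a simplified illustration, not the authors' pipeline: epochs are assumed to be already converted to complex analytic signals, and each epoch is rotated so that the trigger channel's instantaneous phase at the trigger sample becomes 0 before averaging.

```python
# Sketch of phase-compensated averaging: rotate each complex-valued epoch
# so its phase at the trigger sample is zero, then average across epochs.
# Data layout (list of complex samples per epoch) is hypothetical.
import cmath

def phase_compensated_average(epochs, trigger_idx):
    """epochs: list of equal-length lists of complex samples."""
    acc = [0j] * len(epochs[0])
    for epoch in epochs:
        phase = cmath.phase(epoch[trigger_idx])
        rot = cmath.exp(-1j * phase)        # compensate: phase at trigger -> 0
        for i, z in enumerate(epoch):
            acc[i] += z * rot
    return [z / len(epochs) for z in acc]

# Two epochs that carry the same oscillation up to a random phase offset
# survive the compensated average, whereas a plain average would
# partially cancel them.
e1 = [cmath.exp(1j * 0.5 * t) for t in range(8)]
e2 = [z * cmath.exp(1j * 2.0) for z in e1]   # same activity, phase-shifted
avg = phase_compensated_average([e1, e2], trigger_idx=0)
print(abs(avg[3]))  # coherence preserved: magnitude stays at 1.0
```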

  12. A comparison of the average prekernel and the prekernel

    OpenAIRE

    Serrano, Roberto; Shimomura, Ken-Ichi

    2001-01-01

    We propose positive and normative foundations for the average prekernel of NTU games, and compare them with the existing ones for the prekernel. In our non-cooperative analysis, the average prekernel is approximated by the set of equilibrium payoffs of a game where each player faces the possibility of bargaining at random against any other player. In the cooperative analysis, we characterize the average prekernel as the unique solution that satisfies a set of Nash-like axioms for two-person g...

  13. A space-averaged model of branched structures

    CERN Document Server

    Lopez, Diego; Michelin, Sébastien

    2014-01-01

    Many biological systems and artificial structures are ramified and present high geometric complexity. In this work, we propose a space-averaged model of branched systems for conservation laws. From a one-dimensional description of the system, we show that the space-averaged problem is also one-dimensional, represented by characteristic curves, defined as streamlines of the space-averaged branch directions. The geometric complexity is then captured firstly by the characteristic curves, and secondly by an additional forcing term in the equations. This model is then applied to mass balance in a pipe network and momentum balance in a tree under wind loading.

  14. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
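    The model-averaged estimate itself is a weighted combination of per-model estimates. A minimal sketch using information-criterion weights (the AIC values and per-model estimates below are invented for illustration; the paper's contribution, asymptotically correct standard errors, is not reproduced here):

```python
# Model averaging with information-criterion weights:
#   w_m proportional to exp(-(AIC_m - AIC_min) / 2),
# and the averaged estimate is sum_m w_m * estimate_m.
import math

def aic_weights(aics):
    best = min(aics)
    raw = [math.exp(-(a - best) / 2) for a in aics]
    s = sum(raw)
    return [x / s for x in raw]

def model_average(estimates, aics):
    return sum(w * e for w, e in zip(aic_weights(aics), estimates))

est = [1.8, 2.1, 2.4]        # hypothetical per-model estimates of a derived parameter
aic = [100.0, 101.0, 105.0]  # hypothetical AIC values
print(model_average(est, aic))
```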

  15. Average and Quantile Effects in Nonseparable Panel Models

    CERN Document Server

    Chernozhukov, Victor; Hahn, Jinyong; Newey, Whitney

    2011-01-01

    This paper gives identification and estimation results for average and quantile effects in nonseparable panel models. Nonseparable models are important for modeling in a variety of economic settings, including discrete choice. We find that linear fixed effects estimators are not consistent for the average effect, due in part to that effect not being identified. Nonparametric bounds for quantile and average effects are derived for discrete regressors that are strictly exogenous or predetermined. We allow for location and scale time effects and show how monotonicity can be used to shrink the bounds. We derive rates at which the bounds tighten as the number $T$ of time series observations grows. We also consider semiparametric discrete choice models and find that the bounds for average effects tighten considerably. In numerical calculations we find that the bounds may be very tight for small numbers of observations, suggesting their use in practice. We propose two novel inference methods for parameters defined a...

  16. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  17. Ensemble vs. time averages in financial time series analysis

    Science.gov (United States)

    Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2012-12-01

    Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-averaging approaches will yield new insight into the study of financial markets’ dynamics.
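    The contrast between the two averaging techniques can be shown on a toy version of such an intraday model. In this sketch (synthetic data, not the paper's model), variance depends periodically on time-of-day: an ensemble average across days at fixed time-of-day recovers the variance cycle, while pooling all samples as a time average hides it.

```python
# Toy contrast of ensemble vs. time averaging for a process whose
# variance varies periodically within the "day". Synthetic illustration.
import math, random

random.seed(0)
T, DAYS = 50, 400
sigma = [1.0 + 0.8 * math.sin(2 * math.pi * t / T) for t in range(T)]
days = [[random.gauss(0, sigma[t]) for t in range(T)] for _ in range(DAYS)]

# Ensemble estimate: variance at each time-of-day t, averaged across days
ens_var = [sum(d[t] ** 2 for d in days) / DAYS for t in range(T)]

# Time-average estimate: pool all samples, ignoring time-of-day
flat = [x for d in days for x in d]
pooled_var = sum(x * x for x in flat) / len(flat)

# Pooling collapses the intraday cycle into a single number between
# the true minimum and maximum variance.
print(max(ens_var) > pooled_var > min(ens_var))  # True
```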

  18. United States Average Annual Precipitation, 1995-1999 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1995-1999. Parameter-elevation...

  19. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Full Text Available Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models, which are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose response model to generate benchmark dose and benchmark dose lower bound estimates. The software fulfills a need for risk assessors, allowing them to go beyond a single model in their risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
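    A model-averaged dose-response curve of the kind described above is just a weighted mixture of fitted quantal models. A hedged sketch with two common quantal forms (a logistic in dose and a log-logistic); the parameters and weights are invented, not fitted to data, and this is not MADr-BMD's code:

```python
# Sketch of a model-averaged dichotomous dose-response curve: two quantal
# models combined with hypothetical weights. Parameters are illustrative.
import math

def logistic(dose, a, b):
    return 1.0 / (1.0 + math.exp(-(a + b * dose)))

def log_logistic(dose, a, b):
    return 0.0 if dose <= 0 else 1.0 / (1.0 + math.exp(-(a + b * math.log(dose))))

def averaged_risk(dose, weights=(0.6, 0.4)):
    w1, w2 = weights
    return w1 * logistic(dose, -3.0, 0.5) + w2 * log_logistic(dose, -2.0, 1.0)

print(averaged_risk(5.0))  # averaged probability of response at dose 5
```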

  20. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
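    The Hargreaves step and the water-balance step described above are compact enough to sketch. The constants follow the standard Hargreaves formulation; the input values below are invented for illustration, not taken from the Bolivian dataset:

```python
# Hargreaves estimate of atmospheric evaporative demand (reference
# evapotranspiration), followed by the climatic water balance P - ET0.
#   ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)
# with Ra the exoatmospheric radiation in mm/day equivalents.
import math

def hargreaves_et0(t_max, t_min, ra):
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def monthly_water_balance(precip_mm, et0_mm):
    return precip_mm - et0_mm   # climatic water balance, mm

et0 = hargreaves_et0(t_max=25.0, t_min=10.0, ra=12.0)   # mm/day
balance = monthly_water_balance(80.0, 30 * et0)         # 30-day month
print(et0, balance)  # a negative balance means demand exceeds precipitation
```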

  1. Effects of spatial variability and scale on areal -average evapotranspiration

    Science.gov (United States)

    Famiglietti, J. S.; Wood, Eric F.

    1993-01-01

    This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.

  2. Does subduction zone magmatism produce average continental crust

    International Nuclear Information System (INIS)

    The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition

  3. United States Average Annual Precipitation, 1990-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-2009. Parameter-elevation...

  4. United States Average Annual Precipitation, 1961-1990 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...

  5. United States Average Annual Precipitation, 2005-2009 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2005-2009. Parameter-elevation...

  6. United States Average Annual Precipitation, 2000-2004 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2000-2004. Parameter-elevation...

  7. United States Average Annual Precipitation, 1990-1994 - Direct Download

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-1994. Parameter-elevation...

  8. Averaging Methods for Design of Spacecraft Hysteresis Damper

    Directory of Open Access Journals (Sweden)

    Ricardo Gama

    2013-01-01

    Full Text Available This work deals with averaging methods for dynamics of attitude stabilization systems. The operation of passive gravity-gradient attitude stabilization systems involving hysteresis rods is described by discontinuous differential equations. We apply recently developed averaging techniques for discontinuous system in order to simplify its analysis and to perform parameter optimization. The results obtained using this analytic method are compared with those of numerical optimization.

  9. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; 
Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; 
Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
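    The result is quoted with separate statistical and systematic uncertainties. If the two are independent, the total uncertainty is their quadrature sum, a standard convention sketched below:

```python
# Combine independent statistical and systematic uncertainties in quadrature
# for a result quoted as value ± stat ± syst.
import math

def total_uncertainty(stat, syst):
    return math.hypot(stat, syst)   # sqrt(stat**2 + syst**2)

tau, stat, syst = 1.533, 0.013, 0.022   # ps, from the measurement above
print(f"{tau} ± {total_uncertainty(stat, syst):.3f} ps")
```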

  10. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be infered from the evolution equation. (orig.)

  11. Updated measurement of the average b hadron lifetime

    Science.gov (United States)

    Buskulic, D.; Decamp, D.; Goy, C.; Lees, J.-P.; Minard, M.-N.; Mours, B.; Alemany, R.; Ariztizabal, F.; Comas, P.; Crespo, J. M.; Delfino, M.; Fernandez, E.; Gaitan, V.; Garrido, Ll.; Mattison, T.; Pacheco, A.; Pascual, A.; Creanza, D.; de Palma, M.; Farilla, A.; Iaselli, G.; Maggi, G.; Maggi, M.; Natali, S.; Nuzzo, S.; Quattromini, M.; Ranieri, A.; Raso, G.; Romano, F.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Zito, G.; Hu, H.; Huang, D.; Huang, X.; Lin, J.; Lou, J.; Qiao, C.; Wang, T.; Xie, Y.; Xu, D.; Xu, R.; Zhang, J.; Zhao, W.; Bauerdick, L. A. T.; Blucher, E.; Bonvicini, G.; Bossi, F.; Boudreau, J.; Casper, D.; Drevermann, H.; Forty, R. W.; Ganis, G.; Gay, C.; Hagelberg, R.; Harvey, J.; Haywood, S.; Hilgart, J.; Jacobsen, R.; Jost, B.; Knobloch, J.; Lançon, E.; Lehraus, I.; Lohse, T.; Lusiani, A.; Martinez, M.; Mato, P.; Meinhard, H.; Minten, A.; Miquel, R.; Moser, H.-G.; Palazzi, P.; Perlas, J. A.; Pusztaszeri, J.-F.; Ranjard, F.; Redlinger, G.; Rolandi, L.; Rothberg, J.; Ruan, T.; Saich, M.; Schlatter, D.; Schmelling, M.; Sefkow, F.; Tejessy, W.; Wachsmuth, H.; Wiedenmann, W.; Wildish, T.; Witzeling, W.; Wotschack, J.; Ajaltouni, Z.; Badaud, F.; Bardadin-Otwinowska, M.; Bencheikh, A. M.; El Fellous, R.; Falvard, A.; Gay, P.; Guicheney, C.; Henrad, P.; Jousset, J.; Michel, B.; Montret, J.-C.; Pallin, D.; Perret, P.; Pietrzyk, B.; Proriol, J.; Prulhière, F.; Stimpfl, G.; Fearnley, T.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Møllerud, R.; Nilsson, B. S.; Efthymiopoulos, I.; Kyriakis, A.; Simopoulou, E.; Vayaki, A.; Zachariadou, K.; Badier, J.; Blondel, A.; Bonneaud, G.; Brient, J. C.; Fouque, G.; Orteu, S.; Rosowsky, A.; Rougé, A.; Rumpf, M.; Tanaka, R.; Verderi, M.; Videau, H.; Candlin, D. J.; Parsons, M. 
I.; Veitch, E.; Moneta, L.; Parrini, G.; Corden, M.; Georgiopoulos, C.; Ikeda, M.; Lannutti, J.; Levinthal, D.; Mermikides, M.; Sawyer, L.; Wasserbaech, S.; Antonelli, A.; Baldini, R.; Bencivenni, G.; Bologna, G.; Campana, P.; Capon, G.; Cerutti, F.; Chiarella, V.; D'Ettorre-Piazzoli, B.; Felici, G.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Passalacqua, L.; Pepe-Altarelli, M.; Picchi, P.; Altoon, B.; Boyle, O.; Colrain, P.; Ten Have, I.; Lynch, J. G.; Maitland, W.; Morton, W. T.; Raine, C.; Scarr, J. M.; Smith, K.; Thompson, A. S.; Turnbull, R. M.; Brandl, B.; Braun, O.; Geweniger, C.; Hanke, P.; Hepp, V.; Kluge, E. E.; Maumary, Y.; Putzer, A.; Rensch, B.; Stahl, A.; Tittel, K.; Wunsch, M.; Belk, A. T.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Cattaneo, M.; Colling, D. J.; Dornan, P. J.; Dugeay, S.; Greene, A. M.; Hassard, J. F.; Lieske, N. M.; Nash, J.; Patton, S. J.; Payne, D. G.; Phillips, M. J.; Sedgbeer, J. K.; Tomalin, I. R.; Wright, A. G.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bowdery, C. K.; Brodbeck, T. J.; Finch, A. J.; Foster, F.; Hughes, G.; Jackson, D.; Keemer, N. R.; Nuttall, M.; Patel, A.; Sloan, T.; Snow, S. W.; Whelan, E. P.; Kleinknecht, K.; Raab, J.; Renk, B.; Sander, H.-G.; Schmidt, H.; Steeg, F.; Walther, S. M.; Wolf, B.; Aubert, J.-J.; Benchouk, C.; Bonissent, A.; Carr, J.; Coyle, P.; Drinkard, J.; Etienne, F.; Papalexiou, S.; Payre, P.; Qian, Z.; Roos, L.; Rousseau, D.; Schwemling, P.; Talby, M.; Adlung, S.; Bauer, C.; Blum, W.; Brown, D.; Cattaneo, P.; Cowan, G.; Dehning, B.; Dietl, H.; Dydak, F.; Fernandez-Bosman, M.; Frank, M.; Halley, A. W.; Lauber, J.; Lütjens, G.; Lutz, G.; Männer, W.; Richter, R.; Rotscheidt, H.; Schröder, J.; Schwarz, A. S.; Settles, R.; Seywerd, H.; Stierlin, U.; Stiegler, U.; Denis, R. St.; Takashima, M.; Thomas, J.; Wolf, G.; Boucrot, J.; Callot, O.; Cordier, A.; Davier, M.; Grivaz, J.-F.; Heusse, Ph.; Jaffe, D. E.; Janot, P.; Kim, D. 
W.; Le Diberder, F.; Lefrançois, J.; Lutz, A.-M.; Schune, M.-H.; Veillet, J.-J.; Videau, I.; Zhang, Z.; Abbaneo, D.; Amendolia, S. R.; Bagliesi, G.; Batignani, G.; Bosisio, L.; Bottigli, U.; Bozzi, C.; Bradaschia, C.; Carpinelli, M.; Ciocci, M. A.; Dell'Orso, R.; Ferrante, I.; Fidecaro, F.; Foà, L.; Focardi, E.; Forti, F.; Giassi, A.; Giorgi, M. A.; Ligabue, F.; Mannelli, E. B.; Marrocchesi, P. S.; Messineo, A.; Palla, F.; Rizzo, G.; Sanguinetti, G.; Spagnolo, P.; Steinberger, J.; Tenchini, R.; Tonelli, G.; Triggiani, G.; Vannini, C.; Venturi, A.; Verdini, P. G.; Walsh, J.; Carter, J. M.; Green, M. G.; March, P. V.; Mir, Ll. M.; Medcalf, T.; Quazi, I. S.; Strong, J. A.; West, L. R.; Botterill, D. R.; Clifft, R. W.; Edgecock, T. R.; Edwards, M.; Fisher, S. M.; Jones, T. J.; Norton, P. R.; Salmon, D. P.; Thompson, J. C.; Bloch-Devaux, B.; Colas, P.; Duarte, H.; Kozanecki, W.; Lemaire, M. C.; Locci, E.; Loucatos, S.; Monnier, E.; Perez, P.; Perrier, F.; Rander, J.; Renardy, J.-F.; Roussarie, A.; Schuller, J.-P.; Schwindling, J.; Si Mohand, D.; Vallage, B.; Johnson, R. P.; Litke, A. M.; Taylor, G.; Wear, J.; Ashman, J. G.; Babbage, W.; Booth, C. N.; Buttar, C.; Carney, R. E.; Cartwright, S.; Combley, F.; Hatfield, F.; Reeves, P.; Thompson, L. F.; Barberio, E.; Böhrer, A.; Brandt, S.; Grupen, C.; Mirabito, L.; Rivera, F.; Schäfer, U.; Giannini, G.; Gobbo, B.; Ragusa, F.; Bellantoni, L.; Chen, W.; Cinabro, D.; Conway, J. S.; Cowen, D. F.; Feng, Z.; Ferguson, D. P. S.; Gao, Y. S.; Grahl, J.; Harton, J. L.; Jared, R. C.; Leclaire, B. W.; Lishka, C.; Pan, Y. B.; Pater, J. R.; Saadi, Y.; Sharma, V.; Schmitt, M.; Shi, Z. H.; Walsh, A. M.; Weber, F. V.; Whitney, M. H.; Sau Lan Wu; Wu, X.; Zobernig, G.; Aleph Collaboration

    1992-11-01

    An improved measurement of the average lifetime of b hadrons has been performed with the ALEPH detector. From a sample of 260 000 hadronic Z 0 decays, recorded during the 1991 LEP run with the silicon vertex detector fully operational, a fit to the impact parameter distribution of lepton tracks coming from semileptonic decays yields an average b hadron lifetime of 1.49 ± 0.03 ± 0.06 ps.

  12. A precise measurement of the average b hadron lifetime

    Science.gov (United States)

    Buskulic, D.; de Bonis, I.; Casper, D.; Decamp, D.; Ghez, P.; Goy, C.; Lees, J.-P.; Lucotte, A.; Minard, M.-N.; Odier, P.; Pietrzyk, B.; Ariztizabal, F.; Chmeissani, M.; Crespo, J. M.; Efthymiopoulos, I.; Fernandez, E.; Fernandez-Bosman, M.; Gaitan, V.; Garrido, Ll.; Martinez, M.; Orteu, S.; Pacheco, A.; Padilla, C.; Palla, F.; Pascual, A.; Perlas, J. A.; Sanchez, F.; Teubert, F.; Colaleo, A.; Creanza, D.; de Palma, M.; Farilla, A.; Gelao, G.; Girone, M.; Iaselli, G.; Maggi, G.; Maggi, M.; Marinelli, N.; Natali, S.; Nuzzo, S.; Ranieri, A.; Raso, G.; Romano, F.; Ruggieri, F.; Selvaggi, G.; Silvestris, L.; Tempesta, P.; Zito, G.; Huang, X.; Lin, J.; Ouyang, Q.; Wang, T.; Xie, Y.; Xu, R.; Xue, S.; Zhang, J.; Zhang, L.; Zhao, W.; Bonvicini, G.; Cattaneo, M.; Comas, P.; Coyle, P.; Drevermann, H.; Forty, R. W.; Frank, M.; Hagelberg, R.; Harvey, J.; Jacobsen, R.; Janot, P.; Jost, B.; Knobloch, J.; Lehraus, I.; Markou, C.; Martin, E. B.; Mato, P.; Minten, A.; Miquel, R.; Oest, T.; Palazzi, P.; Pater, J. R.; Pusztaszeri, J.-F.; Ranjard, F.; Rensing, P.; Rolandi, L.; Schlatter, D.; Schmelling, M.; Schneider, O.; Tejessy, W.; Tomalin, I. R.; Venturi, A.; Wachsmuth, H.; Wiedenmann, W.; Wildish, T.; Witzeling, W.; Wotschack, J.; Ajaltouni, Z.; Bardadin-Otwinowska, M.; Barrès, A.; Boyer, C.; Falvard, A.; Gay, P.; Guicheney, C.; Henrard, P.; Jousset, J.; Michel, B.; Monteil, S.; Montret, J.-C.; Pallin, D.; Perret, P.; Podlyski, F.; Proriol, J.; Rossignol, J.-M.; Saadi, F.; Fearnley, T.; Hansen, J. B.; Hansen, J. D.; Hansen, J. R.; Hansen, P. H.; Nilsson, B. S.; Kyriakis, A.; Simopoulou, E.; Siotis, I.; Vayaki, A.; Zachariadou, K.; Blondel, A.; Bonneaud, G.; Brient, J. C.; Bourdon, P.; Passalacqua, L.; Rougé, A.; Rumpf, M.; Tanaka, R.; Valassi, A.; Verderi, M.; Videau, H.; Candlin, D. J.; Parsons, M. I.; Focardi, E.; Parrini, G.; Corden, M.; Delfino, M.; Georgiopoulos, C.; Jaffe, D. 
E.; Antonelli, A.; Bencivenni, G.; Bologna, G.; Bossi, F.; Campana, P.; Capon, G.; Chiarella, V.; Felici, G.; Laurelli, P.; Mannocchi, G.; Murtas, F.; Murtas, G. P.; Pepe-Altarelli, M.; Dorris, S. J.; Halley, A. W.; Ten Have, I.; Knowles, I. G.; Lynch, J. G.; Morton, W. T.; O'Shea, V.; Raine, C.; Reeves, P.; Scarr, J. M.; Smith, K.; Smith, M. G.; Thompson, A. S.; Thomson, F.; Thorn, S.; Turnbull, R. M.; Becker, U.; Braun, O.; Geweniger, C.; Graefe, G.; Hanke, P.; Hepp, V.; Kluge, E. E.; Putzer, A.; Rensch, B.; Schmidt, M.; Sommer, J.; Stenzel, H.; Tittel, K.; Werner, S.; Wunsch, M.; Abbaneo, D.; Beuselinck, R.; Binnie, D. M.; Cameron, W.; Colling, D. J.; Dornan, P. J.; Konstantinidis, N.; Moneta, L.; Moutoussi, A.; Nash, J.; San Martin, G.; Sedgbeer, J. K.; Stacey, A. M.; Dissertori, G.; Girtler, P.; Kneringer, E.; Kuhn, D.; Rudolph, G.; Bowdery, C. K.; Brodbeck, T. J.; Colrain, P.; Crawford, G.; Finch, A. J.; Foster, F.; Hughes, G.; Sloan, T.; Whelan, E. P.; Williams, M. I.; Galla, A.; Greene, A. M.; Kleinknecht, K.; Quast, G.; Raab, J.; Renk, B.; Sander, H.-G.; van Gemmeren, P.; Wanke, R.; Zeitnitz, C.; Aubert, J. J.; Bencheikh, A. M.; Benchouk, C.; Bonissent, A.; Bujosa, G.; Calvet, D.; Carr, J.; Diaconu, C.; Etienne, F.; Nicod, D.; Payre, P.; Rousseau, D.; Talby, M.; Thulasidas, M.; Abt, I.; Assmann, R.; Bauer, C.; Blum, W.; Brown, D.; Dietl, H.; Dydak, F.; Ganis, G.; Gotzhein, C.; Jakobs, K.; Kroha, H.; Lütjens, G.; Lutz, G.; Männer, W.; Moser, H.-G.; Richter, R.; Rosado-Schlosser, A.; Schael, S.; Settles, R.; Seywerd, H.; Stierlin, U.; Denis, R. St.; Wolf, G.; Alemany, R.; Boucrot, J.; Callot, O.; Cordier, A.; Courault, F.; Davier, M.; Duflot, L.; Grivaz, J.-F.; Heusse, Ph.; Jacquet, M.; Kim, D. W.; Le Diberder, F.; Lefrançois, J.; Lutz, A.-M.; Musolino, G.; Nikolic, I.; Park, H. J.; Park, I. C.; Schune, M.-H.; Simion, S.; Veillet, J.-J.; Videau, I.; Azzurri, P.; Bagliesi, G.; Batignani, G.; Bettarini, S.; Bozzi, C.; Calderini, G.; Carpinelli, M.; Ciocci, M. 
A.; Ciulli, V.; Dell'Orso, R.; Fantechi, R.; Ferrante, I.; Foà, L.; Forti, F.; Giassi, A.; Giorgi, M. A.; Gregorio, A.; Ligabue, F.; Lusiani, A.; Marrocchesi, P. S.; Messineo, A.; Rizzo, G.; Sanguinetti, G.; Sciabà, A.; Spagnolo, P.; Steinberger, J.; Tenchini, R.; Tonelli, G.; Triggiani, G.; Vannini, C.; Verdini, P. G.; Walsh, J.; Betteridge, A. P.; Blair, G. A.; Bryant, L. M.; Cerutti, F.; Gao, Y.; Green, M. G.; Johnson, D. L.; Medcalf, T.; Mir, Ll. M.; Perrodo, P.; Strong, J. A.; Bertin, V.; Botterill, D. R.; Clifft, R. W.; Edgecock, T. R.; Haywood, S.; Edwards, M.; Maley, P.; Norton, P. R.; Thompson, J. C.; Bloch-Devaux, B.; Colas, P.; Duarte, H.; Emery, S.; Kozanecki, W.; Lançon, E.; Lemaire, M. C.; Locci, E.; Marx, B.; Perez, P.; Rander, J.; Renardy, J.-F.; Rosowsky, A.; Roussarie, A.; Schuller, J.-P.; Schwindling, J.; Si Mohand, D.; Trabelsi, A.; Vallage, B.; Johnson, R. P.; Kim, H. Y.; Litke, A. M.; McNeil, M. A.; Taylor, G.; Beddall, A.; Booth, C. N.; Boswell, R.; Cartwright, S.; Combley, F.; Dawson, I.; Koksal, A.; Letho, M.; Newton, W. M.; Rankin, C.; Thompson, L. F.; Böhrer, A.; Brandt, S.; Cowan, G.; Feigl, E.; Grupen, C.; Lutters, G.; Minguet-Rodriguez, J.; Rivera, F.; Saraiva, P.; Smolik, L.; Stephan, F.; Apollonio, M.; Bosisio, L.; Della Marina, R.; Giannini, G.; Gobbo, B.; Ragusa, F.; Rothberg, J.; Wasserbaech, S.; Armstrong, S. R.; Bellantoni, L.; Elmer, P.; Feng, Z.; Ferguson, D. P. S.; Gao, Y. S.; González, S.; Grahl, J.; Harton, J. L.; Hayes, O. J.; Hu, H.; McNamara, P. A.; Nachtman, J. M.; Orejudos, W.; Pan, Y. B.; Saadi, Y.; Schmitt, M.; Scott, I. J.; Sharma, V.; Turk, J. D.; Walsh, A. M.; Sau Lan Wu; Wu, X.; Yamartino, J. M.; Zheng, M.; Zobernig, G.; Aleph Collaboration

    1996-02-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  13. On the convergence time of asynchronous distributed quantized averaging algorithms

    OpenAIRE

    ZHU, MINGHUI; Martinez, Sonia

    2010-01-01

    We come up with a class of distributed quantized averaging algorithms on asynchronous communication networks with fixed, switching and random topologies. The implementation of these algorithms is subject to the realistic constraint that the communication rate, the memory capacities of agents and the computation precision are finite. The focus of this paper is on the study of the convergence time of the proposed quantized averaging algorithms. By appealing to random walks on graphs, we derive ...
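The abstract does not reproduce the authors' algorithms, but the core idea of quantized averaging can be sketched with a standard pairwise quantized gossip update (a generic scheme, not necessarily the one in the paper): two randomly chosen agents replace their integer states with the floor and ceiling of their average, so the sum is preserved under finite-precision states.

```python
import random

def quantized_gossip(values, steps, seed=0):
    """Pairwise quantized gossip: a randomly chosen pair of agents replaces
    its two integer states with the floor and ceiling of their average.
    The total sum is preserved, so the states drift toward the true mean
    while remaining integers (finite memory / finite precision)."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.sample(range(len(x)), 2)
        s = x[i] + x[j]
        x[i], x[j] = s // 2, s - s // 2
    return x

states = quantized_gossip([0, 4, 9, 3], steps=200)
# the sum (16) is preserved and the spread shrinks toward one quantization level
```

The convergence-time question studied in the paper is precisely how many such updates are needed before the spread stops shrinking.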

  14. Ensemble averaging applied to the flow of a multiphase mixture

    International Nuclear Information System (INIS)

    Ensemble averaging theorems are used to derive a two-fluid model describing the flow of a dilute fluid-solid mixture. The model is valid for mixtures containing particles that are small compared to the length scales describing variations in ensemble-averaged field quantities, such as fluid or particle phase density, pressure or velocity. For the case where the mixture is pseudo-homogeneous, the equations obtained reproduce the Einstein viscosity correction

  15. Homogeneous conformal averaging operators on semisimple Lie algebras

    OpenAIRE

    Kolesnikov, Pavel

    2014-01-01

    In this note we show a close relation between the following objects: Classical Yang---Baxter equation (CYBE), conformal algebras (also known as vertex Lie algebras), and averaging operators on Lie algebras. It turns out that the singular part of a solution of CYBE (in the operator form) on a Lie algebra $\\mathfrak g$ determines an averaging operator on the corresponding current conformal algebra $\\mathrm{Cur} \\mathfrak g$. For a finite-dimensional semisimple Lie algebra $\\mathfrak g$, we desc...

  16. Average resonance parameters of zirconium and molybdenum nuclei

    International Nuclear Information System (INIS)

    Full sets of average resonance parameters S0, S1, R0', R1', S1,3/2 for zirconium and molybdenum nuclei with a natural mixture of isotopes are determined by means of a method designed by the authors. The determination is based on an analysis of the average experimental differential cross sections of neutron elastic scattering in the energy range up to 440 keV. The recommended parameters and some literature data were also analysed.

  17. Average resonance parameters of ruthenium and palladium nuclei

    International Nuclear Information System (INIS)

    Full sets of the average resonance parameters S0, S1, R0', R1', S1,3/2 for ruthenium and palladium nuclei with a natural mixture of isotopes are determined by means of a method designed by the authors. The determination is based on an analysis of the average experimental differential cross sections of neutron elastic scattering in the energy range up to 440 keV. The recommended parameters and some literature data were also analysed.

  18. Analysis of Height Affect on Average Wind Speed by ANN

    OpenAIRE

    Ata, Raşit; Çetin, Numan

    2011-01-01

    The power generated by wind turbines depends on several factors, two of which are the wind speed and the tower height of the wind turbine. In this study, the annual average wind speed as a function of tower height is predicted using Artificial Neural Networks (ANN) and comparisons are made with a conventional model approach. Backpropagation multilayer ANNs were used to estimate the annual average wind speed for three locations in Turkey. The model has been developed with the help of neural network methodol...

  20. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
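The per-pixel Beer-Lambert conversion and saturation normalization described above can be sketched as follows. This is a toy 1-D "image" with an assumed attenuation coefficient `MU_W` and thickness; it is not the authors' calibration, and the beam-hardening and geometric corrections are omitted.

```python
import math

MU_W = 0.3       # assumed effective attenuation coefficient of water (1/cm)
THICKNESS = 1.0  # assumed column thickness along the beam (cm)

def water_content(i_wet, i_dry, mu_w=MU_W, L=THICKNESS):
    """Per-pixel volumetric water content from Beer-Lambert attenuation:
    I_wet = I_dry * exp(-mu_w * theta * L)  =>  theta = -ln(I_wet/I_dry) / (mu_w * L)."""
    return [-math.log(w / d) / (mu_w * L) for w, d in zip(i_wet, i_dry)]

def relative_saturation(i_wet, i_dry, i_sat):
    """Normalise by the image at saturation to cancel scattering effects."""
    theta = water_content(i_wet, i_dry)
    theta_sat = water_content(i_sat, i_dry)
    return [t / ts for t, ts in zip(theta, theta_sat)]

# toy 4-pixel "images": dry reference, fully saturated, and partially wet
i_dry = [1000.0] * 4
i_sat = [1000.0 * math.exp(-MU_W * 0.4)] * 4   # corresponds to theta = 0.4
i_wet = [1000.0 * math.exp(-MU_W * 0.2)] * 4   # corresponds to theta = 0.2
sat = relative_saturation(i_wet, i_dry, i_sat)
avg_sat = sum(sat) / len(sat)                  # average over pixels, ~0.5
```

Averaging the resulting per-pixel saturations at each imposed matric potential is what yields one point on the average retention curve.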

  1. Averaged universe confronted to cosmological observations: a fully covariant approach

    CERN Document Server

    Wijenayake, Tharake; Ishak, Mustapha

    2016-01-01

    One of the outstanding problems in general relativistic cosmology is that of the averaging. That is, how the lumpy universe that we observe at small scales averages out to a smooth Friedmann-Lemaitre-Robertson-Walker (FLRW) model. The root of the problem is that averaging does not commute with the Einstein equations that govern the dynamics of the model. This leads to the well-known question of backreaction in cosmology. In this work, we approach the problem using the covariant framework of Macroscopic Gravity (MG). We use its cosmological solution with a flat FLRW macroscopic background where the result of averaging cosmic inhomogeneities has been encapsulated into a backreaction density parameter denoted $\Omega_\mathcal{A}$. We constrain this averaged universe using available cosmological data sets of expansion and growth including, for the first time, a full CMB analysis from Planck temperature anisotropy and polarization data, the supernovae data from Union 2.1, the galaxy power spectrum from WiggleZ, the...

  2. Basics of averaging of the Maxwell equations for bulk materials

    CERN Document Server

    Chipouline, A; Tretyakov, S

    2012-01-01

    Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and the respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective of the type of material studied) which have to be satisfied. Any homogenization model has to be consistent with these basic principles; a model that does not satisfy them cannot be accepted as credible. Another goal of this paper is to establish the averaging procedure for bulk MM, which is rather close to the case of compound materials but should include magnetic response of the inclusions an...

  3. Time-averaged photon-counting digital holography.

    Science.gov (United States)

    Demoli, Nazif; Skenderović, Hrvoje; Stipčević, Mario

    2015-09-15

    Time-averaged holography has relied on photo-emulsions (in its early stage) and, later, on digital photo-sensitive arrays to record holograms. We extend the recording possibilities by utilizing a photon-counting camera, and we further investigate the possibility of obtaining accurate hologram reconstructions under rather severe experimental conditions. To achieve this, we derived an expression for the fringe function comprising the main parameters affecting the hologram recording. The influence of the main parameters, namely the exposure time and the number of averaged holograms, is analyzed by simulations and experiments. It is demonstrated that long exposure times can be avoided by averaging over many holograms with exposure times much shorter than the vibration cycle. Conditions under which the signal-to-noise ratio in reconstructed holograms can be substantially increased are provided. PMID:26371907
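The central claim, that averaging many short-exposure holograms can substitute for one long exposure, rests on uncorrelated noise averaging down as 1/sqrt(M). A minimal simulation with a toy 1-D "fringe" signal and Gaussian noise (not the authors' photon-counting model):

```python
import random
import statistics

random.seed(1)
signal = [1.0 if 20 <= i < 40 else 0.0 for i in range(64)]  # toy fringe pattern

def noisy_frame(sigma=0.5):
    """One short-exposure 'hologram': signal plus uncorrelated Gaussian noise."""
    return [s + random.gauss(0.0, sigma) for s in signal]

def average_frames(m):
    """Pixel-wise average of m independently acquired frames."""
    frames = [noisy_frame() for _ in range(m)]
    return [sum(col) / m for col in zip(*frames)]

def residual_noise(frame):
    """Standard deviation of the frame's departure from the clean signal."""
    return statistics.pstdev(f - s for f, s in zip(frame, signal))

n1 = residual_noise(average_frames(1))
n64 = residual_noise(average_frames(64))
# averaging 64 frames suppresses the residual noise by roughly sqrt(64) = 8
```

For photon-counting data the noise is Poissonian rather than Gaussian, but the same 1/sqrt(M) scaling applies.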

  4. Evaluation of average molecular weight of gamma-irradiated polytetrafluoroethylene

    International Nuclear Information System (INIS)

    Statistical treatment of the decrease in the number-average molecular weight of gamma-irradiated polytetrafluoroethylene (PTFE) sample was carried out by considering the random degradation of main chains, difference in the susceptibility to radiation damage between the crystalline and amorphous regions, and the evolution of low molecular weight gases. A specimen which consists of n chains was considered. The fracture density P was treated as the probability of fracture of main chains occurring per bond. The number of chain fractions was given. The monomer unit of the number-average molecule after evolution during gamma-irradiation was deduced. The fracture of main chains caused by radiation is dominant in the amorphous region. The dependence of amorphous fraction on radiation dose can be expressed. The calculated number-average molecular weight of irradiated PTFE was compared with the experimental results obtained from the viscoelastic method. (J.P.N.)

  5. Motion artifacts reduction from PPG using cyclic moving average filter.

    Science.gov (United States)

    Lee, Junyeon

    2014-01-01

    The photoplethysmogram (PPG) is an extremely useful medical diagnostic tool. However, PPG signals are highly susceptible to motion artifacts. In this paper, we propose a cyclic moving average filter that exploits the cycle-to-cycle similarity of the PPG waveform. The continuous PPG signal is first separated into individual cycles; the cycles are brought to a common length by adjusting the number of samples, arranged in two dimensions, and a running average is then taken over corresponding samples across cycles. In this way motion artifacts can be eliminated without damaging the PPG signal. PMID:24704660
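The cycle-wise averaging described above can be sketched as follows (hypothetical helper names; a real PPG pipeline would also need a beat detector to split the stream into cycles): each cycle is resampled to a common length and corresponding samples are averaged, so an artifact confined to one cycle is diluted by the clean cycles.

```python
def resample(cycle, length):
    """Linearly interpolate one cycle onto a fixed number of samples."""
    n = len(cycle)
    out = []
    for k in range(length):
        pos = k * (n - 1) / (length - 1)
        i = min(int(pos), n - 2)
        frac = pos - i
        out.append(cycle[i] * (1 - frac) + cycle[i + 1] * frac)
    return out

def cyclic_moving_average(cycles, length=8):
    """Average corresponding samples across successive PPG cycles."""
    stretched = [resample(c, length) for c in cycles]
    return [sum(col) / len(col) for col in zip(*stretched)]

clean = [0, 2, 5, 2, 1, 0]
noisy = [0, 2, 9, 2, 1, 0, 0]   # one corrupted, slightly longer cycle
template = cyclic_moving_average([clean, clean, clean, noisy])
# the corrupted peak (9) is pulled back toward the clean waveform
```

The output serves as an artifact-reduced template of one heartbeat cycle.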

  6. Modified Adaptive Weighted Averaging Filtering Algorithm for Noisy Image Sequences

    Institute of Scientific and Technical Information of China (English)

    LI Weifeng; YU Daoyin; CHEN Xiaodong

    2007-01-01

    In order to avoid the influence of noise variance on the filtering performances, a modified adaptive weighted averaging (MAWA) filtering algorithm is proposed for noisy image sequences. Based upon adaptive weighted averaging pixel values in consecutive frames, this algorithm achieves the filtering goal by assigning smaller weights to the pixels with inappropriate estimated motion trajectory for noise. It only utilizes the intensity of pixels to suppress noise and accordingly is independent of noise variance. To evaluate the performance of the proposed filtering algorithm, its mean square error and percentage of preserved edge points were compared with those of traditional adaptive weighted averaging and non-adaptive mean filtering algorithms under different noise variances. Relevant results show that the MAWA filtering algorithm can preserve image structures and edges under motion after attenuating noise, and thus may be used in image sequence filtering.
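The key idea, weights that shrink for pixels whose motion trajectory is poorly estimated, can be sketched with an intensity-difference-based weighting over consecutive frames. The exponential weight and the `scale` parameter are assumptions for illustration, not necessarily the authors' exact weighting.

```python
import math

def adaptive_weighted_average(frames, ref_index, scale=10.0):
    """Denoise the frame at ref_index by averaging over consecutive frames.
    Pixels whose intensity differs strongly from the reference (i.e. whose
    estimated motion trajectory is inappropriate) get exponentially smaller
    weights, so the result needs no explicit noise-variance estimate."""
    ref = frames[ref_index]
    out = []
    for p in range(len(ref)):
        num = den = 0.0
        for frame in frames:
            w = math.exp(-abs(frame[p] - ref[p]) / scale)
            num += w * frame[p]
            den += w
        out.append(num / den)
    return out

# three 1-D "frames"; the last one has a moving object crossing pixel 1
frames = [[100.0, 100.0], [102.0, 98.0], [100.0, 300.0]]
out = adaptive_weighted_average(frames, ref_index=0)
# out[1] stays near 100 because the 300 sample is almost fully rejected,
# whereas a plain temporal mean would give (100 + 98 + 300) / 3 = 166
```

Because the weights depend only on pixel intensities, the filter behaves the same at any noise variance, which is the property the abstract emphasizes.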

  7. An Advanced Time Averaging Modelling Technique for Power Electronic Circuits

    Science.gov (United States)

    Jankuloski, Goce

    For stable and efficient performance of power converters, a good mathematical model is needed. This thesis presents a new modelling technique for DC/DC and DC/AC Pulse Width Modulated (PWM) converters. The new model is more accurate than the existing modelling techniques such as State Space Averaging (SSA) and Discrete Time Modelling. Unlike the SSA model, the new modelling technique, the Advanced Time Averaging Model (ATAM) includes the averaging dynamics of the converter's output. In addition to offering enhanced model accuracy, application of linearization techniques to the ATAM enables the use of conventional linear control design tools. A controller design application demonstrates that a controller designed based on the ATAM outperforms one designed using the ubiquitous SSA model. Unlike the SSA model, ATAM for DC/AC augments the system's dynamics with the dynamics needed for subcycle fundamental contribution (SFC) calculation. This allows for controller design that is based on an exact model.

  8. Digital pulse processor using a moving average technique

    International Nuclear Information System (INIS)

    Interest has recently grown in substituting purely digital methods for the analog techniques traditionally used in processing pulses from radiation detectors. A digital pulse processor with improved differential linearity and reduced dead time has been designed. The circuit uses an 8-bit flash ADC running at 36 MHz and continually sampling the signal from the preamplifier or shaping amplifier. The digitized signal is then processed by a digital moving averager. A digital peak detector is used for measuring the amplitude of the shaped pulses. A novel, threshold-free circuit has been designed that combines both the moving average and peak detection functions. The circuit also provides a timing signal with an uncertainty of one sampling period. The number of the averaged samples (equivalent to the shaping time constant) is digitally controlled
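A minimal sketch of the two functions described, a boxcar moving average acting as the digital shaper followed by a threshold-free local-maximum peak detector, on a toy 1-D pulse train (the real circuit runs at 36 MHz on an 8-bit flash ADC):

```python
def moving_average(samples, window):
    """Boxcar moving average: the digital equivalent of a shaping amplifier;
    the window length plays the role of the shaping time constant."""
    out, acc = [], 0.0
    for i, s in enumerate(samples):
        acc += s
        if i >= window:
            acc -= samples[i - window]
        out.append(acc / window)
    return out

def detect_peaks(shaped):
    """Threshold-free peak detector: a sample counts as a peak if it is
    strictly above its left neighbour and not below its right neighbour."""
    return [(i, v) for i, (a, v, b) in
            enumerate(zip(shaped, shaped[1:], shaped[2:]), start=1)
            if v > a and v >= b]

# two identical detector pulses on a flat baseline
pulse = [0, 0, 0, 1, 3, 7, 3, 1, 0, 0, 0, 0, 1, 3, 7, 3, 1, 0, 0, 0]
shaped = moving_average(pulse, window=4)
peaks = detect_peaks(shaped)
```

The peak index gives the timing signal to within one sampling period, as in the abstract.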

  9. Gauge-Invariant Average of Einstein Equations for finite Volumes

    CERN Document Server

    Smirnov, Juri

    2014-01-01

    For the study of cosmological backreaction an averaging procedure is required. In this work a covariant and gauge-invariant averaging formalism for finite volumes is developed. This averaging is applied to the scalar parts of Einstein's equations. For this purpose dust, as a physical laboratory, is coupled to the gravitating system. The goal is to study the deviation from the homogeneous universe and the impact of this deviation on the dynamics of our universe. Fields of physical observers are included in the studied system and used to construct a reference frame in which to perform the averaging without a formal gauge fixing. The derived equations resolve the question of whether backreaction is gauge dependent.

  10. Average contraction and synchronization of complex switched networks

    International Nuclear Information System (INIS)

    This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)

  11. Despeckling vs averaging of retinal UHROCT tomograms: advantages and limitations

    Science.gov (United States)

    Eichel, Justin A.; Lee, Donghyun D.; Wong, Alexander; Fieguth, Paul W.; Clausi, David A.; Bizheva, Kostadinka K.

    2011-03-01

    Imaging time can be reduced using despeckled tomograms, which have similar image metrics to those obtained by averaging several low speed tomograms or many high speed tomograms. Quantitative analysis was used to compare the performance of two speckle denoising approaches, algorithmic despeckling and frame averaging, as applied to retinal OCT images. Human retinal tomograms were acquired from healthy subjects with a research grade 1060nm spectral domain UHROCT system with 5μm axial resolution in the retina. Single cross-sectional retinal tomograms were processed with a novel speckle denoising algorithm and compared with frame averaged retinal images acquired at the same location. Image quality metrics such as the image SNR and contrast-to-noise ratio (CNR) were evaluated for both cases.

  12. Vibration monitor for rotating machines using average frequency technique

    International Nuclear Information System (INIS)

    A vibration monitoring technique has been developed which can be applied to continuous monitoring and to patrol checking of many kinds of rotating machines in nuclear power plants. In this method, the vibration state of such equipment is represented in terms of two parameters, i.e. a vibration amplitude (RMS value) and an average frequency. The average frequency is defined as the square root of the second moment of the vibration frequency weighted by the power spectrum. The average frequency can be calculated by simple analogue circuits and does not require a spectrum analysis. Using these two parameters, not only the occurrence of abnormal vibration but also the type of vibration can be detected. (author)
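The defining formula, the average frequency as the root of the second spectral moment, follows directly from the description above and is easy to state in code:

```python
import math

def average_frequency(freqs, power):
    """Square root of the second moment of frequency weighted by the
    power spectrum: f_avg = sqrt( sum(f^2 * P(f)) / sum(P(f)) )."""
    second_moment = sum(f * f * p for f, p in zip(freqs, power))
    return math.sqrt(second_moment / sum(power))

# a single spectral line at 50 Hz gives an average frequency of 50 Hz
f_line = average_frequency([10.0, 50.0, 90.0], [0.0, 1.0, 0.0])
# equal power at 30 and 40 Hz gives sqrt((900 + 1600) / 2) ~ 35.36 Hz
f_pair = average_frequency([30.0, 40.0], [1.0, 1.0])
```

A shift of this scalar toward higher values flags a change in the character of the vibration, not just its amplitude.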

  13. Optimum orientation versus orientation averaging description of cluster radioactivity

    CERN Document Server

    Seif, W M; Refaie, A I; Amer, L H

    2016-01-01

    Background: The deformation of the nuclei involved in cluster decay of heavy nuclei seriously affects their half-lives against the decay. Purpose: We investigate the description of the different decay stages in both the optimum-orientation and the orientation-averaged pictures of the cluster decay process. Method: We consider the decays of 232,233,234U and 236,238Pu isotopes. The quantum mechanical knocking frequency and penetration probability based on the Wentzel-Kramers-Brillouin approximation are used to find the decay width. Results: We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. The difference between the two values increases with decreasing mass number of the emitted cluster. Correspondingly, the extracted preformation probability based on the averaged decay width increases by the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformati...

  14. Kinematic corrections to the averaged luminosity distance in inhomogeneous universes

    CERN Document Server

    Kostov, Valentin

    2010-01-01

    The redshift surfaces within inhomogeneous universes are shifted by the matter peculiar velocities. The arising average corrections to the luminosity distance are calculated relativistically in several Swiss-cheese models with mass compensated Lemaitre-Tolman-Bondi voids. These kinematic corrections are different from weak lensing effects and can be much bigger close to the observer. The statistical averaging over all directions is performed by tracing numerically light rays propagating through a random void lattice. The probability of a supernova emission from a comoving volume is assumed proportional to the rest mass in it. The average corrections to the distance modulus can be significant for redshifts smaller than 0.02 for small voids (radius 30 Mpc) and redshifts smaller than 0.1 for big voids (radius 300 Mpc), yet not large enough to substitute for dark energy. The corrections decay inversely proportional to the distance from the observer. In addition, there is a random cancelation of corrections between...

  15. Optimum orientation versus orientation averaging description of cluster radioactivity

    Science.gov (United States)

    Seif, W. M.; Ismail, M.; Refaie, A. I.; Amer, Laila H.

    2016-07-01

    While the optimum-orientation concept is frequently used in studies on cluster decays involving deformed nuclei, the orientation-averaging concept is used in most alpha decay studies. We investigate the different decay stages in both the optimum-orientation and the orientation-averaging pictures of the cluster decay process. For decays of 232,233,234U and 236,238Pu isotopes, the quantum knocking frequency and penetration probability based on the Wentzel–Kramers–Brillouin approximation are used to find the decay width. The obtained decay width and the experimental half-life are employed to estimate the cluster preformation probability. We found that the orientation-averaged decay width is one or two orders of magnitude less than its value along the non-compact optimum orientation. Correspondingly, the extracted preformation probability based on the averaged decay width increases by the same orders of magnitude compared to its value obtained considering the optimum orientation. The cluster preformation probabilities estimated by the two considered schemes are in more or less comparable agreement with the Blendowske–Walliser (BW) formula based on the preformation probability of α, Sα(ave), obtained from the orientation-averaging scheme. All the results, including the optimum-orientation ones, deviate substantially from the BW law based on Sα(opt) that was estimated from the optimum-orientation scheme. To account for the nuclear deformations, it is more relevant to calculate the decay width by averaging over the different possible orientations of the participating deformed nuclei, rather than considering the corresponding non-compact optimum orientation.
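Orientation averaging of a decay width, as opposed to evaluating it at the optimum orientation, amounts to a solid-angle-weighted quadrature. A generic sketch with a hypothetical width profile (not the actual WKB widths of the paper):

```python
import math

def orientation_average(width, n=2000):
    """Orientation-averaged width with the solid-angle weight sin(theta):
    <Gamma> = Int_0^{pi/2} Gamma(t) sin(t) dt / Int_0^{pi/2} sin(t) dt,
    evaluated with the midpoint rule."""
    h = (math.pi / 2) / n
    num = den = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        w = math.sin(t)
        num += width(t) * w
        den += w
    return num / den

# hypothetical width strongly peaked at theta = 0, mimicking the
# enhancement along a non-compact optimum orientation
gamma = lambda t: math.exp(-10.0 * t * t)
avg = orientation_average(gamma)
peak = gamma(0.0)
# avg lands more than an order of magnitude below the peak value,
# echoing the one-to-two-orders-of-magnitude gap quoted in the abstract
```

The sin(theta) weight is what suppresses the polar orientations: the peak direction occupies very little solid angle.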

  16. The Role of the Harmonic Vector Average in Motion Integration

    Directory of Open Access Journals (Sweden)

    Alan eJohnston

    2013-10-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The harmonic vector average, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the intersection of constraints direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the harmonic vector average.
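Taking the HVA to be the vector analogue of the harmonic mean (invert each vector as v -> v/|v|^2, average, invert back; this construction is our reading of the abstract, not a verbatim reproduction of the paper's definition), one can check that it recovers the true global velocity from the local normal velocities of a translating contour, while the plain vector average underestimates the speed:

```python
import math

def harmonic_vector_average(vectors):
    """Vector analogue of the harmonic mean: invert each vector
    (v -> v / |v|^2), take the arithmetic mean, and invert back."""
    inv = [(x / (x * x + y * y), y / (x * x + y * y)) for x, y in vectors]
    mx = sum(x for x, _ in inv) / len(inv)
    my = sum(y for _, y in inv) / len(inv)
    n2 = mx * mx + my * my
    return (mx / n2, my / n2)

# local normal velocities v_i = (V . n_i) n_i of a contour translating with
# global velocity V = (2, 0), with normals sampled symmetrically about V
speed = 2.0
angles = [math.radians(d) for d in (-60, -30, 30, 60)]
locals_ = [((speed * math.cos(a)) * math.cos(a),
            (speed * math.cos(a)) * math.sin(a)) for a in angles]
hva = harmonic_vector_average(locals_)
# hva recovers (2, 0); the plain vector average of the same set has
# x-component 1.0, i.e. half the true speed
```

This matches the abstract's claim that the vector average always underestimates the global speed while the HVA is exact for an unbiased sample.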

  17. Method of Best Representation for Averages in Data Evaluation

    International Nuclear Information System (INIS)

    A new method for averaging data for which incomplete information is available is presented. For example, this method would be applicable during data evaluation where only the final outcomes of the experiments and the associated uncertainties are known. This method is based on using the measurements to construct a mean probability density for the data set. This “expected value method” (EVM) is designed to treat asymmetric uncertainties and has distinct advantages over other methods of averaging, including giving a more realistic uncertainty, being robust to outliers and consistent under various representations of the same quantity
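A sketch of the EVM idea as described: model each measurement as an asymmetric density, form the mean density over the data set, and report its expected value and standard deviation. The split-normal model and the numbers are illustrative assumptions, not the paper's prescription.

```python
import math

def split_normal(x, mu, sig_lo, sig_hi):
    """Asymmetric Gaussian: width sig_lo below the mode, sig_hi above
    (an assumed model for a result quoted as mu +sig_hi / -sig_lo)."""
    s = sig_lo if x < mu else sig_hi
    norm = math.sqrt(2.0 / math.pi) / (sig_lo + sig_hi)
    return norm * math.exp(-0.5 * ((x - mu) / s) ** 2)

def evm_average(measurements, lo, hi, n=4000):
    """Mean probability density over the data set, then its expected value
    and standard deviation by midpoint-rule integration."""
    h = (hi - lo) / n
    xs = [lo + (k + 0.5) * h for k in range(n)]
    pdf = [sum(split_normal(x, *m) for m in measurements) / len(measurements)
           for x in xs]
    mean = sum(x * p for x, p in zip(xs, pdf)) * h
    var = sum((x - mean) ** 2 * p for x, p in zip(xs, pdf)) * h
    return mean, math.sqrt(var)

# two discrepant results: 10.0 (+0.5 / -0.2) and 11.0 (+/- 0.3)
mean, sigma = evm_average([(10.0, 0.2, 0.5), (11.0, 0.3, 0.3)], lo=8.0, hi=13.0)
# the combined uncertainty stays large because the measurements disagree,
# illustrating the "more realistic uncertainty" property claimed above
```

Unlike an inverse-variance weighted mean, the spread between discrepant inputs feeds directly into the reported uncertainty.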

  18. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673. ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conferencen. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords : Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  19. Bounce averaged trapped electron fluid equations for plasma turbulence

    International Nuclear Information System (INIS)

    A novel set of nonlinear fluid equations for mirror-trapped electrons is developed which differs from conventional fluid equations in two main respects: (1) the trapped-fluid moments average over only two of three velocity space dimensions, retaining the full pitch angle dependence of the trapped electron dynamics, and (2) closure approximations include the effects of collisionless wave-particle resonances with the toroidal precession drift. By speeding up calculations by at least √(mi/me), these bounce averaged fluid equations make possible realistic nonlinear simulations of turbulent particle transport and electron heat transport in tokamaks and other magnetically confined plasmas

  20. Long-term rainfall averages for Ireland, 1981-2010

    OpenAIRE

    Walsh, Séamus

    2016-01-01

    Long-Term Averages (LTA) or Climate Normals are 30-year averages of weather elements. They are used to describe the current climate and to place current weather in context. Met Éireann has produced a suite of LTAs covering the period 1981-2010, which have replaced the 1961-1990 LTAs for day-to-day comparison purposes. LTAs of monthly rainfall and days of rain greater than or equal to 0.2mm, 1mm and 10mm have been compiled for over 750 locations. Using these data and data for Nor...

  1. Positivity of the spherically averaged atomic one-electron density

    DEFF Research Database (Denmark)

    Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas; Østergaard Sørensen, Thomas

    2008-01-01

    We investigate the positivity of the spherically averaged atomic one-electron density . For a which stems from a physical ground state we prove that for r ≥  0. This article may be reproduced in its entirety for non-commercial purposes.

  2. Average Lorentz self-force from electric field lines

    International Nuclear Information System (INIS)

    We generalize the derivation of electromagnetic fields of a charged particle moving with a constant acceleration Singal (2011 Am. J. Phys. 79 1036) to a variable acceleration (piecewise constant) over a small finite time interval using Coulomb's law, relativistic transformations of electromagnetic fields and Thomson's construction Thomson (1904 Electricity and Matter (New York: Charles Scribners) ch 3). We derive the average Lorentz self-force for a charged particle in arbitrary non-relativistic motion via averaging the fields at retarded time. (paper)

  3. HAT AVERAGE MULTIRESOLUTION WITH ERROR CONTROL IN 2-D

    Institute of Scientific and Technical Information of China (English)

    Sergio Amat

    2004-01-01

    Multiresolution representations of data are a powerful tool in data compression. For a proper adaptation to singularities, it is crucial to develop nonlinear methods which are not based on tensor products. The hat average framework permits the development of adapted schemes for all types of singularities. In contrast with the wavelet framework, these representations cannot be considered as a change of basis, and the stability theory requires different considerations. In this paper, non-separable two-dimensional hat average multiresolution processing algorithms that ensure stability are introduced. Explicit error bounds are presented.

  4. Average patterns and coherent phenomena in wide aperture lasers

    Science.gov (United States)

    D'Alessandro, G.; Papoff, F.; Louvergneaux, E.; Glorieux, P.

    2004-06-01

    Using a realistic model of wide-aperture, weakly astigmatic lasers, we develop a framework to analyze experimental average intensity patterns. We use the model to explain the appearance of patterns in terms of the modes of the cavity and to show that the breaking of the symmetry of the average intensity patterns is caused by overlaps in the frequency spectra of nonvanishing modes with different parity. This result can be used even in systems with very fast dynamics to detect experimentally overlaps of frequency spectra of modes.

  5. Light shift averaging in paraffin-coated alkali vapor cells

    CERN Document Server

    Zhivun, Elena; Sudyka, Julia; Pustelny, Szymon; Patton, Brian; Budker, Dmitry

    2015-01-01

    Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin coherence time in paraffin-coated cells leads to spatial averaging of the light shifts over the entire cell volume. This renders the averaged light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. These results and the underlying mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times.

  6. Quantum state discrimination using the minimum average number of copies

    CERN Document Server

    Slussarenko, Sergei; Li, Jun-Gang; Campbell, Nicholas; Wiseman, Howard M; Pryde, Geoff J

    2016-01-01

    In the task of discriminating between nonorthogonal quantum states from multiple copies, the key parameters are the error probability and the resources (number of copies) used. Previous studies have considered the task of minimizing the average error probability for fixed resources. Here we consider minimizing the average resources for a fixed admissible error probability. We derive a detection scheme optimized for the latter task, and experimentally test it, along with schemes previously considered for the former task. We show that, for our new task, our new scheme outperforms all previously considered schemes.

  7. THEORETICAL CALCULATION OF THE RELATIVISTIC SUBCONFIGURATION-AVERAGED TRANSITION ENERGIES

    Institute of Scientific and Technical Information of China (English)

    张继彦; 杨向东; 杨国洪; 张保汉; 雷安乐; 刘宏杰; 李军

    2001-01-01

    A method for calculating the average energies of relativistic subconfigurations in highly ionized heavy atoms has been developed in the framework of the multiconfigurational Dirac-Fock theory. The method is then used to calculate the average transition energies of the spin-orbit-split 3d-4p transition of Co-like tungsten, the 3d-5f transition of Cu-like tantalum, and the 3d-5f transitions of Cu-like and Zn-like gold samples. The calculated results are in good agreement with those calculated with the relativistic parametric potential method and also with the experimental results.

  8. Average profiles, from tries to suffix-trees

    OpenAIRE

    Nicodème, Pierre

    2005-01-01

    We build upon previous work of [Fayj04] and [ParSzp05] to study asymptotically the average internal profile of tries and of suffix-trees. The binary keys and the strings are built from a Bernoulli source (p,q). We consider the average number p_k,\\textitP(ν ) of internal nodes at depth k of a trie whose number of input keys follows a Poisson law of parameter ν . The Mellin transform of the corresponding bivariate generating function has a major singularity at the origin, which implies a phase ...

  9. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists of local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  10. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available Before 1998, the definition of wages in Poland did not include the value of social security contributions. The changed definition creates a higher level of reported wages, but was expected not to influence take-home pay. Nevertheless, after a short period, the trend of average wages has returned to its previous line. This effect is explained in terms of money illusion.

  11. Group Averaging and Refined Algebraic Quantization: Where are we now?

    OpenAIRE

    Marolf, D.

    2000-01-01

    Refined Algebraic Quantization and Group Averaging are powerful methods for quantizing constrained systems. They give constructive algorithms for generating observables and the physical inner product. This work outlines the current status of these ideas with an eye toward quantum gravity. The main goal is to provide a description of outstanding problems and possible research topics in the field.

  12. Average charge of superheavy recoil ion in helium gas

    Energy Technology Data Exchange (ETDEWEB)

    Kaji, D.; Morita, K.; Morimoto, K.; Haba, H. [RIKEN, Wako, Saitama (Japan). Nishina Center for Accelerator Based Science; Kudo, H. [Niigata Univ. (Japan). Dept. of Chemistry

    2011-07-01

    The average equilibrium charges q{sub ave} of heavy recoil ions moving in helium gas were measured by a gasfilled recoil ion separator (GARIS). A new empirical formula to calculate q{sub ave} for superheavy recoil ions with a low velocity was derived. This formula was applicable to the search for a superheavy nuclide of {sup 266}Bh. (orig.)

  13. 75 FR 78157 - Farmer and Fisherman Income Averaging

    Science.gov (United States)

    2010-12-15

    ... July 22, 2008, temporary regulations (TD 9417) were published in the Federal Register (73 FR 42522... Register (73 FR 42538) on July 22, 2008. No comments in response to the notice of proposed rulemaking or... Internal Revenue Service 26 CFR Part 1 RIN 1545-BE23 Farmer and Fisherman Income Averaging AGENCY:...

  14. Average charge of superheavy recoil ion in helium gas

    International Nuclear Information System (INIS)

    The average equilibrium charges qave of heavy recoil ions moving in helium gas were measured by a gasfilled recoil ion separator (GARIS). A new empirical formula to calculate qave for superheavy recoil ions with a low velocity was derived. This formula was applicable to the search for a superheavy nuclide of 266Bh. (orig.)

  15. Domain-averaged Fermi-hole Analysis for Solids

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Roč. 137, č. 21 (2012), s. 214109. ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012

  16. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides, requires an accurate average energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs

  17. Average Error Bounds of Trigonometric Approximation on Periodic Wiener Spaces

    Institute of Scientific and Technical Information of China (English)

    Cheng Yong WANG; Rui Min WANG

    2013-01-01

    In this paper, we study the approximation of the identity operator and the convolution integral operator Bm by Fourier partial sum operators, Fejér operators, Vallée-Poussin operators, Cesàro operators and Abel mean operators, respectively, on the periodic Wiener space (C1(R), W°), and obtain the average error estimations.

  18. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-01-01

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two. PMID:26041580

  19. The effect of cosmic inhomogeneities on the average cosmological dynamics

    CERN Document Server

    Singh, T P

    2011-01-01

    It is generally assumed that on sufficiently large scales the Universe is well-described as a homogeneous, isotropic FRW cosmology with a dark energy. Does the formation of nonlinear cosmic inhomogeneities produce a significant effect on the average large-scale FLRW dynamics? As an answer, we suggest that if the length scale at which homogeneity sets in is much smaller than the Hubble length scale, the back-reaction due to averaging over inhomogeneities is negligible. This result is supported by more than one approach to study of averaging in cosmology. Even if no single approach is sufficiently rigorous and compelling, they are all in agreement that the effect of averaging in the real Universe is small. On the other hand, it is perhaps fair to say that there is no definitive observational evidence yet that there indeed is a homogeneity scale which is much smaller than the Hubble scale, or for that matter, if today's Universe is indeed homogeneous on large scales. If the Copernican principle can be observatio...

  20. Designing a Response Scale to Improve Average Group Response Reliability

    Science.gov (United States)

    Davies, Randall

    2008-01-01

    Creating surveys is a common task in evaluation research; however, designing a survey instrument to gather average group response data that can be interpreted in a meaningful way over time can be challenging. When surveying groups of people for the purpose of longitudinal analysis, the reliability of the result is often determined by the response…

  1. Environmental stresses can alleviate the average deleterious effect of mutations

    Directory of Open Access Journals (Sweden)

    Leibler Stanislas

    2003-05-01

    Full Text Available Abstract. Background: Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results: We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions: Our results show a qualitative difference between various environmental stresses, ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.

  2. 7 CFR 51.2548 - Average moisture content determination.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548 Section 51.2548 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER...

  3. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE...

  4. Evaluating methods for constructing average high-density electrode positions.

    Science.gov (United States)

    Richards, John E; Boswell, Corey; Stevens, Michael; Vendemia, Jennifer M C

    2015-01-01

    Accurate analysis of scalp-recorded electrical activity requires the identification of electrode locations in 3D space. For example, source analysis of EEG/ERP (electroencephalogram, EEG; event-related-potentials, ERP) with realistic head models requires the identification of electrode locations on the head model derived from structural MRI recordings. Electrode systems must cover the entire scalp in sufficient density to discriminate EEG activity on the scalp and to complete accurate source analysis. The current study compares techniques for averaging electrode locations from 86 participants with the 128 channel "Geodesic Sensor Net" (GSN; EGI, Inc.), 38 participants with the 128 channel "Hydrocel Geodesic Sensor Net" (HGSN; EGI, Inc.), and 174 participants with the 81 channels in the 10-10 configurations. A point-set registration between the participants and an average MRI template resulted in an average configuration showing small standard errors, which could be transformed back accurately into the participants' original electrode space. Average electrode locations are available for the GSN (86 participants), Hydrocel-GSN (38 participants), and 10-10 and 10-5 systems (174 participants). PMID:25234713

  5. Light-cone averages in a swiss-cheese universe

    CERN Document Server

    Marra, Valerio; Matarrese, Sabino

    2007-01-01

    We analyze a toy swiss-cheese cosmological model to study the averaging problem. In our model, the cheese is the EdS model and the holes are constructed from an LTB solution. We study the propagation of photons in the swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities. This is because of spherical symmetry. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the concordance model. Although the sole source in the swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we ...

  6. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
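
    The core fact behind signal averaging can be demonstrated in a few lines: averaging N repeated noisy trials leaves the repeatable signal intact while the rms of uncorrelated Gaussian noise falls roughly as 1/sqrt(N). A minimal illustration with synthetic data (the sine signal and noise level are arbitrary assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 5 * t)      # the repeatable underlying signal
noise_sigma = 1.0                       # rms of the added Gaussian noise

def noisy_trial():
    """One simulated measurement: signal plus uncorrelated Gaussian noise."""
    return signal + rng.normal(0.0, noise_sigma, size=t.size)

residual_rms = {}
for n_trials in (1, 16, 256):
    avg = np.mean([noisy_trial() for _ in range(n_trials)], axis=0)
    # rms of the noise remaining after averaging; expect ~ noise_sigma/sqrt(N)
    residual_rms[n_trials] = np.sqrt(np.mean((avg - signal) ** 2))
    print(n_trials, round(residual_rms[n_trials], 3))
```

    Quadrupling the number of trials halves the residual noise, which is why averaging is effective only when many repetitions are affordable.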

  7. Fuel optimum low-thrust elliptic transfer using numerical averaging

    Science.gov (United States)

    Tarzi, Zahi; Speyer, Jason; Wirz, Richard

    2013-05-01

    Low-thrust electric propulsion is increasingly being used for spacecraft missions primarily due to its high propellant efficiency. As a result, a simple and fast method for low-thrust trajectory optimization is of great value for preliminary mission planning. However, few low-thrust trajectory tools are appropriate for preliminary mission design studies. The method presented in this paper provides quick and accurate solutions for a wide range of transfers by using numerical orbital averaging to improve solution convergence and include orbital perturbations. Thus, preliminary trajectories can be obtained for transfers which involve many revolutions about the primary body. This method considers minimum-fuel transfers using first-order averaging to obtain the fuel-optimum rates of change of the equinoctial orbital elements in terms of each other and the Lagrange multipliers. Constraints on thrust and power, as well as minimum periapsis, are implemented and the equations are averaged numerically using Gaussian quadrature. The use of numerical averaging allows for more complex orbital perturbations to be added in the future without great difficulty. The effects of zonal gravity harmonics, solar radiation pressure, and thrust limitations due to shadowing are included in this study. The solution to a transfer which minimizes the square of the thrust magnitude is used as a preliminary guess for the minimum-fuel problem, thus allowing for faster convergence to a wider range of problems. Results from this model are shown to provide a reduction in propellant mass required over previous minimum-fuel solutions.
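
    The averaging step described above (replacing an instantaneous rate of change by its mean over one revolution, evaluated with Gaussian quadrature) can be sketched with a toy rate function; this is a hedged illustration of the quadrature idea, not the paper's equinoctial-element formulation:

```python
import numpy as np

def orbit_average(rate, n_nodes=32):
    """Average a 2*pi-periodic instantaneous rate over one revolution
    using Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)   # nodes/weights on [-1, 1]
    theta = np.pi * (x + 1.0)                         # map nodes to [0, 2*pi]
    # integral over [0, 2*pi] is pi * sum(w * f); divide by period 2*pi
    return (np.pi * (w @ rate(theta))) / (2.0 * np.pi)

# Toy instantaneous rate: a secular term plus purely periodic variations
# (the periodic part averages to zero over a full revolution).
rate = lambda th: 3.0 + 2.0 * np.cos(th) + 0.5 * np.sin(2 * th)
print(orbit_average(rate))   # approximately 3.0, the secular part
```

    Only the secular component survives the averaging, which is what lets such methods propagate many-revolution transfers cheaply.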

  8. The Averaged Fokker-Planck equation in tokamak plasma

    International Nuclear Information System (INIS)

    In this paper, the numerical code developed to solve the averaged Fokker-Planck equation, and its applications to studying the time evolution of the electron distribution function in a tokamak device of medium size and performance, are discussed. The effects of electron collisions and a DC electric field are analysed in detail.

  9. Crystallographic extraction and averaging of data from small image areas

    NARCIS (Netherlands)

    Perkins, GA; Downing, KH; Glaeser, RM

    1995-01-01

    The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that Fouri

  10. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
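
    As a toy illustration of the Box-Jenkins identification and estimation stages (synthetic data, not the study's GPA series), the sketch below simulates an AR(1) process and recovers its coefficient: the lag-1 sample autocorrelation serves as the identification hint, and ordinary least squares stands in for full ARIMA estimation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate an AR(1) series: y[t] = phi * y[t-1] + white noise.
phi_true = 0.6
n = 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

# Identification: the lag-1 sample autocorrelation approximates phi for AR(1).
r1 = np.corrcoef(y[:-1], y[1:])[0, 1]

# Estimation: least squares of y[t] on y[t-1] (stand-in for ARIMA fitting).
phi_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
print(round(r1, 2), round(phi_hat, 2))   # both near 0.6
```

    The diagnosis stage would then check that the fit residuals are uncorrelated; a full treatment would use a dedicated ARIMA library rather than this hand-rolled estimate.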

  11. Determination of average activating thermal neutron flux in bulk samples

    International Nuclear Information System (INIS)

    A previous method used for the determination of the average neutron flux within bulky samples has been applied for the measurements of hydrogen contents of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentrations. Results obtained by the activation and the thermal neutron reflection methods are compared

  12. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    International Nuclear Information System (INIS)

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū_P, the average, Ū, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average-peak k_PPV,ŪP and average k_PPV,Ū conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, such as 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.

  13. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  14. Construction of average adult Japanese voxel phantoms for dose assessment

    International Nuclear Information System (INIS)

    The International Commission on Radiological Protection (ICRP) adopted the adult reference voxel phantoms based on the physiological and anatomical reference data of Caucasian on October, 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In future, the dose coefficients for internal dose and dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used for the radiation protection fields. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasian. In addition, there are some cases that the anatomical characteristics such as body sizes, organ masses and postures of subjects influence the organ doses in dose assessment for medical treatments and radiation accident. Therefore, it was needed to use human phantoms with average anatomical characteristics of Japanese. The authors constructed the averaged adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms. It has been modified in the following three aspects: (1) The heights and weights were agreed with the Japanese averages; (2) The masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) The organs and tissues, which were newly added for evaluation of the effective dose in ICRP Publication 103, were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the averaged adult Japanese male and female voxel phantoms developed as reference phantoms of adult Japanese. (author)

  15. High-average-power diode-pumped Yb: YAG lasers

    International Nuclear Information System (INIS)

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M(sup 2) = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M(sup 2) value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M(sup 2) < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  16. High-average-power diode-pumped Yb: YAG lasers

    Energy Technology Data Exchange (ETDEWEB)

    Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B

    1999-10-01

    A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high average power Yb:YAG lasers that utilize a rod configured gain element. Previously, this rod configured approach has achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High beam quality (M{sup 2} = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual rod configuration consisting of two, 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M{sup 2} value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with an M{sup 2} < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 nsec pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) Hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling near-diffraction-limited modes. (2) Compound laser rods with flanged-nonabsorbing-endcaps fabricated by diffusion bonding. (3) Techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished barrel rods.

  17. The stability of a zonally averaged thermohaline circulation model

    CERN Document Server

    Schmidt, G A

    1995-01-01

    A combination of analytical and numerical techniques is used to efficiently determine the qualitative and quantitative behaviour of a one-basin zonally averaged thermohaline circulation ocean model. In contrast to earlier studies, which use time stepping to find the steady solutions, the steady state equations are first solved directly to obtain the multiple equilibria under identical mixed boundary conditions. This approach is based on the differentiability of the governing equations and especially of the convection scheme. A linear stability analysis is then performed, in which the normal modes and corresponding eigenvalues are found for the various equilibrium states. Resonant periodic solutions superimposed on these states are predicted for various types of forcing. The results are used to gain insight into the solutions obtained by Mysak, Stocker and Huang in a previous numerical study in which the eddy diffusivities were varied in a randomly forced one-basin zonally averaged model. Resonant stable oscillat...

  18. Detrending moving average algorithm: Frequency response and scaling performances

    Science.gov (United States)

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed.
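    The quantity the DMA family is built on can be illustrated with a minimal sketch. The snippet below is a simplified zeroth-order centered DMA (our own illustration, not the authors' higher-order algorithm): it measures the RMS deviation of the integrated series from its centered moving average, a fluctuation function that grows roughly as n^0.5 for uncorrelated noise.

    ```python
    import numpy as np

    def dma_fluctuation(x, n):
        """RMS deviation of the integrated series from its centered moving
        average of window n -- the core quantity of zeroth-order centered DMA.
        Illustrative sketch only, not the paper's higher-order variants."""
        y = np.cumsum(x - x.mean())                   # profile (integrated series)
        trend = np.convolve(y, np.ones(n) / n, mode="valid")  # centered moving average
        half = (n - 1) // 2
        resid = y[half:half + trend.size] - trend     # detrended profile
        return np.sqrt(np.mean(resid ** 2))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(20000)                    # white noise: scaling exponent ~ 0.5
    f8, f64 = dma_fluctuation(x, 9), dma_fluctuation(x, 65)
    ```

    For this white-noise input the ratio f64/f8 should sit near (65/9)^0.5 ≈ 2.7, i.e. a scaling exponent close to 0.5.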

  19. SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN

    Directory of Open Access Journals (Sweden)

    VIGH MELINDA

    2015-03-01

    Full Text Available The Râul Negru hydrographic basin is a well individualised physical-geographical unit inside the Braşov Depression. The flow is controlled by six hydrometric stations placed on the main river and on two important tributaries. The database for the seasonal flow analysis contains the discharges from 1950-2012. The results of the data analysis show that there are significant space-time differences between the multiannual seasonal averages. Some interesting conclusions can be obtained by comparing abundant and scarce periods. The flow analysis was made using seasonal charts Q = f(T). The similarities come from the basin’s relative homogeneity, and the differences from the flow’s evolution and trend. Flow variation is analysed using the variation coefficient. In some cases, significant differences in Cv values appear. Also, trends in Cv values are analysed according to the basins’ average altitude.

  20. Disk-averaged Spectra & light-curves of Earth

    CERN Document Server

    Tinetti, G; Crisp, D; Fong, W; Kiang, N; Fishbein, E; Velusamy, T; Bosc, E; Turnbull, M

    2005-01-01

    We are using computer models to explore the observational sensitivity to changes in atmospheric and surface properties, and the detectability of biosignatures, in the globally averaged spectra and light-curves of the Earth. Using AIRS (Atmospheric Infrared Sounder) data, as input for atmospheric and surface properties, we have generated spatially resolved high-resolution synthetic spectra using the SMART radiative transfer model, for a variety of conditions, from the UV to the far-IR (beyond the range of current Earth-based satellite data). We have then averaged over the visible disk for a number of different viewing geometries to quantify the sensitivity to surface types and atmospheric features as a function of viewing geometry, and spatial and spectral resolution. These results have been processed with an instrument simulator to improve our understanding of the detectable characteristics of Earth-like planets as viewed by the first generation extrasolar terrestrial planet detection and characterization mis...

  1. Refined similarity hypothesis using three-dimensional local averages

    Science.gov (United States)

    Iyer, Kartik P.; Sreenivasan, Katepalli R.; Yeung, P. K.

    2015-12-01

    The refined similarity hypotheses of Kolmogorov, regarded as an important ingredient of intermittent turbulence, have been tested in the past using one-dimensional data and plausible surrogates of energy dissipation. We employ data from direct numerical simulations, at the microscale Reynolds number Rλ ≈ 650, on a periodic box of 4096³ grid points to test the hypotheses using three-dimensional averages. In particular, we study the small-scale properties of the stochastic variable V = Δu(r)/(rɛr)^(1/3), where Δu(r) is the longitudinal velocity increment and ɛr is the dissipation rate averaged over a three-dimensional volume of linear size r. We show that V is universal in the inertial subrange. In the dissipation range, the statistics of V are shown to depend solely on a local Reynolds number.

  2. Nonlocal imaging by conditional averaging of random reference measurements

    CERN Document Server

    Luo, Kai-Hong; Zheng, Wei-Mou; Wu, Ling-An; 10.1088/0256-307X/29/7/074216

    2013-01-01

    We report the nonlocal imaging of an object by conditional averaging of the random exposure frames of a reference detector, which only sees the freely propagating field from a thermal light source. A bucket detector, synchronized with the reference detector, records the intensity fluctuations of an identical beam passing through the object mask. These fluctuations are sorted according to their values relative to the mean, then the reference data in the corresponding time-bins for a given fluctuation range are averaged, to produce either positive or negative images. Since no correlation calculations are involved, this correspondence imaging technique challenges our former interpretations of "ghost" imaging. Compared with conventional correlation imaging or compressed sensing schemes, both the number of exposures and computation time are greatly reduced, while the visibility is much improved. A simple statistical model is presented to explain the phenomenon.
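    The sorting-and-averaging procedure described above can be mimicked numerically. The sketch below is a synthetic 1-D toy of our own (uniform random frames standing in for pseudothermal speckle; all sizes and names are illustrative): conditionally averaged reference frames reproduce the object mask with no correlation computation at all.

    ```python
    import numpy as np

    # Synthetic correspondence-imaging toy: reference frames are averaged
    # conditionally on the sign of the bucket-detector fluctuation.
    rng = np.random.default_rng(1)
    obj = np.zeros(32)
    obj[10:22] = 1.0                          # 1-D transmissive object mask
    frames = rng.random((5000, 32))           # speckle patterns at the reference arm
    bucket = frames @ obj                     # total light passing the object mask
    dB = bucket - bucket.mean()               # fluctuation about the mean
    pos = frames[dB > 0].mean(axis=0)         # conditional average: positive image
    neg = frames[dB < 0].mean(axis=0)         # conditional average: negative image
    ```

    The difference image pos − neg is bright exactly where the mask transmits, even though the reference arm never saw the object.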

  3. Gaze-direction-based MEG averaging during audiovisual speech perception

    Directory of Open Access Journals (Sweden)

    Satu Lamminmäki

    2010-03-01

    Full Text Available To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG) signals and subject’s gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent) and /aka/ (incongruent) in synchrony, repeated once every 3 s. Subjects (N = 10) were free to decide which face they viewed, and responses were averaged to two categories according to the gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m’) was a fifth smaller to incongruent than congruent stimuli. The results demonstrate the feasibility of realistic viewing conditions with gaze-based averaging of MEG signals.

  4. Detrending moving average algorithm: Frequency response and scaling performances.

    Science.gov (United States)

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed. PMID:27415389

  5. Correct averaging in transmission radiography: Analysis of the inverse problem

    Science.gov (United States)

    Wagner, Michael; Hampel, Uwe; Bieberle, Martina

    2016-05-01

    Transmission radiometry is frequently used in industrial measurement processes as a means to assess the thickness or composition of a material. A common problem encountered in such applications is the so-called dynamic bias error, which results from averaging beam intensities over time while the material distribution changes. We recently reported on a method to overcome the associated measurement error by solving an inverse problem, which in principle restores the exact average attenuation by considering the Poisson statistics of the underlying particle or photon emission process. In this paper we present a detailed analysis of the inverse problem and its optimal regularized numerical solution. As a result we derive an optimal parameter configuration for the inverse problem.
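    The dynamic bias error has a simple numerical illustration (intensity values below are made up, assuming Beer-Lambert attenuation): averaging intensities over a fluctuating material and only then taking the logarithm systematically underestimates the true time-averaged attenuation, by Jensen's inequality.

    ```python
    import numpy as np

    # Three frames of transmitted intensity fraction while the material fluctuates.
    # Values are illustrative, not from the paper.
    I = np.array([0.9, 0.5, 0.1])
    naive = -np.log(I.mean())     # attenuation inferred from the time-averaged intensity
    exact = (-np.log(I)).mean()   # true average attenuation, computed frame by frame
    bias = exact - naive          # the "dynamic bias error" (always >= 0)
    ```

    Here naive ≈ 0.693 while exact ≈ 1.034, so the naive estimate misses about a third of the attenuation; the inverse problem discussed in the paper aims to restore the exact value from the averaged measurement.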

  6. Detrending Moving Average Algorithm: Frequency Response and Scaling Performances

    CERN Document Server

    Carbone, Anna

    2016-01-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) either over time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent and finite scale range behavior will be discussed.

  7. Refined similarity hypothesis using 3D local averages

    CERN Document Server

    Iyer, Kartik P; Yeung, P K

    2015-01-01

    The refined similarity hypotheses of Kolmogorov, regarded as an important ingredient of intermittent turbulence, have been tested in the past using one-dimensional data and plausible surrogates of energy dissipation. We employ data from direct numerical simulations, at the microscale Reynolds number $R_\lambda \sim 650$, on a periodic box of $4096^3$ grid points to test the hypotheses using 3D averages. In particular, we study the small-scale properties of the stochastic variable $V = \Delta u(r)/(r \epsilon_r)^{1/3}$, where $\Delta u(r)$ is the longitudinal velocity increment and $\epsilon_r$ is the dissipation rate averaged over a three-dimensional volume of linear size $r$. We show that $V$ is universal in the inertial subrange. In the dissipation range, the statistics of $V$ are shown to depend solely on a local Reynolds number.

  8. Database of average-power damage thresholds at 1064 nm

    International Nuclear Information System (INIS)

    We have completed a database of average-power, laser-induced, damage thresholds at 1064 nm on a variety of materials. Measurements were made with a newly constructed laser to provide design input for moderate and high average-power laser projects. The measurements were conducted with 16-ns pulses at pulse-repetition frequencies ranging from 6 to 120 Hz. Samples were typically irradiated for times ranging from a fraction of a second up to 5 minutes (36,000 shots). We tested seven categories of samples which included antireflective coatings, high reflectors, polarizers, single and multiple layers of the same material, bare and overcoated metal surfaces, bare polished surfaces, and bulk materials. The measured damage thresholds ranged from 2 J/cm2 for some metals to > 46 J/cm2 for a bare polished glass substrate. 4 refs., 7 figs., 1 tab

  9. Evolutionary Prisoner's Dilemma Game Based on Pursuing Higher Average Payoff

    Institute of Scientific and Technical Information of China (English)

    LI Yu-Jian; WANG Bing-Hong; YANG Han-Xin; LING Xiang; CHEN Xiao-Jie; JIANG Rui

    2009-01-01

    We investigate the prisoner's dilemma game based on a new rule: players will change their current strategies to the opposite strategies with some probability if their neighbours' average payoffs are higher than theirs. Compared with the cases on regular lattices (RL) and the Newman-Watts small-world network (NW), cooperation can be best enhanced on the scale-free Barabasi-Albert network (BA). It is found that cooperators are dispersive on the RL network, which is different from previously reported results that cooperators will form large clusters to resist the invasion of defectors. Cooperative behaviours on the BA network are discussed in detail. It is found that large-degree individuals have a lower cooperation level and gain higher average payoffs than small-degree individuals. In addition, we find that small-degree individuals change strategies more frequently than large-degree individuals do.
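    The update rule stated above is simple enough to sketch directly. The helper below is our own minimal reading of it (payoff values, topology, and the 0/1 strategy encoding are illustrative assumptions, not from the paper): a player flips to the opposite strategy with probability p when its neighbours' average payoff exceeds its own.

    ```python
    import random

    def update(strategy, own_payoff, neighbour_payoffs, p, rng=random):
        """Flip to the opposite strategy with probability p when the neighbours'
        average payoff exceeds the player's own. Sketch of the stated rule only;
        1 = cooperate, 0 = defect is just a convention here."""
        if sum(neighbour_payoffs) / len(neighbour_payoffs) > own_payoff:
            if rng.random() < p:
                return 1 - strategy
        return strategy
    ```

    With p = 1 the flip is deterministic whenever the neighbourhood outperforms the player, which makes the rule easy to check.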

  10. The B-dot Earth Average Magnetic Field

    Science.gov (United States)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved for with complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and is not dependent on the Earth magnetic model; it is, however, dependent on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. The solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
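    For context, the classical b-dot detumbling law that this technique builds on commands a magnetic dipole opposing the measured rate of change of the body-frame field, m = −k dB/dt. A minimal finite-difference sketch (the gain k, units, and field values are illustrative placeholders, not the paper's flight values):

    ```python
    import numpy as np

    def bdot_dipole(B_now, B_prev, dt, k=1.0e4):
        """Classic b-dot law: commanded dipole m = -k * dB/dt, with the
        derivative approximated by a finite difference of magnetometer
        readings. Gain and units are placeholders for illustration."""
        return -k * (B_now - B_prev) / dt
    ```

    A field component that is decreasing produces a positive commanded dipole along that axis, which is what damps the tumble rate.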

  11. Nonlocal Imaging by Conditional Averaging of Random Reference Measurements

    International Nuclear Information System (INIS)

    We report the nonlocal imaging of an object by conditional averaging of the random exposure frames of a reference detector, which only sees the freely propagating field from a thermal light source. A bucket detector, synchronized with the reference detector, records the intensity fluctuations of an identical beam passing through the object mask. These fluctuations are sorted according to their values relative to the mean, then the reference data in the corresponding time-bins for a given fluctuation range are averaged, to produce either positive or negative images. Since no correlation calculations are involved, this correspondence imaging technique challenges our former interpretations of 'ghost' imaging. Compared with conventional correlation imaging or compressed sensing schemes, both the number of exposures and computation time are greatly reduced, while the visibility is much improved. A simple statistical model is presented to explain the phenomenon. (express letters)

  12. FUNDAMENTALS OF TRANSMISSION FLUCTUATION SPECTROMETRY WITH VARIABLE SPATIAL AVERAGING

    Institute of Scientific and Technical Information of China (English)

    Jianqi Shen; Ulrich Riebel; Marcus Breitenstein; Udo Kr(a)uter

    2003-01-01

    The transmission signal of radiation through a suspension of particles, measured with high spatial and temporal resolution, shows significant fluctuations, which are related to the physical properties of the particles and to the process of spatial and temporal averaging. Exploiting this connection, it is possible to calculate the particle size distribution (PSD) and the particle concentration. This paper provides an approach to transmission fluctuation spectrometry (TFS) with variable spatial averaging. The transmission fluctuations are expressed in terms of the expectancy of the transmission square (ETS) and are obtained as a spectrum, which is a function of the variable beam diameter. The reversal point and the depth of the spectrum contain the information on particle size and particle concentration, respectively.

  13. High average power supercontinuum generation in a fluoroindate fiber

    Science.gov (United States)

    Swiderski, J.; Théberge, F.; Michalska, M.; Mathieu, P.; Vincent, D.

    2014-01-01

    We report the first demonstration of Watt-level supercontinuum (SC) generation in a step-index fluoroindate (InF3) fiber pumped by a 1.55 μm fiber master-oscillator power amplifier (MOPA) system. The SC is generated in two steps: first, ˜1 ns amplified laser diode pulses are broken up into soliton-like sub-pulses, leading to initial spectrum extension, and then launched into a fluoride fiber to obtain further spectral broadening. The pump MOPA system can operate at a changeable repetition frequency, delivering up to 19.2 W of average power at 2 MHz. When the 8-m long InF3 fiber was pumped with 7.54 W at 420 kHz, an output average SC power as high as 2.09 W, with a slope efficiency of 27.8%, was recorded. The achieved SC spectrum spanned from 1 to 3.05 μm.

  14. Fastest Distributed Consensus Averaging Problem on Chain of Rhombus Networks

    CERN Document Server

    Jafarizadeh, Saber

    2010-01-01

    Distributed consensus has emerged as one of the most important and primary problems in the context of distributed computation, and it has received renewed interest in the field of sensor networks (due to recent advances in wireless communications), where solving the fastest distributed consensus averaging problem over networks with different topologies is one of the primary challenges. In this work an analytical solution for the problem of the fastest distributed consensus averaging algorithm over a chain of rhombus networks is provided, where the solution procedure consists of stratification of the associated connectivity graph of the network and semidefinite programming, in particular solving the slackness conditions, where the optimal weights are obtained by inductive comparison of the characteristic polynomials initiated by the slackness conditions. Also the characteristic polynomial, together with its roots corresponding to the eigenvalues of the weight matrix, including the SLEM of the network, is determined inductively. Moreover t...
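    The averaging iteration underlying all such schemes is x ← Wx with a symmetric, doubly stochastic weight matrix W; the convergence speed is governed by the second-largest eigenvalue modulus (the SLEM mentioned above). A toy example on a 4-node path graph (our own illustration; this is not the paper's rhombus-chain topology, and the weights are arbitrary valid choices, not the optimal ones):

    ```python
    import numpy as np

    # Symmetric, doubly stochastic weights on a 4-node path graph.
    W = np.array([[0.50, 0.50, 0.00, 0.00],
                  [0.50, 0.25, 0.25, 0.00],
                  [0.00, 0.25, 0.25, 0.50],
                  [0.00, 0.00, 0.50, 0.50]])
    x = np.array([4.0, 0.0, 8.0, 0.0])   # initial node values
    avg = x.mean()                        # the consensus target: 3.0
    for _ in range(300):                  # x <- W x drives every node to avg
        x = W @ x
    ```

    After enough iterations every node holds the global average; the SLEM of this particular W is about 0.81, so the disagreement shrinks by roughly that factor per step.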

  15. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given. PMID:23455291

  16. Partial Averaged Navier-Stokes approach for cavitating flow

    Science.gov (United States)

    Zhang, L.; Zhang, Y. N.

    2015-01-01

    Partial Averaged Navier-Stokes (PANS) is a numerical approach developed for studying practical engineering problems (e.g. cavitating flow inside hydroturbines) with a reasonable cost and accuracy. One of the advantages of PANS is that it is suitable for any filter width, leading to a bridging method from traditional Reynolds Averaged Navier-Stokes (RANS) to direct numerical simulation by choosing appropriate parameters. Compared with RANS, the PANS model inherits much of the physical nature of the parent RANS but further resolves more scales of motion in great detail, making PANS superior to RANS. As an important step in the PANS approach, one needs to identify appropriate physical filter-width control parameters, e.g. the ratios of unresolved-to-total kinetic energy and dissipation. In the present paper, recent studies of cavitating flow based on the PANS approach are introduced, with a focus on the influence of the filter-width control parameters on the simulation results.

  17. Resonance Averaged Photoionization Cross Sections for Astrophysical Models

    CERN Document Server

    Bautista, M A; Pradhan, A K

    1997-01-01

    We present ground state photoionization cross sections of atoms and ions averaged over resonance structures for photoionization modeling of astrophysical sources. The detailed cross sections calculated in the close-coupling approximation using the R-matrix method, with resonances delineated at thousands of energies, are taken from the Opacity Project database TOPbase and the Iron Project, including new data for the low ionization stages of iron Fe I--V. The resonance-averaged cross sections are obtained by convolving the detailed cross sections with a Gaussian distribution over the autoionizing resonances. This procedure is expected to minimize errors in the derived ionization rates that could result from small uncertainties in computed positions of resonances, while preserving the overall resonant contribution to the cross sections in the important near threshold regions. The detailed photoionization cross sections at low photon energies are complemented by new relativistic distorted-wave calculations for Z1...

  18. Sample size for estimating average productive traits of pigeon pea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2016-04-01

    Full Text Available ABSTRACT: The objectives of this study were to determine the sample size, in terms of number of plants, needed to estimate the average values of productive traits of the pigeon pea, and to determine whether the sample size needed varies between traits and between crop years. Separate uniformity trials were conducted in 2011/2012 and 2012/2013. In each trial, 360 plants were demarcated, and the fresh and dry masses of roots, stems, and leaves, of shoots, and of the total plant were evaluated during blossoming for 10 productive traits. Descriptive statistics were calculated, normality and randomness were checked, and the sample size was calculated. There was variability in the sample size between the productive traits and between the crop years of the pigeon pea culture. To estimate the averages of the productive traits with a 20% maximum estimation error and a 95% confidence level, 70 plants are sufficient.
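    The reported figure is consistent with the standard sample-size formula for estimating a mean to within a relative error E at a given confidence level: n = (z·CV/E)², where CV is the coefficient of variation. The helper below is our own illustration, not code from the study; note that a CV near 0.85 would reproduce the order of 70 plants at E = 20% and z = 1.96.

    ```python
    import math

    def sample_size(cv, rel_error, z=1.96):
        """n = (z * CV / E)^2, rounded up. Illustrative helper: cv is the
        coefficient of variation, rel_error the maximum relative estimation
        error, z the two-sided normal quantile for the confidence level."""
        return math.ceil((z * cv / rel_error) ** 2)
    ```

    Halving the allowed error quadruples the required number of plants, which is why the tolerable error dominates the design of such trials.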

  19. Bayesian Model Averaging in the Instrumental Variable Regression Model

    OpenAIRE

    Gary Koop; Robert Leon Gonzalez; Rodney Strachan

    2011-01-01

    This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very fl...

  20. Gas cooled disk amplifier approach to solid state average power

    International Nuclear Information System (INIS)

    Disk amplifiers have been used on almost all solid state laser systems of high energy, and, in principle, one simply has to cool the device to operate it at average power. To achieve the desired waste heat removal, gas is flowed across the disk surface. The authors show the basic gas flow geometry. They computationally and experimentally characterize the flow and its optical implications over regimes which far exceed the envisioned operating requirements of a working amplifier

  1. High average power solid state laser power conditioning system

    International Nuclear Information System (INIS)

    The power conditioning system for the High Average Power Laser program at Lawrence Livermore National Laboratory (LLNL) is described. The system has been operational for two years. It is high voltage, high power, fault protected, and solid state. The power conditioning system drives flashlamps that pump solid state lasers. Flashlamps are driven by silicon control rectifier (SCR) switched, resonant charged, (LC) discharge pulse forming networks (PFNs). The system uses fiber optics for control and diagnostics. Energy and thermal diagnostics are monitored by computers

  2. Parametric and Nonparametric Frequentist Model Selection and Model Averaging

    OpenAIRE

    Aman Ullah; Huansha Wang

    2013-01-01

    This paper presents recent developments in model selection and model averaging for parametric and nonparametric models. While there is extensive literature on model selection under parametric settings, we present recently developed results in the context of nonparametric models. In applications, estimation and inference are often conducted under the selected model without considering the uncertainty from the selection process. This often leads to inefficiency in results and misleading confide...

  3. Domain Averaged Fermi hole Orbitals for Extended Systems

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.I.; Kohout, M.; Ponec, Robert

    Praha: Institute of Chemical Process Fundamentals of the ASCR. v. v. i, 2011, s. 7-8. [Prague Workshop on Theoretical Chemistry. Praha (CZ), 26.09.2011-29.09.2011] R&D Projects: GA ČR GA203/09/0118 Institutional research plan: CEZ:AV0Z40720504 Keywords : domain averaged fermi holes * bonding in solids Subject RIV: CF - Physical ; Theoretical Chemistry

  4. 3-Paths in Graphs with Bounded Average Degree

    Directory of Open Access Journals (Sweden)

    Jendrol′ Stanislav

    2016-05-01

    Full Text Available In this paper we study the existence of unavoidable paths on three vertices in sparse graphs. A path uvw on three vertices u, v, and w is of type (i, j, k) if the degree of u (respectively v, w) is at most i (respectively j, k). We prove that every graph with minimum degree at least 2 and average degree strictly less than m contains a path of one of the types

  5. Averaging for non-periodic fully nonlinear equations

    Directory of Open Access Journals (Sweden)

    Claudio Marchi

    2003-09-01

    Full Text Available This paper studies the averaging problem for some fully nonlinear equations of degenerate parabolic type with a Hamiltonian not necessarily periodic in the fast variable. Our aim is to point out a sufficient condition on the Hamiltonian to pass to the limit in the starting equation. Also, we investigate when this condition is not completely fulfilled and discuss some examples concerning deterministic and stochastic optimal control problems.

  6. Crime pays if you are just an average hacker

    OpenAIRE

    Shim, Woohyun; Allodi, Luca; Massacci, Fabio

    2013-01-01

    This study investigates the effects of incentive and deterrence strategies that might turn a security researcher into a malware writer, or vice versa. By using a simple game theoretic model, we illustrate how hackers maximize their expected utility. Furthermore, our simulation models show how hackers' malicious activities are affected by changes in strategies employed by defenders. Our results indicate that, despite the manipulation of strategies, average-skilled hackers have incentives to part...

  7. Average saturated fatty acids daily intake in Sarajevo University students

    Directory of Open Access Journals (Sweden)

    Amra Catovic

    2014-12-01

    Full Text Available Introduction: There are wide variations in diet patterns among population subgroups. Macronutrient content analyses have become necessary in dietary assessment. The purpose of this study is to analyze the dietary saturated fatty acids intake in students, detect differences between men and women, and compare it with nutritional status and nutrition recommendations. Methods: A cross-sectional survey of 60 graduate students was performed during spring 2013 at Sarajevo University. A food-frequency questionnaire was conducted over seven days. Body mass index was used to assess the students' nutritional status. Statistical analyses were performed using the Statistical Package for Social Sciences software (version 13.0). Results: Mean age of males was 26.00±2.72 years, and of females 27.01±3.93 years. The prevalence of overweight was more common among males compared to females (55.56% vs. 6.06%). Median of total fat average intake for men and women was 76.32 (70.15; 114.41) and 69.41 (63.23; 86.94) g/d, respectively. Median of saturated fatty acids average intake for men and women was 28.86 (22.41; 36.42) and 24.29 (20.53; 31.60) g/d, respectively. There was a significant difference in average intake of total fat between genders (Mann-Whitney U test: p=0.04). Macronutrient data were related to the requirement of a reference person. Total fat intake was beyond the recommended limits in 37.04% of males and 54.55% of females. Saturated fatty acids intake was beyond the upper limit in 55.56% of males and 51.52% of females. Conclusion: The diet pattern of the average student is not in accordance with the recommendations for the contribution of saturated fatty acids as a percentage of energy.

  8. Path Dependent Option Pricing: the path integral partial averaging method

    OpenAIRE

    Andrew Matacz

    2000-01-01

    In this paper I develop a new computational method for pricing path dependent options. Using the path integral representation of the option price, I show that in general it is possible to perform analytically a partial averaging over the underlying risk-neutral diffusion process. This result greatly eases the computational burden placed on the subsequent numerical evaluation. For short-medium term options it leads to a general approximation formula that only requires the evaluation of a one d...

  9. Dollar Cost Averaging - The Role of Cognitive Error

    OpenAIRE

    Hayley, S.

    2010-01-01

    Dollar Cost Averaging (DCA) has been shown to be mean-variance inefficient, yet it remains a very popular strategy. Recent research has attempted to explain its popularity by assuming more complex risk preferences. This paper rejects such explanations by demonstrating that DCA is sub-optimal regardless of preferences over terminal wealth. Instead, this paper identifies the cognitive error in the argument that is normally put forward in favor of the strategy. This gives us a simpler explanatio...
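    The arithmetic usually offered in DCA's favor, and which the paper identifies as the misleading comparison, is easy to reproduce: fixed-dollar purchases buy more shares when prices are low, so the average cost per share equals the harmonic mean of the prices, which never exceeds their arithmetic mean. The prices below are made up for illustration.

    ```python
    # Fixed-dollar purchases at fluctuating prices (illustrative values).
    prices = [10.0, 8.0, 12.5, 10.0]
    dollars_per_period = 100.0

    shares = sum(dollars_per_period / p for p in prices)       # more shares when cheap
    avg_cost = dollars_per_period * len(prices) / shares       # harmonic mean of prices
    arith_mean = sum(prices) / len(prices)                     # arithmetic mean of prices
    ```

    Here avg_cost ≈ 9.88 versus an arithmetic mean of 10.125; the paper's point is that beating the arithmetic mean of prices is the wrong benchmark for judging the strategy.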

  10. Finding large average submatrices in high dimensional data

    OpenAIRE

    Shabalin, Andrey A.; Weigman, Victor J.; Perou, Charles M.; Nobel, Andrew B

    2009-01-01

    The search for sample-variable associations is an important problem in the exploratory analysis of high dimensional data. Biclustering methods search for sample-variable associations in the form of distinguished submatrices of the data matrix. (The rows and columns of a submatrix need not be contiguous.) In this paper we propose and evaluate a statistically motivated biclustering procedure (LAS) that finds large average submatrices within a given real-valued data matrix. ...

  11. Time Averaged VHE Spectrum of Mrk 421 in 2005

    CERN Document Server

    Daniel, M K

    2008-01-01

    The blazar Mrk421 was observed independently, but contemporaneously, in 2005 at TeV energies by MAGIC, the Whipple 10m telescope, and by a single VERITAS telescope during the construction phase of operations. A comparison of the time averaged spectra, in what was a relatively quiescent state, demonstrates the level of agreement between instruments. In addition, the increased sensitivity of the new generation instruments, and ever decreasing energy thresholds, questions how best to compare new observational data with archival results.

  12. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 122, - (2013), s. 40-49. ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013
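The record above is bibliographic only; as a generic illustration of an infinite moving average process (not the authors' methane model), the sketch below simulates X_t = Σ_j φ^j ε_{t-j}, the MA(∞) representation of a stationary AR(1), truncated at a finite number of lags:

```python
import numpy as np

def ma_infinity(phi, sigma, n, trunc=200, seed=0):
    """Simulate X_t = sum_{j>=0} phi**j * eps_{t-j}, truncating the infinite
    moving average at `trunc` lags (geometric weights from an AR(1))."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=n + trunc)
    psi = phi ** np.arange(trunc)                 # truncated MA(inf) weights
    return np.convolve(eps, psi, mode="valid")[:n]

x = ma_infinity(phi=0.6, sigma=1.0, n=20000)
print(x.var())   # theory: sigma**2 / (1 - phi**2) = 1.5625
```

The truncation error is geometric in `trunc`, so 200 lags is more than enough for φ = 0.6.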

  13. A Cluster-Size Averaging Model for Strongly Discontinuous Percolation

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2016-05-01

    We propose a network percolation model, called the cut-off model, which exhibits a strongly discontinuous transition due to an averaging effect of cluster sizes. In this model, a randomly selected bond is added only if the size of the cluster that the bond would form is less than a times the mean cluster size. It is shown that the transition is strongly discontinuous when a is a finite constant.
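A rough union-find sketch of the cut-off rule on a random graph (the paper's network setting and parameter values may differ; the choices of a, n, and the number of bond attempts here are illustrative):

```python
import random

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.clusters = n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            self.size[rb] += self.size[ra]
            self.clusters -= 1

def cutoff_model(n, steps, a, seed=0):
    """Add a random bond only if the merged cluster would stay below
    a * (mean cluster size), mimicking the cut-off rule."""
    rng = random.Random(seed)
    dsu = DisjointSet(n)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        ri, rj = dsu.find(i), dsu.find(j)
        if ri == rj:
            continue
        if dsu.size[ri] + dsu.size[rj] < a * (n / dsu.clusters):
            dsu.union(i, j)
    return dsu

dsu = cutoff_model(n=1000, steps=5000, a=3.0)
sizes = sorted(dsu.size[r] for r in {dsu.find(i) for i in range(1000)})
print(dsu.clusters, sizes[-1])
```

Because the cut-off is tied to the running mean, cluster sizes stay tightly bunched until growth is released, which is the mechanism behind the discontinuous jump.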

  14. Forecasting the Price of Gold Using Dynamic Model Averaging

    OpenAIRE

    Goodness Aye; Rangan Gupta; Shawkat Hammoudeh; Won Joong Kim

    2014-01-01

    We develop models for examining possible predictors of the return on gold that embrace six global factors (business cycle, nominal, interest rate, commodity, exchange rate and stock price factors) and two uncertainty indices (the Kansas City Fed’s financial stress index and the U.S. Economic uncertainty index). Specifically, by comparing with other alternative models, we show that the dynamic model averaging (DMA) and dynamic model selection (DMS) models outperform not only a linear model (su...
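As a generic illustration of dynamic model averaging (not the authors' gold-return specification), the sketch below combines two toy forecasters with weights that are flattened by a forgetting factor and then updated by Gaussian predictive likelihoods:

```python
import math

def dynamic_model_average(y, forecasters, alpha=0.95, sigma=1.0):
    """DMA-style sketch: weights are flattened by forgetting factor alpha,
    then re-weighted by each forecaster's one-step predictive likelihood."""
    k = len(forecasters)
    w = [1.0 / k] * k
    combined = []
    for t in range(1, len(y)):
        preds = [f(y[:t]) for f in forecasters]
        wp = [wi ** alpha for wi in w]               # forgetting step
        s = sum(wp)
        wp = [wi / s for wi in wp]
        combined.append(sum(wi * p for wi, p in zip(wp, preds)))
        lik = [math.exp(-0.5 * ((y[t] - p) / sigma) ** 2) for p in preds]
        w = [wi * li for wi, li in zip(wp, lik)]     # likelihood update
        s = sum(w) or 1.0
        w = [wi / s for wi in w]
    return combined, w

last_value = lambda h: h[-1]               # toy forecaster 1
running_mean = lambda h: sum(h) / len(h)   # toy forecaster 2
series = [0.0, 0.1, 0.2, 0.15, 0.3, 0.25, 0.4]
preds, weights = dynamic_model_average(series, [last_value, running_mean])
print(preds[-1], weights)
```

Dynamic model selection (DMS) would instead pick, at each date, the single model with the highest weight.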

  15. Characterizations of Sobolev spaces via averages on balls

    Czech Academy of Sciences Publication Activity Database

    Dai, F.; Gogatishvili, Amiran; Yang, D.; Yuan, W.

    2015-01-01

    Roč. 128, November (2015), s. 86-99. ISSN 0362-546X R&D Projects: GA ČR GA13-14743S Institutional support: RVO:67985840 Keywords : Sobolev space * average on ball * difference * Euclidean space * space of homogeneous type Subject RIV: BA - General Mathematics Impact factor: 1.327, year: 2014 http://www.sciencedirect.com/science/article/pii/S0362546X15002618

  16. Night trend in average brightness of aurora discrete forms

    International Nuclear Information System (INIS)

    Using the superposed-epoch method on data from many years of visual observations of aurorae at the Tixi and Norilsk stations, a systematic decrease in the average brightness of discrete auroral forms over the course of the night is detected as the time elapsed after sunset at an altitude of 200 km above the station increases. It is assumed that this tendency is caused by a change in the electric conductivity of the ionospheric part of the auroral current systems.

  17. The inhomogeneous Universe : its average expansion and cosmic variance

    OpenAIRE

    Wiegand, Alexander

    2012-01-01

    Despite its global homogeneity and isotropy, the local matter distribution in the late Universe is manifestly inhomogeneous. Understanding the various effects resulting from these inhomogeneities is one of the most important tasks of modern cosmology. In this thesis, we investigate two aspects of the influence of local structure: firstly, to what extent do local structures modify the average expansion of spatial regions with a given size, and secondly, how strongly does the presence of struct...

  18. Registration of 3D Face Scans with Average Face Models

    OpenAIRE

    Salah, Albert Ali; Alyuz, N.; Akarun, L.

    2008-01-01

    The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a costly one-to-all registration approach, which requires the registration of each facial surface to all faces in the gallery. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. We propose ...

  19. Averaging analysis of a point process adaptive algorithm

    OpenAIRE

    Solo, Victor

    2004-01-01

    Motivated by a problem in neural encoding, we introduce an adaptive (or real-time) parameter estimation algorithm driven by a counting process. Despite the long history of adaptive algorithms, this kind of algorithm is relatively new. We develop a finite-time averaging analysis which is nonstandard partly because of the point process setting and partly because we have sought to avoid requiring mixing conditions. This is significant since mixing conditions often place rest...

  20. State-space average modelling of 18-pulse diode rectifier

    OpenAIRE

    Griffo, Antonio; Wang, J B; Howe, D.

    2008-01-01

    The paper presents an averaged-value model of the direct symmetric topology of 18-pulse autotransformer AC-DC rectifiers. The model captures the key features of the dynamic characteristics of the rectifiers, while being time invariant and computationally efficient. The developed models, validated by comparison of the resultant transient and steady-state behaviours with those obtained from detailed simulations, can therefore be used for stability assessment of electric power syste...

  1. Averaging cross section data so we can fit it

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D. [Brookhaven National Lab. (BNL), Upton, NY (United States). NNDC

    2014-10-23

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach theory based nuclear reaction code, requires the cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say, above 500 keV).
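The Lorentzian smoothing step can be sketched as a weighted local average over the energy grid. The grid, the resonance-like fluctuations, and the width below are invented for illustration, not taken from the 56Fe evaluation:

```python
import numpy as np

def lorentzian_average(E, values, gamma):
    """Weighted local average with a Lorentzian profile of half-width gamma
    centered at each energy point."""
    out = np.empty_like(values)
    for i, e0 in enumerate(E):
        w = gamma / ((E - e0) ** 2 + gamma ** 2)   # Lorentzian weights
        out[i] = (w * values).sum() / w.sum()
    return out

E = np.linspace(0.5, 2.0, 400)   # hypothetical energy grid (MeV)
rng = np.random.default_rng(0)
xs = 1.0 + 0.3 * np.sin(40.0 * E) + 0.1 * rng.normal(size=E.size)  # toy cross section
smooth = lorentzian_average(E, xs, gamma=0.05)
print(xs.std(), smooth.std())    # smoothing damps the fluctuations
```

Choosing gamma comparable to the fluctuation scale suppresses the resonance structure while leaving the slowly varying average that a Hauser-Feshbach calculation can be fitted to.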

  2. The role of the harmonic vector average in motion integration.

    Science.gov (United States)

    Johnston, Alan; Scarfe, Peter

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case, perceived velocity generally defaults to the HVA. PMID:24155716
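A minimal sketch of the HVA combination rule as described above: map each local velocity v to v/|v|², take the arithmetic mean, and map back. The synthetic contour sample below checks that an unbiased set of local normal velocities recovers the global velocity V = (1, 0), while the plain vector average underestimates the speed:

```python
import numpy as np

def harmonic_vector_average(vs):
    """HVA: map each local velocity v to v/|v|^2, take the arithmetic mean,
    then map the mean back the same way."""
    vs = np.asarray(vs, dtype=float)
    inv = vs / (vs ** 2).sum(axis=1, keepdims=True)
    m = inv.mean(axis=0)
    return m / (m ** 2).sum()

# Local normal velocities of a contour translating with global velocity (1, 0):
# direction theta, speed cos(theta).
thetas = np.deg2rad(np.arange(-60, 61, 10))
local_vs = np.stack([np.cos(thetas) ** 2,
                     np.cos(thetas) * np.sin(thetas)], axis=1)
hva = harmonic_vector_average(local_vs)
va = local_vs.mean(axis=0)   # plain vector average underestimates the speed
print(hva, va)
```

The symmetric sample of orientations is the "unbiased" case from the abstract; a one-sided sample of thetas would bias the HVA and then require the IOC combination step.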

  3. Averaging cross section data so we can fit it

    International Nuclear Information System (INIS)

    The 56Fe cross sections we are interested in have a lot of fluctuations. We would like to fit the average of the cross section with cross sections calculated within EMPIRE. EMPIRE, a Hauser-Feshbach theory based nuclear reaction code, requires the cross sections to be smoothed using a Lorentzian profile. The plan is to fit EMPIRE to these cross sections in the fast region (say, above 500 keV).

  4. Characterizing individual painDETECT symptoms by average pain severity

    Science.gov (United States)

    Sadosky, Alesia; Koduru, Vijaya; Bienen, E Jay; Cappelleri, Joseph C

    2016-01-01

    Background painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain-severity levels can serve as proxies to determine treatment effects, thus indicating probabilities for more favorable outcomes on pain symptoms.
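The ridit probability described above — the chance that a randomly selected subject from one severity level has a more favorable item score than one from another — equals the Mann-Whitney U statistic divided by the number of pairs. A small sketch with invented scores (assuming lower painDETECT item scores are more favorable):

```python
def ridit_probability(group_a, group_b):
    """Probability that a random subject from group_a scores lower (more
    favorably, assuming lower = milder) than one from group_b; ties count
    half. Equals the Mann-Whitney U statistic over the number of pairs."""
    wins = ties = 0
    for a in group_a:
        for b in group_b:
            if a < b:
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(group_a) * len(group_b))

mild = [1, 2, 2, 3]       # hypothetical painDETECT item scores
severe = [3, 4, 4, 5]
print(ridit_probability(mild, severe))
```

A value above 50%, as reported throughout the study, means the milder group is more likely to have the more favorable item score.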

  5. Average Contrastive Divergence for Training Restricted Boltzmann Machines

    OpenAIRE

    Xuesi Ma; Xiaojie Wang

    2016-01-01

    This paper studies contrastive divergence (CD) learning algorithm and proposes a new algorithm for training restricted Boltzmann machines (RBMs). We derive that CD is a biased estimator of the log-likelihood gradient method and make an analysis of the bias. Meanwhile, we propose a new learning algorithm called average contrastive divergence (ACD) for training RBMs. It is an improved CD algorithm, and it is different from the traditional CD algorithm. Finally, we obtain some experimental resul...

  6. Average energy efficiency contours for single carrier AWGN MAC

    OpenAIRE

    Akbari A; Imran M.A.; Hoshyar R.; Tafazolli R.

    2011-01-01

    Energy efficiency has become increasingly important in wireless communications, with significant environmental and financial benefits. This paper studies the achievable capacity region of a single carrier uplink channel consisting of two transmitters and a single receiver, and uses average energy efficiency contours to find the optimal rate pair based on four different targets: Maximum energy efficiency, a trade-off between maximum energy efficiency and rate fairness, achieving energy efficie...

  7. Averaging analysis for discrete time and sampled data adaptive systems

    Science.gov (United States)

    Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.

    1986-01-01

    Earlier continuous time averaging theorems are extended to the nonlinear discrete time case. Theorems for the study of the convergence analysis of discrete time adaptive identification and control systems are used. Instability theorems are also derived and used for the study of robust stability and instability of adaptive control schemes applied to sampled data systems. As a by product, the effects of sampling on unmodeled dynamics in continuous time systems are also studied.
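The flavor of the averaging approximation can be sketched on a toy slow system: a zero-mean periodic perturbation is replaced by its average (zero), and the two trajectories stay O(ε) close. The system below is purely illustrative and not taken from the paper:

```python
import math

def simulate(eps, steps, x0=1.0, omega=1.0):
    """Toy slow system x_{k+1} = x_k + eps*(-x_k + cos(omega*k)) alongside
    its averaged system, in which the zero-mean perturbation is dropped."""
    x, xa = x0, x0
    for k in range(steps):
        x += eps * (-x + math.cos(omega * k))
        xa += eps * (-xa)
    return x, xa

x, xa = simulate(eps=0.01, steps=2000)
print(abs(x - xa))   # the trajectories remain O(eps) close
```

The averaged system is autonomous and easy to analyze; the averaging theorems extended in the paper justify transferring its stability conclusions back to the time-varying system for small ε.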

  8. Marginal versus average beta of equity under corporate taxation

    OpenAIRE

    Lund, Diderik

    2009-01-01

    Even for fully equity-financed firms there may be substantial effects of taxation on the after-tax cost of capital. Among the few studies of these effects, even fewer identify all effects correctly. When marginal investment is taxed together with inframarginal, marginal beta differs from average if there are investment-related deductions like depreciation. To calculate asset betas, one should not only "unlever" observed equity betas, but "untax" and "unaverage" them. Risky tax claims are value...

  9. Targeted Cancer Screening in Average-Risk Individuals.

    Science.gov (United States)

    Marcus, Pamela M; Freedman, Andrew N; Khoury, Muin J

    2015-11-01

    Targeted cancer screening refers to use of disease risk information to identify those most likely to benefit from screening. Researchers have begun to explore the possibility of refining screening regimens for average-risk individuals using genetic and non-genetic risk factors and previous screening experience. Average-risk individuals are those not known to be at substantially elevated risk, including those without known inherited predisposition, without comorbidities known to increase cancer risk, and without previous diagnosis of cancer or pre-cancer. In this paper, we describe the goals of targeted cancer screening in average-risk individuals, present factors on which cancer screening has been targeted, discuss inclusion of targeting in screening guidelines issued by major U.S. professional organizations, and present evidence to support or question such inclusion. Screening guidelines for average-risk individuals currently target age; smoking (lung cancer only); and, in some instances, race; family history of cancer; and previous negative screening history (cervical cancer only). No guidelines include common genomic polymorphisms. RCTs suggest that targeting certain ages and smoking histories reduces disease-specific cancer mortality, although some guidelines extend ages and smoking histories based on statistical modeling. Guidelines that are based on modestly elevated disease risk typically have little or no evidence of a mortality benefit. In time, targeted cancer screening is likely to include genetic factors and past screening experience as well as non-genetic factors other than age, smoking, and race, but it is of utmost importance that clinical implementation be evidence-based. PMID:26165196

  10. Average dynamics of a finite set of coupled phase oscillators

    Energy Technology Data Exchange (ETDEWEB)

    Dima, Germán C., E-mail: gdima@df.uba.ar; Mindlin, Gabriel B. [Laboratorio de Sistemas Dinámicos, IFIBA y Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellón 1, Ciudad Universitaria, Buenos Aires (Argentina)

    2014-06-15

    We study the solutions of a dynamical system describing the average activity of an infinitely large set of driven coupled excitable units. We compare their topological organization with that reconstructed from the numerical integration of finite sets. In this way, we present a strategy to establish the pertinence of approximating the dynamics of finite sets of coupled nonlinear units by the dynamics of their infinitely large surrogate.
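A standard finite-N stand-in for such averaged activity is the Kuramoto order parameter r = |mean(exp(iθ))|; the generic mean-field model below is illustrative and is not the authors' driven excitable-unit system:

```python
import cmath, math, random

def kuramoto_order_parameter(n, coupling, steps=2000, dt=0.05, seed=0):
    """Euler-integrate n mean-field coupled phase oscillators and return the
    final order parameter r = |mean(exp(i*theta))|, the averaged activity."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.1) for _ in range(n)]      # natural frequencies
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n     # mean field
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

print(kuramoto_order_parameter(100, coupling=2.0))  # strong coupling: near 1
print(kuramoto_order_parameter(100, coupling=0.0))  # uncoupled: small
```

Comparing r for growing n against the mean-field (infinite-N) prediction is the kind of finite-versus-infinite comparison the abstract describes.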

  11. Model characteristics of average skill boxers’ competition functioning

    OpenAIRE

    Martsiv V.P.

    2015-01-01

    Purpose: analysis of the competition functioning of average-skill boxers. Material: 28 fights of student boxers have been analyzed. The following coefficients have been determined: effectiveness of punches, reliability of defense. The fights were conducted by the formula of 3 rounds of 3 minutes each. Results: model characteristics of boxers at the stage of specialized basic training have been worked out. Correlations between indicators of specialized and general exercises have been determined. ...

  12. Light-cone averages in a Swiss-cheese universe

    International Nuclear Information System (INIS)

    We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaitre-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w0 and wa follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model

  13. Average resonance parameters of germanium and selenium nuclei

    International Nuclear Information System (INIS)

    Full sets of average resonance parameters S0, S1, R0', R1', S1,3/2 for germanium and selenium nuclei with natural isotope content are determined. The parameters are obtained from the analysis of experimental neutron elastic scattering cross sections in the energy region up to 440 keV using a method developed by the authors. Recommended parameters and some literature data are also analyzed.

  14. Average resonance parameters of tellurium and neodymium nuclei

    International Nuclear Information System (INIS)

    Complete sets of average resonance parameters S0, S1, R''0, R''1, and S1,3/2 for tellurium and neodymium nuclei with natural isotope contents have been determined by analyzing the experimental differential cross-sections of neutron elastic scattering in the energy range lower than 440 keV. The data obtained, the recommended parameter values, and some literature data have been analyzed.

  15. The role of the harmonic vector average in motion integration

    OpenAIRE

    Alan eJohnston; Peter eScarfe

    2013-01-01

    The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging in addition t...

  16. Industry-grade high average power femtosecond light source

    Science.gov (United States)

    Heckl, O. H.; Weiler, S.; Fleischhaker, R.; Gebs, R.; Budnicki, A.; Wolf, M.; Kleinbauer, J.; Russ, S.; Kumkar, M.; Sutter, D. H.

    2014-03-01

    Ultrashort pulses are capable of processing practically any material with a negligible heat-affected zone. Typical pulse durations for industrial applications lie in the low-picosecond regime. Pulse durations of 5 ps or below are a well-established compromise between the electron-phonon interaction time of most materials and the need for pulses long enough to suppress detrimental effects such as nonlinear interaction with the ablated plasma plume. However, sub-picosecond pulses can further increase the ablation efficiency for certain materials, depending on the available average power, pulse energy and peak fluence. Based on the well-established TruMicro 5000 platform (first release in 2007, third generation in 2011), an Yb:YAG disk amplifier in combination with a broadband seed laser was used to scale the output power of industrial femtosecond light sources: We report on a sub-picosecond amplifier that delivers a maximum of 160 W of average output power at pulse durations of 750 fs. Optimizing the system for maximum peak power allowed for pulse energies of 850 μJ at pulse durations of 650 fs. Based on this study and the proven design of the TruMicro 5000 product series, industry-grade, high-average-power femtosecond light sources are now available for 24/7 operation. Since their release in May 2013 we were able to increase the average output power of the TruMicro 5000 FemtoEdition from 40 W to 80 W while maintaining pulse durations around 800 fs. First studies on metals reveal a drastic increase in processing speed for some micro-processing applications.

  17. Probability density function transformation using seeded localized averaging

    International Nuclear Information System (INIS)

    Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: 1) For symmetric distributions, SLA preserves both the mean and symmetry. 2) For uni-modal symmetric distributions, SLA reduces variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with Gaussian PDF the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)
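The variance-reducing property can be checked numerically: plain pairwise averaging of pulse-heights (a simplification of SLA's dynamic windows) preserves the mean of a Gaussian peak and shrinks its spread by a factor of √2. The peak position and width below are invented:

```python
import random, statistics

def average_pairs(samples):
    """Average consecutive pairs of pulse-heights: the mean is preserved and,
    for independent Gaussian samples, the spread shrinks by sqrt(2)."""
    return [(a + b) / 2.0 for a, b in zip(samples[::2], samples[1::2])]

rng = random.Random(0)
pulse_heights = [rng.gauss(662.0, 30.0) for _ in range(20000)]  # toy peak (keV)
narrowed = average_pairs(pulse_heights)
print(statistics.stdev(pulse_heights), statistics.stdev(narrowed))
```

As the abstract notes, SLA's windowed averaging does not keep the output exactly Gaussian, which is why the paper derives the transformed PDF analytically rather than relying on this Gaussian intuition.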

  18. Averaging kernels for DOAS total-column satellite retrievals

    Directory of Open Access Journals (Sweden)

    H. J. Eskes

    2003-01-01

    The Differential Optical Absorption Spectroscopy (DOAS) method is used extensively to retrieve total column amounts of trace gases based on UV-visible measurements of satellite spectrometers, such as ERS-2 GOME. In practice the sensitivity of the instrument to the tracer density is strongly height dependent, especially in the troposphere. The resulting tracer-profile dependence may introduce large systematic errors in the retrieved columns that are difficult to quantify without proper additional information, as provided by the averaging kernel (AK). In this paper we discuss the DOAS retrieval method in the context of the general retrieval theory developed by Rodgers. An expression is derived for the DOAS AK for optically thin absorbers. It is shown that the comparison with 3D chemistry-transport models and independent profile measurements, based on averaging kernels, is no longer influenced by errors resulting from a priori profile assumptions. The availability of averaging kernel information as part of the total column retrieval product is important for the interpretation of the observations, and for applications like chemical data assimilation and detailed satellite validation studies.

  19. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute-dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California, Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  20. Role of spatial averaging in multicellular gradient sensing

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation–global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation–global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  1. The National Average is a D: Who is to Blame?

    Directory of Open Access Journals (Sweden)

    Collie-Patterson, Janet M.

    2008-01-01

    The publication of the Bahamas General Certificate of Secondary Education (BGCSE) 2005 examination results sparked much debate about the national average being a D. Much of the debate focused on the teacher and the school, whilst very little was said about the other contributors to achievement in education. In her 1999 study of 1,036 students and 52 teachers from public and private schools in New Providence, Collie-Patterson found that the set of student characteristics, consisting of the student's prior ability, attitude toward school, socioeconomic status and parental involvement, made the largest contribution (60%) to mathematics achievement. Taken individually, the effect size indicated that the student's prior ability made the largest contribution (48%) to mathematics achievement. The set of teachers' characteristics, including professional development, teaching experience, and educational background, was significantly related to mathematics achievement but contributed only 8% to students' mathematics achievement. The set of classroom characteristics contributed 35% and the set of school characteristics contributed 12% to mathematics achievement. The purpose of this paper is to analyze the factors that could potentially influence student performance in mathematics, which greatly affects the national average due to the large number of students taking the mathematics examination and the low grade point average on that examination.

  2. Resonance averaged channel radiative neutron capture cross sections

    International Nuclear Information System (INIS)

    In order to apply Lane and Lynn's channel capture model in calculations with a realistic optical model potential, we have derived an approximate wave function for the entrance channel in the neutron-nucleus reaction, based on the intermediate interaction model. It is valid in the exterior region as well as the region near the nuclear surface, and is expressed in terms of the wave function and reactance matrix of the optical model and of the near-resonance parameters. With this formalism the averaged channel radiative neutron capture cross section in the resonance region is written as the sum of three terms. The first two terms correspond to the contributions of the optical model real and imaginary parts, respectively, and together can be regarded as the radiative capture of the shape elastic wave. The third term is a fluctuation term, corresponding to the radiative capture of the compound elastic wave in the exterior region. On applying this theory in the resonance region, we obtain an expression for the average valence radiative width similar to that of Lane and Mughabghab. We have investigated the magnitude and energy dependence of the three terms as a function of the neutron incident energy. Calculated results for 98Mo and 55Mn show that the averaged channel radiative capture cross section in the giant resonance region of the neutron strength function may account for a considerable fraction of the total (n, γ) cross section; at lower neutron energies a large part of this channel capture arises from the fluctuation term. We have also calculated the partial capture cross sections in 98Mo and 55Mn at 2.4 keV and 24 keV, respectively, and compared the 98Mo results with the experimental data. (orig.)

  3. Multifractal detrending moving-average cross-correlation analysis

    Science.gov (United States)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

    There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between the two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the
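
    The MFXDMA fluctuation function is built from profiles detrended by a moving average. A minimal sketch of the centered-detrending building block is shown below for the q = 2 case only; the full multifractal analysis additionally partitions the residuals into windows and raises them to q/2 moments, which is omitted here.

    ```python
    def moving_average(x, n):
        """Centered moving average of (odd) window n; windows shrink at the edges."""
        half = n // 2
        return [sum(x[max(0, i - half):i + half + 1]) / len(x[max(0, i - half):i + half + 1])
                for i in range(len(x))]

    def dma_fluctuation(x, y, n):
        """Second-order detrended moving-average cross-covariance F_xy^2(n).

        The profiles are the cumulative sums of the raw series, detrended by a
        centered moving average of window n (the 'centered MFXDMA' variant).
        """
        px = [sum(x[:i + 1]) for i in range(len(x))]
        py = [sum(y[:i + 1]) for i in range(len(y))]
        rx = [a - b for a, b in zip(px, moving_average(px, n))]
        ry = [a - b for a, b in zip(py, moving_average(py, n))]
        return sum(a * b for a, b in zip(rx, ry)) / len(rx)
    ```

    The scaling exponent hxy(2) is then obtained by regressing log F_xy(n) against log n over a range of window sizes n.
    
    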

  4. Human facial beauty : Averageness, symmetry, and parasite resistance.

    Science.gov (United States)

    Thornhill, R; Gangestad, S W

    1993-09-01

    It is hypothesized that human faces judged to be attractive by people possess two features-averageness and symmetry-that promoted adaptive mate selection in human evolutionary history by way of production of offspring with parasite resistance. Facial composites made by combining individual faces are judged to be attractive, and more attractive than the majority of individual faces. The composites possess both symmetry and averageness of features. Facial averageness may reflect high individual protein heterozygosity and thus an array of proteins to which parasites must adapt. Heterozygosity may be an important defense of long-lived hosts against parasites when it occurs in portions of the genome that do not code for the essential features of complex adaptations. In this case heterozygosity can create a hostile microenvironment for parasites without disrupting adaptation. Facial bilateral symmetry is hypothesized to affect positive beauty judgments because symmetry is a certification of overall phenotypic quality and developmental health, which may be importantly influenced by parasites. Certain secondary sexual traits are influenced by testosterone, a hormone that reduces immunocompetence. Symmetry and size of the secondary sexual traits of the face (e.g., cheek bones) are expected to correlate positively and advertise immunocompetence honestly and therefore to affect positive beauty judgments. Facial attractiveness is predicted to correlate with attractive, nonfacial secondary sexual traits; other predictions from the view that parasite-driven selection led to the evolution of psychological adaptations of human beauty perception are discussed. The view that human physical attractiveness and judgments about human physical attractiveness evolved in the context of parasite-driven selection leads to the hypothesis that both adults and children have a species-typical adaptation to the problem of identifying and favoring healthy individuals and avoiding parasite

  5. Measurement of the average lifetime of hadrons containing bottom quarks

    International Nuclear Information System (INIS)

    A measurement of the average lifetime of hadrons containing bottom quarks is presented. The b hadrons are produced in e+e- annihilation at 29 GeV, and the lifetime is determined from the impact parameters of high-transverse-momentum electrons produced in the decay of the b hadrons. A b lifetime of tau_b = 1.17 +0.27/-0.22 (stat) +0.17/-0.16 (sys) ps is determined from a maximum-likelihood fit to the impact parameters. Particular care has been taken to describe the experimental resolution correctly in the fit

  6. Jackknife model averaging of the current account determinants

    Directory of Open Access Journals (Sweden)

    Urošević Branko

    2012-01-01

    Full Text Available This paper investigates the short- to medium-term empirical relationships between current account balances and a broad set of macroeconomic determinants in Serbia and selected CEE countries. Using novel model averaging techniques, we restrict the analysis to each individual country's data only. The results suggest that the model tracks the current account movements over the past decade quite well and captures their relative volatility. Signs and magnitudes of different coefficients indicate significant heterogeneity among countries, providing empirical support for the country-level analysis.

  7. Effect of random edge failure on the average path length

    International Nuclear Information System (INIS)

    We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks suffering random edge removal is derived first. Then, the formula is confirmed by simulations for classical ER (Erdős and Rényi) random graphs, BA (Barabási and Albert) networks, networks with exponential degree distributions as well as random networks with asymptotic power-law degree distributions with exponent α > 2. (paper)
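
    The quantity under study can be illustrated with a toy simulation, independent of the paper's hidden-variable approximation formula: compute the APL by breadth-first search, delete a random fraction of edges, and recompute. The sketch below averages over connected ordered pairs only.

    ```python
    import random
    from collections import deque

    def average_path_length(adj):
        """Average shortest-path length over all connected ordered pairs (BFS)."""
        total, pairs = 0, 0
        for src in adj:
            dist = {src: 0}
            queue = deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            for node, d in dist.items():
                if node != src:
                    total += d
                    pairs += 1
        return total / pairs

    def remove_random_edges(adj, fraction, rng):
        """Return a copy of adj with a given fraction of edges deleted at random."""
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        removed = rng.sample(edges, int(fraction * len(edges)))
        new_adj = {u: set(adj[u]) for u in adj}
        for u, v in removed:
            new_adj[u].discard(v)
            new_adj[v].discard(u)
        return new_adj
    ```

    On a dense ER graph, the APL computed this way typically grows as the removed fraction increases, which is the behavior the paper's formula approximates analytically.
    
    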

  8. Using Averaged Modeling for Capacitors Voltages Observer in NPC Inverter

    Directory of Open Access Journals (Sweden)

    Bassem Omri

    2012-01-01

    Full Text Available This paper developed an adaptive observer to estimate the capacitor voltages of a three-level neutral-point-clamped (NPC) inverter. A robust estimation method using a single parameter is proposed, which eliminates the voltage sensors. An averaged model of the inverter was used to develop the observer. This kind of modeling allows a good trade-off between simulation cost and precision. A circuit model of the inverter (implemented in the SimPower Matlab simulator) associated with the observer algorithm was used to validate the proposed algorithm.

  9. An averaging method for nonlinear laminar Ekman layers

    DEFF Research Database (Denmark)

    Andersen, A.; Lautrup, B.; Bohr, T.

    2003-01-01

    We study steady laminar Ekman boundary layers in rotating systems using an averaging method similar to the technique of von Karman and Pohlhausen. The method allows us to explore nonlinear corrections to the standard Ekman theory even at large Rossby numbers. We consider both the standard self-similar ansatz for the velocity profile, which assumes that a single length scale describes the boundary layer structure, and a new non-self-similar ansatz in which the decay and the oscillations of the boundary layer are described by two different length scales. For both profiles we calculate the up-flow in a...

  10. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.

  11. Weighted Average Consensus-Based Unscented Kalman Filtering.

    Science.gov (United States)

    Li, Wangyan; Wei, Guoliang; Han, Fei; Liu, Yurong

    2016-02-01

    In this paper, we investigate the consensus-based distributed state estimation problem for a class of sensor networks within the unscented Kalman filter (UKF) framework. The communication status among sensors is represented by a connected undirected graph. A weighted average consensus-based UKF algorithm is developed to estimate the true state of interest, and its estimation error is proven to be bounded in mean square. Finally, the effectiveness of the proposed consensus-based UKF algorithm is validated through a simulation example. PMID:26168453
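
    Independent of the UKF machinery (which the abstract does not detail), the weighted average consensus idea itself can be sketched as the ratio of two plain average-consensus iterations on an undirected graph: one on the weighted values and one on the weights. This is an illustrative construction, not the paper's algorithm; the step size epsilon must be below 1/(max degree) for convergence.

    ```python
    def average_consensus(values, neighbors, epsilon=0.2, steps=200):
        """Plain average consensus: for small epsilon on a connected undirected
        graph, every node converges to the mean of the initial values."""
        x = list(values)
        for _ in range(steps):
            x = [x[i] + epsilon * sum(x[j] - x[i] for j in neighbors[i])
                 for i in range(len(x))]
        return x

    def weighted_average_consensus(values, weights, neighbors, epsilon=0.2, steps=200):
        """Run average consensus on w_i*x_i and on w_i separately; the ratio at
        each node converges to sum(w_i x_i) / sum(w_i)."""
        num = average_consensus([w * v for w, v in zip(weights, values)],
                                neighbors, epsilon, steps)
        den = average_consensus(weights, neighbors, epsilon, steps)
        return [n / d for n, d in zip(num, den)]
    ```

    In the filtering setting, each node's local estimate and its information weight would play the roles of `values` and `weights` at every time step.
    
    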

  12. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness) for mild vs moderate pain and the highest probability was 76.4% (on cold/heat) for mild vs severe pain. The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain
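
    The ridit-style probability reported here, the chance that a randomly drawn subject from one severity level has a more favorable (lower) score than one from another level, with ties split evenly, is the same quantity that underlies the Mann-Whitney statistic and can be computed directly. The scores below are hypothetical illustrations, not the study's data.

    ```python
    def favorable_probability(group_a, group_b):
        """P(score from group_a < score from group_b), counting ties as 1/2.

        This is the common-language effect size behind a ridit analysis and
        the Wilcoxon rank-sum / Mann-Whitney test.
        """
        wins = ties = 0
        for a in group_a:
            for b in group_b:
                if a < b:
                    wins += 1
                elif a == b:
                    ties += 1
        return (wins + 0.5 * ties) / (len(group_a) * len(group_b))
    ```

    A value above 0.5, as reported for every sensory item, means the milder group tends to have the more favorable item score.
    
    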

  13. Small Bandwidth Asymptotics for Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels, and the standard errors are "robust" in the sense that they accommodate (but do not require) bandwidths that are smaller than those for which conventional standard errors are valid. Moreover, the results of a Monte Carlo experiment suggest that the finite sample coverage rates of confidence intervals constructed using the standard errors developed in this paper coincide (approximately) with the nominal coverage rates across a nontrivial range of bandwidths.

  14. RELATIONSHIP BETWEEN J-INTEGRAL AND FRACTURE SURFACE AVERAGE PROFILE

    Institute of Scientific and Technical Information of China (English)

    Y.G. Cao; S.F. Xue; K.Tanaka

    2007-01-01

    To investigate the causes that led to the formation of cracks in materials, a novel method that only considers the fracture surfaces for determining the fracture toughness parameter of J-integral for plane strain was proposed. The principle of fracture-surface topography analysis (FRASTA) was used. In FRASTA, the fracture surfaces were scanned by laser microscope and the elevation data were recorded for analysis. The relationship between the J-integral and the fracture surface average profile for plane strain was deduced. It was also verified that the J-integral determined by the novel method and by the compliance method agree well with each other.

  15. Concentration fluctuations and averaging time in vapor clouds

    CERN Document Server

    Wilson, David J

    2010-01-01

    This book contributes to more reliable and realistic predictions by focusing on sampling times from a few seconds to a few hours. Its objectives include developing clear definitions of statistical terms, such as plume sampling time, concentration averaging time, receptor exposure time, and other terms often confused with each other or incorrectly specified in hazard assessments; identifying and quantifying situations for which there is no adequate knowledge to predict concentration fluctuations in the near-field, close to sources, and far downwind where dispersion is dominated by atmospheric t

  16. An Exponentially Weighted Moving Average Control Chart for Bernoulli Data

    DEFF Research Database (Denmark)

    Spliid, Henrik

    2010-01-01

    We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures in combination with a variance-stabilizing transformation. The distribution function of the
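
    The two ingredients named in the abstract, counting non-failures between failures and smoothing those counts with an EWMA recursion, can be sketched as follows. The variance-stabilizing transformation mentioned in the paper is not reproduced here (its exact form is not given in this record); a square-root or similar transform would be applied to each count before charting.

    ```python
    def runs_between_failures(outcomes):
        """Counts of successes preceding each failure.

        outcomes is a sequence of 0 (success / non-failure) and 1 (failure);
        each failure closes a run and yields the count since the last failure.
        """
        runs, count = [], 0
        for o in outcomes:
            if o:
                runs.append(count)
                count = 0
            else:
                count += 1
        return runs

    def ewma(series, lam, z0=0.0):
        """EWMA recursion z_t = lam * y_t + (1 - lam) * z_{t-1}."""
        z, out = z0, []
        for y in series:
            z = lam * y + (1 - lam) * z
            out.append(z)
        return out
    ```

    An out-of-control signal would be raised when the charted statistic crosses control limits derived from the in-control value of p.
    
    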

  17. Averaged hole mobility model of biaxially strained Si

    Institute of Scientific and Technical Information of China (English)

    Song Jianjun; Zhu He; Yang Jinyong; Zhang Heming; Xuan Rongxi; Hu Huiyong

    2013-01-01

    We aim to establish a model of the averaged hole mobility of strained Si grown on (001), (101), and (111) relaxed Si1-xGex substrates. The results obtained from our calculation show that the hole mobility values corresponding to strained Si (001), (101), and (111) increase by at most about three, two, and one times, respectively, in comparison with unstrained Si. The results can provide a valuable reference for the understanding and design of strained Si-based device physics.

  18. Revisiting the solar tachocline: Average properties and temporal variations

    OpenAIRE

    Antia, H. M.; Basu, Sarbani

    2011-01-01

    The tachocline is believed to be the region where the solar dynamo operates. With over a solar cycle's worth of data available from the MDI and GONG instruments, we are in a position to investigate not merely the average structure of the solar tachocline, but also its time variations. We determine the properties of the tachocline as a function of time by fitting a two-dimensional model that takes latitudinal variations of the tachocline properties into account. We confirm that if we consider ...

  19. Scale anomalies imply violation of the averaged null energy condition

    CERN Document Server

    Visser, M

    1994-01-01

    Considerable interest has recently been expressed regarding the issue of whether or not quantum field theory on a fixed but curved background spacetime satisfies the averaged null energy condition (ANEC). A comment by Wald and Yurtsever [Phys. Rev. D43, 403 (1991)] indicates that in general the answer is no. In this note I explore this issue in more detail, and succeed in characterizing a broad class of spacetimes in which the ANEC is guaranteed to be violated. Finally, I add some comments regarding ANEC violation in Schwarzschild spacetime.

  20. Control of average spacing of OMCVD grown gold nanoparticles

    Science.gov (United States)

    Rezaee, Asad

    Metallic nanostructures and their applications are a rapidly expanding field. Noble metals such as silver and gold have historically been used to demonstrate plasmon effects due to their strong resonances, which occur in the visible part of the electromagnetic spectrum. Localized surface plasmon resonance (LSPR) produces an enhanced electromagnetic field at the interface between a gold nanoparticle (Au NP) and the surrounding dielectric. This enhanced field can be used for metal-dielectric interface-sensitive optical interactions that form a powerful basis for optical sensing. In addition to the surrounding material, the LSPR spectral position and width depend on the size, shape, and average spacing between these particles. Au NP LSPR based sensors exhibit their highest sensitivity with optimized parameters and usually operate by investigating absorption peak shifts. The absorption peak of randomly deposited Au NPs on surfaces is mostly broad. As a result, the absorption peak shifts upon binding of a material onto Au NPs might not be very clear for further analysis. Therefore, novel methods based on three well-known techniques, self-assembly, ion irradiation, and organo-metallic chemical vapour deposition (OMCVD), are introduced to control the average spacing between Au NPs. In addition to covalent binding and other advantages of OMCVD grown Au NPs, interesting optical features due to their non-spherical shapes are presented. The first step towards the average-spacing control is to uniformly form self-assembled monolayers (SAMs) of octadecyltrichlorosilane (OTS) as resists for OMCVD Au NPs. The formation and optimization of the OTS SAMs are extensively studied. The optimized resist SAMs are ion-irradiated by a focused ion beam (FIB) and ions generated by a Tandem accelerator. The irradiated areas are refilled with 3-mercaptopropyl-trimethoxysilane (MPTS) to provide nucleation sites for the OMCVD Au NP growth. 
Each step during sample preparation is monitored by

  1. Model averaging for semiparametric additive partial linear models

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    To improve the prediction accuracy of semiparametric additive partial linear models (APLM) and the coverage probability of confidence intervals of the parameters of interest, we explore a focused information criterion for model selection among APLM after we estimate the nonparametric functions by polynomial spline smoothing, and introduce a general model average estimator. The major advantage of the proposed procedures is that iterative backfitting implementation is avoided, which thus results in gains in computational simplicity. The resulting estimators are shown to be asymptotically normal. A simulation study and a real data analysis are presented for illustration.

  2. A Note on the Weighted Average Cost of Capital WACC

    OpenAIRE

    Ignacio Velez-Pareja; Joseph Tham

    2000-01-01

    Most finance textbooks (See Benninga and Sarig, 1997, Brealey, Myers and Marcus, 1996, Copeland, Koller and Murrin, 1994, Damodaran, 1996, Gallagher and Andrew, 2000, Van Horne, 1998, Weston and Copeland, 1992) present the Weighted Average Cost of Capital WACC calculation as: WACC = d(1-T)D% + eE% (1) Where d is the cost of debt before taxes, T is the tax rate, D% is the percentage of debt on total value, e is the cost of equity and E% is the percentage of equity on total value. All of them p...
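
    Equation (1) is direct to compute. A minimal sketch, with purely hypothetical input values for illustration:

    ```python
    def wacc(d, T, D_pct, e, E_pct):
        """Textbook WACC from equation (1): after-tax cost of debt weighted by
        the debt share of total value, plus cost of equity weighted by the
        equity share.  d, e, T and the shares are all expressed as fractions."""
        return d * (1 - T) * D_pct + e * E_pct

    # Hypothetical firm: 10% pre-tax cost of debt, 30% tax rate,
    # 40/60 debt/equity split, 15% cost of equity.
    rate = wacc(0.10, 0.30, 0.40, 0.15, 0.60)
    ```

    The note's point of contention is the conditions under which this familiar weighting is actually valid, so the formula above should be read as the textbook baseline being examined, not as a settled result.
    
    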

  3. Optical Parametric Amplification for High Peak and Average Power

    Energy Technology Data Exchange (ETDEWEB)

    Jovanovic, I

    2001-11-26

    Optical parametric amplification is an established broadband amplification technology based on a second-order nonlinear process of difference-frequency generation (DFG). When used in chirped pulse amplification (CPA), the technology has been termed optical parametric chirped pulse amplification (OPCPA). OPCPA holds a potential for producing unprecedented levels of peak and average power in optical pulses through its scalable ultrashort pulse amplification capability and the absence of quantum defect, respectively. The theory of three-wave parametric interactions is presented, followed by a description of the numerical model developed for nanosecond pulses. Spectral, temperature and angular characteristics of OPCPA are calculated, with an estimate of pulse contrast. An OPCPA system centered at 1054 nm, based on a commercial tabletop Q-switched pump laser, was developed as the front end for a large Nd-glass petawatt-class short-pulse laser. The system does not utilize electro-optic modulators or multi-pass amplification. The obtained overall 6% efficiency is the highest to date in OPCPA that uses a tabletop commercial pump laser. The first compression of pulses amplified in highly nondegenerate OPCPA is reported, with the obtained pulse width of 60 fs. This represents the shortest pulse to date produced in OPCPA. Optical parametric amplification in β-barium borate was combined with laser amplification in Ti:sapphire to produce the first hybrid CPA system, with an overall conversion efficiency of 15%. Hybrid CPA combines the benefits of high gain in OPCPA with high conversion efficiency in Ti:sapphire to allow significant simplification of future tabletop multi-terawatt sources. Preliminary modeling of average power limits in OPCPA and pump laser design are presented, and an approach based on cascaded DFG is proposed to increase the average power beyond the single-crystal limit. Angular and beam quality effects in optical parametric amplification are modeled

  4. Aerodynamic Surface Stress Intermittency and Conditionally Averaged Turbulence Statistics

    Science.gov (United States)

    Anderson, W.

    2015-12-01

    Aeolian erosion of dry, flat, semi-arid landscapes is induced (and sustained) by kinetic energy fluxes in the aloft atmospheric surface layer. During saltation -- the mechanism responsible for surface fluxes of dust and sediment -- briefly suspended sediment grains undergo a ballistic trajectory before impacting and `splashing' smaller-diameter (dust) particles vertically. Conceptual models typically indicate that sediment flux, q (via saltation or drift), scales with imposed aerodynamic (basal) stress raised to some exponent, n, where n > 1. Since basal stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that q ~ u^(2n) (where u is some relevant component of the above flow field, u(x,t)). Thus, even small (turbulent) deviations of u from its time-averaged value may play an enormously important role in aeolian activity on flat, dry landscapes. The importance of this argument is further augmented given that turbulence in the atmospheric surface layer exhibits maximum Reynolds stresses in the fluid immediately above the landscape. In order to illustrate the importance of surface stress intermittency, we have used conditional averaging predicated on aerodynamic surface stress during large-eddy simulation of atmospheric boundary layer flow over a flat landscape with momentum roughness length appropriate for the Llano Estacado in west Texas (a flat agricultural region that is notorious for dust transport). By using data from a field campaign to measure diurnal variability of aeolian activity and prevailing winds on the Llano Estacado, we have retrieved the threshold friction velocity (which can be used to compute threshold surface stress under the geostrophic balance with the Monin-Obukhov similarity theory). This averaging procedure provides an ensemble-mean visualization of flow structures responsible for erosion `events'. 
Preliminary evidence indicates that surface stress peaks are associated with the passage of

  5. Analysis of nonlinear systems using ARMA [autoregressive moving average] models

    International Nuclear Information System (INIS)

    While many vibration systems exhibit primarily linear behavior, a significant percentage of the systems encountered in vibration and model testing are mildly to severely nonlinear. Analysis methods for such nonlinear systems are not yet well developed and the response of such systems is not accurately predicted by linear models. Nonlinear ARMA (autoregressive moving average) models are one method for the analysis and response prediction of nonlinear vibratory systems. In this paper we review the background of linear and nonlinear ARMA models, and illustrate the application of these models to nonlinear vibration systems. We conclude by summarizing the advantages and disadvantages of ARMA models and emphasizing prospects for future development. 14 refs., 11 figs
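
    Full ARMA estimation generally requires iterative (e.g. maximum-likelihood) methods, but the pure autoregressive part can be fit by ordinary least squares, which gives a feel for the model class the abstract discusses. A minimal AR(1) sketch on synthetic data (the series, seed, and coefficient are illustrative assumptions, not from the paper):

    ```python
    import random

    def fit_ar1(x):
        """Least-squares estimate of phi in x_t = phi * x_{t-1} + e_t."""
        num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
        den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
        return num / den

    # Simulate an AR(1) series with known phi and recover it from the data.
    rng = random.Random(42)
    phi_true = 0.6
    x = [0.0]
    for _ in range(2000):
        x.append(phi_true * x[-1] + rng.gauss(0, 1))
    phi_hat = fit_ar1(x)
    ```

    A nonlinear ARMA model of the kind reviewed in the paper would add terms in products and powers of past outputs and residuals, and its response prediction proceeds by iterating the fitted difference equation forward.
    
    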

  6. Interpreting multiple risk scales for sex offenders: evidence for averaging.

    Science.gov (United States)

    Lehmann, Robert J B; Hanson, R Karl; Babchishin, Kelly M; Gallasch-Nemitz, Franziska; Biedermann, Jürgen; Dahle, Klaus-Peter

    2013-09-01

    This study tested 3 decision rules for combining actuarial risk instruments for sex offenders into an overall evaluation of risk. Based on a 9-year follow-up of 940 adult male sex offenders, we found that Rapid Risk Assessment for Sex Offender Recidivism (RRASOR), Static-99R, and Static-2002R predicted sexual, violent, and general recidivism and provided incremental information for the prediction of all 3 outcomes. Consistent with previous findings, the incremental effect of RRASOR was positive for sexual recidivism but negative for violent and general recidivism. Averaging risk ratios was a promising approach to combining these risk scales, showing good calibration between predicted (E) and observed (O) recidivism rates (E/O index = 0.93, 95% CI [0.79, 1.09]) and good discrimination (area under the curve = 0.73, 95% CI [0.69, 0.77]) for sexual recidivism. As expected, choosing the lowest (least risky) risk tool resulted in underestimated sexual recidivism rates (E/O = 0.67, 95% CI [0.57, 0.79]) and choosing the highest (riskiest) resulted in overestimated risk (E/O = 1.37, 95% CI [1.17, 1.60]). For the prediction of violent and general recidivism, the combination rules provided similar or lower discrimination compared with relying solely on the Static-99R or Static-2002R. The current results support an averaging approach and underscore the importance of understanding the constructs assessed by violence risk measures. PMID:23730829
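
    The averaging rule and the E/O calibration index can be sketched directly. The base rate, risk ratios, and outcomes below are hypothetical illustrations, not the study's data.

    ```python
    def combine_by_averaging(ratio_lists):
        """Average each offender's risk ratios across instruments.

        ratio_lists is one list of per-offender risk ratios per instrument."""
        return [sum(rs) / len(rs) for rs in zip(*ratio_lists)]

    def e_over_o(base_rate, avg_ratios, observed):
        """E/O calibration index: expected recidivists (base rate times each
        offender's averaged risk ratio, summed) over observed recidivists."""
        expected = sum(base_rate * r for r in avg_ratios)
        return expected / sum(observed)
    ```

    An E/O near 1 indicates good calibration; the study's finding that taking the lowest or highest scale instead of the average pushes E/O well below or above 1 falls straight out of this construction.
    
    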

  7. Vibrationally averaged dipole moments of methane and benzene isotopologues

    Science.gov (United States)

    Arapiraca, A. F. C.; Mohallem, J. R.

    2016-04-01

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% to the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging on the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.

  8. Two-Stage Bayesian Model Averaging in Endogenous Variable Models.

    Science.gov (United States)

    Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E

    2014-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
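
    2SBMA builds on classic 2SLS; in the just-identified case with a single instrument, the 2SLS slope reduces to a ratio of covariances (the IV/Wald estimator), which is easy to sketch. The data below are a toy illustration, not from the paper; 2SBMA then averages such estimates over many instrument/covariate specifications with Bayesian weights.

    ```python
    def instrumental_variable_beta(z, x, y):
        """Just-identified 2SLS (IV) slope: cov(z, y) / cov(z, x), where z is
        the instrument, x the endogenous regressor, and y the outcome."""
        def cov(a, b):
            ma, mb = sum(a) / len(a), sum(b) / len(b)
            return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
        return cov(z, y) / cov(z, x)
    ```

    The validity of this estimate hinges on the instrument being exogenous, which is exactly what the paper's model-averaged Sargan-type posterior predictive test is designed to probe.
    
    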

  9. Average gluon and quark jet multiplicities at higher orders

    Energy Technology Data Exchange (ETDEWEB)

    Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics

    2013-05-15

    We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-x resummation obtained in the MS factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain α_s^(5)(M_Z) = 0.1199 ± 0.0026 in the MS scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of ln(x) terms through the NNLL level and of ln Q^2 terms by the renormalization group, in excellent agreement with the present world average.

  10. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and the consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e., they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for a full and controlled overview of all econometric models which can be obtained from a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.
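The core of BACE — weighting classical OLS estimates from every regressor subset by an information-criterion-based model weight — can be sketched as follows. The BIC-based weights and toy data are illustrative assumptions, not the paper's exact specification.

```python
import itertools
import numpy as np

def bace_average(y, X):
    """BIC-weighted averaging of OLS estimates over all non-empty
    subsets of the candidate regressors (the core idea of BACE)."""
    n, k = X.shape
    bics, betas = [], []
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(k), r) for r in range(1, k + 1))
    for subset in subsets:
        Xs = X[:, subset]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(np.sum((y - Xs @ coef) ** 2))
        # Schwarz/BIC approximation to the log marginal likelihood
        bics.append(n * np.log(rss / n) + len(subset) * np.log(n))
        full = np.zeros(k)
        full[list(subset)] = coef
        betas.append(full)
    b = np.asarray(bics)
    w = np.exp(-0.5 * (b - b.min()))   # stabilised model weights
    w /= w.sum()
    return w @ np.asarray(betas)       # model-averaged coefficients

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = 1.5 * X[:, 0] + rng.normal(size=400)   # only the first regressor matters
beta_avg = bace_average(y, X)
```

The averaged coefficient on the relevant regressor stays near its true value, while the irrelevant ones are shrunk toward zero because models excluding them carry most of the weight.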

  11. Average Fe K-alpha emission from distant AGN

    CERN Document Server

    Corral, A; Carrera, F J; Barcons, X; Mateos, S; Ebrero, J; Krumpe, M; Schwope, A; Tedds, J A; Watson, M G

    2008-01-01

    One of the most important parameters in XRB (X-ray background) synthesis models is the average efficiency of accretion onto SMBH (super-massive black holes). This can be inferred from the shape of the broad relativistic Fe lines seen in X-ray spectra of AGN (active galactic nuclei). Several studies have tried to measure the mean Fe emission properties of AGN at different depths, with very different results. We compute the mean Fe emission from a large and representative sample of AGN X-ray spectra up to redshift ~ 3.5. We developed a method of computing the rest-frame X-ray average spectrum and applied it to a large sample (more than 600 objects) of type 1 AGN from two complementary medium-sensitivity surveys based on XMM-Newton data, the AXIS and XWAS samples. This method makes use of medium-to-low quality spectra without needing to fit complex models to the individual spectra, instead computing a mean spectrum for the whole sample. Extensive quality tests were performed by comparing real to simulated data, a...

  12. Prompt fission neutron spectra and average prompt neutron multiplicities

    International Nuclear Information System (INIS)

    We present a new method for calculating the prompt fission neutron spectrum N(E) and average prompt neutron multiplicity ν̄_p as functions of the fissioning nucleus and its excitation energy. The method is based on standard nuclear evaporation theory and takes into account (1) the motion of the fission fragments, (2) the distribution of fission-fragment residual nuclear temperature, (3) the energy dependence of the cross section σ_c for the inverse process of compound-nucleus formation, and (4) the possibility of multiple-chance fission. We use a triangular distribution in residual nuclear temperature based on the Fermi-gas model. This leads to closed expressions for N(E) and ν̄_p when σ_c is assumed constant and readily computed quadratures when the energy dependence of σ_c is determined from an optical model. Neutron spectra and average multiplicities calculated with an energy-dependent cross section agree well with experimental data for the neutron-induced fission of ²³⁵U and the spontaneous fission of ²⁵²Cf. For the latter case, there are some significant inconsistencies between the experimental spectra that need to be resolved. 29 references

  13. Rainfall Estimation Over Tropical Oceans. 1; Area Average Rain Rate

    Science.gov (United States)

    Cuddapah, Prabhakara; Cadeddu, Maria; Meneghini, R.; Short, David A.; Yoo, Jung-Moon; Dalu, G.; Schols, J. L.; Weinman, J. A.

    1997-01-01

    Multichannel dual polarization microwave radiometer SSM/I observations over oceans do not contain sufficient information to differentiate quantitatively the rain from other hydrometeors on a scale comparable to the radiometer field of view (approx. 30 km). For this reason we have developed a method to retrieve average rain rate over a mesoscale grid box of approx. 300 x 300 sq km area over the TOGA COARE region, where simultaneous radiometer and radar observations are available for four months (Nov. 92 to Feb. 93). The rain area in the grid box, inferred from the scattering depression due to hydrometeors in the 85 GHz brightness temperature, constitutes a key parameter in this method. Then the spectral and polarization information contained in all the channels of the SSM/I is utilized to deduce a second parameter. This is the ratio S/E of the scattering index S and the emission index E calculated from the SSM/I data. The rain rate retrieved from this method over the mesoscale area can reproduce the radar-observed rain rate with a correlation coefficient of about 0.85. Furthermore, the monthly total rainfall estimated from this method over that area has an average error of about 15%.

  14. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n−n₀)ln(n−n₀) + b(n−n₀) + c, where a, b and c are constants depending on K and n₀ is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the ⟨ACN⟩ profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K-upon cutting, equilibration and reclosure to a new knot type K'-does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.
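The leading-order growth quoted above, ⟨ACN⟩ ≈ (3/16) n ln n, is easy to evaluate directly; the helper below is purely illustrative of that asymptotic term (it omits the O(n) correction and the knot-type-specific fit).

```python
import math

def acn_leading_term(n: int) -> float:
    """Leading-order mean average crossing number of an
    equilateral random walk of n segments: (3/16) n ln n."""
    return (3.0 / 16.0) * n * math.log(n)

# The n ln n term grows faster than linearly: doubling n more
# than doubles the leading-order crossing number
ratio = acn_leading_term(2000) / acn_leading_term(1000)
```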

  15. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Different ranges of size of interfaces and eddies are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of size. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelling. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop an LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibits regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author)

  16. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings. PMID:25807251
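The 'face-average' representation described above is, at its simplest, a pixel-wise mean over multiple images of the same person. The sketch below assumes pre-aligned images; real pipelines first warp each face to common landmark positions, and the toy data here are illustrative.

```python
import numpy as np

def face_average(images):
    """Pixel-wise average of aligned face images. Averaging keeps
    the stable identity signal and washes out image-specific
    variation (lighting, expression, sensor noise)."""
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0)

# Toy "faces": a shared base pattern plus per-image noise
rng = np.random.default_rng(2)
base = rng.uniform(0, 255, size=(64, 64))
faces = [base + rng.normal(0, 20, size=(64, 64)) for _ in range(20)]
avg = face_average(faces)
```

The average lies much closer to the underlying identity pattern than any single noisy image, which is the intuition behind enrolling an average rather than one snapshot.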

  17. Face averages enhance user recognition for smartphone security.

    Directory of Open Access Journals (Sweden)

    David J Robertson

    Full Text Available Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.

  18. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have a lion's share in multimedia communication, an efficient image compression technique has become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using a moving average histogram, followed by correction of the color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing purposes, but the proposed method has been tested on various image resolutions to give a clear assessment of the technique. The proposed method has been tested on 35 images with varying resolution and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio) and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity. (author)
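The first stage of the pipeline above relies on smoothing an intensity histogram with a moving average. A minimal numpy sketch of that smoothing step follows; the window size and test image are illustrative assumptions, and the RBF correction stage is not shown.

```python
import numpy as np

def smoothed_histogram(image, window=5):
    """256-bin grayscale histogram smoothed with a simple moving
    average (a sketch of the histogram-smoothing first stage)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    kernel = np.ones(window) / window          # uniform moving-average kernel
    return np.convolve(hist, kernel, mode="same")

rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(128, 128))  # toy grayscale image
smooth = smoothed_histogram(image)
```

Smoothing suppresses spurious local peaks in the histogram, which makes it easier to merge nearby intensity levels when reducing the number of levels for compression.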

  19. Loss of lifetime due to radiation exposure-averaging problems.

    Science.gov (United States)

    Raicević, J J; Merkle, M; Ehrhardt, J; Ninković, M M

    1997-04-01

    A new method is presented for assessing the years of life lost (YLL) due to stochastic effects caused by exposure to ionizing radiation. The widely accepted method from the literature uses a ratio of means of two quantities, defining in fact the loss of life as a derived quantity. We start from the real stochastic nature of the quantity (YLL), which enables us to obtain its mean values in a consistent way, using standard averaging procedures based on the corresponding joint probability density functions needed in this problem. Our method is mathematically different and produces lower values of average YLL. In this paper we also find certain similarities with the concept of loss of life expectancy among exposure-induced deaths (LLE-EID), which is accepted in the recently published UNSCEAR report, where the same quantity is defined as years of life lost per radiation-induced case (YLC). Using the same database, the YLL and the LLE-EID are calculated and compared for the simplest exposure case: discrete exposure at age a. It is found that LLE-EID overestimates the YLL, and that the magnitude of this overestimation can reach more than 15%, depending on the effect under consideration. PMID:9119679
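The distinction driving this paper — a ratio of means is generally not the same as a mean of ratios — can be seen with a tiny numeric illustration. The numbers below are purely hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical per-case years lost and remaining lifetimes:
# averaging the ratio differs from taking the ratio of averages.
lost = np.array([30.0, 10.0, 2.0])       # years lost per induced case
lifetime = np.array([40.0, 50.0, 70.0])  # remaining lifetime per case

mean_of_ratios = np.mean(lost / lifetime)        # average of the ratio
ratio_of_means = np.mean(lost) / np.mean(lifetime)  # ratio of the averages
```

With these numbers the two summaries differ (about 0.326 versus 0.2625), so which averaging procedure is chosen changes the reported loss of lifetime — exactly the discrepancy between YLL and LLE-EID discussed above.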

  20. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. 
This solves the problem with longer consecutive periods where the input data
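The BMA predictive PDF described above — a skill-weighted mixture of the ensemble members' predictive densities — can be sketched as follows. The Gaussian member PDFs and fixed weights are illustrative assumptions (Sloughter et al. use other distribution families for wind, and the weights would normally be fitted over a training period).

```python
import numpy as np

def bma_predictive_pdf(x, member_means, member_sd, weights):
    """BMA predictive density: a weighted mixture of the ensemble
    members' (here Gaussian) predictive PDFs."""
    pdfs = (np.exp(-0.5 * ((x[:, None] - member_means) / member_sd) ** 2)
            / (member_sd * np.sqrt(2 * np.pi)))
    return pdfs @ weights   # mixture density on the grid x

x = np.linspace(0.0, 20.0, 401)            # wind-speed grid (m/s)
member_means = np.array([7.0, 9.0, 12.0])  # ensemble member forecasts
weights = np.array([0.5, 0.3, 0.2])        # skill-based BMA weights (illustrative)
density = bma_predictive_pdf(x, member_means, 1.5, weights)
```

Because the weights sum to one and each member PDF integrates to one, the mixture is itself a proper probability density over the forecast quantity.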

  1. Average of peak-to-average ratio (PAR) of IS95 and CDMA2000 systems-single carrier

    OpenAIRE

    Lau, VKN

    2001-01-01

    Peak-to-average ratio (PAR) of a signal is an important parameter. It determines the input backoff factor of the amplifier needed to avoid clipping and spectral regrowth. We analyze and compare the PAR of the downlink signal for IS95 and CDMA2000 single-carrier systems. It is found that the PAR of the transmitted signal depends on the Walsh code assignment. Furthermore, we found that the PAR of the CDMA2000 signal is always lower than that of the IS95 signal. Finally, PAR control by Walsh code selection is ...
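PAR is simply the peak instantaneous power divided by the mean power, usually quoted in dB. The sketch below computes it for a toy multitone signal; the tone frequencies are illustrative and only loosely analogous to summing many Walsh-code channels in a CDMA downlink.

```python
import numpy as np

def peak_to_average_ratio_db(signal):
    """PAR in dB: peak instantaneous power over mean power."""
    power = np.abs(signal) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

# Sum of a few equal-amplitude tones: PAR grows when components
# add coherently at some instant (here all peak together at t = 0)
t = np.arange(1024) / 1024.0
signal = sum(np.cos(2 * np.pi * f * t) for f in (3, 5, 7, 11))
par_db = peak_to_average_ratio_db(signal)
```

A single sinusoid has a PAR of 2 (about 3 dB); the four-tone sum above reaches a peak power of 16 against a mean power of 2, i.e. a PAR of 8 (about 9 dB), showing why code assignment matters for amplifier backoff.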

  2. Estimation of the average correlation coefficient for stratified bivariate data.

    Science.gov (United States)

    Rubenstein, L M; Davis, C S

    1999-03-15

    If the relationship between two ordered categorical variables X and Y is influenced by a third categorical variable with K levels, the Cochran-Mantel-Haenszel (CMH) correlation statistic QC is a useful stratum-adjusted summary statistic for testing the null hypothesis of no association between X and Y. Although motivated by and developed for the case of K I x J contingency tables, the correlation statistic QC is also applicable when X and Y are continuous variables. In this paper we derive a corresponding estimator of the average correlation coefficient for K I x J tables. We also study two estimates of the variance of the average correlation coefficient. The first is a restricted variance based on the variances of the observed cell frequencies under the null hypothesis of no association. The second is an unrestricted variance based on an asymptotic variance derived by Brown and Benedetti. The estimator of the average correlation coefficient works well in tables with balanced and unbalanced margins, for equal and unequal stratum-specific sample sizes, when correlation coefficients are constant over strata, and when correlation coefficients vary across strata. When the correlation coefficients are zero, close to zero, or the cell frequencies are small, the confidence intervals based on the restricted variance are preferred. For larger correlations and larger cell frequencies, the unrestricted confidence intervals give superior performance. We also apply the CMH statistic and proposed estimators to continuous non-normal data sampled from bivariate gamma distributions. We compare our methods to statistics for data sampled from normal distributions. The size and power of the CMH and normal theory statistics are comparable. When the stratum-specific sample sizes are small and the distributions are skewed, the proposed estimator is superior to the normal theory estimator. When the correlation coefficient is zero or close to zero, the restricted confidence intervals
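A simple stand-in for the stratum-adjusted summary described above is a sample-size-weighted average of per-stratum Pearson correlations. This is not the paper's CMH-derived estimator or its variance estimators, just an illustrative sketch with simulated strata.

```python
import numpy as np

def average_correlation(strata):
    """Sample-size-weighted average of per-stratum Pearson
    correlations (an illustrative stand-in, not the CMH estimator)."""
    weights, corrs = [], []
    for x, y in strata:
        corrs.append(np.corrcoef(x, y)[0, 1])  # correlation within stratum
        weights.append(len(x))                 # weight by stratum size
    return np.average(np.array(corrs), weights=np.array(weights, dtype=float))

# Three strata of different sizes, all with population correlation 0.6
rng = np.random.default_rng(4)
strata = []
for n in (50, 80, 120):
    x = rng.normal(size=n)
    y = 0.6 * x + 0.8 * rng.normal(size=n)   # var(y) = 1, corr(x, y) = 0.6
    strata.append((x, y))
rho_bar = average_correlation(strata)
```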

  3. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small (<3 μVrms, 700 Hz-7 kHz), thereby requiring a baseline noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise, with at least two operational transconductance amplifiers in parallel leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating <1.5 μVrms total recording baseline noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the
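The 1/√N scaling quoted above can be illustrated by averaging N independent noisy copies of the same input, which is what paralleling N amplifier front-ends achieves for their uncorrelated input-referred noise. This simulation is a statistical sketch only, not the amplifier circuit, and the noise level is an arbitrary assumption.

```python
import numpy as np

def averaged_noise_rms(n_amps, samples=200_000, noise_rms=2.0, seed=5):
    """RMS of the residual noise after averaging the outputs of
    n_amps parallel channels with independent Gaussian noise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_rms, size=(n_amps, samples))
    return noise.mean(axis=0).std()   # averaging shrinks RMS by 1/sqrt(N)

single = averaged_noise_rms(1)
averaged = averaged_noise_rms(8)      # ideally single / sqrt(8)
```

In practice the improvement is "1/√N or less", as the abstract notes, because noise contributions that are common to all channels (e.g. from the shared source resistance) do not average out.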

  4. Risk-Sensitive and Average Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Karviná: Silesian University in Opava, School of Business Administration in Karviná, 2012 - (Ramík, J.; Stavárek, D.), s. 799-804 ISBN 978-80-7248-779-0. [30th International Conference Mathematical Methods in Economics 2012. Karviná (CZ), 11.09.2012-13.09.2012] R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Institutional support: RVO:67985556 Keywords: dynamic programming * stochastic models * risk analysis and management Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2012/E/Sladky-risk-sensitive and average optimality in markov decision processes .pdf

  5. Data Point Averaging for Computational Fluid Dynamics Data

    Science.gov (United States)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
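The grouping-and-averaging step described above — assign each surface point to a sub-area, then average the parameter over the points in each sub-area — can be sketched in plain Python. The panel names and heat-flux values are illustrative placeholders.

```python
from collections import defaultdict

def sub_area_averages(points):
    """Average a flow parameter over the points falling in each
    sub-area; `points` is a list of (sub_area_id, value) pairs."""
    sums = defaultdict(lambda: [0.0, 0])
    for area_id, value in points:
        sums[area_id][0] += value   # running sum per sub-area
        sums[area_id][1] += 1       # point count per sub-area
    return {a: s / n for a, (s, n) in sums.items()}

# Toy CFD samples tagged with the surface panel they fall on
samples = [("panel_A", 10.0), ("panel_A", 14.0),
           ("panel_B", 3.0), ("panel_B", 5.0), ("panel_B", 7.0)]
averages = sub_area_averages(samples)
```

The resulting per-panel values would then feed the downstream aerodynamic heating analysis in place of the raw point cloud.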

  6. Measurement of the Average $\\phi$ Multiplicity in $B$ Meson Decay

    CERN Document Server

    Aubert, Bernard; Boutigny, D; Gaillard, J M; Hicheur, A; Karyotakis, Yu; Lees, J P; Robbe, P; Tisserand, V; Zghiche, A; Palano, A; Pompili, A; Chen, J C; Qi, N D; Rong, G; Wang, P; Zhu, Y S; Eigen, G; Ofte, I; Stugu, B; Abrams, G S; Borgland, A W; Breon, A B; Brown, D N; Button-Shafer, J; Cahn, R N; Charles, E; Day, C T; Gill, M S; Gritsan, A V; Groysman, Y; Jacobsen, R G; Kadel, R W; Kadyk, J; Kerth, L T; Kolomensky, Yu G; Kukartsev, G; Le Clerc, C; Levi, M E; Lynch, G; Mir, L M; Oddone, P J; Orimoto, T J; Pripstein, M; Roe, N A; Romosan, A; Ronan, Michael T; Shelkov, V G; Telnov, A V; Wenzel, W A; Ford, K; Harrison, T J; Hawkes, C M; Knowles, D J; Morgan, S E; Penny, R C; Watson, A T; Watson, N K; Goetzen, K; Held, T; Koch, H; Lewandowski, B; Pelizaeus, M; Peters, K; Schmücker, H; Steinke, M; Boyd, J T; Chevalier, N; Cottingham, W N; Kelly, M P; Latham, T E; MacKay, C; Wilson, F F; Abe, K; Çuhadar-Dönszelmann, T; Hearty, C; Mattison, T S; McKenna, J A; Thiessen, D; Kyberd, P; McKemey, A K; Teodorescu, L; Blinov, V E; Bukin, A D; Golubev, V B; Ivanchenko, V N; Kravchenko, E A; Onuchin, A P; Serednyakov, S I; Skovpen, Yu I; Solodov, E P; Yushkov, A N; Best, D; Bruinsma, M; Chao, M; Kirkby, D; Lankford, A J; Mandelkern, M A; Mommsen, R K; Röthel, W; Stoker, D P; Buchanan, C; Hartfiel, B L; Gary, J W; Layter, J; Shen, B C; Wang, K; Del Re, D; Hadavand, H K; Hill, E J; MacFarlane, D B; Paar, H P; Rahatlou, S; Sharma, V; Berryhill, J W; Campagnari, C; Dahmes, B; Kuznetsova, N; Levy, S L; Long, O; Lu, A; Mazur, M A; Richman, J D; Rozen, Y; Verkerke, W; Beck, T W; Beringer, J; Eisner, A M; Heusch, C A; Lockman, W S; Schalk, T; Schmitz, R E; Schumm, B A; Seiden, A; Turri, M; Walkowiak, W; Williams, D C; Wilson, M G; Albert, J; Chen, E; Dubois-Felsmann, G P; Dvoretskii, A; Erwin, R J; Hitlin, D G; Narsky, I; Piatenko, T; Porter, F C; Ryd, A; Samuel, A; Yang, S; Jayatilleke, S M; Mancinelli, G; Meadows, B T; Sokoloff, M D; Abe, T; Blanc, F; Bloom, P; Chen, S; Clark, P 
J; Ford, W T; Nauenberg, U; Olivas, A; Rankin, P; Roy, J; Smith, J G; Van Hoek, W C; Zhang, L; Harton, J L; Hu, T; Soffer, A; Toki, W H; Wilson, R J; Zhang, J; Altenburg, D; Brandt, T; Brose, J; Colberg, T; Dickopp, M; Dubitzky, R S; Hauke, A; Lacker, H M; Maly, E; Müller-Pfefferkorn, R; Nogowski, R; Otto, S; Schubert, J; Schubert, Klaus R; Schwierz, R; Spaan, B; Wilden, L; Bernard, D; Bonneaud, G R; Brochard, F; Cohen-Tanugi, J; Grenier, P; Thiebaux, C; Vasileiadis, G; Verderi, M; Khan, A; Lavin, D; Muheim, F; Playfer, S; Swain, J E; Andreotti, M; Azzolini, V; Bettoni, D; Bozzi, C; Calabrese, R; Cibinetto, G; Luppi, E; Negrini, M; Piemontese, L; Sarti, A; Treadwell, E; Anulli, F; Baldini-Ferroli, R; Biasini, M; Calcaterra, A; De Sangro, R; Falciai, D; Finocchiaro, G; Patteri, P; Peruzzi, I M; Piccolo, M; Pioppi, M; Zallo, A; Buzzo, A; Capra, R; Contri, R; Crosetti, G; Lo Vetere, M; Macri, M; Monge, M R; Passaggio, S; Patrignani, C; Robutti, E; Santroni, A; Tosi, S; Bailey, S; Morii, M; Won, E; Bhimji, W; Bowerman, D A; Dauncey, P D; Egede, U; Eschrich, I; Gaillard, J R; Morton, G W; Nash, J A; Sanders, P; Taylor, G P; Grenier, G J; Lee, S J; Mallik, U; Cochran, J; Crawley, H B; Lamsa, J; Meyer, W T; Prell, S; Rosenberg, E I; Yi, J; Davier, M; Grosdidier, G; Höcker, A; Laplace, S; Le Diberder, F R; Lepeltier, V; Lutz, A M; Petersen, T C; Plaszczynski, S; Schune, M H; Tantot, L; Wormser, G; Brigljevic, V; Cheng, C H; Lange, D J; Simani, M C; Wright, D M; Bevan, A J; Coleman, J P; Fry, J R; Gabathuler, Erwin; Gamet, R; Kay, M; Parry, R J; Payne, D J; Sloane, R J; Touramanis, C; Back, J J; Cormack, C M; Harrison, P F; Shorthouse, H W; Vidal, P B; Brown, C L; Cowan, G; Flack, R L; Flächer, H U; George, S; Green, M G; Kurup, A; Marker, C E; McMahon, T R; Ricciardi, S; Salvatore, F; Vaitsas, G; Winter, M A; Brown, D; Davis, C L; Allison, J; Barlow, N R; Barlow, R J; Hart, P A; Hodgkinson, M C; Jackson, F; Lafferty, G D; Lyon, A J; Weatherall, J H; Williams, J C; Farbin, 
A; Jawahery, A; Kovalskyi, D; Lae, C K; Lillard, V; Roberts, D A; Blaylock, G; Dallapiccola, C; Flood, K T; Hertzbach, S S; Kofler, R; Koptchev, V B; Moore, T B; Saremi, S; Stängle, H; Willocq, S; Cowan, R; Sciolla, G; Taylor, F; Yamamoto, R K; Mangeol, D J J; Patel, P M; Robertson, S H; Lazzaro, A; Palombo, F; Bauer, J M; Cremaldi, L M; Eschenburg, V; Godang, R; Kroeger, R; Reidy, J; Sanders, D A; Summers, D J; Zhao, H W; Brunet, S; Cote-Ahern, D; Taras, P; Nicholson, H; Cartaro, C; Cavallo, N; De Nardo, Gallieno; Fabozzi, F; Gatto, C; Lista, L; Paolucci, P; Piccolo, D; Sciacca, C; Baak, M A; Raven, G; LoSecco, J M; Gabriel, T A; Brau, B; Gan, K K; Honscheid, K; Hufnagel, D; Kagan, H; Kass, R; Pulliam, T; Wong, Q K; Brau, J E; Frey, R; Potter, C T; Sinev, N B; Strom, D; Torrence, E; Colecchia, F; Dorigo, A; Galeazzi, F; Margoni, M; Morandin, M; Posocco, M; Rotondo, M; Simonetto, F; Stroili, R; Tiozzo, G; Voci, C; Benayoun, M; Briand, H; Chauveau, J; David, P; La Vaissière, C de; Del Buono, L; Hamon, O; John, M J J; Leruste, P; Ocariz, J; Pivk, M; Roos, L; Stark, J; T'Jampens, S; Therin, G; Manfredi, P F; Re, V; Behera, P K; Gladney, L; Guo, Q H; Panetta, J; Angelini, C; Batignani, G; Bettarini, S; Bondioli, M; Bucci, F; Calderini, G; Carpinelli, M; Del Gamba, V; Forti, F; Giorgi, M A; Lusiani, A; Marchiori, G; Martínez-Vidal, F; Morganti, M; Neri, N; Paoloni, E; Rama, M; Rizzo, G; Sandrelli, F; Walsh, J; Haire, M; Judd, D; Paick, K; Wagoner, D E; Danielson, N; Elmer, P; Lü, C; Miftakov, V; Olsen, J; Smith, A J S; Tanaka, H A; Varnes, E W; Bellini, F; Cavoto, G; Faccini, R; Ferrarotto, F; Ferroni, F; Gaspero, M; Mazzoni, M A; Morganti, S; Pierini, M; Piredda, G; Safai-Tehrani, F; Voena, C; Christ, S; Wagner, G; Waldi, R; Adye, T; De Groot, N; Franek, B J; Geddes, N I; Gopal, G P; Olaiya, E O; Xella, S M; Aleksan, Roy; Emery, S; Gaidot, A; Ganzhur, S F; Giraud, P F; Hamel de Monchenault, G; Monchenault; Kozanecki, Witold; Langer, M; Legendre, M; London, G W; Mayer, 
B; Schott, G; Vasseur, G; Yéche, C; Zito, M; Purohit, M V; Weidemann, A W; Yumiceva, F X; Aston, D; Bartoldus, R; Berger, N; Boyarski, A M; Buchmüller, O L; Convery, M R; Coupal, D P; Dong, D; Dorfan, J; Dujmic, D; Dunwoodie, W M; Field, R C; Glanzman, T; Gowdy, S J; Graugès-Pous, E; Hadig, T; Halyo, V; Hrynóva, T; Innes, W R; Jessop, C P; Kelsey, M H; Kim, P; Kocian, M L; Langenegger, U; Leith, D W G S; Libby, J; Luitz, S; Lüth, V; Lynch, H L; Marsiske, H; Messner, R; Müller, D R; O'Grady, C P; Ozcan, V E; Perazzo, A; Perl, M; Petrak, S; Ratcliff, B N; Roodman, A; Salnikov, A A; Schindler, R H; Schwiening, J; Simi, G; Snyder, A; Soha, A; Stelzer, J; Su, D; Sullivan, M K; Vavra, J; Wagner, S R; Weaver, M; Weinstein, A J R; Wisniewski, W J; Wright, D H; Young, C C; Burchat, Patricia R; Edwards, A J; Meyer, T I; Petersen, B A; Roat, C; Ahmed, M; Ahmed, S; Alam, M S; Ernst, J A; Saeed, M A; Saleem, M; Wappler, F R; Bugg, W; Krishnamurthy, M; Spanier, S M; Eckmann, R; Kim, H; Ritchie, J L; Schwitters, R F; Izen, J M; Kitayama, I; Lou, X C; Ye, S; Bianchi, F; Bóna, M; Gallo, F; Gamba, D; Borean, C; Bosisio, L; Della Ricca, G; Dittongo, S; Grancagnolo, S; Lanceri, L; Poropat, P; Vitale, L; Vuagnin, G; Panvini, R S; Banerjee, Sw; Brown, C M; Fortin, D; Jackson, P D; Kowalewski, R V; Roney, J M; Band, H R; Dasu, S; Datta, M; Eichenbaum, A M; Johnson, J R; Kutter, P E; Li, H; Liu, R; Di Lodovico, F; Mihályi, A; Mohapatra, A K; Pan, Y; Prepost, R; Sekula, S J; Von Wimmersperg-Töller, J H; Wu, J; Wu Sau Lan; Yu, Z; Neal, H

    2003-01-01

    We present a measurement of the average multiplicity of $\phi$ mesons in $B^0$, $\bar{B}^0$ and $B^\pm$ meson decays. Using $17.6\,\mathrm{fb}^{-1}$ of data taken at the $\Upsilon(4S)$ resonance by the BABAR detector at the PEP-II $e^+e^-$ storage ring at the Stanford Linear Accelerator Center, we reconstruct $\phi$ mesons in the $K^+K^-$ decay mode and measure ${\cal B}(B\to \phi X) = (3.41\pm0.06\pm0.12)\%$. This is significantly more precise than any previous measurement.

  7. Average weighted receiving time in recursive weighted Koch networks

    Indian Academy of Sciences (India)

    DAI MEIFENG; YE DANDAN; LI XINGYI; HOU JIE

    2016-06-01

    Motivated by empirical observations of airport networks and metabolic networks, we introduce the model of recursive weighted Koch networks created by the recursive division method. As a fundamental dynamical process, random walks have received considerable interest in the scientific community. We then study random walks on the recursive weighted Koch networks, i.e., the walker, at each step, starting from its current node, moves uniformly to any of its neighbours. In order to study the model more conveniently, we use the recursive division method again to calculate the sum of the mean weighted first-passage times for all nodes to absorption at the trap located in the merging node. It is shown that in a large network, the average weighted receiving time grows sublinearly with the network order.

  8. A note on computing average state occupation times

    Directory of Open Access Journals (Sweden)

    Jan Beyersmann

    2014-05-01

    Full Text Available Objective: This review discusses how biometricians would probably compute or estimate expected waiting times, if they had the data. Methods: Our framework is a time-inhomogeneous Markov multistate model, where all transition hazards are allowed to be time-varying. We assume that the cumulative transition hazards are given. That is, they are either known, as in a simulation, determined by expert guesses, or obtained via some method of statistical estimation. Our basic tool is product integration, which transforms the transition hazards into the matrix of transition probabilities. Product integration enjoys a rich mathematical theory, which has successfully been used to study probabilistic and statistical aspects of multistate models. Our emphasis will be on practical implementation of product integration, which allows us to numerically approximate the transition probabilities. Average state occupation times and other quantities of interest may then be derived from the transition probabilities.
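The product-integral construction can be sketched numerically: on a fine grid, the matrix of transition probabilities is approximated by a product of (I + dA) factors, and average state occupation times follow by integrating the first row along the way. The three-state illness-death model and constant hazards below are illustrative assumptions, not values from the review:

```python
import numpy as np

# Hypothetical illness-death model (0 = healthy, 1 = ill, 2 = dead)
# with constant transition hazards; all values are made up.
h01, h02, h12 = 0.3, 0.1, 0.4

def hazard_increment(t, dt):
    """Matrix of cumulative-hazard increments dA over [t, t+dt].

    With constant hazards the increment does not depend on t, but the
    product-integral loop below works unchanged for time-varying hazards."""
    return np.array([
        [-(h01 + h02) * dt, h01 * dt, h02 * dt],
        [0.0,              -h12 * dt, h12 * dt],
        [0.0,               0.0,      0.0],
    ])

# Product integral: P(0, tau) ~ prod over the grid of (I + dA(u)), while
# occupation times accumulate as sum over u of P_0j(u) du.
tau, steps = 5.0, 5000
dt = tau / steps
P = np.eye(3)
occupation = np.zeros(3)
for k in range(steps):
    occupation += P[0] * dt                  # starting state: healthy (0)
    P = P @ (np.eye(3) + hazard_increment(k * dt, dt))

print(P[0])          # transition probabilities P_0j(0, tau)
print(occupation)    # expected time spent in each state up to tau
```

Because every (I + dA) factor has rows summing to one, the occupation times automatically sum to the horizon tau, which is a useful sanity check on an implementation.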

  9. Time-Average Calculation using FEM in a CANDU Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Eun Hyun; Park, Joo Hwan; Song, Yong Man; Lee Chung Chan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)

    2012-05-15

    To obtain more accurate results and confidence in the calculated reactor physics values, a new code system that is appropriate to the CANDU reactor and has high fidelity is required. The purpose of this study is to understand and analyze the existing code system, WIMS-RFSP. Because the FEM codes used here can easily calculate the multiplication factor, group fluxes and channel powers from WIMS cross section data and GMSH geometrical data, the FEM results are good benchmarks for comparison with RFSP results. Through the comparison process and numerical experiments, the basis of the new code system is expected to become well established. The time-average module is mainly discussed, together with the regular process in RFSP.

  10. Time-Average Calculation using FEM in a CANDU Reactor

    International Nuclear Information System (INIS)

    To obtain more accurate results and confidence in the calculated reactor physics values, a new code system that is appropriate to the CANDU reactor and has high fidelity is required. The purpose of this study is to understand and analyze the existing code system, WIMS-RFSP. Because the FEM codes used here can easily calculate the multiplication factor, group fluxes and channel powers from WIMS cross section data and GMSH geometrical data, the FEM results are good benchmarks for comparison with RFSP results. Through the comparison process and numerical experiments, the basis of the new code system is expected to become well established. The time-average module is mainly discussed, together with the regular process in RFSP.

  11. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use 62 years of daily historical data to fit a mean-reverting Ornstein-Uhlenbeck process describing the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city; the slow convergence of the HDD call price can be observed over 100,000 simulations. The methods of this research provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
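The pipeline in the abstract (mean-reverting temperature model plus Monte Carlo payoff averaging) can be sketched in a few lines. All parameters below, including the seasonal mean, are made-up illustrations, not the fitted Zhengzhou values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Ornstein-Uhlenbeck temperature dynamics (daily Euler step):
# dT = kappa * (theta(t) - T) dt + sigma dW, with a sinusoidal seasonal mean.
kappa, sigma = 0.25, 2.0

def theta(day):
    # illustrative seasonal mean temperature in deg C
    return 14.0 + 11.0 * np.sin(2 * np.pi * (day - 105) / 365.0)

def simulate_hdd(start_day, n_days, n_paths, base=18.0):
    """Monte Carlo paths of cumulative heating degree days (HDD)."""
    T = np.full(n_paths, theta(start_day))
    hdd = np.zeros(n_paths)
    for d in range(n_days):
        day = start_day + d
        T = T + kappa * (theta(day) - T) + sigma * rng.standard_normal(n_paths)
        hdd += np.maximum(base - T, 0.0)   # heating degree days accrue below 18 C
    return hdd

# HDD call with strike K and tick value alpha (discounting omitted for brevity).
K, alpha = 300.0, 1.0
hdd = simulate_hdd(start_day=335, n_days=90, n_paths=100_000)
price = alpha * np.maximum(hdd - K, 0.0).mean()
print(price)
```

Repeating the estimate for increasing path counts shows the slow Monte Carlo convergence the authors mention; variance-reduction techniques would be the usual remedy.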

  12. Quantum gravity unification via transfinite arithmetic and geometrical averaging

    International Nuclear Information System (INIS)

    In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε(∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from these fundamental principles and using the experimental findings as well as the theoretical values of the coupling constants of the electroweak and strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε(∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].

  13. Auto-Parametric Resonance in Cylindrical Shells Using Geometric Averaging

    Science.gov (United States)

    MCROBIE, F. A.; POPOV, A. A.; THOMPSON, J. M. T.

    1999-10-01

    A study is presented of internal auto-parametric instabilities in the free non-linear vibrations of a cylindrical shell, focussed on two modes (a concertina mode and a chequerboard mode) whose non-linear interaction breaks the in-out symmetry of the linear vibration theory: the two-mode interaction leads to preferred vibration patterns with larger deflection inwards than outwards, and at internal resonance, significant energy transfer occurs between the modes. A Rayleigh-Ritz discretization of the von Kármán-Donnell equations leads to the Hamiltonian, and transformation into action-angle co-ordinates followed by averaging readily provides a geometric description of the modal interaction. It was established that the interaction should be most pronounced when there are slightly fewer than 2√N square chequerboard panels circumferentially, where N is the ratio of shell radius to thickness.

  14. Average dimension and magnetic structure of the distant Venus magnetotail

    Science.gov (United States)

    Saunders, M. A.; Russell, C. T.

    1986-01-01

    The first major statistical investigation of the far wake of an unmagnetized object embedded in the solar wind is reported. The investigation is based on Pioneer Venus Orbiter magnetometer data from 70 crossings of the Venus wake at altitudes between 5 and 11 Venus radii during reasonably steady IMF conditions. It is found that Venus has a well-developed tail, flaring with altitude and possibly broader in the direction parallel to the IMF cross-flow component. Tail lobe field polarities and the direction of the cross-tail field are consistent with tail accretion from the solar wind. Average values for the cross-tail field (2 nT) and the distant tail flux (3 MWb) indicate that most distant tail field lines close across the center of the tail and are not rooted in the Venus ionosphere. The findings are illustrated in a three-dimensional schematic.

  15. Average Stopping Set Weight Distribution of Redundant Random Matrix Ensembles

    CERN Document Server

    Wadayama, Tadashi

    2007-01-01

    In this paper, redundant random matrix ensembles (abbreviated as redundant random ensembles) are defined and their stopping set (SS) weight distributions are analyzed. A redundant random ensemble consists of a set of binary matrices with linearly dependent rows. These linearly dependent rows (redundant rows) significantly reduce the number of stopping sets of small size. Upper and lower bounds on the average SS weight distribution of the redundant random ensembles are shown. From these bounds, the trade-off between the number of redundant rows (corresponding to the decoding complexity of BP on the BEC) and the critical exponent of the asymptotic growth rate of the SS weight distribution (corresponding to decoding performance) can be derived. It is shown that, in some cases, a dense matrix with linearly dependent rows yields asymptotically (i.e., in the regime of small erasure probability) better performance than regular LDPC matrices with comparable parameters.
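For small matrices the stopping-set weight distribution can be enumerated by brute force, which makes the definition concrete: a stopping set is a set of columns whose restricted submatrix contains no row of Hamming weight exactly one. The parity-check matrix below is a toy example, not drawn from the ensembles analyzed in the paper:

```python
import numpy as np
from itertools import combinations

# Toy binary parity-check matrix for illustration only.
H = np.array([
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0],
])
n = H.shape[1]

def is_stopping_set(cols):
    """A column set S is a stopping set iff H restricted to S has no row
    of weight exactly 1 (BP decoding over the BEC stalls on such sets)."""
    row_weights = H[:, list(cols)].sum(axis=1)
    return not np.any(row_weights == 1)

# Stopping-set weight enumerator: dist[w] = number of stopping sets of size w.
dist = [0] * (n + 1)
for w in range(1, n + 1):
    for cols in combinations(range(n), w):
        if is_stopping_set(cols):
            dist[w] += 1
print(dist)
```

Adding a redundant (linearly dependent) row to H can only remove stopping sets, never create them, which is the mechanism behind the trade-off the paper quantifies.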

  16. Plasma dynamics and a significant error of macroscopic averaging

    CERN Document Server

    Szalek, M A

    2005-01-01

    The methods of macroscopic averaging used to derive the macroscopic Maxwell equations from electron theory are methodologically incorrect and lead in some cases to a substantial error. For instance, these methods do not take into account the existence of a macroscopic electromagnetic field EB, HB generated by carriers of electric charge moving in a thin layer adjacent to the boundary of the physical region containing these carriers. If this boundary is impenetrable for charged particles, then in its immediate vicinity all carriers are accelerated towards the inside of the region. The existence of the privileged direction of acceleration results in the generation of the macroscopic field EB, HB. The contributions to this field from individual accelerated particles are described with a sufficient accuracy by the Lienard-Wiechert formulas. In some cases the intensity of the field EB, HB is significant not only for deuteron plasma prepared for a controlled thermonuclear fusion reaction but also for electron plasm...

  17. Averaged multivalued solutions and time discretization for conservation laws

    International Nuclear Information System (INIS)

    It is noted that the correct shock solutions can be approximated by averaging in some sense the multivalued solution given by the method of characteristics for the nonlinear scalar conservation law (NSCL). A time discretization for the NSCL equation based on this principle is considered. An equivalent analytical formulation is shown to lead quite easily to a convergence result, and a third formulation is introduced which can be generalized for the systems of conservation laws. Various numerical schemes are constructed from the proposed time discretization. The first family of schemes is obtained by using a spatial grid and projecting the results of the time discretization. Many known schemes are then recognized (mainly schemes by Osher, Roe, and LeVeque). A second way to discretize leads to a particle scheme without space grid, which is very efficient (at least in the scalar case). Finally, a close relationship between the proposed method and the Boltzmann type schemes is established. 14 references
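One way to make the averaging idea concrete is a kinetic (Boltzmann-type) time discretization, in the spirit of the schemes discussed above: transport a multivalued kinetic representation of the solution freely for one time step, then collapse it back to a single-valued profile by averaging over the velocity variable. The sketch below applies this to Burgers' equation with a decreasing step, where the entropy shock should emerge at speed 1/2; the grid sizes and final time are illustrative choices, not taken from the paper:

```python
import numpy as np

# u_t + (u^2 / 2)_x = 0 with u(0, x) = 1 for x < 0, 0 for x > 0.
nx, nv = 400, 100
xs = np.linspace(-2.0, 2.0, nx, endpoint=False)
dx = xs[1] - xs[0]
vs = (np.arange(nv) + 0.5) / nv            # velocities sampling (0, 1)
u = np.where(xs < 0.0, 1.0, 0.0)           # decreasing step -> shock forms

t_final = 1.0
dt = dx                                    # CFL: max |v| = 1
for _ in range(int(round(t_final / dt))):
    # kinetic representation f(x, v) = 1 if 0 < v < u(x), else 0
    f = (vs[None, :] < u[:, None]).astype(float)
    for j, v in enumerate(vs):
        # free transport of each velocity slice (linear interpolation;
        # np.interp clamps at the ends, acting as inflow/outflow boundaries)
        f[:, j] = np.interp(xs - v * dt, xs, f[:, j])
    u = f.mean(axis=1)                     # collapse: average over velocities

# The smeared shock front should now sit near x = t_final / 2 = 0.5.
front = xs[np.argmin(np.abs(u - 0.5))]
print(front)
```

Averaging the freely transported branches only once at the final time would not give the shock; it is the repeated transport-then-collapse at every small time step that makes the scheme converge to the entropy solution.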

  18. Extracting Credible Dependencies for Averaged One-Dependence Estimator Analysis

    Directory of Open Access Journals (Sweden)

    LiMin Wang

    2014-01-01

    Full Text Available Of the numerous proposals to improve the accuracy of naive Bayes (NB by weakening the conditional independence assumption, averaged one-dependence estimator (AODE demonstrates remarkable zero-one loss performance. However, indiscriminate superparent attributes will bring both considerable computational cost and negative effect on classification accuracy. In this paper, to extract the most credible dependencies we present a new type of seminaive Bayesian operation, which selects superparent attributes by building maximum weighted spanning tree and removes highly correlated children attributes by functional dependency and canonical cover analysis. Our extensive experimental comparison on UCI data sets shows that this operation efficiently identifies possible superparent attributes at training time and eliminates redundant children attributes at classification time.

  19. Suicide attempts, platelet monoamine oxidase and the average evoked response

    International Nuclear Information System (INIS)

    The relationship between suicides and suicide attempts and two biological measures, platelet monoamine oxidase levels (MAO) and average evoked response (AER) augmenting was examined in 79 off-medication psychiatric patients and in 68 college student volunteers chosen from the upper and lower deciles of MAO activity levels. In the patient sample, male individuals with low MAO and AER augmenting, a pattern previously associated with bipolar affective disorders, showed a significantly increased incidence of suicide attempts in comparison with either non-augmenting low MAO or high MAO patients. Within the normal volunteer group, all male low MAO probands with a family history of suicide or suicide attempts were AER augmenters themselves. Four completed suicides were found among relatives of low MAO probands whereas no high MAO proband had a relative who committed suicide. These findings suggest that the combination of low platelet MAO activity and AER augmenting may be associated with a possible genetic vulnerability to psychiatric disorders. (author)

  20. Quantitative metagenomic analyses based on average genome size normalization

    DEFF Research Database (Denmark)

    Frank, Jeremy Alexander; Sørensen, Søren Johannes

    2011-01-01

    Over the past quarter-century, microbiologists have used DNA sequence information to aid in the characterization of microbial communities. During the last decade, this has expanded from single genes to microbial community genomics, or metagenomics, in which the gene content of an environment can provide not just a census of the community members but direct information on metabolic capabilities and potential interactions among community members. Here we introduce a method for the quantitative characterization and comparison of microbial communities based on the normalization of metagenomic data by estimating average genome sizes. This normalization can relieve comparative biases introduced by differences in community structure, number of sequencing reads, and sequencing read lengths between different metagenomes. We demonstrate the utility of this approach by comparing metagenomes from two different...
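The normalization idea can be sketched in a few lines: converting raw hit counts into copies per genome equivalent removes the bias toward communities dominated by small genomes. All sample names and numbers below are invented for illustration:

```python
# sample: (reads hitting a gene family, total bases sequenced, average genome
# size in bp). Both hypothetical samples have identical raw hit counts.
samples = {
    "soil":  (150, 100_000_000, 4_500_000),
    "ocean": (150, 100_000_000, 2_000_000),
}

def copies_per_genome_equivalent(hits, total_bases, ags_bp):
    # Genome equivalents = how many "average genomes" worth of sequence the
    # metagenome contains; dividing hit counts by this removes the advantage
    # small-genome communities get in per-read abundance comparisons.
    genome_equivalents = total_bases / ags_bp
    return hits / genome_equivalents

abundance = {name: copies_per_genome_equivalent(*vals)
             for name, vals in samples.items()}
print(abundance)
```

Despite identical raw counts, the large-genome "soil" community ends up with more copies per genome than the small-genome "ocean" one, which is exactly the comparative bias the normalization is meant to expose.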

  1. Spatial Games Based on Pursuing the Highest Average Payoff

    Institute of Scientific and Technical Information of China (English)

    YANG Han-Xin; WANG Bing-Hong; WANG Wen-Xu; RONG Zhi-Hai

    2008-01-01

    We propose a strategy updating mechanism based on pursuing the highest average payoff to investigate the prisoner's dilemma game and the snowdrift game. We apply the new rule to cooperative behaviours on regular, small-world and scale-free networks, and find that spatial structure can maintain cooperation in the prisoner's dilemma game. In the snowdrift game, spatial structure can inhibit or promote cooperative behaviour depending on the payoff parameter. We further study cooperative behaviour on scale-free networks in detail. Interestingly, non-monotonous behaviour is observed on scale-free networks, where middle-degree individuals have the lowest cooperation level. We also find that large-degree individuals change their strategies more frequently in both games.
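A toy version of the update rule on a ring lattice, assuming a standard weak prisoner's dilemma payoff matrix (R = 1, T = b, S = P = 0) and a neighbourhood of an agent plus its two ring neighbours; these choices are illustrative, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(1)

N, b = 100, 1.2                 # agents on a ring, temptation payoff
strat = rng.integers(0, 2, N)   # 1 = cooperate, 0 = defect

def payoff(s_self, s_other):
    # weak prisoner's dilemma: R = 1, T = b, S = 0, P = 0
    if s_self == 1:
        return 1.0 if s_other == 1 else 0.0
    return b if s_other == 1 else 0.0

def step(strat):
    n = len(strat)
    # each agent's total payoff against its two ring neighbours
    pay = np.array([payoff(strat[i], strat[(i - 1) % n]) +
                    payoff(strat[i], strat[(i + 1) % n]) for i in range(n)])
    new = strat.copy()
    for i in range(n):
        nbhd = [(i - 1) % n, i, (i + 1) % n]
        # average payoff earned by each strategy present in the neighbourhood
        avg = {}
        for s in (0, 1):
            players = [j for j in nbhd if strat[j] == s]
            if players:
                avg[s] = pay[players].mean()
        new[i] = max(avg, key=avg.get)   # adopt the highest-average strategy
    return new

for _ in range(50):
    strat = step(strat)
cooperation_level = strat.mean()
print(cooperation_level)
```

The key difference from the usual imitate-the-best rule is that an agent compares the *average* payoff of each strategy in its neighbourhood rather than copying the single richest neighbour.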

  2. Graph-balancing algorithms for average consensus over directed networks

    Science.gov (United States)

    Fan, Yuan; Han, Runzhe; Qiu, Jianbin

    2016-01-01

    Consensus strategies find extensive applications in the coordination of robot groups and the decision-making of agents. Since balanced graphs play an important role in the average consensus problem and many other coordination problems for directed communication networks, this work explores conditions and algorithms for the digraph balancing problem. Based on an analysis of graph cycles, we prove that a digraph can be balanced if and only if the null space of its incidence matrix contains positive vectors. Building on this result, two weight balancing algorithms are proposed, together with conditions for obtaining a unique balanced solution and a set of analytical results on weight balance problems. We then point out the relationship between the weight balance problem and the features of the corresponding underlying Markov chain. Finally, two numerical examples are presented to verify the proposed algorithms.
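The balanceability criterion can be checked numerically for a concrete digraph: build the node-by-edge incidence matrix, extract its null space, and test for a positive vector. The directed 3-cycle below is the simplest balanceable example; for null spaces of dimension greater than one, a feasibility check (e.g. a small linear program) would replace the simple sign test:

```python
import numpy as np

# Directed 3-cycle 0 -> 1 -> 2 -> 0, which is balanceable.
edges = [(0, 1), (1, 2), (2, 0)]
n_nodes = 3

# Node-by-edge incidence matrix: -1 where an edge leaves, +1 where it enters.
B = np.zeros((n_nodes, len(edges)))
for k, (i, j) in enumerate(edges):
    B[i, k] = -1.0
    B[j, k] = +1.0

# Null space basis via SVD: rows of Vt beyond the numerical rank.
_, s, Vt = np.linalg.svd(B)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]

# One-dimensional null space: the test reduces to a single basis vector
# having entries of one sign (a positive edge-weight vector in ker(B)
# means weighted in-flow equals weighted out-flow at every node).
v = null_basis[0]
v = v if v.sum() > 0 else -v
balanceable = bool(np.all(v > 1e-10))
weights = v / v.min() if balanceable else None
print(balanceable, weights)
```

For the cycle the balancing weights come out equal on all three edges, as expected: routing the same weight around the loop balances every node.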

  3. COMPTEL Time-Averaged All-Sky Point Source Analysis

    CERN Document Server

    Collmar, W; Strong, A W; Blömen, H; Hermsen, W; McConnell, M; Ryan, J; Bennett, K

    1999-01-01

    We use all COMPTEL data from the beginning of the CGRO mission (April '91) up to the end of CGRO Cycle 6 (November '97) to carry out all-sky point source analyses in the four standard COMPTEL energy bands for different time periods. We apply our standard maximum-likelihood method to generate all-sky significance and flux maps for point sources by subtracting off the diffuse emission components via model fitting. In addition, fluxes of known sources have been determined for individual CGRO Phases/Cycles to generate light curves with a time resolution of the order of one year. The goal of the analysis is to derive quantitative results -- significances, fluxes, light curves -- for our brightest and most significant sources such as 3C 273, and to search for additional new COMPTEL sources showing up in time-averaged maps only.

  4. Effects of polynomial trends on detrending moving average analysis

    CERN Document Server

    Shao, Ying-Hui; Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2015-01-01

    The detrending moving average (DMA) algorithm is one of the best performing methods to quantify the long-term correlations in nonstationary time series. Many long-term correlated time series in real systems contain various trends. We investigate the effects of polynomial trends on the scaling behaviors and the performances of three widely used DMA methods including backward algorithm (BDMA), centered algorithm (CDMA) and forward algorithm (FDMA). We derive a general framework for polynomial trends and obtain analytical results for constant shifts and linear trends. We find that the behavior of the CDMA method is not influenced by constant shifts. In contrast, linear trends cause a crossover in the CDMA fluctuation functions. We also find that constant shifts and linear trends cause crossovers in the fluctuation functions obtained from the BDMA and FDMA methods. When a crossover exists, the scaling behavior at small scales comes from the intrinsic time series while that at large scales is dominated by the cons...
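A compact sketch of the centered DMA (CDMA) fluctuation function discussed above: detrend the profile of the series with a centered moving average of window n and take the r.m.s. of the residuals; for uncorrelated noise the fitted scaling exponent should come out near 0.5. The window sizes and series length below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def cdma_fluctuation(x, n):
    """Centered DMA fluctuation F(n) for an odd window size n.

    The profile (cumulative sum of the mean-removed series) is detrended
    by its centered moving average; F(n) is the r.m.s. of the residuals."""
    y = np.cumsum(x - x.mean())                  # profile of the series
    half = n // 2
    ma = np.convolve(y, np.ones(n) / n, mode="valid")
    resid = y[half:len(y) - half] - ma           # aligned with the valid part
    return np.sqrt(np.mean(resid ** 2))

# White noise: the DMA scaling exponent alpha in F(n) ~ n^alpha should be ~0.5.
x = rng.standard_normal(50_000)
ns = np.array([11, 21, 41, 81, 161, 321])
F = np.array([cdma_fluctuation(x, n) for n in ns])
alpha = np.polyfit(np.log(ns), np.log(F), 1)[0]
print(alpha)
```

Adding a linear trend to `x` before computing `F` reproduces the crossover behaviour the paper analyzes: small scales still show the intrinsic exponent while large scales are dominated by the trend.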

  5. On the physical effects of consistent cosmological averaging

    CERN Document Server

    Brown, Iain A; Herman, D Leigh; Latta, Joey

    2013-01-01

    We use cosmological perturbation theory to study the backreaction effects of a self-consistent and well-defined cosmological averaging on the dynamics and the evolution of the Universe. Working with a perturbed Friedman-Lemaitre-Robertson-Walker Einstein-de Sitter cosmological solution in a comoving volume-preserving gauge, we compute the expressions for the expansion scalar and deceleration parameter to second order, which we use to characterize the backreaction. We find that the fractional shift in the Hubble parameter with respect to the input background cosmological model is Delta~10^{-5}, which leads to an effective energy density of the order of a few times 10^{-5}. In addition, we find that an appropriate measure of the fractional shift in the deceleration parameter Q is very large.

  6. ORDERED WEIGHTED AVERAGING AGGREGATION METHOD FOR PORTFOLIO SELECTION

    Institute of Scientific and Technical Information of China (English)

    LIU Shancun; QIU Wanhua

    2004-01-01

    Portfolio management is a typical decision making problem under incomplete, sometimes unknown, information. This paper considers the portfolio selection problem under a general setting of uncertain states without probability. The investor's preference is based on his optimum degree about the nature, and his attitude can be described by an Ordered Weighted Averaging Aggregation function. We construct the OWA portfolio selection model, which is a nonlinear programming problem. The problem can be equivalently transformed into a mixed integer linear programming problem. A numerical example is given, and the solutions imply that the investor's strategies depend not only on his optimum degree but also on his preference weight vector. The general game-theoretical portfolio selection method, the max-min method and the competitive ratio method are all special settings of this model.
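The OWA operator itself is easy to state in code: the weights are applied to the *sorted* argument values rather than to fixed positions, so the weight vector encodes the investor's attitude from fully optimistic (max) to fully pessimistic (min). The return vector below is hypothetical:

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: weights apply to the values sorted in
    descending order, not to particular input positions."""
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and np.all(w >= 0)
    return float(np.sort(values)[::-1] @ w)

returns = [0.12, -0.05, 0.03, 0.07]   # hypothetical portfolio outcomes

print(owa(returns, [1, 0, 0, 0]))     # optimistic: picks the best outcome
print(owa(returns, [0, 0, 0, 1]))     # pessimistic: picks the worst outcome
print(owa(returns, [0.25] * 4))       # neutral: the plain arithmetic mean
```

This is why the max-min and related methods mentioned in the abstract are special cases: each corresponds to a particular choice of the OWA weight vector.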

  7. Voter dynamics on an adaptive network with finite average connectivity

    Science.gov (United States)

    Mukhopadhyay, Abhishek; Schmittmann, Beate

    2009-03-01

    We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity (``degree'') of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.

  8. Local versus average field failure criterion in amorphous polymers

    International Nuclear Information System (INIS)

    There is extensive work developing laws that predict yielding in amorphous polymers, ranging from the pioneer experimental work of Sternstein et al (1968 Appl. Polym. Symp. 7 175–99) to the novel molecular dynamics simulations of Jaramillo et al (2012 Phys. Rev. B 85 024114). While atomistic models render damage criteria in terms of local values of the stress and strain fields, experiments provide yield conditions in terms of the average values of these fields. Unfortunately, it is not possible to compare these results due to the differences in time and length scales. Here, we use a micromechanical phase-field damage model with parameters calculated from atomistic simulations to connect atomistic and macroscopic scale experiments. The phase-field damage model is used to study failure in composite materials. We find that the yield criterion should be described in terms of local stress and strains fields and cannot be extended directly from applied stress field values to determine yield conditions. (paper)

  9. Order-Optimal Consensus through Randomized Path Averaging

    CERN Document Server

    Benezit, F; Thiran, P; Vetterli, M

    2008-01-01

    Gossip algorithms have recently received significant attention, mainly because they constitute simple and robust message-passing schemes for distributed information processing over networks. However for many topologies that are realistic for wireless ad-hoc and sensor networks (like grids and random geometric graphs), the standard nearest-neighbor gossip converges as slowly as flooding ($O(n^2)$ messages). A recently proposed algorithm called geographic gossip improves gossip efficiency by a $\\sqrt{n}$ factor, by exploiting geographic information to enable multi-hop long distance communications. In this paper we prove that a variation of geographic gossip that averages along routed paths, improves efficiency by an additional $\\sqrt{n}$ factor and is order optimal ($O(n)$ messages) for grids and random geometric graphs. We develop a general technique (travel agency method) based on Markov chain mixing time inequalities, which can give bounds on the performance of randomized message-passing algorithms operating...
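A toy illustration of averaging along routed paths, the mechanism behind the additional speedup over pairwise gossip: in each round an entire routed path of nodes replaces its values with their common average, which conserves the sum and hence drives all values to the true mean. The ring topology, path length and round count below are illustrative, not the paper's random-geometric-graph setting:

```python
import numpy as np

rng = np.random.default_rng(3)

n, path_len, rounds = 30, 6, 5000
x = rng.standard_normal(n)
true_avg = x.mean()

for _ in range(rounds):
    start = rng.integers(n)
    # a routed path of consecutive nodes on the ring
    path = [(start + k) % n for k in range(path_len)]
    # every node on the path adopts the path average in one shot;
    # this single operation replaces path_len pairwise gossip exchanges
    x[path] = x[path].mean()

consensus_error = np.abs(x - true_avg).max()
print(consensus_error)
```

Because each path average is sum-preserving, the consensus value is exactly the initial mean; the gain in the paper comes from how much faster long routed paths mix information than nearest-neighbour pairs.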

  10. Ultrafast green laser exceeding 400 W of average power

    Science.gov (United States)

    Gronloh, Bastian; Russbueldt, Peter; Jungbluth, Bernd; Hoffmann, Hans-Dieter

    2014-05-01

    We present the world's first laser at 515 nm with sub-picosecond pulses and an average power of 445 W. To realize this beam source we utilize an Yb:YAG-based infrared laser consisting of a fiber MOPA system as a seed source, a rod-type pre-amplifier and two Innoslab power amplifier stages. The infrared system delivers up to 930 W of average power at repetition rates between 10 and 50 MHz and with pulse durations around 800 fs. The beam quality in the infrared is M2 = 1.1 and 1.5 in fast and slow axis. As a frequency doubler we chose a Type-I critically phase-matched Lithium Triborate (LBO) crystal in a single-pass configuration. To preserve the infrared beam quality and pulse duration, the conversion was carefully modeled using numerical calculations. These take dispersion-related and thermal effects into account, thus enabling us to provide precise predictions of the properties of the frequency-doubled beam. To be able to model the influence of thermal dephasing correctly and to choose appropriate crystals accordingly, we performed extensive absorption measurements of all crystals used for conversion experiments. These measurements provide the input data for the thermal FEM analysis and calculation. We used a Photothermal Commonpath Interferometer (PCI) to obtain space-resolved absorption data in the bulk and at the surfaces of the LBO crystals. The absorption was measured at 1030 nm as well as at 515 nm in order to take into account the different absorption behavior at both occurring wavelengths.

  11. The imprint of stratospheric transport on column-averaged methane

    Directory of Open Access Journals (Sweden)

    A. Ostler

    2015-07-01

    Full Text Available Model simulations of column-averaged methane mixing ratios (XCH4) are extensively used for inverse estimates of methane (CH4) emissions from atmospheric measurements. Our study shows that virtually all chemical transport models (CTMs) used for this purpose are affected by stratospheric model-transport errors. We quantify the impact of such errors on the simulation of stratospheric CH4 concentrations via an a posteriori correction method. This approach compares measurements of the mean age of air with modeled age and expresses the difference in terms of a correction to modeled stratospheric CH4 mixing ratios. We find that age differences of up to ~ 3 years lead to a bias in simulated CH4 of up to 250 parts per billion (ppb). Comparisons between model simulations and ground-based XCH4 observations from the Total Carbon Column Observing Network (TCCON) reveal that stratospheric model-transport errors cause biases in XCH4 of ~ 20 ppb in the midlatitudes and ~ 27 ppb in the Arctic region. Improved overall as well as seasonal model-observation agreement in XCH4 suggests that the proposed age-of-air-based stratospheric correction is reasonable. The latitudinal model bias in XCH4 is expected to reduce the accuracy of inverse estimates using satellite-derived XCH4 data. Therefore, we provide an estimate of the impact of stratospheric model-transport errors in terms of CH4 flux errors. Using a one-box approximation, we show that average model errors in stratospheric transport correspond to an overestimation of CH4 emissions by ~ 40 % (~ 7 Tg yr−1) for the Arctic, ~ 5 % (~ 7 Tg yr−1) for the northern, and ~ 60 % (~ 7 Tg yr−1) for the southern hemispheric mid-latitude region. We conclude that improved modeling of stratospheric transport is highly desirable for the joint use with atmospheric XCH4 observations in atmospheric inversions.

  12. Average glandular dose in digital mammography and breast tomosynthesis

    International Nuclear Information System (INIS)

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, while a good correlation (coefficient 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than in 2 D imaging mode for patients examined with the same CBT.

  13. Average glandular dose in digital mammography and breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie

    2012-10-15

    Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, while a good correlation (coefficient 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than in 2 D imaging mode for patients examined with the same CBT.

  14. Perceptual learning in Williams syndrome: looking beyond averages.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline) and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of the poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.

  15. Measurement of the average lifetime of hadrons containing bottom quarks

    International Nuclear Information System (INIS)

    This thesis reports a measurement of the average lifetime of hadrons containing bottom quarks. It is based on data taken with the DELCO detector at the PEP e+e- storage ring at a center of mass energy of 29 GeV. The decays of hadrons containing bottom quarks are tagged in hadronic events by the presence of electrons with a large component of momentum transverse to the event axis. Such electrons are identified in the DELCO detector by an atmospheric pressure Cherenkov counter assisted by a lead/scintillator electromagnetic shower counter. The lifetime measured is 1.17 psec, consistent with previous measurements. This measurement, in conjunction with a limit on the non-charm branching ratio in b-decay obtained by other experiments, can be used to constrain the magnitude of the V_cb element of the Kobayashi-Maskawa matrix to the range 0.042 (+0.005/-0.004 (stat.), +0.004/-0.002 (sys.)), where the errors reflect the uncertainty on tau_b only and not the uncertainties in the calculations which relate the b-lifetime to this matrix element.

  16. Average radiation exposure values for three diagnostic radiographic examinations

    International Nuclear Information System (INIS)

    National surveys of more than 600 facilities that performed chest, lumbosacral spine, and abdominal examinations were conducted as a part of the Nationwide Evaluation of X-Ray Trends program. Radiation exposures were measured with use of a set of standard phantoms developed by the Center for Devices and Radiological Health of the Food and Drug Administration, U.S. Public Health Service. X-ray equipment parameters, film processing data, and data regarding techniques used were collected. There were no differences in overall posteroanterior chest exposures between hospitals and private practices. Seventy-six percent of hospitals used grids, compared with 33% of private practices. In general, hospitals favored a high tube voltage technique, and private facilities favored a low tube voltage technique. Forty-one percent of private practices and 17% of hospitals underprocessed their film. Underprocessing in hospitals increased from 17% in 1984 to 33% in 1987. Average exposure values for these examinations may be useful as guidelines in meeting some of the new requirements of the Joint Commission on Accreditation of Healthcare Organizations

  17. Orientation-averaged optical properties of natural aerosol aggregates

    International Nuclear Information System (INIS)

    Orientation-averaged optical properties of natural aerosol aggregates were analyzed using the discrete dipole approximation (DDA) for effective radii in the range of 0.01 to 2 μm, with corresponding size parameters from 0.1 to 23 at a wavelength of 0.55 μm. Effects of composition and morphology on the optical properties were also investigated. The composition shows little influence on the extinction-efficiency factor in the Mie scattering region, or on the scattering- and backscattering-efficiency factors. The extinction-efficiency factor for size parameters from 9 to 23 and the asymmetry factor for size parameters below 2.3 are almost independent of the natural aerosol composition. The extinction-, absorption-, scattering-, and backscattering-efficiency factors for size parameters below 0.7 are insensitive to the aggregate morphology. The intrinsic symmetry and the discontinuity of the normal direction of the particle surface have obvious effects on the scattering properties for size parameters above 4.6. Furthermore, the scattering phase functions of natural aerosol aggregates are enhanced in the backscattering direction (opposition effect) for large size parameters in the Mie scattering range. (authors)

  18. Average glandular dose conversion coefficients for segmented breast voxel models

    International Nuclear Information System (INIS)

    For 8 voxel models of a compressed breast (4-7 cm thickness and two orientations for each thickness) and 14 radiation qualities commonly used in mammography (HVL 0.28-0.50 mm Al), tissue dose conversion coefficients were calculated for a focus-to-film distance of 60 cm using Monte Carlo methods. The voxel models were segmented from a high-resolution (slice thickness of 1 mm) computed tomography data set of an ablated breast specimen fixated while compressed. The glandular tissue content amounted to 2.6% and was asymmetrically distributed with respect to the midplane of the model. The calculated tissue dose conversion coefficients were compared with recent literature values. Those earlier tissue dose conversion coefficients were also calculated using Monte Carlo methods and breast models of various thicknesses, but those models consisted of homogeneous mixtures of glandular and adipose tissue embedded in 5 mm of pure adipose tissue at both the entrance and exit sides. The results show that the new glandular tissue dose conversion coefficients agree well with the literature values in those cases where the glandular tissue is predominantly concentrated in the upper part of the model. In the opposite case, they were lower by up to 40%. These findings reveal a basic problem in patient dosimetry for mammography: glandular dose is governed not only by the average breast composition, which could be derived from the breast thickness, but also by the local distribution of glandular tissue within the breast, which is not known. (authors)

  19. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Raftery et al., Mon Weather Rev 133:1155-1174, 2005), the authors recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
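    For a Gaussian ensemble, the EM iteration for the BMA weights and variances can be sketched as follows. This is a minimal illustration in the spirit of the Raftery et al. scheme, not the DREAM sampler discussed in the record, and all variable names are ours:

```python
import numpy as np

def bma_em(forecasts, obs, n_iter=200):
    """EM estimation of BMA weights and variances for a Gaussian mixture
    of ensemble members (illustrative sketch).

    forecasts: (n_obs, n_models) member forecasts
    obs: (n_obs,) verifying observations
    Returns (weights, variances)."""
    n, K = forecasts.shape
    w = np.full(K, 1.0 / K)
    var = np.full(K, np.var(obs - forecasts.mean(axis=1)) + 1e-6)
    for _ in range(n_iter):
        # E-step: posterior probability that member k generated each observation
        dens = np.exp(-0.5 * (obs[:, None] - forecasts) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update weights and per-member variances
        w = z.mean(axis=0)
        var = (z * (obs[:, None] - forecasts) ** 2).sum(axis=0) / z.sum(axis=0)
    return w, var
```

    The BMA predictive mean is then simply the weight-averaged forecast; a member that tracks the observations closely should receive most of the weight.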

  20. A simple depth-averaged model for dry granular flow

    Science.gov (United States)

    Hung, Chi-Yao; Stark, Colin P.; Capart, Herve

    Granular flow over an erodible bed is an important phenomenon in both industrial and geophysical settings. Here we develop a depth-averaged theory for dry erosive flows using balance equations for mass, momentum and (crucially) kinetic energy. We assume a linearized GDR-Midi rheology for granular deformation and Coulomb friction along the sidewalls. The theory predicts the kinematic behavior of channelized flows under a variety of conditions, which we test in two sets of experiments: (1) a linear chute, where abrupt changes in tilt drive unsteady uniform flows; (2) a rotating drum, to explore steady non-uniform flow. The theoretical predictions match the experimental results well in all cases, without the need to tune parameters or invoke an ad hoc equation for entrainment at the base of the flow. Here we focus on the drum problem. A dimensionless rotation rate (related to Froude number) characterizes flow geometry and accounts not just for spin rate, drum radius and gravity, but also for grain size, wall friction and channel width. By incorporating Coriolis force the theory can treat behavior under centrifuge-induced enhanced gravity. We identify asymptotic flow regimes at low and high dimensionless rotation rates that exhibit distinct power-law scaling behaviors.

  1. Using Bayesian model averaging to estimate terrestrial evapotranspiration in China

    Science.gov (United States)

    Chen, Yang; Yuan, Wenping; Xia, Jiangzhou; Fisher, Joshua B.; Dong, Wenjie; Zhang, Xiaotong; Liang, Shunlin; Ye, Aizhong; Cai, Wenwen; Feng, Jinming

    2015-09-01

    Evapotranspiration (ET) is critical to terrestrial ecosystems as it links the water, carbon, and surface energy exchanges. Numerous models have been developed to estimate ET, but large uncertainties remain. In this study, a Bayesian Model Averaging (BMA) method was used to merge eight satellite-based models, including five empirical and three process-based models, to improve the accuracy of ET estimates. At twenty-three eddy covariance flux towers, we examined the performance of all possible combinations of the eight models and found that an ensemble of four models (BMA_Best) performed best. The BMA_Best method outperformed the best of the eight individual models: its Kling-Gupta efficiency (KGE) was 4% higher than that of the model with the highest KGE, and its RMSE was 4% lower. Although the correlation coefficient of BMA_Best is slightly lower than that of the best single model, its bias is the smallest among the eight models. Moreover, a validation based on the water balance principle at the river basin scale indicated that the BMA_Best estimates can explain 86% of the variation. In general, the results show that BMA estimates will be very useful for future studies characterizing regional water availability over long time series.
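    The Kling-Gupta efficiency used above to rank candidate ensembles follows a standard definition combining correlation, variability ratio, and bias ratio; a minimal sketch (not the authors' code):

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE in the common Gupta et al. (2009) form; 1 indicates a perfect match.

    r     : linear correlation between simulation and observation
    alpha : ratio of standard deviations (variability error)
    beta  : ratio of means (bias error)"""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

    A simulation identical to the observations scores exactly 1; systematic over-prediction drives both alpha and beta away from 1 and the score down.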

  2. Average accelerator simulation Truebeam using phase space in IAEA format

    International Nuclear Information System (INIS)

    In this paper, a radiation transport simulation code based on the Monte Carlo technique is used to model a linear accelerator for radiotherapy treatment. This work is the initial step of future proposals aiming to study several radiotherapy treatments through computational modeling, in cooperation with the institutions UESC, IPEN, UFRJ and COI. The chosen simulation code is GATE/Geant4. The modeled accelerator is the Varian TrueBeam. The geometric modeling was based on technical manuals, and the radiation sources on phase-space files for photons provided by the manufacturer in the IAEA (International Atomic Energy Agency) format. The simulations were carried out under the same conditions as the experimental measurements. Photon beams of 6 MV with a 10 x 10 cm field incident on a water phantom were studied. For validation, simulated depth-dose curves and lateral profiles at different depths were compared with experimental data. The final model of this accelerator will be used in future work involving treatments of real patients. (author)

  3. An evaluation of the average DMF in hemodialyzed patients

    Directory of Open Access Journals (Sweden)

    Arami S. Assistant Professor

    2003-07-01

    Statement of Problem: Rapid increases in the population of hemodialyzed patients require dentists to acquire a complete understanding of the special therapeutic considerations for such patients. Purpose: The goal of this research was to study the DMF index in hemodialyzed patients aged 12-20 years in the city of Tehran. Materials and Methods: In this cross-sectional, analytic-descriptive study, 50 kidney patients (27 males and 23 females, aged 12-20 years) were selected. They had been referred for hemodialysis to one of the following hospitals: Imam Khomeini, Children Medical Center Fayyazbakhsh, Haft-e-Tir, Ashrafi Esfahani, Labafinejad and Hasheminejad. The data, based on clinical examinations, patients' answers, patients' medical files, and parents' replies, were collected and analyzed by the Chi-square test. Results: The average DMF for the patients under study was 2.46; compared with normal subjects in the population, no significant difference was observed. Factors such as sex, mother's education, oral hygiene and the number of daily brushings did not show any statistically significant association with this index. The results also showed a 38% prevalence of severe gingivitis and 32% of moderate gingivitis. Conclusion: This limited study emphasizes the necessity of proper preventive methods and of improving patients' and parents' knowledge about oral and dental health.

  4. Resolution improvement by 3D particle averaging in localization microscopy

    International Nuclear Information System (INIS)

    Inspired by recent developments in localization microscopy that applied averaging of identical particles in 2D to increase the resolution even further, we discuss considerations for alignment (registration) methods for particles in general and in 3D in particular. We detail that traditional techniques for particle registration from cryo-electron microscopy based on cross-correlation are not suitable, as the underlying image formation process is fundamentally different. We argue that only localizations, i.e. a set of coordinates with associated uncertainties, are recorded, not a continuous intensity distribution. We present a method that exploits this fact and that is inspired by the field of statistical pattern recognition. In particular, we suggest using an adapted version of the Bhattacharyya distance as a merit function for registration. We evaluate the method in simulations and demonstrate it on 3D super-resolution data of Alexa 647-labelled Nup133 protein in the nuclear pore complex of HeLa cells. From the simulations we find indications that for successful registration the localization uncertainty must be smaller than the distance between labeling sites on a particle. These indications are supported by theoretical considerations concerning the attainable resolution in localization microscopy and its scaling behavior as a function of labeling density and localization precision. (paper)
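    As an illustration of a Bhattacharyya-type merit function for localization data, the sketch below treats each localization as an isotropic Gaussian with its reported uncertainty and sums the pairwise Gaussian Bhattacharyya overlaps between two particles. This is our own simplified stand-in; the paper's adapted distance and its optimizer differ in detail:

```python
import numpy as np

def bhattacharyya_merit(locs_a, sig_a, locs_b, sig_b):
    """Registration merit: sum of pairwise Bhattacharyya coefficients between
    isotropic Gaussians (one per localization, variance = uncertainty^2)."""
    d = locs_a.shape[1]                      # dimensionality (2 or 3)
    diff2 = ((locs_a[:, None, :] - locs_b[None, :, :]) ** 2).sum(-1)
    v = sig_a[:, None] ** 2 + sig_b[None, :] ** 2
    pref = (2 * sig_a[:, None] * sig_b[None, :] / v) ** (d / 2)
    return (pref * np.exp(-diff2 / (4 * v))).sum()

def register_shift(locs_a, sig_a, locs_b, sig_b, shifts):
    """Pick, from a list of candidate translations of set B, the one that
    maximises the merit (a stand-in for a proper optimizer)."""
    scores = [bhattacharyya_merit(locs_a, sig_a, locs_b + s, sig_b)
              for s in shifts]
    return shifts[int(np.argmax(scores))]
```

    Because each localization carries its own uncertainty, the merit is naturally sharper for well-localized emitters, which is the point of working with coordinates rather than rendered intensity images.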

  5. Potential of high-average-power solid state lasers

    International Nuclear Information System (INIS)

    We discuss the possibility of extending solid state laser technology to high average power and of improving the efficiency of such lasers sufficiently to make them reasonable candidates for a number of demanding applications. A variety of new design concepts, materials, and techniques have emerged over the past decade that, collectively, suggest that the traditional technical limitations on power (a few hundred watts or less) and efficiency (less than 1%) can be removed. The core idea is configuring the laser medium in relatively thin, large-area plates, rather than using the traditional low-aspect-ratio rods or blocks. This presents a large surface area for cooling, and assures that deposited heat is relatively close to a cooled surface. It also minimizes the laser volume distorted by edge effects. The feasibility of such configurations is supported by recent developments in materials, fabrication processes, and optical pumps. Two types of lasers can, in principle, utilize this sheet-like gain configuration in such a way that phase and gain profiles are uniformly sampled and, to first order, yield high-quality (undistorted) beams. The zig-zag laser does this with a single plate, and should be capable of power levels up to several kilowatts. The disk laser is designed around a large number of plates, and should be capable of scaling to arbitrarily high power levels

  6. The Kernel Adaptive Autoregressive-Moving-Average Algorithm.

    Science.gov (United States)

    Li, Kan; Príncipe, José C

    2016-02-01

    In this paper, we present a novel kernel adaptive recurrent filtering algorithm based on the autoregressive-moving-average (ARMA) model, which is trained with recurrent stochastic gradient descent in the reproducing kernel Hilbert spaces. This kernelized recurrent system, the kernel adaptive ARMA (KAARMA) algorithm, brings together the theories of adaptive signal processing and recurrent neural networks (RNNs), extending the current theory of kernel adaptive filtering (KAF) using the representer theorem to include feedback. Compared with classical feedforward KAF methods, the KAARMA algorithm provides general nonlinear solutions for complex dynamical systems in a state-space representation, with a deferred teacher signal, by propagating forward the hidden states. We demonstrate its capabilities to provide exact solutions with compact structures by solving a set of benchmark nondeterministic polynomial-complete problems involving grammatical inference. Simulation results show that the KAARMA algorithm outperforms equivalent input-space recurrent architectures using first- and second-order RNNs, demonstrating its potential as an effective learning solution for the identification and synthesis of deterministic finite automata. PMID:25935049

  7. Statistical properties of the gyro-averaged standard map

    Science.gov (United States)

    da Fonseca, Julio D.; Sokolov, Igor M.; Del-Castillo-Negrete, Diego; Caldas, Ibere L.

    2015-11-01

    A statistical study of the gyro-averaged standard map (GSM) is presented. The GSM is an area-preserving map model proposed as a simplified description of finite Larmor radius (FLR) effects on ExB chaotic transport in magnetized plasmas with zonal flows perturbed by drift waves. The GSM's effective perturbation parameter, gamma, is proportional to the zero-order Bessel function of the particle's Larmor radius. In the limit of zero Larmor radius, the GSM reduces to the standard Chirikov-Taylor map. We consider plasmas in thermal equilibrium and assume a Larmor radius probability density function (pdf) resulting from a Maxwell-Boltzmann distribution. Since the particles in general have different Larmor radii, each orbit is computed using a different perturbation parameter, gamma. We present analytical and numerical computations of the pdf of gamma for a Maxwellian distribution. We also compute the pdf of global chaos, which gives the probability that a particle with a given Larmor radius exhibits global chaos, i.e. the probability that Kolmogorov-Arnold-Moser (KAM) transport barriers do not exist.
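    The map itself is compact enough to sketch. Here gamma is the Chirikov parameter K rescaled by the zero-order Bessel function of the Larmor radius rho; J0 is evaluated from its integral representation to avoid a SciPy dependency, and the modulo-2π convention and parameter values are our own:

```python
import numpy as np

def bessel_j0(x, n=4001):
    """J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt, via the midpoint rule."""
    t = (np.arange(n) + 0.5) * np.pi / n
    return float(np.mean(np.cos(x * np.sin(t))))

def gyro_averaged_standard_map(theta, I, K, rho, n_steps):
    """Iterate the gyro-averaged standard map with effective perturbation
    gamma = K * J0(rho); rho = 0 recovers the Chirikov-Taylor map (J0(0)=1)."""
    gamma = K * bessel_j0(rho)
    for _ in range(n_steps):
        I = (I + gamma * np.sin(theta)) % (2 * np.pi)
        theta = (theta + I) % (2 * np.pi)
    return theta, I, gamma
```

    When rho sits at a zero of J0, gamma vanishes and the orbit is integrable regardless of K, which is why the Larmor-radius pdf directly shapes the pdf of gamma and hence the probability of global chaos.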

  8. Spatially-Averaged Diffusivities for Pollutant Transport in Vegetated Flows

    Science.gov (United States)

    Huang, Jun; Zhang, Xiaofeng; Chua, Vivien P.

    2016-06-01

    Vegetation in wetlands can create complicated flow patterns and may provide many environmental benefits including water purification, flood protection and shoreline stabilization. The interaction between vegetation and flow has significant impacts on the transport of pollutants, nutrients and sediments. In this paper, we investigate pollutant transport in vegetated flows using the Delft3D-FLOW hydrodynamic software. The model simulates the transport of pollutants with the continuous release of a passive tracer at mid-depth and mid-width in the region where the flow is fully developed. The theoretical Gaussian plume profile is fitted to experimental data, and the lateral and vertical diffusivities are computed using the least squares method. In previous tracer studies conducted in the laboratory, the measurements were obtained at a single cross-section as experimental data is typically collected at one location. These diffusivities are then used to represent spatially-averaged values. With the numerical model, sensitivity analysis of lateral and vertical diffusivities along the longitudinal direction was performed at 8 cross-sections. Our results show that the lateral and vertical diffusivities increase with longitudinal distance from the injection point, due to the larger size of the dye cloud further downstream. A new method is proposed to compute diffusivities using a global minimum least squares method, which provides a more reliable estimate than the values obtained using the conventional method.
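    The conventional single-section estimate mentioned above amounts to fitting a Gaussian plume profile to one lateral transect. A minimal sketch of that step (log-space least squares; the variable names and the dispersion relation s^2 = 2*Ey*x/U are our framing, not the paper's code):

```python
import numpy as np

def lateral_diffusivity(y, conc, U, x):
    """Estimate lateral diffusivity Ey from one cross-section.

    Fits C(y) = A * exp(-(y - y0)^2 / (2 s^2)) by least squares in log space
    (log C is quadratic in y), then converts the plume variance to a
    diffusivity via s^2 = 2 * Ey * x / U, with U the mean velocity and x
    the distance downstream of the injection point."""
    a, b, c = np.polyfit(y, np.log(conc), 2)  # log C = a y^2 + b y + c
    s2 = -1.0 / (2.0 * a)                     # plume variance from curvature
    return s2 * U / (2.0 * x)
```

    Fitting several sections independently, as in the record, would show the apparent diffusivity rising downstream; the paper's global least-squares variant instead fits all sections jointly.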

  9. MHD stability of torsatrons using the average method

    International Nuclear Information System (INIS)

    The stability of torsatrons is studied using the average method, or stellarator expansion. Attention is focused upon the Advanced Toroidal Fusion Device (ATF), an l = 2, 12 field period, moderate aspect ratio configuration which, through a combination of shear and toroidally induced magnetic well, is stable to ideal modes. Using the vertical field (VF) coil system of ATF it is possible to enhance this stability by shaping the plasma to control the rotational transform. The VF coils are also useful tools for exploring the stability boundaries of ATF. By shifting the plasma inward along the major radius, the magnetic well can be removed, leading to three types of long wavelength instabilities: (1) A free boundary ''edge mode'' occurs when the rotational transform at the plasma edge is just less than unity. This mode is stabilized by the placement of a conducting wall at 1.5 times the plasma radius. (2) A free boundary global kink mode is observed at high β. When either β is lowered or a conducting wall is placed at the plasma boundary, the global mode is suppressed, and (3) an interchange mode is observed instead. For this interchange mode, calculations of the second, third, etc., most unstable modes are used to understand the nature of the degeneracy breaking induced by toroidal effects. Thus, the ATF configuration is well chosen for the study of torsatron stability limits

  10. Analytic continuation by averaging Padé approximants

    Science.gov (United States)

    Schött, Johan; Locht, Inka L. M.; Lundin, Elin; Grânäs, Oscar; Eriksson, Olle; Di Marco, Igor

    2016-02-01

    The ill-posed analytic continuation problem for Green's functions and self-energies is investigated by revisiting the Padé approximants technique. We propose to remedy the well-known problems of the Padé approximants by performing an average of several continuations, obtained by varying the number of fitted input points and Padé coefficients independently. The suggested approach is then applied to several test cases, including Sm and Pr atomic self-energies, the Green's functions of the Hubbard model for a Bethe lattice and of the Haldane model for a nanoribbon, as well as two special test functions. The sensitivity to numerical noise and the dependence on the precision of the numerical libraries are analyzed in detail. The present approach is compared to a number of other techniques, i.e., the nonnegative least-squares method, the nonnegative Tikhonov method, and the maximum entropy method, and is shown to perform well for the chosen test cases. This conclusion holds even when the noise on the input data is increased to reach values typical for quantum Monte Carlo simulations. The ability of the algorithm to resolve fine structures is finally illustrated for two relevant test functions.
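    The central idea, averaging several independently fitted Padé continuations, can be illustrated with a plain least-squares fit of the rational-function coefficients. This is a simplified stand-in for the authors' procedure; the orders and sample points below are illustrative only:

```python
import numpy as np

def pade_fit(z, f, L, M):
    """Least-squares Padé [L/M]: find P(z)/Q(z) with Q(0) = 1 fitted to
    samples f(z). Rearranging P(z) - f(z) Q(z) = 0 with q0 = 1 gives a
    linear system in the remaining coefficients."""
    cols = [z ** j for j in range(L + 1)] + [-f * z ** k for k in range(1, M + 1)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    p = coef[:L + 1]
    q = np.concatenate(([1.0], coef[L + 1:]))
    # np.polyval expects highest-degree coefficient first
    return lambda w: np.polyval(p[::-1], w) / np.polyval(q[::-1], w)

def averaged_continuation(z, f, w, orders):
    """Average several Padé continuations (varying [L/M]) evaluated at w."""
    return np.mean([pade_fit(z, f, L, M)(w) for L, M in orders], axis=0)
```

    In practice one would also vary the number of input points, work in complex arithmetic on the Matsubara axis, and average only those continuations that stay smooth on the real axis, which is what suppresses the sensitivity to noise.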

  11. Mean link versus average plaquette tadpoles in lattice NRQCD

    International Nuclear Information System (INIS)

    We compare mean-link and average plaquette tadpole renormalization schemes in the context of the quarkonium hyperfine splittings in lattice NRQCD. Simulations are done for the three quarkonium systems cc-bar, bc-bar, and bb-bar. The hyperfine splittings are computed both at leading and at next-to-leading order in the relativistic expansion. Results are obtained at a large number of lattice spacings. A number of features emerge, all of which favor tadpole renormalization using mean links. This includes much better scaling of the hyperfine splittings in the three quarkonium systems. We also find that relativistic corrections to the spin splittings are smaller with mean-link tadpoles, particularly for the cc-bar and bc-bar systems. We also see signs of a breakdown in the NRQCD expansion when the bare quark mass falls below about one in lattice units (with the bare quark masses turning out to be much larger with mean-link tadpoles)

  12. Perceptual Learning in Williams Syndrome: Looking Beyond Averages

    Science.gov (United States)

    Gervan, Patricia; Gombos, Ferenc; Kovacs, Ilona

    2012-01-01

    Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e. baseline) and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa, poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning. PMID:22792262

  13. Model characteristics of average skill boxers’ competition functioning

    Directory of Open Access Journals (Sweden)

    Martsiv V.P.

    2015-08-01

    Purpose: to analyze the competition functioning of average-skill boxers. Material: 28 fights of student boxers were analyzed. The following coefficients were determined: effectiveness of punches and reliability of defense. The fights were conducted under the formula: 3 rounds of 3 minutes each. Results: model characteristics of boxers at the stage of specialized basic training were worked out, and correlations between indicators of specialized and general exercises were determined. It was established that the sportsmanship of boxers manifests itself as an increase in punch density over a fight, and that an increase in the coefficient of punch effectiveness results in an expansion of the arsenal of technical-tactical actions. The importance of accounting for standard specialized loads was confirmed. Conclusions: we recommend training means to be applied at this stage of training. On the basis of our previous research we offer recommendations on the complex assessment of student sportsmen's skillfulness, and show approaches to improving different aspects of sportsmen's fitness.

  14. Tortuosity and the Averaging of Microvelocity Fields in Poroelasticity.

    Science.gov (United States)

    Souzanchi, M F; Cardoso, L; Cowin, S C

    2013-03-01

    The relationship between the macro- and microvelocity fields in a poroelastic representative volume element (RVE) has not been fully investigated. This relationship is considered to be a function of the tortuosity: a quantitative measure of the effect of the deviation of the pore fluid streamlines from straight (not tortuous) paths in fluid-saturated porous media. There are different expressions for tortuosity based on the deviation from straight pores, harmonic wave excitation, or a kinetic energy loss analysis. The objective of the work presented is to determine the best expression for the tortuosity of a multiply interconnected open pore architecture in an anisotropic porous medium. The procedures for averaging the pore microvelocity over the RVE of poroelastic media by Coussy and by Biot were reviewed as part of this study, and the significant connection between these two procedures was established. The Coussy approach, based on kinetic energy loss in the pore fluid, was identified as the most attractive expression for the tortuosity of porous media in terms of pore fluid viscosity, porosity, and the pore architecture. The fabric tensor, a 3D measure of the architecture of the pore structure, was introduced in the expression of the tortuosity tensor for anisotropic porous media. Practical considerations for the measurement of the key parameters in the models of Coussy and Biot are discussed. In this study, we used cancellous bone as an example of interconnected pores and as a motivator, but the results achieved are much more general and have a far broader application than just to cancellous bone. PMID:24891725

  15. A spatially-averaged mathematical model of kidney branching morphogenesis

    KAUST Repository

    Zubkov, V.S.

    2015-08-01

    © 2015 Published by Elsevier Ltd. Kidney development is initiated by the outgrowth of an epithelial ureteric bud into a population of mesenchymal cells. Reciprocal morphogenetic responses between these two populations generate a highly branched epithelial ureteric tree, with the mesenchyme differentiating into nephrons, the functional units of the kidney. While we understand some of the mechanisms involved, current knowledge fails to explain the variability of organ sizes and nephron endowment in mice and humans. Here we present a spatially-averaged mathematical model of kidney morphogenesis in which the growth of the two key populations is described by a system of time-dependent ordinary differential equations. We assume that branching is symmetric and is invoked when the number of epithelial cells per tip reaches a threshold value. This process continues until the number of mesenchymal cells falls below a critical value that triggers the cessation of branching. The mathematical model and its predictions are validated against experimentally quantified C57Bl6 mouse embryonic kidneys. Numerical simulations are performed to determine how the final number of branches changes as key system parameters are varied (such as the growth rates of the tip and mesenchymal cells, or the rates at which cells exit each population). Our results predict that the developing kidney responds differently to the loss of cap and tip cells. They also indicate that the final number of kidney branches is less sensitive to changes in the growth rate of the ureteric tip cells than to changes in the growth rate of the mesenchymal cells. By inference, increasing the growth rate of mesenchymal cells should maximise branch number. Our model also provides a framework for predicting the branching outcome when ureteric tip or mesenchyme cells change behaviour in response to different genetic or environmental developmental stresses.
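    The branch-until-depletion logic described above can be sketched as a toy time-stepped model. All rates, thresholds and the simple exponential kinetics below are illustrative placeholders, not the paper's fitted C57Bl6 values:

```python
def simulate_branching(e0=50.0, m0=1e4, r_e=0.08, r_m=0.05,
                       e_branch=100.0, m_stop=1e3, dt=0.1, t_max=400.0):
    """Toy spatially-averaged branching model (illustrative parameters).

    Epithelial cells per tip (e) grow exponentially; when e reaches the
    threshold e_branch the tree branches symmetrically (tips double, cells
    per tip halve). The mesenchyme (m) is consumed, and branching stops once
    m falls below the critical size m_stop. Returns (tips, mesenchyme)."""
    tips, e, m = 1, e0, m0
    t = 0.0
    while t < t_max and m > m_stop:
        e += r_e * e * dt          # forward-Euler growth of cells per tip
        m -= r_m * m * dt          # net consumption of mesenchyme
        if e >= e_branch:          # symmetric branching event
            tips *= 2
            e /= 2.0
        t += dt
    return tips, m
```

    Because branching is symmetric, the tip count stays a power of two, and raising the mesenchymal pool or its persistence directly extends the branching window, in line with the model's sensitivity result.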

  16. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This motivates the development of upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to a general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.

  17. Average and local structure of selected metal deuterides

    Energy Technology Data Exchange (ETDEWEB)

    Soerby, Magnus H.

    2005-07-01

    deuterides at 1 bar D2 and elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms occupy chiefly three types of tetrahedral interstitial sites; two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is predominantly ruled by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, being the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å from any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres.
Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4

  18. LENOS and BELINA Facilities for Measuring Maxwellian Averaged Cross Section

    International Nuclear Information System (INIS)

    Full text: The Laboratori Nazionali di Legnaro is one of the 5 laboratories of Istituto Nazionale di Fisica Nucleare (INFN), Italy; the one devoted to nuclear physics. The Lab has 4 accelerators: a 14 MV tandem, a 7 MV Van de Graaff, a 2 MV electrostatic and a superconductive linac. The electrostatic accelerators are able to accelerate ions up to Li, while the tandem and linac can accelerate heavy ions up to Ni. The lower-energy machines, namely the 7 MV (CN accelerator) and the 2 MV (AN2000 accelerator), are mostly devoted to nuclear physics applications. At the CN accelerator, the neutron beam line for astrophysics (BELINA) is under development. The BELINA beam line will be devoted to the measurement of Maxwellian averaged cross section at several stellar temperatures, using a new method to generate the Maxwell-Boltzmann neutron spectra developed within the framework of the LENOS project. BELINA well characterized neutron spectra can also be used for validation of evaluated data as requested by the IRDFF CRP. The proposed new method deals with the shaping of the proton beam energy distribution by inserting a thin layer of material between the beam line and the lithium target. The thickness of the foil, the foil material and the proton energy are chosen in order to produce quasi-Gaussian spectra of protons that, impinging directly on the lithium target, produce the desired MBNS (Maxwell Boltzmann Neutron Spectra). The lithium target is a low mass target cooled by a thin layer of forced water around the beam spot, necessary to sustain the high specific power delivered to the target in CW (activation measurements). The LENOS method is able to produce MBNS with tuneable neutron energy ranging from 25 to 60 keV with high accuracy. Higher neutron energies up to 100 keV can be achieved if some deviation from MBNS is accepted. Recently, we have developed an upgrade of the pulsing system of the CN accelerator. The system has been tested already and works well

  19. Average and local structure of selected metal deuterides

    International Nuclear Information System (INIS)

    elevated temperatures (373-573 K) is presented in Paper 1. Deuterium atoms occupy chiefly three types of tetrahedral interstitial sites; two coordinated by 4 Zr atoms and one coordinated by 3 Zr and 1 Ni atoms. The site preference is predominantly ruled by sample composition and less by temperature. On the other hand, the spatial deuterium distribution among the preferred sites is strongly temperature dependent as the long-range correlations break down on heating. The sample is fully decomposed into tetragonal ZrD2 and Zr7Ni10 at 873 K. Th2AlD4 was the only metal deuteride with reported D-D separation substantially below 2 Å (1.79 Å) prior to the discovery of RENiInD1.33. However, being the first ternary deuteride ever studied by PND, the original structure solution was based on very low-resolution data. The present reinvestigation (Paper 2) shows that the site preference was correctly determined, but the deuterium atoms are slightly shifted compared to the earlier report, now yielding acceptable interatomic separations. Solely Th4 tetrahedra are occupied in various Th2Al deuterides. Th8Al4D11 (Th2AlD2.75) takes a superstructure with tripled c-axis due to deuterium ordering. Th2AlD2.3 is disordered and the average distance between partly occupied sites appears as just 1.55 Å in Rietveld refinements. However, short-range order is expected to prevent D-D distances under 2 Å. Paper 3 presents the first Reverse Monte Carlo (RMC) study of a metal deuteride. RMC is used in combination with total neutron scattering to model short-range deuterium correlations in disordered c-VD0.77. A practically complete blocking of interstitial sites closer than 2 Å from any occupied deuterium site is observed. The short-range correlations resemble those of the fully ordered low temperature phase c-VD0.75 at length scales up to about 3 Å, i.e. for the first two coordination spheres.
Paper 4 concerns RMC modelling of short-range deuterium correlations in ZrCr2D4 at ambient and low

  20. The Dow Jones Industrial Average: Issues of Downward Bias and Increased Volatility

    OpenAIRE

    Mueller, Paul A.; Raj A. Padmaraj; Ralph C. St. John

    1999-01-01

    Does the method of divisor adjustment used for stock splits in the Dow Jones Industrial Average (DJIA) cause a downward bias in the average's level and does this method of adjustment cause increased volatility in the average? To investigate these issues, two averages are created using DJIA stocks. One average is adjusted for stock splits through adjustment in the divisor. This method is identical to the DJIA method of adjustment. The other average makes adjustment for stock splits by adjustin...

  1. Accurate prediction of unsteady and time-averaged pressure loads using a hybrid Reynolds-Averaged/large-eddy simulation technique

    Science.gov (United States)

    Bozinoski, Radoslav

    Significant research has been performed over the last several years on understanding the unsteady aerodynamics of various fluid flows. Much of this work has focused on quantifying the unsteady, three-dimensional flow field effects which have proven vital to the accurate prediction of many fluid and aerodynamic problems. Up until recently, engineers have predominantly relied on steady-state simulations to analyze the inherently three-dimensional flow structures that are prevalent in many of today's "real-world" problems. Increases in computational capacity and the development of efficient numerical methods can change this and allow for the solution of the unsteady Reynolds-Averaged Navier-Stokes (RANS) equations for practical three-dimensional aerodynamic applications. An integral part of this capability has been the performance and accuracy of the turbulence models coupled with advanced parallel computing techniques. This report begins with a brief literature survey of the role fully three-dimensional, unsteady, Navier-Stokes solvers have on the current state of numerical analysis. Next, the process of creating a baseline three-dimensional Multi-Block FLOw procedure called MBFLO3 is presented. Solutions for an inviscid circular arc bump, laminar flat plate, laminar cylinder, and turbulent flat plate are then presented. Results show good agreement with available experimental, numerical, and theoretical data. Scalability data for the parallel version of MBFLO3 is presented and shows efficiencies of 90% and higher for processes of no less than 100,000 computational grid points. Next, the description and implementation techniques used for several turbulence models are presented. Following the successful implementation of the URANS and DES procedures, the validation data for separated, non-reattaching flows over a NACA 0012 airfoil, wall-mounted hump, and a wing-body junction geometry are presented.
Results for the NACA 0012 showed significant improvement in flow predictions

  2. 26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate” means the simple average of the daily exchange rates (determined by reference to a qualified source...
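The "simple average of the daily exchange rates" described in the regulation is a plain arithmetic mean over the period's business days. The sketch below uses invented rates purely for illustration.

```python
# Illustrative computation of the simple average of daily exchange rates
# in the sense of 26 CFR 1.989(b)-1; the rates below are made up.
daily_rates = [1.102, 1.098, 1.105, 1.110, 1.095]  # one rate per business day

weighted_average_rate = sum(daily_rates) / len(daily_rates)
print(round(weighted_average_rate, 4))  # 1.102
```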

  3. Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns

    DEFF Research Database (Denmark)

    Gonçalves, Sílvia; Hounyo, Ulrich; Meddahi, Nour

    The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias correction term) as the (scaled) sum of squared pre-averaged returns, where the pre-averaging is done over all possible non-overlapping blocks of consecutive observations. Pre-averaging reduces the influence of the noise and allows for realized volatility estimation on the pre-averaged returns. The non...
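The block structure described above can be sketched in a few lines. This is a simplified illustration of pre-averaging over non-overlapping blocks, not the exact Podolskij-Vetter estimator: the weighting scheme, bias correction, and scaling constants are deliberately omitted.

```python
# Simplified pre-averaged realized-volatility-type statistic: average the
# returns within each non-overlapping block of length K, then take the
# (crudely scaled) sum of squared block averages. Not the exact estimator.
def preaveraged_rv(returns, K):
    n_blocks = len(returns) // K
    total = 0.0
    for b in range(n_blocks):
        block = returns[b * K:(b + 1) * K]
        pre_avg = sum(block) / K       # pre-averaged return for this block
        total += pre_avg * pre_avg
    return K * total                   # placeholder scaling

rets = [0.01, -0.02, 0.015, 0.005, -0.01, 0.02]
print(preaveraged_rv(rets, 2))
```

Pre-averaging damps market-microstructure noise because independent noise terms within a block partially cancel in the block mean.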

  4. 47 CFR 64.1900 - Nondominant interexchange carrier certifications regarding geographic rate averaging and rate...

    Science.gov (United States)

    2010-10-01

    ... such services in compliance with its geographic rate averaging and rate integration obligations... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED)...

  5. Conceptual difficulties with the q-averages in non-extensive statistical mechanics

    Science.gov (United States)

    Abe, Sumiyoshi

    2012-11-01

    The q-average formalism of nonextensive statistical mechanics proposed in the literature is critically examined by considering several pedagogical examples. It is shown that there exist a number of difficulties with the concept of q-averages.

  6. Analysis on Change Characteristics of the Average Temperature in Sichuan in 50 Years

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The research aimed to analyze change characteristics of the average temperature in Sichuan in 50 years. [Method] By using average temperature data at 156 stations of Sichuan from 1961 to 2010, interannual and interdecadal evolution characteristics, regional and seasonal differences of the average temperature in Sichuan in 50 years were analyzed. [Result] Variations of the average temperatures in the whole province and each climatic region in 50 years all presented rise trends. Rise amplitude of the a...

  7. Why one-dimensional models fail in the diagnosis of average spectra from inhomogeneous stellar atmospheres

    OpenAIRE

    Uitenbroek, Han; Criscuoli, Serena

    2011-01-01

    We investigate the feasibility of representing a structured multi-dimensional stellar atmosphere with a single one-dimensional average stratification for the purpose of spectral diagnosis of the atmosphere's average spectrum. In particular we construct four different one-dimensional stratifications from a single snapshot of a magneto-hydrodynamic simulation of solar convection: one by averaging its properties over surfaces of constant height, and three different ones by averaging over surface...

  8. 40 CFR 80.1238 - How is a refinery's or importer's average benzene concentration determined?

    Science.gov (United States)

    2010-07-01

    ... average benzene concentration determined? 80.1238 Section 80.1238 Protection of Environment ENVIRONMENTAL... Benzene Gasoline Benzene Requirements § 80.1238 How is a refinery's or importer's average benzene concentration determined? (a) The average benzene concentration of gasoline produced at a refinery or...

  9. 47 CFR 64.1801 - Geographic rate averaging and rate integration.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration... CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Geographic Rate Averaging and Rate Integration § 64.1801 Geographic rate averaging and rate integration. (a) The rates charged...

  10. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Science.gov (United States)

    2010-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year...

  11. 40 CFR 89.111 - Averaging, banking, and trading of exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging, banking, and trading of... ENGINES Emission Standards and Certification Provisions § 89.111 Averaging, banking, and trading of exhaust emissions. Regulations regarding the availability of an averaging, banking, and trading...

  12. 40 CFR 91.103 - Averaging, banking, and trading of exhaust emission credits.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Averaging, banking, and trading of... Standards and Certification Provisions § 91.103 Averaging, banking, and trading of exhaust emission credits. Regulations regarding averaging, banking, and trading provisions along with applicable...

  13. Numerical examination of commutativity between Backus and Gazis et al. averages

    CERN Document Server

    Dalton, David R

    2016-01-01

    Dalton and Slawinski (2016) show that, in general, the Backus (1962) average and the Gazis et al. (1963) average do not commute. Herein, we examine the extent of this noncommutativity. We illustrate numerically that the extent of noncommutativity is a function of the strength of anisotropy. The averages nearly commute in the case of weak anisotropy.

  14. Isotropic averaging for cell-dynamical-system simulation of spinodal decomposition

    Indian Academy of Sciences (India)

    Anand Kumar

    2003-07-01

    Formulae have been developed for isotropic averaging in two and three dimensions. The averaging is employed in the cell-dynamical-system simulation of spinodal decomposition for inter-cell coupling. The averagings used in earlier works on spinodal decomposition are also discussed.

  15. 47 CFR 65.305 - Calculation of the weighted average cost of capital.

    Science.gov (United States)

    2010-10-01

    ... Carriers § 65.305 Calculation of the weighted average cost of capital. (a) The composite weighted average... Commission determines to the contrary in a prescription proceeding, the composite weighted average cost of debt and cost of preferred stock is the composite weight computed in accordance with §...

  16. 42 CFR 414.904 - Average sales price as the basis for payment.

    Science.gov (United States)

    2010-10-01

    ... drug products. (2) Calculation of the average sales price. (i) For dates of service before April 1...) Calculation of the average sales price. (i) For dates of service before April 1, 2008, the average sales price... end-stage renal disease patient. (i) Effective for drugs and biologicals furnished in 2005,...

  17. From moving averages to anomalous diffusion: a Rényi-entropy approach

    International Nuclear Information System (INIS)

    Moving averages, also termed convolution filters, are widely applied in science and engineering at large. As moving averages transform inputs to outputs by convolution, they induce correlation. In effect, moving averages are perhaps the most fundamental and ubiquitous mechanism of transforming uncorrelated inputs to correlated outputs. In this paper we study the correlation structure of general moving averages, unveil the Rényi-entropy meaning of a moving-average's overall correlation, address the maximization of this overall correlation, and apply this overall correlation to the dispersion-measurement and to the classification of regular and anomalous diffusion transport processes. (fast track communication)
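The correlation structure induced by a moving average can be made concrete with a standard calculation: for a filter with weights w applied to uncorrelated input, the lag-k autocorrelation of the output is Σᵢ wᵢ wᵢ₊ₖ / Σᵢ wᵢ². The sketch below uses an equal-weight window as an illustrative example (not the Rényi-entropy machinery of the paper).

```python
# Lag-k autocorrelation of the output of a moving average (convolution
# filter) driven by uncorrelated, identically distributed input.
def ma_autocorrelation(weights, lag):
    num = sum(weights[i] * weights[i + lag] for i in range(len(weights) - lag))
    den = sum(w * w for w in weights)
    return num / den

w = [1 / 3] * 3                       # 3-point equal-weight moving average
print(ma_autocorrelation(w, 1))       # (3 - 1) / 3, i.e. about 0.667
print(ma_autocorrelation(w, 2))       # (3 - 2) / 3, i.e. about 0.333
```

For an equal-weight window of length L the formula reduces to (L - k) / L, which shows directly how the filter turns uncorrelated inputs into correlated outputs.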

  18. New approximative orientation averaging of the water molecule interacting with the thermal neutron

    International Nuclear Information System (INIS)

    Orientation averaging is performed by exact orientation averaging (EOA) and four approximate methods (two well known and two new), and expressions for the microscopic scattering kernel of thermal neutrons on water molecules are developed. The two well known approximate orientation averagings are Krieger-Nelkin's (KN) and Kappel-Young's (KY). The results obtained by one of the two newly proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA. The biggest discrepancies between the EOA results and the results of the approximate methods are obtained using the well known KN approximate orientation averaging. (author)

  19. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    Science.gov (United States)

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. PMID:20488770

  20. On the relation between uncertainties of weighted frequency averages and the various types of Allan deviations

    CERN Document Server

    Benkler, Erik; Sterr, Uwe

    2015-01-01

    The power spectral density in Fourier frequency domain, and the different variants of the Allan deviation (ADEV) in dependence on the averaging time are well established tools to analyse the fluctuation properties and frequency instability of an oscillatory signal. It is often supposed that the statistical uncertainty of a measured average frequency is given by the ADEV at a well considered averaging time. However, this approach requires further mathematical justification and refinement, which has already been done regarding the original ADEV for certain noise types. Here we provide the necessary background to use the modified Allan deviation (modADEV) and other two-sample deviations to determine the uncertainty of weighted frequency averages. The type of two-sample deviation used to determine the uncertainty depends on the method used for determination of the average. We find that the modADEV, which is connected with $\\Lambda$-weighted averaging, and the two sample deviation associated to a linear phase regr...

  1. Moving average optimization in digital terrain model generation based on test multibeam echosounder data

    Science.gov (United States)

    Maleika, Wojciech

    2015-02-01

    The paper presents a new method of digital terrain model (DTM) estimation based on modified moving average interpolation. There are many methods that can be employed in DTM creation, such as kriging, inverse distance weighting, nearest neighbour and moving average. The moving average method is not as precise as the others; hence, it is not commonly used in scientific work. Considering its high accuracy, relatively low time costs, and the huge amount of measurement data collected by multibeam echosounders, however, the moving average method is definitely one of the most promising approaches. In this study, several variants of this method are analysed. An optimization of the moving average method is proposed based on a new module for selecting neighbouring points during the interpolation process: the "growing radius" approach. Test experiments performed on various multibeam echosounder datasets demonstrate the high potential of this modified moving average method for improved DTM generation.
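The "growing radius" idea can be sketched as follows: when interpolating a grid node from scattered soundings, the search radius is enlarged until a minimum number of neighbours is found, and their depths are then averaged. Function and parameter names below are illustrative, not taken from Maleika (2015).

```python
import math

# Hypothetical sketch of growing-radius moving-average interpolation of one
# DTM grid node from scattered multibeam soundings (x, y, depth).
def growing_radius_average(points, node, r0=1.0, r_step=1.0,
                           min_pts=3, r_max=100.0):
    r = r0
    while r <= r_max:
        nearby = [d for (x, y, d) in points
                  if math.hypot(x - node[0], y - node[1]) <= r]
        if len(nearby) >= min_pts:
            return sum(nearby) / len(nearby)   # moving average of neighbours
        r += r_step        # too few neighbours: grow the radius and retry
    return None            # no estimate possible within r_max

soundings = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 14.0), (5, 5, 30.0)]
print(growing_radius_average(soundings, (0.5, 0.5)))  # 12.0, from 3 nearest
```

Growing the radius only where data are sparse keeps the averaging window small in densely sounded regions, which is what preserves detail in the interpolated DTM.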

  2. Scalability of components for kW-level average power few-cycle lasers.

    Science.gov (United States)

    Hädrich, Steffen; Rothhardt, Jan; Demmler, Stefan; Tschernajew, Maxim; Hoffmann, Armin; Krebs, Manuel; Liem, Andreas; de Vries, Oliver; Plötner, Marco; Fabian, Simone; Schreiber, Thomas; Limpert, Jens; Tünnermann, Andreas

    2016-03-01

    In this paper, the average power scalability of components that can be used for intense few-cycle lasers based on nonlinear compression of modern femtosecond solid-state lasers is investigated. The key components of such a setup, namely, the gas-filled waveguides, laser windows, chirped mirrors for pulse compression and low dispersion mirrors for beam collimation, focusing, and beam steering are tested under high-average-power operation using a kilowatt cw laser. We demonstrate the long-term stable transmission of kW-level average power through a hollow capillary and a Kagome-type photonic crystal fiber. In addition, we show that sapphire substrates significantly improve the average power capability of metal-coated mirrors. Ultimately, ultrabroadband dielectric mirrors show negligible heating up to 1 kW of average power. In summary, a technology for scaling of few-cycle lasers up to 1 kW of average power and beyond is presented. PMID:26974623

  3. Study about thoracic perimeter average performances in Romanian Hucul horse breed – Prislop bloodline

    Directory of Open Access Journals (Sweden)

    Marius Maftei

    2015-10-01

    Full Text Available Study of average performances in a population has great importance because, in a population, the average phenotypic value is equal to the average genotypic value. Thus, studies of the average values of characters give an idea of the population's genetic level. The biological material is represented by 93 Hucul horses from the Prislop bloodline, divided into 3 stallion families and analyzed at 18, 30 and 42 months old, owned by the Lucina Hucul stud farm. The average performance for thoracic perimeter was 148.55 cm at 18 months, 160.44 cm at 30 months and 167.77 cm at 42 months old. We can observe a good growth rate from one age to another and small differences between sexes. The average performances of the character are within the characteristic limits of the breed.

  4. A Comparison Between Two Average Modelling Techniques of AC-AC Power Converters

    OpenAIRE

    Pawel Szczesniak

    2015-01-01

    In this paper, a comparative evaluation of two modelling tools for switching AC-AC power converters is presented. Both of them are based on average modelling techniques. The first approach is based on the circuit averaging technique and consists of topological manipulations applied to the converter's states. The second approach makes use of a state-space averaged model of the converter and is based on analytical manipulations using different state representations of the converter. The two m...

  5. Original article Functioning of memory and attention processes in children with intelligence below average

    OpenAIRE

    Aneta Rita Borkowska; Anna Ozimek

    2014-01-01

    BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the au...

  6. On the average rate of return in a continuous time stochastic model

    OpenAIRE

    Gajek, Leslaw; Kaluszka, Marek

    2015-01-01

    In a discrete time stochastic model of a pension investment funds market, Gajek and Kaluszka (2000a) provided a definition of the average rate of return which satisfies a set of economic correctness postulates. In this paper the average rate of return is defined for a continuous time stochastic model of the market. The prices of assets are modeled by multidimensional geometric Brownian motion. A martingale property of the average rate of return is proven.

  7. Forecasting Crop Basis Using Historical Averages Supplemented with Current Market Information

    OpenAIRE

    Taylor, Mykel R.; Dhuyvetter, Kevin C.; Kastens, Terry L.

    2006-01-01

    This research compares practical methods of forecasting basis, using current market information for wheat, soybeans, corn, and milo (grain sorghum) in Kansas. Though generally not statistically superior, an historical one-year average was optimal for corn, milo, and soybean harvest and post-harvest basis forecasts. A one-year average was also best for wheat post-harvest basis forecasts, whereas a five-year average was the best method for forecasting wheat harvest basis. Incorporating current ...
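The historical-average forecasting method compared above is simple to state: the forecast of this year's basis is the mean of the basis observed over the previous n harvests (n = 1 or 5 in the study). The sketch below uses invented basis values for illustration.

```python
# Hypothetical historical-average basis forecast: mean of the basis
# (cash price minus futures price) observed in the previous n years.
def basis_forecast(past_basis, n):
    recent = past_basis[-n:]          # the n most recent years
    return sum(recent) / len(recent)

past = [-0.35, -0.30, -0.42, -0.38, -0.40]   # $/bu, last five harvests
print(basis_forecast(past, 1))               # one-year average: -0.4
print(round(basis_forecast(past, 5), 3))     # five-year average: -0.37
```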

  8. Minimum wage and the average wage in France: a circular relationship?

    OpenAIRE

    Cette, Gilbert; Chouard, Valérie; Verdugo, Gregory

    2013-01-01

    This paper investigates whether increases in the minimum wage in France have the same impact on the average wage when intended to preserve the purchasing power of the minimum wage as when intended to raise it. We find that the impact of the minimum wage on the average wage is strong, but differs depending on the indexation factor. We also find some empirical evidence of circularity between the average wage and the minimum wage.

  9. SPATIAL DISTRIBUTION OF THE AVERAGE RUNOFF IN THE IZA AND VIȘEU WATERSHEDS

    OpenAIRE

    HORVÁTH CS.; RÉTI KINGA-OLGA; BILAȘCO ȘT.; ROȘIAN GH.

    2015-01-01

    The average runoff represents the main parameter with which one can best evaluate an area’s water resources, and it is also an important characteristic in all river runoff research. In this paper we chose a GIS methodology for assessing the spatial evolution of the average runoff; using validity curves we identify three validity areas in which the runoff changes differently with altitude. The three curves were charted using the average runoff values of 16 hydrometric stations from the area, e...

  10. Increasing Average Period Lengths by Switching of Robust Chaos Maps in Finite Precision

    OpenAIRE

    Nagaraj, Nithin; Shastry, Mahesh C; Vaidya, Prabhakar G

    2008-01-01

    Grebogi, Ott and Yorke (Phys. Rev. A 38(7), 1988) have investigated the effect of finite precision on average period length of chaotic maps. They showed that the average length of periodic orbits ($T$) of a dynamical system scales as a function of computer precision ($\\epsilon$) and the correlation dimension ($d$) of the chaotic attractor: $T \\sim \\epsilon^{-d/2}$. In this work, we are concerned with increasing the average period length which is desirable for chaotic cryptography applications...
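A quick numeric reading of the quoted scaling $T \sim \epsilon^{-d/2}$: refining the precision by a factor of $2^8$ multiplies the average period length by $2^{8 d / 2}$. The prefactor and the dimension value below are illustrative, not taken from the paper.

```python
# Average period length scaling T ~ eps**(-d/2) (Grebogi, Ott and Yorke);
# c and d are illustrative constants, not values from the paper.
def average_period(eps, d, c=1.0):
    return c * eps ** (-d / 2)

d = 1.5                                      # correlation dimension
ratio = average_period(2**-16, d) / average_period(2**-8, d)
print(ratio)                                 # 2**(8 * d / 2) = 64.0
```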

  11. On Adequacy of Two-point Averaging Schemes for Composites with Nonlinear Viscoelastic Phases

    OpenAIRE

    Zeman, J.; Valenta, R.; M. Šejnoha

    2004-01-01

    Finite element simulations on fibrous composites with nonlinear viscoelastic response of the matrix phase are performed to explain why so-called two-point averaging schemes may fail to deliver a realistic macroscopic response. Nevertheless, the potential of two-point averaging schemes (the overall response estimated in terms of localized averages of a two-phase composite medium) has been put forward in a number of studies, either in its original format or modified to overcome the inherited stiff...

  12. The application of cost averaging techniques to robust control of the benchmark problem

    Science.gov (United States)

    Hagood, Nesbitt W.; Crawley, Edward F.

    1991-01-01

    A method is presented for the synthesis of robust controllers for linear time invariant systems with parameterized uncertainty structures. The method involves minimizing the average quadratic (H2) cost over the parameterized system. Bounded average cost implies stability over the set of systems. The average cost functional is minimized to derive robust fixed-order dynamic compensators. The robustness properties of these controllers are demonstrated on the sample problem.

  13. Influence on natural circulation nuclear thermal coupling average power under rolling motion

    International Nuclear Information System (INIS)

    By performing simulation computations of single-phase natural circulation with nuclear thermal coupling under rolling motion conditions, the factors with the greatest effect on the average heating power of this circulation system were studied. The analysis results indicate that under rolling motion conditions, the average heating power, which accounts for the nuclear thermal coupling effect, is in direct proportion to the average flow rate and average heat transfer coefficient, while it has an inverse relationship with the ratio between the temperature-feedback coefficient of the moderator and that of the fuel. The effect of rolling parameters on the average heating power is related to this ratio. When the effect of the variation of the average heat transfer coefficient on the reactivity plays the leading role, the stronger the rolling motion is, the higher the average heating power is. However, when the effect of the variation of the average friction coefficient on the reactivity takes the lead, the stronger the rolling motion is, the lower the average heating power is. (authors)

  14. On the average capacity and bit error probability of wireless communication systems

    KAUST Repository

    Yilmaz, Ferkan

    2011-12-01

Analyses of the average binary error probabilities and the average capacity of wireless communication systems over generalized fading channels have been considered separately in the past. This paper introduces a novel moment generating function-based unified expression for both the average binary error probabilities and the average capacity of single- and multiple-link communication with maximal ratio combining. Notably, the generic unified expression offered in this paper is easy to compute and applicable to a wide variety of fading scenarios; the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of the newly derived results. © 2011 IEEE.
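
To illustrate the MGF approach in its simplest single-link form (a standard textbook identity, not the paper's unified expression, and with Rayleigh fading rather than the paper's generalized Gamma distribution), the average BER of BPSK can be computed as Pb = (1/pi) * ∫_0^{pi/2} M(-1/sin^2 θ) dθ, where M is the MGF of the instantaneous SNR. For Rayleigh fading this has the closed form Pb = (1/2)(1 - sqrt(g/(1+g))) with g the average SNR, which lets us check the numerical integral:

```python
import math

def mgf_rayleigh(s, gamma_bar):
    # MGF of the instantaneous SNR for Rayleigh fading: E[exp(s * gamma)].
    return 1.0 / (1.0 - s * gamma_bar)

def avg_ber_bpsk(mgf, gamma_bar, n=20000):
    # MGF approach: Pb = (1/pi) * integral_0^{pi/2} M(-1/sin^2 theta) dtheta,
    # evaluated with a midpoint rule (the integrand vanishes smoothly at 0).
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        total += mgf(-1.0 / math.sin(theta) ** 2, gamma_bar)
    return total * h / math.pi

gamma_bar = 10.0  # hypothetical average SNR (linear scale)
numeric = avg_ber_bpsk(mgf_rayleigh, gamma_bar)
closed = 0.5 * (1.0 - math.sqrt(gamma_bar / (1.0 + gamma_bar)))
```

Swapping in the MGF of another fading model (e.g. generalized Gamma, as in the paper) changes only `mgf_rayleigh`, which is exactly the flexibility the MGF-based formulation buys.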

  15. Semiclassical vibration-rotation transition probabilities for motion in molecular state averaged potentials.

    Science.gov (United States)

    Stallcop, J. R.

    1971-01-01

Collision-induced vibration-rotation transition probabilities are calculated from a semiclassical three-dimensional model, in which the collision trajectory is determined by the classical motion in an interaction potential averaged over the molecular rotational state, and are compared with probabilities for which the motion is governed by a spherically averaged potential. For molecules in highly excited rotational states, which dominate the vibrational relaxation rate at high temperature, the transition probability obtained with rotational-state averaging is found to be smaller than that obtained with spherical averaging. For typical collisions, the transition cross section is decreased by a factor of about 1.5 to 2.

  16. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    International Nuclear Information System (INIS)

Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.
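
The exactness claim rests on a linearity argument that a toy model makes concrete. In unidirectional sliding window delivery, the fluence at a beamlet position equals the exposure time between the leading leaf opening it and the trailing leaf closing it; averaging the leaf arrival-time trajectories therefore averages the fluence exactly. The sketch below is a one-leaf-pair illustration with hypothetical arrival times (in MU), not the authors' implementation:

```python
def fluence(lead, trail):
    # Sliding-window fluence at each beamlet position = exposure duration,
    # i.e. trailing-leaf arrival time minus leading-leaf arrival time.
    return [t - l for l, t in zip(lead, trail)]

def average_traj(t1, t2):
    # Position-wise average of two leaf arrival-time trajectories.
    return [(a + b) / 2 for a, b in zip(t1, t2)]

# Two hypothetical plans, one leaf pair, three beamlet positions.
lead_a, trail_a = [0.0, 1.0, 2.0], [3.0, 5.0, 6.0]
lead_b, trail_b = [1.0, 2.0, 2.5], [2.0, 4.0, 7.5]

# Fluence delivered by the trajectory-averaged plan ...
avg_fluence = fluence(average_traj(lead_a, lead_b), average_traj(trail_a, trail_b))
# ... versus the direct average of the two plans' fluence maps.
mean_fluence = [(x + y) / 2 for x, y in zip(fluence(lead_a, trail_a),
                                            fluence(lead_b, trail_b))]
```

Because fluence is linear in the arrival times, the two results coincide exactly, which is the property the plan-averaging method exploits.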

  17. Average player traits as predictors of cooperation in a repeated prisoner's dilemma

    OpenAIRE

    Al-Ubaydli, Omar; Jones, Garett; Weel, Jaap

    2014-01-01

    Many studies have looked at how individual player traits influence individual choice in the repeated prisoner’s dilemma, but few studies have looked at how the average traits of pairs of players influence the average choices of pairs. We consider cognitive ability, patience, risk tolerance, and the Big Five personality measures as predictors of individual and average group choices in a ten-round repeated prisoner’s dilemma. We find that a pair’s average cognitive ability measured by the Raven...

  18. Averages of B-Hadron Properties at the End of 2005

    Energy Technology Data Exchange (ETDEWEB)

Barberio, E.; /Melbourne U.; Bizjak, I.; /Novosibirsk, IYF; Blyth, S.; /CERN; Cavoto, G.; /Rome U.; Chang, P.; /Taiwan, Natl. Taiwan U.; Dingfelder, J.; /SLAC; Eidelman, S.; /Novosibirsk, IYF; Gershon, T.; /Warwick U.; Godang, R.; /Mississippi U.; Harr, R.; /Wayne State U.; Hocker, A.; /CERN; Iijima, T.; /Nagoya U.; Kowalewski, R.; /Victoria U.; Lehner, F.; /Fermilab; Limosani, A.; /Novosibirsk, IYF; Lin, C.-J.; /Fermilab; Long, O.; /UC, Riverside; Luth, V.; /SLAC; Morii, M.; /Harvard U.; Prell, S.; /Iowa State U.; Schneider, O.; /LPHE,

    2006-09-27

This article reports world averages for measurements of b-hadron properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available as of the end of 2005. In the averaging, the input parameters used in the various analyses are adjusted (rescaled) to common values, and all known correlations are taken into account. The averages include lifetimes, neutral meson mixing parameters, parameters of semileptonic decays, branching fractions of B decays to final states with open charm, charmonium and no charm, and measurements related to CP asymmetries.
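
Combining measurements while "taking all known correlations into account" is typically done with a best linear unbiased estimate (BLUE): weights come from the inverse covariance matrix rather than plain inverse variances. The two-measurement sketch below illustrates the principle (it is not HFAG's actual machinery, and the numbers are hypothetical):

```python
def blue_average(x1, x2, s1, s2, rho):
    # Best Linear Unbiased Estimate of two correlated measurements of the
    # same quantity, with uncertainties s1, s2 and correlation rho.
    c12 = rho * s1 * s2                              # covariance term
    w1 = (s2 ** 2 - c12) / (s1 ** 2 + s2 ** 2 - 2 * c12)
    w2 = 1.0 - w1                                    # weights sum to one
    avg = w1 * x1 + w2 * x2
    var = w1 ** 2 * s1 ** 2 + w2 ** 2 * s2 ** 2 + 2 * w1 * w2 * c12
    return avg, var ** 0.5

# Hypothetical lifetime-like measurements with a 30% correlated component.
avg, err = blue_average(1.52, 1.46, 0.04, 0.06, rho=0.3)
```

With `rho=0` the formula reduces to the familiar inverse-variance weighted average, so the correlated case is a strict generalization.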

  19. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. For any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  20. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States)

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.