WorldWideScience

Sample records for advanced variance reduction

  1. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    International Nuclear Information System (INIS)

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).
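
    The sketch below makes the source-biasing idea concrete. It is a minimal toy, assuming a hypothetical purely absorbing 1-D slab with a detector at one face, an analytically known adjoint (importance) function, and expected-value scoring; all names and numbers are illustrative, not the paper's implementation. With the source sampled proportionally to the adjoint and each particle carrying the compensating weight, the score becomes constant, which is the zero-variance ideal that CADIS approximates.

```python
import numpy as np

rng = np.random.default_rng(1)
sig, L, n = 1.0, 10.0, 100_000       # total cross section (1/cm), slab width, histories

# Importance ("adjoint") of a source particle born at depth x for a detector
# at x = L in a purely absorbing slab: psi(x) = exp(-sig*(L - x)).
psi = lambda x: np.exp(-sig * (L - x))

# Analog: uniform source q(x) = 1/L with expected-value score psi(x).
x = rng.uniform(0.0, L, n)
s = psi(x)
print("analog: mean %.4e  rel err %.2e" % (s.mean(), s.std(ddof=1) / s.mean() / np.sqrt(n)))

# CADIS-style biasing: sample from q'(x) = q(x)*psi(x)/C by CDF inversion and
# carry the compensating weight w = q/q' = C/psi(x), with C = integral of q*psi.
C = (1.0 - np.exp(-sig * L)) / (sig * L)
u = rng.uniform(0.0, 1.0, n)
xb = L + np.log(np.exp(-sig * L) + u * (1.0 - np.exp(-sig * L))) / sig
sb = (C / psi(xb)) * psi(xb)         # weight * score: constant, the zero-variance ideal
print("biased: mean %.4e  rel err %.2e" % (sb.mean(), sb.std(ddof=1) / sb.mean() / np.sqrt(n)))
```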

  2. MCNP variance reduction overview

    International Nuclear Information System (INIS)

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code.

  3. Variance reduction in MCMC

    OpenAIRE

    Mira Antonietta; Tenconi Paolo; Bressanini Dario

    2003-01-01

    We propose a general purpose variance reduction technique for MCMC estimators. The idea is obtained by combining standard variance reduction principles known for regular Monte Carlo simulations (Ripley, 1987) and the Zero-Variance principle introduced in the physics literature (Assaraf and Caffarel, 1999). The potential of the new idea is illustrated with some toy examples and an application to Bayesian estimation.
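
    As a generic illustration of the first ingredient (a standard control variate applied to MCMC output), the following sketch estimates E[exp(X)] for an N(0,1) target sampled by random-walk Metropolis, using X itself (known mean zero) as the control variate. The target, observable and coefficient estimation are hypothetical choices, not the authors' zero-variance construction, and only the per-sample variance is compared, ignoring autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random-walk Metropolis chain targeting N(0,1) (a hypothetical toy target).
n, x = 200_000, 0.0
chain = np.empty(n)
for i in range(n):
    prop = x + rng.normal()
    if np.log(rng.uniform()) < 0.5 * (x * x - prop * prop):
        x = prop
    chain[i] = x

f = np.exp(chain)              # estimate E[exp(X)] = exp(1/2) ~ 1.6487
g = chain                      # control variate with known mean E[X] = 0
beta = np.cov(f, g)[0, 1] / g.var()
fc = f - beta * g              # same mean, smaller per-sample variance
print("plain : %.4f (sample var %.3f)" % (f.mean(), f.var()))
print("w/ CV : %.4f (sample var %.3f)" % (fc.mean(), fc.var()))
```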

  4. Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP

    International Nuclear Information System (INIS)

    The 'criticality' or k-eigenvalue of a nuclear system determines whether the system is critical (k=1), or the extent to which it is subcritical (k<1). Calculations of k are frequently performed at nuclear facilities to determine the criticality of nuclear reactor cores, spent nuclear fuel storage casks, and other fissile systems. These calculations can be expensive, and current Monte Carlo methods have certain well-known deficiencies. In this project, we have developed and tested a new 'functional Monte Carlo' (FMC) method that overcomes several of these deficiencies. The current state-of-the-art Monte Carlo k-eigenvalue method estimates the fission source for a sequence of fission generations (cycles), during each of which M particles per cycle are processed. After a series of 'inactive' cycles during which the fission source 'converges', a series of 'active' cycles are performed. For each active cycle, the eigenvalue and eigenfunction are estimated; after N >> 1 active cycles are performed, the results are averaged to obtain estimates of the eigenvalue and eigenfunction and their standard deviations. This method has several disadvantages: (i) the estimate of k depends on the number M of particles per cycle, (ii) for optically thick systems, the eigenfunction estimate may not converge due to undersampling of the fission source, and (iii) since the fission source in any cycle depends on the estimated fission source from the previous cycle (the fission sources in different cycles are correlated), the estimated variance in k is smaller than the real variance. For an acceptably large number M of particles per cycle, the estimate of k is nearly independent of M; this essentially takes care of item (i). Item (ii) can be addressed by taking M sufficiently large, but for optically thick systems a sufficiently large M can easily be unrealistic. Item (iii) cannot be accounted for by taking M or N sufficiently large; it is an inherent deficiency due to the correlations.
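
    To make the cycle structure above concrete, the sketch below runs a toy Monte Carlo power iteration on a small hypothetical "fission matrix" for five loosely coupled regions. The geometry, numbers, and the naive error bar (which ignores the cycle-to-cycle correlation of item (iii)) are illustrative only, not the FMC method of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 5-region "fission matrix": H[i, j] = expected next-generation
# fission neutrons born in region j per fission neutron born in region i.
H = 0.9 * np.eye(5) + 0.05 * (np.eye(5, k=1) + np.eye(5, k=-1))

M, inactive, active = 500, 30, 120       # particles/cycle, discarded and kept cycles
bank = rng.integers(0, 5, M)             # initial guess for the fission source
k_hist = []
for cycle in range(inactive + active):
    nu = H[bank].sum(axis=1)             # expected yield of each banked site
    if cycle >= inactive:
        k_hist.append(nu.mean())         # cycle estimate of k
    # Sample the next generation's bank proportionally to the emission density.
    parents = rng.choice(M, size=M, p=nu / nu.sum())
    rows = H[bank[parents]] / nu[parents, None]
    bank = np.array([rng.choice(5, p=r) for r in rows])

k = np.array(k_hist)
print("k = %.4f +/- %.4f  (naive error bar; cycle correlation biases it low)"
      % (k.mean(), k.std(ddof=1) / np.sqrt(k.size)))
```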

  5. A Hilbert Space Approach to Variance Reduction

    OpenAIRE

    Szechtman, Roberto

    2006-01-01

    Elsevier Handbooks in Operations Research and Management Science: Simulation, pp 259-289. In this chapter we explain variance reduction techniques from the Hilbert space standpoint, in the terminating simulation context. We use projection ideas to explain how variance is reduced, and to link different variance reduction techniques. Our focus is on the methods of control variates, conditional Monte Carlo, weighted Monte Carlo, stratification, and Latin hypercube sampling.
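
    As a quick illustration of the stratification ideas surveyed in the chapter, the sketch below compares plain Monte Carlo with a hand-rolled Latin hypercube sampler on a smooth toy integrand; the integrand and sample sizes are hypothetical, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda u: np.exp(u[:, 0] * u[:, 1])          # toy integrand on [0,1]^2
n, reps = 1024, 200

def lhs(n, d, rng):
    """Latin hypercube: one point per stratum in each coordinate, randomly paired."""
    strata = np.tile(np.arange(n), (d, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n, d))) / n
    return u

mc = [f(rng.random((n, 2))).mean() for _ in range(reps)]
lh = [f(lhs(n, 2, rng)).mean() for _ in range(reps)]
print("plain MC estimator std: %.2e" % np.std(mc, ddof=1))
print("LHS      estimator std: %.2e" % np.std(lh, ddof=1))
```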

  6. Markov bridges, bisection and variance reduction

    DEFF Research Database (Denmark)

    Asmussen, Søren; Hobolth, Asger

    In this paper we first consider the problem of generating sample paths from a continuous-time Markov chain conditioned on the endpoints using a new algorithm based on the idea of bisection. Second, we study the potential of the bisection algorithm for variance reduction. In particular, examples are...

  7. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), shielding experiments on type 316 stainless steel (SS316) and on the combined SS316/water system were carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In the corresponding analyses, however, enormous working and computing time was required to determine the weight-window parameters, and the variance reduction by the Weight Window method of the MCNP code proved limited and complicated to apply. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance variation: when the importance is increased at the same rate at which the neutron or gamma-ray flux attenuates, optimal variance reduction is achieved. (K.I.)
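
    The conclusion above (increase the cell importance at the rate at which the flux attenuates) can be reproduced with a toy calculation. The sketch below assumes a hypothetical purely absorbing slab split into unit-thickness layers, applies boundary splitting by the ratio of adjacent cell importances, and compares the FSD of the transmission tally for flat versus exponentially increasing importances; it is an illustration, not the report's FNS analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
layers, t, sig, n = 8, 1.0, 1.0, 50_000   # 8 layers, each one mean free path thick

def transmission(importance):
    """Per-history transmission tally through a purely absorbing slab,
    with splitting at layer boundaries by the ratio of cell importances."""
    tot = tot2 = 0.0
    for _ in range(n):
        score, stack = 0.0, [(0, 1.0)]     # (current layer, statistical weight)
        while stack:
            k, w = stack.pop()
            if rng.exponential(1.0 / sig) < t:
                continue                    # absorbed inside layer k
            k += 1
            if k == layers:
                score += w                  # escaped through the far face
                continue
            r = importance[k] / importance[k - 1]
            m = int(r) + (rng.uniform() < r - int(r))   # expected copies = r
            stack.extend([(k, w / r)] * m)              # expected weight preserved
        tot, tot2 = tot + score, tot2 + score * score
    mean = tot / n
    fsd = np.sqrt(max(tot2 / n - mean**2, 0.0) / n) / mean
    return mean, fsd

print("analog    T = %.3e  FSD = %.3f" % transmission(np.ones(layers)))
print("splitting T = %.3e  FSD = %.3f" % transmission(np.exp(np.arange(layers))))
```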

  8. Dimension reduction based on weighted variance estimate

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, we propose a new estimate for dimension reduction, called the weighted variance estimate (WVE), which includes the Sliced Average Variance Estimate (SAVE) as a special case. The bootstrap method is used to select the best estimate from the WVE and to estimate the structure dimension, and this selected estimate usually performs better than existing methods such as Sliced Inverse Regression (SIR) and SAVE. Methods such as SIR and SAVE usually put the same weight on each observation when estimating the central subspace (CS). By introducing a weight function, WVE puts different weights on different observations according to their distance from the CS. This weight function gives WVE very good performance in general and in complicated situations, for example when the distribution of the regressor deviates severely from the elliptical distribution on which many methods, such as SIR, are based. Compared with many existing methods, WVE is insensitive to the distribution of the regressor. The consistency of the WVE is established, and simulations comparing its performance with that of existing methods confirm the advantage of WVE.

  9. Variance Reduction Using Nonreversible Langevin Samplers

    Science.gov (United States)

    Duncan, A. B.; Lelièvre, T.; Pavliotis, G. A.

    2016-05-01

    A standard approach to computing expectations with respect to a given target measure is to introduce an overdamped Langevin equation which is reversible with respect to the target distribution, and to approximate the expectation by a time-averaging estimator. As has been noted in recent papers [30, 37, 61, 72], introducing an appropriately chosen nonreversible component to the dynamics is beneficial, both in terms of reducing the asymptotic variance and of speeding up convergence to the target distribution. In this paper we present a detailed study of the dependence of the asymptotic variance on the deviation from reversibility. Our theoretical findings are supported by numerical simulations.
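
    A minimal sketch of the effect, assuming a 2-D standard Gaussian target and an antisymmetric perturbation gamma*J added to the drift (all parameters hypothetical): the invariant measure is unchanged, while the spread of the time-averaging estimator shrinks as gamma grows.

```python
import numpy as np

rng = np.random.default_rng(6)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])          # antisymmetric perturbation

def time_average(gamma, steps=10_000, dt=0.01):
    """Euler-Maruyama time average of x[0]**2 for
    dX = -(I + gamma*J) X dt + sqrt(2) dW, whose invariant law is N(0, I)."""
    B = -(np.eye(2) + gamma * J)
    x, acc = np.zeros(2), 0.0
    for _ in range(steps):
        x = x + B @ x * dt + np.sqrt(2.0 * dt) * rng.normal(size=2)
        acc += x[0] ** 2
    return acc / steps

for gamma in (0.0, 5.0):
    reps = np.array([time_average(gamma) for _ in range(40)])
    print("gamma = %.0f : mean %.3f, estimator std %.4f"
          % (gamma, reps.mean(), reps.std(ddof=1)))
```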

  10. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have become increasingly important recently. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles. Experience with non-Boltzmann tallies and analog Monte Carlo has demonstrated the severe limitations of analog Monte Carlo for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each of the approaches are discussed.

  11. Variance reduction methods for simulation of densities on Wiener space

    OpenAIRE

    Kohatsu, Arturo; Pettersson, Roger

    2002-01-01

    We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.

  12. Methods for variance reduction in Monte Carlo simulations

    Science.gov (United States)

    Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reduction of the variance of the dose distribution in a computational volume. The dose distribution is computed via tracing of a large number of rays, and tracking the absorption and scattering of the rays within discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
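
    The quasi-random ingredient can be illustrated in isolation. The sketch below, assuming scipy >= 1.7 for its scipy.stats.qmc module, compares the RMS error of pseudo-random and scrambled-Sobol estimates of a smooth toy integrand standing in for the two random numbers drawn per photon step; the integrand and sample sizes are hypothetical.

```python
import numpy as np
from scipy.stats import qmc                     # scipy >= 1.7

# Smooth toy integrand on [0,1]^2 with a known exact integral.
f = lambda u: np.exp(-3.0 * u[:, 0]) * np.cos(0.5 * np.pi * u[:, 1])
exact = (1.0 - np.exp(-3.0)) / 3.0 * (2.0 / np.pi)

n, reps = 1 << 12, 50                            # power of 2 suits Sobol points
rng = np.random.default_rng(7)
err_mc = [f(rng.random((n, 2))).mean() - exact for _ in range(reps)]
err_qmc = [f(qmc.Sobol(d=2, scramble=True, seed=s).random(n)).mean() - exact
           for s in range(reps)]
print("pseudo-random   RMS error: %.2e" % np.sqrt(np.mean(np.square(err_mc))))
print("scrambled Sobol RMS error: %.2e" % np.sqrt(np.mean(np.square(err_qmc))))
```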

  13. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course

  14. Stochastic Variance Reduction Methods for Saddle-Point Problems

    OpenAIRE

    Balamurugan, P.; Bach, Francis

    2016-01-01

    We consider convex-concave saddle-point problems where the objective functions may be split in many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems which is common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply and we use the notion of monot...
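
    For orientation, here is a minimal SVRG sketch on a synthetic ridge-regression problem, i.e. plain convex minimization rather than the paper's saddle-point extension; the data, step size and epoch counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)
n, d, lam = 500, 20, 0.1
A = rng.normal(size=(n, d)) / np.sqrt(d)        # synthetic, roughly unit-norm rows
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i] + lam * w   # i-th component gradient
full_grad = lambda w: A.T @ (A @ w - b) / n + lam * w

w, eta = np.zeros(d), 0.05
for epoch in range(20):
    snap, mu = w.copy(), full_grad(w)           # snapshot point and its full gradient
    for _ in range(n):
        i = rng.integers(n)
        # SVRG gradient: unbiased, and its variance vanishes as w -> optimum.
        w -= eta * (grad_i(w, i) - grad_i(snap, i) + mu)

obj = 0.5 * np.mean((A @ w - b) ** 2) + 0.5 * lam * (w @ w)
print("objective %.6f  |grad| %.2e" % (obj, np.linalg.norm(full_grad(w))))
```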

  15. Fringe biasing: A variance reduction technique for optically thick meshes

    International Nuclear Information System (INIS)

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
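
    The essence of the scheme (stratify emission between cell interior and fringe, then reweight each stratum) can be seen in a toy 1-D "rod model". In the sketch below the escape probability from depth x is taken as exp(-(tau - x)) and all numbers are hypothetical; sending most particles to the fringe reduces the relative error of the escape tally at fixed cost.

```python
import numpy as np

rng = np.random.default_rng(9)
tau, delta, n = 20.0, 5.0, 200_000     # cell depth, fringe width (mean free paths)
escape = lambda x: np.exp(-(tau - x))  # rod-model escape probability from depth x

# Analog: emission sites uniform over the whole optically thick cell.
s = escape(rng.uniform(0.0, tau, n))
print("analog  %.4e  rel err %.2e" % (s.mean(), s.std(ddof=1) / s.mean() / np.sqrt(n)))

# Fringe biasing: 90% of the particles go to the fringe [tau-delta, tau];
# each stratum is reweighted by (stratum probability / allocation fraction).
af, pf = 0.9, delta / tau              # allocation fraction, true fringe fraction
nf = int(af * n); ni = n - nf
sf = escape(rng.uniform(tau - delta, tau, nf)) * (pf / af)
si = escape(rng.uniform(0.0, tau - delta, ni)) * ((1 - pf) / (1 - af))
mean = af * sf.mean() + (1 - af) * si.mean()
err = np.sqrt(af**2 * sf.var(ddof=1) / nf + (1 - af)**2 * si.var(ddof=1) / ni)
print("fringe  %.4e  rel err %.2e" % (mean, err / mean))
```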

  16. MC Estimator Variance Reduction with Antithetic and Common Random Fields

    Science.gov (United States)

    Guthke, P.; Bardossy, A.

    2011-12-01

    Monte Carlo methods are widely used to estimate the outcome of complex physical models. For physical models with spatial parameter uncertainty, it is common to apply spatial random functions to the uncertain variables, which can then be used to interpolate between known values or to simulate a number of equally likely realizations. The price that has to be paid for such a stochastic approach is that many simulations of the physical model are required instead of a single run with one 'best' input parameter set. The number of simulations is often limited by computational constraints, so a modeller has to compromise between the benefit of increased accuracy of the results and the effort of massively increased computational time. Our objective is to reduce the estimator variance of dependent variables in Monte Carlo frameworks. To this end, we adapt two variance reduction techniques (antithetic variates and common random numbers) to a sequential random field simulation scheme that uses copulas as spatial dependence functions. The proposed methodology leads to pairs of spatial random fields with special structural properties that are advantageous in MC frameworks. Antithetic random fields (ARF) exhibit a reversed structure on the large scale, while the dependence on the local scale is preserved. Common random fields (CRF) show the same large-scale structures but different spatial dependence on the local scale. The performance of the proposed methods is examined with two typical applications of stochastic hydrogeology. It is shown that ARF massively reduce the number of simulation runs required for convergence in Monte Carlo frameworks while keeping the same accuracy in terms of estimator variance. Furthermore, in multi-model frameworks such as sensitivity analysis of the spatial structure, where more than one spatial dependence model is used, the influence of different dependence structures becomes obvious.
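
    The plain (non-spatial) antithetic idea underlying ARF can be shown in a few lines: for a monotone response, pairing each draw z with -z yields negatively correlated scores, so the pair average beats plain sampling at equal cost. The response function below is a hypothetical stand-in, not one of the hydrogeological models of the abstract.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 100_000
f = lambda z: np.exp(0.8 * z)        # monotone toy response; E[f(Z)] = exp(0.32)

plain = f(rng.normal(size=n))                    # n function evaluations
z = rng.normal(size=n // 2)
anti = 0.5 * (f(z) + f(-z))                      # also n evaluations, in n/2 pairs

print("plain      mean %.4f  est. var %.3e" % (plain.mean(), plain.var(ddof=1) / n))
print("antithetic mean %.4f  est. var %.3e" % (anti.mean(), anti.var(ddof=1) / (n // 2)))
```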

  17. A comparison of variance reduction techniques for radar simulation

    Science.gov (United States)

    Divito, A.; Galati, G.; Iovino, D.

    Importance sampling, the extreme value technique (EVT) and its generalization (G-EVT) were compared with respect to their ability to reduce the variance of radar simulation estimates. Importance sampling has a greater potential for including a priori information in the simulation experiment and subsequently reducing the estimation errors. This feature is paid for by a lack of generality of the simulation procedure. The EVT technique is only valid when a probability tail is to be estimated (false alarm problems) and requires, as the only a priori information, that the considered variate belongs to the exponential class. The G-EVT, which introduces a shape parameter to be estimated (when unknown), allows smaller estimation errors to be attained than the EVT. The G-EVT and, to a greater extent, the EVT lead to a straightforward and general simulation procedure for probability tail estimation.
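
    A tail-probability estimate of the kind meant here (false-alarm probabilities) shows why importance sampling is attractive when a priori information is available. The sketch below, with a hypothetical threshold and sample size, tilts a standard Gaussian toward the threshold and reweights; it is a textbook illustration, not the paper's radar setup.

```python
import numpy as np

rng = np.random.default_rng(11)
a, n = 5.0, 100_000                      # threshold, samples; P(N(0,1) > 5) ~ 2.87e-7

# Crude Monte Carlo almost never observes the event.
x = rng.normal(size=n)
print("crude MC : %.3e" % (x > a).mean())

# Importance sampling: sample from N(a, 1); weight = phi(y) / phi(y - a).
y = rng.normal(loc=a, size=n)
w = np.exp(-a * y + 0.5 * a * a)         # N(0,1) pdf over N(a,1) pdf
score = w * (y > a)
print("tilted IS: %.3e +/- %.1e" % (score.mean(), score.std(ddof=1) / np.sqrt(n)))
```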

  18. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  19. Problems of variance reduction in the simulation of random variables

    International Nuclear Information System (INIS)

    The definition of the uniform linear generator is given and some of the most commonly used tests to evaluate the uniformity and the independence of the obtained determinations are listed. The problem of calculating, through simulation, some moment W of a random variable function is taken into account. The Monte Carlo method enables the moment W to be estimated and the estimator variance to be obtained. Some techniques for the construction of other estimators of W with a reduced variance are introduced.

  20. Experience with Monte Carlo variance reduction using adjoint solutions in HYPER neutronics analysis

    International Nuclear Information System (INIS)

    The variance reduction techniques using adjoint solutions are applied to the Monte Carlo calculation of the HYPER (HYbrid Power Extraction Reactor) core neutronics. The applied variance reduction techniques are geometry splitting and weight windows. The weight bounds and the cell importances needed for these techniques are generated from an adjoint discrete-ordinates calculation by the two-dimensional TWODANT code. The flux distribution variances of the Monte Carlo calculations with these variance reduction techniques are compared with the results of standard Monte Carlo calculations. It is shown that the variance reduction techniques using adjoint solutions to the HYPER core neutronics result in a decrease in the efficiency of the Monte Carlo calculation.

  1. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    Science.gov (United States)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.

  2. Variance reduction techniques in the simulation of Markov processes

    International Nuclear Information System (INIS)

    We study a functional r of the stationary distribution of a homogeneous Markov chain. It is often difficult or impossible to perform the analytical calculation of r and so it is reasonable to estimate r by a simulation process. A consistent estimator r(n) of r is obtained with respect to a chain with a countable state space. Suitably modifying the estimator r(n) of r one obtains a new consistent estimator which has a smaller variance than r(n). The same is obtained in the case of a finite state space.

  3. Deflation as a Method of Variance Reduction for Estimating the Trace of a Matrix Inverse

    CERN Document Server

    Gambhir, Arjun Singh; Orginos, Kostas

    2016-01-01

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can b...
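
    Below is a compact sketch of the deflation idea, using a dense synthetic SPD matrix so that the exact inverse and eigenpairs are available (in practice both would come from a sparse solver and an iterative eigensolver); all sizes and spectra are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(12)
n, k, m = 200, 10, 200                   # matrix size, deflated modes, probes

# Synthetic SPD matrix whose few small eigenvalues dominate tr(A^{-1})
# and the stochastic-estimator variance.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
eigs = np.r_[np.linspace(1e-3, 1e-2, k), np.linspace(1.0, 10.0, n - k)]
A = (Q * eigs) @ Q.T
Ainv = np.linalg.inv(A)                  # stand-in for a sparse linear solver

z = rng.choice([-1.0, 1.0], size=(m, n))              # Rademacher probes
plain = np.einsum('ij,jk,ik->i', z, Ainv, z)          # z^T A^{-1} z per probe

w, V = np.linalg.eigh(A)                 # deflate the k smallest eigenpairs
P = np.eye(n) - V[:, :k] @ V[:, :k].T    # projector onto the remaining space
zp = z @ P
defl = np.einsum('ij,jk,ik->i', zp, Ainv, zp) + np.sum(1.0 / w[:k])

print("true       %10.2f" % np.trace(Ainv))
print("Hutchinson %10.2f  probe std %10.2f" % (plain.mean(), plain.std(ddof=1)))
print("deflated   %10.2f  probe std %10.2f" % (defl.mean(), defl.std(ddof=1)))
```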

  4. Application of variance reduction techniques in Monte Carlo simulation of clinical electron linear accelerator

    International Nuclear Information System (INIS)

    Computation time is an important and problematic parameter in Monte Carlo simulations, being inversely related to the statistical errors, which leads to the idea of using variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed. The best known are transport cutoffs, interaction forcing, bremsstrahlung splitting and Russian roulette. The use of a phase space also appears appropriate for greatly reducing the computing time. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code. This code offers a rich palette of variance reduction techniques, and in this study we investigated the various cards related to them. The parameters found in this study can be used efficiently in the MCNPX code. Final calculations are performed in two steps that are linked by a phase space. Results show that, compared to direct simulations (with neither variance reduction nor a phase space), the adopted method improves the simulation efficiency by a factor greater than 700.

  5. Verification of the history-score moment equations for weight-window variance reduction

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Clell J [Los Alamos National Laboratory; Sood, Avneet [Los Alamos National Laboratory; Booth, Thomas E [Los Alamos National Laboratory; Shultis, J. Kenneth [KANSAS STATE UNIV.

    2010-12-06

    The history-score moment equations that describe the moments of a Monte Carlo score distribution have been extended to weight-window variance reduction. The resulting equations have been solved deterministically to calculate the population variance of the Monte Carlo score distribution for a single tally. Results for one- and two-dimensional one-group problems are presented that predict the population variances to less than 1% deviation from the Monte Carlo for one-dimensional problems and between 1-2% for two-dimensional problems.

  6. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    International Nuclear Information System (INIS)

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of ''real'' particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a ''black box''. There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.

  7. Simulating individual-based models of bacterial chemotaxis with asymptotic variance reduction

    CERN Document Server

    Rousset, Mathias

    2011-01-01

    We discuss variance reduced simulations for an individual-based model of chemotaxis of bacteria with internal dynamics. The variance reduction is achieved via a coupling of this model with a simpler process in which the internal dynamics has been replaced by a direct gradient sensing of the chemoattractant concentrations. In the companion paper, we have rigorously shown, using a pathwise probabilistic technique, that both processes converge towards the same advection-diffusion process in the diffusive asymptotics. In this work, a direct coupling is achieved between paths of individual bacteria simulated by both models, by using the same sets of random numbers in both simulations. This coupling is used to construct a hybrid scheme with reduced variance. We first compute a deterministic solution of the kinetic density description of the direct gradient sensing model; the deviations due to the presence of internal dynamics are then evaluated via the coupled individual-based simulations. We show th...
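
    The coupling device (driving both models with the same random numbers) is the classical common-random-numbers trick, which the following sketch shows on two hypothetical stand-in models; the difference estimator has a far smaller error when the noise is shared than when it is independent.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 100_000
u = rng.normal(size=n)

fine = np.tanh(1.00 * u) ** 2        # stand-in for the internal-dynamics model
coarse = np.tanh(0.95 * u) ** 2      # stand-in for the gradient-sensing model

coupled = fine - coarse                                   # same noise in both
indep = fine - np.tanh(0.95 * rng.normal(size=n)) ** 2    # independent noise
print("coupled diff %.4e  std err %.2e" % (coupled.mean(), coupled.std(ddof=1) / np.sqrt(n)))
print("indep   diff %.4e  std err %.2e" % (indep.mean(), indep.std(ddof=1) / np.sqrt(n)))
```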

  8. A ''local'' exponential transform method for global variance reduction in Monte Carlo transport problems

    International Nuclear Information System (INIS)

    Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest, tend to increase the variance in other, presumably less important, regions. As such, these methods tend not to be as effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, ''Local'' Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine ''local'' biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation.

  9. Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, Kiyoshi; Yamamoto, Toshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [eds.]

    1998-03-01

    The 'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile a 'Guideline of Monte Carlo Calculation' which will be a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)

  10. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Blakeman, Edward D [ORNL; Peplow, Douglas E. [ORNL; Wagner, John C [ORNL; Murphy, Brian D [ORNL; Mueller, Don [ORNL

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  11. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    International Nuclear Information System (INIS)

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method

  12. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Vidal-Codina, F., E-mail: fvidal@mit.edu [Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Nguyen, N.C., E-mail: cuongng@mit.edu [Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk [Mathematical Institute, University of Oxford, Oxford (United Kingdom); Peraire, J., E-mail: peraire@mit.edu [Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2015-09-15

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.

  13. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 6. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Recently, it has been shown that the figure of merit (FOM) of Monte Carlo source-detector problems can be enhanced by using a variational rather than a direct functional to estimate the detector response. The direct functional, which is traditionally employed in Monte Carlo simulations, requires an estimate of the solution of the forward problem within the detector region. The variational functional is theoretically more accurate than the direct functional, but it requires estimates of the solutions of the forward and adjoint source-detector problems over the entire phase-space of the problem. In recent work, we have performed Monte Carlo simulations using the variational functional by (a) approximating the adjoint solution deterministically and representing this solution as a function in phase-space and (b) estimating the forward solution using Monte Carlo. We have called this general procedure variational variance reduction (VVR). The VVR method is more computationally expensive per history than traditional Monte Carlo because extra information must be tallied and processed. However, the variational functional yields a more accurate estimate of the detector response. Our simulations have shown that the VVR reduction in variance usually outweighs the increase in cost, resulting in an increased FOM. In recent work on source-detector problems, we have calculated the adjoint solution deterministically and represented this solution as a linear-in-angle, histogram-in-space function. This procedure has several advantages over previous implementations: (a) it requires much less adjoint information to be stored and (b) it is highly efficient for diffusive problems, due to the accurate linear-in-angle representation of the adjoint solution. (Traditional variance-reduction methods perform poorly for diffusive problems.) Here, we extend this VVR method to Monte Carlo criticality calculations, which are often diffusive and difficult for traditional variance-reduction methods.

  14. Enhancement of high-energy distribution tail in Monte Carlo semiconductor simulations using a Variance Reduction Scheme

    Directory of Open Access Journals (Sweden)

    Vincenza Di Stefano

    2009-11-01

    The Multicomb variance reduction technique has been introduced in the Direct Monte Carlo Simulation for submicrometric semiconductor devices. The method has been implemented in bulk silicon. The simulations show that the statistical variance of hot electrons is reduced with some computational cost. The method is efficient and easy to implement in existing device simulators.

  15. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    International Nuclear Information System (INIS)

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the 'hot' regions of the accelerator, information which is basic to developing a source model for this therapy tool.

  16. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 3. Fission Source Algorithms and Monte Carlo Variances

    International Nuclear Information System (INIS)

    Nuclear criticality safety and other neutronics analyses usually require a converged fission source for accurate eigenvalues and spatial distributions. While convergence may be rapid for compact systems, it can be either slow or erratic (or both) if a system contains loosely coupled multiplying components. This work is aimed at understanding the influence of Monte Carlo fission source algorithms on estimated fission rate distributions in two simple cases. The results show that sampling of fission sites should be avoided to the extent possible and that eliminating unnecessary sampling can reduce reaction rate estimate variances substantially and accordingly reduce the computational effort for reaction rate estimation. The fundamental purpose of Monte Carlo neutronics is to simulate faithfully the effects of fission on the neutron population. The methods employed vary among codes, but they must not generate biases or underestimates of uncertainties, and they ought to be computationally efficient. For example, a code may produce a potential fission either when a neutron collides or at absorption. The site weight can be the probability either of producing a fission neutron or of causing fission and may be adjusted by keff or some similar constant to keep the site population roughly constant. Potential sites are somehow selected for the site bank, and the starting neutrons for the next generation are then picked from the bank, perhaps re-sampled in some way to control the neutron population. The daughter neutron may be emitted with weight nu-bar or one. Using the VIM code, we have analyzed the fission site behavior of a simple system consisting of two thick homogeneous slabs of aqueous fissile solution separated by a thick slab of water in a symmetrical arrangement using 2000 histories/generation. Yamamoto et al. reported fluctuations of 75% in the instantaneous fission site populations in each slab, which is much larger than one expects in a Monte Carlo calculation.

  17. Variance reduction techniques for 14 MeV neutron streaming problem in rectangular annular bent duct

    Energy Technology Data Exchange (ETDEWEB)

    Ueki, Kotaro [Ship Research Inst., Mitaka, Tokyo (Japan)

    1998-03-01

    The Monte Carlo method is a powerful technique for solving a wide range of radiation transport problems. Its features are that it can solve the Boltzmann transport equation almost without approximation, and that the complexity of the systems to be treated rarely becomes a problem. However, Monte Carlo calculation is always accompanied by statistical errors called variance. In shielding calculations, the standard deviation or fractional standard deviation (FSD) is used frequently, and the expression of the FSD is shown. Radiation shielding problems are roughly divided into transmission through deep layers and streaming problems. In streaming problems, the large difference in the weight depending on the history of particles makes the FSD of the Monte Carlo calculation worse. The streaming experiment in the 14 MeV neutron rectangular annular bent duct, which is the typical streaming benchmark experiment carried out at OKTAVIAN of Osaka University, was analyzed by MCNP 4B, and the reduction of variance or FSD was attempted. The experimental system is shown. The analysis model by MCNP 4B, the input data and the results of analysis are reported, and the comparison with the experimental results is examined. (K.I.)
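
    For reference, the FSD mentioned here is the standard relative error of the sample mean of the history scores x_i, i = 1, ..., N (the definition used in MCNP-style tallies):

```latex
\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad
S^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2, \qquad
\mathrm{FSD} = \frac{S}{\bar{x}\sqrt{N}} .
```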

  18. Importance Sampling Variance Reduction for the Fokker-Planck Rarefied Gas Particle Method

    CERN Document Server

    Collyer, Benjamin; Lockerby, Duncan

    2015-01-01

    Models and methods that are able to accurately and efficiently predict the flows of low-speed rarefied gases are in high demand, due to the increasing ability to manufacture devices at micro and nano scales. One such model and method is a Fokker-Planck approximation to the Boltzmann equation, which can be solved numerically by a stochastic particle method. The stochastic nature of this method leads to noisy estimates of the thermodynamic quantities one wishes to sample when the signal is small in comparison to the thermal velocity of the gas. Recently, Gorji et al. have proposed a method which is able to greatly reduce the variance of the estimators, by creating a correlated stochastic process which acts as a control variate for the noisy estimates. However, there are potential difficulties involved when the geometry of the problem is complex, as the method requires the density to be solved for independently. Importance sampling is a variance reduction technique that has already been shown to successfully redu...

  19. Application of fuzzy sets to estimate cost savings due to variance reduction

    Science.gov (United States)

    Munoz, Jairo; Ostwald, Phillip F.

    1993-12-01

    One common assumption of models to evaluate the cost of variation is that the quality characteristic can be approximated by a standard normal distribution. Such an assumption is invalid for three important cases: (a) when the random variable is always positive, (b) when manual intervention distorts random variation, and (c) when the variable of interest is evaluated by linguistic terms. This paper applies the Weibull distribution to address nonnormal situations and fuzzy logic theory to study the case of quality evaluated via lexical terms. The approach concentrates on the cost incurred by inspection to formulate a probabilistic-possibilistic model that determines cost savings due to variance reduction. The model is tested with actual data from a manual TIG welding process.

  20. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs

    International Nuclear Information System (INIS)

    A method based on a combination of the variance-reduction techniques of particle splitting and Russian roulette is presented. This method improves the efficiency of radiation transport through linear accelerator geometries simulated with the Monte Carlo method. The method, named ‘splitting-roulette’, was implemented on the Monte Carlo code PENELOPE and tested on an Elekta linac, although it is general enough to be implemented on any other general-purpose Monte Carlo radiation transport code and linac geometry. Splitting-roulette uses either of two modes of splitting: simple splitting and ‘selective splitting’. Selective splitting is a new splitting mode based on the angular distribution of bremsstrahlung photons implemented in the Monte Carlo code PENELOPE. Splitting-roulette improves the simulation efficiency of an Elekta SL25 linac by a factor of 45. (paper)
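
    The splitting/Russian-roulette combination can be reduced to a single weight-window game. The sketch below is a generic illustration with hypothetical window bounds, not PENELOPE's implementation or the paper's selective splitting; the check at the end confirms that expected weight, and hence the mean, is preserved.

```python
import numpy as np

rng = np.random.default_rng(14)

def split_roulette(weight, w_low=0.5, w_high=2.0, rng=rng):
    """Apply a weight window, returning the list of surviving particle weights.

    Above w_high the particle is split; below w_low it plays Russian roulette;
    in between it is left alone. Expected total weight is preserved, which
    keeps the game unbiased."""
    if weight > w_high:
        m = int(np.ceil(weight / w_high))
        return [weight / m] * m
    if weight < w_low:
        return [w_low] if rng.uniform() < weight / w_low else []
    return [weight]

# Expected total weight equals the input weight for any starting value:
for w in (0.01, 0.7, 9.3):
    copies = [c for _ in range(100_000) for c in split_roulette(w)]
    print("w = %.2f  mean surviving weight %.4f" % (w, np.sum(copies) / 100_000))
```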

  1. Investigating the effectiveness of Variance Reduction Techniques in Manufacturing, Call Center and Cross-docking Discrete Event Simulation Models

    OpenAIRE

    Adewunmi, Adrian; Aickelin, Uwe

    2013-01-01

    Variance reduction techniques have been shown by others in the past to be a useful tool for reducing variance in simulation studies. However, their application and success have so far been mainly domain specific, with relatively few guidelines as to their general applicability, in particular for novices in this area. To facilitate their use, this study aims to investigate the robustness of individual techniques across a set of scenarios from different domains. Experimental results show th...

  2. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 5. New Zero-Variance Methods for Monte Carlo Criticality and Source-Detector Problems

    International Nuclear Information System (INIS)

    A zero-variance (ZV) Monte Carlo transport method is a theoretical construct that, if it could be implemented on a practical computer, would produce the exact result after any number of histories. Unfortunately, ZV methods are impractical; to implement them, one must have complete knowledge of a certain adjoint flux, and acquiring this knowledge is an infinitely greater task than solving the original criticality or source-detector problem. (In fact, the adjoint flux itself yields the desired result, with no need of a Monte Carlo simulation.) Nevertheless, ZV methods are of practical interest because it is possible to approximate them in ways that yield efficient variance-reduction schemes. Such implementations must be done carefully; for example, one must not change the mean of the final answer. The goal of variance reduction is to estimate the true mean with greater efficiency. In this paper, we describe new ZV methods for Monte Carlo criticality and source-detector problems. These methods have the same requirements (and disadvantages) as described earlier. However, their implementation is very different. Thus, the concept of approximating them to obtain practical variance-reduction schemes opens new possibilities. In previous ZV methods, (a) a single characteristic parameter (the k-eigenvalue or a detector response) of a forward transport problem is sought; (b) the exact solution of an adjoint problem must be known for all points in phase-space; and (c) a non-analog process, defined in terms of the adjoint solution, transports forward Monte Carlo particles from the source to the detector (in criticality problems, from the fission region, where a generation n fission neutron is born, back to the fission region, where generation n+1 fission neutrons are born). In the non-analog transport process, Monte Carlo particles (a) are born in the source region with weight equal to the desired characteristic parameter, (b) move through the system by an altered transport

  3. Implementation of background scattering variance reduction on the RapidNano particle scanner

    OpenAIRE

    van der Walle, P.; Hannemann, S.; Eijk, D. (Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands); Mulckhuyse, W.F.W.; Donck, J.C.J. van der

    2014-01-01

    The background in simple dark field particle inspection shows a high scatter variance which cannot be distinguished from signals by small particles. According to our models, illumination from different azimuths can reduce the background variance. A multi-azimuth illumination has been successfully integrated on the RapidNano particle scanner. This illumination method reduces the variance of the background scattering on substrate roughness. It allows for a lower setting of the detection thresh...

  4. Application of variance reduction technique to nuclear transmutation system driven by accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    In Japan, it is the basic policy to dispose of the high-level radioactive waste arising from spent nuclear fuel in stable deep strata after glass solidification. If the useful elements in the waste can be separated and utilized, resources are used effectively, and high economic efficiency and safety of disposal in strata can be expected. The Japan Atomic Energy Research Institute proposed the hybrid-type transmutation system, in which a high-intensity proton accelerator and a subcritical fast core are combined, or a nuclear reactor optimized exclusively for transmutation. The tungsten target, minor actinide nitride fuel transmutation system and the molten minor actinide chloride salt target fuel transmutation system are outlined, and the conceptual figures of both systems are shown. As the method of analysis, Version 2.70 of the Lahet Code System, developed by Los Alamos National Laboratory in the USA, was adopted. When analyzing an accelerator-driven subcritical core in the energy range below 20 MeV, variance reduction techniques must be applied. (K.I.)

  5. Validation of variance reduction techniques in Mediso (SPIRIT DH-V) SPECT system by Monte Carlo

    International Nuclear Information System (INIS)

    Monte Carlo simulation of nuclear medical imaging systems is a widely used method for reproducing their operation in a real clinical environment. There are several Single Photon Emission Tomography (SPECT) systems in Cuba, so it is clearly necessary to introduce a reliable and fast simulation platform in order to obtain consistent image data that reproduce the original measurement conditions. To fulfill these requirements, the Monte Carlo platform GAMOS (Geant4 Medicine Oriented Architecture for Applications) has been used. Due to the size and complex configuration of parallel-hole collimators in real clinical SPECT systems, Monte Carlo simulation usually consumes excessive time and computing resources. The main goal of the present work is to optimize the calculation efficiency by means of new GAMOS functionality. Two GAMOS variance reduction techniques were developed and validated to speed up the calculations. These procedures focus and limit the transport of gamma quanta inside the collimator. The results were assessed experimentally on the Mediso (SPIRIT DH-V) SPECT system. Main quality control parameters, such as sensitivity and spatial resolution, were determined. Differences of 4.6% in sensitivity and 8.7% in spatial resolution were found relative to manufacturer values. Simulation time was reduced by a factor of up to 650. Using these techniques, it was possible to perform several studies in almost 8 hours each. (Author)

  6. MCNP Variance Reduction technique application for the Development Of the Citrusdal Irradiation Facility

    Energy Technology Data Exchange (ETDEWEB)

    Makgae, R. [Pebble Bed Modular Reactor (PBMR), P.O. Box 9396, Centurion (South Africa)

    2008-07-01

    A private company, Citrus Research International (CIR), intends to construct an insect irradiation facility for the irradiation of insects for pest management in the south-western region of South Africa. The facility will employ a Co-60 cylindrical source in the chamber. An adequate thickness for the concrete shielding walls, and the ability of the labyrinth leading to the irradiation chamber to attenuate radiation to acceptably low dose rates, were determined. Two MCNP variance reduction techniques were applied to accommodate the two pathways: deep penetration, to evaluate the radiological impact outside the 150 cm concrete walls, and streaming of gamma photons through the labyrinth. The point-kernel based MicroShield software was used in the deep penetration calculations for the walls around the source room to test its accuracy, and the results obtained are in good agreement, with about 15-20% difference. The dose rate mapping due to radiation streaming along the labyrinth to the facility entrance is also to be validated with the Attila code, a deterministic code that solves the discrete ordinates approximation. (authors)

  7. MCNP Variance Reduction technique application for the Development Of the Citrusdal Irradiation Facility

    International Nuclear Information System (INIS)

    A private company, Citrus Research International (CRI), intends to construct an insect irradiation facility for pest management in the southwestern region of South Africa. The facility will employ a cylindrical Co-60 source in the irradiation chamber. An adequate thickness for the concrete shielding walls, and the ability of the labyrinth leading to the irradiation chamber to attenuate radiation to acceptably low dose rates, were determined. Two MCNP variance reduction techniques were applied to accommodate the two transport pathways: deep penetration, to evaluate the radiological impact outside the 150 cm concrete walls, and streaming of gamma photons through the labyrinth. The point-kernel based MicroShield software was used in the deep-penetration calculations for the walls around the source room to test its accuracy, and the results are in good agreement, with about a 15-20% difference. The dose rate mapping due to radiation streaming along the labyrinth to the facility entrance is also to be validated with the Attila code, a deterministic code that solves the transport equation using the discrete ordinates approximation. (authors)

  8. Implementation of hybrid variance reduction methods in a multi group Monte Carlo code for deep shielding problems

    Energy Technology Data Exchange (ETDEWEB)

    Somasundaram, E.; Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97332-5902 (United States)

    2013-07-01

    In this paper, the work that has been done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila, is presented. This project aims to develop an integrated hybrid code that seamlessly takes advantage of both deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross-section library, and source definitions. Tortilla can also read importance functions (such as the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods use the results of an adjoint deterministic calculation to bias particle transport through source biasing, survival biasing, transport biasing, and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)
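
    The CADIS construction named above has a compact algebraic form: given an adjoint (importance) flux phi+(p) for the response of interest, the biased source is q^(p) = q(p)*phi+(p)/R with R = sum(q*phi+), and a particle born from q^ starts with weight w(p) = R/phi+(p), which also serves as the weight-window center. A minimal cell-indexed sketch (the arrays stand in for the Attila-generated mesh quantities):

```python
import numpy as np

def cadis_parameters(q, adjoint_flux):
    """CADIS source biasing from a cell-wise adjoint flux.
    q            : analog source strength per cell
    adjoint_flux : adjoint (importance) flux per cell for the response
    Returns the biased source pdf, consistent birth weights, and the
    first-order response estimate R used for normalization."""
    q = np.asarray(q, dtype=float)
    phi_dag = np.asarray(adjoint_flux, dtype=float)
    R = np.sum(q * phi_dag)          # estimate of the response
    biased_pdf = q * phi_dag / R     # sample source cells from this pdf
    birth_weight = R / phi_dag       # w = q/biased_pdf, meaningful where q > 0
    return biased_pdf, birth_weight, R

# Toy 5-cell problem: importance rises toward a detector at the last cell.
pdf, w, R = cadis_parameters(q=[1, 1, 1, 0, 0],
                             adjoint_flux=[0.01, 0.05, 0.2, 1.0, 4.0])
print(pdf, w, R)
```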

  9. Implementation of hybrid variance reduction methods in a multi group Monte Carlo code for deep shielding problems

    International Nuclear Information System (INIS)

    In this paper, the work that has been done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila, is presented. This project aims to develop an integrated hybrid code that seamlessly takes advantage of both deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross-section library, and source definitions. Tortilla can also read importance functions (such as the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods use the results of an adjoint deterministic calculation to bias particle transport through source biasing, survival biasing, transport biasing, and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)

  10. Advanced digital signal processing and noise reduction

    CERN Document Server

    Vaseghi, Saeed V

    2008-01-01

    Digital signal processing plays a central role in the development of modern communication and information processing systems. The theory and application of signal processing is concerned with the identification, modelling and utilisation of patterns and structures in a signal process. The observation signals are often distorted, incomplete and noisy and therefore noise reduction, the removal of channel distortion, and replacement of lost samples are important parts of a signal processing system. The fourth edition of Advanced Digital Signal Processing and Noise Reduction updates an

  11. Advanced sludge reduction and phosphorous removal process

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An advanced sludge reduction process, i.e. a combined sludge reduction and phosphorus removal process, was developed. The results show that excellent sludge reduction and biological phosphorus removal can be achieved in this system. When the influent chemical oxygen demand ρ(COD) is 332-420 mg/L, ammonia concentration ρ(NH3-N) is 30-40 mg/L, and total phosphorus concentration ρ(TP) is 6.0-9.0 mg/L, the system still ensures ρ(COD) < 23 mg/L, ρ(NH3-N) < 3.2 mg/L, and ρ(TP) < 0.72 mg/L in the effluent. Moreover, when the dissolved oxygen concentration ρ(DO) is around 1.0 mg/L, sludge production is less than 0.140 g per gram of COD consumed, and phosphorus removal exceeds 91%. In addition, 48.4% of total nitrogen is removed by simultaneous nitrification and denitrification.

  12. A 'local' exponential transform method for global variance reduction in Monte Carlo transport problems

    International Nuclear Information System (INIS)

    We develop a 'Local' Exponential Transform method which distributes the particles nearly uniformly across the system in Monte Carlo transport calculations. An exponential approximation to the continuous transport equation is used in each mesh cell to formulate biasing parameters. The biasing parameters, which resemble those of the conventional exponential transform, tend to produce a uniform sampling of the problem geometry when applied to a forward Monte Carlo calculation, and thus they help to minimize the maximum variance of the flux. Unlike the conventional exponential transform, the biasing parameters are spatially dependent, and are automatically determined from a forward diffusion calculation. We develop two versions of the forward Local Exponential Transform method, one with spatial biasing only, and one with spatial and angular biasing. The method is compared to conventional geometry splitting/Russian roulette for several sample one-group problems in X-Y geometry. The forward Local Exponential Transform method with angular biasing is found to produce better results than geometry splitting/Russian roulette in terms of minimizing the maximum variance of the flux. (orig.)
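
    The conventional exponential transform that this method generalizes can be sketched directly: the distance to collision is sampled from a stretched density with modified cross section Sigma* = Sigma*(1 - p*mu), where mu is the cosine between the flight direction and the preferred direction, and the particle weight absorbs the likelihood ratio. The sketch below uses a constant, user-chosen stretching parameter; the 'Local' method of the abstract would instead derive spatially dependent parameters from a forward diffusion calculation:

```python
import math
import random

def exp_transform_flight(sigma_t, mu, p):
    """Sample a collision distance under the exponential transform.
    sigma_t : analog total cross section (1/cm)
    mu      : cosine between flight direction and the preferred direction
    p       : stretching parameter, |p*mu| < 1 (p > 0 stretches forward flights)
    Returns (distance, weight multiplier); the multiplier keeps the game unbiased."""
    sigma_star = sigma_t * (1.0 - p * mu)        # biased cross section
    s = -math.log(random.random()) / sigma_star  # flight distance
    # likelihood ratio of analog vs. biased collision-distance densities
    weight = (sigma_t * math.exp(-sigma_t * s)) / (sigma_star * math.exp(-sigma_star * s))
    return s, weight

# Particle moving along the preferred direction (mu = 1) with p = 0.5:
# flights are stretched on average and the weight compensates.
print(exp_transform_flight(sigma_t=1.0, mu=1.0, p=0.5))
```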

  13. A Development of Multi-response CADIS Method for the Optimization of Variance Reduction in Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Variance reduction methods can be classified into three technical categories: source, collision, and transport biasing. All variance reduction techniques require specific parameters to control the transport probabilities. One well-known method for determining optimized transport probabilities is the Consistent Adjoint Driven Importance Sampling (CADIS) method, which uses the adjoint function to reduce the error of a response. CADIS gives high variance reduction efficiency for a single response in any problem, but it cannot properly reduce the individual relative errors in cases with two or more responses. In this study, a multi-response CADIS method was derived by considering the position of each response, and a weight decision equation was derived to minimize the relative errors in the various tally regions. For verification, a shielding problem was set up and the method was applied to source angular biasing in Monte Carlo simulations; the results were compared with those of the standard CADIS approach and the analog MC method. The analysis shows that the relative error in each tally region is reduced more successfully and efficiently over all regions than with the other methods. The proposed method can be utilized for accurate calculation of various radiation transport problems as well as for saving calculation time, and it is therefore expected to improve the expandability of Monte Carlo simulation
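
    The abstract does not reproduce the weight decision equation, so the following is only one plausible reading of the idea: combine the per-response adjoint fluxes after normalizing each by its own response estimate, so that no single tally dominates the importance map fed to the usual CADIS machinery. This combination rule is our assumption, not the authors' published equation:

```python
import numpy as np

def combined_adjoint(q, adjoint_fluxes):
    """Combine adjoint fluxes of several responses into one importance map.
    q              : analog source per cell, shape (ncell,)
    adjoint_fluxes : per-response adjoint fluxes, shape (nresp, ncell)
    Each adjoint is scaled by 1/R_i so every response contributes equally."""
    q = np.asarray(q, dtype=float)
    phis = np.asarray(adjoint_fluxes, dtype=float)
    R = phis @ q                         # R_i = sum over cells of q * phi_i
    return np.sum(phis / R[:, None], axis=0)

phi_combined = combined_adjoint(
    q=[1, 1, 0, 0],
    adjoint_fluxes=[[0.1, 0.4, 2.0, 0.1],    # tally near cell 2
                    [0.1, 0.1, 0.2, 3.0]])   # tally near cell 3
print(phi_combined)   # use in place of a single-response adjoint flux
```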

  14. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    International Nuclear Information System (INIS)

    We address the problem of estimating steady-state quantities associated with systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods

  15. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    Science.gov (United States)

    Milias-Argeitis, Andreas; Lygeros, John; Khammash, Mustafa

    2014-07-01

    We address the problem of estimating steady-state quantities associated with systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.
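
    Shadow-function estimators are, at heart, control variates for steady-state averages: one subtracts a quantity with known stationary expectation that is correlated with the fluctuations of the running average. The full construction solves an approximate Poisson equation for the chain, which is beyond a short sketch, but the underlying variance-reduction identity is the ordinary control-variate estimator shown below, with generic placeholders rather than a chemical kinetics model:

```python
import numpy as np

rng = np.random.default_rng(0)

def control_variate_mean(f_samples, g_samples, g_mean=0.0):
    """Estimate E[f] using g (with known mean) as a control variate:
    f_cv = f - beta*(g - E[g]), beta chosen to minimize the variance.
    Returns (estimate, estimated variance of the estimate)."""
    f = np.asarray(f_samples, dtype=float)
    g = np.asarray(g_samples, dtype=float)
    beta = np.cov(f, g, ddof=1)[0, 1] / np.var(g, ddof=1)
    f_cv = f - beta * (g - g_mean)
    return f_cv.mean(), f_cv.var(ddof=1) / len(f_cv)

# Toy demo: estimate E[exp(X)] for X ~ N(0,1) (true value e^0.5 ~ 1.6487)
# using g = X with known mean E[g] = 0 as the control.
x = rng.standard_normal(10_000)
f = np.exp(x)
print("plain:", f.mean(), f.var(ddof=1) / f.size)
print("ctrl :", control_variate_mean(f, x))
```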

  16. VR-BFDT: A variance reduction based binary fuzzy decision tree induction method for protein function prediction.

    Science.gov (United States)

    Golzari, Fahimeh; Jalili, Saeed

    2015-07-21

    In the protein function prediction (PFP) problem, the goal is to predict the functions of numerous well-sequenced proteins whose functions are not yet known precisely. PFP is a special and complex machine learning problem in which a protein (regarded as an instance) may have more than one function simultaneously. Furthermore, the functions (regarded as classes) are interdependent and organized in a hierarchical structure, in the form of a tree or a directed acyclic graph. One common learning method proposed for this problem is the decision tree, in which partitioning the data into sets with sharp boundaries means that small changes in the attribute values of a new instance may incorrectly change its predicted label and finally cause misclassification. In this paper, a Variance Reduction based Binary Fuzzy Decision Tree (VR-BFDT) algorithm is proposed to predict protein functions. The algorithm fuzzifies only the decision boundaries instead of converting the numeric attributes into fuzzy linguistic terms. It can assign multiple functions to each protein simultaneously and preserves the hierarchy consistency between functional classes. It uses label variance reduction as the splitting criterion to select the best "attribute-value" at each node of the decision tree. The experimental results show that the overall performance of the proposed algorithm is promising. PMID:25865524
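
    The splitting criterion named in the abstract is the classic variance-reduction score used in regression trees: the chosen "attribute-value" pair is the one that most decreases the label variance when the node's instances are partitioned. A minimal crisp (non-fuzzy) sketch of that score, for illustration only:

```python
import numpy as np

def variance_reduction(labels, mask):
    """Variance reduction from splitting `labels` by boolean `mask`:
    Var(parent) minus the weighted average of the child variances."""
    y = np.asarray(labels, dtype=float)
    left, right = y[mask], y[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    w_l, w_r = len(left) / len(y), len(right) / len(y)
    return y.var() - (w_l * left.var() + w_r * right.var())

def best_split(x, y):
    """Scan candidate thresholds on one numeric attribute and return the
    attribute value with the largest variance reduction."""
    x = np.asarray(x, dtype=float)
    thresholds = np.unique(x)[:-1]     # split between distinct values
    scores = [variance_reduction(y, x <= t) for t in thresholds]
    i = int(np.argmax(scores))
    return thresholds[i], scores[i]

print(best_split(x=[1, 2, 3, 10, 11, 12], y=[0.1, 0.2, 0.1, 5.0, 5.2, 4.9]))
```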

  17. Analysis of latent variance reduction methods in phase space Monte Carlo calculations for 6, 10 and 18 MV photons by using MCNP code

    International Nuclear Information System (INIS)

    In this study, the azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods were implemented in the MCNPX 2.4 source code. First, the efficiency of these methods was compared for two tallying methods: APRS is more efficient than APR in track-length estimator tallies, whereas in the energy deposition tally both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10, and 18 MV photons, and APRS relative efficiency contours were mapped. These contours reveal that with increasing photon energy, the contour depth and the surrounding areas increase further. The relative efficiency contours indicate that the variance reduction factor is position and energy dependent. The out-of-field voxel contours show that latent variance reduction methods increase the Monte Carlo (MC) simulation efficiency in out-of-field voxels. For a splitting number of 1000, the average variance reduction factors of APR and APRS differ by less than 0.6%.
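
    Both techniques exploit the rotational symmetry of a linac head about the beam axis. Rotational splitting, for instance, replaces each phase-space particle with N copies rotated to uniformly spaced azimuths, each carrying 1/N of the original weight. A minimal sketch of that rotation step (the array layout is an assumption, not the MCNPX phase-space format):

```python
import numpy as np

def azimuthal_rotational_split(pos, dirn, weight, n_split):
    """Split one particle into n_split azimuthal copies about the z (beam) axis.
    pos, dirn : length-3 sequences (x, y, z) and (u, v, w)
    Returns position and direction arrays of shape (n_split, 3) and the
    per-copy weight that preserves the total weight."""
    phis = 2.0 * np.pi * np.arange(n_split) / n_split
    c, s = np.cos(phis), np.sin(phis)
    x, y, z = pos
    u, v, w = dirn
    new_pos = np.stack([c * x - s * y, s * x + c * y, np.full_like(c, z)], axis=1)
    new_dir = np.stack([c * u - s * v, s * u + c * v, np.full_like(c, w)], axis=1)
    return new_pos, new_dir, weight / n_split

p, d, w = azimuthal_rotational_split([1.0, 0.0, -5.0], [0.1, 0.0, 0.995], 1.0, 4)
print(p, d, w)
```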

  18. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    Science.gov (United States)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
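
    A representative variance reduction technique in this setting is forced interaction: rather than letting most photons traverse a thin or dilute sample without interacting, the interaction point is sampled from the exponential density truncated to the intersected path, and the particle weight absorbs the interaction probability. A generic sketch of that step (this is not the actual API of the code described):

```python
import math
import random

def forced_interaction(mu, path_length):
    """Force an interaction inside a segment of length `path_length`.
    mu : linear attenuation coefficient along the segment (1/cm)
    Samples from the exponential density truncated to [0, path_length];
    the returned weight multiplier P_int = 1 - exp(-mu*L) keeps the
    estimator unbiased."""
    p_int = 1.0 - math.exp(-mu * path_length)
    xi = random.random()
    s = -math.log(1.0 - xi * p_int) / mu   # inverse CDF of the truncated law
    return s, p_int

# Thin sample: only ~1% of analog photons would interact, but every forced
# history contributes, carrying a weight of ~0.01.
print(forced_interaction(mu=0.01, path_length=1.0))
```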

  19. Diversification in the driveway: mean-variance optimization for greenhouse gas emissions reduction from the next generation of vehicles

    International Nuclear Information System (INIS)

    Modern portfolio theory is applied to the problem of selecting which vehicle technologies and fuels to use in the next generation of vehicles. Selecting vehicles with the lowest lifetime cost is complicated by the fact that future prices are uncertain, just as selecting securities for an investment portfolio is complicated by the fact that future returns are uncertain. A quadratic program is developed based on modern portfolio theory, with the objective of minimizing the expected lifetime cost of the 'vehicle portfolio'. Constraints limit greenhouse gas emissions, as well as the variance of the cost. A case study is performed for light-duty passenger vehicles in the United States, drawing emissions and usage data from the US Environmental Protection Agency's MOVES and Department of Energy's GREET models, among other sources. Four vehicle technologies are considered: conventional gasoline, conventional diesel, grid-independent (non-plug-in) gasoline-electric hybrid, and flex fuel using E85. Results indicate that much of the uncertainty surrounding cost stems from fuel price fluctuations, and that fuel efficient vehicles can lower cost variance. Hybrids exhibit the lowest cost variances of the technologies considered, making them an arguably financially conservative choice.
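
    The quadratic program described has the standard Markowitz form: choose technology shares x to minimize the expected lifetime cost c'x subject to an emissions cap, a cap on the cost variance x'Sx, and shares summing to one. A minimal sketch; all numbers are made-up placeholders, not values drawn from MOVES or GREET:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected lifetime costs (k$), cost covariance, and lifetime
# GHG emissions (t CO2e) for: gasoline, diesel, hybrid, flex fuel (E85).
cost = np.array([55.0, 57.0, 54.0, 56.0])
cov = np.array([[9.0, 6.0, 3.0, 7.0],
                [6.0, 8.0, 2.5, 5.0],
                [3.0, 2.5, 2.0, 3.0],
                [7.0, 5.0, 3.0, 10.0]])
ghg = np.array([60.0, 55.0, 40.0, 45.0])
ghg_cap, var_cap = 50.0, 4.0

constraints = [
    {"type": "eq",   "fun": lambda x: x.sum() - 1.0},          # shares sum to 1
    {"type": "ineq", "fun": lambda x: ghg_cap - ghg @ x},      # emissions cap
    {"type": "ineq", "fun": lambda x: var_cap - x @ cov @ x},  # variance cap
]
res = minimize(lambda x: cost @ x, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, constraints=constraints)
print(res.x.round(3), float(cost @ res.x))
```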

  20. Targeted reduction of advanced glycation improves renal function in obesity

    DEFF Research Database (Denmark)

    Harcourt, Brooke E; Sourris, Karly C; Coughlan, Melinda T;

    2011-01-01

    Obesity is highly prevalent in Western populations and is considered a risk factor for the development of renal impairment. Interventions that reduce the tissue burden of advanced glycation end-products (AGEs) have shown promise in stemming the progression of chronic disease. Here we tested if...... function and an inflammatory profile (monocyte chemoattractant protein-1 (MCP-1) and macrophage migration inhibitory factor (MIF)) were improved following the low-AGE diet. Mechanisms of advanced glycation-related renal damage were investigated in a mouse model of obesity using the AGE......, and renal oxidative stress. Alagebrium treatment, however, resulted in decreased weight gain and improved glycemic control compared with wild-type mice on a high-fat Western diet. Thus, targeted reduction of the advanced glycation pathway improved renal function in obesity....

  1. Concerned items on variance reduction method of monte carlo calculation written in published literatures. A logic of monte carlo calculation=from experience to science

    International Nuclear Information System (INIS)

    In fixed-source problems, such as neutron deep-penetration calculations with the Monte Carlo method, the application of variance reduction methods is most important for achieving a high figure of merit (FOM) and the most reliable calculation. However, the MCNP calculation inputs given in published literature are not necessarily the best solutions. The items of greatest concern are the method for setting the lower weight bound in the weight window method and the exclusion radius for a point estimator. In those publications, the lower weight bound is estimated by engineering judgment or by the weight window generator in MCNP; in the latter case, the lower weight bound is used without any tuning process. Because of abnormally large lower weight bounds, many neutrons are killed pointlessly by Russian roulette. The adjoint flux method for setting the lower weight bound should be adopted as a standard variance reduction method. Monte Carlo calculation should thus turn from experience, such as engineering judgment, to science, such as the adjoint method. (author)
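
    The adjoint prescription advocated here sets the weight-window center in each cell inversely proportional to the adjoint flux, normalized so that source particles are born inside their window; the lower bound is then the center divided by the window width ratio. A minimal sketch of that normalization (the mesh arrays and width ratio are illustrative choices, not an MCNP input format):

```python
import numpy as np

def weight_window_bounds(adjoint_flux, source_cells, width_ratio=5.0):
    """Cell-wise weight-window bounds from an adjoint flux.
    Window centers are k/adjoint_flux, with k chosen so that unit-weight
    source particles sit at the window center in the source region; bounds
    are center/width_ratio and center*width_ratio."""
    phi = np.asarray(adjoint_flux, dtype=float)
    k = np.mean(phi[source_cells])      # normalize in the source region
    center = k / phi
    return center / width_ratio, center * width_ratio

# Deep-penetration toy problem: importance grows away from source cell 0,
# so the windows (and thus the lower bounds) shrink toward the detector.
lo, hi = weight_window_bounds([1.0, 3.0, 10.0, 40.0, 150.0], source_cells=[0])
print(lo.round(5), hi.round(2))
```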

  2. Fluid Mechanics, Drag Reduction and Advanced Configuration Aeronautics

    Science.gov (United States)

    Bushnell, Dennis M.

    2000-01-01

    This paper discusses Advanced Aircraft configurational approaches across the speed range, which are either enabled, or greatly enhanced, by clever Flow Control. Configurations considered include Channel Wings with circulation control for VTOL (but non-hovering) operation with high cruise speed, strut-braced CTOL transports with wingtip engines and extensive ('natural') laminar flow control, a midwing double fuselage CTOL approach utilizing several synergistic methods for drag-due-to-lift reduction, a supersonic strut-braced configuration with order of twice the L/D of current approaches and a very advanced, highly engine flow-path-integrated hypersonic cruise machine. This paper indicates both the promise of synergistic flow control approaches as enablers for 'Revolutions' in aircraft performance and fluid mechanic 'areas of ignorance' which impede their realization and provide 'target-rich' opportunities for Fluids Research.

  3. Cycle update : advanced fuels and technologies for emissions reduction

    Energy Technology Data Exchange (ETDEWEB)

    Smallwood, G. [National Research Council of Canada, Ottawa, ON (Canada)

    2009-07-01

    This paper provided a summary of key achievements of the Program of Energy Research and Development advanced fuels and technologies for emissions reduction (AFTER) program over the funding cycle from fiscal year 2005/2006 to 2008/2009. The purpose of the paper was to inform interested parties of recent advances in knowledge and in science and technology capacities in a concise manner. The paper discussed the high level research and development themes of the AFTER program through the following 4 overarching questions: how could advanced fuels and internal combustion engine designs influence emissions; how could emissions be reduced through the use of engine hardware including aftertreatment devices; how do real-world duty cycles and advanced technology vehicles operating on Canadian fuels compare with existing technologies, models and estimates; and what are the health risks associated with transportation-related emissions. It was concluded that the main issues regarding the use of biodiesel blends in current technology diesel engines are the lack of consistency in product quality; shorter shelf life of biodiesel due to poorer oxidative stability; and a need to develop characterization methods for the final oxygenated product because most standard methods are developed for hydrocarbons and are therefore inadequate. 2 tabs., 13 figs.

  4. Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs

    International Nuclear Information System (INIS)

    This paper gives an overview of methods to inherently prevent human errors and to effectively mitigate their consequences by securing defense-in-depth during plant management through an advanced man-machine interface system (MMIS). It is needless to stress the significance of reducing human error during accidents in nuclear power plants (NPPs). Unexpected shutdowns caused by human error not only threaten nuclear safety but also severely lower public acceptance of nuclear power. We have to recognize that human errors will always be possible, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve this situation through advanced information and communication technologies, on the basis of lessons learned from our experience. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. From this investigation, we outline the concept and the technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to lead the future direction of related research and ultimately supplement the safety of NPPs

  5. Potential for Landing Gear Noise Reduction on Advanced Aircraft Configurations

    Science.gov (United States)

    Thomas, Russell H.; Nickol, Craig L.; Burley, Casey L.; Guo, Yueping

    2016-01-01

    The potential of significantly reducing aircraft landing gear noise is explored for aircraft configurations with engines installed above the wings or the fuselage. An innovative concept is studied that does not alter the main gear assembly itself but does shorten the main strut and integrates the gear in pods whose interior surfaces are treated with acoustic liner. The concept is meant to achieve maximum noise reduction so that main landing gears can be eliminated as a major source of airframe noise. By applying this concept to an aircraft configuration with 2025 entry-into-service technology levels, it is shown that compared to noise levels of current technology, the main gear noise can be reduced by 10 EPNL dB, bringing the main gear noise close to a floor established by other components such as the nose gear. The assessment of the noise reduction potential accounts for design features for the advanced aircraft configuration and includes the effects of local flow velocity in and around the pods, gear noise reflection from the airframe, and reflection and attenuation from acoustic liner treatment on pod surfaces and doors. A technical roadmap for maturing this concept is discussed, and the possible drag increase at cruise due to the addition of the pods is identified as a challenge, which needs to be quantified and minimized possibly with the combination of detailed design and application of drag reduction technologies.

  6. Reduction of repository heat load using advanced fuel cycles

    International Nuclear Information System (INIS)

    With the geologic repository at Yucca Mountain already nearing its full capacity before opening, advanced fuel cycles that introduce reprocessing, fast reactors, and temporary storage sites have the potential to allow the repository to support the current reactor fleet and future expansion. An uncertainty analysis methodology combining Monte Carlo distribution sampling, reactor physics data simulation, and neural network interpolation enables investigation of the reduction factor in repository heat load achievable with the hybrid fuel cycle. Using a Super PRISM fast reactor with a conversion ratio of 0.75, burnups reach up to 200 MWd/t, which decreases the plutonium inventory by about 5 metric tons every 12 years. With this long burnup, the footprint of a single core loading of FR fuel, with an integral decay heat of about 2.5x10^5 MW*yr over a 1500-year period, replaces the footprint of about 6 full core loadings of LWR fuel (the number required to fuel the FR for the same number of years), which have an integral decay heat of about .3 MW*yr for the same time integral. This results in an increase by a factor of 4 in repository support capacity from implementing a single fast reactor in an equilibrium cycle. (authors)

  7. Advanced Acoustic Blankets for Improved Aircraft Interior Noise Reduction Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this project advanced acoustic blankets for improved low frequency interior noise control in aircraft will be developed and demonstrated. The improved...

  8. [Advances in molecular mechanism of bacterial reduction of hexavalent chromium].

    Science.gov (United States)

    Li, Dou; Zhao, You-Cai; Song, Li-Yan; Yin, Ya-Jie; Wang, Yang-Qing; Xu, Zhong-Hui

    2014-04-01

    Cr(VI) causes serious environmental pollution due to its carcinogenicity, teratogenicity, and strong mobility. Reduction of Cr(VI) to Cr(III), which precipitates and is much less toxic, is an efficient strategy for controlling Cr pollution. Within this strategy, bacterial reduction of Cr(VI) to Cr(III) has been considered one of the best bioremediation methods because of its efficiency, environmental friendliness, and low cost; however, the molecular mechanism remains largely unknown. This review summarizes the bacterial species capable of Cr(VI) reduction and their application in pollution control, elaborates the pathways of Cr(VI) reduction and the functional proteins involved, draws conclusions about the molecular mechanism of bacterial Cr(VI) reduction, and discusses directions for future research. PMID:24946623

  9. Advanced Acoustic Blankets for Improved Aircraft Interior Noise Reduction Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed Phase II research effort is to develop heterogeneous (HG) blankets for improved sound reduction in aircraft structures. Phase I...

  10. Variance Risk Premia

    OpenAIRE

    Peter Carr; Liuren Wu

    2004-01-01

    We propose a direct and robust method for quantifying the variance risk premium on financial assets. We theoretically and numerically show that the risk-neutral expected value of the return variance, also known as the variance swap rate, is well approximated by the value of a particular portfolio of options. Ignoring the small approximation error, the difference between the realized variance and this synthetic variance swap rate quantifies the variance risk premium. Using a large options data...
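
    The option portfolio referred to in the abstract is the static replication of the log contract: the synthetic variance swap rate is approximated by a sum of out-of-the-money option prices weighted by dK/K^2. A discretized sketch of that approximation; the quotes below are placeholders, not market data:

```python
import numpy as np

def variance_swap_rate(strikes, otm_prices, r, T):
    """Approximate the variance swap rate from OTM option quotes:
    K_var ~ (2*e^{rT}/T) * sum_i (dK_i / K_i^2) * Q(K_i),
    where Q(K) is the OTM put (K < forward) or call (K >= forward) price."""
    K = np.asarray(strikes, dtype=float)
    Q = np.asarray(otm_prices, dtype=float)
    dK = np.gradient(K)                 # strike spacing
    return (2.0 * np.exp(r * T) / T) * np.sum(dK / K**2 * Q)

# Placeholder 3-month quotes around a forward of 100: puts below, calls above.
K = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120], dtype=float)
Q = np.array([0.4, 0.8, 1.6, 3.1, 5.5, 3.0, 1.5, 0.7, 0.3])
k_var = variance_swap_rate(K, Q, r=0.01, T=0.25)
print(k_var, np.sqrt(k_var))            # swap rate and implied volatility
```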

  11. Recent Advances in Electrical Resistance Preheating of Aluminum Reduction Cells

    Science.gov (United States)

    Ali, Mohamed Mahmoud; Kvande, Halvor

    2016-06-01

    There are two main preheating methods in use today for aluminum reduction cells. One is electrical resistance preheating with a thin bed of small coke and/or graphite particles between the anodes and the cathode carbon blocks; the other is flame preheating, where two or more gas or oil burners are used. Electrical resistance preheating is the older method, but it is still frequently used by different aluminum producers, and many improvements have been made to it by different companies over the last decade. In this paper, important points pertaining to the preparation and preheating of these cells, measurements made during the preheating process, and evaluation of preheating performance are illustrated. The preheating times of these cells were found to be between 36 h and 96 h for cell currents between 176 kA and 406 kA, with resistance bed thicknesses between 13 mm and 60 mm. The average cathode surface temperature at the end of preheating was usually between 800°C and 950°C. The effect of the preheating method on cell life is unclear, and no quantifiable conclusions can be drawn. Some work carried out in the mathematical modeling area is also discussed. It is concluded that more studies of real preheated cells on the basis of actual measurements are needed. The expected development in electrical resistance preheating of aluminum reduction cells is also summarized.

  12. Simple Variance Swaps

    OpenAIRE

    Ian Martin

    2011-01-01

    The large asset price jumps that took place during 2008 and 2009 disrupted volatility derivatives markets and caused the single-name variance swap market to dry up completely. This paper defines and analyzes a simple variance swap, a relative of the variance swap that in several respects has more desirable properties. First, simple variance swaps are robust: they can be easily priced and hedged even if prices can jump. Second, simple variance swaps supply a more accurate measure of market-imp...

  13. Advanced Exploration Systems (AES) Logistics Reduction and Repurposing Project: Advanced Clothing Ground Study Final Report

    Science.gov (United States)

    Byrne, Vicky; Orndoff, Evelyne; Poritz, Darwin; Schlesinger, Thilini

    2013-01-01

    All human space missions require significant logistical mass and volume that will become an excessive burden for long duration missions beyond low Earth orbit. The goal of the Advanced Exploration Systems (AES) Logistics Reduction & Repurposing (LRR) project is to bring new ideas and technologies that will enable human presence in farther regions of space. The LRR project has five tasks: 1) Advanced Clothing System (ACS) to reduce clothing mass and volume, 2) Logistics to Living (L2L) to repurpose existing cargo, 3) Heat Melt Compactor (HMC) to reprocess materials in space, 4) Trash to Gas (TTG) to extract useful gases from trash, and 5) Systems Engineering and Integration (SE&I) to integrate these logistical components. The current International Space Station (ISS) crew wardrobe has already evolved not only to reduce some of the logistical burden but also to address crew preference. The ACS task is to find ways to further reduce this logistical burden while examining human response to different types of clothes. The ACS task has been broken into a series of studies on length of wear of various garments: 1) three small studies conducted through other NASA projects (MMSEV, DSH, HI-SEAS) focusing on length of wear of garments treated with an antimicrobial finish; 2) a ground study, which is the subject of this report, addressing both length of wear and subject perception of various types of garments worn during aerobic exercise; and 3) an ISS study replicating the ground study, and including every day clothing to collect information on perception in reduced gravity in which humans experience physiological changes. The goal of the ground study is first to measure how long people can wear the same exercise garment, depending on the type of fabric and the presence of antimicrobial treatment, and second to learn why. Human factors considerations included in the study consist of the Institutional Review Board approval, test protocol and participants' training, and a web

  14. Variance Effects in Cyclic Production Systems

    OpenAIRE

    Debashish Sarkar; Willard I. Zangwill

    1991-01-01

    Utilizing a cyclic queue system, this paper investigates the effect of variance on a multi-item production facility. The variance of setup time, service rate and arrival rate is shown to have a powerful and sometimes paradoxical influence. Reduction in setup time, for example, is usually presumed to reduce inventory. We demonstrate that inventory can blow up if setup time is cut. Another paradoxical effect of variance is on processing rate. Speeding up the processing rate should reduce the ma...

  15. Variance bounding Markov chains

    OpenAIRE

    Roberts, Gareth O.; Jeffrey S. Rosenthal

    2008-01-01

    We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L2 functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis–Hastings algorithms.

  16. Reduction of anthropogenic environmental influences by advanced and optimized technologies / Pollmann O.A.

    OpenAIRE

    Pollmann, Olaf Axel.

    2012-01-01

    Sustainable development and resource efficiency are common global strategies of the 21st century. Humankind's current consumption of natural resources has gone far over the limit: covering today's worldwide resource consumption would require the productivity of 1.5 Earths. The work "Reduction of anthropogenic environmental influences by advanced and optimized technologies" discusses the problem of advanced resource efficiency in connection with mining activities in South Afric...

  17. Experiment and mechanism investigation on advanced reburning for NOx reduction: influence of CO and temperature

    Institute of Scientific and Technical Information of China (English)

    WANG Zhi-hua; ZHOU Jun-hu; ZHANG Yan-wei; LU Zhi-min; FAN Jian-ren; CEN Ke-fa

    2005-01-01

    Pulverized coal reburning, ammonia injection, and advanced reburning were investigated in a pilot-scale drop tube furnace. A premixed flow of petroleum gas, air, and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%-25% reburn heat input, temperatures from 1100 ℃ to 1400 ℃, carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum NO reduction of 47% was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 ℃, and a fuel-rich stoichiometric ratio is essential; greater coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 ℃~1100 ℃. CO can improve the NH3 performance at lower temperatures. With advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-Catalytic NOx Reduction (SNCR) alone would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only the common ammonia dose of conventional SNCR technology. The mechanism study shows that the oxidation of CO improves the decomposition of H2O, which enriches the radical pools that ignite the overall reactions at lower temperatures.

  18. Experiment and mechanism investigation on advanced reburning for NOx reduction: influence of CO and temperature

    Science.gov (United States)

    Wang, Zhi-hua; Zhou, Jun-hu; Zhang, Yan-wei; Lu, Zhi-min; Fan, Jian-ren; Cen, Ke-fa

    2005-01-01

    Pulverized coal reburning, ammonia injection, and advanced reburning were investigated in a pilot-scale drop tube furnace. A premixed flow of petroleum gas, air, and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%-25% reburn heat input, temperatures from 1100 °C to 1400 °C, carbon in fly ash, coal fineness, reburn zone stoichiometric ratio, etc., were investigated. At 25% reburn heat input, a maximum NO reduction of 47% was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 °C, and a fuel-rich stoichiometric ratio is essential; greater coal fineness can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700 °C~1100 °C. CO can improve the NH3 performance at lower temperatures. With advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, Selective Non-Catalytic NOx Reduction (SNCR) alone would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only the common ammonia dose of conventional SNCR technology. The mechanism study shows that the oxidation of CO improves the decomposition of H2O, which enriches the radical pools that ignite the overall reactions at lower temperatures. PMID:15682503

  19. Estimation of measurement variances

    International Nuclear Information System (INIS)

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  20. The Quantum Allan Variance

    OpenAIRE

    Chabuda, Krzysztof; Leroux, Ian; Demkowicz-Dobrzanski, Rafal

    2016-01-01

    In atomic clocks, the frequency of a local oscillator is stabilized based on the feedback signal obtained by periodically interrogating an atomic reference system. The instability of the clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbi...

  1. Chemical oxygen demand reduction in coffee wastewater through chemical flocculation and advanced oxidation processes

    Institute of Scientific and Technical Information of China (English)

    ZAYAS Pérez Teresa; GEISSLER Gunther; HERNANDEZ Fernando

    2007-01-01

    The removal of the natural organic matter present in coffee processing wastewater through chemical coagulation-flocculation and advanced oxidation processes (AOP) had been studied. The effectiveness of the removal of natural organic matter using commercial flocculants and UV/H2O2, UV/O3 and UV/H2O2/O3 processes was determined under acidic conditions. For each of these processes, different operational conditions were explored to optimize the treatment efficiency of the coffee wastewater. Coffee wastewater is characterized by a high chemical oxygen demand (COD) and low total suspended solids. The outcomes of coffee wastewater treatment using coagulation-flocculation and photodegradation processes were assessed in terms of reduction of COD, color, and turbidity. It was found that a reduction in COD of 67% could be realized when the coffee wastewater was treated by chemical coagulation-flocculation with lime and coagulant T-1. When coffee wastewater was treated by coagulation-flocculation in combination with UV/H2O2, a COD reduction of 86% was achieved, although only after prolonged UV irradiation. Of the three advanced oxidation processes considered, UV/H2O2, UV/O3 and UV/H2O2/O3, we found that the treatment with UV/H2O2/O3 was the most effective, with an efficiency of color, turbidity and further COD removal of 87%, when applied to the flocculated coffee wastewater.

  2. Chemical oxygen demand reduction in coffee wastewater through chemical flocculation and advanced oxidation processes.

    Science.gov (United States)

    Zayas Pérez, Teresa; Geissler, Gunther; Hernandez, Fernando

    2007-01-01

    The removal of the natural organic matter present in coffee processing wastewater through chemical coagulation-flocculation and advanced oxidation processes (AOP) had been studied. The effectiveness of the removal of natural organic matter using commercial flocculants and UV/H2O2, UV/O3 and UV/H2O2/O3 processes was determined under acidic conditions. For each of these processes, different operational conditions were explored to optimize the treatment efficiency of the coffee wastewater. Coffee wastewater is characterized by a high chemical oxygen demand (COD) and low total suspended solids. The outcomes of coffee wastewater treatment using coagulation-flocculation and photodegradation processes were assessed in terms of reduction of COD, color, and turbidity. It was found that a reduction in COD of 67% could be realized when the coffee wastewater was treated by chemical coagulation-flocculation with lime and coagulant T-1. When coffee wastewater was treated by coagulation-flocculation in combination with UV/H2O2, a COD reduction of 86% was achieved, although only after prolonged UV irradiation. Of the three advanced oxidation processes considered, UV/H2O2, UV/O3 and UV/H2O2/O3, we found that the treatment with UV/H2O2/O3 was the most effective, with an efficiency of color, turbidity and further COD removal of 87%, when applied to the flocculated coffee wastewater. PMID:17918591

  3. Recent Advances in Inorganic Heterogeneous Electrocatalysts for Reduction of Carbon Dioxide.

    Science.gov (United States)

    Zhu, Dong Dong; Liu, Jin Long; Qiao, Shi Zhang

    2016-05-01

    In view of the climate changes caused by the continuously rising levels of atmospheric CO2, advanced technologies associated with CO2 conversion are highly desirable. In recent decades, electrochemical reduction of CO2 has been extensively studied since it can reduce CO2 to value-added chemicals and fuels. Considering the sluggish reaction kinetics of the CO2 molecule, efficient and robust electrocatalysts are required to promote this conversion reaction. Here, recent progress and opportunities in inorganic heterogeneous electrocatalysts for CO2 reduction are discussed, from the viewpoint of both experimental and computational aspects. Based on elemental composition, the inorganic catalysts presented here are classified into four groups: metals, transition-metal oxides, transition-metal chalcogenides, and carbon-based materials. However, despite encouraging accomplishments made in this area, substantial advances in CO2 electrolysis are still needed to meet the criteria for practical applications. Therefore, in the last part, several promising strategies, including surface engineering, chemical modification, nanostructured catalysts, and composite materials, are proposed to facilitate the future development of CO2 electroreduction. PMID:26996295

  4. Tumor Volume Reduction Rate After Preoperative Chemoradiotherapy as a Prognostic Factor in Locally Advanced Rectal Cancer

    International Nuclear Information System (INIS)

    Purpose: To investigate the prognostic significance of tumor volume reduction rate (TVRR) after preoperative chemoradiotherapy (CRT) in locally advanced rectal cancer (LARC). Methods and Materials: In total, 430 primary LARC (cT3–4) patients who were treated with preoperative CRT and curative radical surgery between May 2002 and March 2008 were analyzed retrospectively. Pre- and post-CRT tumor volumes were measured using three-dimensional region-of-interest MR volumetry. Tumor volume reduction rate was determined using the equation TVRR (%) = (pre-CRT tumor volume − post-CRT tumor volume) × 100/pre-CRT tumor volume. The median follow-up period was 64 months (range, 27–99 months) for survivors. Endpoints were disease-free survival (DFS) and overall survival (OS). Results: The median TVRR was 70.2% (mean, 64.7% ± 22.6%; range, 0–100%). Downstaging (ypT0–2N0M0) occurred in 183 patients (42.6%). The 5-year DFS and OS rates were 77.7% and 86.3%, respectively. In the analysis that included pre-CRT and post-CRT tumor volumes and TVRR as continuous variables, only TVRR was an independent prognostic factor. Tumor volume reduction rate was categorized according to a cutoff value of 45% and included with clinicopathologic factors in the multivariate analysis; ypN status, circumferential resection margin, and TVRR were significant prognostic factors for both DFS and OS. Conclusions: Tumor volume reduction rate was a significant prognostic factor in LARC patients receiving preoperative CRT. Tumor volume reduction rate data may be useful for tailoring surgery and postoperative adjuvant therapy after preoperative CRT.
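
    The TVRR definition and the cutoff used in the multivariate analysis translate directly into code; a short transcription of the equation in the abstract:

```python
def tvrr(pre_volume, post_volume):
    """Tumor volume reduction rate (%) from pre- and post-CRT MR volumetry."""
    return (pre_volume - post_volume) * 100.0 / pre_volume

def tvrr_group(pre_volume, post_volume, cutoff=45.0):
    """Dichotomize TVRR at the cutoff (45% in the reported analysis)."""
    return "high" if tvrr(pre_volume, post_volume) >= cutoff else "low"

print(tvrr(80.0, 20.0), tvrr_group(80.0, 20.0))   # 75.0 high
```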

  5. Variance of volume estimators

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří

    Jena : Friedrich-Schiller-Universität, 2007. s. 23-23. [Workshop on Stochastic Geometry, Stereology and Image Analysis /14./. 23.09.2007-28.09.2007, Neudietendorf] R&D Projects: GA AV ČR(CZ) IAA100110502 Institutional research plan: CEZ:AV0Z50110509 Keywords : spr2 * stereology * volume * variance Subject RIV: EA - Cell Biology

  6. Minimum variance geographic sampling

    Science.gov (United States)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to create a minimum variance unbiased estimator of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  7. Conversations across Meaning Variance

    Science.gov (United States)

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  8. Naive Analysis of Variance

    Science.gov (United States)

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  9. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
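
    For reference, the (overlapping) Allan variance discussed here is a second-difference statistic of the phase data: sigma_y^2(tau) = <(x_{i+2m} - 2*x_{i+m} + x_i)^2> / (2*tau^2) with tau = m*tau0. A standard sketch:

```python
import numpy as np

def allan_variance(x, tau0, m):
    """Overlapping Allan variance at averaging time tau = m*tau0,
    computed from phase data x (seconds) via second differences at lag m."""
    x = np.asarray(x, dtype=float)
    d2 = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]
    tau = m * tau0
    return np.mean(d2**2) / (2.0 * tau**2)

# White-frequency-noise demo: the Allan variance falls off roughly as 1/tau.
rng = np.random.default_rng(1)
y = rng.standard_normal(100_000)   # fractional frequency samples, tau0 = 1 s
x = np.cumsum(y)                   # phase data
for m in (1, 10, 100):
    print(m, allan_variance(x, 1.0, m))
```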

  10. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    The present study was undertaken to explore possibilities of judging survey quality on the basis of a limited and restricted number of a-priori observations. Here, quality is defined as the ratio between survey and local variance (signal-to-noise ratio). The results indicate that the presented surveys do not permit such judgement; the discussion also suggests that the 5-fold local sampling strategies do not merit any sound judgement. As it stands, uncertainties in local determinations may largely obscure possibilities to judge survey quality. The results further imply that surveys will benefit from procedures, controls and approaches in sampling and sample handling that assess the average, the variance, and the nature of the distribution of elemental concentrations at local sites. This reasoning is compatible with the idea of the site as a basic homogeneous survey unit, which implicitly and conceptually underlies any survey performed. (author)

  11. Ambiguity Aversion and Variance Premium

    OpenAIRE

    Jianjun Miao; Bin Wei; Hao Zhou

    2012-01-01

    This paper offers an ambiguity-based interpretation of the variance premium - the difference between risk-neutral and objective expectations of market return variance - as a compounding effect of both belief distortion and variance differential regarding the uncertain economic regimes. Our approach endogenously generates a variance premium without imposing exogenous stochastic volatility or jumps in the consumption process. Such a framework can reasonably match the mean variance premium as well a...

  12. Multivariate variance ratio statistics

    OpenAIRE

    Hong, Seok Young; Linton, Oliver; Zhang, Hui Jun

    2014-01-01

    We propose several multivariate variance ratio statistics. We derive the asymptotic distribution of the statistics and scalar functions thereof under the null hypothesis that returns are unpredictable after a constant mean adjustment (i.e., under the Efficient Market Hypothesis). We do not impose the no leverage assumption of Lo and MacKinlay (1988) but our asymptotic standard errors are relatively simple and in particular do not require the selection of a bandwidth parameter. We extend the...

  13. Advancing Development and Greenhouse Gas Reductions in Vietnam's Wind Sector

    Energy Technology Data Exchange (ETDEWEB)

    Bilello, D.; Katz, J.; Esterly, S.; Ogonowski, M.

    2014-09-01

    Clean energy development is a key component of Vietnam's Green Growth Strategy, which establishes a target to reduce greenhouse gas (GHG) emissions from domestic energy activities by 20-30 percent by 2030 relative to a business-as-usual scenario. Vietnam has significant wind energy resources, which, if developed, could help the country reach this target while providing ancillary economic, social, and environmental benefits. Given Vietnam's ambitious clean energy goals and the relatively nascent state of wind energy development in the country, this paper seeks to fulfill two primary objectives: to distill timely and useful information to provincial-level planners, analysts, and project developers as they evaluate opportunities to develop local wind resources; and, to provide insights to policymakers on how coordinated efforts may help advance large-scale wind development, deliver near-term GHG emission reductions, and promote national objectives in the context of a low emission development framework.

  14. Materials selection of surface coatings in an advanced size reduction facility

    International Nuclear Information System (INIS)

    A materials selection test program was conducted to characterize optimum interior surface coatings for an advanced size reduction facility. The equipment to be processed by this facility consists of stainless steel apparatus (e.g., glove boxes, piping, and tanks) used for the chemical recovery of plutonium. Test results showed that a primary requirement for a satisfactory coating is ease of decontamination. A closely related concern is the resistance of paint films to nitric acid - plutonium environments. A vinyl copolymer base paint was the only coating, of eight paints tested, with properties that permitted satisfactory decontamination of plutonium and also performed equal to or better than the other paints in the chemical resistance, radiation stability, and impact tests

  15. DEMONSTRATION OF AN ADVANCED INTEGRATED CONTROL SYSTEM FOR SIMULTANEOUS EMISSIONS REDUCTION

    Energy Technology Data Exchange (ETDEWEB)

    Suzanne Shea; Randhir Sehgal; Ilga Celmins; Andrew Maxson

    2002-02-01

    The primary objective of the project titled ''Demonstration of an Advanced Integrated Control System for Simultaneous Emissions Reduction'' was to demonstrate at proof-of-concept scale the use of an online software package, the ''Plant Environmental and Cost Optimization System'' (PECOS), to optimize the operation of coal-fired power plants by economically controlling all emissions simultaneously. It combines physical models, neural networks, and fuzzy logic control to provide both optimal least-cost boiler setpoints to the boiler operators in the control room, as well as optimal coal blending recommendations designed to reduce fuel costs and fuel-related derates. The goal of the project was to demonstrate that use of PECOS would enable coal-fired power plants to make more economic use of U.S. coals while reducing emissions.

  16. DEMONSTRATION OF AN ADVANCED INTEGRATED CONTROL SYSTEM FOR SIMULTANEOUS EMISSIONS REDUCTION; FINAL

    International Nuclear Information System (INIS)

    The primary objective of the project titled ''Demonstration of an Advanced Integrated Control System for Simultaneous Emissions Reduction'' was to demonstrate at proof-of-concept scale the use of an online software package, the ''Plant Environmental and Cost Optimization System'' (PECOS), to optimize the operation of coal-fired power plants by economically controlling all emissions simultaneously. It combines physical models, neural networks, and fuzzy logic control to provide both optimal least-cost boiler setpoints to the boiler operators in the control room, as well as optimal coal blending recommendations designed to reduce fuel costs and fuel-related derates. The goal of the project was to demonstrate that use of PECOS would enable coal-fired power plants to make more economic use of U.S. coals while reducing emissions

  17. Biclustering with heterogeneous variance.

    Science.gov (United States)

    Chen, Guanhua; Sullivan, Patrick F; Kosorok, Michael R

    2013-07-23

    In cancer research, as in all of medicine, it is important to classify patients into etiologically and therapeutically relevant subtypes to improve diagnosis and treatment. One way to do this is to use clustering methods to find subgroups of homogeneous individuals based on genetic profiles together with heuristic clinical analysis. A notable drawback of existing clustering methods is that they ignore the possibility that the variance of gene expression profile measurements can be heterogeneous across subgroups, and methods that do not consider heterogeneity of variance can lead to inaccurate subgroup prediction. Research has shown that hypervariability is a common feature among cancer subtypes. In this paper, we present a statistical approach that can capture both mean and variance structure in genetic data. We demonstrate the strength of our method in both synthetic data and in two cancer data sets. In particular, our method confirms the hypervariability of methylation level in cancer patients, and it detects clearer subgroup patterns in lung cancer data. PMID:23836637
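
    A minimal sketch of the core idea, assuming isotropic Gaussian clusters: a toy variance-aware k-means variant in which each cluster keeps its own variance and points are assigned by Gaussian log-likelihood rather than plain distance. This is illustrative only and is not the authors' biclustering algorithm; all names are hypothetical.

        import numpy as np

        def heteroscedastic_kmeans(X, k, n_iter=50, seed=0):
            """Toy clustering that models per-cluster variance: assign by
            Gaussian log-likelihood, then refit each cluster's mean and
            variance. Clusters with genuinely high spread (hypervariable
            subtypes) are no longer penalized for it."""
            rng = np.random.default_rng(seed)
            n, d = X.shape
            centers = X[rng.choice(n, k, replace=False)].astype(float)
            variances = np.ones(k)
            for _ in range(n_iter):
                sq = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
                # log N(x | mu_j, sigma_j^2 I), constants dropped
                loglik = -0.5 * (sq / variances + d * np.log(variances))
                labels = loglik.argmax(1)
                for j in range(k):
                    pts = X[labels == j]
                    if len(pts):
                        centers[j] = pts.mean(0)
                        variances[j] = max(pts.var(), 1e-6)
            return labels, centers, variances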

  18. Local variances in biomonitoring

    International Nuclear Information System (INIS)

    The present study deals with the (larger-scale) biomonitoring survey and specifically focuses on the sampling site. In most surveys, the sampling site is simply selected or defined as a spot of (geographical) dimensions which is small relative to the dimensions of the total survey area. Implicitly it is assumed that the sampling site is essentially homogeneous with respect to the investigated variation in survey parameters. As such, the sampling site is mostly regarded as 'the basic unit' of the survey. As a logical consequence, the local (sampling site) variance should also be seen as a basic and important characteristic of the survey. During the study, work is carried out to gain more knowledge of the local variance. Multiple sampling is carried out at a specific site (tree bark, mosses, soils), multi-elemental analyses are carried out by NAA, and local variances are investigated by conventional statistics, factor analytical techniques, and bootstrapping. Consequences of the outcomes are discussed in the context of sampling, sample handling and survey quality. (author)
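
    Since the abstract names bootstrapping as one of the tools for investigating local variance, here is a hedged sketch of a percentile bootstrap for the variance of replicate measurements at a single sampling site; the concentration values are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(42)
        # hypothetical replicate measurements of one element at one site
        site_conc = np.array([12.1, 11.4, 13.0, 12.7, 10.9, 12.3, 11.8, 12.5])

        n_boot = 10_000
        boot_vars = np.array([
            rng.choice(site_conc, size=site_conc.size, replace=True).var(ddof=1)
            for _ in range(n_boot)
        ])
        lo, hi = np.percentile(boot_vars, [2.5, 97.5])
        print(f"sample variance = {site_conc.var(ddof=1):.3f}")
        print(f"95% bootstrap CI for the local variance: [{lo:.3f}, {hi:.3f}]")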

  19. EPA RREL's mobile volume reduction unit advances soil washing at four Superfund sites

    International Nuclear Information System (INIS)

    Research testing of the U.S. Environmental Protection Agency (EPA) Risk Reduction Engineering Laboratory's (RREL) Volume Reduction Unit (VRU) produced data that help advance soil washing as a remedial technology for contaminated soils. Based on research at four Superfund sites, each with a different matrix of organic contaminants, EPA evaluated the technology and provided information to forecast realistic, full-scale remediation costs. Primarily a research tool, the VRU is RREL's mobile test unit for investigating the breadth of this technology. During a Superfund Innovative Technology Evaluation (SITE) demonstration at the Escambia Wood Treating Company site, Pensacola, FL, the VRU treated soil contaminated with pentachlorophenol (PCP) and polynuclear aromatic hydrocarbon-laden (PAH) creosote. At the Montana Pole and Treatment Plant site, Butte, MT, the VRU treated soil containing PCP mixed with diesel oil (measured as total petroleum hydrocarbons, TPHC) and a trace of dioxin. At the Dover Air Force Base site, Dover, DE, the VRU treated soil containing JP-4 jet fuel, measured as TPHC. At the Sand Creek site, Commerce City, CO, the feed soil was contaminated with two pesticides: heptachlor and dieldrin. Less than 10 percent of these pesticides remained in the treated coarse soil fractions.

  20. Advanced Chemical Reduction of Reduced Graphene Oxide and Its Photocatalytic Activity in Degrading Reactive Black 5

    Directory of Open Access Journals (Sweden)

    Christelle Pau Ping Wong

    2015-10-01

    Textile industries consume large volumes of water for dye processing, leading to undesirable toxic dyes in water bodies. Dyestuffs are harmful to human health and aquatic life, causing such illnesses as cholera, dysentery, and hepatitis A, and they hinder the photosynthetic activity of aquatic plants. To overcome this environmental problem, the advanced oxidation process is a promising technique to mineralize a wide range of dyes in water systems. In this work, reduced graphene oxide (rGO) was prepared via an advanced chemical reduction route, and its photocatalytic activity was tested by photodegrading Reactive Black 5 (RB5) dye in aqueous solution. rGO was synthesized by dispersing graphite oxide into water to form a graphene oxide (GO) solution, followed by the addition of hydrazine. The graphite oxide was prepared by a modified Hummers' method using potassium permanganate and concentrated sulphuric acid. The resulting rGO nanoparticles were characterized by ultraviolet-visible spectrophotometry (UV-Vis), X-ray powder diffraction (XRD), Raman spectroscopy, and scanning electron microscopy (SEM) to further investigate their chemical properties. A characteristic rGO-48 h peak (275 nm) was observed in the UV spectrum, and the appearance of a broad peak (002), centred at 2θ = 24.1°, in the XRD pattern showed that the graphene oxide had been reduced to rGO. Based on these results, the rGO-48 h nanoparticles achieved 49% photodecolorization of RB5 under UV irradiation at pH 3 in 60 min, which was attributed to efficient electron transport between the aromatic regions of rGO and the RB5 molecules.

  1. Reduction of antibiotic resistance genes in municipal wastewater effluent by advanced oxidation processes.

    Science.gov (United States)

    Zhang, Yingying; Zhuang, Yao; Geng, Jinju; Ren, Hongqiang; Xu, Ke; Ding, Lili

    2016-04-15

    This study investigated the reduction of antibiotic resistance genes (ARGs), intI1, and 16S rRNA genes by advanced oxidation processes (AOPs), namely Fenton oxidation (Fe(2+)/H2O2) and the UV/H2O2 process. The ARGs included sul1, tetX, and tetG from municipal wastewater effluent. The results indicated that both the Fenton oxidation and the UV/H2O2 process could reduce the selected ARGs effectively, with the Fenton process performing slightly better. For the Fenton oxidation, under the optimal condition (Fe(2+)/H2O2 molar ratio of 0.1, H2O2 concentration of 0.01 mol L(-1), pH 3.0, reaction time 2 h), 2.58-3.79 logs of the target genes were removed; at the initial effluent pH (pH 7.0), the removal was 2.26-3.35 logs. For the UV/H2O2 process, at pH 3.5 with a H2O2 concentration of 0.01 mol L(-1) and 30 min of UV irradiation, all ARGs achieved a reduction of 2.8-3.5 logs, and 1.55-2.32 logs at pH 7.0. Both the Fenton oxidation and the UV/H2O2 process followed the first-order reaction kinetic model. The removal of target genes was affected by several parameters, including the initial Fe(2+)/H2O2 molar ratio, H2O2 concentration, solution pH, and reaction time; among these, reagent concentrations and pH were the most important factors during AOPs. PMID:26815295
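
    The abstract states that both AOPs followed first-order kinetics. A minimal sketch of how a first-order rate constant and log removal are typically extracted from such data (the time series below is invented, not the paper's):

        import numpy as np

        t = np.array([0, 15, 30, 60, 90, 120], dtype=float)      # minutes
        N = np.array([1e7, 3.2e6, 1.1e6, 1.3e5, 1.6e4, 2.0e3])   # gene copies/mL

        # first-order model: ln(N/N0) = -k * t, so the fitted slope gives k
        slope, _ = np.polyfit(t, np.log(N / N[0]), 1)
        k = -slope
        log_removal = np.log10(N[0] / N[-1])
        print(f"first-order rate constant k = {k:.4f} 1/min")
        print(f"log removal after {t[-1]:.0f} min = {log_removal:.2f} logs")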

  2. Spectral variance of aeroacoustic data

    Science.gov (United States)

    Rao, K. V.; Preisser, J. S.

    1981-01-01

    An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.
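
    The asymptotic result the abstract relies on can be illustrated numerically: for a (near-)stationary series, averaging K non-overlapping periodogram segments reduces the relative variance of the spectral estimate by roughly 1/K. A sketch with synthetic white noise standing in for the flyover data:

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(1)
        fs, n, K = 1024, 1 << 16, 64
        x = rng.standard_normal(n)   # white-noise stand-in for the data

        # one long periodogram vs. a Welch average of K segments
        _, p1 = welch(x, fs=fs, window="boxcar", nperseg=n)
        _, pK = welch(x, fs=fs, window="boxcar", nperseg=n // K, noverlap=0)

        # relative variance across frequency bins drops roughly as 1/K
        print("relative variance, 1 segment :", p1.var() / p1.mean() ** 2)
        print("relative variance, K segments:", pK.var() / pK.mean() ** 2)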

  3. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  4. Advances of Ag, Cu, and Ag-Cu alloy nanoparticles synthesized via chemical reduction route

    Energy Technology Data Exchange (ETDEWEB)

    Tan, Kim Seah; Cheong, Kuan Yew, E-mail: cheong@eng.usm.my [Universiti Sains Malaysia, Electronic Materials Research Group, School of Materials and Mineral Resources Engineering (Malaysia)

    2013-04-15

    Silver (Ag) and copper (Cu) nanoparticles have shown great potential in a variety of applications due to their excellent electrical and thermal properties, resulting in high demand in the market. Decreasing their size to the nanometer scale distinctly improves these inherent properties owing to the larger surface-to-volume ratio. Ag and Cu nanoparticles also show higher surface reactivity and are therefore used to improve interfacial and catalytic processes. Their melting points are also dramatically lower than those of the bulk metals, so they can be processed at relatively low temperature. In addition, alloying Ag into Cu to create Ag-Cu alloy nanoparticles can mitigate the rapid oxidation of Cu nanoparticles. A variety of methods has been reported for the synthesis of Ag, Cu, and Ag-Cu alloy nanoparticles. This review covers chemical reduction routes to those nanoparticles. Advances of this technique utilizing different reagents, namely metal salt precursors, reducing agents, and stabilizers, as well as their effects on the respective nanoparticles, are systematically reviewed. Other parameters, such as pH and temperature, that are considered important factors influencing the quality of those nanoparticles are also reviewed thoroughly.

  5. A Broadband Beamformer Using Controllable Constraints and Minimum Variance

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Benesty, Jacob; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    The minimum variance distortionless response (MVDR) and the linearly constrained minimum variance (LCMV) beamformers are two optimal approaches in the sense of noise reduction. The LCMV beamformer can also reject interferers using linear constraints at the expense of reducing the degree of freedom...
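
    For reference, the narrowband MVDR solution mentioned here is w = R^{-1} d / (d^H R^{-1} d), which minimizes output power subject to a distortionless response in the look direction. A minimal sketch, assuming a hypothetical 4-element uniform linear array and a sample noise covariance:

        import numpy as np

        def mvdr_weights(R, d):
            """MVDR: minimize w^H R w subject to w^H d = 1."""
            Rinv_d = np.linalg.solve(R, d)
            return Rinv_d / (d.conj() @ Rinv_d)

        M, spacing = 4, 0.5                      # spacing in wavelengths
        theta = np.deg2rad(20)                   # look direction
        d = np.exp(-2j * np.pi * spacing * np.arange(M) * np.sin(theta))

        rng = np.random.default_rng(0)
        snap = rng.standard_normal((M, 1000)) + 1j * rng.standard_normal((M, 1000))
        R = snap @ snap.conj().T / 1000 + 1e-3 * np.eye(M)   # sample covariance

        w = mvdr_weights(R, d)
        print("distortionless check, w^H d =", w.conj() @ d)  # should be ~1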

  6. Modeling the Variance of Variance Through a Constant Elasticity of Variance Generalized Autoregressive Conditional Heteroskedasticity Model

    OpenAIRE

    Saedi, Mehdi; Wolk, Jared

    2012-01-01

    This paper compares a standard GARCH model with a Constant Elasticity of Variance GARCH model across three major currency pairs and the S&P 500 index. We discuss the advantages and disadvantages of using a more sophisticated model designed to estimate the variance of variance instead of assuming it to be a linear function of the conditional variance. The current stochastic volatility and GARCH analogues rest upon this linear assumption. We are able to confirm through empirical estimation ...
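
    To make the "variance of variance" point concrete: in a standard GARCH(1,1) the innovation to the conditional variance, alpha*(r_{t-1}^2 - h_{t-1}), has a scale proportional to h_{t-1}, i.e. the variance of variance is a linear function of the conditional variance; a CEV-GARCH replaces that linear scaling with a power h^gamma. A hedged simulation sketch of the baseline recursion (parameter values are illustrative):

        import numpy as np

        def simulate_garch11(n, omega=1e-6, alpha=0.05, beta=0.90, seed=0):
            """Standard GARCH(1,1): h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}.
            The variance innovation alpha*(r^2 - h) scales linearly with h,
            which is exactly the assumption a CEV-GARCH generalizes."""
            rng = np.random.default_rng(seed)
            r, h = np.zeros(n), np.zeros(n)
            h[0] = omega / (1 - alpha - beta)    # unconditional variance
            for t in range(n):
                r[t] = np.sqrt(h[t]) * rng.standard_normal()
                if t + 1 < n:
                    h[t + 1] = omega + alpha * r[t] ** 2 + beta * h[t]
            return r, h

        r, h = simulate_garch11(2000)
        print("annualized volatility level ~", np.sqrt(252 * h.mean()))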

  7. Accurate feedwater iron control for dose rate reduction by advanced resin cleaning system in Tokai-2

    International Nuclear Information System (INIS)

    Dose rate reduction of out-of-core piping is one of the main issues in boiling water reactor (BWR) plants. The main source of the out-of-core piping dose rate is 60Co, which adheres to the piping and is influenced by the feedwater iron concentration. The relationship between feedwater iron concentration and the amounts of iron and cobalt (60Co) deposited on the fuel surface had been evaluated at Tokai-2 (a 1,100 MWe BWR operated by The Japan Atomic Power Company; commercial operation started in 1978). The results demonstrated that keeping the amount of deposited iron on the fuel surface around 2000 μg/cm2 reduces Co activation, and that this is achieved when the feedwater iron concentration is around 0.5 ppb. However, when feedwater iron falls below 0.5 ppb, the soluble 60Co concentration in the reactor coolant increases, which raises the out-of-core piping dose rate. These behaviors show the necessity of controlling feedwater iron. At Tokai-2, condensate iron is removed only by the condensate demineralizer resin, because Tokai-2 has no condensate filter; that is, the iron removal performance of the condensate demineralizer resin directly determines the feedwater iron concentration, and that performance depends on resin cleanness. The resin had been cleaned by a method named 'backwash', but iron on the resin surface could not be removed efficiently by the backwash, so feedwater iron could not be reduced to 0.5 ppb. Therefore, an Advanced Resin Cleaning System (ARCS), which can remove almost all of the iron on the resin, was retrofitted to Tokai-2 in October 2005 (21st outage) to reduce feedwater iron. After applying ARCS, resin cleanness was improved, and feedwater iron decreased to around 0.5 ppb, the same level as in BWR plants with condensate filters. Feedwater iron concentration was also maintained at around 0.5 ppb by adjusting the frequency of resin cleaning. By using these results, an optimum control method of

  8. One-way analysis of variance with unequal variances.

    OpenAIRE

    Rice, W R; Gaines, S. D.

    1989-01-01

    We have designed a statistical test that eliminates the assumption of equal group variances from one-way analysis of variance. This test is preferable to the standard technique of trial-and-error transformation and can be shown to be an extension of the Behrens-Fisher T test to the case of three or more means. We suggest that this procedure be used in most applications where the one-way analysis of variance has traditionally been applied to biological data.
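
    One common form of this k-sample Behrens-Fisher extension is Welch's heteroscedastic one-way ANOVA; the sketch below implements the textbook Welch F statistic and may differ in detail from the authors' procedure. The data values are invented.

        import numpy as np
        from scipy import stats

        def welch_anova(*groups):
            """Welch's one-way ANOVA for k groups with unequal variances."""
            k = len(groups)
            n = np.array([len(g) for g in groups], dtype=float)
            m = np.array([np.mean(g) for g in groups])
            v = np.array([np.var(g, ddof=1) for g in groups])
            w = n / v                                   # precision weights
            mw = (w * m).sum() / w.sum()                # weighted grand mean
            A = (w * (m - mw) ** 2).sum() / (k - 1)
            tmp = ((1 - w / w.sum()) ** 2 / (n - 1)).sum()
            F = A / (1 + 2 * (k - 2) / (k ** 2 - 1) * tmp)
            df2 = (k ** 2 - 1) / (3 * tmp)
            return F, k - 1, df2, stats.f.sf(F, k - 1, df2)

        F, df1, df2, p = welch_anova([3.1, 2.9, 3.5, 3.3],
                                     [2.0, 2.8, 2.2, 2.6, 2.4],
                                     [4.1, 3.6, 4.4, 3.9])
        print(f"Welch F({df1}, {df2:.1f}) = {F:.2f}, p = {p:.4f}")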

  9. Minimum Variance Hedging and Stock Index Market Efficiency

    OpenAIRE

    Carol Alexander; Andreza Barbosa

    2006-01-01

    This empirical study examines the impact of both advanced electronic trading platforms and index exchange traded funds (ETFs) on the minimum variance hedging of stock indices with futures. Our findings show that minimum variance hedging may provide an out-of-sample hedging performance that is superior to that of the one-one futures hedge, but only in markets without active trading of ETFs and advanced development of electronic communications networks. However there is no evidence to suggest t...
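
    The minimum variance hedge ratio at issue here is h* = Cov(ΔS, ΔF) / Var(ΔF), estimated in-sample and compared against the one-one futures hedge. A small sketch with simulated return series (the numbers carry no empirical content):

        import numpy as np

        rng = np.random.default_rng(7)
        f = rng.standard_normal(500) * 0.012             # futures returns
        s = 0.95 * f + rng.standard_normal(500) * 0.004  # spot index returns

        h_star = np.cov(s, f)[0, 1] / f.var(ddof=1)      # min-variance ratio
        for h, name in [(1.0, "one-one hedge"), (h_star, "min-variance hedge")]:
            print(f"{name}: h = {h:.3f}, hedged variance = {(s - h*f).var():.2e}")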

  10. Experiment and mechanism investigation on advanced reburning for NO(x) reduction: influence of CO and temperature.

    Science.gov (United States)

    Wang, Zhi-Hua; Zhou, Jun-Hu; Zhang, Yan-Wei; Lu, Zhi-Min; Fan, Jian-Ren; Cen, Ke-Fa

    2005-03-01

    Pulverized coal reburning, ammonia injection, and advanced reburning were investigated in a pilot-scale drop tube furnace. A premix of petroleum gas, air, and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters, including 15%-25% reburn heat input, a temperature range from 1100 °C to 1400 °C, and also the carbon in fly ash, coal fineness, and reburn zone stoichiometric ratio, were investigated. At 25% reburn heat input, a maximum of 47% NO reduction was obtained with Yanzhou coal by pure coal reburning. The optimal temperature for reburning is about 1300 °C, a fuel-rich stoichiometric ratio is essential, and finer coal can slightly enhance the reburning ability. The temperature window for ammonia injection is about 700-1100 °C, and CO can improve the NH3 performance at lower temperatures. During advanced reburning, 72.9% NO reduction was measured. To achieve more than 70% NO reduction, selective non-catalytic NO(x) reduction (SNCR) would need an NH3/NO stoichiometric ratio larger than 5, while advanced reburning uses only the common dose of ammonia of conventional SNCR technology. The mechanism study shows that oxidation of CO promotes the decomposition of H2O, which enriches the radical pools that ignite the overall reactions at lower temperatures. PMID:15682503

  11. Modelling volatility by variance decomposition

    OpenAIRE

    Amado, Cristina; Teräsvirta, Timo

    2011-01-01

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the conditional and unconditional variances where the transition between regimes over time is smooth. The main focus is on the multiplicative decomposition that decomposes the variance into an unconditional and...

  12. Budget variance analysis using RVUs.

    Science.gov (United States)

    Berlin, M F; Budzynski, M R

    1998-01-01

    This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand the actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247
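
    A hedged sketch of the idea, assuming the common flexible-budget decomposition in which RVUs serve as the volume standard: total expense variance splits into a volume component and a cost-per-RVU component. All figures are hypothetical.

        # hypothetical month: budgeted vs. actual RVUs and cost per RVU
        budget_rvus, budget_cost_per_rvu = 10_000, 42.00
        actual_rvus, actual_cost_per_rvu = 10_800, 43.50

        volume_variance = (actual_rvus - budget_rvus) * budget_cost_per_rvu
        cost_variance = (actual_cost_per_rvu - budget_cost_per_rvu) * actual_rvus
        total_variance = (actual_rvus * actual_cost_per_rvu
                          - budget_rvus * budget_cost_per_rvu)

        print(f"volume variance : ${volume_variance:10,.0f}")
        print(f"cost variance   : ${cost_variance:10,.0f}")
        print(f"total variance  : ${total_variance:10,.0f}")  # volume + cost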

  13. Volatility investing with variance swaps

    OpenAIRE

    Härdle, Wolfgang Karl; Silyakova, Elena

    2010-01-01

    Traditionally volatility is viewed as a measure of variability, or risk, of an underlying asset. However, investors recently began to look at volatility from a different angle, owing to the emergence of a market for new derivative instruments: variance swaps. In this paper we first introduce the general idea of volatility trading using variance swaps. Then we describe valuation and hedging methodology for vanilla variance swaps as well as for the 3-rd generation volatility derivativ...
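
    The payoff of the vanilla variance swap described here is notional times the difference between realized variance and the variance strike. A minimal valuation-at-expiry sketch using the usual annualized mean-squared-log-return convention (prices simulated, strike and notional hypothetical):

        import numpy as np

        rng = np.random.default_rng(3)
        prices = 100 * np.exp(np.cumsum(rng.standard_normal(252) * 0.015))

        log_ret = np.diff(np.log(prices))
        realized_var = 252 * np.mean(log_ret ** 2)  # annualized realized variance

        strike_var = 0.04           # variance strike (20% volatility squared)
        var_notional = 100_000      # payoff per unit of variance, hypothetical
        payoff = var_notional * (realized_var - strike_var)
        print(f"realized variance = {realized_var:.4f}, payoff = {payoff:,.0f}")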

  14. Fixed effects analysis of variance

    CERN Document Server

    Fisher, Lloyd; Birnbaum, Z W; Lukacs, E

    1978-01-01

    Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthologonal designs; and multiple regression analysi

  15. Bronchoscopic lung volume reduction by endobronchial valve in advanced emphysema: the first Asian report

    Directory of Open Access Journals (Sweden)

    Park TS

    2015-07-01

    Tai Sun Park,1 Yoonki Hong,2 Jae Seung Lee,1 Sang Young Oh,3 Sang Min Lee,3 Namkug Kim,3 Joon Beom Seo,3 Yeon-Mok Oh,1 Sang-Do Lee,1 Sei Won Lee1 1Department of Pulmonary and Critical Care Medicine and Clinical Research Center for Chronic Obstructive Airway Diseases, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea; 2Department of Internal Medicine, College of Medicine, Kangwon National University, Chuncheon, Korea; 3Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea. Purpose: Endobronchial valve (EBV) therapy is increasingly seen as a therapeutic option for advanced emphysema, but its clinical utility in Asian populations, who may have different phenotypes from other ethnic populations, has not been assessed. Patients and methods: This prospective open-label single-arm clinical trial examined the clinical efficacy and safety of EBV in 43 consecutive patients (mean age 68.4±7.5; forced expiratory volume in 1 second [FEV1] 24.5%±10.7% predicted; residual volume 208.7%±47.9% predicted) with severe emphysema, complete fissures, and no collateral ventilation in a tertiary referral hospital in Korea. Results: Compared to baseline, the patients exhibited significant improvements 6 months after EBV therapy in FEV1 (from 0.68±0.26 L to 0.92±0.40 L; P<0.001), 6-minute walk distance (from 233.5±114.8 m to 299.6±87.5 m; P=0.012), modified Medical Research Council dyspnea scale (from 3.7±0.6 to 2.4±1.2; P<0.001), and St George's Respiratory Questionnaire (from 65.59±13.07 to 53.76±11.40; P=0.028). Nine patients (20.9%) had a tuberculosis scar, but these scars did not affect target lobe volume reduction or pneumothorax frequency. Thirteen patients had adverse events; ten (23.3%) developed pneumothorax, including one death due to tension pneumothorax. Conclusion: EBV therapy was as effective and safe in Korean

  16. Advanced therapies for COPD—What’s on the horizon? Progress in lung volume reduction and lung transplantation

    OpenAIRE

    Trotter, Michael A.; Hopkins, Peter M.

    2014-01-01

    Advanced chronic obstructive pulmonary disease (COPD) is a significant cause of morbidity. Treatment options beyond conventional medical therapies are limited to a minority of patients. Lung volume reduction surgery (LVRS) although effective in selected subgroups of patients is not commonly undertaken. Morbidity associated with the procedure has contributed to this low utilisation. In response to this, less invasive bronchoscopic lung volume techniques are being developed to attempt to mitiga...

  17. Variance Adjusted Actor Critic Algorithms

    OpenAIRE

    Tamar, Aviv; Mannor, Shie

    2013-01-01

    We present an actor-critic framework for MDPs where the objective is the variance-adjusted expected return. Our critic uses linear function approximation, and we extend the concept of compatible features to the variance-adjusted setting. We present an episodic actor-critic algorithm and show that it converges almost surely to a locally optimal point of the objective function.

  18. Radiation dose reduction in CT of the brain: can advanced noise filtering compensate for loss of image quality?

    International Nuclear Information System (INIS)

    Background: Computed tomography (CT) of the brain is performed with high local doses due to high demands on low-contrast resolution. Advanced algorithms for noise reduction might be able to preserve critical image information while reducing radiation dose. Purpose: To evaluate the effect of advanced noise filtering on image quality in brain CT acquired with reduced radiation dose. Material and Methods: Thirty patients referred for non-enhanced CT of the brain were examined with two helical protocols: normal dose (ND, CTDIvol 57 mGy) and low dose (LD, CTDIvol 40 mGy), implying a 30% radiation dose reduction. Images from the LD examinations were also post-processed with noise reduction software using non-linear filters (SharpView CT), creating filtered low dose (FLD) images for each patient. The three image stacks for each patient were presented side by side in randomized order. Five radiologists, blinded to dose level and filtering, ranked the three axial image stacks (ND, LD, FLD) from best to poorest (1 to 3) on three image quality criteria. The mean Hounsfield units (HU) and the standard deviation (SD) of the HU were calculated for a large region of interest in the centrum semiovale as a measure of noise. Results: Rankings of the pooled data showed that the advanced noise filtering significantly improved the image quality of FLD compared with LD images for all tested criteria. No significant differences in image quality were found between ND examinations and FLD, although there was a notable inter-reader spread in the rankings. SD values were 15% higher for LD than for ND and FLD. Conclusion: The advanced noise filtering clearly improves the image quality of CT examinations of the brain. This effect can be used to significantly lower radiation dose.

  19. Liposuction for Advanced Lymphedema: A Multidisciplinary Approach for Complete Reduction of Arm and Leg Swelling

    OpenAIRE

    Boyages, John; Kastanias, Katrina; Koelmeyer, Louise A.; Winch, Caleb J.; Lam, Thomas C.; Sherman, Kerry A.; Munnoch, David Alex; Brorson, Håkan; Ngo, Quan D.; Heydon-White, Asha; Magnussen, John S.; Mackie, Helen

    2015-01-01

    Purpose This research describes and evaluates a liposuction surgery and multidisciplinary rehabilitation approach for advanced lymphedema of the upper and lower extremities. Methods A prospective clinical study was conducted at an Advanced Lymphedema Assessment Clinic (ALAC) comprised of specialists in plastic surgery, rehabilitation, imaging, oncology, and allied health, at Macquarie University, Australia. Between May 2012 and 31 May 2014, a total of 104 patients attended the ALAC. Eligibili...

  20. Advanced Glycation End Products in Foods and a Practical Guide to Their Reduction in the Diet

    OpenAIRE

    URIBARRI, JAIME; WOODRUFF, SANDRA; Goodman, Susan; Cai, Weijing; Chen, Xue; Pyzik, Renata; YONG, ANGIE; STRIKER, GARY E.; Vlassara, Helen

    2010-01-01

    Modern diets are largely heat-processed and as a result contain high levels of advanced glycation end products (AGEs). Dietary advanced glycation end products (dAGEs) are known to contribute to increased oxidant stress and inflammation, which are linked to the recent epidemics of diabetes and cardiovascular disease. This report significantly expands the available dAGE database, validates the dAGE testing methodology, compares cooking procedures and inhibitory agents on new dAGE formation, and...

  1. Modelling volatility by variance decomposition

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the conditional and unconditional variances, where the transition between regimes over time is smooth. The main focus is on the multiplicative decomposition that decomposes the variance into an unconditional and conditional component. A modelling strategy for the time-varying GARCH model based on the multiplicative decomposition of the variance is developed. It is heavily dependent on Lagrange multiplier type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns...

  2. Advanced therapies for COPD-What's on the horizon? Progress in lung volume reduction and lung transplantation.

    Science.gov (United States)

    Trotter, Michael A; Hopkins, Peter M

    2014-11-01

    Advanced chronic obstructive pulmonary disease (COPD) is a significant cause of morbidity. Treatment options beyond conventional medical therapies are limited to a minority of patients. Lung volume reduction surgery (LVRS) although effective in selected subgroups of patients is not commonly undertaken. Morbidity associated with the procedure has contributed to this low utilisation. In response to this, less invasive bronchoscopic lung volume techniques are being developed to attempt to mitigate some of the risks and costs associated with surgery. Of these, endobronchial valve therapy is the most comprehensively studied although the presence of collateral ventilation in a significant proportion of patients has compromised its widespread utility. Bronchial thermal vapour ablation and lung volume reduction (LVR) coils are not dependent on collateral ventilation. These techniques have shown promise in early clinical trials; ongoing work will establish whether they have a role in the management of advanced COPD. Lung transplantation, although effective in selected patients for palliation of symptoms and improving survival, is limited by donor organ availability and economic constraint. Reconditioning marginal organs previously declined for transplantation with ex vivo lung perfusion (EVLP) is one potential strategy in improving the utilisation of donor organs. By increasing the donor pool, it is hoped lung transplantation might be more accessible for patients with advanced COPD into the future. PMID:25478204

  3. Variance approximation under balanced sampling

    OpenAIRE

    Deville, Jean-Claude; Tillé, Yves

    2016-01-01

    A balanced sampling design has the interesting property that Horvitz–Thompson estimators of totals for a set of balancing variables are equal to the totals we want to estimate; therefore, the variance of Horvitz–Thompson estimators of variables of interest is reduced as a function of their correlations with the balancing variables. Since it is hard to derive an analytic expression for the joint inclusion probabilities, we derive a general approximation of the variance based on a residual technique....

  4. Advanced RF-KO slow-extraction method for the reduction of spill ripple

    CERN Document Server

    Noda, K; Shibuya, S; Uesugi, T; Muramatsu, M; Kanazawa, M; Takada, E; Yamada, S

    2002-01-01

    Two advanced RF-knockout (RF-KO) slow-extraction methods have been developed at HIMAC to reduce the spill ripple for accurate heavy-ion cancer therapy: the dual frequency modulation (FM) method and the separated function method. Simulations and experiments verified that the spill ripple can be considerably reduced with these advanced methods compared with the ordinary RF-KO method. The dual FM method and the separated function method yield low spill ripple, with standard deviations of around 25% and 15%, respectively, during beam extraction within around 2 s, in good agreement with the simulation results.

  5. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    Science.gov (United States)

    Wu, Dong L.

    2004-01-01

    A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K2. Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
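
    The noise-removal step described here amounts to subtracting the (known or estimated) instrument noise variance from the total radiance variance before interpreting the residual as gravity-wave signal. A toy sketch with synthetic numbers, not AMSU-A processing code:

        import numpy as np

        rng = np.random.default_rng(11)
        x = np.arange(120)                            # along-track samples
        wave = 0.3 * np.sin(2 * np.pi * x / 15)       # GW signal (K)
        noise = rng.standard_normal(x.size) * 0.25    # instrument noise (K)
        scan = wave + noise                           # detrended radiances

        var_total = scan.var(ddof=1)
        var_noise = 0.25 ** 2        # assumed known from instrument noise spec
        var_gw = max(var_total - var_noise, 0.0)
        print(f"total {var_total:.3f} K^2 -> GW variance {var_gw:.3f} K^2")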

  6. Advanced experimental analysis of controls on microbial Fe(III) oxide reduction. First year progress report

    Energy Technology Data Exchange (ETDEWEB)

    Roden, E.E.; Urrutia, M.M.

    1997-07-01

    The authors have made considerable progress toward a number of project objectives during the first several months of activity on the project. An exhaustive analysis was made of the growth rate and biomass yield (both derived from measurements of cell protein production) of two representative strains of Fe(III)-reducing bacteria (Shewanella alga strain BrY and Geobacter metallireducens) growing with different forms of Fe(III) as an electron acceptor. These two fundamentally different types of Fe(III)-reducing bacteria (FeRB) showed comparable rates of Fe(III) reduction, cell growth, and biomass yield during reduction of soluble Fe(III)-citrate and solid-phase amorphous hydrous ferric oxide (HFO). Intrinsic growth rates of the two FeRB were strongly influenced by whether a soluble or a solid-phase source of Fe(III) was provided: growth rates on soluble Fe(III) were 10-20 times higher than those on solid-phase Fe(III) oxide. Intrinsic FeRB growth rates were comparable during reduction of HFO and a synthetic crystalline Fe(III) oxide (goethite). A distinct lag phase for protein production was observed during the first several days of incubation in solid-phase Fe(III) oxide medium, even though Fe(III) reduction proceeded without any lag. No such lag between protein production and Fe(III) reduction was observed during growth with soluble Fe(III). This result suggests that protein synthesis coupled to solid-phase Fe(III) oxide reduction in batch culture requires an initial investment of energy (generated by Fe(III) reduction), which is probably needed for synthesis of materials (e.g., extracellular polysaccharides) required for attachment of the cells to oxide surfaces. This phenomenon may have important implications for modeling the growth of FeRB in subsurface sedimentary environments, where attachment and continued adhesion to solid-phase materials will be required for maintenance of Fe(III) reduction activity. Despite considerable differences in the rate and

  7. Variance Risk Premiums and Predictive Power of Alternative Forward Variances in the Corn Market

    OpenAIRE

    Zhiguang Wang; Scott W. Fausti; Qasmi, Bashir A.

    2010-01-01

    We propose a fear index for corn using the variance swap rate synthesized from out-of-the-money call and put options as a measure of implied variance. Previous studies estimate implied variance based on the Black (1976) model or forecast variance using GARCH models. Our implied variance approach, based on the variance swap rate, is model independent. We compute the daily 60-day variance risk premiums based on the difference between the realized variance and implied variance for the period from 19...
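
    The variance risk premium here is simply the gap between realized variance and the model-free implied variance given by the synthesized variance swap rate. A minimal sketch on hypothetical aligned 60-day series (annualized variance units; the figures are invented):

        import numpy as np

        realized_var = np.array([0.031, 0.045, 0.052, 0.038, 0.060])
        swap_rate_var = np.array([0.040, 0.048, 0.055, 0.047, 0.058])  # implied

        vrp = realized_var - swap_rate_var
        print("mean variance risk premium:", vrp.mean())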

  8. NMR Studies of Structure-Reactivity Relationships in Carbonyl Reduction: A Collaborative Advanced Laboratory Experiment

    Science.gov (United States)

    Marincean, Simona; Smith, Sheila R.; Fritz, Michael; Lee, Byung Joo; Rizk, Zeinab

    2012-01-01

    An upper-division laboratory project has been developed as a collaborative investigation of a reaction routinely taught in organic chemistry courses: the reduction of carbonyl compounds by borohydride reagents. Determination of several trends regarding structure-activity relationship was possible because each student contributed his or her results…

  9. An investigation into reservoir NOM reduction by UV photolysis and advanced oxidation processes.

    Science.gov (United States)

    Goslan, Emma H; Gurses, Filiz; Banks, Jenny; Parsons, Simon A

    2006-11-01

    A comparison of four treatment technologies for the reduction of natural organic matter (NOM) in a reservoir water was made. The work presented here is a laboratory-based evaluation of NOM treatment by UV-C photolysis, UV/H2O2, Fenton's reagent (FR), and photo-Fenton's reagent (PFR). The work investigated ways of reducing the organic load on water treatment works (WTWs) with a view to treating 'in-reservoir' or 'in-pipe' before the water reaches the WTW. The efficiency of each process in terms of NOM removal was determined by measuring UV absorbance at 254 nm (UV254) and dissolved organic carbon (DOC). In terms of DOC reduction, PFR was the most effective (88% removal after 1 min); however, there were interferences when measuring UV254, which was reduced to a lesser extent (31% after 1 min). In the literature, pH 3 is reported to be the optimal pH for oxidation with FR, but here the reduction of UV254 and DOC was found to be insensitive to pH in the range 3-7. The treatment identified as the most effective in terms of NOM reduction and cost effectiveness was PFR. PMID:16765416

  10. Effect of Two Advanced Noise Reduction Technologies on the Aerodynamic Performance of an Ultra High Bypass Ratio Fan

    Science.gov (United States)

    Hughes, Christopher E.; Gazzaniga, John A.

    2013-01-01

    A wind tunnel experiment was conducted in the NASA Glenn Research Center anechoic 9- by 15-Foot Low-Speed Wind Tunnel to investigate two new advanced noise reduction technologies in support of the NASA Fundamental Aeronautics Program Subsonic Fixed Wing Project. The goal of the experiment was to demonstrate the noise reduction potential and effect on fan model performance of the two noise reduction technologies in a scale model Ultra-High Bypass turbofan at simulated takeoff and approach aircraft flight speeds. The two novel noise reduction technologies are called Over-the-Rotor acoustic treatment and Soft Vanes. Both technologies were aimed at modifying the local noise source mechanisms of the fan tip vortex/fan case interaction and the rotor wake-stator interaction. For the Over-the-Rotor acoustic treatment, two noise reduction configurations were investigated. The results showed that the two noise reduction technologies, Over-the-Rotor and Soft Vanes, were able to reduce the noise level of the fan model, but the Over-the-Rotor configurations had a significant negative impact on the fan aerodynamic performance; the loss in fan aerodynamic efficiency was between 2.75 to 8.75 percent, depending on configuration, compared to the conventional solid baseline fan case rubstrip also tested. Performance results with the Soft Vanes showed that there was no measurable change in the corrected fan thrust and a 1.8 percent loss in corrected stator vane thrust, which resulted in a total net thrust loss of approximately 0.5 percent compared with the baseline reference stator vane set.

  11. Energy Saving Melting and Revert Reduction Technology (Energy SMARRT): Manufacturing Advanced Engineered Components Using Lost Foam Casting Technology

    Energy Technology Data Exchange (ETDEWEB)

    Littleton, Harry; Griffin, John

    2011-07-31

    This project was a subtask of the Energy Saving Melting and Revert Reduction Technology (Energy SMARRT) Program. Through this project, technologies such as computer modeling, pattern quality control, casting quality control, and marketing tools were developed to advance the Lost Foam Casting process and provide greater energy savings. These technologies have improved (1) production efficiency, (2) mechanical properties, and (3) marketability of lost foam castings; all three reduce energy consumption in the metals casting industry. This report summarizes the work done on all tasks in the period of January 1, 2004 through June 30, 2011. The current (2011) annual energy saving estimate, based on commercial introduction in 2011 and a market penetration of 97% by 2020, is 5.02 trillion Btu/year, rising to 6.46 trillion Btu/year with 100% market penetration by 2023. Along with these energy savings, the reduction of scrap and improvement in casting yield will reduce the environmental emissions associated with melting and pouring the metal. The average annual estimate of CO2 reduction per year through 2020 is 0.03 million metric tons of carbon equivalent (MM TCE).

  12. External Magnetic Field Reduction Techniques for the Advanced Stirling Radioisotope Generator

    Science.gov (United States)

    Niedra, Janis M.; Geng, Steven M.

    2013-01-01

    Linear alternators coupled to high efficiency Stirling engines are strong candidates for thermal-to-electric power conversion in space. However, the magnetic field emissions, both AC and DC, of these permanent magnet excited alternators can interfere with sensitive instrumentation onboard a spacecraft. Effective methods to mitigate the AC and DC electromagnetic interference (EMI) from solenoidal type linear alternators (like that used in the Advanced Stirling Convertor) have been developed for potential use in the Advanced Stirling Radioisotope Generator. The methods developed avoid the complexity and extra mass inherent in data extraction from multiple sensors or the use of shielding. This paper discusses these methods, and also provides experimental data obtained during breadboard testing of both AC and DC external magnetic field devices.

  13. Development of Head-end Pyrochemical Reduction Process for Advanced Oxide Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Park, B. H.; Seo, C. S.; Hur, J. M.; Jeong, S. M.; Hong, S. S.; Choi, I. K.; Choung, W. M.; Kwon, K. C.; Lee, I. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-12-15

    The development of an electrolytic reduction technology for spent fuels in oxide form is essential for introducing LWR spent fuels into pyroprocessing. In this research, the technology was investigated with a view to scaling up the reactor, the electrochemical behaviors of fission products (FPs) were studied to understand the process, and reaction rate data were obtained with U{sub 3}O{sub 8} in a bench-scale reactor. In a 20 kgHM/batch reactor, U{sub 3}O{sub 8} and Simfuel were successfully reduced into metals. Electrochemical characteristics of LiBr, LiI, and Li{sub 2}Se were measured in a bench-scale reactor, and an electrolytic reduction cell was modeled with a computational tool.

  14. Reduction of Worldwide Plutonium Inventories Using Conventional Reactors and Advanced Fuels: A Systems Study

    Energy Technology Data Exchange (ETDEWEB)

    Krakowski, R.A., Bathke, C.G.

    1997-12-31

    The potential for reducing plutonium inventories in the civilian nuclear fuel cycle through recycle in LWRs of a variety of mixed oxide forms is examined by means of a cost-based plutonium flow systems model. This model emphasizes: (1) the minimization of separated plutonium; (2) the long-term reduction of spent fuel plutonium; (3) the optimum utilization of uranium resources; and (4) the reduction of (relative) proliferation risks. This parametric systems study utilizes a globally aggregated, long-term (approx. 100 years) nuclear energy model that interprets scenario consequences in terms of material inventories, energy costs, and relative proliferation risks associated with the civilian fuel cycle. The impact of introducing nonfertile fuels (NFF, e.g., plutonium oxide in an oxide matrix that contains no uranium) into conventional (LWR) reactors to reduce net plutonium generation, to increase plutonium burnup, and to reduce exo-reactor plutonium inventories is also examined.

  15. ADVANCEMENT OF NUCLEIC ACID-BASED TOOLS FOR MONITORING IN SITU REDUCTIVE DECHLORINATION

    Energy Technology Data Exchange (ETDEWEB)

    Vangelas, K.; Edwards, Elizabeth; Loffler, Frank; Looney, Brian

    2006-11-17

    Regulatory protocols generally recognize that destructive processes are the most effective mechanisms supporting natural attenuation of chlorinated solvents. In many cases, these destructive processes will be biological and, for chlorinated compounds, will often be reductive processes that occur under anaerobic conditions. The existing EPA guidance (EPA, 1998) provides a list of parameters that give indirect evidence of reductive dechlorination processes. In an effort to gather direct evidence of these processes, scientists have identified key microorganisms and are currently developing tools to measure the abundance and activity of these organisms in subsurface systems. Drs. Edwards and Loffler are two recognized leaders in this field, and the research described herein continues their development efforts to provide a suite of tools enabling direct measurement of biological processes related to the reductive dechlorination of TCE and PCE. This study investigated the strengths and weaknesses of the 16S rRNA gene-based approach to characterizing the natural attenuation capabilities in samples. The results suggested that an approach based solely on 16S rRNA may not provide sufficient information to document the natural attenuation capabilities in a system because it does not distinguish between strains of organisms that have different biodegradation capabilities. The investigations provided evidence that tools focusing on the enzymes relevant to the functionally desired characteristics may be useful adjuncts to the 16S rRNA methods.

  16. Advances in projection of climate change impacts using supervised nonlinear dimensionality reduction techniques

    Science.gov (United States)

    Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali

    2016-05-01

    One of the main challenges in climate change studies is accurate projection of global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high-dimensional atmospheric variables is considered a key factor for improving climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new supervised dimensionality reduction approach, called "Supervised Principal Component Analysis" (Supervised PCA), for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of the atmospheric variables that have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods is evaluated, in comparison with some state-of-the-art dimensionality reduction algorithms, on the statistical downscaling of precipitation at a specific site using two nonlinear machine learning methods, Support Vector Regression and the Relevance Vector Machine. The results demonstrate that the Supervised PCA methods achieve a significant improvement in performance accuracy.
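
    A hedged sketch of Supervised PCA in its HSIC form: the projection directions are the top eigenvectors of X^T H L H X, where H is the centering matrix and L is a kernel on the response. This is a generic implementation of that construction, not the authors' downscaling pipeline; the test data are synthetic.

        import numpy as np

        def supervised_pca(X, y, n_components=2):
            """Supervised PCA: directions of X (n x p) with maximal
            HSIC dependence on the response y (n,), linear kernels."""
            n = X.shape[0]
            H = np.eye(n) - np.ones((n, n)) / n     # centering matrix
            L = np.outer(y, y)                      # linear kernel on y
            Q = X.T @ H @ L @ H @ X                 # p x p, symmetric
            _, eigvec = np.linalg.eigh(Q)           # eigenvalues ascending
            U = eigvec[:, ::-1][:, :n_components]   # top eigenvectors
            return X @ U, U

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 10))
        y = 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)
        Z, U = supervised_pca(X, y, n_components=1)
        print("leading direction loads on feature:", np.abs(U[:, 0]).argmax())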

  17. A variance-based sensitivity index function for factor prioritization

    International Nuclear Information System (INIS)

    Among the many uses for sensitivity analysis is factor prioritization: that is, the determination of which factor, once fixed to its true value, on average leads to the greatest reduction in the variance of an output. A key assumption is that a given factor can, through further research, be fixed to some point on its domain. In general, this is an optimistic assumption, which can lead to inappropriate resource allocation. This research develops an original method that apportions output variance as a function of the amount of variance reduction that can be achieved for a particular factor. This variance-based sensitivity index function provides a main effect sensitivity index for a given factor as a function of the amount of that factor's variance that can be reduced. An aggregate measure of which factors would on average cause the greatest reduction in output variance given future research is also defined; it assumes the portion of a particular factor's variance that can be reduced is a random variable. An average main effect sensitivity index is then calculated by taking the mean of the variance-based sensitivity index function. A key aspect of the method is that the analysis is performed directly on the samples that were generated during a global sensitivity analysis using rejection sampling. The method is demonstrated on the Ishigami function and an additive function, where the rankings for future research are shown to differ from those of a traditional global sensitivity analysis. - Highlights: ► A sensitivity index function that apportions output variance as a function of the variance reduction that can be achieved for a given factor. ► A main effect sensitivity index that assumes the portion of a particular factor's variance that can be reduced is a random variable. ► The proposed indices are estimated directly from samples generated during a global sensitivity analysis using rejection sampling. ► Methods are demonstrated on the Ishigami
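
    For orientation, the classical main effect index underlying this work is S_i = Var(E[Y | X_i]) / Var(Y). Below is a hedged given-data sketch that estimates it by binning on X_i, using the Ishigami function named in the abstract as the test case; this is the standard estimator, not the paper's index function.

        import numpy as np

        def main_effect_index(xi, y, n_bins=20):
            """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by binning on X_i."""
            edges = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
            idx = np.clip(np.searchsorted(edges, xi, side="right") - 1,
                          0, n_bins - 1)
            means = np.array([y[idx == b].mean() for b in range(n_bins)])
            weights = np.array([(idx == b).mean() for b in range(n_bins)])
            return np.sum(weights * (means - y.mean()) ** 2) / y.var()

        rng = np.random.default_rng(0)
        X = rng.uniform(-np.pi, np.pi, size=(100_000, 3))
        Y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
             + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))     # Ishigami function
        for i in range(3):
            print(f"S_{i+1} ~ {main_effect_index(X[:, i], Y):.3f}")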

  18. Comprehensive Study on the Estimation of the Variance Components of Traverse Nets

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper advances a new simplified formula for estimating variance components, sums up the basic law for calculating the weights of observed values, describes a circulation method that uses the increments of weights when estimating the variance components of traverse nets, advances the characteristic-roots method for estimating the variance components of traverse nets, and presents a practical method for simultaneously diagonalizing two real symmetric matrices.

  19. Explaining the Variance of Price Dividend Ratios

    OpenAIRE

    Cochrane, John H.

    1989-01-01

    This paper presents a bound on the variance of the price-dividend ratio and a decomposition of the variance of the price-dividend ratio into components that reflect variation in expected future discount rates and variation in expected future dividend growth. Unobserved discount rates needed to make the variance bound and variance decomposition hold are characterized, and the variance bound and variance decomposition are tested for several discount rate models, including the consumption based ...

  20. Alarm Reduction Processing of Advanced Nuclear Power Plant Using Data Mining and Active Database Technologies

    International Nuclear Information System (INIS)

    The purpose of Advanced Alarm Processing (AAP) is to extract only the most important and most relevant data out of a large amount of available information. It should be noted that the integrity of the knowledge base is the most critical factor in developing a reliable AAP. This paper proposes a new approach to AAP using Event-Condition-Action (ECA) rules that can be automatically triggered by an active database. It also proposes a knowledge acquisition method using data mining techniques to ensure the integrity of the alarm knowledge.
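
    A toy illustration of the ECA pattern, written as a Python analogue rather than actual active-database triggers (rule content and names are invented): each rule pairs a triggering event with a condition on plant state and an action that, here, suppresses a consequential alarm.

        # Toy Event-Condition-Action rules for alarm reduction (illustrative).
        rules = [
            {
                "event": "HIGH_PRESSURE",
                # suppress if the likely root cause is already alarming
                "condition": lambda state: "PUMP_TRIP" in state["active_alarms"],
                "action": lambda alarm: {**alarm, "suppressed": True,
                                         "reason": "consequence of PUMP_TRIP"},
            },
        ]

        def process_alarm(alarm, state):
            for rule in rules:
                if rule["event"] == alarm["event"] and rule["condition"](state):
                    return rule["action"](alarm)
            return alarm

        state = {"active_alarms": {"PUMP_TRIP"}}
        print(process_alarm({"event": "HIGH_PRESSURE"}, state))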

  1. Experiment and mechanism investigation on advanced reburning for NOx reduction: influence of CO and temperature

    OpenAIRE

    Wang, Zhi-Hua; Zhou, Jun-hu; Zhang, Yan-Wei; Lu, Zhi-Min; Fan, Jian-ren; Cen, Ke-fa

    2005-01-01

    Pulverized coal reburning, ammonia injection, and advanced reburning in a pilot-scale drop tube furnace were investigated. A premix of petroleum gas, air, and NH3 was burned in a porous gas burner to generate the needed flue gas. Four kinds of pulverized coal were fed as reburning fuel at a constant rate of 1 g/min. The coal reburning process parameters including 15%~25% reburn heat input, temperature range from 1100 °C to 1400 °C and also the carbon in fly ash, coal fineness, reburn zone stoichiom...

  2. Advances in earthquake and tsunami sciences and disaster risk reduction since the 2004 Indian ocean tsunami

    Science.gov (United States)

    Satake, Kenji

    2014-12-01

    The December 2004 Indian Ocean tsunami was the worst tsunami disaster in the world's history, with more than 200,000 casualties. The disaster was attributed to the giant size of the earthquake (magnitude M ~ 9, source length >1000 km) and to the lack of expectation of such an earthquake, of a tsunami warning system, and of knowledge of and preparedness for tsunamis in the Indian Ocean countries. In the last ten years, seismology and tsunami sciences as well as tsunami disaster risk reduction have developed significantly. Progress in seismology includes implementation of earthquake early warning, real-time estimation of earthquake source parameters and tsunami potential, paleoseismological studies of past earthquakes and tsunamis, and studies of probable maximum size, recurrence variability, and long-term forecasting of large earthquakes in subduction zones. Progress in tsunami science includes accurate modeling of tsunami sources, such as the contribution of horizontal components or "tsunami earthquakes"; development of new types of offshore and deep-ocean tsunami observation systems, such as GPS buoys and bottom pressure gauges; deployment of DART gauges in the Pacific and other oceans; improvements in tsunami propagation modeling; and real-time inversion or data assimilation for tsunami warning. These developments have been applied to tsunami disaster reduction in the form of tsunami early warning systems, tsunami hazard maps, and probabilistic tsunami hazard assessments. Some of these scientific developments helped to reveal the source characteristics of the 2011 Tohoku earthquake, which caused devastating tsunami damage in Japan and the Fukushima Dai-ichi Nuclear Power Station accident. Toward tsunami disaster risk reduction, interdisciplinary and trans-disciplinary approaches that bring scientists together with other stakeholders are needed.

  3. Analysis of Variance: Variably Complex

    Science.gov (United States)

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…

  4. Marked reduction of cerebral oxygen metabolism in patients with advanced cirrhosis; A positron emission tomography study

    Energy Technology Data Exchange (ETDEWEB)

    Kawatoko, Toshiharu; Murai, Koichiro; Ibayashi, Setsurou; Tsuji, Hiroshi; Nomiyama, Kensuke; Sadoshima, Seizo; Fujishima, Masatoshi; Kuwabara, Yasuo; Ichiya, Yuichi (Kyushu Univ., Fukuoka (Japan). Faculty of Medicine)

    1992-01-01

    Regional cerebral blood flow (rCBF), cerebral metabolic rate of oxygen (rCMRO{sub 2}), and oxygen extraction fraction (rOEF) were measured using positron emission tomography (PET) in four patients with cirrhosis (two males and two females, aged 57 to 69 years) in comparison with those in five age matched controls with previous transient global amnesia. PET studies were carried out when the patients were fully alert and oriented after the episodes of encephalopathy. In the patients, rCBF tended to be lower, while rCMRO{sub 2} was significantly lowered in almost all hemisphere cortices, more markedly in the frontal cortex. Our results suggest that the brain oxygen metabolism is diffusely impaired in patients with advanced cirrhosis, and the frontal cortex seems to be more susceptible to the systemic metabolic derangements induced by chronic liver disease. (author).

  5. Marked reduction of cerebral oxygen metabolism in patients with advanced cirrhosis

    International Nuclear Information System (INIS)

    Regional cerebral blood flow (rCBF), cerebral metabolic rate of oxygen (rCMRO2), and oxygen extraction fraction (rOEF) were measured using positron emission tomography (PET) in four patients with cirrhosis (two males and two females, aged 57 to 69 years) in comparison with those in five age matched controls with previous transient global amnesia. PET studies were carried out when the patients were fully alert and oriented after the episodes of encephalopathy. In the patients, rCBF tended to be lower, while rCMRO2 was significantly lowered in almost all hemisphere cortices, more markedly in the frontal cortex. Our results suggest that the brain oxygen metabolism is diffusely impaired in patients with advanced cirrhosis, and the frontal cortex seems to be more susceptible to the systemic metabolic derangements induced by chronic liver disease. (author)

  6. Advanced Monitoring of Trace Metals Applied to Contamination Reduction of Silicon Device Processing

    Science.gov (United States)

    Maillot, P.; Martin, C.; Planchais, A.

    2011-11-01

    The detrimental effects of metallic contamination on certain key electrical parameters of silicon devices mandate the use of state-of-the-art characterization and metrology tools as well as appropriate control plans. Historically, this has been commonly achieved in-line on monitor wafers through a combination of total reflectance X-ray fluorescence (TXRF) and post-anneal surface photovoltage (SPV). On the other hand, VPD (vapor phase decomposition) combined with ICP-MS (inductively coupled plasma mass spectrometry) or TXRF is known to provide both identification and quantification of surface trace metals at lower detection limits. Based on these considerations, an advanced monitoring scheme using SPV, TXRF, and automated VPD ICP-MS is described.

  7. Practice reduces task relevant variance modulation and forms nominal trajectory

    Science.gov (United States)

    Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-12-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.

  8. ADVANCED BYPRODUCT RECOVERY: DIRECT CATALYTIC REDUCTION OF SO2 TO ELEMENTAL SULFUR

    Energy Technology Data Exchange (ETDEWEB)

    Robert S. Weber

    1999-05-01

    Arthur D. Little, Inc., together with its commercialization partner, Engelhard Corporation, and its university partner Tufts, investigated a single-step process for direct, catalytic reduction of sulfur dioxide from regenerable flue gas desulfurization processes to the more valuable elemental sulfur by-product. This development built on recently demonstrated SO{sub 2}-reduction catalyst performance at Tufts University on a DOE-sponsored program and is, in principle, applicable to processing of regenerator off-gases from all regenerable SO{sub 2}-control processes. In this program, laboratory-scale catalyst optimization work at Tufts was combined with supported catalyst formulation work at Engelhard, bench-scale supported catalyst testing at Arthur D. Little and market assessments, also by Arthur D. Little. Objectives included identification and performance evaluation of a catalyst which is robust and flexible with regard to choice of reducing gas. The catalyst formulation was improved significantly over the course of this work owing to the identification of a number of underlying phenomena that tended to reduce catalyst selectivity. The most promising catalysts discovered in the bench-scale tests at Tufts were transformed into monolith-supported catalysts at Engelhard. These catalyst samples were tested at larger scale at Arthur D. Little, where the laboratory-scale results were confirmed, namely that the catalysts do effectively reduce sulfur dioxide to elemental sulfur when operated under appropriate levels of conversion and in conditions that do not contain too much water or hydrogen. Ways to overcome those limitations were suggested by the laboratory results. Nonetheless, at the end of Phase I, the catalysts did not exhibit the very stringent levels of activity or selectivity that would have permitted ready scale-up to pilot or commercial operation. Therefore, we chose not to pursue Phase II of this work which would have included further bench-scale testing

  9. Advanced oxidation and reduction processes: Closed-loop applications for mixed waste

    International Nuclear Information System (INIS)

    At Los Alamos we are engaged in applying innovative oxidation and reduction technologies to the destruction of hazardous organics. Nonthermal plasmas and relativistic electron beams both involve the generation of free radicals and are applicable to a wide variety of mixed waste, as closed-loop designs can be easily engineered. Silent discharge plasmas (SDP), long used for the generation of ozone, have been demonstrated in the laboratory to be effective in destroying hazardous organic compounds and offer an alternative to existing post-incineration and off-gas treatments. SDP generates very energetic electrons which efficiently create reactive free radicals, without adding the enthalpy associated with very high gas temperatures. An SDP cell has been used as a second stage to a LANL-designed, packed-bed reactor (PBR) and has demonstrated DREs as high as 99.9999% for a variety of combustible liquid and gas-based waste streams containing scintillation fluids, nitrates, PCB surrogates, and both chlorinated and fluorinated solvents. Radiolytic treatment of waste using electron beams and/or bremsstrahlung can be applied to a wide range of waste media (liquids, sludges, and solids). The efficacy and economy of these systems has been demonstrated for aqueous waste through both laboratory and pilot-scale studies. We will present recent experimental and theoretical results for systems using stand-alone SDP, combined PBR/SDP, and electron-beam treatment methods

  10. Advanced and developmental technologies for treatment and volume reduction of dry active wastes

    International Nuclear Information System (INIS)

    The nuclear power industry processes Dry Active Wastes (DAW) to achieve cost-effective volume reduction and/or to produce a residue that is more compatible with final disposal criteria. The two principal processes currently used by the industry are compaction and incineration. Although incineration is often considered the process of choice, capital and operating costs are often high, and in some countries, public opposition and lengthy permitting processes result in expensive delays to bringing the process to operation. Therefore, alternative treatment options (mechanical, thermal, chemical, and biological) are being investigated to provide timely, cost-effective options for industry use. An overview of those developmental processes considered applicable to processing DAW is presented. In each category, "established" processes are mentioned and/or referenced, but the focus is on "potential" technologies and the status of their development. The emphasis is on processing DAW, and therefore, those developmental processes that primarily treat solids in aqueous streams and melting/sintering technologies, both of lesser applicability to nuclear utility wastes, have been omitted. Included are those developmental technologies that appear to have a potential for radioactive waste application based on development or demonstration programs

  11. 2014 U.S. Offshore Wind Market Report: Industry Trends, Technology Advancement, and Cost Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Aaron; Stehly, Tyler; Musial, Walter

    2015-09-29

    2015 has been an exciting year for the U.S. offshore wind market. After more than 15 years of development work, the U.S. has finally hit a crucial milestone: Deepwater Wind began construction on the 30 MW Block Island Wind Farm (BIWF) in April. A number of other promising projects, however, have run into economic, legal, and political headwinds, generating much speculation about the future of the industry. This slow, and somewhat painful, start to the industry is not without precedent; each country in northern Europe began with pilot-scale, proof-of-concept projects before eventually moving to larger commercial-scale installations. Now, after more than a decade of commercial experience, the European industry is set to achieve a new deployment record, with more than 4 GW expected to be commissioned in 2015 and demonstrable progress towards industry-wide cost reduction goals. Deepwater Wind is leveraging 25 years of European deployment experience; the BIWF combines state-of-the-art technologies such as the Alstom 6 MW turbine with U.S. fabrication and installation competencies. The successful deployment of the BIWF will provide a concrete showcase that will illustrate the potential of offshore wind to contribute to state, regional, and federal goals for clean, reliable power and lasting economic development. It is expected that this initial project will launch the U.S. industry into a phase of commercial development that will position offshore wind to contribute significantly to the electric systems in coastal states by 2030.

  12. Boundary layer drag reduction research hypotheses derived from bio-inspired surface and recent advanced applications.

    Science.gov (United States)

    Luo, Yuehao; Yuan, Lu; Li, Jianhua; Wang, Jianshe

    2015-12-01

    Nature has supplied inexhaustible resources for mankind and has, at the same time, progressively developed into a school for scientists and engineers. Through more than four billion years of rigorous and stringent evolution, different creatures in nature have gradually developed their own special and fascinating biological functional surfaces. For example, sharkskin has a potential drag-reducing effect in turbulence, the lotus leaf possesses self-cleaning and anti-fouling functions, gecko feet have controllable super-adhesion surfaces, and the flexible skin of dolphins can accelerate their swimming velocity. Great benefits from applying biological functional surfaces in daily life, industry, transportation and agriculture have been achieved so far, and this field has attracted worldwide attention. In this overview, the bio-inspired drag-reducing mechanism derived from sharkskin is explained and explored comprehensively from different aspects, and the main applications in different areas of fluid engineering are then demonstrated in brief. This overview will improve comprehension of the drag-reduction mechanism of the sharkskin surface and understanding of its recent applications in fluid engineering. PMID:26348428

  13. 13 CFR 307.22 - Variances.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22....22 Variances. EDA may approve variances to the requirements contained in this subpart, provided such variances: (a) Are consistent with the goals of the Economic Adjustment Assistance program and with an...

  14. 29 CFR 1920.2 - Variances.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating...' COMPENSATION ACT § 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this chapter may be granted in the same circumstances in which variances may be granted under sections 6(b)...

  15. 10 CFR 851.31 - Variance process.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a...

  16. 40 CFR 59.106 - Variance.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Variance. 59.106 Section 59.106... Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated entity... control may apply in writing to the Administrator for a temporary variance. The variance application...

  17. 40 CFR 59.206 - Variances.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Variances. 59.206 Section 59.206... Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who cannot... control may apply in writing to the Administrator for a variance. The variance application shall...

  18. Variance decomposition in stochastic simulators

    International Nuclear Information System (INIS)

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models
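
    The Sobol-Hoeffding decomposition mentioned above can be illustrated with a generic pick-freeze Monte Carlo estimator on a toy deterministic function; this is only a sketch of the underlying variance decomposition idea, not the paper's Poisson-process reformulation, and the model below is invented.

```python
# First-order Sobol' sensitivity indices via the pick-freeze (Saltelli)
# estimator on a toy function; illustrates variance decomposition only.
import numpy as np

rng = np.random.default_rng(0)
N, d = 100_000, 3

def model(x):
    # Toy model: variance dominated by x1, then x2; x3 nearly inert.
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

A = rng.uniform(-1, 1, size=(N, d))
B = rng.uniform(-1, 1, size=(N, d))
yA, yB = model(A), model(B)
var_y = yA.var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # replace only input i
    # S_i = Var(E[Y | X_i]) / Var(Y), estimated per Saltelli et al. (2010)
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"S_{i + 1} ~ {S_i:.3f}")
```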

  19. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    Full Text Available The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance of the detection window. The variance is computed at two delayed times, so a modified early-late loop is used for frame position detection. The proposed algorithm deals with different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since they may be chosen within a wide range without strongly influencing system performance. The functionality of the proposed algorithm has been verified in a development environment using universal software radio peripheral (USRP) hardware.
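
    A rough sketch of the window-variance idea: the variance of the received samples jumps when a constant-envelope burst arrives, which a sliding window can detect. The window length, threshold, and signal model below are assumptions for illustration, not the paper's parameters.

```python
# Sliding-window variance as a crude frame-arrival detector (illustrative).
import numpy as np

def window_variance(x: np.ndarray, win: int) -> np.ndarray:
    """Variance of complex samples over a sliding window of length win."""
    kern = np.ones(win) / win
    mean = np.convolve(x, kern, mode="valid")
    mean_sq = np.convolve(np.abs(x) ** 2, kern, mode="valid")
    return mean_sq - np.abs(mean) ** 2

rng = np.random.default_rng(1)
noise = 0.1 * (rng.standard_normal(300) + 1j * rng.standard_normal(300))
burst = np.exp(2j * np.pi * rng.random(200))        # unit-power OFDM-like burst
rx = np.concatenate([noise, burst])

v = window_variance(rx, win=32)
start = int(np.argmax(v > 0.5 * v.max()))           # first strong variance rise
print("frame start detected near sample", start)    # true start is sample 300
```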

  20. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  1. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  2. Variance Risk Premiums in Foreign Exchange Markets

    OpenAIRE

    Ammann, Manuel; Buesser, Ralf

    2013-01-01

    Based on the theory of static replication of variance swaps we assess the sign and magnitude of variance risk premiums in foreign exchange markets. We find significantly negative risk premiums when realized variance is computed from intraday data with low frequency. As a likely consequence of microstructure effects however, the evidence is ambiguous when realized variance is based on high-frequency data. Common to all estimates, variance risk premiums are highly time-varying and inversely rel...

  3. An alternative analysis of variance

    OpenAIRE

    Longford, Nicholas T.

    2008-01-01

    The one-way analysis of variance is a staple of elementary statistics courses. The hypothesis test of homogeneity of the means encourages the use of the selected-model based estimators which are usually assessed without any regard for the uncertainty about the outcome of the test. We expose the weaknesses of such estimators when the uncertainty is taken into account, as it should be, and propose synthetic estimators as an alternative.

  4. On Mean-Variance Analysis

    OpenAIRE

    Yang Li; Pirvu, Traian A

    2011-01-01

    This paper considers the mean variance portfolio management problem. We examine portfolios which contain both primary and derivative securities. The challenge in this context is due to portfolio's nonlinearities. The delta-gamma approximation is employed to overcome it. Thus, the optimization problem is reduced to a well posed quadratic program. The methodology developed in this paper can be also applied to pricing and hedging in incomplete markets.

  5. Estimating the Modified Allan Variance

    Science.gov (United States)

    Greenhall, Charles

    1995-01-01

    The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
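
    For context, a minimal estimator of MVAR itself from phase (time-error) data, following the standard moving-average-of-second-differences definition; the edf analysis that is the subject of this record is not reproduced here, and the test data are synthetic.

```python
# Modified Allan variance from phase data x (seconds), standard definition:
# Mod sigma_y^2(m*tau0) = mean of squared m-point sums of second differences,
# divided by 2 * m^2 * (m*tau0)^2.
import numpy as np

def mod_allan_var(x: np.ndarray, m: int, tau0: float = 1.0) -> float:
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]       # second differences
    sums = np.convolve(d2, np.ones(m), mode="valid")  # m-point moving sums
    tau = m * tau0
    return float(np.mean(sums ** 2) / (2.0 * m ** 2 * tau ** 2))

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(0.0, 1e-9, 10_000))          # random-walk phase (white FM)
for m in (1, 2, 4, 8):
    print(m, mod_allan_var(x, m))                     # for m = 1, MVAR = AVAR
```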

  6. Vertical velocity variances and Reynold stresses at Brookhaven

    DEFF Research Database (Denmark)

    Busch, Niels E.; Brown, R.M.; Frizzola, J.A.

    1970-01-01

    Results of wind tunnel tests of the Brookhaven annular bivane are presented. The energy transfer functions describing the instrument response and the numerical filter employed in the data reduction process have been used to obtain corrected values of the normalized variance of the vertical wind...

  7. Analytic variance estimates of Swank and Fano factors

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov [US Food and Drug Administration, Silver Spring, Maryland 20993 (United States)

    2014-07-15

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
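
    For reference, the point estimates that the paper's variance estimators accompany can be computed from sampled detector outputs as below; the output distribution is hypothetical, and the variance estimators themselves are not reproduced.

```python
# Swank factor I = M1^2 / (M0 * M2) (with M0 = 1 for a normalized pulse-height
# distribution) and Fano factor F = variance / mean, from sampled outputs.
import numpy as np

rng = np.random.default_rng(3)
outputs = rng.gamma(shape=50.0, scale=20.0, size=100_000)  # hypothetical outputs

m1 = outputs.mean()
m2 = np.mean(outputs ** 2)
swank = m1 ** 2 / m2     # equals 1 / (1 + CV^2); close to 1 for narrow spectra
fano = outputs.var() / m1
print(f"Swank ~ {swank:.3f}, Fano ~ {fano:.1f}")
```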

  8. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide emissions from coal-fired boilers

    Energy Technology Data Exchange (ETDEWEB)

    Sorge, J.N.; Larrimore, C.L.; Slatsky, M.D.; Menzies, W.R.; Smouse, S.M.; Stallings, J.W.

    1997-12-31

    This paper discusses the technical progress of a US Department of Energy Innovative Clean Coal Technology project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The primary objectives of the demonstration is to determine the long-term NOx reduction performance of advanced overfire air (AOFA), low NOx burners (LNB), and advanced digital control optimization methodologies applied in a stepwise fashion to a 500 MW boiler. The focus of this paper is to report (1) on the installation of three on-line carbon-in-ash monitors and (2) the design and results to date from the advanced digital control/optimization phase of the project.

  9. Variance optimal stopping for geometric Levy processes

    DEFF Research Database (Denmark)

    Gad, Kamille Sofie Tågholt; Pedersen, Jesper Lund

    2015-01-01

    The main result of this paper is the solution to the optimal stopping problem of maximizing the variance of a geometric Lévy process. We call this problem the variance problem. We show that, for some geometric Lévy processes, we achieve higher variances by allowing randomized stopping. Furthermore, for some geometric Lévy processes, the problem has a solution only if randomized stopping is allowed. When randomized stopping is allowed, we give a solution to the variance problem. We identify the Lévy processes for which the allowance of randomized stopping times increases the maximum variance. When it does, we also solve the variance problem without randomized stopping.

  10. A multi-variance analysis in the time domain

    Science.gov (United States)

    Walter, Todd

    1993-01-01

    Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.

  11. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor design).

  12. Neutrino mass without cosmic variance

    CERN Document Server

    LoVerde, Marilena

    2016-01-01

    Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological datasets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias $b(k)$ and the linear growth parameter $f(k)$ inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on $b(k)$ and $f(k)$ continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via $b(k)$ and $f(k)$. The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high density limit, using multiple tracers allows cosmic-variance to be beaten and the forecasted errors on neutrino mass shrink dramatically. In...

  13. Variance analysis. Part I, Extending flexible budget variance analysis to acuity.

    Science.gov (United States)

    Finkler, S A

    1991-01-01

    The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
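
    The textbook flexible-budget decomposition the article builds on can be sketched as follows; all figures are hypothetical, and the article's acuity variance is not reproduced here.

```python
# Flexible-budget variance decomposition: price, quantity, and volume
# variances for a cost modeled as price * (hours per patient-day) * volume.
budget_price, actual_price = 30.0, 32.0      # cost per nursing hour
budget_hpp, actual_hpp = 4.0, 4.5            # hours per patient-day
budget_volume, actual_volume = 1000, 1100    # patient-days

price_var = (actual_price - budget_price) * actual_hpp * actual_volume
quantity_var = (actual_hpp - budget_hpp) * budget_price * actual_volume
volume_var = (actual_volume - budget_volume) * budget_hpp * budget_price

total_var = (actual_price * actual_hpp * actual_volume
             - budget_price * budget_hpp * budget_volume)
assert abs(total_var - (price_var + quantity_var + volume_var)) < 1e-9
print(price_var, quantity_var, volume_var)   # positive values = cost overruns
```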

  14. Mean-variance relation : a sentimental affair

    OpenAIRE

    Ascenso, Rui

    2015-01-01

    This work documents the role investor sentiment plays in the market's mean-variance tradeoff. We find that, during high-sentiment periods, investor sentiment undermines an otherwise positive mean-variance tradeoff. In low-sentiment periods, the common understanding holds that investors should obtain compensation for bearing variance risk. These findings are robust to different stock return indices, variance estimates and sentiment measures. We also provide international evidence for five o...

  15. Variance risk premia in energy commodities

    OpenAIRE

    Trolle, Anders; Eduardo S. Schwartz

    2010-01-01

    This paper investigates variance risk premia in energy commodities, particularly crude oil and natural gas, using a robust model-independent approach. Over a period of 11 years, we find that the average variance risk premia are significantly negative for both energy commodities. However, it is difficult to explain the level and variation in energy variance risk premia with systematic or commodity specific factors. The return profile of a natural gas variance swap resembles that of a call opti...

  16. Recent Advances in Carbon Supported Metal Nanoparticles Preparation for Oxygen Reduction Reaction in Low Temperature Fuel Cells

    Directory of Open Access Journals (Sweden)

    Yaovi Holade

    2015-03-01

    Full Text Available The oxygen reduction reaction (ORR) is the oldest studied and most challenging of the electrochemical reactions. Due to its sluggish kinetics, ORR became the major contemporary technological hurdle for electrochemists, as it hampers the commercialization of fuel cell (FC) technologies. Downsizing the metal particles to the nanoscale introduces unexpected fundamental modifications compared to the corresponding bulk state. To address these fundamental issues, various synthetic routes have been developed in order to provide more versatile carbon-supported low-platinum catalysts. Consequently, the approach of using nanocatalysts may overcome the drawbacks encountered in massive materials for energy conversion. This review paper aims at summarizing the recent important advances in carbon-supported metal nanoparticle preparation from colloidal methods (microemulsion, polyol, impregnation, Bromide Anion Exchange…) as cathode materials in low-temperature FCs. Special attention is devoted to the correlation of the structure of the nanoparticles with their catalytic properties. The influence of the synthesis method on the electrochemical properties of the resulting catalysts is also discussed. Emphasis on analyzing data from theoretical models to address the intrinsic and specific electrocatalytic properties, depending on the synthetic method, is incorporated throughout. The synthesis process-nanomaterial structure-catalytic activity relationships highlighted herein provide ample new rational, convenient and straightforward strategies and guidelines toward more effective nanomaterial design for energy conversion.

  17. Natural Exponential Families with Quadratic Variance Functions

    OpenAIRE

    Morris, Carl N.

    1982-01-01

    The normal, Poisson, gamma, binomial, and negative binomial distributions are univariate natural exponential families with quadratic variance functions (the variance is at most a quadratic function of the mean). Only one other such family exists. Much theory is unified for these six natural exponential families by appeal to their quadratic variance property, including infinite divisibility, cumulants, orthogonal polynomials, large deviations, and limits in distribution.

  18. 21 CFR 1010.4 - Variances.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs... PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for variances. (1) Upon application by a manufacturer (including an assembler), the Director, Center for...

  19. 40 CFR 142.41 - Variance request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variance request. 142.41 Section 142...) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...

  20. 20 CFR 654.402 - Variances.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Variances. 654.402 Section 654.402 Employees... EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a) An employer may apply for a permanent, structural variance from a specific standard(s) in...

  1. 40 CFR 52.2183 - Variance provision.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Variance provision. 52.2183 Section 52...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...

  2. 75 FR 6220 - Information Collection Requirements for the Variance Regulations; Submission for Office of...

    Science.gov (United States)

    2010-02-08

    ... Paperwork Reduction Act of 1995 (44 U.S.C. 3506 et seq.) and Secretary of Labor's Order No. 5-2007 (72 FR... Occupational Safety and Health Administration Information Collection Requirements for the Variance Regulations..., experimental, permanent, and national defense variances. DATES: Comments must be submitted...

  3. Global variance reduction for Monte Carlo reactor physics calculations

    International Nuclear Information System (INIS)

    Over the past few decades, hybrid Monte-Carlo-Deterministic (MC-DT) techniques have mostly focused on the development of techniques primarily with shielding applications in mind, i.e. problems featuring a limited number of responses. This paper focuses on the application of a new hybrid MC-DT technique, the SUBSPACE method, to reactor analysis calculations. The SUBSPACE method is designed to overcome the lack of efficiency that hampers the application of MC methods in routine analysis calculations on the assembly level, where typically one needs to execute the flux solver on the order of 10{sup 3}-10{sup 5} times. It places a high premium on attaining high computational efficiency for reactor analysis applications by identifying and capitalizing on the existing correlations between responses of interest. This paper places particular emphasis on using the SUBSPACE method for preparing homogenized few-group cross section sets on the assembly level for subsequent use in full-core diffusion calculations. A BWR assembly model is employed to calculate homogenized few-group cross sections for different burn-up steps. It is found that using the SUBSPACE method a significant speedup can be achieved over the state-of-the-art FW-CADIS method. While the presented speed-up alone is not sufficient to render the MC method competitive with the DT method, we believe this work will become a major step on the way to leveraging the accuracy of MC calculations for assembly calculations. (authors)

  4. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosher, Scott W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Seth R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bevill, Aaron M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ibrahim, Ahmad M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Daily, Charles R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Evans, Thomas M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wagner, John C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Grove, Robert E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  5. Seasonal variance in P system models for metapopulations

    Institute of Scientific and Technical Information of China (English)

    Daniela Besozzi; Paolo Cazzaniga; Dario Pescini; Giancarlo Mauri

    2007-01-01

    Metapopulations are ecological models describing the interactions and the behavior of populations living in fragmented habitats. In this paper, metapopulations are modelled by means of dynamical probabilistic P systems, where additional structural features have been defined (e.g., a weighted graph associated with the membrane structure and the reduction of maximal parallelism). In particular, we investigate the influence of stochastic and periodic resource feeding processes, owing to seasonal variance, on emergent metapopulation dynamics.

  6. The Correct Kriging Variance Estimated by Bootstrapping

    OpenAIRE

    den Hertog, D.; Kleijnen, J.P.C.; Siem, A.Y.D.

    2004-01-01

    The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrapping to estimate the Kriging variance. The new method is tested on several artificial examples and a real-life case study. These results demonstrate that the classic formula underestimates the true Kri...

  7. Variance Swaps and Intertemporal Asset Pricing

    OpenAIRE

    Nieto, Belén; Novales Cinca, Alfonso; Rubio, Gonzalo

    2011-01-01

    This paper proposes an ICAPM in which the risk premium embedded in variance swaps is the factor-mimicking portfolio for hedging exposure to changes in future investment conditions. Recent empirical evidence shows that investors' fears of deviations from normality in the distribution of returns are able to explain time-varying financial and macroeconomic risks in addition to being a determinant of the variance risk premium. Moreover, variance swaps hedge unfavorable changes in the stoch...

  8. Generalized analysis of molecular variance.

    Directory of Open Access Journals (Sweden)

    Caroline M Nievergelt

    2007-04-01

    Full Text Available Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent in assigning biological and statistical meaning to the resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used either to estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by

  9. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    Full Text Available The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 days of age (BW28) of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling run of 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of additive genetic variance components were 0.15; 4.18; 14.62; 27.18 and 32.68; the posterior means of maternal environment variance components were 0.23; 1.29; 2.76; 4.12 and 5.16; and the posterior means of residual variance components were 0.084; 6.43; 22.66; 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33; 0.35; 0.36; 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50; 0.11; 0.07; 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for those estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
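
    The heritabilities and maternal-variance proportions quoted above follow directly from the posterior means of the variance components; the short check below reproduces them.

```python
# Reproduce h^2 = VA / (VA + VM + VE) and the maternal proportion VM / VP
# from the posterior means reported in the abstract.
traits = ["BWH", "BW07", "BW14", "BW21", "BW28"]
va = [0.15, 4.18, 14.62, 27.18, 32.68]   # additive genetic variance
vm = [0.23, 1.29, 2.76, 4.12, 5.16]      # maternal environment variance
ve = [0.084, 6.43, 22.66, 31.21, 30.85]  # residual variance

for t, a, m, e in zip(traits, va, vm, ve):
    vp = a + m + e                        # phenotypic variance
    print(f"{t}: h2 = {a / vp:.2f}, maternal proportion = {m / vp:.2f}")
# Output matches the reported h2 of 0.33-0.47 and the drop in the maternal
# proportion from 0.50 at hatch to about 0.08 at 28 days.
```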

  10. Analysis of variance for model output

    NARCIS (Netherlands)

    Jansen, M.J.W.

    1999-01-01

    A scalar model output Y is assumed to depend deterministically on a set of stochastically independent input vectors of different dimensions. The composition of the variance of Y is considered; variance components of particular relevance for uncertainty analysis are identified. Several analysis of va

  11. On testing variance components in ANOVA models

    OpenAIRE

    Hartung, Joachim; Knapp, Guido

    2000-01-01

    In this paper we derive asymptotic chi-square tests for general linear hypotheses on variance components using repeated variance components models. In two examples, the two-way nested classification model and the two-way crossed classification model with interaction, we explicitly investigate the properties of the asymptotic tests in small sample sizes.

  12. 10 CFR 1022.16 - Variances.

    Science.gov (United States)

    2010-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF... Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  13. 10 CFR 1021.343 - Variances.

    Science.gov (United States)

    2010-01-01

    ... Procedures § 1021.343 Variances. (a) Emergency actions. DOE may take an action without observing all provisions of this part or the CEQ Regulations, in accordance with 40 CFR 1506.11, in emergency situations... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT...

  14. 18 CFR 1304.408 - Variances.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... § 1304.408 Variances. The Vice President or the designee thereof is authorized, following...

  15. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  16. Nonlinear epigenetic variance: review and simulations

    NARCIS (Netherlands)

    K.J. Kan; A. Ploeger; M.E.J. Raijmakers; C.V. Dolan; H.L.J. van der Maas

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addit

  17. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the composition of the optimal portfolio differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
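
    A minimal sketch of the mean-variance model on made-up data (the study's 20 FBMKLCI stocks are not available here); short sales are allowed, since only the two equality constraints are imposed.

```python
# Markowitz mean-variance: minimize w' C w  s.t.  w'mu = target and sum(w) = 1,
# solved in closed form via the KKT system of the equality-constrained QP.
import numpy as np

rng = np.random.default_rng(4)
returns = rng.normal(0.002, 0.02, size=(260, 4))  # hypothetical weekly returns
mu = returns.mean(axis=0)
cov = np.cov(returns, rowvar=False)
target = mu.mean()
n = len(mu)

kkt = np.zeros((n + 2, n + 2))
kkt[:n, :n] = 2 * cov
kkt[:n, n] = kkt[n, :n] = mu
kkt[:n, n + 1] = kkt[n + 1, :n] = 1.0
rhs = np.concatenate([np.zeros(n), [target, 1.0]])

w = np.linalg.solve(kkt, rhs)[:n]
print("weights:", np.round(w, 3))
print("portfolio variance:", w @ cov @ w)
```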

  18. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained genetic variance. However, in Holstein cattle, a group of genes that explained close to none of the genetic variance could also have a high likelihood ratio. This is still a good separation of signal and noise, but instead of capturing the genetic signal in the marker set being tested, we are capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups

  19. Encoding of natural sounds by variance of the cortical local field potential.

    Science.gov (United States)

    Ding, Nai; Simon, Jonathan Z; Shamma, Shihab A; David, Stephen V

    2016-06-01

    Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex. PMID:26912594
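
    The intertrial variance statistic itself is simple to compute; the toy sketch below contrasts the trial-averaged response with the variance across repetitions (the simulated data only loosely imitate an LFP and are not the study's recordings).

```python
# Mean evoked response vs. intertrial variance across repeated "trials".
import numpy as np

rng = np.random.default_rng(5)
n_trials, fs = 50, 100                          # 50 trials, 100 Hz sampling
t = np.arange(400) / fs                         # 4 s per trial

evoked = 0.8 * np.sin(2 * np.pi * 2.0 * t)      # stimulus-locked mean response
sigma = np.where(t < 2.0, 1.0, 0.4)             # variance suppressed after 2 s
trials = evoked + sigma * rng.standard_normal((n_trials, t.size))

mean_resp = trials.mean(axis=0)                 # classic trial average
intertrial_var = trials.var(axis=0)             # the statistic studied above
print("variance before suppression:", intertrial_var[t < 2.0].mean())
print("variance after suppression: ", intertrial_var[t >= 2.0].mean())
```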

  20. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach is capable of producing a lower risk for each return earned compared to the mean-variance approach.

  1. The term structure of variance swap rates and optimal variance swap investments

    OpenAIRE

    Egloff, Daniel; Leippold, Markus; Liuren WU

    2010-01-01

    This paper performs specification analysis on the term structure of variance swap rates on the S&P 500 index and studies the optimal investment decision on the variance swaps and the stock index. The analysis identifies two stochastic variance risk factors, which govern the short and long end of the variance swap term structure variation, respectively. The highly negative estimate for the market price of variance risk makes it optimal for an investor to take short positions in a short-term va...

  2. Prices and Asymptotics for Discrete Variance Swaps

    OpenAIRE

    Carole Bernard; Zhenyu Cui

    2013-01-01

    We study the fair strike of a discrete variance swap for a general time-homogeneous stochastic volatility model. In the special cases of Heston, Hull-White and Schobel-Zhu stochastic volatility models we give simple explicit expressions (improving Broadie and Jain (2008a) in the case of the Heston model). We give conditions on parameters under which the fair strike of a discrete variance swap is higher or lower than that of the continuous variance swap. The interest rate and the correlation b...

  3. On Normal Variance-Mean Mixtures

    CERN Document Server

    Yu, Yaming

    2011-01-01

    Normal variance-mean mixtures encompass a large family of useful distributions such as the generalized hyperbolic distribution, which itself includes the Student t, Laplace, hyperbolic, normal inverse Gaussian, and variance gamma distributions as special cases. We study shape properties of normal variance-mean mixtures, in both the univariate and multivariate cases, and determine conditions for unimodality and log-concavity of the density functions. This leads to a short proof of the unimodality of all generalized hyperbolic densities. We also interpret such results in practical terms and discuss discrete analogues.
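
    A normal variance-mean mixture is straightforward to sample: draw the mixing variable V, then set X = mu + beta*V + sqrt(V)*Z. With a gamma mixing law this gives a variance-gamma distribution, one of the special cases named above; the parameters below are arbitrary.

```python
# Sample X = mu + beta*V + sqrt(V)*Z with gamma-distributed V (variance-gamma).
import numpy as np

rng = np.random.default_rng(6)
mu, beta, n = 0.0, 0.5, 100_000
V = rng.gamma(shape=2.0, scale=1.0, size=n)   # mixing distribution
Z = rng.standard_normal(n)
X = mu + beta * V + np.sqrt(V) * Z

# Moment identities: E[X] = mu + beta*E[V];  Var[X] = E[V] + beta^2 * Var[V].
print(X.mean(), mu + beta * V.mean())
print(X.var(), V.mean() + beta ** 2 * V.var())
```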

  4. Variance gradients and uncertainty budgets for nonlinear measurement functions with independent inputs

    International Nuclear Information System (INIS)

    A novel variance-based measure for global sensitivity analysis, termed a variance gradient (VG), is presented for constructing uncertainty budgets under the Guide to the Expression of Uncertainty in Measurement (GUM) framework for nonlinear measurement functions with independent inputs. The motivation behind VGs is the desire of metrologists to understand which inputs' variance reductions would most effectively reduce the variance of the measurand. VGs are particularly useful when the application of the first supplement to the GUM is indicated because of the inadequacy of measurement function linearization. However, VGs reduce to a commonly understood variance decomposition in the case of a linear(ized) measurement function with independent inputs for which the original GUM readily applies. The usefulness of VGs is illustrated by application to an example from the first supplement to the GUM, as well as to the benchmark Ishigami function. A comparison of VGs to other available sensitivity measures is made. (paper)
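
    For orientation, the linear(ized) GUM budget to which, per the abstract, VGs reduce for a linear measurement function with independent inputs can be sketched as below; the measurement function and uncertainties are invented, and the VG itself is not reproduced.

```python
# Linearized GUM uncertainty budget: u_c^2 = sum_i (df/dx_i)^2 * u_i^2,
# with sensitivities estimated by central finite differences.
import numpy as np

def f(x1, x2, x3):                       # hypothetical measurement function
    return x1 * x2 / x3

x = np.array([10.0, 2.5, 4.0])           # input estimates
u = np.array([0.10, 0.05, 0.08])         # standard uncertainties
eps = 1e-6

grads = np.empty_like(x)
for i in range(x.size):
    d = np.zeros_like(x)
    d[i] = eps
    grads[i] = (f(*(x + d)) - f(*(x - d))) / (2 * eps)

contrib = (grads * u) ** 2               # per-input variance contributions
print("variance budget:", contrib)
print("combined standard uncertainty:", np.sqrt(contrib.sum()))
```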

  5. Advances in ion back-flow reduction in cascaded gaseous electron multipliers incorporating R-MHSP elements

    OpenAIRE

    Lyashenko, A. V.; Breskin, A.; Chechik, R.; Veloso, J. F. C. A.; Santos, J. M. F. dos; Amaro, F. D.

    2006-01-01

    A new concept is presented for the reduction of ion back-flow in GEM-based cascaded gaseous electron multipliers, by incorporating Micro-Hole & Strip Plate (MHSP) elements operating in reversed-bias mode (R-MHSP). About an order of magnitude reduction in ion back-flow is achieved by diverting back-drifting ions from their original path. A R-MHSP/2GEM/MHSP cascaded multiplier operated at total gain of ~1.5*10^5 yielded ion back-flow fractions of 0.0015 and 0.0004, at drift fields of 0.5 and 0....

  6. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  7. A Mean variance analysis of arbitrage portfolios

    Science.gov (United States)

    Fang, Shuhong

    2007-03-01

    Based on a careful analysis of the definition of an arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results (B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.

  8. Kalman filtering techniques for reducing variance of digital speckle displacement measurement noise

    Institute of Scientific and Technical Information of China (English)

    Donghui Li; Li Guo

    2006-01-01

    Target dynamics are assumed to be known in measuring digital speckle displacement. Use is made of a simple measurement equation, where measurement noise represents the effect of disturbances introduced in the measurement process. From these assumptions, a Kalman filter can be designed to reduce the variance of the measurement noise. An optical measurement and analysis system was set up, with which object motion with constant displacement and constant velocity was recorded to verify the validity of Kalman filtering techniques for reducing the variance of measurement noise.
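
    A scalar constant-velocity Kalman filter in the spirit of the abstract; the dynamics, noise levels, and measurement model are assumptions for illustration, not the experimental system's values.

```python
# Constant-velocity Kalman filter reducing displacement measurement noise.
import numpy as np

rng = np.random.default_rng(7)
dt, n = 1.0, 200
truth = 0.3 * dt * np.arange(n)               # constant-velocity displacement
meas = truth + rng.normal(0.0, 0.5, n)        # noisy speckle measurements

F = np.array([[1.0, dt], [0.0, 1.0]])         # state: [displacement, velocity]
H = np.array([[1.0, 0.0]])                    # we measure displacement only
Q = 1e-5 * np.eye(2)                          # small process noise (known dynamics)
R = 0.25                                      # measurement noise variance

x, P, est = np.zeros(2), np.eye(2), []
for z in meas:
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R                       # innovation variance (1x1)
    K = P @ H.T / S                           # Kalman gain (2x1)
    x = x + (K * (z - H @ x)).ravel()         # update state
    P = (np.eye(2) - K @ H) @ P               # update covariance
    est.append(x[0])

print("raw error variance:     ", np.var(meas - truth))
print("filtered error variance:", np.var(np.array(est) - truth))
```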

  9. Reduction of Ambient Radon Activity by the use of Advanced Building Materials at King Saud University, Saudi Arabia

    International Nuclear Information System (INIS)

    The spatial variation of radon concentration within the building of the preparatory year located in Riyadh was studied. Nuclear track detectors (CR-39) were used to measure radon concentration for two consecutive six-month periods in more than 40 rooms of the surveyed building. The coefficient of variation (CV) was calculated as a measure of the relative variation of radon concentration between floors and between rooms on the same floor. Floor mean ratios, with the ground floor as a reference level, were also calculated in order to study the correlation between radon concentration and floor level when the advanced Italian granite building material was used. All the results of this study were investigated and compared with those for the usual Indian granite building material, and it was found that the surveyed building is a healthy workplace, which may be due to the use of advanced building materials.

  10. Some Investigations on Hardness of Investment Casting Process After Advancements in Shell Moulding for Reduction in Cycle Time

    Science.gov (United States)

    Singh, R.; Mahajan, V.

    2014-07-01

    In the present work, surface hardness investigations have been made on acrylonitrile butadiene styrene (ABS) pattern-based investment castings after advancements in shell moulding for the replication of biomedical implants. For the present study, a hip joint, made of ABS material, was fabricated as a master pattern by fused deposition modelling (FDM). After preparation of the master pattern, a mould was prepared by deposition of primary (1°), secondary (2°) and tertiary (3°) coatings, with the addition of nylon fibre (1-2 cm in length, 1.5D). This study outlines the surface hardness mechanism for a cast component prepared from an ABS master pattern after the advancement in shell moulding. The results highlight that, during shell production, fibre-modified shells have a much reduced drain time. The results are further supported by cooling rate and microstructure analyses of the casting.

  11. [ADVANCE-ON Trial; How to Achieve Maximum Reduction of Mortality in Patients With Type 2 Diabetes].

    Science.gov (United States)

    Kanorskiĭ, S G

    2015-01-01

    Of 10,261 patients with type 2 diabetes who survived to the end of the randomized ADVANCE trial, 83% were included in the ADVANCE-ON project for 6 years of post-trial observation. The difference in blood pressure achieved during 4.5 years of in-trial treatment with a fixed perindopril/indapamide combination quickly vanished, but the significant decrease in total and cardiovascular mortality in the group treated with this combination for 4.5 years was sustained over the 6 years of post-trial follow-up. The results can be related to a gradually weakening protective effect of the perindopril/indapamide combination on the cardiovascular system, and indicate the expedience of long-term use of this antihypertensive therapy for maximal lowering of mortality in patients with diabetes. PMID:26164995

  12. Reduced Variance for Material Sources in Implicit Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, Todd J. [Los Alamos National Laboratory

    2012-06-25

    Implicit Monte Carlo (IMC), a time-implicit method due to Fleck and Cummings, is used for simulating supernovae and inertial confinement fusion (ICF) systems where x-rays tightly and nonlinearly interact with hot material. The IMC algorithm represents absorption and emission within a timestep as an effective scatter. Similarly, the IMC time-implicitness splits off a portion of a material source directly into the radiation field. We have found that some of our variance reduction and particle management schemes will allow large variances in the presence of small, but important, material sources, as in the case of ICF hot electron preheat sources. We propose a modification of our implementation of the IMC method in the Jayenne IMC Project. Instead of battling the sampling issues associated with a small source, we bypass the IMC implicitness altogether and simply deterministically update the material state with the material source if the temperature of the spatial cell is below a user-specified cutoff. We describe the modified method and present results on a test problem that show the elimination of variance for small sources.
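
    A schematic of the proposed cutoff logic as we read it from the abstract; the class, threshold value, and stand-in sampler are illustrative, not the Jayenne implementation:

    ```python
    from dataclasses import dataclass
    import random

    T_CUTOFF = 0.05  # user-specified temperature cutoff (arbitrary units; assumed)

    @dataclass
    class Cell:
        temperature: float       # material temperature of the spatial cell
        material_source: float   # material source energy rate
        material_energy: float = 0.0

    def sample_source_particles(cell, dt, n=100):
        """Stand-in Monte Carlo sampler: split the source energy into n particles."""
        e = cell.material_source * dt / n
        return [e * random.uniform(0.5, 1.5) for _ in range(n)]

    def emit_material_source(cell, dt):
        # Below the cutoff, bypass the IMC implicitness and update the material
        # state deterministically -- no particles, hence no sampling variance.
        if cell.temperature < T_CUTOFF:
            cell.material_energy += cell.material_source * dt
            return []
        return sample_source_particles(cell, dt)
    ```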

  13. 42 CFR 456.522 - Content of request for variance.

    Science.gov (United States)

    2010-10-01

    Title 42 (Public Health), UR Plan: Remote Facility Variances from Time Requirements, § 456.522 Content of request for variance: the agency's request for a variance must include—...

  14. An Approximation of the Minimum-Variance Estimator of Heritability Based on Variance Component Analysis

    OpenAIRE

    Grossman, M.; Norton, H W

    1981-01-01

    An approximate minimum-variance estimate of heritability (h2) is proposed, using the sire and dam components of variance from a hierarchical analysis of variance. The minimum sampling variance is derived for unbalanced data. Optimum structures for the estimation of h2 are given for the balanced case. The degree to which ĥ2 is more precise than the equally weighted estimate ĥ2S+D is a function of the size and structure of the sample used. However, computer simulation reveals that ĥ2 has less d...
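
    For reference, the equally weighted sire-plus-dam estimate mentioned above has the standard quantitative-genetics form (the paper's minimum-variance estimator refines the weighting):

    ```latex
    \hat{h}^{2}_{S+D} \;=\; \frac{2\left(\hat{\sigma}^{2}_{S} + \hat{\sigma}^{2}_{D}\right)}
    {\hat{\sigma}^{2}_{S} + \hat{\sigma}^{2}_{D} + \hat{\sigma}^{2}_{E}}
    ```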

  15. Techno-stress: a prospective psychophysiological study of the impact of a controlled stress-reduction program in advanced telecommunication systems design work.

    Science.gov (United States)

    Arnetz, B B

    1996-01-01

    There is a void of studies concerning the occupational health aspects of working with the most advanced forms of information technology, such as are found in some of the world-renowned telecommunication systems development laboratories. However, many of these techniques will later be applied in the regular office environment. We wanted to identify some of the major stressors perceived by advanced telecommunication systems design employees and develop a valid and reliable instrument by which to monitor such stressors. We were also interested in assessing the impact of a controlled prospective stress-reduction program on perceived mental stress and specific psychophysiological parameters. A total of 116 employees were recruited. Sixty-one were invited to participate in one of three stress-reduction training programs (intervention group). The other 50 functioned as a reference group. After a detailed baseline assessment, including a comprehensive questionnaire and psychophysiological measurements, new assessments were made at the end of the formal training program (+3 months) and after an additional 5-month period. Results reveal a significant improvement in the intervention group with regard to circulating levels of the stress-sensitive hormone prolactin as well as an attenuation of mental strain. Cardiovascular risk indicators were also improved. Circulating thrombocytes decreased in the intervention group. The type of stress-reduction program chosen and the intensity of participation did not significantly impact results. Coping style was not affected, and no beneficial effects were observed with regard to the psychological characteristics of the work, e.g. intellectual discretion and control over work processes. The survey instrument is now being used in the continuous improvement of work processes and strategic leadership of occupational health issues. The results suggest that prior psychophysiological stress research, based on low- and medium-skill, rather

  16. Inhomogeneity-induced variance of cosmological parameters

    CERN Document Server

    Wiegand, Alexander

    2011-01-01

    Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. So, how can local measurements (at the 100 Mpc scale) be used to determine global cosmological parameters (defined at the 10 Gpc scale)? We use Buchert's averaging formalism and determine a set of locally averaged cosmological parameters in the context of the flat Lambda cold dark matter model. We calculate their ensemble means (i.e. their global values) and variances (i.e. their cosmic variances). We apply our results to typical survey geometries and focus on the study of the effects of local fluctuations of the curvature parameter. By this means we show that in the linear regime cosmological backreaction and averaging can be reformulated as the issue of cosmic variance. The cosmic variance is found to be largest for the curvature parameter, and we discuss some of its consequences. We further propose to use the observed variance of cosmological parameters t...

  17. Giardia duodenalis: Number and Fluorescence Reduction Caused by the Advanced Oxidation Process (H2O2/UV)

    OpenAIRE

    Guimarães, José Roberto; Franco, Regina Maura Bueno; Guadagnini, Regiane Aparecida; Santos, Luciana Urbano dos

    2014-01-01

    This study evaluated the effect of peroxidation assisted by ultraviolet radiation (H2O2/UV), which is an advanced oxidation process (AOP), on Giardia duodenalis cysts. The cysts were inoculated in synthetic and surface water using a concentration of 12 g H2O2 L−1 and a UV dose (λ=254 nm) of 5,480 mJ cm−2. The aqueous solutions were concentrated using membrane filtration, and the organisms were observed using a direct immunofluorescence assay (IFA). The AOP was effective in reducing the number ...

  18. Genomic prediction of breeding values using previously estimated SNP variances

    NARCIS (Netherlands)

    Calus, M.P.L.; Schrooten, C.; Veerkamp, R.F.

    2014-01-01

    Background Genomic prediction requires estimation of variances of effects of single nucleotide polymorphisms (SNPs), which is computationally demanding, and uses these variances for prediction. We have developed models with separate estimation of SNP variances, which can be applied infrequently, and

  19. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper looks at some recent work on estimating quadratic variation using realized variance (RV) - that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high-frequency financial return data. When the underlying process is a rather general SV model - which is a special case of the semimartingale model - then QV is integrated variance, and we can derive the asymptotic distribution of the RV and its rate of convergence. These results do not require us to specify a model for either the drift or volatility functions, although we have to impose some weak regularity assumptions. We illustrate the use of the limit theory on some exchange rate data and some stock data. We show that even with large values of M the RV is sometimes a quite noisy estimator of integrated variance. Copyright © 2002 John Wiley & Sons, Ltd.
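
    The RV statistic itself is elementary to compute; a sketch, with simulated prices standing in for real high-frequency data:

    ```python
    import numpy as np

    def realized_variance(prices):
        """Realized variance: the sum of M squared intraday log-returns."""
        r = np.diff(np.log(prices))
        return float(np.sum(r ** 2))

    # Example with simulated 5-minute prices over one trading day (M = 78)
    rng = np.random.default_rng(0)
    prices = 100.0 * np.exp(np.cumsum(0.001 * rng.standard_normal(79)))
    rv = realized_variance(prices)   # noisy estimate of the day's integrated variance
    ```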

  20. One-step preparation of N-doped graphene/Co nanocomposite as an advanced oxygen reduction electrocatalyst

    International Nuclear Information System (INIS)

    Graphical abstract: N-doped graphene/Co nanocomposites were synthesized through a one-step pyrolysis process, and the product exhibits high performance for the ORR and excellent stability in alkaline medium. - Highlights: • N-doped graphene/Co nanocomposite is directly synthesized by a one-step method from Co(NO3)2∙6H2O, glucose and dicyandiamide (DCDA). • The reduction peak potential of the as-prepared NG/Co-0.5 is positively shifted by about 10 mV relative to a commercial Pt/C electrode. • The material shows excellent stability and tolerance to methanol poisoning effects in alkaline medium. - Abstract: N-doped graphene/Co nanocomposites (NG/Co NPs) have been prepared by a simple one-step pyrolysis of Co(NO3)2∙6H2O, glucose and dicyandiamide (DCDA). The products, nitrogen-doped and with a suitable degree of graphitization, show high electrocatalytic activity (with the reduction peak at −0.099 V vs Ag/AgCl) and near four-electron selectivity for the oxygen reduction reaction (ORR), with stability and durability in alkaline medium comparable to a commercial Pt/C catalyst. Owing to the superb ORR performance, low cost and facile preparation, NG/Co NP catalysts have great potential applications in fuel cells, metal-air batteries and ORR-related electrochemical industries.

  1. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  2. Inhomogeneity-induced variance of cosmological parameters

    Science.gov (United States)

    Wiegand, A.; Schwarz, D. J.

    2012-02-01

    Context. Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. This raises the question of how local measurements (at the ~10^2 Mpc scale) can be used to determine the global cosmological parameters (defined at the ~10^4 Mpc scale). Aims: We connect the questions of cosmological backreaction, cosmic averaging and the estimation of cosmological parameters and show how they relate to the problem of cosmic variance. Methods: We used Buchert's averaging formalism and determined a set of locally averaged cosmological parameters in the context of the flat Λ cold dark matter model. We calculated their ensemble means (i.e. their global value) and variances (i.e. their cosmic variance). We applied our results to typical survey geometries and focused on the study of the effects of local fluctuations of the curvature parameter. Results: We show that in the context of standard cosmology at large scales (larger than the homogeneity scale and in the linear regime), the question of cosmological backreaction and averaging can be reformulated as the question of cosmic variance. The cosmic variance is found to be highest in the curvature parameter. We propose to use the observed variance of cosmological parameters to measure the growth factor. Conclusions: Cosmological backreaction and averaging are real effects that have already been measured for a long time, e.g. by the fluctuations of the matter density contrast averaged over spheres of a certain radius. Backreaction and averaging effects from scales in the linear regime, as considered in this work, are shown to be important for the precise measurement of cosmological parameters.

  3. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.

  4. A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance

    Science.gov (United States)

    Weiss, Marc A.; Greenhall, Charles A.

    1996-01-01

    An approximating algorithm for computing the equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
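
    The edf algorithm itself is the paper's contribution and is not reproduced here; but given an equivalent degrees of freedom (edf) value, the confidence interval follows from the usual chi-square argument, as in this sketch:

    ```python
    from scipy.stats import chi2

    def variance_confidence_interval(var_estimate, edf, conf=0.95):
        """Two-sided interval for a variance-type statistic (MVAR, TVAR, ...)
        whose scaled estimate is treated as chi-square distributed with
        `edf` (possibly non-integer) degrees of freedom."""
        alpha = 1.0 - conf
        lower = edf * var_estimate / chi2.ppf(1.0 - alpha / 2.0, edf)
        upper = edf * var_estimate / chi2.ppf(alpha / 2.0, edf)
        return lower, upper
    ```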

  5. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander;

    2013-01-01

    variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different...

  6. Summary Report of Advanced Hydropower Innovations and Cost Reduction Workshop at Arlington, VA, November 5 & 6, 2015

    Energy Technology Data Exchange (ETDEWEB)

    O'Connor, Patrick [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rugani, Kelsey [Kearns & West, Inc., San Francisco, CA (United States); West, Anna [Kearns & West, Inc., San Francisco, CA (United States)

    2016-03-01

    On behalf of the U.S. Department of Energy (DOE) Wind and Water Power Technology Office (WWPTO), Oak Ridge National Laboratory (ORNL) hosted a day-and-a-half-long workshop on November 5 and 6, 2015 in the Washington, D.C. metro area to discuss cost reduction opportunities in the development of hydropower projects. The workshop had a further targeted focus on the costs of small, low-head facilities at both non-powered dams (NPDs) and along undeveloped stream reaches (also known as New Stream-Reach Development or “NSD”). Workshop participants included a cross-section of seasoned experts, including project owners and developers, engineering and construction experts, conventional and next-generation equipment manufacturers, and others, to identify the most promising ways to reduce costs and achieve improvements for hydropower projects.

  7. Formative Use of Intuitive Analysis of Variance

    Science.gov (United States)

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

  8. Multivariate Analysis of Variance Using Spatial Ranks

    OpenAIRE

    KYUNGMEE CHOI; JOHN MARDEN

    2002-01-01

    The authors consider multivariate analysis of variance procedures based on the multivariate spatial ranks. Two models are considered: the location-family model and the fully nonparametric model. Procedures for testing main and interaction effects are given for the 2 × 2 layout.

  9. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-b...

  10. Broadband Minimum Variance Beamforming for Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2009-01-01

    A minimum variance (MV) approach for near-field beamforming of broadband data is proposed. The approach is implemented in the frequency domain, and it provides a set of adapted, complex apodization weights for each frequency subband. The performance of the proposed MV beamformer is tested on...

  11. Strengthened Chernoff-type variance bounds

    OpenAIRE

    Afendras, G.; Papadatos, N.

    2014-01-01

    Let $X$ be an absolutely continuous random variable from the integrated Pearson family and assume that $X$ has finite moments of any order. Using some properties of the associated orthonormal polynomial system, we provide a class of strengthened Chernoff-type variance bounds.

  12. The Variance of Language in Different Contexts

    Institute of Scientific and Technical Information of China (English)

    申一宁

    2012-01-01

    Language can be quite different in meaning in different contexts. There are three categories of context: the culture, the situation and the co-text. In this article, we analyse the variance of language in each of these three aspects. This article is written for the purpose of helping people better understand the meaning of a language in a specific context.

  13. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive...

  14. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    Spatial Cox point processes are a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with...

  15. LOCAL MEDIAN ESTIMATION OF VARIANCE FUNCTION

    Institute of Scientific and Technical Information of China (English)

    杨瑛

    2004-01-01

    This paper considers local median estimation in fixed design regression problems. The proposed method is employed to estimate the median function and the variance function of a heteroscedastic regression model. Strong convergence rates of the proposed estimators are obtained. Simulation results are given to show the performance of the proposed methods.

  16. ROBUST ESTIMATION OF VARIANCE COMPONENTS MODEL

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Classical least squares estimation consists of minimizing the sum of the squared residuals of the observations. Many authors have produced more robust versions of this estimation by replacing the square with something else, such as the absolute value. These approaches have been generalized, and their robust estimations and influence functions of variance components have been presented. The results may have wide practical and theoretical value.

  17. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Tauchen, George; Zhou, Hao

    Motivated by the implications from a stylized self-contained general equilibrium model incorporating the effects of time-varying economic uncertainty, we show that the difference between implied and realized variation, or the variance risk premium, is able to explain a non-trivial fraction of the...

  18. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    We find that the difference between implied and realized variation, or the variance risk premium, is able to explain more than fifteen percent of the ex-post time series variation in quarterly excess returns on the market portfolio over the 1990 to 2005 sample period, with high (low) premia predi...

  19. Lorenz Dominance and the Variance of Logarithms.

    OpenAIRE

    Ok, Efe A.; Foster, James

    1997-01-01

    The variance of logarithms is a widely used inequality measure which is well known to disagree with the Lorenz criterion. Up to now, the extent and likelihood of this inconsistency were thought to be vanishingly small. We find that this view is mistaken: the extent of the disagreement can be extremely large; the likelihood is far from negligible.
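
    For concreteness, the measure in question is, in standard notation:

    ```latex
    VL(x) \;=\; \frac{1}{n}\sum_{i=1}^{n}\left(\ln x_i - \overline{\ln x}\right)^{2},
    \qquad
    \overline{\ln x} \;=\; \frac{1}{n}\sum_{i=1}^{n}\ln x_i
    ```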

  20. Variance Component Testing in Multilevel Models

    NARCIS (Netherlands)

    Berkhof, J.; Snijders, T.A.B.

    2001-01-01

    Available variance component tests are reviewed and three new score tests are presented. In the first score test, the asymptotic normal distribution of the test statistic is used as a reference distribution. In the other two score tests, a Satterthwaite approximation is used for the null distribution

  1. Linear transformations of variance/covariance matrices

    NARCIS (Netherlands)

    Parois, P.J.A.; Lutz, M.

    2011-01-01

    Many applications in crystallography require the use of linear transformations on parameters and their standard uncertainties. While the transformation of the parameters is textbook knowledge, the transformation of the standard uncertainties is more complicated and needs the full variance/covariance

  2. Bias and variance in continuous EDA

    OpenAIRE

    Teytaud, Fabien; Teytaud, Olivier

    2009-01-01

    Estimation of Distribution Algorithms are based on statistical estimates. We show that by combining classical tools from statistics, namely bias/variance decomposition, reweighting and quasi-randomization, we can strongly improve the convergence rate. All modifications are easy, compliant with most algorithms, and experimentally very efficient, in particular in the parallel case (large offspring populations).

  3. A study on effect of point-of-use filters on defect reduction for advanced 193nm processes

    Science.gov (United States)

    Vitorino, Nelson; Wolfer, Elizabeth; Cao, Yi; Lee, DongKwan; Wu, Aiwen

    2009-03-01

    Bottom Anti-Reflective Coatings (BARCs) have been widely used in the lithography process for decades. BARCs play important roles in controlling reflections and therefore improving swing ratios, CD variations, reflective notching, and standing waves. The implementation of BARC processes in 193nm dry and immersion lithography has been accompanied by defect reduction challenges on fine patterns. Point-of-use filters are well known to be among the most critical components on a track tool for ensuring low wafer defects by providing particle-free coatings on wafers. The filters must have very good particle retention to remove defect-causing particulates and gels while not altering the delicate chemical formulation of photochemical materials. This paper describes a comparative study of the efficiency and performance of various point-of-use filters in reducing defects observed in BARC materials. Multiple filter types with a variety of pore sizes, membrane materials, and filter designs were installed on an Entegris Intelligent(R) Mini dispense pump integrated in the coating module of a clean track. An AZ(R) 193nm organic BARC material was spin-coated on wafers through the various filter media. The lithographic performance of the filtered BARCs was examined and wafer defect analysis was performed. Through this study, the effect of filter properties on BARC process-related defects can be learned, and optimum filter media and designs can be selected for a BARC material to yield the lowest defect counts on a coated wafer.

  4. Advanced Technology Application Station Blackout Core Damage Frequency Reduction - The Contribution of an AC Independent Core Residual Heat Removal System

    International Nuclear Information System (INIS)

    An event of station blackout (SBO) can result in severe core damage and undesirable consequences to the public and the environment. To cope with an SBO, nuclear reactors are provided with protection systems that automatically shut down the reactor, and with safety systems to remove the core residual heat. In order to reduce the core damage frequency, the design of new reactors incorporates passive systems that rely only on natural forces to operate. This paper presents an evaluation of the SBO core damage frequency of a PWR reactor being designed in Brazil. The reactor has two core residual heat removal systems: an AC-dependent system and a passive system. Probabilistic safety assessment is applied to identify failure scenarios leading to SBO core damage. The SBO is treated as an initiating event, and fault trees are developed to model those systems required to operate in SBO conditions. Event trees are developed to assist in the evaluation of the possible combinations of success or failure of the systems required to cope with an SBO. The evaluation is performed using SAPHIRE as the software for reliability and risk assessment. It is shown that a substantial reduction in the core damage frequency can be achieved by implementing the passive system proposed for the LABGENE reactor design. Keywords: Station blackout, passive safety system, core damage frequency. (author)

  5. Advanced byproduct recovery: Direct catalytic reduction of sulfur dioxide to elemental sulfur. Quarterly report, April 1--June 30, 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    The team of Arthur D. Little, Tufts University and Engelhard Corporation is conducting Phase 1 of a four-and-a-half-year, two-phase effort to develop and scale up an advanced byproduct recovery technology that is a direct, single-stage, catalytic process for converting sulfur dioxide to elemental sulfur. This catalytic process reduces SO{sub 2} over a fluorite-type oxide (such as ceria or zirconia). The catalytic activity can be significantly promoted by active transition metals, such as copper. More than 95% elemental sulfur yield, corresponding to almost complete sulfur dioxide conversion, was obtained over a Cu-Ce-O oxide catalyst as part of an on-going DOE-sponsored University Coal Research Program. This type of mixed metal oxide catalyst has stable activity, high selectivity for sulfur production, and is resistant to water and carbon dioxide poisoning. Tests with CO and CH{sub 4} reducing gases indicate that the catalyst has the potential for flexibility with regard to the composition of the reducing gas, making it attractive for utility use. The performance of the catalyst is consistently good over a range of SO{sub 2} inlet concentrations (0.1 to 10%), indicating its flexibility in treating SO{sub 2} tail gases as well as high-concentration streams. The principal objective of the Phase 1 program is to identify and evaluate the performance of a catalyst which is robust and flexible with regard to the choice of reducing gas. In order to achieve this goal, the authors have planned a structured program including: market/process/cost evaluation; lab-scale catalyst preparation/optimization studies; lab-scale, bulk/supported catalyst kinetic studies; bench-scale catalyst/process studies; and utility review. Progress is reported from all three organizations.
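
    The overall reductions referred to above correspond to the following stoichiometry (our balancing for illustration; the catalytic mechanism over the Cu-Ce-O oxide is more involved):

    ```latex
    \mathrm{SO_2} + 2\,\mathrm{CO} \;\longrightarrow\; \tfrac{1}{2}\,\mathrm{S_2} + 2\,\mathrm{CO_2},
    \qquad
    2\,\mathrm{SO_2} + \mathrm{CH_4} \;\longrightarrow\; \mathrm{S_2} + \mathrm{CO_2} + 2\,\mathrm{H_2O}
    ```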

  6. 10 CFR 851.32 - Action on variance requests.

    Science.gov (United States)

    2010-01-01

    Title 10 (Energy), Department of Energy Worker Safety and Health Program, Variances, § 851.32 Action on variance requests: (a) ... upon approval of a variance application, the Chief Health, Safety and Security Officer must forward to the...

  7. 41 CFR 50-204.1a - Variances.

    Science.gov (United States)

    2010-07-01

    Title 41 (Public Contracts and Property Management), § 50-204.1a Variances: (a) Variances from standards in this part may be granted in the same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the...

  8. A Critical Note on the Forecast Error Variance Decomposition

    OpenAIRE

    Seymen, Atilim

    2008-01-01

    The paper questions the reasonableness of using forecast error variance decompositions for assessing the role of different structural shocks in business cycle fluctuations. It is shown that the forecast error variance decomposition is related to a dubious definition of the business cycle. A historical variance decomposition approach is proposed to overcome the problems related to the forecast error variance decomposition.

  9. 42 CFR 456.525 - Request for renewal of variance.

    Science.gov (United States)

    2010-10-01

    Title 42 (Public Health), UR Plan: Remote Facility Variances from Time Requirements, § 456.525 Request for renewal of variance: (a) The agency must submit a request for renewal of...

  10. 42 CFR 456.521 - Conditions for granting variance requests.

    Science.gov (United States)

    2010-10-01

    Title 42 (Public Health), UR Plan: Remote Facility Variances from Time Requirements, § 456.521 Conditions for granting variance requests: (a) Except as described under paragraph...

  11. Variance-reduced particle simulation of the Boltzmann transport equation in the relaxation-time approximation.

    Science.gov (United States)

    Radtke, Gregg A; Hadjiconstantinou, Nicolas G

    2009-05-01

    We present an efficient variance-reduced particle simulation technique for solving the linearized Boltzmann transport equation in the relaxation-time approximation used for phonon, electron, and radiative transport, as well as for kinetic gas flows. The variance reduction is achieved by simulating only the deviation from equilibrium. We show that in the limit of small deviation from equilibrium of interest here, the proposed formulation achieves low relative statistical uncertainty that is also independent of the magnitude of the deviation from equilibrium, in stark contrast to standard particle simulation methods. Our results demonstrate that a space-dependent equilibrium distribution improves the variance reduction achieved, especially in the collision-dominated regime where local equilibrium conditions prevail. We also show that by exploiting the physics of relaxation to equilibrium inherent in the relaxation-time approximation, a very simple collision algorithm with a clear physical interpretation can be formulated. PMID:19518597
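
    A toy illustration of the deviational principle (our construction, not the paper's transport solver): when the distribution is a known equilibrium plus an O(ε) perturbation, spending all samples on the perturbation alone scales the statistical error by ε:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000
    eps = 1e-3        # small deviation from equilibrium
    mu_eq = 0.0       # equilibrium mean, known analytically

    # Toy model: a mixture of a known equilibrium N(0,1) and a small
    # perturbation N(5,1) carrying weight eps.
    def sample_full(n):
        from_pert = rng.random(n) < eps
        x = rng.normal(mu_eq, 1.0, n)
        x[from_pert] = rng.normal(5.0, 1.0, from_pert.sum())
        return x

    # Standard estimator: all N samples spent on the full distribution;
    # statistical error ~ 1/sqrt(N) regardless of how small eps is.
    standard = sample_full(N).mean()

    # Deviational estimator: the equilibrium contribution is exact, and all
    # N samples are spent on the perturbation alone, so the statistical
    # error is multiplied by eps.
    pert_mean = rng.normal(5.0, 1.0, N).mean()
    deviational = (1.0 - eps) * mu_eq + eps * pert_mean
    ```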

  12. Carpal ligamentous disruptions and negative ulnar variance

    International Nuclear Information System (INIS)

    Negative ulnar variance is a condition in which the ulna is relatively shorter than the radius at the carpus. It was found in 21% of 203 normal wrists. We have observed an increased incidence (49%) of this anomaly in patients with carpal ligamentous instabilities (dorsiflexion instability, palmar flexion instability, scapholunate dissociation with rotary luxation of the scaphoid, and lunate and perilunate dislocations). While the reasons for this association have yet to be adequately delineated, the presence of a negative ulnar variant may serve as an impartial clue to the presence of ligamentous instability. Many carpal instabilities present with subtle radiographic findings requiring careful evaluation of radiographs. Patients with negative ulnar variance and histories suggestive of ligamentous instability should undergo careful radiologic evaluation to assure early diagnosis of carpal disruption. (orig.)

  13. Analysis of variance of microarray data.

    Science.gov (United States)

    Ayroles, Julien F; Gibson, Greg

    2006-01-01

    Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available. PMID:16939792
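
    A minimal per-gene one-way ANOVA, assuming a synthetic expression matrix (gene count, group sizes, and the 0.01 threshold are illustrative; the mixed models with dye/batch factors described above need richer tooling such as statsmodels):

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(1)

    # Hypothetical data: 500 genes x 12 arrays, three treatments of 4 arrays each
    expr = rng.normal(size=(500, 12))

    # One-way ANOVA per gene: F tests the between-treatment effect against
    # the within-treatment variance.
    p_values = np.array([
        f_oneway(expr[g, 0:4], expr[g, 4:8], expr[g, 8:12]).pvalue
        for g in range(expr.shape[0])
    ])
    candidates = np.where(p_values < 0.01)[0]   # putative differential genes
    ```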

  14. Applications of non-parametric statistics and analysis of variance on sample variances

    Science.gov (United States)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each method would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.

  15. Uses and abuses of analysis of variance.

    OpenAIRE

    Evans, S. J.

    1983-01-01

    Analysis of variance is a term often quoted to explain the analysis of data in experiments and clinical trials. The relevance of its methodology to clinical trials is shown and an explanation of the principles of the technique is given. The assumptions necessary are examined and the problems caused by their violation are discussed. The dangers of misuse are given with some suggestions for alternative approaches.

  16. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises and the appendices give selected percentage points of the Gaussian, t, F chi-squared and studentized range distributions.

  17. Mean variance optimality in Markov decision chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel; Sitař, Milan

    Hradec Králové : Gadeamus, 2005 - (Skalská, H.), s. 350-357 ISBN 978-80-7041-535-1. [Mathematical Methods in Economics 2005 /23./. Hradec Králové (CZ), 14.09.2005-16.09.2005] R&D Projects: GA ČR GA402/05/0115 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov reward processes * expectation and variance of cumulative rewards Subject RIV: BB - Applied Statistics, Operational Research

  18. High-dimensional regression with unknown variance

    CERN Document Server

    Giraud, Christophe; Verzelen, Nicolas

    2011-01-01

    We review recent results for high-dimensional sparse linear regression in the practical case of unknown variance. Different sparsity settings are covered, including coordinate-sparsity, group-sparsity and variation-sparsity. The emphasis is on non-asymptotic analyses and feasible procedures. In addition, a small numerical study compares the practical performance of three schemes for tuning the Lasso estimator, and some references are collected for more general models, including multivariate regression and nonparametric regression.

  19. The Theory of Variances in Equilibrium Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-14

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature.

  20. Systems Engineering Programmatic Estimation Using Technology Variance

    Science.gov (United States)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "returns" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  1. The Theory of Variances in Equilibrium Reconstruction

    International Nuclear Information System (INIS)

    The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical nature

  2. 40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.

    Science.gov (United States)

    2010-07-01

    Title 40 (Protection of Environment), § 142.65 Variances and exemptions from the maximum contaminant levels for radionuclides: (a)(1) ... co-precipitation with barium sulfate ... (f) Intermediate to advanced; ground waters with suitable water quality....

  3. Directional variance analysis of annual rings

    Science.gov (United States)

    Kumpulainen, P.; Marjanen, K.

    2010-07-01

    Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products with higher market value than is produced today. One of the key factors for increasing the market value is to provide better measurements that give more information to support decisions made later in the product chain. Strength and stiffness are important properties of wood. They are related to the mean annual ring width and its deviation. These indicators can be estimated from images taken of the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log end variance analysis based on the Radon transform is proposed. The directions and the positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log end analysis only. It is usable in other two-dimensional random signal and texture analysis tasks.
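
    A sketch of the directional-variance idea (our illustration of the principle, not the authors' algorithm; scikit-image supplies the Radon transform):

    ```python
    import numpy as np
    from skimage.transform import radon

    # Stand-in for a log-end image; a real image would show annual rings.
    image = np.random.rand(128, 128)

    # One projection (sinogram column) per angle; circle=False pads the image
    # so the whole frame contributes to each projection.
    theta = np.arange(0.0, 180.0, 1.0)
    sinogram = radon(image, theta=theta, circle=False)

    # Projections integrate the image along each direction, so a projection's
    # variance depends on whether it runs along or across the rings; the
    # extrema over theta therefore estimate the ring orientation.
    proj_var = sinogram.var(axis=0)
    dominant_angle = theta[np.argmax(proj_var)]
    ```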

  4. The Parabolic variance (PVAR), a wavelet variance based on least-square fit

    CERN Document Server

    Vernotte, F; Bourgeois, P -Y; Rubiola, E

    2015-01-01

    The Allan variance (AVAR) is one option among the wavelet variances. However, although a milestone in the analysis of frequency fluctuations and of the long-term stability of clocks, and certainly the most widely used, AVAR is not suitable when fast noise processes show up, chiefly because of its poor rejection of white phase noise. The modified Allan variance (MVAR) features high resolution in the presence of white PM noise, but it is poorer for slow phenomena because its wavelet spans a 50% longer time. This article introduces the Parabolic Variance (PVAR), a wavelet variance similar to the Allan variance, based on the Linear Regression (LR) of phase data. The PVAR relates to the Omega frequency counter, which is the topic of a companion article [the reference to the article, or to the ArXiv manuscript, will be provided later]. The PVAR wavelet spans 2 tau, the same as the AVAR wavelet. After setting the theoretical framework, we analyze the degrees of freedom and the detection of weak noise processes in...

  5. Minimum variance brain source localization for short data sequences.

    Science.gov (United States)

    Ravan, Maryam; Reilly, James P; Hasey, Gary

    2014-02-01

    In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the number of samples of the recorded data sequences is small in comparison to the number of electrodes. This condition is particularly relevant when measuring evoked potentials. Due to the correlated background EEG/MEG signal, an adaptive approach to localization is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This reduction results in decreased resolution and accuracy of the estimated source configuration. This paper develops and tests a new multistage adaptive processing technique based on the minimum variance beamformer for brain source localization that has been previously used in the radar statistical signal processing context. This processing, referred to as the fast fully adaptive (FFA) approach, can significantly reduce the required sample support, while still preserving all available DoFs. To demonstrate the performance of the FFA approach in the limited data scenario, simulation and experimental results are compared with two previous beamforming approaches; i.e., the fully adaptive minimum variance beamforming method and the beamspace beamforming method. Both simulation and experimental results demonstrate that the FFA method can localize all types of brain activity more accurately than the other approaches with limited data. PMID:24108457
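
    The minimum variance (Capon/MVDR) weight computation at the core of these beamformers is compact; a sketch with diagonal loading for the short-data regime (the multistage structure of the FFA method itself is not reproduced, and the steering vector below is a placeholder):

    ```python
    import numpy as np

    def mvdr_weights(R, a, loading=1e-3):
        """Minimum variance weights: minimize w^H R w subject to w^H a = 1.
        Diagonal loading regularizes a sample covariance estimated from
        fewer snapshots than channels."""
        n = R.shape[0]
        Rl = R + loading * np.trace(R).real / n * np.eye(n)
        Ri_a = np.linalg.solve(Rl, a)
        return Ri_a / (a.conj() @ Ri_a)

    # Toy use: 32 electrodes, 20 snapshots (fewer samples than channels).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((32, 20)) + 1j * rng.standard_normal((32, 20))
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    a = np.ones(32, dtype=complex)           # assumed source steering vector
    w = mvdr_weights(R, a)
    output = w.conj() @ X                    # beamformed time series
    ```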

  6. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Technical progress report, second quarter 1994, April 1994--June 1994

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-09-01

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4, located near Rome, Georgia. The primary goal of this project is the characterization of the low-NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low-NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters. Results are described.

  7. An effective approximation for variance-based global sensitivity analysis

    International Nuclear Information System (INIS)

    The paper presents a fairly efficient approximation for the computation of variance-based sensitivity measures associated with a general, n-dimensional function of random variables. The proposed approach is based on a multiplicative version of the dimensional reduction method (M-DRM), in which a given complex function is approximated by a product of low dimensional functions. Together with the Gaussian quadrature, the use of M-DRM significantly reduces the computation effort associated with global sensitivity analysis. An important and practical benefit of the M-DRM is the algebraic simplicity and closed-form nature of sensitivity coefficient formulas. Several examples are presented to show that the M-DRM method is as accurate as results obtained from simulations and other approximations reported in the literature
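
    The variance-based (Sobol) indices that the M-DRM approximates can also be estimated by plain Monte Carlo; a pick-freeze sketch for first-order indices, with a made-up test function (the M-DRM closed forms themselves are not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):
        # Stand-in for the general n-dimensional function (ours, for illustration)
        return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

    N, n = 100_000, 3
    A = rng.random((N, n))          # two independent input sample matrices
    B = rng.random((N, n))

    fA = model(A)
    var_total = fA.var()

    # First-order index S_i: freeze column i from A, resample the rest from B.
    S = np.empty(n)
    for i in range(n):
        ABi = B.copy()
        ABi[:, i] = A[:, i]
        S[i] = np.mean(fA * (model(ABi) - model(B))) / var_total
    ```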

  8. Further results on variances of local stereological estimators

    DEFF Research Database (Denmark)

    Pawlas, Zbynek; Jensen, Eva B. Vedel

    2006-01-01

    In the present paper the statistical properties of local stereological estimators of particle volume are studied. It is shown that the variance of the estimators can be decomposed into the variance due to the local stereological estimation procedure and the variance due to the variability in the particle population. It turns out that these two variance components can be estimated separately, from sectional data. We present further results on the variances that can be used to determine the variance by numerical integration for particular choices of particle shapes.
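
    The decomposition is an instance of the law of total variance; writing $\hat{v}$ for the estimator and $K$ for the random particle,

    ```latex
    \operatorname{Var}(\hat{v})
    \;=\; \underbrace{\mathbb{E}\!\left[\operatorname{Var}(\hat{v} \mid K)\right]}_{\text{local estimation}}
    \;+\; \underbrace{\operatorname{Var}\!\left(\mathbb{E}[\hat{v} \mid K]\right)}_{\text{particle population}}
    ```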

  9. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
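
    A skeletal RBPF update in the spirit of the factorization just described (our sketch; the names, noise levels, and 1-D Kalman map update are illustrative, not the Gamma-SLAM code, and resampling is omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N_PARTICLES, GRID = 50, (100, 100)

    # Each particle = one trajectory hypothesis plus the map conditioned on it:
    # a grid of per-cell elevation means and variances.
    particles = [{
        "pose": np.zeros(3),                    # x, y, heading
        "elev_mean": np.zeros(GRID),
        "elev_var": np.full(GRID, 1e3),         # large initial map uncertainty
        "weight": 1.0 / N_PARTICLES,
    } for _ in range(N_PARTICLES)]

    def step(particles, vo_delta, cell, z, r_var=0.05):
        """One RBPF update: propagate poses by visual odometry, weight by the
        stereo elevation measurement z at grid cell `cell`, update the map."""
        for p in particles:
            p["pose"] += vo_delta + rng.normal(0.0, 0.01, 3)  # noisy VO motion
            m, v = p["elev_mean"][cell], p["elev_var"][cell]
            p["weight"] *= np.exp(-0.5 * (z - m) ** 2 / (v + r_var))
            k = v / (v + r_var)                 # 1-D Kalman map-cell update
            p["elev_mean"][cell] = m + k * (z - m)
            p["elev_var"][cell] = (1.0 - k) * v
        total = sum(p["weight"] for p in particles)
        for p in particles:
            p["weight"] /= total

    step(particles, np.array([0.1, 0.0, 0.01]), (50, 50), z=0.2)
    ```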

  10. A relation between information entropy and variance

    CERN Document Server

    Pandey, Biswajit

    2016-01-01

    We obtain an analytic relation between the information entropy and the variance of a distribution in the regime of small fluctuations. We use a set of Monte Carlo simulations of different homogeneous and inhomogeneous distributions to verify the relation and also test it in a set of cosmological N-body simulations. We find that the relation is in excellent agreement with the simulations and is independent of number density and the nature of the distributions. The relation would help us to relate entropy to other conventional measures and widen its scope.
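
    The Gaussian case gives the flavour of such a relation (a textbook special case, not the paper's general result): differential entropy is an explicit, monotonic function of the variance, so small fluctuations in one map onto the other,

    ```latex
    H \;=\; \tfrac{1}{2}\ln\!\left(2\pi e\,\sigma^{2}\right),
    \qquad
    \delta H \;\simeq\; \frac{\delta\!\left(\sigma^{2}\right)}{2\,\sigma^{2}}
    ```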

  11. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can free...

  12. A Mean-variance Problem in the Constant Elasticity of Variance (CEV) Model

    Institute of Scientific and Technical Information of China (English)

    Hou Ying-li; Liu Guo-xin; Jiang Chun-lan

    2015-01-01

    In this paper, we focus on a constant elasticity of variance (CEV) model and seek its optimal strategies for a mean-variance problem under two constrained controls: reinsurance/new business and investment (no shorting). First, a Lagrange multiplier is introduced to simplify the mean-variance problem, and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is established. Via a power transformation technique and a variable change method, the optimal strategies with the Lagrange multiplier are obtained. Finally, based on the Lagrange duality theorem, the optimal strategies and optimal value for the original problem (i.e., the efficient strategies and efficient frontier) are derived explicitly.
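
    For orientation, a common CEV parametrization and the Lagrange-relaxed mean-variance objective read as follows (the paper's exact notation may differ):

    ```latex
    dS_t = \mu S_t\,dt + \sigma S_t^{\beta+1}\,dW_t,
    \qquad
    \min_{\pi}\ \operatorname{Var}\!\left[X_T^{\pi}\right]
    \ \text{s.t.}\ \mathbb{E}\!\left[X_T^{\pi}\right] = d
    \;\longrightarrow\;
    \mathcal{L}(\pi,\lambda) = \operatorname{Var}\!\left[X_T^{\pi}\right]
    - 2\lambda\left(\mathbb{E}\!\left[X_T^{\pi}\right] - d\right)
    ```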

  13. Estimators for variance components in structured stair nesting models

    Science.gov (United States)

    Monteiro, Sandra; Fonseca, Miguel; Carvalho, Francisco

    2016-06-01

    The purpose of this paper is to present the estimation of the components of variance in structured stair nesting models. The relationship between the canonical variance components and the original ones is very important in obtaining those estimators.

  14. Understanding Scaled Prediction Variance Using Graphical Methods for Model Robustness, Measurement Error and Generalized Linear Models for Response Surface Designs

    OpenAIRE

    Ozol-Godfrey, Ayca

    2004-01-01

    Graphical summaries are becoming important tools for evaluating designs. The need to compare designs in terms of their prediction variance properties advanced this development. A recent graphical tool, the Fraction of Design Space (FDS) plot, is useful for calculating the fraction of the design space where the scaled prediction variance (SPV) is less than or equal to a given value. In this dissertation we adapt FDS plots to study three specific design problems: robustness to model assumptions, robustn...

  15. The Importance of Variance Analysis for Costs Control in Organizations

    OpenAIRE

    Okoh, L. O.; Uzoka, P.

    2012-01-01

    This review examines the importance of variance analysis for cost control in organizations. The study surveys the concept of variance analysis, its types, sources, objectives and significance. The study reports that variance analysis has a significant influence on the evaluation of individual performance in organizations and the assignment of responsibilities to individuals, and that it assists management in relying on the principle of management by exception; it recommends, among other things, variance analysis...

  16. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with realized range-based variance - a statistic that replaces every squared return of realized variance with a normalized squared range. If the entire sample path of the process is available...... variance. Our findings suggest that the empirical path of quadratic variation is also estimated better with the realized range-based variance....
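
    A sketch of the statistic described in the abstract, assuming Parkinson's normalization E[(log range)²] = 4 ln 2 · σ² for Brownian motion (the exact normalization used in the paper may differ):

    ```python
    import numpy as np

    LAMBDA2 = 4.0 * np.log(2.0)  # E[(log range)^2] / sigma^2 for Brownian motion

    def realized_range_variance(highs, lows):
        # Replace every squared return of realized variance with a
        # normalized squared log-range over each intraday interval.
        log_range = np.log(np.asarray(highs) / np.asarray(lows))
        return float(np.sum(log_range ** 2) / LAMBDA2)
    ```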

  17. Inheritance beyond plain heritability : variance controlling genes in Arabidopsis thaliana

    OpenAIRE

    Xia Shen; Mats Pettersson; Lars Rönnegård; Örjan Carlborg

    2012-01-01

    Author Summary The most well-studied effects of genes are those leading to different phenotypic means for alternative genotypes. A less well-explored type of genetic control is that resulting in a heterogeneity in variance between genotypes. Here, we reanalyze a publicly available Arabidopsis thaliana GWAS dataset to detect genetic effects on the variance heterogeneity, and our results indicate that the environmental variance is under extensive genetic control by a large number of variance-co...

  18. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric...... implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....

  19. The VIX, the Variance Premium and Stock Market Volatility

    OpenAIRE

    Bekaert, Geert; Hoerova, Marie

    2013-01-01

    We decompose the squared VIX index, derived from US S&P 500 options prices, into the conditional variance of stock returns and the equity variance premium. We evaluate a plethora of state-of-the-art volatility forecasting models to produce an accurate measure of the conditional variance. We then examine the predictive power of the VIX and its two components for stock market returns, economic activity and financial instability. The variance premium predicts stock returns while the conditional ...

  20. Volatility forecasting when the noise variance Is time-varying

    OpenAIRE

    Chaker, Selma; Meddahi, Nour

    2013-01-01

    This paper explores the volatility forecasting implications of a model in which the friction in high-frequency prices is related to the true underlying volatility. The contribution of this paper is to propose a framework under which the realized variance may improve volatility forecasting if the noise variance is related to the true return volatility. The realized variance is defined as the sum of the squared intraday returns. When based on high-frequency returns, the realized variance would ...
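
    The realized variance itself, as defined in the abstract, is a one-liner on intraday log prices; this sketch ignores the noise correction that is the paper's actual subject:

    ```python
    import numpy as np

    def realized_variance(prices):
        # Sum of squared intraday log returns.
        r = np.diff(np.log(np.asarray(prices)))
        return float(np.sum(r ** 2))
    ```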

  1. 40 CFR 142.43 - Disposition of a variance request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Disposition of a variance request. 142... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.43 Disposition of a variance request. (a) If...

  2. Semiparametric bounds of mean and variance for exotic options

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Finding semiparametric bounds for option prices is a widely studied pricing technique. We obtain closed-form semiparametric bounds of the mean and variance for the pay-off of two exotic (Collar and Gap) call options, given mean and variance information on the underlying asset price. Mathematically, we extend the domination technique by quadratic functions to bound means and variances.

  3. 40 CFR 52.1390 - Missoula variance provision.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Missoula variance provision. 52.1390... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Montana § 52.1390 Missoula variance provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was...

  4. Semiparametric bounds of mean and variance for exotic options

    Institute of Scientific and Technical Information of China (English)

    LIU GuoQing; LI V.Wenbo

    2009-01-01

    Finding semiparametric bounds for option prices is a widely studied pricing technique. We obtain closed-form semiparametric bounds of the mean and variance for the pay-off of two exotic (Collar and Gap) call options, given mean and variance information on the underlying asset price. Mathematically, we extend the domination technique by quadratic functions to bound means and variances.

  5. 40 CFR 142.42 - Consideration of a variance request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Consideration of a variance request... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.42 Consideration of a variance request. (a)...

  6. 40 CFR 142.40 - Requirements for a variance.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Requirements for a variance. 142.40... (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.40 Requirements for a variance. (a) The Administrator may...

  7. 31 CFR 10.67 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE INTERNAL REVENUE SERVICE Rules Applicable to Disciplinary Proceedings § 10.67 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in pleadings and the...

  8. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901... Suspension or Termination of Enrollment § 901.40 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the evidence adduced in support of the pleading,...

  9. Variance gamma process simulation and its parameter estimation

    OpenAIRE

    Kuzmina, A. V.

    2010-01-01

    The variance gamma process is a three-parameter process. It is simulated as a gamma time-changed Brownian motion and as a difference of two independent gamma processes. Estimates of the simulated variance gamma process parameters are presented in this paper.
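
    A minimal sketch of the first simulation route (Brownian motion evaluated on a gamma time change), assuming the common (theta, sigma, nu) parameterization of the VG process; the function name is ours, not the paper's:

    ```python
    import numpy as np

    def simulate_variance_gamma(theta, sigma, nu, T=1.0, n_steps=1_000, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        dt = T / n_steps
        # Gamma subordinator increments: mean dt, variance nu * dt.
        dG = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)
        # Brownian motion with drift, evaluated on the gamma clock:
        # X(t) = theta * G(t) + sigma * W(G(t)).
        dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n_steps)
        return np.concatenate(([0.0], np.cumsum(dX)))
    ```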

  10. The Effect of Selection on the Phenotypic Variance

    OpenAIRE

    Shnol, E.E.; Kondrashov, A S

    1993-01-01

    We consider the within-generation changes of phenotypic variance caused by selection w(x) acting on a quantitative trait x. If the trait has a Gaussian distribution before selection, its variance decreases if the second derivative of the logarithm of w(x) is negative for all x, and increases if it is positive for all x.
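
    Stated formally (a restatement of the abstract's claim, with the notation assumed by us):

    ```latex
    % x ~ N(mu, sigma^2) before selection; fitness function w(x) > 0.
    \[
    \frac{d^{2}}{dx^{2}}\ln w(x) < 0 \ \ \forall x
      \;\Longrightarrow\; \operatorname{Var}(x \mid \text{selection}) < \sigma^{2},
    \qquad
    \frac{d^{2}}{dx^{2}}\ln w(x) > 0 \ \ \forall x
      \;\Longrightarrow\; \operatorname{Var}(x \mid \text{selection}) > \sigma^{2}.
    \]
    ```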

  11. 40 CFR 59.509 - Can I get a variance?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Can I get a variance? 59.509 Section 59... Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a) Any... its reasonable control may apply in writing to the Administrator for a temporary variance....

  12. 31 CFR 8.59 - Proof; variance; amendment of pleadings.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE BUREAU OF ALCOHOL, TOBACCO AND FIREARMS Disciplinary Proceedings § 8.59 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading, the...

  13. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.

    Science.gov (United States)

    Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico

    2016-04-01

    This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition (or corner) where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift. PMID:26571523
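
    For reference, the classical overlapping Allan variance from phase data, i.e. the baseline against which PVAR is compared (this sketch is the standard AVAR, not the PVAR itself):

    ```python
    import numpy as np

    def allan_variance(x, tau0, m):
        # x: phase samples (seconds); tau0: sampling interval; m: averaging factor.
        x = np.asarray(x, dtype=float)
        tau = m * tau0
        # Second differences of phase over the averaging time tau:
        # sigma_y^2(tau) = < (x[i+2m] - 2 x[i+m] + x[i])^2 > / (2 tau^2).
        d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
        return float(np.mean(d2 ** 2) / (2.0 * tau ** 2))
    ```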

  14. The variance of the adjusted Rand index.

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J; Hubert, Lawrence

    2016-06-01

    For 30 years, the adjusted Rand index has been the preferred method for comparing 2 partitions (e.g., clusterings) of a set of observations. Although the index is widely used, little is known about its variability. Herein, the variance of the adjusted Rand index (Hubert & Arabie, 1985) is provided and its properties are explored. It is shown that a normal approximation is appropriate across a wide range of sample sizes and varying numbers of clusters. Further, it is shown that confidence intervals based on the normal distribution have desirable levels of coverage and accuracy. Finally, the first power analysis evaluating the ability to detect differences between 2 different adjusted Rand indices is provided. PMID:26881693
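
    The index whose variance the paper derives is the standard Hubert & Arabie (1985) adjustment, computable from the contingency table of the two partitions; a common implementation sketch (SciPy's `comb` assumed available):

    ```python
    import numpy as np
    from scipy.special import comb

    def adjusted_rand_index(labels_a, labels_b):
        # Contingency table of the two partitions.
        _, ai = np.unique(labels_a, return_inverse=True)
        _, bi = np.unique(labels_b, return_inverse=True)
        table = np.zeros((ai.max() + 1, bi.max() + 1), dtype=np.int64)
        np.add.at(table, (ai, bi), 1)
        # ARI = (Index - Expected Index) / (Max Index - Expected Index).
        index = comb(table, 2).sum()
        sum_rows = comb(table.sum(axis=1), 2).sum()
        sum_cols = comb(table.sum(axis=0), 2).sum()
        expected = sum_rows * sum_cols / comb(table.sum(), 2)
        max_index = 0.5 * (sum_rows + sum_cols)
        return (index - expected) / (max_index - expected)
    ```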

  15. The value of travel time variance

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Engelson, Leonid

    2011-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that...... does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers who can freely choose departure time and to travellers who use a scheduled service with fixed headway. Depending on...... parameters, travellers may be risk averse or risk seeking and the value of travel time may increase or decrease in the mean travel time....

  16. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

    Power is often overlooked in designing multivariate studies for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks' likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
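
    Once an F approximation and its noncentrality parameter are in hand, the power step reduces to the usual noncentral F recipe; a sketch (the noncentrality must be computed separately from the effect size and the chosen MANOVA statistic):

    ```python
    from scipy import stats

    def f_power(df1, df2, noncentrality, alpha=0.05):
        # Critical value under the null (central F), then the upper-tail
        # probability under the noncentral F gives the power.
        f_crit = stats.f.ppf(1.0 - alpha, df1, df2)
        return stats.ncf.sf(f_crit, df1, df2, noncentrality)
    ```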

  17. Variance-based interaction index measuring heteroscedasticity

    Science.gov (United States)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as low as 4 n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.

  18. Confidence Intervals for the Between Group Variance in the Unbalanced One-Way Random Effects Model of Analysis of Variance

    OpenAIRE

    Hartung, Joachim; Knapp, Guido

    2000-01-01

    A confidence interval for the between group variance is proposed which is deduced from Wald’s exact confidence interval for the ratio of the two variance components in the one-way random effects model and the exact confidence interval for the error variance resp. an unbiased estimator of the error variance. In a simulation study the confidence coefficients for these two intervals are compared with the confidence coefficients of two other commonly used confidence intervals. There, the confiden...

  19. Variance Estimation Using Refitted Cross-validation in Ultrahigh Dimensional Regression

    CERN Document Server

    Fan, Jianqing; Hao, Ning

    2010-01-01

    Variance estimation is a fundamental problem in statistical modeling. In ultrahigh dimensional linear regressions where the dimensionality is much larger than sample size, traditional variance estimation techniques are not applicable. Recent advances on variable selection in ultrahigh dimensional linear regressions make this problem accessible. One of the major problems in ultrahigh dimensional regression is the high spurious correlation between the unobserved realized noise and some of the predictors. As a result, the realized noises are actually predicted when extra irrelevant variables are selected, leading to serious underestimate of the noise level. In this paper, we propose a two-stage refitted procedure via a data splitting technique, called refitted cross-validation (RCV), to attenuate the influence of irrelevant variables with high spurious correlations. Our asymptotic results show that the resulting procedure performs as well as the oracle estimator, which knows in advance the mean regression functi...

  20. Regional sensitivity analysis using revised mean and variance ratio functions

    International Nuclear Information System (INIS)

    The variance ratio function, derived from the contribution to sample variance (CSV) plot, is a regional sensitivity index for studying how much the output deviates from the original mean of the model output when the distribution range of one input is reduced, and for measuring the contribution of different distribution ranges of each input to the variance of the model output. In this paper, revised mean and variance ratio functions are developed for quantifying the actual change of the model output mean and variance, respectively, when the range of one input is reduced. The connection between the revised variance ratio function and the original one is derived and discussed. It is shown that, compared with the classical variance ratio function, the revised one is more suitable for evaluating model output variance due to reduced ranges of model inputs. A Monte Carlo procedure, which needs only a single set of samples, is developed for efficiently computing the revised mean and variance ratio functions. The revised mean and variance ratio functions are compared with the classical ones using the Ishigami function. Finally, they are applied to a planar 10-bar structure

  1. Cyclostationary analysis with logarithmic variance stabilisation

    Science.gov (United States)

    Borghesani, Pietro; Shahriar, Md Rifat

    2016-03-01

    Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine whether peaks in the SES are likely to belong to a normal variability in the signal or are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator proves unbiased in the case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
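
    A sketch of the conventional SES that the paper takes as its starting point; details such as mean removal are choices of this sketch, not necessarily the paper's:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def squared_envelope_spectrum(x, fs):
        # Squared envelope from the analytic signal (Hilbert transform).
        env2 = np.abs(hilbert(x)) ** 2
        env2 = env2 - env2.mean()          # drop the DC component
        # Magnitude spectrum of the squared envelope: peaks flag CS2 components.
        ses = np.abs(np.fft.rfft(env2)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs, ses
    ```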

  2. Balanced and Approximate Zero-Variance Recursive Estimators for the Static Communication Network Reliability Problem

    OpenAIRE

    Cancela, Héctor; El Khadiri, Mohamed; Rubino, Gerardo; Tuffin, Bruno

    2015-01-01

    Exact evaluation of static network reliability parameters belongs to the NP-hard family and Monte Carlo simulation is therefore a relevant tool to provide their estimations. The first goal of this paper is to review a Recursive Variance Reduction (RVR) estimator which approaches the unreliability by recursively reducing the graph from the random choice of the first working link on selected cuts. We show that the method does not verify the bounded relative error (BRE)...

  3. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.

  4. Common, Specific, and Error Variance Components of Factor Models

    OpenAIRE

    Raffalovich, Lawrence E.; George W. Bohrnstedt

    1987-01-01

    In the classic factor-analysis model, the total variance of an item is decomposed into common, specific, and random error components. Since with cross-sectional data it is not possible to estimate the specific variance component, specific and random error variance are combined into the item's uniqueness. This procedure imposes a downward bias on item reliability estimates, however, and results in correlated item uniquenesses in longitudinal models. In this article, we describe a method for estimati...

  5. ELLIPTICAL SYMMETRY, EXPECTED UTILITY, AND MEAN-VARIANCE ANALYSIS

    OpenAIRE

    Carl H. NELSON; Ndjeunga, Jupiter

    1997-01-01

    Mean-variance analysis in the form of risk programming has a long, productive history in agricultural economics research, and risk programming continues to be used despite well-known theoretical results that choices based on mean-variance analysis are not consistent with choices based on expected utility maximization. This paper demonstrates that the multivariate distribution of returns used in risk programming must be elliptically symmetric in order for mean-variance analysis to be consisten...

  6. On spectral methods for variance based sensitivity analysis

    OpenAIRE

    Alexanderian, Alen

    2013-01-01

    Consider a mathematical model with a finite number of random parameters. Variance based sensitivity analysis provides a framework to characterize the contribution of the individual parameters to the total variance of the model response. We consider the spectral methods for variance based sensitivity analysis which utilize representations of square integrable random variables in a generalized polynomial chaos basis. Taking a measure theoretic point of view, we provide a rigorous and at the sam...

  7. Visualization Method for Finding Critical Care Factors in Variance Analysis

    OpenAIRE

    YUI, Shuntaro; BITO, Yoshitaka; OBARA, Kiyohiro; KAMIYAMA, Takuya; SETO, Kumiko; Ban, Hideyuki; HASHIZUME, Akihide; HAGA, Masashi; Oka, Yuji

    2006-01-01

    We present a novel visualization method for finding care factors in variance analysis. The analysis has two stages: the first enables users to extract a significant variance, and the second enables users to find the critical care factors behind that variance. The analysis was validated using synthetically created inpatient care processes. The method was found to be efficient in improving clinical pathways.

  8. Estimation of the Conditional Variance in Paired Experiments

    OpenAIRE

    Abadie, Alberto; Guido W. IMBENS

    2008-01-01

    In paired randomized experiments units are grouped in pairs, often based on covariate information, with random assignment within the pairs. Average treatment effects are then estimated by averaging the within-pair differences in outcomes. Typically the variance of the average treatment effect estimator is estimated using the sample variance of the within-pair differences. However, conditional on the covariates the variance of the average treatment effect estimator may be substantially smaller...

  9. Variance analysis. Part II, The use of computers.

    Science.gov (United States)

    Finkler, S A

    1991-09-01

    This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788

  10. Accounting for Variance in Hyperspectral Data Coming from Limitations of the Imaging System

    Science.gov (United States)

    Shurygin, B.; Shestakova, M.; Nikolenko, A.; Badasen, E.; Strakhov, P.

    2016-06-01

    Over the course of the past few years, a number of methods were developed to incorporate hyperspectral imaging specifics into generic data mining techniques traditionally used for hyperspectral data processing. Projection pursuit methods comprise the largest class of methods employed for hyperspectral image data reduction; however, they all have certain drawbacks making them either hard to use or inefficient. It has been shown that hyperspectral image (HSI) statistics tend to display "heavy tails" (Manolakis, 2003; Theiler, 2005), rendering most of the projection pursuit methods hard to use. Taking into consideration the magnitude of the described deviations of observed data PDFs from the normal distribution, it is apparent that a priori knowledge of the variance in data caused by the imaging system should be employed in order to efficiently classify objects on HSIs (Kerr, 2015), especially in cases of wildly varying SNR. A number of attempts to describe this variance and compensating techniques have been made (Aiazzi, 2006); however, new data quality standards are not yet set and accounting for the detector response is made under a large set of assumptions. The current paper addresses the issue of hyperspectral image classification in the context of different variance sources based on the knowledge of calibration curves (both spectral and radiometric) obtained for each pixel of the imaging camera. A camera produced by ZAO NPO Lepton (Russia) was calibrated and used to obtain a test image. A priori known values of SNR and spectral channel cross-correlation were incorporated into calculating the test statistics used in dimensionality reduction and feature extraction. A modification of the Expectation-Maximization classification algorithm for a non-Gaussian model, as described by Veracini (2010), was further employed. The impact of coarsening the calibration data by ignoring non-uniformities on the false alarm rate was studied. The case study shows both regions of scene-dominated variance and sensor-dominated variance, leading

  11. Discrete and continuous time dynamic mean-variance analysis

    OpenAIRE

    Reiss, Ariane

    1999-01-01

    In contrast to static mean-variance analysis, very few papers have dealt with dynamic mean-variance analysis. Here, the mean-variance efficient self-financing portfolio strategy is derived for n risky assets in discrete and continuous time. In the discrete setting, the resulting portfolio is mean-variance efficient in a dynamic sense. It is shown that the optimal strategy for n risky assets may be dominated if the expected terminal wealth is constrained to exactly attain a certain goal instead o...

  12. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    Science.gov (United States)

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  13. 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Public design report (preliminary and final)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    This Public Design Report presents the design criteria of a DOE Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of NO{sub x} emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 (500 MW) near Rome, Georgia. The technologies being demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NO{sub x} burner. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NO{sub x} burners, advanced overfire systems, and digital control system.

  14. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO sub x ) emissions from coal-fired boilers

    Energy Technology Data Exchange (ETDEWEB)

    1992-04-21

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.

  15. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. Technical progress report, fourth quarter 1991

    Energy Technology Data Exchange (ETDEWEB)

    1992-04-21

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NO{sub x}) emissions from coal-fired boilers. The project is being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NO{sub x} combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NO{sub x} reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NO{sub x} burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NO{sub x} reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency.

  16. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    International Nuclear Information System (INIS)

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10⁴ to 10⁶ times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of

  17. Estimation of Variance Components in the Mixed-Effects Models: A Comparison Between Analysis of Variance and Spectral Decomposition

    OpenAIRE

    Wu, Mi-Xia; Yu, Kai-Fun; Liu, Ai-Yi

    2009-01-01

    The mixed-effects models with two variance components are often used to analyze longitudinal data. For these models, we compare two approaches to estimating the variance components, the analysis of variance approach and the spectral decomposition approach. We establish a necessary and sufficient condition for the two approaches to yield identical estimates, and some sufficient conditions for the superiority of one approach over the other, under the mean squared error criterion. Applications o...

  18. Time variance effects and measurement error indications for MLS measurements

    DEFF Research Database (Denmark)

    Liu, Jiyuan

    1999-01-01

    Mathematical characteristics of Maximum-Length-Sequences are discussed, and effects of measuring on slightly time-varying systems with the MLS method are examined with computer simulations with MATLAB. A new coherence measure is suggested for the indication of time-variance effects. The results...... of the simulations show that the proposed MLS coherence can give an indication of time-variance effects....

  19. Productive Failure in Learning the Concept of Variance

    Science.gov (United States)

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  20. 75 FR 22424 - Avalotis Corp.; Grant of a Permanent Variance

    Science.gov (United States)

    2010-04-28

    ...), and 74 FR 41742 (August 18, 2009)). Zurn Industries, Inc. received two permanent variances from OSHA. The first variance, granted on May 14, 1985 (50 FR 20145), addressed the boatswain's-chair... proposed alternatives (see 38 FR 8545 (April 3, 1973), 44 FR 51352 (August 31, 1979), 50 FR 20145 (May...

  1. Sublinear variance for directed last-passage percolation

    OpenAIRE

    Graham, B. T.

    2009-01-01

    A range of first-passage percolation type models are believed to demonstrate the related properties of sublinear variance and superdiffusivity. We show that directed last-passage percolation with Gaussian vertex weights has a sublinear variance property. We also consider other vertex weight distributions. Corresponding results are obtained for the ground state of the `directed polymers in a random environment' model.

  2. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Science.gov (United States)

    2010-07-01

    ... process your variance petition. (i) The Assistant Secretary will offer your employees and their authorized... the facts or conduct that may warrant revocation of your variance; and (ii) Provide you, your employees, and authorized employee representatives with an opportunity to participate in the...

  3. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  4. Research on variance of subnets in network sampling

    Institute of Scientific and Technical Information of China (English)

    Qi Gao; Xiaoting Li; Feng Pan

    2014-01-01

    In the recent research of network sampling, some sampling concepts are misunderstood, and the variance of subnets is not taken into account. We propose the correct definition of the sample and sampling rate in network sampling, as well as the formula for calculating the variance of subnets. Then, three commonly used sampling strategies are applied to databases of the connecting nearest-neighbor (CNN) model, random network and small-world network to explore the variance in network sampling. As proved by the results, snowball sampling obtains the most variance of subnets, but does well in capturing the network structure. The variance of networks sampled by the hub and random strategy are much smaller. The hub strategy performs well in reflecting the property of the whole network, while random sampling obtains more accurate results in evaluating the clustering coefficient.

  5. Confidence Intervals of Variance Functions in Generalized Linear Model

    Institute of Scientific and Technical Information of China (English)

    Yong Zhou; Dao-ji Li

    2006-01-01

    In this paper we introduce an appealing nonparametric method for estimating variance and conditional variance functions in generalized linear models (GLMs), when designs are fixed points and random variables, respectively. Bias-corrected confidence bands are proposed for the (conditional) variance by local linear smoothers. Nonparametric techniques are developed for deriving the bias-corrected confidence intervals of the (conditional) variance. The asymptotic distribution of the proposed estimator is established, and we show that the bias-corrected confidence bands asymptotically have the correct coverage properties. A small simulation is performed when the unknown regression parameter is estimated by nonparametric quasi-likelihood. The results are also applicable to nonparametric autoregressive time series models with heteroscedastic conditional variance.

  6. Predicting the variance of a measurement with 1/f noise

    CERN Document Server

    Lenoir, Benjamin

    2013-01-01

    Measurement devices always add noise to the signal of interest, and it is necessary to evaluate the variance of the results. This article focuses on stationary random processes whose power spectral density is a power law of frequency. For flicker noise, which behaves as $1/f$ and is present in many different phenomena, the usual way of computing the variance leads to infinite values. This article proposes an alternative definition of the variance which takes into account the fact that measurement devices need to be calibrated. This new variance, which depends on the calibration duration, the measurement duration and the duration between the calibration and the measurement, avoids infinite values when computing the variance of a measurement.

  7. Variance After-Effects Distort Risk Perception in Humans.

    Science.gov (United States)

    Payzan-LeNestour, Elise; Balleine, Bernard W; Berrada, Tony; Pearson, Joel

    2016-06-01

    In many contexts, decision-making requires an accurate representation of outcome variance, otherwise known as "risk" in economics. Conventional economic theory assumes this representation to be perfect, thereby focusing on risk preferences rather than risk perception per se [1-3] (but see [4]). However, humans often misrepresent their physical environment. Perhaps the most striking of such misrepresentations are the many well-known sensory after-effects, which most commonly involve visual properties, such as color, contrast, size, and motion. For example, viewing downward motion of a waterfall induces the anomalous biased experience of upward motion during subsequent viewing of static rocks to the side [5]. Given that after-effects are pervasive, occurring across a wide range of time horizons [6] and stimulus dimensions (including properties such as face perception [7, 8], gender [9], and numerosity [10]), and that some evidence exists that neurons show adaptation to variance in the sole visual feature of motion [11], we were interested in assessing whether after-effects distort variance perception in humans. We found that perceived variance is decreased after prolonged exposure to high variance and increased after exposure to low variance within a number of different visual representations of variance. We demonstrate these after-effects occur across very different visual representations of variance, suggesting that these effects are not sensory, but operate at a high (cognitive) level of information processing. These results suggest, therefore, that variance constitutes an independent cognitive property and that prolonged exposure to extreme variance distorts risk perception, a fundamental challenge for economic theory and practice. PMID:27161500

  8. Pricing Volatility Derivatives Under the Modified Constant Elasticity of Variance Model

    OpenAIRE

    Leunglung Chan; Eckhard Platen

    2015-01-01

    This paper studies volatility derivatives, such as variance swaps, volatility swaps and options on variance, in the modified constant elasticity of variance model using the benchmark approach. Analytical pricing formulas for variance swaps are presented. In addition, numerical solutions for variance swaps, volatility swaps and options on variance are demonstrated.

  9. Modeling variance structure of body shape traits of Lipizzan horses.

    Science.gov (United States)

    Kaps, M; Curik, I; Baban, M

    2010-09-01

    Heterogeneity of variance of growth traits over age is a common issue in estimating genetic parameters and is addressed in this study by selecting appropriate variance structure models for additive genetic and environmental variances. Modeling and partitioning those variances connected with analyzing small data sets were demonstrated on Lipizzan horses. The following traits were analyzed: withers height, chest girth, and cannon bone circumference. The measurements were taken at birth, and at approximately 6, 12, 24, and 36 mo of age of 660 Lipizzan horses born in Croatia between 1948 and 2000. The corresponding pedigree file consisted of 1,458 horses. Sex, age of dam, and stud-year-season interaction were considered fixed effects; additive genetic and permanent environment effects were defined as random. Linear adjustments of age at measuring were done within measuring groups. Maternal effects were included only for measurements taken at birth and at 6 mo. Additive genetic variance structures were modeled by using uniform structures or structures based on polynomial random regression. Environmental variance structures were modeled by using one of the following models: unstructured, exponential, Gaussian, or combinations of identity or diagonal with structures based on polynomial random regression. The parameters were estimated by using REML. Comparison and fits of the models were assessed by using Akaike and Bayesian information criteria, and by checking graphically the adequacy of the shape of the overall (phenotypic) and component (additive genetic and environmental) variance functions. The best overall fit was obtained from models with unstructured error variance. Compared with the model with uniform additive genetic variance, models with structures based on random regression only slightly improved overall fit. Exponential and Gaussian models were generally not suitable because they do not accommodate adequately heterogeneity of variance. Using the unstructured

  10. Low-variance Monte Carlo Solutions of the Boltzmann Transport Equation

    CERN Document Server

    Hadjiconstantinou, Nicolas G; Baker, Lowell L

    2009-01-01

    We present and discuss a variance-reduced stochastic particle method for simulating the relaxation-time model of the Boltzmann transport equation. The present paper focuses on the dilute gas case, although the method is expected to directly extend to all fields (carriers) for which the relaxation-time approximation is reasonable. The variance reduction, achieved by simulating only the deviation from equilibrium, results in a significant computational efficiency advantage compared to traditional stochastic particle methods in the limit of small deviation from equilibrium. More specifically, the proposed method can efficiently simulate arbitrarily small deviations from equilibrium at a computational cost that is independent of the deviation from equilibrium, which is in sharp contrast to traditional particle methods.

  11. High-fidelity Simulation of Jet Noise from Rectangular Nozzles . [Large Eddy Simulation (LES) Model for Noise Reduction in Advanced Jet Engines and Automobiles

    Science.gov (United States)

    Sinha, Neeraj

    2014-01-01

    This Phase II project validated a state-of-the-art LES model, coupled with a Ffowcs Williams-Hawkings (FW-H) far-field acoustic solver, to support the development of advanced engine concepts. These concepts include innovative flow control strategies to attenuate jet noise emissions. The end-to-end LES/ FW-H noise prediction model was demonstrated and validated by applying it to rectangular nozzle designs with a high aspect ratio. The model also was validated against acoustic and flow-field data from a realistic jet-pylon experiment, thereby significantly advancing the state of the art for LES.

  12. Estimation of model error variances during data assimilation

    Science.gov (United States)

    Dee, D.

    2003-04-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data

  13. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    Science.gov (United States)

    Wu, Dong L.; Eckermann, Stephen D.

    2008-01-01

    The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths of roughly 5 km or more and horizontal along-track wavelengths of roughly 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.

  14. Study on reduction of neptunium and uranium in nitric acid solution using flow type electrolytic cell, as a basic technique for advanced reprocessing process

    International Nuclear Information System (INIS)

    The reduction of neptunium and uranium was studied using a flow type electrolytic cell containing a carbon-fiber column electrode. Np(VI) (10⁻³ mol·l⁻¹) in 3 mol·l⁻¹ HNO3 solution was quantitatively reduced into Np(V) at the potential of 0.3 V vs. Ag/AgCl using the cell. Reduction of U(VI) (0.1 mol·l⁻¹) into U(IV) with co-existing Np and Tc at −0.3 V vs. Ag/AgCl in 6 mol·l⁻¹ HNO3 solution was also demonstrated. (author)

  15. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and...... commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  16. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A; Kyvik, Kirsten O

    2007-01-01

    OBJECTIVE: Twin studies are used extensively to decompose the variance of a trait, mainly to estimate the heritability of the trait. A second purpose of such studies is to estimate to what extent the non-genetic variance is shared or specific to individuals. To a lesser extent the twin studies have been used in bivariate or multivariate analysis to elucidate common genetic factors to two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect......

  17. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual. Appendix 2: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    Science.gov (United States)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.

  18. A Variance Decomposition of Index-Linked Bond Returns

    OpenAIRE

    Francis Breedon

    2012-01-01

    We undertake a variance decomposition of index-linked bond returns for the US, UK and Iceland. In all cases, news about future excess returns is the key driver, though only for Icelandic bonds are returns independent of inflation.

  19. A new definition of nonlinear statistics mean and variance

    OpenAIRE

    Chen, W.,

    1999-01-01

    This note presents a new definition of nonlinear statistics mean and variance to simplify the nonlinear statistics computations. These concepts aim to provide a theoretical explanation of a novel nonlinear weighted residual methodology presented recently by the present author.

  20. A Multi-Period Mean-Variance Portfolio Selection Problem

    Directory of Open Access Journals (Sweden)

    Rodrigo de Barros Nabholz

    2005-06-01

    Full Text Available In a recent paper, Li and Ng (2000) considered the multi-period mean-variance optimization problem, with investment horizon T, for the case in which only the final variance Var(V(T)) or expected value of the portfolio E(V(T)) are considered in the optimization problem. In this paper we extend their results to the case in which the intermediate expected values E(V(t)) and variances Var(V(t)) for t = 1, ..., T can also be taken into account in the optimization problem. The main advantage of this technique is that it is possible to control the intermediate behavior of the portfolio's return or variance. An example illustrating this situation is presented.

  1. Allowable variance set on left ventricular function parameter

    International Nuclear Information System (INIS)

    Purpose: To evaluate the influence of allowable variance settings on left ventricular function parameters in arrhythmia patients during gated myocardial perfusion imaging. Method: 42 patients with evident arrhythmia underwent myocardial perfusion SPECT; three different allowable variance settings (20%, 60%, 100%) were set before acquisition for every patient, and the acquisitions were performed simultaneously. After reconstruction with Astonish, end-diastolic volume (EDV), end-systolic volume (ESV) and left ventricular ejection fraction (LVEF) were computed with Quantitative Gated SPECT (QGS). EDV, ESV and LVEF values were compared by analysis of variance using SPSS software. Result: There was no statistical difference among the three groups. Conclusion: For arrhythmia patients undergoing gated myocardial perfusion imaging, the allowable variance setting has no statistically significant effect on EDV, ESV or LVEF values. (authors)
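
    A minimal sketch of the ANOVA comparison the abstract describes, using scipy and synthetic LVEF values; the study's paired, simultaneous acquisitions would strictly call for a repeated-measures variant, so a one-way layout is used here only for simplicity.

    ```python
    import numpy as np
    from scipy import stats

    # One-way ANOVA of LVEF across the three allowable-variance settings;
    # the values are synthetic stand-ins for the measured QGS results.
    rng = np.random.default_rng(5)
    lvef_20 = rng.normal(60.0, 8.0, 42)    # 20% allowable variance
    lvef_60 = rng.normal(60.0, 8.0, 42)    # 60%
    lvef_100 = rng.normal(60.0, 8.0, 42)   # 100%
    f, p = stats.f_oneway(lvef_20, lvef_60, lvef_100)
    print(f"F = {f:.2f}, p = {p:.3f}")     # a large p echoes "no difference"
    ```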

  2. Variance estimation in neutron coincidence counting using the bootstrap method

    International Nuclear Information System (INIS)

    In the study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is generated. The outline of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both a very good agreement between the measured variance and the variance obtained through the bootstrap method, and a robustness of the method with respect to the duration of the measurement and the bootstrap parameters
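
    A minimal sketch of the bootstrap variance estimate described above, applied to the mean of synthetic count data; the Poisson counts and all parameters are placeholders, not actual NMC measurements.

    ```python
    import numpy as np

    def bootstrap_variance(samples, n_boot=2000, rng=None):
        """Estimate the variance of the sample mean by re-sampling with
        replacement and taking the variance of the bootstrap distribution."""
        rng = np.random.default_rng(rng)
        samples = np.asarray(samples)
        boot_means = np.empty(n_boot)
        for b in range(n_boot):
            pseudo = rng.choice(samples, size=samples.size, replace=True)
            boot_means[b] = pseudo.mean()
        return boot_means.var(ddof=1)

    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=50.0, size=200)    # synthetic count data
    print(f"bootstrap variance of the mean: {bootstrap_variance(counts):.4f}")
    print(f"textbook estimate s^2/n:        {counts.var(ddof=1) / counts.size:.4f}")
    ```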

  3. Detecting Pulsars with Interstellar Scintillation in Variance Images

    CERN Document Server

    Dai, S; Bell, M E; Coles, W A; Hobbs, G; Ekers, R D; Lenc, E

    2016-01-01

    Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show that variance images can indeed lead to the detection of pulsars by distinguishing them from other radio sources.

  4. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with realized range-based variance - a statistic that replaces every squared return of realized variance with a normalized squared range. If the entire sample path of the process is available, and under a set of weak conditions, our statistic is consistent and has a mixed Gaussian limit, whose precision is five times greater than that of realized variance. In practice, of course, inference is drawn from discrete data and true ranges are unobserved, leading to downward bias. We solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations are used to construct the high-low. The methodology is applied to TAQ data and compared with realized...

  5. Realized range-based estimation of integrated variance

    DEFF Research Database (Denmark)

    Christensen, Kim; Podolskij, Mark

    2007-01-01

    We provide a set of probabilistic laws for estimating the quadratic variation of continuous semimartingales with the realized range-based variance - a statistic that replaces every squared return of the realized variance with a normalized squared range. If the entire sample path of the process is available, and under a set of weak conditions, our statistic is consistent and has a mixed Gaussian limit, whose precision is five times greater than that of the realized variance. In practice, of course, inference is drawn from discrete data and true ranges are unobserved, leading to downward bias. We solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations are used to construct the high-low. The methodology is applied to TAQ data and compared...

  6. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

    Science.gov (United States)

    Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...

  7. Variance estimation in neutron coincidence counting using the bootstrap method

    Energy Technology Data Exchange (ETDEWEB)

    Dubi, C., E-mail: chendb331@gmail.com [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Ocherashvilli, A.; Ettegui, H. [Physics Department, Nuclear Research Center of the Negev, P.O.B. 9001 Beer Sheva (Israel); Pedersen, B. [Nuclear Security Unit, Institute for Transuranium Elements, Via E. Fermi, 2749 JRC, Ispra (Italy)

    2015-09-11

    In the study, we demonstrate the implementation of the “bootstrap” method for a reliable estimation of the statistical error in Neutron Multiplicity Counting (NMC) on plutonium samples. The “bootstrap” method estimates the variance of a measurement through a re-sampling process, in which a large number of pseudo-samples are generated, from which the so-called bootstrap distribution is generated. The outline of the present study is to give a full description of the bootstrapping procedure, and to validate, through experimental results, the reliability of the estimated variance. Results indicate both a very good agreement between the measured variance and the variance obtained through the bootstrap method, and a robustness of the method with respect to the duration of the measurement and the bootstrap parameters.

  8. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  9. Sublinear variance for directed last-passage percolation

    CERN Document Server

    Graham, B T

    2009-01-01

    A range of first-passage percolation type models are believed to demonstrate the related properties of sublinear variance and superdiffusivity. We show that directed last-passage percolation with Gaussian vertex weights has a sublinear variance property. The proof makes use of Benaim and Rossignol's work on concentration, adapting an argument of Benjamini, Kalai and Schramm from undirected first-passage percolation. The proof can be adapted to handle other vertex weight distributions such as the gamma distribution.

  10. Wavelet Variance Analysis of EEG Based on Window Function

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yuan-zhuang; YOU Rong-yi

    2014-01-01

    A new wavelet variance analysis method based on a window function is proposed to investigate the dynamical features of electroencephalogram (EEG). The experimental results show that the wavelet energy of epileptic EEGs is more discrete than that of normal EEGs, and that the variation of wavelet variance differs between epileptic and normal EEGs as the time-window width increases. Furthermore, it is found that the wavelet subband entropy (WSE) of epileptic EEGs is lower than that of normal EEGs.

  11. Temperature variance study in Monte-Carlo photon transport theory

    International Nuclear Information System (INIS)

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case

  12. Wild bootstrap of the mean in the infinite variance case

    OpenAIRE

    Giuseppe Cavaliere; Iliyan Georgiev; Robert Taylor, A. M.

    2011-01-01

    It is well known that the standard i.i.d. bootstrap of the mean is inconsistent in a location model with infinite variance (alfa-stable) innovations. This occurs because the bootstrap distribution of a normalised sum of infinite variance random variables tends to a random distribution. Consistent bootstrap algorithms based on subsampling methods have been proposed but have the drawback that they deliver much wider confidence sets than those generated by the i.i.d. bootstrap owing to the fact ...

  13. Testing instantaneous causality in presence of non constant unconditional variance

    OpenAIRE

    Gianetto, Quentin Giai; Raissi, Hamdi

    2012-01-01

    The problem of testing instantaneous causality between variables with time-varying unconditional variance is investigated. It is shown that the classical tests based on the assumption of stationary processes must be avoided in our non standard framework. More precisely we underline that the standard test does not control the type I errors, while the tests with White (1980) and Heteroscedastic Autocorrelation Consistent (HAC) corrections can suffer from a severe loss of power when the variance...

  14. Option Pricing in a Dynamic Variance-Gamma Model

    OpenAIRE

    Lorenzo Mercuri; Fabio Bellini

    2014-01-01

    We present a discrete time stochastic volatility model in which the conditional distribution of the logreturns is a Variance-Gamma, that is a normal variance-mean mixture with Gamma mixing density. We assume that the Gamma mixing density is time varying and follows an affine Garch model, trying to capture persistence of volatility shocks and also higher order conditional dynamics in a parsimonious way. We select an equivalent martingale measure by means of the conditional Esscher transform as...

  15. Adaptive Estimation of Autoregressive Models with Time-Varying Variances

    OpenAIRE

    Ke-Li Xu; Phillips, Peter C. B.

    2006-01-01

    Stable autoregressive models of known finite order are considered with martingale differences errors scaled by an unknown nonparametric time-varying function generating heterogeneity. An important special case involves structural change in the error variance, but in most practical cases the pattern of variance change over time is unknown and may involve shifts at unknown discrete points in time, continuous evolution or combinations of the two. This paper develops kernel-based estimators of th...

  16. Towards the ultimate variance-conserving convection scheme

    International Nuclear Information System (INIS)

    In the past various arguments have been used for applying kinetic energy-conserving advection schemes in numerical simulations of incompressible fluid flows. One argument is obeying the programmed dissipation by viscous stresses or by sub-grid stresses in Direct Numerical Simulation and Large Eddy Simulation, see e.g. [Phys. Fluids A 3 (7) (1991) 1766]. Another argument is that, according to e.g. [J. Comput. Phys. 6 (1970) 392; 1 (1966) 119], energy-conserving convection schemes are more stable i.e. by prohibiting a spurious blow-up of volume-integrated energy in a closed volume without external energy sources. In the above-mentioned references it is stated that nonlinear instability is due to spatial truncation rather than to time truncation and therefore these papers are mainly concerned with the spatial integration. In this paper we demonstrate that discretized temporal integration of a spatially variance-conserving convection scheme can induce non-energy conserving solutions. In this paper the conservation of the variance of a scalar property is taken as a simple model for the conservation of kinetic energy. In addition, the derivation and testing of a variance-conserving scheme allows for a clear definition of kinetic energy-conserving advection schemes for solving the Navier-Stokes equations. Consequently, we first derive and test a strictly variance-conserving space-time discretization for the convection term in the convection-diffusion equation. Our starting point is the variance-conserving spatial discretization of the convection operator presented by Piacsek and Williams [J. Comput. Phys. 6 (1970) 392]. In terms of its conservation properties, our variance-conserving scheme is compared to other spatially variance-conserving schemes as well as with the non-variance-conserving schemes applied in our shallow-water solver, see e.g. [Direct and Large-eddy Simulation Workshop IV, ERCOFTAC Series, Kluwer Academic Publishers, 2001, pp. 409-287

  17. Occupancy, spatial variance, and the abundance of species

    OpenAIRE

    He, F.; Gaston, K J

    2003-01-01

    A notable and consistent ecological observation known for a long time is that spatial variance in the abundance of a species increases with its mean abundance and that this relationship typically conforms well to a simple power law (Taylor 1961). Indeed, such models can be used at a spectrum of spatial scales to describe spatial variance in the abundance of a single species at different times or in different regions and of different species across the same set of areas (Taylor 1961).

  18. Valuation of Variance Forecast with Simulated Option Markets

    OpenAIRE

    Engle, Robert F; Che-Hsiung Hong; Alex Kane

    1990-01-01

    An appropriate metric for the success of an algorithm to forecast the variance of the rate of return on a capital asset could be the incremental profit from substituting it for the next best alternative. We propose a framework to assess incremental profits for competing algorithms to forecast the variance of a prespecified asset. The test is based on the return history of the asset in question. A hypothetical insurance market is set up, where competing forecasting algorithms are used. One alg...

  19. A characterization of Poisson-Gaussian families by generalized variance

    OpenAIRE

    Kokonendji, Célestin C.; Masmoudi, Afif

    2006-01-01

    We show that if the generalized variance of an infinitely divisible natural exponential family F in a d-dimensional linear space is an exponential function of the canonical parameter, then there exists k such that F is a product of k univariate Poisson and (d-k)-variate Gaussian families. In proving this fact, we use a suitable representation of the generalized variance as a Laplace transform and the result, due to Jörgens, Calabi and Pogorelov, that any strictly convex smooth funct...

  20. Testing hypothesis on stability of expected value and variance

    OpenAIRE

    Grzegorz Konczak; Janusz Wywial

    2006-01-01

    Simple samples are taken independently from a normal distribution. Two functions of the sample means and sample variances are considered. The density functions of these two statistics have been derived. These statistics can be applied to verify the hypothesis on the stability of the expected value and variance of a normal distribution considered, e.g., in statistical process control. The critical values for these statistics have been found using numerical integration. Tables with approximate critical values are included.

  1. CLTs and asymptotic variance of time-sampled Markov chains

    CERN Document Server

    Latuszynski, Krzysztof

    2011-01-01

    For a Markov transition kernel $P$ and a probability distribution $\mu$ on the nonnegative integers, a time-sampled Markov chain evolves according to the transition kernel $P_\mu = \sum_k \mu(k) P^k$. In this note we obtain CLT conditions for time-sampled Markov chains and derive a spectral formula for the asymptotic variance. Using these results we compare the efficiency of Barker's and Metropolis algorithms in terms of asymptotic variance.
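
    A small numerical sketch of the time-sampled kernel, built for a toy 3-state chain with a truncated geometric $\mu$; the spectral comparison below is a crude proxy for the asymptotic-variance comparison in the note, and all numbers are invented.

    ```python
    import numpy as np

    # Time-sampled kernel P_mu = sum_k mu(k) P^k for a toy 3-state chain,
    # with mu a geometric distribution truncated at K terms (assumed).
    P = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.50, 0.25],
                  [0.00, 0.50, 0.50]])
    p, K = 0.5, 50
    mu = p * (1 - p) ** np.arange(K)
    mu /= mu.sum()                           # renormalize after truncation
    P_mu = sum(m * np.linalg.matrix_power(P, k) for k, m in enumerate(mu))

    # second-largest eigenvalue modulus: a rough mixing/variance diagnostic
    slem = lambda M: sorted(abs(np.linalg.eigvals(M)))[-2]
    print(f"SLEM(P) = {slem(P):.3f},  SLEM(P_mu) = {slem(P_mu):.3f}")
    ```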

  2. FMRI group analysis combining effect estimates and their variances

    OpenAIRE

    Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Michael S Beauchamp; Cox, Robert W.

    2011-01-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an a...

  3. On variance estimate for covariate adjustment by propensity score analysis.

    Science.gov (United States)

    Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

    2016-09-10

    Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. PMID:26999553

  4. A Monte Carlo Study of Seven Homogeneity of Variance Tests

    Directory of Open Access Journals (Sweden)

    Howard B. Lee

    2010-01-01

    Problem statement: The decision by SPSS (now PASW) to use the unmodified Levene test to test homogeneity of variance was questioned. It was compared to six other tests. In total, seven homogeneity of variance tests used in Analysis Of Variance (ANOVA) were compared on robustness and power using Monte Carlo studies. The homogeneity of variance tests were (1) Levene, (2) modified Levene, (3) Z-variance, (4) Overall-Woodward Modified Z-variance, (5) O'Brien, (6) Samiuddin Cube Root and (7) F-Max. Approach: Each test was subjected to Monte Carlo analysis through differently shaped distributions: (1) normal, (2) platykurtic, (3) leptokurtic, (4) moderately skewed and (5) highly skewed. The Levene Test is the one used in all of the latest versions of SPSS. Results: The results from these studies showed that the Levene Test is neither the best nor the worst in terms of robustness and power. However, the modified Levene Test showed very good robustness when compared to the other tests but lower power than other tests. The Samiuddin test is at its best in terms of robustness and power when the distribution is normal. The results of this study showed the strengths and weaknesses of the seven tests. Conclusion/Recommendations: No single test outperformed the others in terms of robustness and power. The authors recommend that kurtosis and skewness indices be presented in statistical computer program packages such as SPSS to guide the data analyst in choosing which test would provide the highest robustness and power.
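
    A toy version of the Monte Carlo comparison for two of the seven tests (mean-centred Levene versus the median-centred "modified" Levene, both available in scipy), checking type I error under a skewed null with equal variances; the replication counts and distribution are arbitrary choices, not the study's design.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n_rep, n, k, alpha = 2000, 20, 3, 0.05
    hits = {"levene(mean)": 0, "levene(median)": 0}
    for _ in range(n_rep):
        # three groups with equal variances but a highly skewed distribution
        groups = [rng.exponential(scale=1.0, size=n) for _ in range(k)]
        if stats.levene(*groups, center="mean").pvalue < alpha:
            hits["levene(mean)"] += 1
        if stats.levene(*groups, center="median").pvalue < alpha:
            hits["levene(median)"] += 1
    for name, r in hits.items():
        print(f"{name}: empirical type I error = {r / n_rep:.3f}")  # nominal 0.05
    ```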

  5. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation; the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined
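
    A first-order error-propagation sketch of the materials-balance variance calculation that MAVARIC automates; the multiplicative error model and all numbers below are invented for illustration and are not MAVARIC's actual data tables.

    ```python
    import numpy as np

    def term_variance(mass, conc, rel_sd_mass, rel_sd_conc, n=1):
        """Variance of one SNM term (bulk mass * concentration) under a simple
        multiplicative error model, averaged over n repeated measurements."""
        snm = mass * conc
        return snm**2 * (rel_sd_mass**2 + rel_sd_conc**2) / n

    # materials balance = receipts - shipments + (beginning - ending inventory)
    variances = [
        term_variance(1000.0, 0.045, 0.002, 0.010),   # receipts
        term_variance(950.0, 0.044, 0.002, 0.010),    # shipments
        term_variance(400.0, 0.046, 0.001, 0.008),    # beginning inventory
        term_variance(420.0, 0.046, 0.001, 0.008),    # ending inventory
    ]
    var_mb = sum(variances)  # independent terms: signs drop out of the variance
    print(f"sigma(MB) = {np.sqrt(var_mb):.3f} (same units as the SNM terms)")
    ```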

  6. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation; the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined

  7. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.
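
    A small synthetic illustration of the errors-in-variables problem the paper addresses: naively regressing sample variances on squared sample means under a constant-coefficient-of-variation model attenuates the estimate of cv^2 when replication is low. The data and model are invented, and the paper's corrected estimators are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_genes, n_rep, cv = 5000, 3, 0.2      # few replicates, constant-CV model
    true_means = rng.uniform(1.0, 10.0, n_genes)
    data = true_means[:, None] * (1.0 + cv * rng.standard_normal((n_genes, n_rep)))

    m = data.mean(axis=1)                  # noisy estimates of the true means
    v = data.var(axis=1, ddof=1)           # per-gene sample variances
    # naive fit of var = a + b * mean^2 using the noisy means
    X = np.column_stack([np.ones(n_genes), m**2])
    a_hat, b_hat = np.linalg.lstsq(X, v, rcond=None)[0]
    print(f"naive cv^2 estimate: {b_hat:.4f} (true cv^2 = {cv**2:.4f})")
    ```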

  8. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Computational models are intensively used in engineering for risk analysis or prediction of future outcomes. Uncertainty and sensitivity analyses are of great help in these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs only a few are proposed in the literature in the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is set and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutual dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and ANOVA-representations of the model output. In the applications, we show the interest of the new sensitivity indices for model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
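
    For the standard independent-input case that this work generalizes, a pick-freeze Monte Carlo sketch of a first-order variance-based index; the toy model and sample sizes are arbitrary, and the orthogonalisation step needed for dependent inputs is not shown.

    ```python
    import numpy as np

    def first_order_index(model, i, d, n=200_000, rng=None):
        """Pick-freeze estimate of the first-order index S_i for a model
        with d independent inputs uniform on [-1, 1]."""
        rng = np.random.default_rng(rng)
        A = rng.uniform(-1.0, 1.0, (n, d))
        B = rng.uniform(-1.0, 1.0, (n, d))
        C = B.copy()
        C[:, i] = A[:, i]                   # C shares only input i with A
        yA, yB, yC = model(A), model(B), model(C)
        V_i = np.mean(yA * yC) - np.mean(yA) * np.mean(yB)
        return V_i / np.var(np.concatenate([yA, yB]))

    model = lambda X: X[:, 0] + 0.5 * X[:, 1] * X[:, 2]   # toy model
    for i in range(3):
        print(f"S_{i + 1} ~ {first_order_index(model, i, 3, rng=i):.3f}")
    ```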

  9. Innovative clean coal technology: 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Final report, Phases 1 - 3B

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-01-01

    This report presents the results of a U.S. Department of Energy (DOE) Innovative Clean Coal Technology (ICCT) project demonstrating advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. The project was conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The technologies demonstrated at this site include Foster Wheeler Energy Corporation's advanced overfire air system and Controlled Flow/Split Flame low NOx burner. The primary objective of the demonstration at Hammond Unit 4 was to determine the long-term effects of commercially available wall-fired low NOx combustion technologies on NOx emissions and boiler performance. Short-term tests of each technology were also performed to provide engineering information about emissions and performance trends. A target of achieving fifty percent NOx reduction using combustion modifications was established for the project. Short-term and long-term baseline testing was conducted in an "as-found" condition from November 1989 through March 1990. Following retrofit of the AOFA system during a four-week outage in spring 1990, the AOFA configuration was tested from August 1990 through March 1991. The FWEC CF/SF low NOx burners were then installed during a seven-week outage starting on March 8, 1991 and continuing to May 5, 1991. Following optimization of the LNBs and ancillary combustion equipment by FWEC personnel, LNB testing commenced during July 1991 and continued until January 1992. Testing in the LNB+AOFA configuration was completed during August 1993. This report provides documentation on the design criteria used in the performance of this project as it pertains to the scope involved with the low NOx burners and advanced overfire systems.

  10. Innovative Clean Coal Technology (ICCT): 500 MW demonstration of advanced wall-fired combustion techniques for the reduction of nitrogen oxide (NOx) emissions from coal-fired boilers. Technical progress report: First quarter 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-12-31

    This quarterly report discusses the technical progress of an Innovative Clean Coal Technology (ICCT) demonstration being conducted at Georgia Power Company's Plant Hammond Unit 4 located near Rome, Georgia. The primary goal of this project is the characterization of the low NOx combustion equipment through the collection and analysis of long-term emissions data. A target of achieving fifty percent NOx reduction using combustion modifications has been established for the project. The project provides a stepwise retrofit of an advanced overfire air (AOFA) system followed by low NOx burners (LNB). During each test phase of the project, diagnostic, performance, long-term, and verification testing will be performed. These tests are used to quantify the NOx reductions of each technology and evaluate the effects of those reductions on other combustion parameters such as particulate characteristics and boiler efficiency. During this quarter, long-term testing of the LNB + AOFA configuration continued and no parametric testing was performed. Further full-load optimization of the LNB + AOFA system began on March 30, 1993. Following completion of this optimization, comprehensive testing in this configuration will be performed including diagnostic, performance, verification, long-term, and chemical emissions testing. These tests are scheduled to start in May 1993 and continue through August 1993. Preliminary engineering and procurement are progressing on the Advanced Low NOx Digital Controls scope addition to the wall-fired project. The primary activities during this quarter include (1) refinement of the input/output lists, (2) procurement of the distributed digital control system, (3) configuration training, and (4) revision of the schedule to accommodate the project approval cycle and a change in unit outage dates.

  11. Increased circulating VCAM-1 correlates with advanced disease and poor survival in patients with multiple myeloma: reduction by post-bortezomib and lenalidomide treatment.

    Science.gov (United States)

    Terpos, E; Migkou, M; Christoulas, D; Gavriatopoulou, M; Eleutherakis-Papaiakovou, E; Kanellias, N; Iakovaki, M; Panagiotidis, I; Ziogas, D C; Fotiou, D; Kastritis, E; Dimopoulos, M A

    2016-01-01

    Circulating vascular cell adhesion molecule-1 (VCAM-1), intercellular adhesion molecule-1 (ICAM-1) and selectins were prospectively measured in 145 newly-diagnosed patients with symptomatic myeloma (NDMM), 61 patients with asymptomatic/smoldering myeloma (SMM), 47 with monoclonal gammopathy of undetermined significance (MGUS) and 87 multiple myeloma (MM) patients at first relapse who received lenalidomide- or bortezomib-based treatment (RD, n=47; or VD, n=40). Patients with NDMM had increased VCAM-1 and ICAM-1 compared with MGUS and SMM patients. Elevated VCAM-1 correlated with ISS-3 and was independently associated with inferior overall survival (OS) (45 months for patients with VCAM-1 > median vs 75 months, P=0.001). MM patients at first relapse had increased levels of ICAM-1 and L-selectin, even compared with NDMM patients, and had increased levels of VCAM-1 compared with MGUS and SMM. Both VD and RD dramatically reduced serum VCAM-1 after four cycles of therapy, but only VD reduced serum ICAM-1, irrespective of response to therapy. The reduction of VCAM-1 was more pronounced after RD than after VD. Our study provides evidence for the prognostic value of VCAM-1 in myeloma patients, suggesting that VCAM-1 could be a suitable target for the development of anti-myeloma therapies. Furthermore, the reduction of VCAM-1 and ICAM-1 by RD and VD supports the inhibitory effect of these drugs on the adhesion of MM cells to stromal cells. PMID:27232930

  12. Contrast agent and radiation dose reduction in abdominal CT by a combination of low tube voltage and advanced image reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Buls, Nico; Gompel, Gert van; Nieboer, Koenraad; Willekens, Inneke; Mey, Johan de [Universitair Ziekenhuis Brussel (UZ Brussel), Department of Radiology, Brussels (Belgium); Vrije Universiteit Brussel (VUB), Research group LABO, Brussel (Belgium); Cauteren, Toon van [Vrije Universiteit Brussel (VUB), Research group LABO, Brussel (Belgium); Verfaillie, Guy [Universitair Ziekenhuis Brussel (UZ Brussel), Department of Radiology, Brussels (Belgium); Evans, Paul; Macholl, Sven; Newton, Ben [GE Healthcare, Department of Medical Diagnostics, Amersham, Buckinghamshire (United Kingdom)

    2015-04-01

    To assess image quality in abdominal CT at low tube voltage combined with two types of iterative reconstruction (IR) at four reduced contrast agent dose levels. Minipigs were scanned with a standard 320 mg I/mL contrast concentration at 120 kVp, and with reduced formulations of 120, 170, 220 and 270 mg I/mL at 80 kVp with IR. Image quality was assessed by CT value and by dose-normalized contrast-to-noise and signal-to-noise ratios (CNRD and SNRD) in the arterial and venous phases. Qualitative analysis was included by expert reading. Protocols with 170 mg I/mL or higher showed equal or superior CT values: aorta (278-468 HU versus 314 HU); portal vein (205-273 HU versus 208 HU); liver parenchyma (122-146 HU versus 115 HU). In the aorta, all protocols with 170 mg I/mL or higher yielded equal or superior CNRD (15.0-28.0 versus 13.7). In liver parenchyma, all study protocols resulted in higher SNRDs. Radiation dose could be reduced from a standard CTDIvol of 7.8 mGy (6.2 mSv) to 7.6 mGy (5.2 mSv) with 170 mg I/mL. Combining 80 kVp with IR allows at least a 47 % contrast agent dose reduction and a 16 % radiation dose reduction for images of comparable quality. (orig.)

  13. Variance-based fingerprint distance adjustment algorithm for indoor localization

    Institute of Scientific and Technical Information of China (English)

    Xiaolong Xu; Yu Tang; Xinheng Wang; Yun Zhang

    2015-01-01

    The multipath effect and movements of people in indoor environments lead to inaccurate localization. Through tests, calculation and analysis of the received signal strength indication (RSSI) and the variance of RSSI, we propose a novel variance-based fingerprint distance adjustment algorithm (VFDA). Based on the rule that variance decreases with the increase of the RSSI mean, VFDA calculates the RSSI variance from the mean value of the received RSSIs, from which the correction weight is obtained. VFDA then adjusts the fingerprint distances with this correction weight based on the variance of RSSI. Besides, a threshold value is applied to VFDA to improve its performance further. VFDA and VFDA with the threshold value are applied in two kinds of real, typical indoor environments deployed with several Wi-Fi access points. One is a quadrate lab room, and the other is a long and narrow corridor of a building. Experimental results and performance analysis show that in indoor environments, both VFDA and VFDA with the threshold have better positioning accuracy and environmental adaptability than the current typical positioning methods based on the k-nearest neighbor algorithm and the weighted k-nearest neighbor algorithm, with similar computational costs.
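
    A sketch of the general idea behind a variance-weighted fingerprint distance: readings whose modeled RSSI variance is small (strong signals) count more. The variance model, weight form and all numbers are assumptions for illustration, not the published VFDA.

    ```python
    import numpy as np

    def rssi_variance(mean_rssi):
        """Assumed monotone model: variance grows as the mean RSSI weakens."""
        return 1.0 + 0.1 * np.maximum(0.0, -60.0 - mean_rssi)   # dBm heuristic

    def weighted_distance(observed, fingerprint):
        w = 1.0 / rssi_variance(fingerprint)      # correction weights
        return np.sqrt(np.sum(w * (observed - fingerprint) ** 2))

    fingerprints = {                               # mean RSSI per AP, in dBm
        "lab_A": np.array([-45.0, -70.0, -62.0]),
        "corridor_B": np.array([-58.0, -52.0, -75.0]),
    }
    obs = np.array([-47.0, -68.0, -64.0])
    best = min(fingerprints, key=lambda k: weighted_distance(obs, fingerprints[k]))
    print("nearest fingerprint:", best)
    ```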

  14. Detecting Pulsars with Interstellar Scintillation in Variance Images

    Science.gov (United States)

    Dai, S.; Johnston, S.; Bell, M. E.; Coles, W. A.; Hobbs, G.; Ekers, R. D.; Lenc, E.

    2016-08-01

    Pulsars are the only cosmic radio sources known to be sufficiently compact to show diffractive interstellar scintillations. Images of the variance of radio signals in both time and frequency can be used to detect pulsars in large-scale continuum surveys using the next generation of synthesis radio telescopes. This technique allows a search over the full field of view while avoiding the need for expensive pixel-by-pixel high time resolution searches. We investigate the sensitivity of detecting pulsars in variance images. We show that variance images are most sensitive to pulsars whose scintillation time-scales and bandwidths are close to the subintegration time and channel bandwidth. Therefore, in order to maximise the detection of pulsars for a given radio continuum survey, it is essential to retain a high time and frequency resolution, allowing us to make variance images sensitive to pulsars with different scintillation properties. We demonstrate the technique with Murchison Widefield Array data and show that variance images can indeed lead to the detection of pulsars by distinguishing them from other radio sources.

  15. Variance and covariance calculations for nuclear materials accounting using 'PROFF'

    International Nuclear Information System (INIS)

    To determine the detection sensitivity of a materials accounting system to the loss of Special Nuclear Material (SNM) requires: (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for those measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. We have developed an interactive, menu-driven computer program, called PROFF (for PROcessing and Fuel Facilities), that considerably reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system. PROFF asks questions of the user to establish the form of each term in the materials balance equation, possible correlations between them, and whether the measured quantities are characterized by an additive or multiplicative error model. Then for each term of the materials balance equation, it presents the user with a menu that is to be completed with values of the SNM concentration, mass (or volume), measurement error standard deviations, and the number of measurements made during the accounting period. On completion of all the data menus, PROFF presents the variance of the materials balance and the square root of this variance, so that the sensitivity of the accounting system can be determined. PROFF is programmed in TURBO-PASCAL for micro-computers using MS-DOS 2.1 (IBM and compatibles)

  16. Efficient nonlinear predictive error variance for highly parameterized models

    Science.gov (United States)

    Tonkin, Matthew; Doherty, John; Moore, Catherine

    2007-07-01

    Predictive error variance analysis attempts to determine how wrong predictions made by a calibrated model may be. Predictive error variance analysis is usually undertaken following calibration using a small number of parameters defined through a priori parsimony. In contrast, we introduce a method for investigating the potential error in predictions made by highly parameterized models calibrated using regularized inversion. Vecchia and Cooley (1987) describe a method of predictive error variance analysis that is constrained by calibration data. We extend this approach to include constraints on parameters that lie within the calibration null space. These constraints are determined by dividing parameter space into combinations of parameters for which estimates can be obtained and those for which they cannot. This enables the contribution to predictive error variance from parameterization simplifications required to solve the inverse problem to be quantified, in addition to the contribution from measurement noise. We also describe a novel technique that restricts the analysis to a strategically defined predictive solution subspace, enabling an approximate predictive error variance analysis to be completed efficiently. The method is illustrated using a synthetic and a real-world groundwater flow and transport model.

  17. Models of Postural Control: Shared Variance in Joint and COM Motions.

    Science.gov (United States)

    Kilby, Melissa C; Molenaar, Peter C M; Newell, Karl M

    2015-01-01

    This paper investigated the organization of the postural control system in human upright stance. To this aim the shared variance between joint and 3D total body center of mass (COM) motions was analyzed using multivariate canonical correlation analysis (CCA). The CCA was performed as a function of established models of postural control that varied in their joint degrees of freedom (DOF), namely, an inverted pendulum ankle model (2DOF), ankle-hip model (4DOF), ankle-knee-hip model (5DOF), and ankle-knee-hip-neck model (7DOF). Healthy young adults performed various postural tasks (two-leg and one-leg quiet stances, voluntary AP and ML sway) on a foam and rigid surface of support. Based on CCA model selection procedures, the amount of shared variance between joint and 3D COM motions and the cross-loading patterns we provide direct evidence of the contribution of multi-DOF postural control mechanisms to human balance. The direct model fitting of CCA showed that incrementing the DOFs in the model through to 7DOF was associated with progressively enhanced shared variance with COM motion. In the 7DOF model, the first canonical function revealed more active involvement of all joints during more challenging one leg stances and dynamic posture tasks. Furthermore, the shared variance was enhanced during the dynamic posture conditions, consistent with a reduction of dimension. This set of outcomes shows directly the degeneracy of multivariate joint regulation in postural control that is influenced by stance and surface of support conditions. PMID:25973896
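
    A compact sketch of the shared-variance computation via CCA, using scikit-learn on synthetic data (a 7-DOF joint set driving COM through an arbitrary linear map plus noise); this mirrors the analysis pattern only, not the paper's data or its model-selection procedure.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n = 2000
    joints = rng.standard_normal((n, 7))          # 7-DOF joint angle series
    mix = rng.standard_normal((7, 3))             # arbitrary joint-to-COM map
    com = joints @ mix + 0.5 * rng.standard_normal((n, 3))  # 3D COM + noise

    cca = CCA(n_components=3).fit(joints, com)
    U, V = cca.transform(joints, com)
    corrs = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(3)]
    print("canonical correlations:", np.round(corrs, 3))
    ```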

  18. Correlation Between Tumor Markers Variance and Chemotherapeutic Effect of Pemetrexed on Patients with Advanced Non-small Cell Lung Cancer

    Institute of Scientific and Technical Information of China (English)

    王增; 翁琳; 游隽; 程斌

    2011-01-01

    Objective: To investigate the changes of serum levels of tumor markers during chemotherapy with single-agent pemetrexed or pemetrexed combined with platinum in patients with advanced non-small cell lung cancer (NSCLC). Methods: 102 advanced NSCLC patients who received more than 2 cycles of chemotherapy with pemetrexed as a single agent or in combination with platinum were retrospectively analyzed. The changes of CEA, CA125, CYFRA21-1, NSE and SCC, together with changes on chest CT scans, were recorded before and after chemotherapy. Results: After chemotherapy, tumor markers including CEA, CA125, CYFRA21-1, NSE and SCC decreased, among which CEA, CA125 and CYFRA21-1 diminished by 19.3% (P<0.05), 24.8% (P<0.05), and 18.5% (P<0.05), respectively. The correlation between tumor marker response (TMR) and imaging-based response (IBR) was positive for CEA and CA125; the correlation between TMR and IBR was also positive for CYFRA21-1; and that between TMR and IBR for the joint inspection of CEA, CA125 and CA19-9 was positive as well. Conclusion: Serum CEA, CA125 and CYFRA21-1 represent reliable markers of chemotherapy efficacy in patients with advanced NSCLC. Monitoring changes of serum levels of tumor markers would benefit the assessment of chemotherapy efficacy, and is simple, economic and useful in clinical settings.

  19. CMB-S4 and the Hemispherical Variance Anomaly

    CERN Document Server

    O'Dwyer, Marcio; Knox, Lloyd; Starkman, Glenn D

    2016-01-01

    Cosmic Microwave Background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the northern and southern Ecliptic hemispheres. In this context, the northern hemisphere displays an anomalously low variance while the southern hemisphere appears unremarkable (consistent with expectations from the best-fitting theory, $\Lambda$CDM). While this is a well established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground ba...

  20. Sensitivity to Estimation Errors in Mean-variance Models

    Institute of Scientific and Technical Information of China (English)

    Zhi-ping Chen; Cai-e Zhao

    2003-01-01

    In order to give a complete and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping of these parameters under suitable conditions. The change rate of the efficient portfolio's weights with respect to variations in risk-return estimations is derived by estimating the Lipschitz constant. Our general quantitative results show that the efficient portfolio's weights are normally not so sensitive to estimation errors in means and variances. Moreover, we point out those extreme cases which might cause stability problems and how to avoid them in practice. Preliminary numerical results are also provided as an illustration of our theoretical results.
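
    A finite-difference illustration of the sensitivity question on a toy three-asset problem: perturb one expected return and watch the efficient weights move. The closed-form weights below are the textbook fully-invested mean-variance solution with shorting allowed; the numbers are invented and the paper's Lipschitz-constant bounds are not reproduced.

    ```python
    import numpy as np

    def efficient_weights(mu, Sigma, target):
        """Textbook mean-variance weights: minimize w' Sigma w subject to
        w'1 = 1 and w'mu = target (shorting allowed)."""
        inv = np.linalg.inv(Sigma)
        ones = np.ones(len(mu))
        A, B, C = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
        lam = (C - B * target) / (A * C - B * B)
        gam = (A * target - B) / (A * C - B * B)
        return inv @ (lam * ones + gam * mu)

    mu = np.array([0.06, 0.08, 0.10])
    Sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])
    w0 = efficient_weights(mu, Sigma, target=0.08)
    w1 = efficient_weights(mu + np.array([0.0, 0.005, 0.0]), Sigma, target=0.08)
    print("weight shift per 50 bp error in one mean:", np.round(w1 - w0, 4))
    ```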

  1. Impact of Damping Uncertainty on SEA Model Response Variance

    Science.gov (United States)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  2. Pricing perpetual American options under multiscale stochastic elasticity of variance

    International Nuclear Information System (INIS)

    Highlights: • We study the effects of the stochastic elasticity of variance on perpetual American options. • Our SEV model consists of a fast mean-reverting factor and a slow mean-reverting factor. • A slow scale factor has a very significant impact on the option price. • We analyze option price structures through the market prices of elasticity risk. - Abstract: This paper studies pricing the perpetual American options under a constant elasticity of variance type of underlying asset price model where the constant elasticity is replaced by a fast mean-reverting Ornstein-Uhlenbeck process and a slowly varying diffusion process. By using a multiscale asymptotic analysis, we find the impact of the stochastic elasticity of variance on the option prices and the optimal exercise prices with respect to model parameters. Our results enhance the existing option price structures in view of flexibility and applicability through the market prices of elasticity risk.

  3. Extragalactic number counts at 100 um, free from cosmic variance

    CERN Document Server

    Sibthorpe, B; Massey, R J; Roseboom, I G; van der Werf, P; Matthews, B C; Greaves, J S

    2012-01-01

    We use data from the Disc Emission via a Bias-free Reconnaissance in the Infrared/Submillimetre (DEBRIS) survey, taken at 100 um with the Photoconductor Array Camera and Spectrometer instrument on board the Herschel Space Observatory, to make a cosmic variance independent measurement of the extragalactic number counts. These data consist of 323 small-area mapping observations performed uniformly across the sky, and thus represent a sparse sampling of the astronomical sky with an effective coverage of ~2.5 deg^2. We find our cosmic variance independent analysis to be consistent with previous count measurements made using relatively small area surveys. Furthermore, we find no statistically significant cosmic variance on any scale within the errors of our data. Finally, we interpret these results to estimate the probability of galaxy source confusion in the study of debris discs.

  4. Variance and covariance calculations for nuclear materials accounting using 'PROFF'

    International Nuclear Information System (INIS)

    To determine the detection sensitivity of a materials accounting system to the loss of Special Nuclear Material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for those measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. The authors have developed an interactive, menu-driven computer program, called PROFF (for PROcessing and Fuel Facilities), that considerably reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system. PROFF is discussed in this paper

  5. Expectation Values and Variance Based on Lp-Norms

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2012-11-01

    This analysis introduces a generalization of the basic statistical concepts of expectation values and variance for non-Euclidean metrics induced by Lp-norms. The non-Euclidean Lp means are defined by exploiting the fundamental property of minimizing the Lp deviations that compose the Lp variance. These Lp expectation values embody a generic formal scheme of means characterization. Having the p-norm as a free parameter, both the Lp-normed expectation values and their variance are flexible to analyze new phenomena that cannot be described under the notions of classical statistics based on Euclidean norms. The new statistical approach provides insights into regression theory and Statistical Physics. Several illuminating examples are examined.
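
    A numerical sketch of the defining property (the Lp mean as the location minimizing the summed p-th-power deviations); p = 2 recovers the arithmetic mean and p = 1 the median. The data are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def lp_mean(x, p):
        """Location m minimizing sum |x_i - m|^p, i.e. the Lp expectation value."""
        res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                              bounds=(x.min(), x.max()), method="bounded")
        return res.x

    x = np.array([1.0, 2.0, 2.5, 3.0, 10.0])   # note the outlier at 10
    for p in (1.0, 1.5, 2.0, 4.0):
        m = lp_mean(x, p)
        lp_var = np.mean(np.abs(x - m) ** p)    # the associated Lp variance
        print(f"p={p}: Lp mean = {m:.3f}, Lp variance = {lp_var:.3f}")
    ```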

  6. Assessment of heterogeneity of residual variances using changepoint techniques

    Directory of Open Access Journals (Sweden)

    Toro Miguel A

    2000-07-01

    Several studies using test-day models show clear heterogeneity of residual variance along lactation. A changepoint technique to account for this heterogeneity is proposed. The data set included 100 744 test-day records of 10 869 Holstein-Friesian cows from northern Spain. A three-stage hierarchical model using the Wood lactation function was employed. Two unknown changepoints at times T1 and T2 (0 < T1 < T2 < tmax), with continuity of residual variance at these points, were assumed. Also, a nonlinear relationship between residual variance and the number of days of milking t was postulated: the residual variance at time t in lactation phase i was modeled with a phase-specific parameter λi (i = 1, 2, 3). A Bayesian analysis using Gibbs sampling and the Metropolis-Hastings algorithm for marginalization was implemented. After a burn-in of 20 000 iterations, 40 000 samples were drawn to estimate posterior features. The posterior modes of T1 and T2 were 53.2 and 248.2 days; the posterior modes of the remaining parameters were 0.575, -0.406, 0.797 and 0.702, 34.63 and 0.0455 kg2, respectively. The residual variances predicted using these point estimates were 2.64, 6.88, 3.59 and 4.35 kg2 at days of milking 10, 53, 248 and 305, respectively. This technique requires less restrictive assumptions and the model has fewer parameters than other methods proposed to account for the heterogeneity of residual variance during lactation.

  7. Simultaneous optimal estimates of fixed effects and variance components in the mixed model

    Institute of Scientific and Technical Information of China (English)

    WU; Mixia; WANG; Songgui

    2004-01-01

    For a general linear mixed model with two variance components, a set of simple conditions is obtained, under which, (i) the least squares estimate of the fixed effects and the analysis of variance (ANOVA) estimates of variance components are proved to be uniformly minimum variance unbiased estimates simultaneously; (ii) the exact confidence intervals of the fixed effects and uniformly optimal unbiased tests on variance components are given; (iii) the exact probability expression of ANOVA estimates of variance components taking negative value is obtained.

  8. Variance in trace constituents following the final stratospheric warming

    Science.gov (United States)

    Hess, Peter

    1990-01-01

    Concentration variations with time in trace stratospheric constituents N2O, CF2Cl2, CFCl3, and CH4 were investigated using samples collected aboard balloons flown over southern France during the summer months of 1977-1979. Data are analyzed using a tracer transport model, and the mechanisms behind the modeled tracer variance are examined. An analysis of the N2O profiles for the month of June showed that a large fraction of the variance reported by Ehhalt et al. (1983) is on an interannual time scale.

  9. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  10. On Variance and Covariance for Bounded Linear Operators

    Institute of Scientific and Technical Information of China (English)

    Chia Shiang LIN

    2001-01-01

    In this paper we initiate a study of covariance and variance for two operators on a Hilbert space, proving that the c-v (covariance-variance) inequality holds, which is equivalent to the Cauchy-Schwarz inequality. As applications of the c-v inequality we prove uniformly the Bernstein-type inequalities and equalities, and show the generalized Heinz-Kato-Furuta-type inequalities and equalities, from which a generalization and sharpening of Reid's inequality is obtained. We show that every operator can be expressed as a p-hyponormal-type and a hyponormal-type operator. Finally, some new characterizations of the Furuta inequality are given.

  11. Recursive identification for multidimensional ARMA processes with increasing variances

    Institute of Scientific and Technical Information of China (English)

    CHEN Hanfu

    2005-01-01

    In time series analysis, almost all existing results are derived for the case where the driving noise {wn} in the MA part has bounded variance (or conditional variance). In contrast, this paper discusses how to identify coefficients in a multidimensional ARMA process with fixed orders whose MA part has a conditional moment E(‖wn‖^β | Fn-1), β > 2, that may grow at the rate of a power of log n. The well-known stochastic gradient (SG) algorithm is applied to estimating the matrix coefficients of the ARMA process, and reasonable conditions are given to guarantee that the estimates are strongly consistent.

  12. Variance squeezing and entanglement of the XX central spin model

    International Nuclear Information System (INIS)

    In this paper, we study the quantum properties for a system that consists of a central atom interacting with surrounding spins through the Heisenberg XX couplings of equal strength. Employing the Heisenberg equations of motion we manage to derive an exact solution for the dynamical operators. We consider that the central atom and its surroundings are initially prepared in the excited state and in the coherent spin state, respectively. For this system, we investigate the evolution of variance squeezing and entanglement. The nonclassical effects have been remarked in the behavior of all components of the system. The atomic variance can exhibit revival-collapse phenomenon based on the value of the detuning parameter.

  13. The density variance -- Mach number relation in supersonic, isothermal turbulence

    OpenAIRE

    Price, Daniel J.; Federrath, Christoph; Brunt, Christopher M.

    2010-01-01

    We examine the relation between the density variance and the mean-square Mach number in supersonic, isothermal turbulence, assumed in several recent analytic models of the star formation process. From a series of calculations of supersonic, hydrodynamic turbulence driven using purely solenoidal Fourier modes, we find that the `standard' relationship between the variance in the log of density and the Mach number squared, i.e., sigma^2_(ln rho/rhobar)=ln (1+b^2 M^2), with b = 1/3 is a good fit ...
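
    The quoted relation is straightforward to evaluate; the following minimal sketch (an illustration, not the authors' code) tabulates the log-density variance at a few Mach numbers with b = 1/3:

```python
import numpy as np

# sigma^2_(ln rho/rhobar) = ln(1 + b^2 M^2), with b = 1/3 for solenoidal driving
b = 1.0 / 3.0
for M in (1.0, 5.0, 10.0, 20.0):
    sigma2 = np.log(1.0 + b**2 * M**2)
    print(f"M = {M:5.1f}   sigma^2 = {sigma2:.3f}")
```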

  14. The Column Density Variance-Sonic Mach Number Relationship

    OpenAIRE

    Burkhart, Blakesley; Lazarian, A.

    2012-01-01

    Although there are a wealth of column density tracers for both the molecular and diffuse interstellar medium, there are very few observational studies investigating the relationship between the density variance ($\sigma^2$) and the sonic Mach number (${\cal M}_s$). This is in part due to the fact that the $\sigma^2$-${\cal M}_s$ relationship is derived, via MHD simulations, for the 3D density variance only, which is not a direct observable. We investigate the utility of a 2D column density $\...

  15. Further Development of the Variance-Covariance Method

    OpenAIRE

    J. Chen; Breckow, Joachim; Roos, H; Kellerer, Albrecht M.

    1990-01-01

    Applications of the variance-covariance technique are presented that illustrate the potential of the method. The dose mean lineal energy, yD, can be determined in time-varying radiation fields where the fluctuations of the dose rate are substantially in excess of the stochastic fluctuations of the energy imparted. An added advantage is that yD is little influenced by noise that affects both detectors simultaneously. The variance-covariance method is thus stable with respect to dose rate fluctuations...

  16. Precise Asymptotics of Error Variance Estimator in Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    Shao-jun Guo; Min Chen; Feng Liu

    2008-01-01

    In this paper, we focus our attention on the precise asymptotics of the error variance estimator in partially linear regression models, yi = xiT β + g(ti) + εi, 1 ≤ i ≤ n, where {εi, i = 1, ..., n} are i.i.d. random errors with mean 0 and positive finite variance σ2. Following the ideas of Allan Gut and Aurel Spataru [7, 8] and Zhang [21] on precise asymptotics in the Baum-Katz and Davis laws of large numbers and on the precise rate in laws of the iterated logarithm, respectively, and subject to some regularity conditions, we obtain the corresponding results in partially linear regression models.

  17. Recent advances in the management of chronic stable angina II. Anti-ischemic therapy, options for refractory angina, risk factor reduction, and revascularization

    Directory of Open Access Journals (Sweden)

    Richard Kones

    2010-08-01

    Full Text Available Richard Kones, The Cardiometabolic Research Institute, Houston, Texas, USA. Abstract: The objectives in treating angina are relief of pain and prevention of disease progression through risk reduction. Mechanisms, indications, clinical forms, doses, and side effects of the traditional antianginal agents - nitrates, β-blockers, and calcium channel blockers - are reviewed. A number of patients have contraindications or remain unrelieved from anginal discomfort with these drugs. Among newer alternatives, ranolazine, recently approved in the United States, indirectly prevents the intracellular calcium overload involved in cardiac ischemia and is a welcome addition to available treatments. None, however, are disease-modifying agents. Two options for refractory angina, enhanced external counterpulsation and spinal cord stimulation (SCS), are presented in detail. They are both well-studied and are effective means of treating at least some patients with this perplexing form of angina. Traditional modifiable risk factors for coronary artery disease (CAD) - smoking, hypertension, dyslipidemia, diabetes, and obesity - account for most of the population-attributable risk. Individual therapy of high-risk patients differs from population-wide efforts to prevent risk factors from appearing or to reduce their severity, in order to lower the national burden of disease. Current American College of Cardiology/American Heart Association guidelines to lower risk in patients with chronic angina are reviewed. The Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation (COURAGE) trial showed that in patients with stable angina, optimal medical therapy alone and percutaneous coronary intervention (PCI) with medical therapy were equal in preventing myocardial infarction and death. The integration of COURAGE results into current practice is discussed. For patients who are unstable, with very high risk, with left main coronary artery lesions, in

  18. Ultrasonic Waves and Strength Reduction Indexes for the Assessment of the Advancement of Deterioration Processes in Travertines from Pamukkale and Hierapolis (Turkey)

    Science.gov (United States)

    Bobrowska, Alicja; Domonik, Andrzej

    2015-09-01

    In construction, the usefulness of modern technical diagnostics of stone as a raw material requires predicting the effects of long-term environmental impact on its qualities and geomechanical properties. The paper presents geomechanical research enabling identification of the factors behind strength loss of the stone and forecasting of the long-term rate of development of destructive phenomena in the stone structure. As research material, Turkish travertines were selected from the Denizli-Kaklık Basin (Pamukkale and Hierapolis quarries), which have been commonly used for centuries in global architecture. The rock material was subjected to testing of the impact of various environmental factors, following the European standards adopted in the author's research program. Resistance to the crystallization of salts from aqueous solutions and to the effects of SO2, as well as the effects of frost and high temperatures, are presented. The studies allowed two quantitative indicators to be established: the ultrasonic wave index (IVp) and the strength reduction index (IRc). Assessment of the deterioration effects indicates that the most active factors decreasing travertine resistance in the aging process are frost and sulphur dioxide (SO2). Their negative influence is particularly intense when the stone material is already strongly weathered.

  19. Facile synthesis of N-rich carbon quantum dots by spontaneous polymerization and incision of solvents as efficient bioimaging probes and advanced electrocatalysts for oxygen reduction reaction.

    Science.gov (United States)

    Lei, Zhouyue; Xu, Shengjie; Wan, Jiaxun; Wu, Peiyi

    2016-01-28

    In this study, uniform nitrogen-doped carbon quantum dots (N-CDs) were synthesized through a one-step solvothermal process of cyclic and nitrogen-rich solvents, such as N-methyl-2-pyrrolidone (NMP) and dimethyl-imidazolidinone (DMEU), under mild conditions. The products exhibited strong light blue fluorescence, good cell permeability and low cytotoxicity. Moreover, after a facile post-thermal treatment, it developed a lotus seedpod surface-like structure of seed-like N-CDs decorating on the surface of carbon layers with a high proportion of quaternary nitrogen moieties that exhibited excellent electrocatalytic activity and long-term durability towards the oxygen reduction reaction (ORR). The peak potential was -160 mV, which was comparable to or even lower than commercial Pt/C catalysts. Therefore, this study provides an alternative facile approach to the synthesis of versatile carbon quantum dots (CDs) with widespread commercial application prospects, not only as bioimaging probes but also as promising electrocatalysts for the metal-free ORR. PMID:26739885

  20. Decomposition of variance in terms of conditional means

    Directory of Open Access Journals (Sweden)

    Alessandro Figà Talamanca

    2013-05-01

    Full Text Available Two different sets of data are used to test an apparently new approach to the analysis of the variance of a numerical variable which depends on qualitative variables. We suggest that this approach be used to complement other existing techniques to study the interdependence of the variables involved. According to our method, the variance is expressed as a sum of orthogonal components, obtained as differences of conditional means, with respect to the qualitative characters. The resulting expression for the variance depends on the ordering in which the characters are considered. We suggest an algorithm which leads to an ordering which is deemed natural. The first set of data concerns the score achieved by a population of students on an entrance examination based on a multiple-choice test with 30 questions. In this case the qualitative characters are dyadic and correspond to a correct or incorrect answer to each question. The second set of data concerns the delay in obtaining the degree for a population of graduates of Italian universities. The variance in this case is analyzed with respect to a set of seven specific qualitative characters of the population studied (gender, previous education, working condition, parents' educational level, field of study, etc.).
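
    A minimal sketch of this kind of decomposition, assuming hypothetical data in which two dyadic characters q1 and q2 stand in for exam questions, is:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "q1": rng.integers(0, 2, 1000),   # e.g. correct/incorrect on question 1
    "q2": rng.integers(0, 2, 1000),   # e.g. correct/incorrect on question 2
})
df["score"] = 3 * df["q1"] + df["q2"] + rng.normal(0, 1, 1000)

total = df["score"].var(ddof=0)
m1 = df.groupby("q1")["score"].transform("mean")           # E[score | q1]
m12 = df.groupby(["q1", "q2"])["score"].transform("mean")  # E[score | q1, q2]

c1 = m1.var(ddof=0)                # component explained by q1
c2 = ((m12 - m1) ** 2).mean()      # extra component from q2, given q1
resid = ((df["score"] - m12) ** 2).mean()
print(total, c1 + c2 + resid)      # the orthogonal parts sum to the total
```

    As the record notes, the sizes of the components depend on the order in which the characters are conditioned on.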

  1. Statistics review 9: One-way analysis of variance

    OpenAIRE

    Bewick, Viv; Cheek, Liz; Ball, Jonathan

    2004-01-01

    This review introduces one-way analysis of variance, which is a method of testing differences between more than two groups or treatments. Multiple comparison procedures and orthogonal contrasts are described as methods for identifying specific differences between pairs of treatments.
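
    As a concrete illustration, a one-way ANOVA for three hypothetical treatment groups takes a single call in scipy; the group data below are made up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g1 = rng.normal(10.0, 2.0, 30)   # treatment group 1
g2 = rng.normal(11.0, 2.0, 30)   # treatment group 2
g3 = rng.normal(12.5, 2.0, 30)   # treatment group 3

f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

    A small p-value indicates only that at least one group mean differs; the multiple comparison procedures mentioned in the review (e.g. Tukey's HSD) are then needed to identify which pairs differ.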

  2. Gravity interpretation of dipping faults using the variance analysis method

    International Nuclear Information System (INIS)

    A new algorithm is developed to estimate simultaneously the depth and the dip angle of a buried fault from the normalized gravity gradient data. This algorithm utilizes numerical first horizontal derivatives computed from the observed gravity anomaly, using filters of successive window lengths to estimate the depth and the dip angle of a buried dipping fault structure. For a fixed window length, the depth is estimated using a least-squares sense for each dip angle. The method is based on computing the variance of the depths determined from all horizontal gradient anomaly profiles using the least-squares method for each dip angle. The minimum variance is used as a criterion for determining the correct dip angle and depth of the buried structure. When the correct dip angle is used, the variance of the depths is always less than the variances computed using wrong dip angles. The technique can be applied not only to the true residuals, but also to the measured Bouguer gravity data. The method is applied to synthetic data with and without random errors and two field examples from Egypt and Scotland. In all cases examined, the estimated depths and other model parameters are found to be in good agreement with the actual values. (paper)
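
    The selection step can be sketched as follows. The toy estimate_depth below is a hypothetical stand-in that merely mimics the reported behaviour (depth estimates scatter more as the trial dip moves away from the true dip); it is not the paper's least-squares inversion of the gradient anomaly:

```python
import numpy as np

rng = np.random.default_rng(7)
profiles = rng.normal(0.0, 1.0, 12)   # stand-ins for 12 window lengths
TRUE_DIP, TRUE_DEPTH = 40.0, 2.0      # degrees and km, hypothetical

def estimate_depth(profile_noise, dip_deg):
    # Wrong trial dips yield scattered depth estimates; the true dip does not.
    spread = 0.02 * abs(dip_deg - TRUE_DIP)
    return TRUE_DEPTH + profile_noise * spread

variances = {dip: np.var([estimate_depth(p, dip) for p in profiles], ddof=1)
             for dip in np.arange(10.0, 81.0, 5.0)}
best_dip = min(variances, key=variances.get)
print(best_dip, variances[best_dip])  # the minimum-variance dip is 40 degrees
```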

  3. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known as backward recursion or backward stochastic differential equation, we...

  4. Hazards in Choosing Between Pooled and Separate- Variances t Tests

    Directory of Open Access Journals (Sweden)

    Bruno D. Zumbo

    2009-01-01

    Full Text Available If the variances of two treatment groups are heterogeneous and, at the same time, sample sizes are unequal, the Type I error probabilities of the pooled-variances Student t test are modified extensively. It is known that the separate-variances tests introduced by Welch and others overcome this problem in many cases and restore the probability to the nominal significance level. In practice, however, it is not always apparent from sample data whether or not the homogeneity assumption is valid at the population level, and this uncertainty complicates the choice of an appropriate significance test. The present study quantifies the extent to which correct and incorrect decisions occur under various conditions. Furthermore, in using statistical packages, such as SPSS, in which both pooled-variances and separate-variances t tests are available, there is a temptation to perform both versions and to reject H0 if either of the two test statistics exceeds its critical value. The present simulations reveal that this procedure leads to incorrect statistical decisions with high probability.
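
    Both versions of the test are a single call in common software, which is exactly what creates the temptation described above. A minimal sketch with hypothetical heteroscedastic samples of unequal size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 50)   # smaller variance, larger sample
b = rng.normal(0.0, 3.0, 15)   # larger variance, smaller sample

t_pooled, p_pooled = stats.ttest_ind(a, b, equal_var=True)   # Student
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)    # Welch
print(f"pooled: p = {p_pooled:.3f}   Welch: p = {p_welch:.3f}")
```

    Rejecting H0 whenever either p-value is below the nominal level is the double-testing procedure whose inflated error rate the simulations quantify.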

  5. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

    Vol. 30, No. 1 (2016), pp. 4-11. ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords: variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.347, year: 2014

  6. Variance-based uncertainty relations for incompatible observables

    Science.gov (United States)

    Chen, Bin; Cao, Ning-Ping; Fei, Shao-Ming; Long, Gui-Lu

    2016-06-01

    We formulate uncertainty relations for arbitrary finite number of incompatible observables. Based on the sum of variances of the observables, both Heisenberg-type and Schrödinger-type uncertainty relations are provided. These new lower bounds are stronger in most of the cases than the ones derived from some existing inequalities. Detailed examples are presented.
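
    As a concrete instance of a variance-based relation (a standard qubit identity used here for illustration, not one of the paper's new bounds), the variances of the three mutually incompatible Pauli observables in any pure qubit state sum to exactly 2:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)           # random pure qubit state

def variance(op, state):
    mean = np.vdot(state, op @ state).real
    mean_sq = np.vdot(state, op @ op @ state).real
    return mean_sq - mean**2

print(sum(variance(op, psi) for op in (sx, sy, sz)))   # -> 2.0
```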

  7. Least-squares variance component estimation: theory and GPS applications

    NARCIS (Netherlands)

    Amiri-Simkooei, A.

    2007-01-01

    In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known principle of least squares...

  8. Multivariate variance targeting in the BEKK-GARCH model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus S.; Rahbæk, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these...

  9. Diffusion-Based Trajectory Observers with Variance Constraints

    DEFF Research Database (Denmark)

    Alcocer, Alex; Jouffroy, Jerome; Oliveira, Paulo;

    level of smoothing and is determined by resorting to trial and error. This paper presents a methodology to choose the observer gain by taking into account a priori information on the variance of the position measurement errors. Experimental results with data from an acoustic positioning system are...

  10. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas;

    2011-01-01

    The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these...

  11. Partitioning the Variance in Scores on Classroom Environment Instruments

    Science.gov (United States)

    Dorman, Jeffrey P.

    2009-01-01

    This paper reports the partitioning of variance in scale scores from the use of three classroom environment instruments. Data sets from the administration of the What Is Happening In this Class (WIHIC) to 4,146 students, the Questionnaire on Teacher Interaction (QTI) to 2,167 students and the Catholic School Classroom Environment Questionnaire…

  12. Intuitive Analysis of Variance-- A Formative Assessment Approach

    Science.gov (United States)

    Trumpower, David

    2013-01-01

    This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)

  13. Batch variation between branchial cell cultures: An analysis of variance

    DEFF Research Database (Denmark)

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

    We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed to be...

  14. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these...

  15. [ECoG classification based on wavelet variance].

    Science.gov (United States)

    Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin

    2013-06-01

    For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of wavelet variance were set out and taken as the feature, based on a discussion of the wavelet transform. Six channels with the most distinctive features were selected from 64 channels for analysis. The EEG data were then decomposed using the db4 wavelet. The wavelet coefficient variances containing the mu rhythm and beta rhythm were taken as features based on the ERD/ERS phenomenon. The features were classified linearly with a cross-validation algorithm. The results of off-line analysis showed that high classification accuracies of 90.24% and 93.77% for the training and test data sets were achieved; the wavelet variance has the virtues of simplicity and effectiveness and is suitable for feature extraction in BCI research. PMID:23865300
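
    A sketch of wavelet-variance feature extraction along these lines, using the PyWavelets package; the sampling rate, the stand-in signal, and the choice of decomposition levels are assumptions made so that the detail bands roughly cover the mu and beta rhythms:

```python
import numpy as np
import pywt

fs = 1000                                # sampling rate in Hz (assumed)
rng = np.random.default_rng(4)
signal = rng.normal(size=4 * fs)         # stand-in for one ECoG channel

# coeffs = [cA6, cD6, cD5, ..., cD1]; at fs = 1000 Hz, cD6 spans roughly
# 8-16 Hz (mu rhythm) and cD5 roughly 16-31 Hz (low beta).
coeffs = pywt.wavedec(signal, "db4", level=6)
features = [np.var(c) for c in coeffs[1:3]]   # wavelet variances as features
print(features)
```

    In the study, such per-channel variance features feed a linear classifier evaluated by cross-validation.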

  16. Variance in parametric images: direct estimation from parametric projections

    International Nuclear Information System (INIS)

    Recent work has shown that it is possible to apply linear kinetic models to dynamic projection data in PET in order to calculate parameter projections. These can subsequently be back-projected to form parametric images - maps of parameters of physiological interest. Critical to the application of these maps, to test for significant changes between normal and pathophysiology, is an assessment of the statistical uncertainty. In this context, parametric images also include simple integral images from, e.g., [O-15]-water used to calculate statistical parametric maps (SPMs). This paper revisits the concept of parameter projections and presents a more general formulation of the parameter projection derivation as well as a method to estimate parameter variance in projection space, showing which analysis methods (models) can be used. Using simulated pharmacokinetic image data we show that a method based on an analysis in projection space inherently calculates the mathematically rigorous pixel variance. This results in an estimation which is as accurate as either estimating variance in image space during model fitting, or estimation by comparison across sets of parametric images - as might be done between individuals in a group pharmacokinetic PET study. The method based on projections has, however, a higher computational efficiency, and is also shown to be more precise, as reflected in smooth variance distribution images when compared to the other methods. (author)

  17. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the...

  18. Unbiased Estimates of Variance Components with Bootstrap Procedures

    Science.gov (United States)

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…

  19. Explaining Common Variance Shared by Early Numeracy and Literacy

    Science.gov (United States)

    Davidse, N. J.; De Jong, M. T.; Bus, A. G.

    2014-01-01

    How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…

  20. Gender variance in Asia: discursive contestations and legal implications

    NARCIS (Netherlands)

    S.E. Wieringa

    2010-01-01

    A recent court case in Indonesia in which a person diagnosed with an intersex condition was classified as a transsexual gives rise to a reflection on three discourses in which gender variance is discussed: the biomedical, the cultural, and the human rights discourse. This article discusses the implications...

  1. Infinite variance in fermion quantum Monte Carlo calculations

    Science.gov (United States)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  2. A Visual Model for the Variance and Standard Deviation

    Science.gov (United States)

    Orris, J. B.

    2011-01-01

    This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
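
    In numbers, the model reads as follows: each squared deviation is the area of one square, the variance is the average of those areas, and the standard deviation is the side length of the average square (a minimal illustration with made-up data):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
areas = (x - x.mean()) ** 2         # one square per data point
variance = areas.mean()             # area of the "average square" (ddof=0)
print(variance, np.sqrt(variance))  # 4.0 2.0 -> side length of that square
```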

  3. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  4. Third-generation dual-source 70-kVp chest CT angiography with advanced iterative reconstruction in young children: image quality and radiation dose reduction

    Energy Technology Data Exchange (ETDEWEB)

    Rompel, Oliver; Janka, Rolf; Lell, Michael M.; Uder, Michael; Hammon, Matthias [University Hospital Erlangen, Department of Radiology, Erlangen (Germany); Gloeckler, Martin; Dittrich, Sven [University Hospital Erlangen, Department of Pediatric Cardiology, Erlangen (Germany); Cesnjevar, Robert [University Hospital Erlangen, Department of Pediatric Cardiac Surgery, Erlangen (Germany)

    2016-04-15

    Many technical updates have been made in multi-detector CT. To evaluate image quality and radiation dose of high-pitch second- and third-generation dual-source chest CT angiography and to assess the effects of different levels of advanced modeled iterative reconstruction (ADMIRE) in newborns and children. Chest CT angiography (70 kVp) was performed in 42 children (age 158 ± 267 days, range 1-1,194 days). We evaluated subjective and objective image quality, and radiation dose with filtered back projection (FBP) and different strength levels of ADMIRE. For comparison, 42 matched controls were examined with a second-generation 128-slice dual-source CT scanner (80 kVp). ADMIRE demonstrated improved objective and subjective image quality (P < .01). Mean signal/noise, contrast/noise and subjective image quality were 11.9, 10.0 and 1.9, respectively, for the 80-kVp mode and 11.2, 10.0 and 1.9 for the 70-kVp mode. With ADMIRE, the corresponding values for the 70-kVp mode were 13.7, 12.1 and 1.4 at strength level 2 and 17.6, 15.6 and 1.2 at strength level 4. Mean CTDIvol, DLP and effective dose were significantly lower with the 70-kVp mode (0.31 mGy, 5.33 mGy*cm, 0.36 mSv) compared to the 80-kVp mode (0.46 mGy, 9.17 mGy*cm, 0.62 mSv; P < .01). The third-generation dual-source CT at 70 kVp provided good objective and subjective image quality at lower radiation exposure. ADMIRE improved objective and subjective image quality. (orig.)

  5. Third-generation dual-source 70-kVp chest CT angiography with advanced iterative reconstruction in young children: image quality and radiation dose reduction

    International Nuclear Information System (INIS)

    Many technical updates have been made in multi-detector CT. To evaluate image quality and radiation dose of high-pitch second- and third-generation dual-source chest CT angiography and to assess the effects of different levels of advanced modeled iterative reconstruction (ADMIRE) in newborns and children. Chest CT angiography (70 kVp) was performed in 42 children (age 158 ± 267 days, range 1-1,194 days). We evaluated subjective and objective image quality, and radiation dose with filtered back projection (FBP) and different strength levels of ADMIRE. For comparison, 42 matched controls were examined with a second-generation 128-slice dual-source CT scanner (80 kVp). ADMIRE demonstrated improved objective and subjective image quality (P < .01). Mean signal/noise, contrast/noise and subjective image quality were 11.9, 10.0 and 1.9, respectively, for the 80-kVp mode and 11.2, 10.0 and 1.9 for the 70-kVp mode. With ADMIRE, the corresponding values for the 70-kVp mode were 13.7, 12.1 and 1.4 at strength level 2 and 17.6, 15.6 and 1.2 at strength level 4. Mean CTDIvol, DLP and effective dose were significantly lower with the 70-kVp mode (0.31 mGy, 5.33 mGy*cm, 0.36 mSv) compared to the 80-kVp mode (0.46 mGy, 9.17 mGy*cm, 0.62 mSv; P < .01). The third-generation dual-source CT at 70 kVp provided good objective and subjective image quality at lower radiation exposure. ADMIRE improved objective and subjective image quality. (orig.)

  6. Facile synthesis of N-rich carbon quantum dots by spontaneous polymerization and incision of solvents as efficient bioimaging probes and advanced electrocatalysts for oxygen reduction reaction

    Science.gov (United States)

    Lei, Zhouyue; Xu, Shengjie; Wan, Jiaxun; Wu, Peiyi

    2016-01-01

    In this study, uniform nitrogen-doped carbon quantum dots (N-CDs) were synthesized through a one-step solvothermal process of cyclic and nitrogen-rich solvents, such as N-methyl-2-pyrrolidone (NMP) and dimethyl-imidazolidinone (DMEU), under mild conditions. The products exhibited strong light blue fluorescence, good cell permeability and low cytotoxicity. Moreover, after a facile post-thermal treatment, it developed a lotus seedpod surface-like structure of seed-like N-CDs decorating on the surface of carbon layers with a high proportion of quaternary nitrogen moieties that exhibited excellent electrocatalytic activity and long-term durability towards the oxygen reduction reaction (ORR). The peak potential was -160 mV, which was comparable to or even lower than commercial Pt/C catalysts. Therefore, this study provides an alternative facile approach to the synthesis of versatile carbon quantum dots (CDs) with widespread commercial application prospects, not only as bioimaging probes but also as promising electrocatalysts for the metal-free ORR.

  7. Advancing automation of power distribution facilities and the cost reduction measures. Activities o technology development for advanced automation systems; Susumu haiden setsubi no jidoka, cost teigen taisaku. Jidoka system no kodoka eno gijutsu kaihatsu no torikumi

    Energy Technology Data Exchange (ETDEWEB)

    Hayami, M.; Matsui, Y. [Hitachi, Ltd., Tokyo (Japan)

    1998-07-01

    Electric power companies in Japan are making efforts to reduce costs by improving the operation rate of existing facilities through the adoption of advanced automation systems in the distribution sector. This paper introduces Hitachi's systems. A 22 kV-line automation system using a high-speed optical transmission line has been adopted for the maintenance of widely extended distribution facilities. This system includes a 22 kV/240-415 V transformer and a 22 kV/105-210 V transformer. To supervise and control these transformers and switches, and to restore service after faults, the system consists of a computer system, a remote host station, and remote end terminals. Based on information about the distribution facilities from substations, end terminals and the host station, these facilities are monitored and controlled, and faults are recovered, using computers. A system planning support system is also introduced, which aims at improving the facility utilization factor, operational efficiency, and distribution operation efficiency. 5 figs.

  8. Heterogeneity of variances for carcass traits by percentage Brahman inheritance.

    Science.gov (United States)

    Crews, D H; Franke, D E

    1998-07-01

    Heterogeneity of carcass trait variances due to level of Brahman inheritance was investigated using records from straightbred and crossbred steers produced from 1970 to 1988 (n = 1,530). Angus, Brahman, Charolais, and Hereford sires were mated to straightbred and crossbred cows to produce straightbred, F1, back-cross, three-breed cross, and two-, three-, and four-breed rotational crossbred steers in four non-overlapping generations. At weaning (mean age = 220 d), steers were randomly assigned within breed group directly to the feedlot for 200 d, or to a backgrounding and stocker phase before feeding. Stocker steers were fed from 70 to 100 d in generations 1 and 2 and from 60 to 120 d in generations 3 and 4. Carcass traits included hot carcass weight, subcutaneous fat thickness and longissimus muscle area at the 12-13th rib interface, carcass weight-adjusted longissimus muscle area, USDA yield grade, estimated total lean yield, marbling score, and Warner-Bratzler shear force. Steers were classified as either high Brahman (50 to 100% Brahman), moderate Brahman (25 to 49% Brahman), or low Brahman (0 to 24% Brahman) inheritance. Two types of animal models were fit with regard to level of Brahman inheritance. One model assumed similar variances between pairs of Brahman inheritance groups, and the second model assumed different variances between pairs of Brahman inheritance groups. Fixed sources of variation in both models included direct and maternal additive and nonadditive breed effects, year of birth, and slaughter age. Variances were estimated using derivative-free REML procedures. Likelihood ratio tests were used to compare models. The model accounting for heterogeneous variances had a greater likelihood (P < 0.05) for several traits, including estimated total lean yield and Warner-Bratzler shear force, indicating improved fit when percentage Brahman inheritance was considered as a source of heterogeneity of variance. Genetic covariances estimated from the model accounting for heterogeneous variances resulted in genetic

  9. Flux-Variance Similarity in Complex Terrain and Its Sensitivity to Different Methods of Treating Non-stationarity

    Science.gov (United States)

    Babić, Nevio; Večenaj, Željko; De Wekker, Stephan F. J.

    2016-04-01

    Various criteria have been developed to remove non-stationarity in turbulence time series, though it remains unclear how the choice of the stationarity criterion affects similarity functions in the framework of the Monin-Obukhov similarity theory. To investigate this, we use stationary datasets that result from applying five common criteria to remove non-stationarity in turbulence time series from the Terrain-Induced Rotor EXperiment conducted in Owens Valley, California. We determine the form of the flux-variance similarity functions and the scatter around these similarity functions for all five stationary datasets. Data were collected at two valley locations and one slope location using 34-m flux towers with six levels of turbulence measurements. Our results show (i) systematic differences from previously found near-neutral values of the parameters in the flux-variance similarity functions over flat terrain, indicating a larger anisotropy of the flow over complex than over flat terrain, (ii) a reduction of this anisotropy when stationary data are used, with the amount of reduction depending on the stationarity criterion, (iii) a general reduction in scatter around the similarity functions when using stationary data but more so for stable than for unstable stratification, and for valley locations than for the slope location, and (iv) a weak variation with height of near-neutral values of parameters in the flux-variance similarity functions.

  10. Logistics Reduction and Repurposing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Advanced Exploration Systems (AES) Logistics Reduction and Repurposing (LRR) project will enable a mission-independent cradle-to-grave-to-cradle...

  11. Gravity Wave Variances and Propagation Derived from AIRS Radiances

    Science.gov (United States)

    Gong, Jie; Wu, Dong L.; Eckermann, S. D.

    2012-01-01

    As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westmost and the eastmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges, and eastward above deep convection. AIRS 90 field-of-views (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. Indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of instrument weighting functions. The novel discovery of AIRS capability of observing shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in the general circulation models (GCMs).

  12. Variance in the reproductive success of dominant male mountain gorillas.

    Science.gov (United States)

    Robbins, Andrew M; Gray, Maryke; Uwingeli, Prosper; Mburanumwe, Innocent; Kagoda, Edwin; Robbins, Martha M

    2014-10-01

    Using 30 years of demographic data from 15 groups, this study estimates how harem size, female fertility, and offspring survival may contribute to variance in the siring rates of dominant male mountain gorillas throughout the Virunga Volcano Region. As predicted for polygynous species, differences in harem size were the greatest source of variance in the siring rate, whereas differences in female fertility and offspring survival were relatively minor. Harem size was positively correlated with offspring survival, even after removing all known and suspected cases of infanticide, so the correlation does not seem to reflect differences in the ability of males to protect their offspring. Harem size was not significantly correlated with female fertility, which is consistent with the hypothesis that mountain gorillas have minimal feeding competition. Harem size, offspring survival, and siring rates were not significantly correlated with the proportion of dominant tenures that occurred in multimale groups versus one-male groups; even though infanticide is less likely when those tenures end in multimale groups than one-male groups. In contrast with the relatively small contribution of offspring survival to variance in the siring rates of this study, offspring survival is a major source of variance in the male reproductive success of western gorillas, which have greater predation risks and significantly higher rates of infanticide. If differences in offspring protection are less important among male mountain gorillas than western gorillas, then the relative importance of other factors may be greater for mountain gorillas. Thus, our study illustrates how variance in male reproductive success and its components can differ between closely related species. PMID:24818867

  13. Poverty Reduction

    OpenAIRE

    Ortiz, Isabel

    2007-01-01

    The paper reviews poverty trends and measurements, poverty reduction in historical perspective, the poverty-inequality-growth debate, national poverty reduction strategies, criticisms of the agenda and the need for redistribution, international policies for poverty reduction, and ultimately understanding poverty at a global scale. It belongs to a series of backgrounders developed at Joseph Stiglitz's Initiative for Policy Dialogue.

  14. DEVELOPMENT OF A NOVEL RADIATIVELY/CONDUCTIVELY STABILIZED BURNER FOR SIGNIFICANT REDUCTION OF NOx EMISSIONS AND FOR ADVANCING THE MODELING AND UNDERSTANDING OF PULVERIZED COAL COMBUSTION AND EMISSIONS

    Energy Technology Data Exchange (ETDEWEB)

    Noam Lior; Stuart W. Churchill

    2003-10-01

    the Gordon Conference on Modern Development in Thermodynamics. The results obtained are very encouraging for the development of the RCSC as a commercial burner for significant reduction of NOx emissions, and strongly warrant further study and development.

  15. A reduction in growth rate of Pseudomonas putida KT2442 counteracts productivity advances in medium-chain-length polyhydroxyalkanoate production from gluconate

    Directory of Open Access Journals (Sweden)

    Zinn Manfred

    2011-04-01

    Full Text Available Abstract Background The substitution of plastics based on fossil raw materials by biodegradable plastics produced from renewable resources is of crucial importance in a context of oil scarcity and overflowing plastic landfills. One of the most promising organisms for the manufacturing of medium-chain-length polyhydroxyalkanoates (mcl-PHA) is Pseudomonas putida KT2440, which can accumulate large amounts of polymer from cheap substrates such as glucose. Current research focuses on enhancing the strain's production capacity and synthesizing polymers with novel material properties. Many of the corresponding protocols for strain engineering rely on the rifampicin-resistant variant, P. putida KT2442. However, it remains unclear whether these two strains can be treated as equivalent in terms of mcl-PHA production, as the underlying antibiotic resistance mechanism involves a modification in the RNA polymerase and thus has ample potential for interfering with global transcription. Results To assess PHA production in P. putida KT2440 and KT2442, we characterized growth and PHA accumulation on three categories of substrate: PHA-related (octanoate), PHA-unrelated (gluconate) and a poor PHA substrate (citrate). The strains showed clear differences of growth rate on gluconate and citrate (reduction for KT2442 > 3-fold and > 1.5-fold, respectively) but not on octanoate. In addition, P. putida KT2442 PHA-free biomass significantly decreased after nitrogen depletion on gluconate. In an attempt to narrow down the range of possible reasons for this different behavior, the uptake of gluconate and extracellular release of the oxidized product 2-ketogluconate were measured. The results suggested that the reason has to be an inefficient transport or metabolization of 2-ketogluconate, while an alteration of gluconate uptake and conversion to 2-ketogluconate could be excluded. Conclusions The study illustrates that the recruitment of a pleiotropic mutation, whose effects might

  16. 40 CFR 142.302 - Who can issue a small system variance?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Who can issue a small system variance... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances for Small System General Provisions § 142.302 Who can issue a small system variance? A small system variance under...

  17. 29 CFR 4204.11 - Variance of the bond/escrow and sale-contract requirements.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Variance of the bond/escrow and sale-contract requirements... CORPORATION WITHDRAWAL LIABILITY FOR MULTIEMPLOYER PLANS VARIANCES FOR SALE OF ASSETS Variance of the Statutory Requirements § 4204.11 Variance of the bond/escrow and sale-contract requirements. (a)...

  18. 40 CFR 142.21 - State consideration of a variance or exemption request.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false State consideration of a variance or... State-Issued Variances and Exemptions § 142.21 State consideration of a variance or exemption request. A State with primary enforcement responsibility shall act on any variance or exemption request...

  19. Advances in PHWR design

    International Nuclear Information System (INIS)

    Recent advances by AECL in improved performance, cost reduction and safety improvement of CANDU reactors are described. Topics include: computer-aided design tools, up-front licensing, site utilization, plant life management, construction techniques, plant control, safety-critical software, advanced fuels, human-machine interface, heat sinks, radiation protection, feedback to design, emergency core cooling and probabilistic safety assessment

  20. Validation technique using mean and variance of kriging model

    International Nuclear Information System (INIS)

    Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure the fidelity of the metamodel quantitatively. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even while the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a similar trend to the root mean squared error, so that it can be used as a stop criterion for sequential sampling.
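
    A sketch of such a stopping rule on a toy problem, using a scikit-learn Gaussian process as the kriging model; the test function, kernel, threshold, and the max-variance (entropy-style) infill rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 0]   # toy response
grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

rng = np.random.default_rng(5)
X = rng.uniform(0.0, 2.0, (3, 1))                   # small initial design
for it in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                  optimizer=None).fit(X, f(X))
    mu, sd = gp.predict(grid, return_std=True)
    if np.mean(sd**2) < 1e-4:        # average predictive variance criterion
        break
    X = np.vstack([X, grid[np.argmax(sd)]])   # sample where variance peaks
print(X.shape[0], np.mean(sd**2))
```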

  1. Fidelity between Gaussian mixed states with quantum state quadrature variances

    Science.gov (United States)

    Hai-Long, Zhang; Chun, Zhou; Jian-Hong, Shi; Wan-Su, Bao

    2016-04-01

    In this paper, from the original definition of fidelity in a pure state, we first give a well-defined expansion fidelity between two Gaussian mixed states. It is related to the variances of output and input states in quantum information processing. It is convenient to quantify the quantum teleportation (quantum clone) experiment since the variances of the input (output) state are measurable. Furthermore, we also give a conclusion that the fidelity of a pure input state is smaller than the fidelity of a mixed input state in the same quantum information processing. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002) and the Foundation of Science and Technology on Information Assurance Laboratory (Grant No. KJ-14-001).

  2. Identifiability, stratification and minimum variance estimation of causal effects.

    Science.gov (United States)

    Tong, Xingwei; Zheng, Zhongguo; Geng, Zhi

    2005-10-15

    The weakest sufficient condition for the identifiability of causal effects is weakly ignorable treatment assignment, which implies that potential responses are independent of treatment assignment in each fine subpopulation stratified by a covariate. In this paper, we expand the independence that holds in fine subpopulations to the case where the independence may also hold in several coarse subpopulations, each of which consists of several fine subpopulations and may overlap with other coarse subpopulations. We first show that the identifiability of causal effects occurs if and only if the coarse subpopulations partition the whole population. We then propose a principle, called the minimum variance principle, which says that the estimator possessing the minimum variance is preferred, in dealing with the stratification and the estimation of the causal effects. Simulation results, with detailed programming, and a practical example demonstrate that this is a feasible and reasonable way to achieve our goals. PMID:16149123

  3. Convergence of Recursive Identification for ARMAX Process with Increasing Variances

    Institute of Scientific and Technical Information of China (English)

    JIN Ya; LUO Guiming

    2007-01-01

    The autoregressive moving average exogenous (ARMAX) model is commonly adopted for describing linear stochastic systems driven by colored noise. The model is a finite mixture with an ARMA component and external inputs. In this paper we focus on parameter estimation for the ARMAX model. Classical modeling methods are usually based on the assumption that the driving noise in the moving average (MA) part has bounded variances, while in the model considered here the variances of the noise may increase by a power of log n. The plant parameters are identified by the recursive stochastic gradient algorithm. The diminishing excitation technique and some results of martingale difference theory are adopted in order to prove the convergence of the identification. Finally, some simulations are given to show the theoretical results.

  4. Sample variance and Lyman-alpha forest transmission statistics

    CERN Document Server

    Rollinde, Emmanuel; Schaye, Joop; Pâris, Isabelle; Petitjean, Patrick

    2012-01-01

    We compare the observed probability distribution function of the transmission in the H I Lyman-alpha forest, measured from the UVES 'Large Programme' sample at redshifts z = [2, 2.5, 3], to results from the GIMIC cosmological simulations. Our measured values for the mean transmission and its PDF are in good agreement with published results. Errors on statistics measured from high-resolution data are typically estimated using bootstrap or jack-knife resampling techniques after splitting the spectra into chunks. We demonstrate that these methods tend to underestimate the sample variance unless the chunk size is much larger than is commonly the case. We therefore estimate the sample variance from the simulations. We conclude that observed and simulated transmission statistics are in good agreement; in particular, we do not require the temperature-density relation to be 'inverted'.

  5. Extended Active Contour Algorithm Based on Color Variance

    Institute of Scientific and Technical Information of China (English)

    Seung-tae LEE; Young-jun HAN; Hern-soo HAHN

    2010-01-01

    The general active contour algorithm, which uses the intensity of the image, has been used to actively segment objects. Because objects may have a similar intensity but different colors, it is difficult to segment any one object from the others. Moreover, this algorithm can only be used in simple environments since it is very sensitive to noise. In order to solve these problems, this paper proposes an extended active contour algorithm based on color variance. For complex images, the color variance energy is introduced into the general active contour algorithm as the image energy. Experimental results show that the proposed active contour algorithm is very effective in various environments.
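
    The region energy at the heart of the method can be sketched as the sum of per-channel variances of the pixels inside a candidate region; the random image and mask below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
image = rng.integers(0, 256, (64, 64, 3)).astype(float)   # H x W x RGB
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                                 # candidate region

def color_variance_energy(img, region):
    pixels = img[region]               # (n_pixels, 3) RGB values in the region
    return pixels.var(axis=0).sum()    # sum of R, G and B variances

print(color_variance_energy(image, mask))
```

    A contour that encloses a single-colored object drives this energy down even when neighbouring objects share the same intensity.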

  6. No evidence for anomalously low variance circles on the sky

    CERN Document Server

    Moss, Adam; Zibin, James P

    2010-01-01

    In a recent paper, Gurzadyan & Penrose claim to have found directions on the sky centred on which are circles of anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular picture of the very early Universe. We attempted to repeat the analysis of these authors, and we can indeed confirm that such variations do exist in the temperature variance for annuli around points in the data. However, we find that this variation is entirely expected in a sky which contains the usual CMB anisotropies. In other words, properly simulated Gaussian CMB data contain just the sorts of variations claimed. Gurzadyan & Penrose have not found evidence for pre-Big Bang phenomena, but have simply re-discovered that the CMB contains structure.

  7. Variance in multiplex suspension array assays: microsphere size variation impact

    Directory of Open Access Journals (Sweden)

    Cheng R Holland

    2007-08-01

    Full Text Available Abstract Background Luminex suspension microarray assays are in widespread use. There are issues of variability of assay readings using this technology. Methods and results Size variation is demonstrated by transmission electron microscopy. Size variations of microspheres are shown to occur in stepwise increments. A strong correspondence between the microsphere size distribution and the distribution of fluorescent events from assays is shown. An estimate is made of the contribution of microsphere size variation to assay variance. Conclusion A probable significant cause of variance in suspended microsphere assay results is variation in microsphere diameter. This can potentially be addressed by changes in the manufacturing process. Provision to users of the mean size, median size, skew, the number of standard deviations that half the size range represents (sigma multiple), and standard deviation is recommended. Establishing a higher sigma multiple for microsphere production is likely to deliver a significant improvement in the precision of raw instrument readings. Further research is recommended on the molecular architecture of microsphere coatings.

  8. A surface layer variance heat budget for ENSO

    Science.gov (United States)

    Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.

    2015-05-01

    Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.

  9. Explaining the Prevalence, Scaling and Variance of Urban Phenomena

    CERN Document Server

    Gomez-Lievano, Andres; Hausmann, Ricardo

    2016-01-01

    The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.

  10. Variance in prey abundance influences time budgets of breeding seabirds: Evidence from pigeon guillemots Cepphus columba

    Science.gov (United States)

    Litzow, M.A.; Piatt, J.F.

    2003-01-01

    We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.

  11. The relation of the Allan- and Delta-variance to the continuous wavelet transform

    OpenAIRE

    Zielinsky, M.; Stutzki, J.

    1999-01-01

    This paper is understood as a supplement to the paper by [Stutzki et al, 1998], where we have shown the usefulness of the Allan-variance and its higher dimensional generalization, the Delta-variance, for the characterization of molecular cloud structures. In this study we present the connection between the Allan- and Delta-variance and a more popular structure analysis tool: the wavelet transform. We show that the Allan- and Delta-variances are the variances of wavelet transform coefficients.
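
    A numerical illustration of the stated correspondence (a sketch on an arbitrary random-walk signal; normalization conventions for both quantities vary between authors, so only the proportionality matters here):

      import numpy as np

      rng = np.random.default_rng(3)
      signal = np.cumsum(rng.normal(size=8192))   # random-walk test signal

      def allan_variance(x, L):
          # Half the mean squared difference of averages over adjacent
          # windows of length L.
          m = x[: len(x) // L * L].reshape(-1, L).mean(axis=1)
          return 0.5 * np.mean(np.diff(m) ** 2)

      def haar_coeff_variance(x, L):
          # Mean squared (unnormalized) Haar-like coefficient at scale L:
          # difference of means over non-overlapping adjacent windows.
          m = x[: len(x) // L * L].reshape(-1, L).mean(axis=1)
          pairs = m[: len(m) // 2 * 2].reshape(-1, 2)
          return np.mean((pairs[:, 1] - pairs[:, 0]) ** 2)

      for L in (4, 16, 64):
          print(L, allan_variance(signal, L), 0.5 * haar_coeff_variance(signal, L))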

  12. Epistasis and Its Contribution to Genetic Variance Components

    OpenAIRE

    Cheverud, J M; Routman, E J

    1995-01-01

    We present a new parameterization of physiological epistasis that allows the measurement of epistasis separate from its effects on the interaction (epistatic) genetic variance component. Epistasis is the deviation of two-locus genotypic values from the sum of the contributing single-locus genotypic values. This parameterization leads to statistical tests for epistasis given estimates of two-locus genotypic values such as can be obtained from quantitative trait locus studies. The contributions...

  13. Empirical Performance of the Constant Elasticity Variance Option Pricing Model

    OpenAIRE

    Ren-Raw Chen; Cheng-Few Lee; Han-Hsing Lee

    2009-01-01

    In this essay, we empirically test the Constant-Elasticity-of-Variance (CEV) option pricing model by Cox (1975, 1996) and Cox and Ross (1976), and compare the performance of the CEV and alternative option pricing models, mainly the stochastic volatility model, in terms of European option pricing and cost-accuracy based analysis of their numerical procedures. In European-style option pricing, we have tested the empirical pricing performance of the CEV model and compared the results with those ...
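
    Not the paper's empirical tests, but a hedged sketch of the model being tested: Monte Carlo pricing of a European call under one common CEV parameterization, dS = r S dt + sigma S^beta dW, with an Euler scheme and made-up parameter values (so discretization bias remains):

      import numpy as np

      rng = np.random.default_rng(7)

      def cev_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.6, beta=0.75,
                      t=1.0, n_steps=252, n_paths=100_000):
          # Euler scheme for dS = r*S*dt + sigma*S**beta*dW, then the
          # discounted average payoff of a European call.  Note that the
          # units of sigma depend on beta in this parameterization.
          dt = t / n_steps
          s = np.full(n_paths, s0)
          for _ in range(n_steps):
              dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
              s = np.maximum(s + r * s * dt + sigma * s**beta * dw, 1e-8)
          return np.exp(-r * t) * np.mean(np.maximum(s - k, 0.0))

      print("CEV call, beta=0.75 :", cev_call_mc())
      print("GBM check, beta=1.0 :", cev_call_mc(sigma=0.2, beta=1.0))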

  14. DISCO analysis: A nonparametric extension of analysis of variance

    OpenAIRE

    RIZZO, MARIA L.; Székely, Gábor J.

    2010-01-01

    In classical analysis of variance, dispersion is measured by considering squared distances of sample elements from the sample mean. We consider a measure of dispersion for univariate or multivariate response based on all pairwise distances between-sample elements, and derive an analogous distance components (DISCO) decomposition for powers of distance in $(0,2]$. The ANOVA F statistic is obtained when the index (exponent) is 2. For each index in $(0,2)$, this decomposition determines a nonpar...

  15. From the Editors: Common method variance in international business research

    OpenAIRE

    Sea-Jin Chang; Arjen van Witteloostuijn; Lorraine Eden

    2010-01-01

    JIBS receives many manuscripts that report findings from analyzing survey data based on same-respondent replies. This can be problematic since same-respondent studies can suffer from common method variance (CMV). Currently, authors who submit manuscripts to JIBS that appear to suffer from CMV are asked to perform validity checks and resubmit their manuscripts. This letter from the Editors is designed to outline the current state of best practice for handling CMV in international business rese...

  16. Stream sampling for variance-optimal estimation of subset sums

    OpenAIRE

    Cohen, Edith; Duffield, Nick; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

    From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present an efficient reservoir sampling scheme, $\\varoptk$, that dominates all previous schemes in terms of estimation quality. $\\varoptk$ provides {\\em variance optimal unbiased estimation of subset sum...

  17. On mean reward variance in semi-Markov processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2005-01-01

    Roč. 62, č. 3 (2005), s. 387-397. ISSN 1432-2994 R&D Projects: GA ČR(CZ) GA402/05/0115; GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Markov and semi-Markov processes with rewards * variance of cumulative reward * asymptotic behaviour Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.259, year: 2005

  18. Analysis of Variance in the Modern Design of Experiments

    Science.gov (United States)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
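
    A hedged Python illustration rather than the tutorial's aerospace examples: one-way fixed-effects ANOVA on synthetic groups, first via scipy and then from the sum-of-squares decomposition it rests on:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      groups = [rng.normal(10.0, 1.0, 30),      # three synthetic test conditions;
                rng.normal(10.0, 1.0, 30),      # only the third has a shifted mean
                rng.normal(11.0, 1.0, 30)]

      f, p = stats.f_oneway(*groups)
      print(f"scipy one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

      # The same F statistic from the underlying sum-of-squares decomposition.
      grand = np.concatenate(groups).mean()
      ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
      ss_w = sum(((g - g.mean()) ** 2).sum() for g in groups)
      df_b = len(groups) - 1
      df_w = sum(len(g) for g in groups) - len(groups)
      print("F by hand:", (ss_b / df_b) / (ss_w / df_w))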

  19. Variance computations for functionals of absolute risk estimates

    OpenAIRE

    Pfeiffer, R. M.; E. Petracci

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function base...

  20. What Do We Know About Variance in Accounting Profitability?

    OpenAIRE

    Anita M McGahan; Porter, Michael E.

    2002-01-01

    In this paper, we analyze the variance of accounting profitability among a broad cross-section of firms in the American economy from 1981 to 1994. The purpose of the analysis is to identify the importance of year, industry, corporate-parent, and business-specific effects on accounting profitability among operating businesses across sectors. The findings indicate that industry and corporate-parent effects are important and related to one another. As expected, business-specific effects, which a...

  1. Constraining the local variance of H0 from directional analyses

    Science.gov (United States)

    Bengaly, C. A. P., Jr.

    2016-04-01

    We evaluate the local variance of the Hubble Constant H0 with low-z Type Ia Supernovae (SNe). Our analyses are performed using a hemispherical comparison method in order to test whether taking the bulk flow motion into account can reconcile the measurement of the Hubble Constant H0 from standard candles (H0 = 73.8±2.4 km s-1 Mpc-1) with that of the Planck Cosmic Microwave Background data (H0 = 67.8±0.9 km s-1 Mpc-1). We find that H0 ranges from 68.9±0.5 km s-1 Mpc-1 to 71.2±0.7 km s-1 Mpc-1 across the celestial sphere (1σ uncertainty), implying a maximal Hubble Constant variance of δH0 = (2.30±0.86) km s-1 Mpc-1 towards the (l,b) = (315°,27°) direction. Interestingly, this result agrees with the bulk flow direction estimates found in the literature, as well as with previous evaluations of the H0 variance due to the presence of nearby inhomogeneities. We assess the statistical significance of this result with different prescriptions of Monte Carlo simulations, obtaining moderate statistical significance, i.e., a 68.7% confidence level (CL), for such variance. Furthermore, we test the hypothesis of a higher H0 value in the presence of a bulk flow velocity dipole, finding some evidence for this result which, however, cannot be claimed to be significant due to the current large uncertainty in the SNe distance modulus. We therefore conclude that the tension between different H0 determinations can plausibly be caused by the bulk flow motion of the local Universe, even though the current incompleteness of the SNe data set, both in terms of celestial coverage and distance uncertainties, does not allow a high statistical significance for these results or a definitive conclusion about this issue.
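
    A schematic of the hemispherical comparison idea (synthetic supernovae, a crude cz = H0 d fit, and no treatment of light curves or covariances, so purely illustrative):

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic low-z SNe: unit sky directions, distances (Mpc), recession
      # velocities (km/s) with scatter around a fiducial H0 = 70.
      n = 300
      u = rng.normal(size=(n, 3))
      u /= np.linalg.norm(u, axis=1, keepdims=True)
      d = rng.uniform(30.0, 150.0, n)
      cz = 70.0 * d * (1.0 + rng.normal(0.0, 0.02, n))

      def hemispherical_h0(axis):
          # Fit cz = H0 * d separately in the two hemispheres about 'axis'.
          up = u @ axis > 0.0
          fit = lambda m: np.sum(cz[m] * d[m]) / np.sum(d[m] ** 2)
          return fit(up), fit(~up)

      # Scan trial axes (here the first 100 data directions) and record
      # the largest hemispheric H0 difference.
      diffs = [abs(h1 - h2) for h1, h2 in (hemispherical_h0(a) for a in u[:100])]
      print("max delta H0 over 100 trial axes:", max(diffs))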

  2. Combinatorial Topic Models using Small-Variance Asymptotics

    OpenAIRE

    Jiang, Ke; Sra, Suvrit; Kulis, Brian

    2016-01-01

    Topic models have emerged as fundamental tools in unsupervised machine learning. Most modern topic modeling algorithms take a probabilistic view and derive inference algorithms based on Latent Dirichlet Allocation (LDA) or its variants. In contrast, we study topic modeling as a combinatorial optimization problem, and derive its objective function from LDA by passing to the small-variance limit. We minimize the derived objective by using ideas from combinatorial optimization, which results in ...

  3. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...

  4. Recursive identification of time-varying systems using minimum variance

    OpenAIRE

    Tian, Y.; M. Wahl; Vasseur, C.

    2003-01-01

    This paper presents a new on-line identification method based on minimum variance using a sliding data window, in order to improve robustness against noise and to provide effective tracking ability. This method is based on a local linearization model between the sliding data window bounds and on an incremental procedure. The corresponding algorithm is applied to a non-linear fermentation process. The results illustrate the performance of this method in comparison with other existing techniques.

  5. Mean–Variance and Expected Utility: The Borch Paradox

    OpenAIRE

    David Johnstone; Dennis Lindley

    2013-01-01

    The model of rational decision-making in most of economics and statistics is expected utility theory (EU) axiomatised by von Neumann and Morgenstern, Savage and others. This is less the case, however, in financial economics and mathematical finance, where investment decisions are commonly based on the methods of mean-variance (MV) introduced in the 1950s by Markowitz. Under the MV framework, each available investment opportunity ("asset") or portfolio is represented in just two dimensions by ...

  6. End-state comfort and joint configuration variance during reaching

    Science.gov (United States)

    Solnik, Stanislaw; Pazin, Nemanja; Coelho, Chase J.; Rosenbaum, David A.; Scholz, John P.; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2013-01-01

    This study joined two approaches to motor control. The first approach comes from cognitive psychology and is based on the idea that goal postures and movements are chosen to satisfy task-specific constraints. The second approach comes from the principle of motor abundance and is based on the idea that control of apparently redundant systems is associated with the creation of multi-element synergies stabilizing important performance variables. The first approach has been tested by relying on psychophysical ratings of comfort. The second approach has been tested by estimating variance along different directions in the space of elemental variables such as joint postures. The two approaches were joined here. Standing subjects performed series of movements in which they brought a hand-held pointer to each of four targets oriented within a frontal plane, close to or far from the body. The subjects were asked to rate the comfort of the final postures, and the variance of their joint configurations during the steady state following pointing was quantified with respect to pointer endpoint position and pointer orientation. The subjects showed consistent patterns of comfort ratings among the targets, and all movements were characterized by multi-joint synergies stabilizing both pointer endpoint position and orientation. Contrary to what was expected, less comfortable postures had higher joint configuration variance than did more comfortable postures without major changes in the synergy indices. Multi-joint synergies stabilized the pointer position and orientation similarly across a range of comfortable/uncomfortable postures. The results are interpreted in terms conducive to the two theoretical frameworks underlying this work, one focusing on comfort ratings reflecting mean postures adopted for different targets and the other focusing on indices of joint configuration variance. PMID:23288326

  7. Board composition and firm performance variance: Australian evidence

    OpenAIRE

    Yi Wang; Judith Oliver

    2009-01-01

    Purpose – The purpose of this paper is to investigate the relationship between board composition and firm performance variance in the context of recent corporate governance reforms, based on the agency and organisational literatures. Design/methodology/approach – This paper uses 384 of the top 500 Australian companies as its dataset. Board composition measures include the percentages of affiliated, executive and independent members on the board. Firm risk is represented by the standard deviat...

  8. Imaging structural co-variance between human brain regions

    OpenAIRE

    Alexander-Bloch, Aaron; Giedd, Jay N.; Bullmore, Ed

    2013-01-01

    Brain structure varies between people in a markedly organized fashion. Communities of brain regions co-vary in their morphological properties. For example, cortical thickness in one region influences the thickness of structurally and functionally connected regions. Such networks of structural co-variance partially recapitulate the functional networks of healthy individuals and the foci of grey matter loss in neurodegenerative disease. This architecture is genetically heritable, is associated ...

  9. Variance Analysis of Genus Ipomoea based on Morphological Characters

    OpenAIRE

    DWI PRIYANTO; SURATMAN; AHMAD DWI SETYAWAN

    2000-01-01

    The objective of this research was to find out the variability of morphological characters of the genus Ipomoea, including coefficient of variance and phylogenetic relationships. The genus Ipomoea has been identified as consisting of four species, i.e. Ipomoea crassicaulis Rob, Ipomoea aquatica Forsk., Ipomoea reptans Poir and Ipomoea leari. Four species of the genus were collected from around the lake on the campus of Sebelas Maret University, Surakarta. Comparison of species variability was based on the varia...

  10. Are we underestimating the genetic variances of dimorphic traits?

    OpenAIRE

    Wolak, ME; Roff, DA; Fairbairn, DJ

    2015-01-01

    Populations often contain discrete classes or morphs (e.g., sexual dimorphisms, wing dimorphisms, trophic dimorphisms) characterized by distinct patterns of trait expression. In quantitative genetic analyses, the different morphs can be considered as different environments within which traits are expressed. Genetic variances and covariances can then be estimated independently for each morph or in a combined analysis. In the latter case, morphs can be considered as separate...

  11. The Column Density Variance-$\mathcal{M}_s$ Relationship

    Science.gov (United States)

    Burkhart, Blakesley; Lazarian, A.

    2012-08-01

    Although there is a wealth of column density tracers for both the molecular and diffuse interstellar medium, there are few observational studies investigating the relationship between the density variance ($\sigma^2$) and the sonic Mach number ($\mathcal{M}_s$). This is in part due to the fact that the $\sigma^2$-$\mathcal{M}_s$ relationship is derived, via MHD simulations, for the three-dimensional (3D) density variance only, which is not a direct observable. We investigate the utility of a 2D column density $\sigma^2_{\Sigma/\Sigma_0}$-$\mathcal{M}_s$ relationship using solenoidally driven isothermal MHD simulations and find that the best fit follows closely the form of the 3D density $\sigma^2_{\rho/\rho_0}$-$\mathcal{M}_s$ trend but includes a scaling parameter A, such that $\sigma^2_{\ln(\Sigma/\Sigma_0)} = A \ln(1 + b^2 \mathcal{M}_s^2)$, where A = 0.11 and b = 1/3. This relation is consistent with the observational data reported for the Taurus and IC 5146 molecular clouds with b = 0.5 and A = 0.16, and b = 0.5 and A = 0.12, respectively. These results open up the possibility of using the 2D column density values of $\sigma^2$ for investigations of the relation between the sonic Mach number and the probability distribution function (PDF) variance, in addition to existing PDF sonic Mach number relations.
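
    The quoted relation is simple enough to apply directly; a small helper using the fitted A and b from the simulations above, with the obvious algebraic inversion for $\mathcal{M}_s$:

      import numpy as np

      def sigma2_column(mach, A=0.11, b=1.0 / 3.0):
          # sigma^2_{ln(Sigma/Sigma_0)} = A * ln(1 + b^2 * M_s^2)
          return A * np.log(1.0 + b**2 * mach**2)

      def mach_from_sigma2(s2, A=0.11, b=1.0 / 3.0):
          # Algebraic inversion: estimate M_s from an observed 2D variance.
          return np.sqrt(np.expm1(s2 / A)) / b

      for m in (1.0, 5.0, 10.0):
          s2 = sigma2_column(m)
          print(m, round(s2, 4), round(mach_from_sigma2(s2), 4))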

  12. VAPOR: variance-aware per-pixel optimal resource allocation.

    Science.gov (United States)

    Eisenberg, Yiftach; Zhai, Fan; Pappas, Thrasyvoulos N; Berry, Randall; Katsaggelos, Aggelos K

    2006-02-01

    Characterizing the video quality seen by an end-user is a critical component of any video transmission system. In packet-based communication systems, such as wireless channels or the Internet, packet delivery is not guaranteed. Therefore, from the point-of-view of the transmitter, the distortion at the receiver is a random variable. Traditional approaches have primarily focused on minimizing the expected value of the end-to-end distortion. This paper explores the benefits of accounting for not only the mean, but also the variance of the end-to-end distortion when allocating limited source and channel resources. By accounting for the variance of the distortion, the proposed approach increases the reliability of the system by making it more likely that what the end-user sees, closely resembles the mean end-to-end distortion calculated at the transmitter. Experimental results demonstrate that variance-aware resource allocation can help limit error propagation and is more robust to channel-mismatch than approaches whose goal is to strictly minimize the expected distortion. PMID:16479799

  13. Number variance for hierarchical random walks and related fluctuations

    CERN Document Server

    Bojdecki, Tomasz; Talarczyk, Anna

    2010-01-01

    We study an infinite system of independent symmetric random walks on a hierarchical group, in particular, the c-random walks. Such walks are used, e.g., in population genetics. The number variance problem consists in investigating if the variance of the number of "particles" N_n(L) lying in the ball of radius L at a given time n remains bounded, or even better, converges to a finite limit, as $L\\to \\infty$. We give a necessary and sufficient condition and discuss its relationship to transience/recurrence property of the walk. Next we consider normalized fluctuations of N_n(L) around the mean as $n\\to \\infty$ and L is increased in an appropriate way. We prove convergence of finite dimensional distributions to a Gaussian process whose properties are discussed. As the c-random walks mimic symmetric stable processes on R, we compare our results to those obtained by Hambly and Jones (2007,2009), where the number variance problem for an infinite system of symmetric stable processes on R was studied. Since the hiera...

  14. Variance optimal sampling based estimation of subset sums

    CERN Document Server

    Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

    From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\\log k)$ time, which is optimal even on the word RAM.
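
    The varopt_k scheme itself is intricate. The sketch below shows only the Horvitz-Thompson backdrop it improves on, in which threshold-based inclusion probabilities and inverse-probability weights give unbiased subset-sum estimates; the threshold chosen here is a crude stand-in, not the paper's algorithm:

      import numpy as np

      rng = np.random.default_rng(8)
      w = rng.pareto(1.5, 10_000) + 1.0          # heavy-tailed item weights
      k = 500

      # Threshold-based inclusion probabilities (a crude stand-in for the
      # paper's threshold) and Horvitz-Thompson weight adjustment.
      tau = np.sort(w)[-k]
      p = np.minimum(w / tau, 1.0)
      keep = rng.random(len(w)) < p
      est_w = np.where(keep, w / p, 0.0)         # unbiased per-item estimates

      # An arbitrary subset query (in practice the subset would be defined
      # by item attributes rather than the weights themselves).
      subset = w > 5.0
      print("true subset sum     :", w[subset].sum())
      print("estimated subset sum:", est_w[subset].sum())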

  15. Asymptotically robust variance estimation for person-time incidence rates.

    Science.gov (United States)

    Scosyrev, Emil

    2016-05-01

    Person-time incidence rates are frequently used in medical research. However, standard estimation theory for this measure of event occurrence is based on the assumption of independent and identically distributed (iid) exponential event times, which implies that the hazard function remains constant over time. Under this assumption and assuming independent censoring, observed person-time incidence rate is the maximum-likelihood estimator of the constant hazard, and asymptotic variance of the log rate can be estimated consistently by the inverse of the number of events. However, in many practical applications, the assumption of constant hazard is not very plausible. In the present paper, an average rate parameter is defined as the ratio of expected event count to the expected total time at risk. This rate parameter is equal to the hazard function under constant hazard. For inference about the average rate parameter, an asymptotically robust variance estimator of the log rate is proposed. Given some very general conditions, the robust variance estimator is consistent under arbitrary iid event times, and is also consistent or asymptotically conservative when event times are independent but nonidentically distributed. In contrast, the standard maximum-likelihood estimator may become anticonservative under nonconstant hazard, producing confidence intervals with less-than-nominal asymptotic coverage. These results are derived analytically and illustrated with simulations. The two estimators are also compared in five datasets from oncology studies. PMID:26439107
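
    A sketch of the contrast between the classical and the robust variance for the log rate (toy data; the code follows the general influence-function/sandwich recipe and is not necessarily the paper's exact formulation):

      import numpy as np

      def rate_with_robust_ci(events, time, z=1.96):
          # Average rate = total events / total person-time; sandwich-type
          # variance for the log rate from the estimating-equation residuals.
          events = np.asarray(events, float)
          time = np.asarray(time, float)
          rate = events.sum() / time.sum()
          resid = events - rate * time        # sum(resid) == 0 by construction
          var_log_robust = (resid ** 2).sum() / events.sum() ** 2
          var_log_mle = 1.0 / events.sum()    # classical 1/D under constant hazard
          half = z * np.sqrt(var_log_robust)
          ci = (rate * np.exp(-half), rate * np.exp(half))
          return rate, ci, var_log_robust, var_log_mle

      events = [0, 1, 0, 2, 0, 1, 3, 0, 1, 0]
      time = [1.2, 0.8, 2.0, 1.5, 0.3, 1.1, 2.5, 0.9, 1.0, 0.7]
      print(rate_with_robust_ci(events, time))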

  16. Hubble flow variance and the cosmic rest frame

    CERN Document Server

    Wiltshire, David L; Mattsson, Teppo; Watkins, Richard

    2013-01-01

    We characterize the radial and angular variance of the Hubble flow in the COMPOSITE sample of 4534 galaxy distances. Independent of any cosmological assumptions other than the existence of a suitably averaged linear Hubble law, we find with decisive Bayesian evidence (ln B >> 5) that the Hubble constant averaged in spherical radial shells is closer to its global value when referred to the rest frame of the Local Group rather than to the standard rest frame of the Cosmic Microwave Background (CMB) radiation. Angular averages reveal a dipole structure in the Hubble flow variance, correlated with structures within a sphere of radius 30/h - 60/h Mpc. Furthermore, the angular map of Hubble flow variance is found to coincide with the angular map of the residual CMB temperature dipole in the Local Group rest frame, with correlation coefficient -0.92. This suggests a new mechanism for the origin of the CMB dipole: in addition to a local boost it is generated by differences in the distance to the surface of last scatt...

  17. Variance Analysis of Genus Ipomoea based on Morphological Characters

    Directory of Open Access Journals (Sweden)

    DWI PRIYANTO

    2000-07-01

    Full Text Available The objective of this research was to find out the variability of morphological characters of the genus Ipomoea, including coefficient of variance and phylogenetic relationships. The genus Ipomoea has been identified as consisting of four species, i.e. Ipomoea crassicaulis Rob, Ipomoea aquatica Forsk., Ipomoea reptans Poir and Ipomoea leari. Four species of the genus were collected from around the lake on the campus of Sebelas Maret University, Surakarta. Comparison of species variability was based on the variance coefficients of vegetative and generative morphological characters. The vegetative characters observed were roots, stems and leaves, while the generative characters observed were flowers, fruits, and seeds. Phylogenetic relationships were determined by clustering association coefficients. Coefficient of variance analysis of vegetative and generative morphological characters resulted in several groups based on the degree of variability, i.e. low, moderate, high, very high, or none. The phylogenetic relationships showed that Ipomoea aquatica Forsk. and Ipomoea reptans Poir. have a higher degree of phylogenetic relatedness than Ipomoea leari and Ipomoea crassicaulis Rob.

  18. How a hurricane disturbance influences extreme CO2 fluxes and variance in a tropical forest

    International Nuclear Information System (INIS)

    A current challenge is to understand what legacies disturbances leave on ecosystems, in order to predict response patterns and trajectories. This work focuses on the ecological implications of a major hurricane and analyzes its influence on forest gross primary productivity (GPP; derived from the moderate-resolution imaging spectroradiometer, MODIS) and soil CO2 efflux. Following the hurricane, there was a reduction of nearly 0.5 kgC m−2 yr−1, equivalent to ∼15% of the long-term mean GPP (∼3.0 ± 0.2 kgC m−2 yr−1; years 2003–8). Annual soil CO2 emissions for the year following the hurricane were > 3.9 ± 0.5 kgC m−2 yr−1, whereas for the second year emissions were 1.7 ± 0.4 kgC m−2 yr−1. Higher annual emissions were associated with higher probabilities of days with extreme soil CO2 efflux rates ( > 9.7 μmol CO2 m−2 s−1). The variance of GPP was highly variable across years and was substantially increased following the hurricane. Extreme soil CO2 efflux after the hurricane was associated with deposition of nitrogen-rich fresh organic matter, higher basal soil CO2 efflux rates and changes in variance of the soil temperature. These results show that CO2 dynamics are highly variable following hurricanes, but also demonstrate the strong resilience of tropical forests following these events. (letter)

  19. Impact of nonrandom mating on genetic variance and gene flow in populations with mass selection.

    Science.gov (United States)

    Sánchez, Leopoldo; Woolliams, John A

    2004-01-01

    The mechanisms by which nonrandom mating affects selected populations are not completely understood and remain a subject of scientific debate in the development of tractable predictors of population characteristics. The main objective of this study was to provide a predictive model for the genetic variance and covariance among mates for traits subjected to directional selection in populations with nonrandom mating based on the pedigree. Stochastic simulations were used to check the validity of this model. Our predictions indicate that the positive covariance among mates that is expected to result with preferential mating of relatives can be severely overpredicted from neutral expectations. The covariance expected from neutral theory is offset by an opposing covariance between the genetic mean of an individual's family and the Mendelian sampling term of its mate. This mechanism was able to predict the reduction in covariance among mates that we observed in the simulated populations and, in consequence, the equilibrium genetic variance and expected long-term genetic contributions. Additionally, this study provided confirmatory evidence on the postulated relationships of long-term genetic contributions with both the rate of genetic gain and the rate of inbreeding (ΔF) with nonrandom mating. The coefficient of variation of the expected gene flow among individuals and ΔF was sensitive to nonrandom mating when heritability was low, but less so as heritability increased, and the theory developed in the study was sufficient to explain this phenomenon. PMID:15020441

  20. Wind energy cost reductions

    International Nuclear Information System (INIS)

    Commercial wind turbines manufactured today reliably generate electrical energy at approximately $0.07 - $0.09 per kWh, depending on the wind speeds at the site and the nature of the terrain. This paper reports that, to be competitive with other electricity generation technologies, these costs must be reduced by 30 - 50% if current electricity pricing practices continue. Reductions of this magnitude can be achieved through reductions in wind turbine capital costs, increases in efficiency, and changes in the financial market's perception of wind energy technology. Advanced technology can make a significant contribution in each of these areas.

  1. Surgical Treatment of Advanced Intussusception and Failed Rectal Inflation Reduction

    Institute of Scientific and Technical Information of China (English)

    唐伟椿; 成守礼

    1983-01-01

    From Nov. 1975 to July 1982, 80 cases (51 males and 29 females) of intussusception were operated on; in 31 of them rectal inflation reduction had failed, and 49 were advanced intussusceptions, including some small-intestinal intussusceptions. 66 cases were primary; 62 of these children were aged under one, and most had either enlarged regional mesenteric lymph nodes or a mobile cecum. 14 cases were secondary intussusceptions, 13 of them in children aged over one: there were 5 cases of Meckel's diverticulum, 4 polyps, 4 ileal duplications, and one allergic purpura complicated with hematoma in the anterior wall of the cecum. Manual reduction was accomplished in 58 patients, together with simultaneous appendectomy; no plication of the cecum was attempted, and no relapse was noted. Intestinal resection followed by anastomosis was performed in 22 cases for intestinal gangrene. Rectal inflation was unsuccessful in two patients with intestinal perforation, and surgical repair was performed immediately. Only one death, due to preoperative pneumonia and chickenpox, was recorded, giving a mortality rate of 1.25%. Intussusception is a common acute abdominal emergency in infants. Since air-enema treatment came into use, reduction of early intussusception has achieved definite efficacy and markedly lowered the operation rate. For complex and advanced intussusception, however, air enema is not only difficult to succeed with but often dangerous, and surgical treatment is still required.

  2. Variance as a Leading Indicator of Regime Shift in Ecosystem Services

    Directory of Open Access Journals (Sweden)

    Stephen R. Carpenter

    2006-12-01

    Full Text Available Many environmental conflicts involve pollutants such as greenhouse gas emissions that are dispersed through space and cause losses of ecosystem services. As pollutant emissions rise in one place, a spatial cascade of declining ecosystem services can spread across a larger landscape because of the dispersion of the pollutant. This paper considers the problem of anticipating such spatial regime shifts by monitoring time series of the pollutant or associated ecosystem services. Using such data, it is possible to construct indicators that rise sharply in advance of regime shifts. Specifically, the maximum eigenvalue of the variance-covariance matrix of the multivariate time series of pollutants and ecosystem services rises prior to the regime shift. No specific knowledge of the mechanisms underlying the regime shift is needed to construct the indicator. Such leading indicators of regime shifts could provide useful signals to management agencies or to investors in ecosystem service markets.
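
    A minimal version of the indicator (a synthetic three-variable system with slowly inflating variance; the window length is arbitrary):

      import numpy as np

      def max_eig_indicator(series, window=100):
          # Largest eigenvalue of the moving-window variance-covariance matrix
          # of a multivariate time series (rows = time, columns = variables).
          out = []
          for t in range(window, len(series) + 1):
              cov = np.cov(series[t - window:t].T)
              out.append(np.linalg.eigvalsh(cov)[-1])
          return np.array(out)

      # Toy system whose variance slowly inflates toward a "regime shift".
      rng = np.random.default_rng(2)
      n = 1000
      scale = 1.0 + 4.0 * (np.arange(n) / n) ** 8
      x = rng.normal(size=(n, 3)) * scale[:, None]

      lam = max_eig_indicator(x)
      print("indicator early vs late:", lam[:100].mean(), lam[-100:].mean())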

  3. CBIR Using Kekre's Transform over Row column Mean and Variance Vectors

    Directory of Open Access Journals (Sweden)

    Dr. H. B. Kekre

    2010-08-01

    Full Text Available Advancements in image acquisition technologies and storage systems encourage the design of sophisticated systems that retrieve images effectively. In this paper, we describe a novel approach for image retrieval based on image contents. It involves the creation of a feature database for all database images, with feature vectors formed by two methods: first, by applying Kekre's transform over row and column mean vectors, and second, by applying Kekre's transform over row-column variance vectors of the image. Further, we apply a similarity measure to compare the query image and the database images. Finally, we retrieve similar images from the database based on a predetermined threshold. A database of 525 images of seven different categories (75 from each category) is used for demonstration, to compare the performance of these algorithms using precision and recall as parameters.

  4. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    Science.gov (United States)

    Budde, M.E.; Tappan, G.; Rowland, J.; Lewis, J.; Tieszen, L.L.

    2004-01-01

    The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices.
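
    A sketch of the local variance classification on a synthetic image (window size, threshold, and the degraded patch are invented; the multi-year summary step is only indicated in a comment):

      import numpy as np
      from scipy import ndimage

      def local_anomalies(img, size=9, z=2.0):
          # Classify each pixel as normal (0), positive (+1) or negative (-1)
          # anomaly relative to its size-by-size neighbourhood.
          mean = ndimage.uniform_filter(img, size)
          sq = ndimage.uniform_filter(img ** 2, size)
          std = np.sqrt(np.maximum(sq - mean ** 2, 0.0))
          dev = img - mean
          return np.where(dev > z * std, 1, np.where(dev < -z * std, -1, 0))

      rng = np.random.default_rng(4)
      ndvi = rng.normal(0.5, 0.05, (200, 200))    # synthetic integrated NDVI
      ndvi[80:90, 80:90] -= 0.3                   # hypothetical degraded patch

      labels = local_anomalies(ndvi)
      # In the study, per-year labels would then be summed across years.
      print("pixels flagged as negative anomalies:", int((labels == -1).sum()))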

  5. Variance-reduced estimator of the connected two-point function in the presence of a broken Z2-symmetry

    Science.gov (United States)

    Hasenbusch, Martin

    2016-03-01

    The exchange or geometric cluster algorithm allows us to define a variance-reduced estimator of the connected two-point function in the presence of a broken Z2-symmetry. We present numerical tests for the improved Blume-Capel model on the simple-cubic lattice. We perform simulations for the critical isotherm, the low-temperature phase at vanishing external field, and, for comparison, also the high-temperature phase. For the connected two-point function, a substantial reduction of the variance can be obtained, allowing us to compute the correlation length ξ with high precision. Based on these results, estimates for various universal amplitude ratios that characterize the universality class of the three-dimensional Ising model are computed.

  6. Regression between earthquake magnitudes having errors with known variances

    Science.gov (United States)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that the new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
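
    For the homoscedastic case the closed-form fit is the classical Deming regression; a sketch with the error-variance ratio delta assumed known, returning also a weighted residual sum of squares so that fits for different magnitude pairs can be compared (synthetic magnitude pairs, not the paper's datasets):

      import numpy as np

      def deming(x, y, delta=1.0):
          # Best-fit y = a*x + b with homoscedastic errors in both variables
          # and known variance ratio delta = var(err_y) / var(err_x).
          x = np.asarray(x, float)
          y = np.asarray(y, float)
          xm, ym = x.mean(), y.mean()
          sxx = np.mean((x - xm) ** 2)
          syy = np.mean((y - ym) ** 2)
          sxy = np.mean((x - xm) * (y - ym))
          a = (syy - delta * sxx +
               np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
          b = ym - a * xm
          # Weighted residual sum of squares (up to a constant factor),
          # usable to compare the goodness of fit of different pairs.
          ss = np.sum((y - a * x - b) ** 2) / (delta + a ** 2)
          return a, b, ss

      rng = np.random.default_rng(5)
      mw_true = rng.uniform(4.0, 8.0, 200)
      mb = 0.7 * mw_true + 1.0 + rng.normal(0.0, 0.15, 200)   # both noisy
      mw = mw_true + rng.normal(0.0, 0.15, 200)
      print(deming(mb, mw, delta=1.0))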

  7. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  8. FMRI group analysis combining effect estimates and their variances.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W

    2012-03-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach
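
    Not MEMA itself, but its core ingredient can be sketched in a few lines: a precision-weighted random-effects combination of per-subject effect estimates with their within-subject variances (a moment-style estimate of the cross-subject variance; toy numbers):

      import numpy as np

      def random_effects_combine(beta, var_within):
          # Precision-weighted combination of per-subject effects with a
          # moment (DerSimonian-Laird-style) estimate of the cross-subject
          # variance tau^2.
          beta = np.asarray(beta, float)
          v = np.asarray(var_within, float)
          w = 1.0 / v
          fixed = np.sum(w * beta) / w.sum()
          q = np.sum(w * (beta - fixed) ** 2)      # heterogeneity statistic
          tau2 = max(0.0, (q - (len(beta) - 1)) /
                     (w.sum() - (w ** 2).sum() / w.sum()))
          w_star = 1.0 / (v + tau2)                # shrink weights by tau^2
          mean = np.sum(w_star * beta) / w_star.sum()
          se = np.sqrt(1.0 / w_star.sum())
          return mean, se, tau2

      betas = [0.8, 1.1, 0.5, 1.4, 0.9, 1.0]        # per-subject estimates
      vars_ = [0.04, 0.10, 0.02, 0.30, 0.05, 0.08]  # within-subject variances
      print(random_effects_combine(betas, vars_))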

  9. Analysis and application of minimum variance discrete linear system identification

    Science.gov (United States)

    Kotob, S.; Kaufman, H.

    1977-01-01

    An on-line minimum variance (MV) parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise (AMN). The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  10. Two-dimensional finite-element temperature variance analysis

    Science.gov (United States)

    Heuser, J. S.

    1972-01-01

    The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specifications of these temperatures reduce errors in thermal calculations.

  11. Variance of surface area estimators using spatial grids of lines

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří; Kubínová, Lucie

    Vol. 2. Kraków : Polish Society for Stereology, 2005 - (Chrapoński, J.; Cwajna, J.; Wojnar, L.), s. 252-256 ISBN 83-917834-4-8. [European Congress on Stereology and Image Analysis /9./ and International Conference on Stereology and Image Analysis in Materials Science STERMAT /7./. Zakopane (PL), 10.05.2005-13.05.2005] R&D Projects: GA AV ČR(CZ) IAA100110502; GA AV ČR(CZ) IAA600110507; GA ČR(CZ) GA304/05/0153 Institutional research plan: CEZ:AV0Z50110509 Keywords : variance * stereology * surface area Subject RIV: BA - General Mathematics

  12. Analysis and application of minimum variance discrete time system identification

    Science.gov (United States)

    Kotob, S.; Kaufman, H.

    1976-01-01

    An on-line minimum variance parameter identifier was developed which embodies both accuracy and computational efficiency. The new formulation resulted in a linear estimation problem with both additive and multiplicative noise. The resulting filter is shown to utilize both the covariance of the parameter vector itself and the covariance of the error in identification. It is proven that the identification filter is mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  13. A generalization of Talagrand's variance bound in terms of influences

    CERN Document Server

    Kiss, Demeter

    2010-01-01

    Consider a random variable of the form f(X_1,...,X_n), where f is a deterministic function, and where X_1,...,X_n are i.i.d. random variables. For the case where X_1 has a Bernoulli distribution, Talagrand (1994) gave an upper bound for the variance of f in terms of the individual influences of the variables X_i for i=1,...,n. We generalize this result to the case where X_1 takes finitely many values.

  14. Analysis of variance tables based on experimental structure.

    Science.gov (United States)

    Brien, C J

    1983-03-01

    A stepwise procedure for obtaining the experimental structure for a particular experiment is presented together with rules for deriving the analysis-of-variance table from that structure. The procedure involves the division of the factors into groups and is essentially a generalization of the method of Nelder (1965, Proceedings of the Royal Society, Series A 283, 147-162; 1965, Proceedings of the Royal Society, Series A 283, 163-178), to what are termed 'multi-tiered' experiments. The proposed method is illustrated for a wine-tasting experiment. PMID:6871362

  15. Variance Risk Premium Differentials and Foreign Exchange Returns

    OpenAIRE

    Arash, Aloosh

    2011-01-01

    The uncovered interest rate parity does not hold in the foreign exchange market (UIP puzzle). I use the cross-country variance risk premium differential to measure the excess foreign exchange return. Consequently, similar to Bansal and Shaliastovich (2010), I provide a risk-based explanation for the violation of UIP. The empirical results, based on the monthly data of ten currency pairs among US Dollar, UK Pound, Japanese Yen, Euro, and Swiss Franc, support the model both in-sample and out-of...

  16. Variance and bias computation for enhanced system identification

    Science.gov (United States)

    Bergmann, Martin; Longman, Richard W.; Juang, Jer-Nan

    1989-01-01

    A study is made of the use of a series of variance and bias confidence criteria recently developed for the eigensystem realization algorithm (ERA) identification technique. The criteria are shown to be very effective, not only for indicating the accuracy of the identification results (especially in terms of confidence intervals), but also for helping the ERA user to obtain better results. They help determine the best sample interval, the true system order, how much data to use and whether to introduce gaps in the data used, what dimension Hankel matrix to use, and how to limit the bias or correct for bias in the estimates.

  17. A Fay-Herriot Model with Different Random Effect Variances

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.

    2011-01-01

    Roč. 40, č. 5 (2011), s. 785-797. ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf

  18. Slowdown in branching Brownian motion with inhomogeneous variance

    OpenAIRE

    Maillard, Pascal; Zeitouni, Ofer

    2013-01-01

    We consider a model of Branching Brownian Motion with time-inhomogeneous variance of the form \sigma(t/T), where \sigma is a strictly decreasing function. Fang and Zeitouni (2012) showed that the maximal particle's position M_T is such that M_T-v_\sigma T is negative of order T^{-1/3}, where v_\sigma is the integral of the function \sigma over the interval [0,1]. In this paper, we refine this result and show the existence of a function m_T, such that M_T-m_T converges in law, as T\t...

  19. A Mean-Variance Portfolio Optimal Under Utility Pricing

    Directory of Open Access Journals (Sweden)

    Hürlimann Werner

    2006-01-01

    Full Text Available An expected utility model of asset choice, which takes into account asset pricing, is considered. The obtained portfolio selection problem under utility pricing is solved under several assumptions including quadratic utility, exponential utility and multivariate symmetric elliptical returns. The obtained unique solution, called the optimal utility portfolio, is shown to be mean-variance efficient in the classical sense. Various questions, including conditions for complete diversification and the behavior of the optimal portfolio under univariate and multivariate ordering of risks as well as risk-adjusted performance measurement, are discussed.
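
    For reference, the classical mean-variance-efficient benchmark against which such portfolios are judged (the closed-form two-constraint Markowitz solution; all numbers invented):

      import numpy as np

      def mean_variance_weights(mu, cov, target):
          # Minimum-variance fully-invested portfolio with mean return 'target'
          # (two-constraint Markowitz solution via Lagrange multipliers).
          ones = np.ones(len(mu))
          inv = np.linalg.inv(cov)
          a = ones @ inv @ ones
          b = ones @ inv @ mu
          c = mu @ inv @ mu
          d = a * c - b ** 2
          lam = (c - b * target) / d
          gam = (a * target - b) / d
          return inv @ (lam * ones + gam * mu)

      mu = np.array([0.08, 0.12, 0.10])
      cov = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.06]])
      w = mean_variance_weights(mu, cov, target=0.10)
      print(w, "sum:", w.sum(), "mean return:", w @ mu)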

  20. Analysis of variance of thematic mapping experiment data.

    Science.gov (United States)

    Rosenfield, G.H.

    1981-01-01

    As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data. - from Author