WorldWideScience

Sample records for subsampling variance estimator

  1. Subsampling for graph power spectrum estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2016-10-06

    In this paper we focus on subsampling stationary random signals that reside on the vertices of undirected graphs. Second-order stationary graph signals are obtained by filtering white noise and they admit a well-defined power spectrum. Estimating the graph power spectrum forms a central component of stationary graph signal processing and related inference tasks. We show that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the power spectrum of the graph signal from the subsampled observations, without any spectral priors. In addition, a near-optimal greedy algorithm is developed to design the subsampling scheme.
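
    A toy numerical sketch of the least-squares reconstruction idea (not the paper's near-optimal greedy sampler): filter white noise on a random graph, observe a fixed subset of vertices, and solve for the power spectrum from the subsampled covariance. All sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, n_snapshots = 20, 8, 5000    # graph size, subsample size, realizations

# Random undirected graph Laplacian and its eigendecomposition.
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A
lam, V = np.linalg.eigh(L)

# Second-order stationary signal: white noise filtered on the graph,
# with true power spectrum p_true(lambda) = h(lambda)^2.
h = np.exp(-lam)                   # an arbitrary smooth graph filter
p_true = h ** 2

# Fixed subsampling: observe only k of the n vertices.
idx = rng.choice(n, size=k, replace=False)
Phi_V = V[idx, :]                  # rows of the eigenvector matrix at sampled vertices

# Sample covariance of the subsampled observations.
X = V @ (h[:, None] * rng.standard_normal((n, n_snapshots)))
Ry = (X[idx] @ X[idx].T) / n_snapshots

# R_y = sum_k p_k (Phi v_k)(Phi v_k)^T is linear in p, so vectorize the
# covariance and solve an ordinary least-squares problem for the spectrum.
G = np.stack([np.outer(Phi_V[:, i], Phi_V[:, i]).ravel() for i in range(n)], axis=1)
p_hat, *_ = np.linalg.lstsq(G, Ry.ravel(), rcond=None)

print(np.round(p_true[:5], 3))     # true spectrum (first entries)
print(np.round(p_hat[:5], 3))      # least-squares estimate from k < n vertices
```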

  2. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  3. Estimating the Modified Allan Variance

    Science.gov (United States)

    Greenhall, Charles

    1995-01-01

    The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
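
    The third-difference form lends itself to a compact implementation. Below is a hedged sketch (function and variable names are ours) of the standard MVAR estimator from phase data with tau = m*tau0; the edf machinery of the record is not reproduced.

```python
import numpy as np

def mod_avar(x, m, tau0=1.0):
    """Modified Allan variance of phase data x (in seconds) at averaging
    factor m, via the third-difference (second difference at lag m,
    then a moving sum of m of them)."""
    x = np.asarray(x, float)
    N = x.size
    assert N >= 3 * m, "need at least 3*m phase samples"
    # d[i] = x[i+2m] - 2 x[i+m] + x[i]
    d = x[2 * m:] - 2 * x[m:N - m] + x[:N - 2 * m]
    # Moving sums of m consecutive second differences.
    c = np.concatenate(([0.0], np.cumsum(d)))
    S = c[m:] - c[:-m]             # S[j] = sum_{i=j}^{j+m-1} d[i]
    tau = m * tau0
    return np.sum(S ** 2) / (2.0 * m ** 2 * tau ** 2 * S.size)

# White phase noise: Mod sigma_y^2(tau) should fall off roughly as tau^-3.
x = np.random.default_rng(1).standard_normal(100_000) * 1e-9
print([mod_avar(x, m) for m in (1, 2, 4, 8)])
```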

  4. A GPU-Accelerated 3-D Coupled Subsample Estimation Algorithm for Volumetric Breast Strain Elastography.

    Science.gov (United States)

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-04-01

The primary objective of this paper was to extend a previously published 2-D coupled subsample tracking algorithm to 3-D speckle tracking in the framework of ultrasound breast strain elastography. To overcome the heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3-D coupled subsample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking phantom and in vivo breast ultrasound data, and was compared with the conventional 3-D quadratic subsample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3-D subsample estimation algorithm can provide high-quality strain data (i.e., high correlation between the predeformation and the motion-compensated postdeformation radio frequency echo data and high contrast-to-noise ratio strain images), as compared with the conventional 3-D quadratic subsample algorithm. Using the GPU implementation of the 3-D speckle tracking algorithm, volumetric strain data can be obtained relatively fast (approximately 20 s per volume [2.5 cm × 2.5 cm × 2.5 cm]).

  5. Noise variance estimation for Kalman filter

    Science.gov (United States)

    Beniak, Ryszard; Gudzenko, Oleksandr; Pyka, Tomasz

    2017-10-01

In this paper, we propose an algorithm that evaluates noise variance with a numerical integration method. For noise variance estimation, we use the Krogh method with a variable integration step. In line with common practice, we limit our study to a fourth-order method. First, we perform simulation tests for randomly generated signals, related to the transition state and the steady state. Next, we formulate three methodologies (research hypotheses) of noise variance estimation, and then compare their efficiency.
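
    As a point of reference (not the paper's Krogh-based numerical integration approach), the minimal method-of-moments sketch below estimates process and measurement noise variances for a random-walk-plus-noise model from the sample variances of first and second differences; all names and values are illustrative.

```python
import numpy as np

# Model: x_k = x_{k-1} + w_k (var Q), z_k = x_k + e_k (var R), so
# var(z_k - z_{k-1}) = Q + 2R and var(z_k - z_{k-2}) = 2Q + 2R.
rng = np.random.default_rng(0)
true_Q, true_R, n = 0.01, 0.5, 200_000
x = np.cumsum(rng.normal(0.0, np.sqrt(true_Q), n))   # state: random walk
z = x + rng.normal(0.0, np.sqrt(true_R), n)          # noisy measurements

v1 = np.var(np.diff(z))           # estimates Q + 2R
v2 = np.var(z[2:] - z[:-2])       # estimates 2Q + 2R
Q_hat = v2 - v1
R_hat = (v1 - Q_hat) / 2.0
print(Q_hat, R_hat)               # approx. 0.01 and 0.5
```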

  6. Fully Pipelined Parallel Architecture for Candidate Block and Pixel-Subsampling-Based Motion Estimation

    Directory of Open Access Journals (Sweden)

    Reeba Korah

    2008-01-01

This paper presents a low-power and high-speed architecture for motion estimation with the Candidate Block and Pixel Subsampling (CBPS) algorithm. A coarse-to-fine search approach is employed to find the motion vector so that the local minima problem is eliminated. Pixel subsampling is performed in the selected candidate blocks, which significantly reduces computational cost with little quality degradation. The architecture developed is a fully pipelined parallel design with 9 processing elements. Two different methods are deployed to reduce power consumption: parallel and pipelined implementation, and parallel memory access. For processing 30 CIF frames per second, our architecture requires a clock frequency of 4.5 MHz.
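
    For illustration only, the sketch below shows the pixel-subsampling ingredient: a full-search block matcher whose SAD cost is evaluated on a 4:1 subsampled pixel grid. The CBPS candidate-block selection and the coarse-to-fine search of the paper are not reproduced; names and sizes are ours.

```python
import numpy as np

def sad_subsampled(cur, ref, step=2):
    # Sum of absolute differences over every other pixel in each direction.
    return np.abs(cur[::step, ::step] - ref[::step, ::step]).sum()

def block_motion(cur_blk, ref, top, left, search=7):
    """Full search around (top, left) in the reference frame using the
    subsampled SAD cost; returns the best motion vector (dy, dx)."""
    B = cur_blk.shape[0]
    best, mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - B and 0 <= x <= ref.shape[1] - B:
                cost = sad_subsampled(cur_blk, ref[y:y + B, x:x + B])
                if cost < best:
                    best, mv = cost, (dy, dx)
    return mv

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
cur = np.roll(ref, (2, -3), axis=(0, 1))             # known global shift
print(block_motion(cur[16:32, 16:32], ref, 16, 16))  # expect (-2, 3)
```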

  7. Estimating quadratic variation using realized variance

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2002-01-01

    This paper looks at some recent work on estimating quadratic variation using realized variance (RV) - that is, sums of M squared returns. This econometrics has been motivated by the advent of the common availability of high-frequency financial return data. When the underlying process...
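
    A minimal sketch of the realized variance idea under the simplest assumption (constant-volatility Brownian motion): the sum of M squared intraday returns approximates the integrated variance σ²T. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, T, M = 0.2, 1.0, 78 * 252           # e.g. 5-minute returns over a year
returns = rng.normal(0.0, sigma * np.sqrt(T / M), M)
rv = np.sum(returns ** 2)                  # realized variance: sum of M squared returns
print(rv, sigma ** 2 * T)                  # RV approximates the quadratic variation
```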

  8. Comparison of variance estimators for metaanalysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  9. A coupled subsample displacement estimation method for ultrasound-based strain elastography

    Science.gov (United States)

    Jiang, Jingfeng; Hall, Timothy J.

    2015-11-01

    Obtaining accurate displacement estimates along both axial (parallel to the acoustic beam) and lateral (perpendicular to the beam) directions is an important task for several clinical applications such as shear strain imaging, modulus reconstruction and temperature imaging, where a full description of the two or three-dimensional (2D/3D) deformation field is required. In this study we propose an improved speckle tracking algorithm where axial and lateral motion estimations are simultaneously performed to enhance motion tracking accuracy. More specifically, using conventional ultrasound echo data, this algorithm first finds an iso-contour in the vicinity of the peak correlation between two segments of the pre- and post-deformation ultrasound radiofrequency echo data. The algorithm then attempts to find the center of the iso-contour of the correlation function that corresponds to the unknown (sub-sample) motion vector between these two segments of echo data. This algorithm has been tested using computer-simulated data, studies with a tissue-mimicking phantom, and in vivo breast lesion data. Computer simulation results show that the method improves the accuracy of both lateral and axial tracking. Such improvements are more significant when the deformation is small or along the lateral direction. Results from the tissue-mimicking phantom study are consistent with findings observed in computer simulations. Using in vivo breast lesion data we found that, compared to the 2D quadratic subsample displacement estimation methods, higher quality axial strain and shear strain images (e.g. 18.6% improvement in contrast-to-noise ratio for shear strain images) can be obtained for large deformations (up to 5% frame-to-frame and 15% local strains) in a multi-compression technique. Our initial results demonstrated that this conceptually and computationally simple method could improve the image quality of ultrasound-based strain elastography with current clinical equipment.
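
    For context, here is a sketch of the conventional quadratic (parabolic) subsample peak interpolation that the coupled method is compared against, applied independently per axis; the coupled iso-contour method itself is not reproduced, and the correlation surface below is synthetic.

```python
import numpy as np

def quadratic_subsample_peak(corr):
    """corr: 2-D correlation surface; returns the (row, col) peak location
    with independent per-axis parabolic subsample refinement."""
    r, c = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(cm, c0, cp):
        # Vertex of the parabola through (-1, cm), (0, c0), (1, cp).
        den = cm - 2.0 * c0 + cp
        return 0.0 if den == 0 else (cm - cp) / (2.0 * den)

    dr = refine(corr[r - 1, c], corr[r, c], corr[r + 1, c]) if 0 < r < corr.shape[0] - 1 else 0.0
    dc = refine(corr[r, c - 1], corr[r, c], corr[r, c + 1]) if 0 < c < corr.shape[1] - 1 else 0.0
    return r + dr, c + dc

# A synthetic correlation surface with a true peak at (10.3, 7.6);
# the estimate approximately recovers the subsample location.
y, x = np.mgrid[0:21, 0:15]
corr = np.exp(-((y - 10.3) ** 2 + (x - 7.6) ** 2) / 8.0)
print(quadratic_subsample_peak(corr))
```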

  10. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  11. Power Estimation in Multivariate Analysis of Variance

    Directory of Open Access Journals (Sweden)

    Jean François Allaire

    2007-09-01

Power is often overlooked in designing multivariate studies, for the simple reason that it is believed to be too complicated. In this paper, it is shown that power estimation in multivariate analysis of variance (MANOVA) can be approximated using an F distribution for the three popular statistics (Hotelling-Lawley trace, Pillai-Bartlett trace, Wilks's likelihood ratio). Consequently, the same procedure as in any statistical test can be used: computation of the critical F value, computation of the noncentrality parameter (as a function of the effect size), and finally estimation of power using a noncentral F distribution. Various numerical examples are provided which help to understand and to apply the method. Problems related to post hoc power estimation are discussed.
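
    A minimal sketch of the procedure just described, assuming SciPy; df1, df2 and the noncentrality parameter would come from the chosen MANOVA statistic and effect size, and the numbers used here are purely illustrative.

```python
from scipy.stats import f, ncf

alpha, df1, df2, noncentrality = 0.05, 6, 40, 12.0
f_crit = f.ppf(1.0 - alpha, df1, df2)            # critical F value under H0
power = ncf.sf(f_crit, df1, df2, noncentrality)  # P(F' > f_crit) under H1
print(f_crit, power)
```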

  12. An empirical analysis of the precision of estimating the numbers of neurons and glia in human neocortex using a fractionator-design with sub-sampling

    DEFF Research Database (Denmark)

    Lyck, L.; Santamaria, I.D.; Pakkenberg, B.

    2009-01-01

… to be stained and analysed by cell counting was efficiently reduced using a fractionator protocol involving several steps of sub-sampling. Since no mathematical or statistical tools exist to predict the variance originating from repeated sampling in complex structures like the human neocortex, the variance at each level of sampling was determined empirically. The methodology was tested in three brains, analysing the contribution of the multi-step sampling procedure to the precision of the estimated total numbers of immunohistochemically defined NeuN-expressing (NeuN(+)) neurons and CD45(+) microglia. The results showed that it was possible, but not straightforward, to combine immunohistochemistry and the optical fractionator for estimation of specific subpopulations of brain cells in human neocortex. (C) 2009 Elsevier B.V. All rights reserved. Publication date: 2009/9/15.

  13. Further results on variances of local stereological estimators

    DEFF Research Database (Denmark)

    Pawlas, Zbynek; Jensen, Eva B. Vedel

    2006-01-01

In the present paper the statistical properties of local stereological estimators of particle volume are studied. It is shown that the variance of the estimators can be decomposed into the variance due to the local stereological estimation procedure and the variance due to the variability in the particle population. It turns out that these two variance components can be estimated separately, from sectional data. We present further results on the variances that can be used to determine the variance by numerical integration for particular choices of particle shapes.

  14. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Directory of Open Access Journals (Sweden)

    Ashton M Verdery

This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  15. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  16. Using transformation algorithms to estimate (co)variance ...

    African Journals Online (AJOL)

    ... to multiple traits by the use of canonical transformations. A computing strategy is developed for use on large data sets employing two different REML algorithms for the estimation of (co)variance components. Results from a simulation study indicate that (co)variance components can be estimated efficiently at a low cost on ...

  17. Improved variance estimation along sample eigenvectors

    NARCIS (Netherlands)

    Hendrikse, A.J.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

Second order statistics estimates in the form of sample eigenvalues and sample eigenvectors give a sub-optimal description of the population density. So far, only attempts have been made to reduce the bias in the sample eigenvalues. However, because the sample eigenvectors differ from the population …

  18. A Class of Modified Ratio Estimators for Estimation of Population Variance

    Directory of Open Access Journals (Sweden)

    Subramani J.

    2015-05-01

In this paper we propose a class of modified ratio-type variance estimators for estimating the population variance of the study variable using known parameters of an auxiliary variable. The bias and mean squared error of the proposed estimators are obtained, and the conditions under which the proposed estimators perform better than the traditional ratio-type variance estimator and existing modified ratio-type variance estimators are derived. Further, we compare the proposed estimators with the traditional ratio-type variance estimator and existing modified ratio-type variance estimators for certain natural populations.
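
    For orientation, a hedged sketch of the traditional ratio-type variance estimator that this class modifies: the sample variance of the study variable is scaled by the ratio of the known population variance of the auxiliary variable to its sample variance. The data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
N, n = 10_000, 200
x_pop = rng.gamma(4.0, 2.0, N)                 # auxiliary variable, fully known
y_pop = 3.0 * x_pop + rng.normal(0, 4.0, N)    # study variable, observed on the sample only

S2_x = x_pop.var(ddof=1)                       # known population variance of x
idx = rng.choice(N, n, replace=False)          # simple random sample without replacement
s2_y = y_pop[idx].var(ddof=1)
s2_x = x_pop[idx].var(ddof=1)

ratio_est = s2_y * S2_x / s2_x                 # traditional ratio-type variance estimator
print(ratio_est, y_pop.var(ddof=1))            # compare with the true population variance of y
```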

  19. Bounds for the variance of an inverse binomial estimator

    NARCIS (Netherlands)

    A. Sahai; J.M. Buhrman

    1979-01-01

Summary: Best [1] found the variance of the minimum variance unbiased estimator of the parameter p of the negative binomial distribution. Mikulski and Sm [2] gave an upper bound to it, easier to calculate than Best's expression and a good approximation for small values of p and large

  20. Using subsampling to estimate the strength of handwriting evidence via score-based likelihood ratios.

    Science.gov (United States)

    Davis, Linda J; Saunders, Christopher P; Hepler, Amanda; Buscaglia, JoAnn

    2012-03-10

    The likelihood ratio paradigm has been studied as a means for quantifying the strength of evidence for a variety of forensic evidence types. Although the concept of a likelihood ratio as a comparison of the plausibility of evidence under two propositions (or hypotheses) is straightforward, a number of issues arise when one considers how to go about estimating a likelihood ratio. In this paper, we illustrate one possible approach to estimating a likelihood ratio in comparative handwriting analysis. The novelty of our proposed approach relies on generating simulated writing samples from a collection of writing samples from a known source to form a database for estimating the distribution associated with the numerator of a likelihood ratio. We illustrate this approach using documents collected from 432 writers under controlled conditions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
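
    A conceptual sketch of a score-based likelihood ratio, assuming kernel density estimates of a similarity score under the two propositions; the scores below are simulated stand-ins for the paper's writer-comparison scores, and the numerator database would in practice come from writing samples generated from the known source.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
same_scores = rng.normal(0.8, 0.10, 500)    # scores for same-source comparisons
diff_scores = rng.normal(0.4, 0.15, 500)    # scores for different-source comparisons

f_same = gaussian_kde(same_scores)          # density under the prosecution proposition
f_diff = gaussian_kde(diff_scores)          # density under the defense proposition

observed = 0.7
slr = f_same(observed)[0] / f_diff(observed)[0]
print(slr)    # SLR > 1 supports the same-source proposition
```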

  1. On-Line Estimation of Allan Variance Parameters

    National Research Council Canada - National Science Library

    Ford, J

    1999-01-01

    ... (Inertial Measurement Unit) gyros and accelerometers. The on-line method proposes a state space model and proposes parameter estimators for quantities previously measured from off-line data techniques such as the Allan variance graph...

  2. Genetic heterogeneity of residual variance - estimation of variance components using double hierarchical generalized linear models

    Directory of Open Access Journals (Sweden)

    Fikse Freddy

    2010-03-01

Background: The sensitivity to microenvironmental changes varies among animals and may be under genetic control. It is essential to take this element into account when aiming at breeding robust farm animals. Here, linear mixed models with genetic effects in the residual variance part of the model can be used. Such models have previously been fitted using EM and MCMC algorithms. Results: We propose the use of double hierarchical generalized linear models (DHGLM), where the squared residuals are assumed to be gamma distributed and the residual variance is fitted using a generalized linear model. The algorithm iterates between two sets of mixed model equations, one on the level of observations and one on the level of variances. The method was validated using simulations and also by re-analyzing a data set on pig litter size that was previously analyzed using a Bayesian approach. The pig litter size data contained 10,060 records from 4,149 sows. The DHGLM was implemented using the ASReml software and the algorithm converged within three minutes on a Linux server. The estimates were similar to those previously obtained using Bayesian methodology, especially the variance components in the residual variance part of the model. Conclusions: We have shown that variance components in the residual variance part of a linear mixed model can be estimated using a DHGLM approach. The method enables analyses of animal models with large numbers of observations. An important future development of the DHGLM methodology is to include the genetic correlation between the random effects in the mean and residual variance parts of the model as a parameter of the DHGLM.

  3. Estimation of the additive and dominance variances in South African ...

    African Journals Online (AJOL)

Estimates of additive genetic variance were 0.669, 43.46 d² and 9.02 kg² for NBA, FI and LWT21, respectively. Corresponding estimates of dominance variance were 0.439, 123.68 d² and 2.52 kg², respectively. Dominance effects were important for NBA and FI. Permanent environmental effects were significant for FI and ...

  4. Variance and covariance estimates for weaning weight of Senepol cattle.

    Science.gov (United States)

    Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S

    1991-10-01

Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (σ²A), maternal additive genetic variance (σ²M), covariance between direct and maternal additive genetic effects (σAM), permanent maternal environmental variance (σ²PE), and residual variance (σ²ε) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A⁻¹), to their expectations. Estimates were σ²A, 139.05 and 138.14 kg²; σ²M, 307.04 and 288.90 kg²; σAM, -117.57 and -103.76 kg²; σ²PE, -258.35 and -243.40 kg²; and σ²ε, 588.18 and 577.72 kg² with and without A⁻¹, respectively. Heritability estimates for direct additive effects (h²A) were .211 and .210 with and without A⁻¹, respectively. Heritability estimates for maternal additive effects (h²M) were .47 and .44 with and without A⁻¹, respectively. Correlations between direct and maternal (rAM) effects were -.57 and -.52 with and without A⁻¹, respectively.

  5. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  6. Estimation of the additive and dominance variances in SA Landrace ...

    African Journals Online (AJOL)

    NORRIS


  7. Variance component and heritability estimates of early growth traits ...

    African Journals Online (AJOL)

Restricted Maximum Likelihood (REML) procedures fitting three different models. Estimates were severely biased ... estimates for direct additive variance and heritability (h²) when fitted simultaneously in an animal model. The genetic ..... demanding than the sire model with respect to CPU time used. For BW, 200 iterations ...

  8. Application of variance components estimation to calibrate geoid error models.

    Science.gov (United States)

    Guo, Dong-Mei; Xu, Hou-Ze

    2015-01-01

The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem has been presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models in the adjustment, which in turn calls for improving the stochastic models of the measurement noises. Therefore, determining the stochastic model of the observables in a combined adjustment with heterogeneous height types is the main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in combined adjustment for calibrating the geoid error model.

  9. Asymptotic variance of grey-scale surface area estimators

    DEFF Research Database (Denmark)

    Svane, Anne Marie

Grey-scale local algorithms have been suggested as a fast way of estimating surface area from grey-scale digital images. Their asymptotic mean has already been described. In this paper, the asymptotic behaviour of the variance is studied in isotropic and sufficiently smooth settings, resulting in a general asymptotic bound. For compact convex sets with nowhere vanishing Gaussian curvature, the asymptotics can be described more explicitly. As in the case of volume estimators, the variance is decomposed into a lattice sum and an oscillating term of at most the same magnitude.

  10. Estimating Additive and Dominance Variance for Litter Traits in ...

    African Journals Online (AJOL)

Reproductive and growth records of 82 purebred California white kits were used to estimate additive and dominance genetic variances using BULPF90PC-PACK. ... The first model included fixed effects and random effects identifying inbreeding depression, additive gene effect and permanent environmental effects.

11. Estimating Additive and Dominance Variance for Litter Traits in ...

    African Journals Online (AJOL)

additive and dominance genetic variances using BULPF90PC-PACK. Estimates ... inbreeding depression, additive gene effect and permanent ... litter-bearing species with a large expression of dominance relationships and possibly useful magnitude of dominance effects in reproductive traits, genetic evaluation in rabbits ...

  12. Variance targeting estimation of the BEKK-X model

    OpenAIRE

    Thieu, Le Quyen

    2016-01-01

This paper studies the BEKK model with exogenous variables (BEKK-X), which aims to take into account the influence of explanatory variables on the conditional covariance of asset returns. Strong consistency and asymptotic normality of the variance targeting estimator (VTE) are proved. Monte Carlo experiments and an application to financial series illustrate the asymptotic results.

  13. Variance component estimation on female fertility traits in beef cattle

    African Journals Online (AJOL)

    Unknown

Review article. T. Rust (ARC Animal Improvement Institute, Irene 0062, South Africa) and E. Groeneveld (Institute of Animal Husbandry and ...)

  14. Wavelet-Variance-Based Estimation for Composite Stochastic Processes.

    Science.gov (United States)

    Guerrier, Stéphane; Skaloud, Jan; Stebler, Yannick; Victoria-Feser, Maria-Pia

    2013-09-01

This article presents a new estimation method for the parameters of a time series model. We consider composite Gaussian processes that are sums of independent Gaussian processes which, in turn, explain important aspects of the time series, as is the case in engineering and the natural sciences. The proposed estimation method offers an alternative to classical likelihood-based estimation: it is straightforward to implement and often the only feasible estimation method for complex models. The estimator is obtained by optimizing a criterion based on a standardized distance between the sample wavelet variances (WV) and the model-based WV. Indeed, the WV provides a decomposition of the process variance across scales, so it carries information about different features of the stochastic model. We derive the asymptotic properties of the proposed estimator for inference and perform a simulation study comparing our estimator to the MLE and the LSE under different models. We also give sufficient conditions on composite models for our estimator to be consistent, conditions that are easy to verify. We use the new estimator to estimate the stochastic error parameters of the sum of three first-order Gauss-Markov processes from a sample of over 800,000 observations issued from gyroscopes that compose inertial navigation systems. Supplementary materials for this article are available online.
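
    A toy sketch of the wavelet-variance matching idea under a deliberately simple model (white noise with one unknown variance); the Haar-type coefficient is defined explicitly so that its model WV is 2σ²/τ, and the paper's composite models and asymptotic theory are not reproduced.

```python
import numpy as np

def haar_wv(x, tau):
    # Difference of adjacent non-overlapping means of length tau;
    # for white noise with variance s2, var(w) = 2*s2/tau.
    m = np.convolve(x, np.ones(tau) / tau, mode="valid")
    w = m[tau:] - m[:-tau]
    return w.var()

rng = np.random.default_rng(11)
x = rng.normal(0.0, 2.0, 100_000)             # white noise, s2 = 4
taus = np.array([1, 2, 4, 8, 16, 32])
wv = np.array([haar_wv(x, t) for t in taus])  # sample WV across dyadic scales

# Least-squares fit of the single parameter s2 in WV(tau) = 2*s2/tau.
g = 2.0 / taus
s2_hat = (wv @ g) / (g @ g)
print(s2_hat)                                 # approx. 4
```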

  15. Deviation of the Variances of Classical Estimators and Negative Integer Moment Estimator from Minimum Variance Bound with Reference to Maxwell Distribution

    Directory of Open Access Journals (Sweden)

    G. R. Pasha

    2006-07-01

In this paper, we show how much the variances of the classical estimators, namely the maximum likelihood estimator and the moment estimator, deviate from the minimum variance bound when estimating the parameter of the Maxwell distribution. We also sketch this difference for the negative integer moment estimator. We note the poor performance of the negative integer moment estimator in this regard, while the maximum likelihood estimator attains the minimum variance bound and becomes an attractive choice.

  16. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a squared coefficient of variation (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CVt² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans, and may be useful for similar statistical problems in experimental data.
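
    A minimal simulation sketch of the estimation idea, assuming Gaussian noise whose variance scales as 1/n after averaging n replicates: regress the observed CV² on 1/n and read the noise-free CVt² off the intercept. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
true_signal = rng.gamma(5.0, 1.0, 2000)              # spatial heterogeneity
cv2_true = true_signal.var() / true_signal.mean() ** 2

ns = np.array([1, 2, 4, 8, 16])
cv2_obs = []
for n in ns:
    # Average of n noisy measurements: the noise variance scales as 1/n.
    img = true_signal + rng.normal(0, 2.0, (n, true_signal.size)).mean(0)
    cv2_obs.append(img.var() / img.mean() ** 2)

# Linear fit CV^2 = a * (1/n) + CVt^2; the intercept estimates the
# noise-free heterogeneity.
slope, intercept = np.polyfit(1.0 / ns, cv2_obs, 1)
print(intercept, cv2_true)
```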

  17. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, namely the bounded relative error property. Some examples illustrate the results.

  18. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    Science.gov (United States)

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  19. Variance component estimates for alternative litter size traits in swine.

    Science.gov (United States)

    Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T

    2015-11-01

Litter size at d 5 (LS5) has been shown to be an effective trait to increase total number born (TNB) while simultaneously decreasing preweaning mortality. The objective of this study was to determine the optimal litter size day for selection (i.e., other than d 5). Traits included TNB, number born alive (NBA), litter size at d 2, 5, 10, 30 (LS2, LS5, LS10, LS30, respectively), litter size at weaning (LSW), number weaned (NW), piglet mortality at d 30 (MortD30), and average piglet birth weight (BirthWt). Litter size traits were assigned to biological litters and treated as a trait of the sow. In contrast, NW was the number of piglets weaned by the nurse dam. Bivariate animal models included farm, year-season, and parity as fixed effects. Number born alive was fit as a covariate for BirthWt. Random effects included additive genetics and the permanent environment of the sow. Variance components were plotted for TNB, NBA, and LS2 to LS30 using univariate animal models to determine how variances changed over time. Additive genetic variance was minimized at d 7 in Large White and at d 14 in Landrace pigs. Total phenotypic variance for litter size traits decreased over the first 10 d and then stabilized. Heritability estimates increased between TNB and LS30. Genetic correlations between TNB, NBA, and LS2 to LS29 with LS30 plateaued within the first 10 d. A genetic correlation with LS30 of 0.95 was reached at d 4 for Large White and at d 8 for Landrace pigs. Heritability estimates ranged from 0.07 to 0.13 for litter size traits and MortD30. Birth weight had an h² of 0.24 and 0.26 for Large White and Landrace pigs, respectively. Genetic correlations among LS30, LSW, and NW ranged from 0.97 to 1.00. In the Large White breed, genetic correlations between MortD30 with TNB and LS30 were 0.23 and -0.64, respectively. These correlations were 0.10 and -0.61 in the Landrace breed. A high genetic correlation of 0.98 and 0.97 was observed between LS10 and NW for Large White and

  20. Estimation models of variance components for farrowing interval in swine

    Directory of Open Access Journals (Sweden)

    Aderbal Cavalcante Neto

    2009-02-01

The main objective of this study was to evaluate the importance of including maternal genetic, common litter environmental and permanent environmental effects in estimation models of variance components for the farrowing interval trait in swine. Data consisting of 1,013 farrowing intervals of Dalland (C-40) sows recorded in two herds were analyzed. Variance components were obtained by the derivative-free restricted maximum likelihood method. Eight models were tested which contained the fixed effects (contemporary group and covariables) and the direct genetic additive and residual effects, and varied regarding the inclusion of the maternal genetic, common litter environmental, and/or permanent environmental random effects. The likelihood-ratio test indicated that the inclusion of these effects in the model was unnecessary, but the inclusion of the permanent environmental effect caused changes in the estimates of heritability, which varied from 0.00 to 0.03. In conclusion, the heritability values obtained indicated that this trait appears to present no genetic gain as response to selection. The common litter environmental and the maternal genetic effects did not present any influence on this trait. The permanent environmental effect, however, should be considered in the genetic models for this trait in swine, because its presence caused changes in the additive genetic variance estimates.

  1. Direct and maternal variance component estimates for clean fleece ...

    African Journals Online (AJOL)

... -3.17 and -2.47 for LG, SVG and GVD, respectively. It is concluded that the maternal component can be ignored for these traits because of its relatively small effect. Keywords: Merino sheep, maternal variance, variance components. *To whom correspondence should be addressed.

  2. LOCAL CASE-CONTROL SAMPLING: EFFICIENT SUBSAMPLING IN IMBALANCED DATA SETS.

    Science.gov (United States)

    Fithian, William; Hastie, Trevor

    2014-10-01

For classification problems with significant class imbalance, subsampling can reduce computational costs at the price of inflated variance in estimating model parameters. We propose a method for subsampling efficiently for logistic regression by adjusting the class balance locally in feature space via an accept-reject scheme. Our method generalizes standard case-control sampling, using a pilot estimate to preferentially select examples whose responses are conditionally rare given their features. The biased subsampling is corrected by a post-hoc analytic adjustment to the parameters. The method is simple and requires one parallelizable scan over the full data set. Standard case-control sampling is inconsistent under model misspecification for the population risk-minimizing coefficients θ*. By contrast, our estimator is consistent for θ* provided that the pilot estimate is. Moreover, under correct specification and with a consistent, independent pilot estimate, our estimator has exactly twice the asymptotic variance of the full-sample MLE, even if the selected subsample comprises a minuscule fraction of the full data set, as happens when the original data are severely imbalanced. The factor of two improves to [Formula: see text] if we multiply the baseline acceptance probabilities by c > 1 (and weight points with acceptance probability greater than 1), taking roughly [Formula: see text] times as many data points into the subsample. Experiments on simulated and real data show that our method can substantially outperform standard case-control subsampling.
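
    A hedged sketch of the scheme described above (binary logistic regression, scikit-learn, illustrative sizes): accept (x, y) with probability |y − p̃(x)| under a pilot p̃, fit on the subsample, then apply the post-hoc correction, which for a logistic pilot on the same features amounts to adding back the pilot coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n, theta_true = 200_000, np.array([-6.0, 1.5])      # severe class imbalance
X = rng.normal(0, 1, (n, 1))
p = 1.0 / (1.0 + np.exp(-(theta_true[0] + theta_true[1] * X[:, 0])))
y = rng.random(n) < p

# Pilot estimate from a small uniform subsample (C=1e6: essentially no penalty).
pilot = LogisticRegression(C=1e6).fit(X[:2000], y[:2000])
p_tilde = pilot.predict_proba(X)[:, 1]

# Accept-reject: points whose response is rare given their features
# (y=1 where p_tilde is small, y=0 where it is large) are kept preferentially.
accept = rng.random(n) < np.abs(y - p_tilde)
print("subsample fraction:", accept.mean())

# Fit on the subsample; the subsampled data follow a logistic model with
# parameter theta* - theta_pilot, so adding the pilot corrects the fit.
sub = LogisticRegression(C=1e6).fit(X[accept], y[accept])
theta_hat = np.array([sub.intercept_[0] + pilot.intercept_[0],
                      sub.coef_[0, 0] + pilot.coef_[0, 0]])
print(theta_hat)     # approximately recovers theta_true
```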

  3. Spot Variance Path Estimation and its Application to High Frequency Jump Testing

    NARCIS (Netherlands)

    Bos, C.S.; Janus, P.; Koopman, S.J.

    2012-01-01

    This paper considers spot variance path estimation from datasets of intraday high-frequency asset prices in the presence of diurnal variance patterns, jumps, leverage effects, and microstructure noise. We rely on parametric and nonparametric methods. The estimated spot variance path can be used to

  4. Genetic divergence of tomato subsamples

    Directory of Open Access Journals (Sweden)

    André Pugnal Mattedi

    2014-02-01

Understanding the genetic variability of a species is crucial for the progress of a genetic breeding program and requires characterization and evaluation of germplasm. This study aimed to characterize and evaluate 101 tomato subsamples of the Salad group (fresh market) and two commercial controls, one of the Salad group (cv. Fanny) and another of the Santa Cruz group (cv. Santa Clara). Four experiments were conducted in a randomized block design with three replications and five plants per plot. The joint analysis of variance was performed, and characteristics with significant complex interaction between control and experiment were excluded. Subsequently, the multicollinearity diagnostic test was carried out, and characteristics that contributed to severe multicollinearity were excluded. The relative importance of each characteristic for genetic divergence was calculated by Singh's method (Singh, 1981), and the less important ones were excluded according to Garcia (1998). Results showed large genetic divergence among the subsamples for morphological, agronomic and organoleptic characteristics, indicating potential for genetic improvement. The characteristics total soluble solids, mean number of good fruits per plant, endocarp thickness, mean mass of marketable fruit per plant, total acidity, mean number of unmarketable fruit per plant, internode diameter, internode length, main stem thickness and leaf width contributed little to the genetic divergence between the subsamples and may be excluded in future studies.

  5. Bias-Variance Tradeoffs in Recombination Rate Estimation.

    Science.gov (United States)

    Stone, Eric A; Singh, Nadia D

    2016-02-01

In 2013, we and coauthors published a paper characterizing rates of recombination within the 2.1-megabase garnet-scalloped (g-sd) region of the Drosophila melanogaster X chromosome. To extract the signal of recombination in our high-throughput sequence data, we adopted a nonparametric smoothing procedure, reducing variance at the cost of biasing individual recombination rates. In doing so, we sacrificed accuracy to gain precision, and that precision allowed us to detect recombination rate heterogeneity. Negotiating the bias-variance tradeoff enabled us to resolve significant variation in the frequency of crossing over across the garnet-scalloped region. Copyright © 2016 by the Genetics Society of America.

  6. Variance component and heritability estimates for growth traits in the ...

    African Journals Online (AJOL)


  7. Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study

    Science.gov (United States)

    Suero, Manuel; Privado, Jesús; Botella, Juan

    2017-01-01

A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the parameters d' and C of signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, d' and c. Those methods have been mostly assessed by…

  8. Minimum Variance Estimation of Yield Parameters of Rubber Tree ...

    African Journals Online (AJOL)

Although growth and yield data are available in rubber plantations in Nigeria for aggregate rubber production planning, existing models poorly estimate the yield per rubber tree for the incoming year. The Kalman filter, a flexible statistical estimator, is used to combine the inexact prediction of the rubber production with an equally ...

  9. Estimation of finite population variance using auxiliary information in sample surveys

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2014-12-01

This paper addresses the problem of estimating the finite population variance using auxiliary information in sample surveys. Motivated by Singh and Vishwakarma (2009), some estimators of finite population variance have been suggested, along with their properties under simple random sampling. The theoretical conditions under which the proposed estimators are more efficient than the usual unbiased, usual ratio and Singh et al. (2009) estimators have been obtained. Numerical illustrations are given in support of the present study.

  10. Variance component estimation of a female fertility trait in two ...

    African Journals Online (AJOL)

    Introduction. In current national genetic evaluations in Southern Africa, estimated breeding values (EBVs) for growth traits are reported without any indication of the reproductive ability of the animals. This void could lead to the assumption that differences between animals in genetic merit for reproduction and fitness traits.

  11. Estimates of variance components for postweaning feed intake and ...

    African Journals Online (AJOL)

    The objective of this work was to evaluate alternative measures of feed efficiency for use in genetic evaluation. To meet this objective, genetic parameters were estimated for the components of efficiency. These parameters were then used in multiple-trait animal model genetic evaluations and alternative genetic predictors of ...

  12. Estimates of variance components for postweaning feed intake and ...

    African Journals Online (AJOL)

    2013-03-09

Mar 9, 2013 ... evaluate alternative measures of feed efficiency for use in genetic evaluation. To meet this objective, genetic parameters were estimated for the components of efficiency. These parameters were then used in multiple-trait animal model genetic evaluations and alternative genetic predictors of feed efficiency ...

  13. Direct and maternal variance component estimates for clean fleece ...

    African Journals Online (AJOL)

fleece weight (CFW), body weight (BW) and mean fibre diameter (MFD) in the Grootfontein Merino stud. Direct heritabilities were estimated as 0.381, ... South African Merino stud as far as management and selection procedures (until 1984) are concerned. ... ance and Progeny Testing Scheme: Body weight (BW) - taken ...

  14. minimum variance estimation of yield parameters of rubber tree

    African Journals Online (AJOL)

    2013-03-01

Mar 1, 2013 ... raw material supply chain and hence the marketing plan. The result matches the wet and dry spells of southern Nigeria and their effect on rubber tree latex production. The seasonal value by quarter is estimated for each clone to be used for appropriate quarterly adjustment in the aggregate rubber production ...

  15. Estimating bias and variances in bootstrap logistic regression for Umaru and impact data

    Science.gov (United States)

    Fitrianto, Anwar; Cing, Ng Mei

    2014-12-01

We employed the random-x bootstrap in a binary logistic regression model and investigated the effect of sample size and number of bootstrap replications on the bias and variance. The performance of the estimated coefficients is measured based on the bias, variance, and confidence interval of the bootstrap estimates. In addition, we also focus on the length of the confidence interval of the bootstrap estimates. We found that bias and variance decrease for larger sample sizes. We noticed that the length of the confidence intervals decreases as the sample size and the number of bootstrap replications grow. The results show that the estimated coefficients become more precise as the sample size increases.
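
    A minimal sketch of the random-x (pairs) bootstrap for a logistic regression coefficient, assuming scikit-learn; the data, sample size and replication count below are illustrative stand-ins for the Umaru and IMPACT data of the record.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n, B = 300, 500
X = rng.normal(0, 1, (n, 1))
y = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * X[:, 0])))

fit_full = LogisticRegression(C=1e6).fit(X, y)   # essentially unpenalized fit
beta_full = fit_full.coef_[0, 0]

boot = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, n)                    # random-x resampling of (x, y) pairs
    boot[b] = LogisticRegression(C=1e6).fit(X[i], y[i]).coef_[0, 0]

bias = boot.mean() - beta_full                   # bootstrap bias estimate
var = boot.var(ddof=1)                           # bootstrap variance estimate
ci = np.percentile(boot, [2.5, 97.5])            # percentile confidence interval
print(bias, var, ci)
```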

  16. Minimum Variance Signal Selection for Aorta Radius Estimation Using Radar

    Directory of Open Access Journals (Sweden)

    Hamran Svein-Erik

    2010-01-01

This paper studies the optimum signal choice for the estimation of the aortic blood pressure via aorta radius, using a monostatic radar configuration. The method involves developing the Cramér-Rao lower bound (CRLB) for a simplified model. The CRLBs for the model parameters are compared with simulation results using a grid-based approach for estimation. The CRLBs are within the 99% confidence intervals for all chosen parameter values. The CRLBs show an optimal region within an ellipsoid centered at 1 GHz center frequency and 1.25 GHz bandwidth with axes of 0.5 GHz and 1 GHz, respectively. Calculations show that the emitted signal energy to received noise spectral density should exceed [InlineEquation not available: see fulltext.] for a precision of approximately 0.1 mm for a large range of model parameters. This implies a minimum average power of 0.4 [InlineEquation not available: see fulltext.]. These values are based on optimistic assumptions. Reflections, an improved propagation model, true receiver noise, and parameter ranges should be considered in a practical implementation.

  17. Minimum Variance Signal Selection for Aorta Radius Estimation Using Radar

    Science.gov (United States)

    Solberg, LarsErik; Hamran, Svein-Erik; Berger, Tor; Balasingham, Ilangko

    2010-12-01

This paper studies the optimum signal choice for the estimation of the aortic blood pressure via aorta radius, using a monostatic radar configuration. The method involves developing the Cramér-Rao lower bound (CRLB) for a simplified model. The CRLBs for the model parameters are compared with simulation results using a grid-based approach for estimation. The CRLBs are within the 99% confidence intervals for all chosen parameter values. The CRLBs show an optimal region within an ellipsoid centered at 1 GHz center frequency and 1.25 GHz bandwidth with axes of 0.5 GHz and 1 GHz, respectively. Calculations show that the emitted signal energy to received noise spectral density should exceed [InlineEquation not available: see fulltext.] for a precision of approximately 0.1 mm for a large range of model parameters. This implies a minimum average power of 0.4 [InlineEquation not available: see fulltext.]. These values are based on optimistic assumptions. Reflections, an improved propagation model, true receiver noise, and parameter ranges should be considered in a practical implementation.

  18. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    Science.gov (United States)

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.

  19. Subsampling for dataset optimisation

    Science.gov (United States)

    Ließ, Mareike

    2017-04-01

Soil-landscapes have formed through the interaction of soil-forming factors and pedogenic processes. Modelling these landscapes, their pedodiversity and the underlying processes requires a representative, unbiased dataset. This concerns model input as well as output data. However, the available datasets are often big, highly heterogeneous, and gathered for various purposes, but not for modelling a particular process or data space. As a first step, the overall data space and/or landscape section to be modelled needs to be identified, including considerations regarding scale and resolution. Then the available dataset needs to be optimised via subsampling to represent this n-dimensional data space well. A couple of well-known sampling designs may be adapted to suit this purpose. The overall approach follows three main strategies: (1) the data space may be condensed and de-correlated by a factor analysis to facilitate the subsampling process. (2) Different methods of pattern recognition serve to structure the n-dimensional data space to be modelled into units which then form the basis for the optimisation of an existing dataset through a sensible selection of samples (a sketch of this strategy is given below). Along the way, data units for which there is currently insufficient soil data available may be identified. And (3) random samples from the n-dimensional data space may be replaced by similar samples from the available dataset. While being a prerequisite for developing data-driven statistical models, this approach may also help to develop universal process models and identify limitations in existing models.
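
    An illustrative sketch of strategy (2) under stated assumptions (scikit-learn k-means, synthetic data): structure the data space into k units and keep, per unit, the available sample closest to its centre, so the subsample covers the n-dimensional data space evenly. The factor-analysis and sample-replacement strategies are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(10)
data = rng.normal(0, 1, (5000, 6))            # large heterogeneous dataset
k = 100                                       # target subsample size

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
subsample_idx = []
for j in range(k):
    members = np.where(km.labels_ == j)[0]
    d = np.linalg.norm(data[members] - km.cluster_centers_[j], axis=1)
    subsample_idx.append(members[np.argmin(d)])   # closest sample to each centroid

subsample = data[subsample_idx]
print(subsample.shape)                        # (100, 6), spread over the data space
```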

  20. [Variance estimation considering multistage sampling design in multistage complex sample analysis].

    Science.gov (United States)

    Li, Yichong; Zhao, Yinjun; Wang, Limin; Zhang, Mei; Zhou, Maigeng

    2016-03-01

Multistage sampling is a frequently used method in random sampling surveys in public health. Clustering or dependence between observations often exists in samples generated by multistage sampling, which are therefore called complex samples. Sampling error may be underestimated and the probability of type I error may be increased if the multistage sample design is not taken into consideration in the analysis. As the variance (error) estimator for a complex sample is often complicated, statistical software usually adopts the ultimate cluster variance estimate (UCVE) to approximate it, which simply assumes that the sample comes from one-stage sampling. However, with an increased sampling fraction of primary sampling units, the contribution from subsequent sampling stages is no longer trivial, and the ultimate cluster variance estimate may therefore lead to invalid variance estimation. This paper summarizes a method of variance estimation that takes the multistage sampling design into account. Its performance is compared with that of UCVE by simulating random sampling under different sampling schemes using real-world data. Simulation showed that as the primary sampling unit (PSU) sampling fraction increased, UCVE tended to generate increasingly biased estimates, whereas accurate estimates were obtained by using the method that considers the multistage sampling design.
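
    A minimal sketch of the ultimate cluster variance estimate (UCVE) discussed above, under the usual with-replacement approximation: only the weighted PSU totals per stratum enter the formula. Names and numbers are illustrative.

```python
import numpy as np

def ucve(psu_totals_by_stratum):
    """psu_totals_by_stratum: list of 1-D arrays, one per stratum, holding
    the weighted totals z_hi of each sampled PSU. Returns v(Y_hat) under
    the one-stage, with-replacement approximation:
    v = sum_h n_h/(n_h - 1) * sum_i (z_hi - zbar_h)^2."""
    v = 0.0
    for z in psu_totals_by_stratum:
        z = np.asarray(z, float)
        n_h = z.size
        v += n_h / (n_h - 1.0) * np.sum((z - z.mean()) ** 2)
    return v

# Two strata with 3 and 4 sampled PSUs (illustrative weighted totals).
print(ucve([[120.0, 90.0, 150.0], [60.0, 75.0, 80.0, 55.0]]))
```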

  1. Channel Impulse Response Length and Noise Variance Estimation for OFDM Systems with Adaptive Guard Interval

    Directory of Open Access Journals (Sweden)

    Gelle Guillaume; Van Duc Nguyen

    2007-01-01

    Full Text Available A new algorithm estimating channel impulse response (CIR) length and noise variance for orthogonal frequency-division multiplexing (OFDM) systems with adaptive guard interval (GI) length is proposed. To estimate the CIR length and the noise variance, the different statistical characteristics of the additive noise and the mobile radio channels are exploited. This difference is due to the fact that the variance of the channel coefficients depends on the position within the CIR, whereas the noise variance of each estimated channel tap is equal. Moreover, the channel can vary rapidly, but its length changes more slowly than its coefficients. An auxiliary function is established to distinguish these characteristics. The CIR length and the noise variance are estimated by varying the parameters of this function. The proposed method provides reliable estimates of the CIR length and the noise variance even at a signal-to-noise ratio (SNR) of 0 dB. This information can be applied to an OFDM system with adaptive GI length, where the length of the GI is adapted to the current length of the CIR. The length of the GI can therefore be optimized and, consequently, the spectral efficiency of the system is increased.

  2. Variance estimation of allele-based odds ratio in the absence of Hardy-Weinberg equilibrium.

    Science.gov (United States)

    Zintzaras, Elias

    2008-01-01

    In gene-disease association studies, deviation from Hardy-Weinberg equilibrium in controls may bias allele-based estimates of genetic effects. An approach to adjust the variance of the allele-based odds ratio for deviation from Hardy-Weinberg equilibrium is proposed. Such adjustments have been introduced for estimating relative risks of genotype contrasts and differences in allele frequency; however, an adjustment of the odds ratio for allele frequencies has been lacking. The approach is based on the delta method in combination with Woolf's logit interval method and the disequilibrium coefficient. The proposed variance adjustment provided better power than the unadjusted variance to detect significant odds ratio estimates, and it improved the variance estimation.
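    For context, the unadjusted quantity being corrected is Woolf's logit-method variance of the log odds ratio computed from allele counts. A minimal sketch of that baseline (the paper's delta-method adjustment with the disequilibrium coefficient is not reproduced here):

    ```python
    import numpy as np

    def allele_odds_ratio(a, b, c, d):
        """Allele-based odds ratio with Woolf's (unadjusted) logit variance.

        a, b: counts of the risk and reference alleles in cases;
        c, d: the same counts in controls.
        var(ln OR) = 1/a + 1/b + 1/c + 1/d.
        """
        log_or = np.log((a * d) / (b * c))
        var_log_or = 1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d
        return np.exp(log_or), var_log_or
    ```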

  3. Concerns about a variance approach to X-ray diffractometric estimation of microfibril angle in wood

    Science.gov (United States)

    Steve P. Verrill; David E. Kretschmann; Victoria L. Herian; Michael C. Wiemann; Harry A. Alden

    2011-01-01

    In this article, we raise three technical concerns about Evans’ 1999 Appita Journal “variance approach” to estimating microfibril angle (MFA). The first concern is associated with the approximation of the variance of an X-ray intensity half-profile by a function of the MFA and the natural variability of the MFA. The second concern is associated with the approximation...

  4. The effect of some estimators of between-study variance on random effects meta-analysis

    African Journals Online (AJOL)

    Samson Henry Dogo. Abstract: There are different methods for estimating the between-study variance, τ², in meta-analysis; however, each of the methods differs in terms of precision and bias in estimation. Consequently, each of the ...
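    One standard member of this family of estimators is the DerSimonian-Laird moment estimator of τ². A minimal sketch, assuming effect estimates y with within-study variances v:

    ```python
    import numpy as np

    def tau2_dersimonian_laird(y, v):
        """DerSimonian-Laird moment estimator of the between-study variance.

        y: study effect estimates; v: their within-study variances.
        Q is Cochran's heterogeneity statistic; the estimate is truncated
        at zero, one source of the bias differences between estimators.
        """
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1.0 / v
        ybar = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - ybar) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (Q - (len(y) - 1)) / c)
    ```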

  5. Robust Variance Estimation in Meta-Regression with Binary Dependent Effects

    Science.gov (United States)

    Tipton, Elizabeth

    2013-01-01

    Dependent effect size estimates are a common problem in meta-analysis. Recently, a robust variance estimation method was introduced that can be used whenever effect sizes in a meta-analysis are not independent. This problem arises, for example, when effect sizes are nested or when multiple measures are collected on the same individuals. In this…

  6. A note on the variance of the estimate of the fixation index F

    Indian Academy of Sciences (India)

    In the two-allele case, the formulas for the estimated variances of the allelic frequency p = 1−q and the fixation index (average inbreeding coefficient) F are known in the specialized literature of statistical genetics. Besides presenting here an alternative manner to estimate the variance of both parameters, we also derive a very ...

  7. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    Science.gov (United States)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important for the processing of all kinds of signals when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is strongly affected by fluctuations in the noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the finest scale, which takes both efficiency and accuracy into account. Building on this noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench-test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation performs well on the test signals of the electro-mechanical transmission system: it effectively eliminates the interference of transient signals, including voltage, current, and oil pressure, while favorably preserving the dynamic characteristics of the signals.
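    The baseline such refinements build on is the classical median-absolute-deviation estimate of the noise standard deviation from the finest-scale wavelet detail coefficients. A minimal sketch assuming the PyWavelets package (the paper's two-state Gaussian-mixture classification is not reproduced):

    ```python
    import numpy as np
    import pywt  # PyWavelets, an assumed dependency

    def noise_sigma_mad(x, wavelet="db4"):
        """Donoho-Johnstone noise estimate from the finest-scale detail
        coefficients: sigma = median(|d1|) / 0.6745."""
        _, d1 = pywt.dwt(x, wavelet)
        return np.median(np.abs(d1)) / 0.6745

    def universal_threshold(x, wavelet="db4"):
        """Universal threshold sigma * sqrt(2 ln N) built on that estimate."""
        return noise_sigma_mad(x, wavelet) * np.sqrt(2.0 * np.log(len(x)))
    ```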

  8. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
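    A toy example of why the IS estimation variance matters: estimating a rare tail probability with a mean-shifted biasing density, where the shift plays the role of the IS parameter whose choice such bounds are meant to guide. A sketch with assumed parameter values:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a, n = 4.0, 100_000                 # tail event {X > a}, X ~ N(0, 1)

    # direct Monte Carlo: almost no hits, huge relative variance
    x = rng.standard_normal(n)
    p_mc = np.mean(x > a)

    # importance sampling: bias the density to N(a, 1) and reweight by
    # the likelihood ratio; the shift a is the "IS parameter" to choose
    y = rng.normal(a, 1.0, n)
    w = stats.norm.pdf(y) / stats.norm.pdf(y, loc=a)
    h = (y > a) * w
    p_is = h.mean()
    var_is = h.var(ddof=1) / n          # estimation variance of p_is
    ```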

  9. Online estimation of Allan variance coefficients based on a neural-extended Kalman filter.

    Science.gov (United States)

    Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao

    2015-01-23

    As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis of an Allan variance graph. Although existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and can even introduce errors when modeling the dynamic Allan variance. To solve these problems, a new nonlinear state-space model that directly models the stochastic errors of inertial sensors was first established. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and by traditional methods. The experimental results show that the proposed method is more suitable for estimating the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented on an online processor.
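    The quantity being estimated online is the classical Allan variance, which for averaging time tau = m/fs is half the mean squared difference of successive cluster averages. A minimal batch-mode sketch for comparison (function names and the white-noise test signal are illustrative):

    ```python
    import numpy as np

    def allan_variance(y, m):
        """Non-overlapping Allan variance at cluster size m (tau = m / fs):
        AVAR(tau) = 0.5 * mean( (ybar_{k+1} - ybar_k)^2 ),
        where ybar_k are averages over consecutive clusters of m samples."""
        y = np.asarray(y, float)
        k = y.size // m
        means = y[: k * m].reshape(k, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(means) ** 2)

    # usage: sweep tau and read noise coefficients off the log-log slopes
    y = np.random.default_rng(0).normal(size=200_000)   # white noise only
    avar = [allan_variance(y, 2 ** j) for j in range(1, 13)]
    ```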

  10. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    Full Text Available The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic under the actual sample design to the variance of that statistic under a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which were set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tillé, 2000) is a well-established method for obtaining variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Finally, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
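    For a simple nonlinear statistic such as a ratio, the linearization step can be made concrete: the ratio's variance is approximated by the variance of a weighted total of the linearized variable z_i = (y_i - R x_i)/X̂. A minimal sketch under a with-replacement approximation, ignoring the actual EU-SILC design features:

    ```python
    import numpy as np

    def ratio_with_linearized_variance(y, x, w):
        """Ratio R = Y/X with a first-order Taylor (linearization) variance.

        The linearized variable is z_i = (y_i - R * x_i) / X_hat, and var(R)
        is approximated by the variance of the weighted total of z under a
        with-replacement approximation (one stratum, no design features).
        """
        y, x, w = (np.asarray(a, float) for a in (y, x, w))
        Y, X = np.sum(w * y), np.sum(w * x)
        R = Y / X
        z = (y - R * x) / X
        t = len(y) * w * z                  # expanded per-unit contributions
        return R, np.var(t, ddof=1) / len(y)
    ```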

  11. Genomic Variance Estimation Based on Genotyping-by-Sequencing with Different Coverage in Perennial Ryegrass

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Fé, Dario; Jensen, Just

    2014-01-01

    on optimizing methods and models utilizing F2 family phenotype records and NGS information from F2 family pools in perennial ryegrass. Genomic variance was estimated using genomic relationship matrices based on different coverage depths to verify effects of coverage depth. Example traits were seed yield, rust...

  12. Variance component and heritability estimates for first and second lactation milk traits in the South African Ayrshire breed

    African Journals Online (AJOL)

    S. Afr. J. Anim. Sci. 1998, 28(1), p. 46. Variance component and heritability estimates for first and second lactation milk traits in the South African Ayrshire breed. G.J. Hallowell, Agricultural Research Council, Animal Improvement Institute, Private Bag X2, Irene, 0062 Republic of South Africa; J. van der Westhuizen and J.B. van Wyk.

  13. Bias-variance analysis in estimating true query model for information retrieval

    OpenAIRE

    Zhang, Peng; Song, Dawei; Wang, Jun; Yue HOU

    2014-01-01

    The estimation of query model is an important task in language modeling (LM) approaches to information retrieval (IR). The ideal estimation is expected to be not only effective in terms of high mean retrieval performance over all queries, but also stable in terms of low variance of retrieval performance across different queries. In practice, however, improving effectiveness can sacrifice stability, and vice versa. In this paper, we propose to study this tradeoff from a new perspective, i.e., ...

  14. Variance Estimation of Change in Poverty Rates: an Application to the Turkish EU-SILC Survey

    Directory of Open Access Journals (Sweden)

    Oguz Alper Melike

    2015-06-01

    Full Text Available Interpreting changes between point estimates at different waves may be misleading if we do not take the sampling variation into account. It is therefore necessary to estimate the standard error of these changes in order to judge whether or not the observed changes are statistically significant. This involves the estimation of temporal correlations between cross-sectional estimates, because correlations play an important role in estimating the variance of a change in the cross-sectional estimates. Standard estimators of correlations cannot be used because of the rotation employed in most panel surveys, such as the European Union Statistics on Income and Living Conditions (EU-SILC) surveys. Furthermore, as poverty indicators are complex functions of the data, they require special treatment when estimating their variance. For example, poverty rates depend on poverty thresholds which are estimated from medians. We propose using a multivariate linear regression approach to estimate correlations by taking into account the variability of the poverty threshold. We apply the proposed approach to the Turkish EU-SILC survey data.

  15. Bias and variance reduction in estimating the proportion of true-null hypotheses.

    Science.gov (United States)

    Cheng, Yebin; Gao, Dexiang; Tong, Tiejun

    2015-01-01

    When testing a large number of hypotheses, estimating the proportion of true nulls, denoted by π(0), becomes increasingly important. This quantity has many applications in practice. For instance, a reliable estimate of π(0) can eliminate the conservative bias of the Benjamini-Hochberg procedure for controlling the false discovery rate. It is known that most methods in the literature for estimating π(0) are conservative. Recently, some attempts have been made to reduce this estimation bias. Nevertheless, they either over-correct the bias or suffer from an unacceptably large estimation variance. In this paper, we propose a new method for estimating π(0) that aims to reduce the bias and the variance of the estimation simultaneously. To achieve this, we first utilize the probability density functions of false-null p-values and then propose a novel algorithm to estimate π(0). The statistical behavior of the proposed estimator is also investigated. Finally, we carry out extensive simulation studies and several real data analyses to evaluate the performance of the proposed estimator. Both simulated and real data demonstrate that the proposed method may improve on the existing literature significantly. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
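    The conservative baseline that such bias-reduction methods improve on is Storey's estimator, which counts p-values above a cut-off λ. A minimal sketch (the paper's density-based estimator is not reproduced here):

    ```python
    import numpy as np

    def pi0_storey(pvals, lam=0.5):
        """Storey's estimator: pi0 = #{p_i > lambda} / ((1 - lambda) * m).
        Larger lambda lowers the bias but inflates the variance -- the
        trade-off that bias/variance-reduction methods target."""
        p = np.asarray(pvals, float)
        return min(1.0, np.mean(p > lam) / (1.0 - lam))
    ```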

  16. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey

    2014-01-06

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.
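    The idea of estimating the variance at essentially the cost of the mean can be sketched with the usual telescoping sum, writing both E[Q] and E[Q²] as sums of level corrections computed from coupled samples. This is an illustrative simplification, not the estimators analyzed in the talk; the sampler interface is an assumption:

    ```python
    import numpy as np

    def mlmc_mean_and_variance(sampler, levels, n_samples):
        """Telescoping estimates of E[Q] and Var[Q] from coupled samples.

        sampler(l, n) must return an (n, 2) array of coupled pairs
        (Q_l, Q_{l-1}) computed from the same random input, with Q_{-1} = 0.
        Both E[Q] and E[Q^2] are accumulated as sums of level corrections,
        so the variance comes at essentially the cost of the mean.
        """
        mean_q, mean_q2 = 0.0, 0.0
        for l, n in zip(levels, n_samples):
            qf, qc = sampler(l, n).T
            mean_q += np.mean(qf - qc)
            mean_q2 += np.mean(qf ** 2 - qc ** 2)
        return mean_q, mean_q2 - mean_q ** 2
    ```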

  17. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    Science.gov (United States)

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
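    The coefficient itself is obtained from the point-biserial correlation by the classical conversion r_b = r_pb * sqrt(p(1-p)) / phi(Phi^{-1}(p)). A minimal sketch using SciPy (the sampling-variance estimators compared in the paper are not reproduced):

    ```python
    import numpy as np
    from scipy.stats import norm, pearsonr

    def biserial(x, d):
        """Biserial correlation from the point-biserial coefficient:
        r_b = r_pb * sqrt(p * (1 - p)) / phi(Phi^{-1}(p)),
        where p is the proportion of ones in the dichotomized variable d
        and phi is the standard normal density at the implied threshold."""
        x, d = np.asarray(x, float), np.asarray(d, float)
        r_pb = pearsonr(x, d)[0]
        p = d.mean()
        return r_pb * np.sqrt(p * (1.0 - p)) / norm.pdf(norm.ppf(p))
    ```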

  18. MMSE-based algorithm for joint signal detection, channel and noise variance estimation for OFDM systems

    CERN Document Server

    Savaux, Vincent

    2014-01-01

    This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square error criterion, it performs an accurate detection of a user in a frequency band by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. The book is organized into three chapters, the first of which provides the background against which the system model is presented.

  19. Bayesian Variance Component Estimation Using the Inverse-Gamma Class of Priors in a Nested Generalizability Design

    Science.gov (United States)

    Arenson, Ethan A.

    2009-01-01

    One of the problems inherent in variance component estimation centers around inadmissible estimates. Such estimates occur when there is more variability within groups, relative to between groups. This paper suggests a Bayesian approach to resolve inadmissibility by placing noninformative inverse-gamma priors on the variance components, and…

  1. Estimating the variance of cancer prevalence from population-based registries.

    Science.gov (United States)

    Gigli, Anna; Mariotto, Angela; Clegg, Limin X; Tavilla, Andrea; Corazziari, Isabella; Capocaccia, Riccardo; Hachey, Mark; Scoppa, Steve

    2006-06-01

    Cancer prevalence is the proportion of people in a population who have been diagnosed with cancer in the past and are still alive. One way to estimate prevalence is via population-based registries, where data on the diagnosis and life status of all incident cases occurring in the covered population are collected. In this paper, a method to estimate the complete prevalence and its variance from population-based registries is presented. In order to obtain unbiased estimates of the complete prevalence, its calculation can be thought of as consisting of three steps. Step 1 counts the incident cases diagnosed during the period of registration and still alive. Step 2 estimates the expected number of survivors among cases lost to follow-up. Step 3 estimates the complete prevalence by taking into account cases diagnosed before the start of registration. The combination of steps 1 and 2 is defined as the counting method, which estimates the limited-duration prevalence; step 3 is the completeness index method, which estimates the complete prevalence. For long-established registries, steps 1 and 2 are more important than step 3, because the observation time is long enough to include all previously diagnosed cases still alive in the prevalence data. For more recently established registries, step 3 is by far the most critical, because a large part of the prevalence may have been diagnosed before the period of registration (Corazziari I, Mariotto A, Capocaccia R. Correcting the completeness bias of observed prevalence. Tumori 1999; 85: 370-81). The work by Clegg LX, Gail MH, Feuer EJ (Estimating the variance of disease-prevalence estimates from population-based registries. Biometrics 2002; 55: 1137-44) considers the problem of the variability of the estimated prevalence up to step 2. To our knowledge, no other work has considered the variability induced by correcting for the unobserved cases diagnosed before the period of registration, which is crucial for estimating prevalence in recent registries. An analytic approach is considered to ...

  2. On the Choice of Difference Sequence in a Unified Framework for Variance Estimation in Nonparametric Regression

    KAUST Repository

    Dai, Wenlin

    2017-09-01

    Difference-based methods do not require estimating the mean function in nonparametric regression and are therefore popular in practice. In this paper, we propose a unified framework for variance estimation that systematically combines the linear regression method with higher-order difference estimators. The unified framework greatly enriches the existing literature on variance estimation and includes most existing estimators as special cases. More importantly, it also provides a principled way to solve the challenging difference-sequence selection problem that has remained a long-standing controversial issue in nonparametric regression for several decades. Using both theory and simulations, we recommend using the ordinary difference sequence in the unified framework, regardless of whether the sample size is small or the signal-to-noise ratio is large. Finally, to meet the demands of applications, we have developed a unified R package, named VarED, that integrates the existing difference-based estimators and the unified estimators in nonparametric regression, and have made it freely available at http://cran.r-project.org/web/packages/.
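    The simplest member of the difference-based family is Rice's first-order estimator, which the ordinary difference sequence generalizes. A minimal sketch with an illustrative test signal:

    ```python
    import numpy as np

    def rice_variance(y):
        """Rice's first-order difference-based variance estimator:
        sigma2_hat = sum (y_{i+1} - y_i)^2 / (2 (n - 1)).
        Differencing cancels the smooth mean function, so no fit is needed."""
        y = np.asarray(y, float)
        return np.sum(np.diff(y) ** 2) / (2.0 * (y.size - 1))

    # noisy observations of a smooth trend; true sigma^2 = 0.09
    x = np.linspace(0.0, 1.0, 500)
    y = np.sin(4 * np.pi * x) + np.random.default_rng(2).normal(0, 0.3, x.size)
    print(rice_variance(y))      # close to 0.09
    ```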

  3. Estimation of (co)variances for genomic regions of flexible sizes

    DEFF Research Database (Denmark)

    Sørensen, Lars P; Janss, Luc; Madsen, Per

    2012-01-01

    BACKGROUND: Multi-trait genomic models in a Bayesian context can be used to estimate genomic (co)variances, either for a complete genome or for genomic regions (e.g. per chromosome) for the purpose of multi-trait genomic selection or to gain further insight into the genomic architecture of related traits such as mammary disease traits in dairy cattle. METHODS: Data on progeny means of six traits related to mastitis resistance in dairy cattle (general mastitis resistance and five pathogen-specific mastitis resistance traits) were analyzed using a bivariate Bayesian SNP-based genomic model with a common prior distribution for the marker allele substitution effects and estimation of the hyperparameters in this prior distribution from the progeny means data. From the Markov chain Monte Carlo samples of the allele substitution effects, genomic (co)variances were calculated on a whole-genome level...

  4. Robust Hotelling T2 control chart using reweighted minimum vector variance estimators

    Science.gov (United States)

    Ali, Hazlina; Yahaya, Sharipah Soaad Syed; Omar, Zurni

    2014-12-01

    The Hotelling T2 control chart is employed to monitor the stability of a multivariate process in Phases I and II. The traditional Hotelling T2 control chart using classical estimators in Phase I, however, suffers from masking and swamping effects, which jeopardize its performance. To alleviate this problem, robust location and scale estimators are recommended instead. In this paper, a new Hotelling T2 control chart based on highly robust and efficient location and scatter estimators, known as reweighted minimum vector variance estimators, is proposed. Numerical results show that the new chart is not only capable of detecting outliers but can also control the alarm rates better than the existing charts.

  5. Estimation of variance components including competitive effects of Large White growing gilts.

    Science.gov (United States)

    Arango, J; Misztal, I; Tsuruta, S; Culbertson, M; Herring, W

    2005-06-01

    Records of on-test ADG of Large White gilts were analyzed to estimate variance components of direct and associative genetic effects. Models included the effects of contemporary group (farm-barn-batch), birth litter, pen group, and direct and associative additive genetic effects. The area of each pen was 14 m². The additive genetic variance was a function of the number of competitors in a group, the additive relationships between the animal performing the record and its pen mates, and the additive relationships between pen mates. To partially account for differences in the number of pen mates, a covariable (q_i = 1, 1/n, or 1/√n) was added to the associative genetic effect. There were 4,946 records from 2,409 litters and 362 pen groups. Pen group size ranged from 12 to 16 gilts. Analyses by REML converged very slowly. A grid search showed that the likelihood function was almost flat when the additive genetic associative effect was fitted. Estimates of direct and associative heritability were 0.15 and 0.03, respectively. Within the BLUPF90 family of programs, the mixed-model equations can be set up directly. For variance component estimation, simple programs (REMLF90 and GIBBSF90) worked without modifications, but more optimized programs did not. Estimates obtained using the three values of q_i were similar. With the data structure available for this study, and under an environment with relatively low competition among animals, accurate estimation of associative genetic effects was not possible. Estimation of competitive effects with large pen sizes is difficult. The magnitude of competition effects may be larger in commercial populations, where housing is denser and food is limited.

  6. Estimate of genetic variances in eight creole maize varieties for the mexican low-land region

    Directory of Open Access Journals (Sweden)

    Luis Ángel Muñoz Romero

    2017-04-01

    Full Text Available The aim of this work was to estimate the combining ability, genetic variance and heterosis of eight creole corn varieties. The research work was carried out in Irapuato, Guanajuato, México, during 2008 and 2009. A randomized complete block design with three replications was used to evaluate the twenty-eight crosses under method 4 of Griffing (1956). Each experimental plot included four rows five meters long with a separation of 0.75 m. The general and specific combining ability effects (ACG and ACE) were highly significant (P<0.01) for all traits except days to flowering. The dominance variance (σ²D) was larger and more important than the additive variance (σ²A) for most of the traits, indicating that non-additive gene action was important in the expression of those traits in the crosses. It was observed that varieties P6 (creole #5), P7 (creole #2) and P8 (creole San Antonio) had the largest variance effects (σ²ACE) for cob length, number of rows per cob, total cob number, and grain yield. Some outstanding crosses were identified for their high grain yield as well as heterosis, mainly those that included germplasm of creoles #5, #2 and San Antonio. Accordingly, we recommend deriving lines from the above populations and crossing them to produce hybrids.

  7. Meta-analysis of binary data: which within study variance estimate to use?

    Science.gov (United States)

    Chang, B H; Waternaux, C; Lipsitz, S

    2001-07-15

    We applied a mixed effects model to investigate between- and within-study variation in improvement rates of 180 schizophrenia outcome studies. The between-study variation was explained by the fixed study characteristics and an additional random study effect. Both rate difference and logit models were used. For a binary proportion outcome p̂_i with sample size n_i in the ith study, [p̂_i(1-p̂_i)n_i]^(-1) is the usual estimate of the within-study variance σ_i² in the logit model, where p̂_i is the sample mean of the binary outcome for subjects in study i. This estimate can be highly correlated with logit(p̂_i). We used [p̄(1-p̄)n_i]^(-1) as an alternative estimate of σ_i², where p̄ is the weighted mean of the p̂_i. We estimated the regression coefficients (β) of the fixed effects and the variance (τ²) of the random study effect using a quasi-likelihood estimating equations approach. Using the schizophrenia meta-analysis data, we demonstrated how the choice of the estimate of σ_i² affects the resulting estimates of β and τ². We also conducted a simulation study to evaluate the performance of the two estimates of σ_i² under different conditions, where the conditions vary by number of studies and study size. Using the schizophrenia meta-analysis data, the estimates of β and τ² were quite different when different estimates of σ_i² were used in the logit model. The simulation study showed that the estimates of β and τ² were less biased, and the 95 per cent CI coverage was closer to 95 per cent, when the estimate of σ_i² was [p̄(1-p̄)n_i]^(-1) rather than [p̂_i(1-p̂_i)n_i]^(-1). Finally, we showed that a simple regression analysis is not appropriate unless τ² is much larger than σ_i², or a robust variance is used. Copyright 2001 John Wiley & Sons, Ltd.

  8. Variance components estimation for farrowing traits of three purebred pigs in Korea

    Directory of Open Access Journals (Sweden)

    Bryan Irvine Lopez

    2017-09-01

    Full Text Available Objective: This study was conducted to estimate breed-specific variance components for total number born (TNB), number born alive (NBA) and mortality rate from birth through weaning, including stillbirths (MORT), in three main swine breeds in Korea. In addition, the importance of including maternal genetic and service sire effects in the estimation models was evaluated. Methods: Records of farrowing traits from 6,412 Duroc, 18,020 Landrace, and 54,254 Yorkshire sows, collected from January 2001 to September 2016 on different farms in Korea, were used in the analysis. Animal models and the restricted maximum likelihood method were used to estimate the variances of the animal genetic, permanent environmental, maternal genetic, service sire and residual effects. Results: The heritability estimates ranged from 0.072 to 0.102, 0.090 to 0.099, and 0.109 to 0.121 for TNB; 0.087 to 0.110, 0.088 to 0.100, and 0.099 to 0.107 for NBA; and 0.027 to 0.031, 0.050 to 0.053, and 0.073 to 0.081 for MORT in the Duroc, Landrace and Yorkshire breeds, respectively. The proportion of the total variation due to permanent environmental effects, maternal genetic effects, and service sire effects ranged from 0.042 to 0.088, 0.001 to 0.031, and 0.001 to 0.021, respectively. Spearman rank correlations among models ranged from 0.98 to 0.99, demonstrating that the maternal genetic and service sire effects have little effect on the precision of the breeding values. Conclusion: Models that include additive genetic and permanent environmental effects are suitable for farrowing traits in the Duroc, Landrace, and Yorkshire populations in Korea. These breed-specific variance component estimates for litter traits can be utilized in pig improvement programs in Korea.

  9. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    Science.gov (United States)

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…
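    At its core, robust variance estimation for dependent effect sizes is a cluster-level sandwich estimator. A minimal sketch of the basic uncorrected form in Python (the published macros add small-sample corrections; the interface here is our own):

    ```python
    import numpy as np

    def rve(X_list, y_list, w_list):
        """Weighted meta-regression with a cluster-level sandwich variance.

        X_list[j], y_list[j], w_list[j]: design matrix rows, effect sizes
        and weights for the dependent effect sizes of study j. The weights
        only affect efficiency; the sandwich form keeps the variance valid
        under an unknown dependence structure within studies.
        """
        bread = sum(X.T @ (w[:, None] * X) for X, w in zip(X_list, w_list))
        rhs = sum(X.T @ (w * y) for X, y, w in zip(X_list, y_list, w_list))
        beta = np.linalg.solve(bread, rhs)
        meat = np.zeros_like(bread)
        for X, y, w in zip(X_list, y_list, w_list):
            u = X.T @ (w * (y - X @ beta))      # weighted study-level score
            meat += np.outer(u, u)
        bread_inv = np.linalg.inv(bread)
        return beta, bread_inv @ meat @ bread_inv
    ```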

  10. Estimation of stable boundary-layer height using variance processing of backscatter lidar data

    Science.gov (United States)

    Saeed, Umar; Rocadenbosch, Francesc

    2017-04-01

    The stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze and fog and the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal Stable Boundary-Layer Height (SBLH) estimation using variance processing of attenuated backscatter lidar measurements, its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum-variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of the ITaRS project (GA 289923), the H2020 programme under the ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under project TEC2015-63832-P, and the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.

  11. Estimating posterior image variance with sparsity-based object priors for MRI

    Science.gov (United States)

    Chen, Yujia; Lou, Yang; Eldeniz, Cihat; An, Hongyu; Anastasio, Mark A.

    2017-03-01

    Point estimates, such as the maximum a posteriori (MAP) estimate, are commonly computed in image reconstruction tasks. However, such point estimates provide no information about the range of highly probable solutions, namely the uncertainty in the computed estimate. Bayesian inference methods that seek to compute the posterior probability distribution function (PDF) of the object can provide exactly this information, but are generally computationally intractable. Markov Chain Monte Carlo (MCMC) methods, which avoid explicit posterior computation by directly sampling from the PDF, require considerable expertise to run in a proper way. This work investigates a computationally efficient variational Bayesian inference approach for computing the posterior image variance with application to MRI. The methodology employs a sparse object prior model that is consistent with the model assumed in most sparse reconstruction methods. The posterior variance map generated by the proposed method provides valuable information that reveals how data-acquisition parameters and the specification of the object prior affect the reliability of a reconstructed MAP image. The proposed method is demonstrated by use of computer-simulated MRI data.

  12. Simultaneous Estimation of Noise Variance and Number of Peaks in Bayesian Spectral Deconvolution

    Science.gov (United States)

    Tokuda, Satoru; Nagata, Kenji; Okada, Masato

    2017-02-01

    The heuristic identification of peaks from noisy complex spectra often leads to misunderstanding of the physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multipeak spectra into single peaks statistically and consists of two steps. The first step is estimating both the noise variance and the number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multipeak models are nonlinear and hierarchical. Our framework enables the escape from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy via the multiple histogram method. We discuss a simulation demonstrating how efficient our framework is and show that estimating both the noise variance and the number of peaks prevents overfitting, overpenalizing, and misunderstanding the precision of parameter estimation.

  13. A Bivariate Markov Regime Switching GARCH Approach to Estimate Time Varying Minimum Variance Hedge Ratios

    OpenAIRE

    Hsiang-Tai Lee; Jonathan Yoder

    2005-01-01

    This paper develops a new bivariate Markov regime switching BEKK-GARCH (RS-BEKK-GARCH) model. The model is a state-dependent bivariate BEKK-GARCH model, and an extension of Gray's univariate generalized regime-switching (GRS) model to the bivariate case. To solve the path-dependency problem inherent in the bivariate regime switching BEKK-GARCH model, we propose a recombining method for the covariance term in the conditional variance-covariance matrix. The model is applied to estimate time-...

  14. Estimate of genetic variances in eight creole maize varieties for the mexican low-land region

    OpenAIRE

    Luis Ángel Muñoz Romero; Enrique Navarro Guerrero; Manuel De la Rosa Ibarra; Luis Pérez Romero; Ángel Enrique Caamal Dzul

    2017-01-01

    The aim of this work was to estimate the combining ability, genetic variance and heterosis of eight creole corn varieties. The research work was carried out in Irapuato, Guanajuato, México, during 2008 and 2009. A randomized complete block design with three replications was used to evaluate the twenty-eight crosses under method 4 of Griffing (1956). Each experimental plot included four rows five meters long with a separation of 0.75 m. The general and specific combining ability effects (ACG and ACE) were h...

  15. An empirical Bayes method for robust variance estimation in detecting DEGs using microarray data.

    Science.gov (United States)

    You, Na; Wang, Xueqin

    2017-10-01

    Microarray technology is widely used to identify differentially expressed genes due to its high-throughput capability. The number of replicated microarray chips in each group is usually small. Borrowing information across different genes is an efficient way to improve parameter estimation, which otherwise suffers from the limited sample size. In this paper, we use a hierarchical model to describe the dispersion of gene expression profiles and model the variance through the gene expression level via a link function. A heuristic algorithm is proposed to estimate the hyper-parameters and the link function. The differentially expressed genes are identified using a multiple testing procedure. Compared to SAM and LIMMA, our proposed method shows a significant superiority in terms of detection power, with the false discovery rate being controlled.

  16. Bias in slope estimates for the linear errors in variables model by the variance ratio method.

    Science.gov (United States)

    Edland, S D

    1996-03-01

    Slope estimates for linear measurement error (errors in variables) models based on assumed knowledge of the ratio of measurement error variances are biased if the underlying linear relationship is anything other than a completely deterministic, law-like relationship. This paper describes an eight-parameter linear measurement error model of general applicability that includes an optional "errors in equations" term (Malinvaud, E., 1980, Statistical Methods of Econometrics) that allows the explicit characterization of the asymptotic bias of such slope estimates when the assumption of a law-like relationship does not hold. This bias may be large, underscoring the importance of recognizing the potential influence of errors in equations in measurement error models.

  17. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference

    Directory of Open Access Journals (Sweden)

    Heringstad Bjørg

    2010-07-01

    Full Text Available Abstract Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative" or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviations from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models, residual variance on the underlying scale is not identifiable. Hence, the variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full relationship matrix, but genetic (co)variance components are inferred from the sampled breeding values and relationships between "informative" individuals (usually parents) only. The latter is analogous to a sire-dam model (in cases with no individual records on the parents). Results When applied to simulated data sets, the standard animal threshold model failed to produce useful results since samples of genetic variance always drifted towards infinity, while the new algorithm produced proper parameter estimates essentially identical to the results from a sire-dam model (given the fact that no individual records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to

  18. Variance Owing to Observer, Repeat Imaging, and Fundus Camera Type on Cup-to-disc Ratio Estimates by Stereo Planimetry

    NARCIS (Netherlands)

    Kwon, Young H.; Adix, Michael; Zimmerman, M. Bridget; Piette, Scott; Greenlee, Emily C.; Alward, Wallace L. M.; Abramoff, M.D.

    2009-01-01

    Objective: To determine and compare variance components in linear cup-to-disc ratio (LCDR) estimates obtained by computer-assisted planimetry performed by human experts and by an automated machine algorithm (digital automated planimetry). Design: Prospective case series for the evaluation of planimetry.

  19. Cooperative Localization Algorithm for Multiple Mobile Robot System in Indoor Environment Based on Variance Component Estimation

    Directory of Open Access Journals (Sweden)

    Qian Sun

    2017-06-01

    Full Text Available The Multiple Mobile Robot (MMR) cooperative system is becoming a focus of study in various fields due to its advantages, such as high efficiency and good fault tolerance. However, uncertainty and nonlinearity problems severely limit the cooperative localization accuracy of MMR systems. Thus, to solve the problems mentioned above, this manuscript presents a cooperative localization algorithm for MMR systems based on the Cubature Kalman Filter (CKF) and adaptive Variance Component Estimation (VCE) methods. In this novel algorithm, a nonlinear filter named the CKF is used to enhance the cooperative localization accuracy and reduce the computational load. On the other hand, the adaptive VCE method is introduced to eliminate the effects of unknown system noise. Furthermore, the performance of the proposed algorithm is compared with that of a cooperative localization algorithm based on the normal CKF, using real experimental data. The results demonstrate that the proposed algorithm outperforms the CKF cooperative localization algorithm in both accuracy and consistency.

  20. Variance decomposition for single-subject task-based fMRI activity estimates across many sessions.

    Science.gov (United States)

    Gonzalez-Castillo, Javier; Chen, Gang; Nichols, Thomas E; Bandettini, Peter A

    2017-07-01

    Here we report an exploratory within-subject variance decomposition analysis conducted on a task-based fMRI dataset with an unusually large number of repeated measures (i.e., 500 trials in each of three different subjects) distributed across 100 functional scans and 9 to 10 different sessions. Within-subject variance was segregated into four primary components: variance across-sessions, variance across-runs within a session, variance across-blocks within a run, and residual measurement/modeling error. Our results reveal inhomogeneous and distinct spatial distributions of these variance components across significantly active voxels in grey matter. Measurement error is dominant across the whole brain. Detailed evaluation of the remaining three components shows that across-session variance is the second largest contributor to total variance in occipital cortex, while across-runs variance is the second dominant source for the rest of the brain. Network-specific analysis revealed that across-block variance contributes more to total variance in higher-order cognitive networks than in somatosensory cortex. Moreover, in some higher-order cognitive networks across-block variance can exceed across-session variance. These results help us better understand the temporal (i.e., across blocks, runs and sessions) and spatial distributions (i.e., across different networks) of within-subject natural variability in estimates of task responses in fMRI. They also suggest that different brain regions will show different natural levels of test-retest reliability even in the absence of residual artifacts and sufficiently high contrast-to-noise measurements. Further confirmation with a larger sample of subjects and other tasks is necessary to ensure generality of these results. Published by Elsevier Inc.

  1. Outlier detection for particle image velocimetry data using a locally estimated noise variance

    Science.gov (United States)

    Lee, Yong; Yang, Hua; Yin, ZhouPing

    2017-03-01

    This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. The method is an iterative procedure in which each iteration consists of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration produces the outlier labels of the field. The technical contribution is that, for the first time, a spatially variable threshold is embedded in the modified outlier detector with a locally estimated noise variance in an iterative framework. It turns out that a spatially variable threshold is preferable to a single spatially constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of the proposed method in comparison with popular validation approaches. The method also proves beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection and over-detection counts. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are provided in the supplementary materials.
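    The fixed-threshold baseline that such a locally adaptive detector improves upon is the normalized median test of Westerweel and Scarano (2005), which compares each vector with the median of its 8 neighbours scaled by a local residual level. A minimal sketch for one velocity component (the threshold values are commonly used defaults, not the paper's):

    ```python
    import numpy as np

    def normalized_median_test(U, eps=0.1, thresh=2.0):
        """Normalized median test for one velocity component U on the
        interrogation grid. Each interior vector is compared with the
        median of its 8 neighbours, normalized by the median residual
        plus a small eps; a single global threshold is used, which is
        exactly what a spatially variable threshold relaxes."""
        out = np.zeros(U.shape, dtype=bool)
        for i in range(1, U.shape[0] - 1):
            for j in range(1, U.shape[1] - 1):
                nb = np.delete(U[i - 1:i + 2, j - 1:j + 2].ravel(), 4)
                med = np.median(nb)
                r_med = np.median(np.abs(nb - med))
                out[i, j] = abs(U[i, j] - med) / (r_med + eps) > thresh
        return out
    ```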

  2. SAMPLE SIZE DETERMINATION IN CLINICAL TRIALS BASED ON APPROXIMATION OF VARIANCE ESTIMATED FROM LIMITED PRIMARY OR PILOT STUDIES

    Directory of Open Access Journals (Sweden)

    B SOLEYMANI

    2001-06-01

    Full Text Available In many cases the estimate of variance used to determine sample size in clinical trials derives from limited primary or pilot studies in which the number of samples is small. Since in such cases the estimate of variance may be far from the real variance, the sample size is suspected to be less or more than what is really needed. In this article an attempt has been made to give a solution to this problem in the case of the normal distribution. Based on the distribution of (n−1)S²/σ², which is chi-square for normal variables, an appropriate estimate of the variance is determined and used to calculate the sample size. Also, the total probability to ensure specific precision and power has been achieved. In the method presented here, the probability of attaining the desired precision and power is higher than that of the usual method, but the results of the two methods get closer as the sample size in the primary studies increases.
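    One way to implement this idea is to replace the pilot variance by an upper confidence bound derived from the chi-square distribution of (n₀−1)S²/σ² before applying the usual normal-approximation formula. An illustrative sketch under these assumptions, not the exact formula of the article:

    ```python
    import numpy as np
    from scipy import stats

    def n_per_group(s2_pilot, n_pilot, delta, alpha=0.05, power=0.80,
                    gamma=0.20):
        """Per-group sample size for a two-sample comparison of means when
        the variance comes from a small pilot of size n_pilot.

        Since (n0 - 1) S^2 / sigma^2 is chi-square with n0 - 1 df, replace
        the pilot variance by a 100(1 - gamma)% upper confidence bound
        before applying the usual normal-approximation formula.
        """
        df = n_pilot - 1
        s2_upper = df * s2_pilot / stats.chi2.ppf(gamma, df)
        z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
        return int(np.ceil(2.0 * z ** 2 * s2_upper / delta ** 2))
    ```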

  3. Principal component approach in variance component estimation for international sire evaluation

    Directory of Open Access Journals (Sweden)

    Jakobsen Jette

    2011-05-01

    Full Text Available Abstract Background The dairy cattle breeding industry is a highly globalized business, which needs internationally comparable and reliable breeding values of sires. The international Bull Evaluation Service, Interbull, was established in 1983 to respond to this need. Currently, Interbull performs multiple-trait across country evaluations (MACE) for several traits and breeds in dairy cattle and provides international breeding values to its member countries. Estimating parameters for MACE is challenging since the structure of datasets and the conventional use of multiple-trait models easily result in over-parameterized genetic covariance matrices. The number of parameters to be estimated can be reduced by taking into account only the leading principal components of the traits considered. For MACE, this is readily implemented in a random regression model. Methods This article compares two principal component approaches to estimate variance components for MACE using real datasets. The methods tested were a REML approach that directly estimates the genetic principal components (direct PC) and the so-called bottom-up REML approach (bottom-up PC), in which traits are sequentially added to the analysis and the statistically significant genetic principal components are retained. Furthermore, this article evaluates the utility of the bottom-up PC approach for determining the appropriate rank of the (co)variance matrix. Results Our study demonstrates the usefulness of both approaches and shows that they can be applied to large multi-country models considering all concerned countries simultaneously. These strategies can thus replace the current practice of estimating the covariance components required through a series of analyses involving selected subsets of traits. Our results support the importance of using the appropriate rank in the genetic (co)variance matrix. Using too low a rank resulted in biased parameter estimates, whereas too high a rank did not result in

  4. Bias Reduction in Estimating Variance Components of Phytoplankton Existence at Na Thap River Based on Logistics Linear Mixed Models

    Science.gov (United States)

    Arisanti, R.; Notodiputro, K. A.; Sadik, K.; Lim, A.

    2017-03-01

    There are two approaches to estimating variance components, namely the linearity and integral approaches. However, the estimates of variance components produced by both methods are known to be biased. Firth (1993) introduced a parameter estimation method for correcting the bias of maximum likelihood estimates. This method belongs to the class of linear models, especially the Restricted Maximum Likelihood (REML) method, and the resulting estimator is known as the Firth estimator. In this paper we discuss the bias correction method applied to a logistic linear mixed model in analyzing the existence of Synedra phytoplankton along the Na Thap river in Thailand. The Firth-adjusted Maximum Likelihood Estimation (MLE) is similar to REML but shows the characteristics of a generalized linear mixed model. We evaluated the Firth adjustment method by means of simulations, and the results showed that the unadjusted MLE produced 95% confidence intervals which were narrower compared to those of the Firth method. However, the probability coverage of the interval for the unadjusted MLE was lower than 95%, whereas for the Firth method the probability coverage was approximately 95%. These results were also consistent with the variance estimation of the Synedra phytoplankton existence. It was shown that the variance estimates of the Firth-adjusted MLE were lower than those of the unadjusted MLE.

  5. Concerns about a variance approach to the X-ray diffractometric estimation of microfibril angle in wood

    Science.gov (United States)

    Steve P. Verrill; David E. Kretschmann; Victoria L. Herian; Michael Wiemann; Harry A. Alden

    2010-01-01

    In this paper we raise three technical concerns about Evans's 1999 Appita Journal "variance approach" to estimating microfibril angle. The first concern is associated with the approximation of the variance of an X-ray intensity half-profile by a function of the microfibril angle and the natural variability of the microfibril angle, S²...

  6. Estimation of variance components for somatic cell counts to determine thresholds for uninfected quarters.

    Science.gov (United States)

    Schepers, A J; Lam, T J; Schukken, Y H; Wilmink, J B; Hanekamp, W J

    1997-08-01

    The objective of this study was to determine the factors affecting somatic cell count (SCC), to estimate variance components of these factors, and to calculate and evaluate the thresholds for intramammary infection based on SCC. The infection status of 22,467 quarter milk samples from 544 cows in seven herds was determined. Infection status was the most important factor affecting SCC. The increase in SCC was more pronounced for major pathogens than for minor pathogens. Even after adjustment for infection status, the interaction between stage of lactation and parity was significant. For culture-negative samples within a lactation, the shape of the SCC curve was inversely related to the shape of the milk production curve. The shape of the SCC curve was flat for first lactation cows compared with the shape of the SCC curve for cows in subsequent lactations. The effect of clinical mastitis on SCC was significant. The use of SCC thresholds for specific parities and stages of lactation to detect intramammary infection improved quality parameters only slightly over a fixed threshold of 200,000 cells/ml.

  7. Variance-Constrained Robust Estimation for Discrete-Time Systems with Communication Constraints

    Directory of Open Access Journals (Sweden)

    Baofeng Wang

    2014-01-01

    Full Text Available This paper is concerned with a new filtering problem in networked control systems (NCSs) subject to limited communication capacity, which includes measurement quantization, random transmission delay, and packet loss. The measurements are first quantized via a logarithmic quantizer and then transmitted through a digital communication network with random delay and packet loss. The three communication constraints, which can be seen as a class of uncertainties, are formulated as a stochastic parameter uncertainty system. The purpose of the paper is to design a linear filter such that, for all the communication constraints, the error state of the filtering process is mean square bounded and the steady-state variance of the estimation error for each state is not more than the individual prescribed upper bound. It is shown that the desired filter can effectively be obtained if there are positive definite solutions to a couple of algebraic Riccati-like inequalities or linear matrix inequalities. Finally, an illustrative numerical example is presented to demonstrate the effectiveness and flexibility of the proposed design approach.

  8. Estimates of (co)variance components and genetic parameters for growth traits of Avikalin sheep.

    Science.gov (United States)

    Prince, Leslie Leo L; Gowane, Gopal R; Chopra, Ashish; Arora, Amrit L

    2010-08-01

    (Co)variance components and genetic parameters for various growth traits of Avikalin sheep maintained at Central Sheep and Wool Research Institute, Avikanagar, Rajasthan, India, were estimated by Restricted Maximum Likelihood, fitting six animal models with various combinations of direct and maternal effects. Records of 3,840 animals descended from 257 sires and 1,194 dams were taken for this study over a period of 32 years (1977-2008). Direct heritability estimates (from the best model as per the likelihood ratio test) for weight at birth, weaning, 6 and 12 months of age, and average daily gain from birth to weaning, weaning to 6 months, and 6 to 12 months were 0.28 +/- 0.03, 0.20 +/- 0.03, 0.28 +/- 0.07, 0.15 +/- 0.04, 0.21 +/- 0.03, 0.16 and 0.03 +/- 0.03, respectively. Maternal heritability declined as the animal grew older and was not evident at adult age or for post-weaning daily gain. The maternal permanent environmental effect (c2) declined significantly with the animal's age. A small effect of c2 on post-weaning weights was probably a carryover effect of pre-weaning maternal influence. A large, significant negative genetic correlation was observed between direct and maternal genetic effects for all the traits, indicating antagonistic pleiotropy, which needs special care while formulating breeding plans. A fair rate of genetic progress seems possible in the flock by selection for all traits, but the direct and maternal genetic correlation needs to be taken into consideration.

  9. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

    to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light...

  10. Estimation of variance components including competitive effects of Large White growing gilts

    National Research Council Canada - National Science Library

    Arango, J; Misztal, I; Tsuruta, S; Culbertson, M; Herring, W

    2005-01-01

    .... The area of each pen was 14 m². The additive genetic variance was a function of the number of competitors in a group, the additive relationships between the animal performing the record and its pen mates, and the additive...

  11. Effects of subsampling of passive acoustic recordings on acoustic metrics.

    Science.gov (United States)

    Thomisch, Karolin; Boebel, Olaf; Zitterbart, Daniel P; Samaran, Flore; Van Parijs, Sofie; Van Opzeeland, Ilse

    2015-07-01

    Passive acoustic monitoring is an important tool in marine mammal studies. However, logistics and finances frequently constrain the number and servicing schedules of acoustic recorders, requiring a trade-off between deployment periods and sampling continuity, i.e., the implementation of a subsampling scheme. Optimizing such schemes to each project's specific research questions is desirable. This study investigates the impact of subsampling on the accuracy of two common metrics, acoustic presence and call rate, for different vocalization patterns (regimes) of baleen whales: (1) variable vocal activity, (2) vocalizations organized in song bouts, and (3) vocal activity with diel patterns. To this end, above metrics are compared for continuous and subsampled data subject to different sampling strategies, covering duty cycles between 50% and 2%. The results show that a reduction of the duty cycle impacts negatively on the accuracy of both acoustic presence and call rate estimates. For a given duty cycle, frequent short listening periods improve accuracy of daily acoustic presence estimates over few long listening periods. Overall, subsampling effects are most pronounced for low and/or temporally clustered vocal activity. These findings illustrate the importance of informed decisions when applying subsampling strategies to passive acoustic recordings or analyses for a given target species.

  12. Estimates for Genetic Variance Components in Reciprocal Recurrent Selection in Populations Derived from Maize Single-Cross Hybrids

    Directory of Open Access Journals (Sweden)

    Matheus Costa dos Reis

    2014-01-01

    Full Text Available This study was carried out to obtain the estimates of genetic variance and covariance components related to intra- and interpopulation in the original populations (C0) and in the third cycle (C3) of reciprocal recurrent selection (RRS), which allows breeders to define the best breeding strategy. For that purpose, the half-sib progenies of intrapopulation (P11 and P22) and interpopulation (P12 and P21) from populations 1 and 2 derived from single-cross hybrids in the 0 and 3 cycles of the reciprocal recurrent selection program were used. The intra- and interpopulation progenies were evaluated in a 10×10 triple lattice design in two separate locations. The data for unhusked ear weight (ear weight without husk) and plant height were collected. All genetic variance and covariance components were estimated from the expected mean squares. The breakdown of the additive variance into intrapopulation and interpopulation additive deviations (σ²τ), together with the covariance between these and their intrapopulation additive effects (CovAτ), revealed a predominance of the dominance effect for unhusked ear weight. For plant height, these components show that the intrapopulation additive effect explains most of the variation. Estimates for intrapopulation and interpopulation additive genetic variances confirm that populations derived from single-cross hybrids have potential for recurrent selection programs.

  13. Using SNP Markers to Estimate Additive, Dominance and Imprinting Genetic Variance

    NARCIS (Netherlands)

    Lopes, M.S.; Bastiaansen, J.W.M.; Janss, L.L.G.; Bovenhuis, H.; Knol, E.F.

    2014-01-01

    The contributions of additive, dominance and imprinting effects to the variance of number of teats (NT) were evaluated in two purebred pig populations using SNP markers. Three different random regression models were evaluated, accounting for the mean and: 1) additive effects (MA), 2) additive and

  14. Estimates of genetic and environmental (co)variances for live weight ...

    African Journals Online (AJOL)

    Schalk Cloete

    (co)variances for live and fleece weight in New Zealand Coopworth sheep. Livest. Prod. Sci. 58, 137-150. Neser, F.W.C., Erasmus, G.J. & Van Wyk, J.B., 2000. Genetic studies on the South African Mutton Merino: growth traits. S. Afr. J. Anim. Sci.

  15. Sampling variance of flood quantiles from the generalised logistic distribution estimated using the method of L-moments

    Science.gov (United States)

    Kjeldsen, Thomas R.; Jones, David A.

    The method of L-moments is the recommended method for fitting the three parameters (location, scale and shape) of a Generalised Logistic (GLO) distribution when conducting flood frequency analyses in the UK. This paper examines the sampling uncertainty of quantile estimates obtained using the GLO distribution for single site analysis using the median to estimate the location parameter. Analytical expressions for the mean and variance of the quantile estimates were derived, based on asymptotic theory. This has involved deriving expressions for the covariance between the sampling median (location parameter) and the quantiles of the estimated unit-median GLO distribution (growth curve). The accuracy of the asymptotic approximations for many of these intermediate results and for the quantile estimates was investigated by comparing the approximations to the outcome of a series of Monte Carlo experiments. The approximations were found to be adequate for GLO shape parameter values between -0.35 and 0.25, which is an interval that includes the shape parameter estimates for most British catchments. An investigation into the contribution of different components to the total uncertainty showed that for large return periods, the variance of the growth curve is larger than the contribution of the median. Therefore, statistical methods using regional information to estimate the growth curve should be considered when estimating design events at large return periods.
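
    For context, the GLO growth curve used in UK flood frequency work has a closed-form quantile function, so a short sketch can show what is actually being estimated. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def glo_quantile(F, loc, scale, shape):
    """Quantile function of the Generalised Logistic distribution as
    parameterised in UK flood frequency analysis (shape != 0):
        x(F) = loc + (scale / shape) * (1 - ((1 - F) / F)**shape)"""
    F = np.asarray(F, dtype=float)
    return loc + (scale / shape) * (1.0 - ((1.0 - F) / F) ** shape)

# e.g. the 100-year event (F = 0.99) for illustrative parameter values
print(glo_quantile(0.99, loc=1.0, scale=0.2, shape=-0.1))
```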

  16. An Empirical Comparison of Heterogeneity Variance Estimators in 12,894 Meta-Analyses

    Science.gov (United States)

    Langan, Dean; Higgins, Julian P. T.; Simmonds, Mark

    2015-01-01

    Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and…
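
    As a concrete reference point, the DerSimonian and Laird moment estimator that the abstract compares against the REML, Paule-Mandel and Sidik-Jonkman alternatives can be written in a few lines; a minimal sketch with illustrative variable names follows.

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """Moment-based (DerSimonian-Laird) estimate of the between-study
    heterogeneity variance tau^2, truncated at zero."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)              # fixed-effect pooled mean
    Q = np.sum(w * (y - mu_fe) ** 2)               # Cochran's Q statistic
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / denom)
```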

  17. Using SNP markers to estimate additive, dominance and imprinting genetic variance

    DEFF Research Database (Denmark)

    Lopes, M S; Bastiaansen, J W M; Janss, Luc

    .01 to 0.02. Dominance effects make an important contribution to the genetic variation of NT in the two lines evaluated. Imprinting effects appeared less important for NT than additive and dominance effects. The SNP random regression model presented and evaluated in this study is a feasible approach......The contributions of additive, dominance and imprinting effects to the variance of number of teats (NT) were evaluated in two purebred pig populations using SNP markers. Three different random regression models were evaluated, accounting for the mean and: 1) additive effects (MA), 2) additive...

  18. The contribution of dominance and inbreeding depression in estimating variance components for litter size in Pannon White rabbits.

    Science.gov (United States)

    Nagy, I; Gorjanc, G; Curik, I; Farkas, J; Kiszlinger, H; Szendrő, Zs

    2013-08-01

    In a synthetic closed population of Pannon White rabbits, additive (VA), dominance (VD) and permanent environmental (VPe) variance components as well as doe (bFd) and litter (bFl) inbreeding depression were estimated for the number of kits born alive (NBA), number of kits born dead (NBD) and total number of kits born (TNB). The data set consisted of 18,398 kindling records of 3883 does collected from 1992 to 2009. Six models were used to estimate dominance and inbreeding effects. The most complete model estimated VA and VD to contribute 5.5 ± 1.1% and 4.8 ± 2.4%, respectively, to total phenotypic variance (VP) for NBA; the corresponding values for NBD were 1.9 ± 0.6% and 5.3 ± 2.4%, and for TNB, 6.2 ± 1.0% and 8.1 ± 3.2%, respectively. These results indicate the presence of considerable VD. Including dominance in the model generally reduced VA and VPe estimates, and had only a very small effect on inbreeding depression estimates. Including inbreeding covariates did not affect estimates of any variance component. A 10% increase in doe inbreeding significantly increased NBD (bFd = 0.18 ± 0.07), while a 10% increase in litter inbreeding significantly reduced NBA (bFl = -0.41 ± 0.11) and TNB (bFl = -0.34 ± 0.10). These findings argue for including dominance effects in models of litter size traits in populations that exhibit significant dominance relationships. © 2012 Blackwell Verlag GmbH.

  19. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

    CSIR Research Space (South Africa)

    Kirton, A

    2010-08-01

    Full Text Available intervals (confidence intervals for predicted values) for allometric estimates can be obtained using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form... for allometric relationships - and identifies the information that needs to be provided with the allometric equation if it is to be used with confidence. Correct estimation of tree biomass with known error is very important when trees are being planted...

  20. Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); D. Thierens (Dirk); D. Thierens (Dirk)

    2007-01-01

    Recent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that when maximum-likelihood estimations are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we

  1. A note on the variance of the estimate of the fixation index F

    Indian Academy of Sciences (India)

    representation of the estimate of p, namely p̂ = d̂ + ĥ/2, with d̂ nowhere defined in their note. In summation, the research note of Otto and Lemes (2015) contributes very little to the vast literature on the sampling variance of estimators of Wright's fixation indices, for which computationally efficient formulae exist for any arbitrary.

  2. Unbiased Estimators of Ability Parameters, of Their Variance, and of Their Parallel-Forms Reliability.

    Science.gov (United States)

    Lord, Frederic M.

    This paper is primarily concerned with determining the statistical bias in the maximum likelihood estimate of the examinee ability parameter in item response theory, and of certain functions of such parameters. Given known item parameters, unbiased estimators are derived for (1) an examinee's ability parameter and proportion-correct true score;…

  3. Finding Efficiency in the Design of Large Multisite Evaluations: Estimating Variances for Science Achievement Studies

    Science.gov (United States)

    Westine, Carl D.

    2016-01-01

    Little is known empirically about intraclass correlations (ICCs) for multisite cluster randomized trial (MSCRT) designs, particularly in science education. In this study, ICCs suitable for science achievement studies using a three-level (students in schools in districts) MSCRT design that block on district are estimated and examined. Estimates of…

  4. Calculating the variance and prediction intervals for estimates obtained from allometric relationships

    CSIR Research Space (South Africa)

    Nickless, A

    2010-09-01

    Full Text Available These relationships are referred to as allometric equations. In science it is important to quantify the error associated with an estimate in order to determine the reliability of the estimate. Therefore, prediction intervals or standard errors are usually quoted...

  5. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

    size variability within benign and malignant nuclear populations can for all practical purposes be reduced to 2-D measurement of nuclear profile areas. These new powerful stereological estimators of nuclear volume and nuclear size variability provide an attractive approach to quantitative......The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vv, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform...... probability in a physical disector and Cavalieri's direct estimator of volume, the unbiased, number-weighted mean nuclear volume, nuclear vN, of the same benign and malignant nuclear populations is also estimated. Having obtained estimates of nuclear volume in both the volume- and number distribution...

  6. Subsampling-based compression and flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Agranovsky, Alexy; Camp, David; Joy, I; Childs, Hank

    2016-01-19

    As computational capabilities increasingly outpace disk speeds on leading supercomputers, scientists will, in turn, be increasingly unable to save their simulation data at its native resolution. One solution to this problem is to compress these data sets as they are generated and visualize the compressed results afterwards. We explore this approach, specifically subsampling velocity data and the resulting errors for particle advection-based flow visualization. We compare three techniques: random selection of subsamples, selection at regular locations corresponding to multi-resolution reduction, and introduce a novel technique for informed selection of subsamples. Furthermore, we explore an adaptive system which exchanges the subsampling budget over parallel tasks, to ensure that subsampling occurs at the highest rate in the areas that need it most. We perform supercomputing runs to measure the effectiveness of the selection and adaptation techniques. Overall, we find that adaptation is very effective, and, among selection techniques, our informed selection provides the most accurate results, followed by the multi-resolution selection, and with the worst accuracy coming from random subsamples.

  7. Subsampling and inpainting approaches for electron tomography.

    Science.gov (United States)

    Sanders, Toby; Dwyer, Christian

    2017-11-01

    With the aim of addressing the issue of sample damage during electron tomography data acquisition, we propose a number of new reconstruction strategies based on subsampling (which uses only a subset of a full image) and inpainting (recovery of a full image from a subsampled one). We point out that the total-variation (TV) inpainting model commonly used to inpaint subsampled images may be inappropriate for 2D projection images of typical TEM specimens. Thus, we propose higher-order TV (HOTV) inpainting, which accommodates the fact that projection images may be inherently smooth, as a more suitable image inpainting scheme. We also describe how the HOTV method can be extended to 3D, a scheme which makes use of both image data and sinogram data. Additionally, we propose gradient subsampling as a more efficient scheme than random subsampling. We make a rigorous comparison of our proposed new reconstruction schemes with existing ones. The new schemes are demonstrated to perform better than or as well as existing schemes, and we show that they outperform existing schemes at low subsampling rates. Copyright © 2017 Elsevier B.V. All rights reserved.
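
    To make the inpainting idea concrete, here is a toy 1-D sketch in the same spirit: missing samples are filled by penalizing large higher-order differences. For brevity it uses an l2 penalty solvable by a single least-squares call, whereas the (HO)TV methods in this record minimize an l1 penalty; all names are illustrative.

```python
import numpy as np

def smooth_inpaint_1d(signal, known_mask, order=2, lam=1.0):
    """Toy higher-order smoothness inpainting: recover a full signal from
    subsampled values by penalizing the k-th order finite differences
    (l2 proxy for the l1-based (HO)TV models discussed in the record)."""
    n = len(signal)
    D = np.diff(np.eye(n), n=order, axis=0)   # k-th order difference operator
    M = np.eye(n)[known_mask]                 # sampling (subset selection) operator
    A = np.vstack([M, lam * D])
    b = np.concatenate([signal[known_mask], np.zeros(D.shape[0])])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```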

  8. A comparison of non-iterative and iterative estimators of heterogeneity variance for the standardized mortality ratio.

    Science.gov (United States)

    Böhning, Dankmar; Sarol, Jesus; Rattanasiri, Sasivimol; Viwatwongkasem, Chukiat; Biggeri, Annibale

    2004-01-01

    This paper continues work presented in Böhning et al. (2002b, Annals of the Institute of Statistical Mathematics 54, 827-839, henceforth BMSRB) where a class of non-iterative estimators of the variance of the heterogeneity distribution for the standardized mortality ratio was discussed. Here, these estimators are further investigated by means of a simulation study. In addition, iterative estimators including the Clayton-Kaldor procedure as well as the pseudo-maximum-likelihood (PML) approach are added in the comparison. Among all candidates, the PML estimator often has the smallest mean square error, followed by the non-iterative estimator where the weights are proportional to the external expected counts. This confirms the theoretical result in BMSRB in which an asymptotic efficiency could be proved for this estimator (in the class of non-iterative estimators considered). Surprisingly, the Clayton-Kaldor iterative estimator (often recommended and used by practitioners) performed poorly with respect to the MSE. Given the widespread use of these estimators in disease mapping, medical surveillance, meta-analysis and other areas of public health, the results of this study might be of considerable interest.

  9. Stereological estimation of the mean and variance of nuclear volume from vertical sections

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1991-01-01

    The application of assumption-free, unbiased stereological techniques for estimation of the volume-weighted mean nuclear volume, nuclear vv, from vertical sections of benign and malignant nuclear aggregates in melanocytic skin tumours is described. Combining sampling of nuclei with uniform...... size variability within benign and malignant nuclear populations can for all practical purposes be reduced to 2-D measurement of nuclear profile areas. These new powerful stereological estimators of nuclear volume and nuclear size variability provide an attractive approach to quantitative...

  10. Unbiased Estimation of Gene Diversity in Samples Containing Related Individuals: Exact Variance and Arbitrary Ploidy

    OpenAIRE

    DeGiorgio, Michael; Jankovic, Ivana; Rosenberg, Noah A.

    2010-01-01

    Gene diversity, a commonly used measure of genetic variation, evaluates the proportion of heterozygous individuals expected at a locus in a population, under the assumption of Hardy–Weinberg equilibrium. When using the standard estimator of gene diversity, the inclusion of related or inbred individuals in a sample produces a downward bias. Here, we extend a recently developed estimator shown to be unbiased in a diploid autosomal sample that includes known related or inbred individuals to the ...

  11. Lower Bound on Estimation Variance of the Ultrasonic Attenuation Coefficient Using the Spectral-Difference Reference-phantom Method.

    Science.gov (United States)

    Samimi, Kayvan; Varghese, Tomy

    2017-05-01

    Ultrasonic attenuation is one of the primary parameters of interest in Quantitative Ultrasound (QUS). Non-invasive monitoring of tissue attenuation can provide valuable diagnostic and prognostic information to the physician. The Reference Phantom Method (RPM) was introduced as a way of mitigating some of the system-related effects and biases to facilitate clinical QUS applications. In this paper, under the assumption of diffuse scattering, a probabilistic model of the backscattered signal spectrum is used to derive a theoretical lower bound on the estimation variance of the attenuation coefficient using the Spectral-Difference RPM. The theoretical lower bound is compared to simulated and experimental attenuation estimation statistics in tissue-mimicking (TM) phantoms. Estimation standard deviation (STD) of the sample attenuation in a region of interest (ROI) of the TM phantom is measured for various combinations of processing parameters, including Radio-Frequency (RF) data block length (i.e., window length) from 3 to 17 mm, RF data block width from 10 to 100 A-lines, and number of RF data blocks per attenuation estimation ROI from 3 to 10. In addition to the Spectral-Difference RPM, local attenuation estimation for simulated and experimental data sets was also performed using a modified implementation of the Spectral Fit Method (SFM). Estimation statistics of the SFM are compared to theoretical variance predictions from the literature [1]. Measured STD curves are observed to lie above the theoretical lower bound curves, thus experimentally verifying the validity of the derived bounds. This theoretical framework benefits tissue characterization efforts by isolating processing parameter ranges that could provide required precision levels in estimation of the ultrasonic attenuation coefficient using Spectral Difference methods.

  12. Sex estimation from modern American humeri and femora, accounting for sample variance structure

    DEFF Research Database (Denmark)

    Boldsen, Jesper L; Milner, George R; Boldsen, Søren K

    2015-01-01

    information about the sample's sex ratio, if known. Material and methods: Three measurements useful for estimating the sex of adult skeletons, the humeral and femoral head diameters and the humeral epicondylar breadth, were collected from 258 Americans born between 1893 and 1980 who died within the past...... estimates correctly classifies 88.3% of the skeletons, with 10.8% considered unknown and 0.9% assigned to the wrong sex. Discussion: Probabilities of correct assignments are a better means of categorizing individuals as male or female than the sectioning points commonly used in skeletal studies....... That is because it is possible to estimate the observer's certainty that the individual represented by measured bones was one sex or the other. A computer program is available that simultaneously considers samples of unequal sex composition. It is useful when there is contextual information available about...

  13. The Effect of Some Estimators of Between-Study Variance on Random

    African Journals Online (AJOL)

    Samson Henry Dogo

    Section 2 presents the confidence interval, the coverage probability and the estimators of τ² considered in the analysis. Section 3 reports on the simulation study. Section 4 gives the summary and conclusion. MATERIALS AND METHOD. Methodology. The statistical tool used for the analysis is the coverage probability, introduced below.

  14. Variance estimation, design effects, and sample size calculations for respondent-driven sampling.

    Science.gov (United States)

    Salganik, Matthew J

    2006-11-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling.
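
    The percentile-bootstrap skeleton underlying such confidence intervals is compact enough to sketch. Note that Salganik's procedure resamples in a way that respects the dependence induced by recruitment chains in respondent-driven samples; the sketch below shows only the simpler i.i.d. version for orientation, with illustrative names.

```python
import numpy as np

def bootstrap_ci(estimator, sample, n_boot=2000, alpha=0.05, seed=0):
    """Generic percentile-bootstrap confidence interval: resample the
    data with replacement, re-estimate, and take empirical quantiles."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    stats = [estimator(rng.choice(sample, size=len(sample), replace=True))
             for _ in range(n_boot)]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# e.g. a 95% interval for a prevalence (mean of 0/1 indicators)
data = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 1])
print(bootstrap_ci(np.mean, data))
```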

  15. Small Area Variance Estimation for the Siuslaw NF in Oregon and Some Results

    Science.gov (United States)

    S. Lin; D. Boes; H.T. Schreuder

    2006-01-01

    The results of a small area prediction study for the Siuslaw National Forest in Oregon are presented. Predictions were made for total basal area, number of trees and mortality per ha on a 0.85 mile grid using data on a 1.7 mile grid and additional ancillary information from TM. A reliable method of estimating prediction errors for individual plot predictions called the...

  16. Precision of the age-length increments of three cyprinids: effects of fish number and sub-sampling strategy.

    Science.gov (United States)

    Busst, G M; Britton, J R

    2014-06-01

    The effects of the number of fish that are aged and of scale sub-sampling strategies on the precision of estimates of mean age-length increments from populations of Rutilus rutilus, Leuciscus leuciscus and Leuciscus cephalus were tested. Analyses used data derived from river fish communities in eastern England, U.K. Regarding the number of fish analysed in each age group, for each species significant relationships were detected between sample size (n) and the coefficient of variation of the mean (Z), and between the mean fork-length increment at age (x̄) and the measured variance (s²). This enabled calculation of the number of scales needed to produce a mean length increment at age according to n = a·x̄^(b−2)·Z^(−2). Outputs indicated that the number of scales requiring ageing increased substantially as precision increased, but with little variation between species per age category. Ageing between seven and 12 scales per age group would thus provide estimates at 10% precision. As the ages of fishes are not known in advance of scale ageing, the effect of the scale sub-sampling regime on precision was also tested using randomized strategies of 10 fish per 5 mm, five per 5 mm, three per 5 mm, 10 per 10 mm, five per 10 mm and three per 10 mm. These were applied to the datasets and the consequences of their reduction in the number of scales for precision were determined using Z = a^(1/2)·x̄^(b/2−1)·n^(−1/2). When compared to no sub-sampling, three per 10 mm always significantly reduced data precision, whereas 10 per 5 mm never significantly reduced precision. These outputs can thus be applied to the design of fish sampling protocols where age and growth estimates are required, with randomized sub-sampling likely to be the most useful strategy. © 2014 The Fisheries Society of the British Isles.
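
    Read as a sample-size rule, the first relationship can be applied directly once the variance-mean (Taylor's power law) coefficients a and b have been fitted for a species. The coefficients and target values below are hypothetical, chosen only to show the calculation.

```python
def scales_needed(a, b, mean_increment, target_cv):
    """Number of scales to age so that the mean length increment at age
    reaches a target coefficient of variation Z, assuming the power-law
    variance-mean relationship s**2 = a * mean**b, i.e.
    n = a * mean**(b - 2) * Z**(-2)."""
    return a * mean_increment ** (b - 2.0) / target_cv ** 2

# e.g. hypothetical coefficients a = 0.05, b = 2.5, a 40 mm mean
# increment and a 10% target CV:
print(scales_needed(0.05, 2.5, 40.0, 0.10))
```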

  17. Variance owing to observer, repeat imaging, and fundus camera type on cup-to-disc ratio estimates by stereo planimetry.

    Science.gov (United States)

    Kwon, Young H; Adix, Michael; Zimmerman, M Bridget; Piette, Scott; Greenlee, Emily C; Alward, Wallace L M; Abràmoff, Michael D

    2009-01-01

    To determine and compare variance components in linear cup-to-disc ratio (LCDR) estimates obtained by computer-assisted planimetry performed by human experts and by an automated machine algorithm (digital automated planimetry, DAP). Prospective case series for evaluation of planimetry. Forty-four eyes of 44 consecutive patients from the outpatient Glaucoma Service at University of Iowa with diagnosis of glaucoma or glaucoma suspect were studied. Six stereo pairs of optic nerve photographs were taken per eye: 3 repeat stereo pairs using a simultaneous fixed-stereo base fundus camera (Nidek 3Dx) and another 3 repeat stereo pairs using a sequential variable-stereo base fundus camera (Zeiss). Each optic disc stereo pair was digitized and segmented into cup and rim by 3 glaucoma specialists (computer-assisted planimetry) and by a computer algorithm (digital automated planimetry), and LCDR was calculated for each segmentation (either specialist or algorithm). A linear mixed model was used to estimate the mean, SD, and variance components of measurements. Average LCDR and the interobserver, interrepeat, and intercamera coefficients of variation (CV) of LCDR with their 95% tolerance limits were reported. There was a significant difference in LCDR estimates among the 3 glaucoma specialists. The interobserver CV of 10.65% was larger than the interrepeat (6.7%) or intercamera CV (7.6%). For the algorithm, the LCDR estimate was significantly higher for simultaneous stereo fundus images (Nidek, mean: 0.66) than for sequential stereo fundus images (Zeiss, mean: 0.64), whereas the interrepeat CV for Nidek (4.4%) was lower than for Zeiss (6.36%); the algorithm's interrepeat and intercamera CV were 5.47% and 7.26%, respectively. Interobserver variability was the largest source of variation for glaucoma specialists, whereas their interrepeat and intercamera variability was comparable with that of the algorithm. DAP reduces variability in LCDR estimates from simultaneous stereo images, such as the Nidek 3Dx.

  18. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially. John Wiley & Sons, Ltd.
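
    The core idea, estimating the between-parameter-set variance after subtracting the patient-level Monte Carlo noise, follows from one-way ANOVA algebra and can be sketched briefly. This is a generic illustration of that decomposition, not the paper's exact formulae or sample-size rules.

```python
import numpy as np

def anova_psa_components(outputs):
    """outputs[i, j]: model output for simulated patient j under parameter
    draw i. One-way random-effects ANOVA gives an estimate of the PSA
    variance: the variance of the conditional mean across parameter draws,
    with the within-draw (patient-level) noise subtracted out."""
    N, n = outputs.shape
    msw = outputs.var(axis=1, ddof=1).mean()        # within-draw mean square
    msb = n * outputs.mean(axis=1).var(ddof=1)      # between-draw mean square
    var_between = max(0.0, (msb - msw) / n)         # Var over draws of E[output]
    return outputs.mean(), var_between
```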

  19. A Bayesian approach to estimating variance components within a multivariate generalizability theory framework.

    Science.gov (United States)

    Jiang, Zhehan; Skorupski, William

    2017-12-12

    In many behavioral research areas, multivariate generalizability theory (mG theory) has typically been used to investigate the reliability of certain multidimensional assessments. However, traditional mG-theory estimation, namely via frequentist approaches, has limits that lead researchers to fail to take full advantage of the information that mG theory can offer regarding the reliability of measurements. Alternatively, Bayesian methods provide more information than frequentist approaches can offer. This article presents instructional guidelines on how to implement mG-theory analyses in a Bayesian framework; in particular, BUGS code is presented to fit commonly seen designs from mG theory, including single-facet designs, two-facet crossed designs, and two-facet nested designs. In addition to concrete examples that are closely related to the selected designs and the corresponding BUGS code, a simulated dataset is provided to demonstrate the utility and advantages of the Bayesian approach. This article is intended to serve as a tutorial reference for applied researchers and methodologists conducting mG-theory studies.

  20. Estimation of Variance Components and Genetic Parameters for Direct and Maternal Effects on Birth Weight in Brown Swiss Cattle

    Directory of Open Access Journals (Sweden)

    Ali Kaygisiz, Galip Bakir, Isa Yilmaz and Yusuf Vanli

    2011-01-01

    Full Text Available The purpose of this study was to estimate the variance components and genetic parameters for birth weight in Brown Swiss cattle reared at Malya and Konuklar State Farms, Türkiye. The least squares means of birth weight were 39.91±0.005 and 42.26±0.09 kg for the calves raised at Malya and Konuklar State Farms, respectively. The effects of calving year, parity and calf sex on birth weight were significant (P<0.05). The effect of calving season on birth weight was highly significant (P<0.01) for Malya State Farm, while it was non-significant for Konuklar State Farm. Direct heritability (h²d), maternal heritability (h²m), total heritability (h²T) and the fraction of variance due to maternal permanent environmental effects (c²) were 0.09, 0.04, 0.11 and 0.04, respectively, for birth weights of the calves raised at Malya State Farm. The corresponding values of birth weight for calves raised at Konuklar State Farm were 0.39, 0.015, 0.29 and 0.018, respectively.

  1. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    2017-07-15

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.

  2. Using variance components to estimate power in a hierarchically nested sampling design improving monitoring of larval Devils Hole pupfish

    Science.gov (United States)

    Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsmore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey

    2013-01-01

    We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). The sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey comprised three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.

  3. Subsampled Hessian Newton Methods for Supervised Learning.

    Science.gov (United States)

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
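
    The basic mechanics, a full gradient paired with a Hessian formed on a random subset of the data, are easy to sketch for L2-regularized logistic regression. This is a plain subsampled Newton step for orientation; it does not include the paper's two-dimensional subproblem refinement, and all names are illustrative.

```python
import numpy as np

def subsampled_newton_logreg(X, y, lam=1e-3, sample_frac=0.1,
                             n_iter=20, seed=0):
    """Subsampled Hessian Newton method for L2-regularized logistic
    regression (labels y in {0, 1}). The gradient uses all n rows; the
    Hessian is approximated on a random subset of size sample_frac * n."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    m = max(1, int(sample_frac * n))
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / n + lam * w           # full gradient
        idx = rng.choice(n, size=m, replace=False)   # Hessian subsample
        Xs, ps = X[idx], p[idx]
        D = ps * (1.0 - ps)                          # per-row curvature
        H = (Xs * D[:, None]).T @ Xs / m + lam * np.eye(d)
        w -= np.linalg.solve(H, grad)                # (subsampled) Newton step
    return w
```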

  4. Estimation of variance components and prediction of breeding values in rubber tree breeding using the REML/BLUP procedure

    Directory of Open Access Journals (Sweden)

    Renata Capistrano Moreira Furlani

    2005-01-01

    Full Text Available The present paper deals with estimation of variance components, prediction of breeding values and selection in a population of rubber tree [Hevea brasiliensis (Willd. ex Adr. de Juss.) Müell.-Arg.] from Rio Branco, State of Acre, Brazil. The REML/BLUP (restricted maximum likelihood/best linear unbiased prediction) procedure was applied. For this purpose, 37 rubber tree families were obtained and assessed in a randomized complete block design, with three unbalanced replications. The field trial was carried out at the Experimental Station of UNESP, located in Selvíria, State of Mato Grosso do Sul, Brazil. The quantitative traits evaluated were: girth (G), bark thickness (BT), number of latex vessel rings (NR), and plant height (PH). Given the unbalanced condition of the progeny test, the REML/BLUP procedure was used for estimation. The narrow-sense individual heritability estimates were 0.43 for G, 0.18 for BT, 0.01 for NR, and 0.51 for PH. Two selection strategies were adopted: one short-term (ST - selection intensity of 8.85%) and the other long-term (LT - selection intensity of 26.56%). For G, the estimated genetic gains in relation to the population average were 26.80% and 17.94%, respectively, according to the ST and LT strategies. The effective population sizes were 22.35 and 46.03, respectively. The LT and ST strategies maintained 45.80% and 28.24%, respectively, of the original genetic diversity represented in the progeny test. So, it can be inferred that this population has potential for both breeding and ex situ genetic conservation as a supplier of genetic material for advanced rubber tree breeding programs.

  5. Test day-milk yields variance component estimation using repeatability or random regression models in the Rendena breed

    Directory of Open Access Journals (Sweden)

    Roberto Mantovani

    2010-01-01

    Full Text Available This study aimed to compare Repeatability (RP-TDm) and Random-Regression Test Day models (RR-TDm) in genetic evaluations of milk (M), fat (F) and protein (P) yields in the Rendena breed. Variance estimates for M, F and P were obtained on a sample of 43,842 TD records belonging to 2,692 animals recorded over 15 years (1990-2005). RP-TDm estimates of h2 were 0.21 for M and 0.17 for both F and P, whereas RR-TDm provided h2 estimates ranging from 0.15-0.34 for M, 0.15-0.31 for F and 0.10-0.24 for P. Both RP-TDm and RR-TDm results agreed with the literature, even though RR-TDm provided a pattern of h2 along the lactation that differed from other studies, with the lowest h2 at the beginning and at the end of lactation. The PSB, MAD and -2Log L parameters revealed a lower power of RP-TDm compared with RR-TDm.

  6. Pressurized subsampling system for pressured gas-hydrate-bearing sediment: Microscale imaging using X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yusuke, E-mail: u-jin@aist.go.jp; Konno, Yoshihiro; Nagao, Jiro [Production Technology Team, Methane Hydrate Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Sapporo 062-8517 (Japan)

    2014-09-01

    A pressurized subsampling system was developed for pressured gas hydrate (GH)-bearing sediments, which have been stored under pressure. The system subsamples small amounts of GH sediments from cores (approximately 50 mm in diameter and 300 mm in height) without pressure release to atmospheric conditions. The maximum size of the subsamples is 12.5 mm in diameter and 20 mm in height. Moreover, our system transfers the subsample into a pressure vessel, and seals the pressure vessel by screwing in a plug under hydraulic pressure conditions. In this study, we demonstrated pressurized subsampling from artificial xenon-hydrate sediments and nondestructive microscale imaging of the subsample, using a microfocus X-ray computed tomography (CT) system. In addition, we estimated porosity and hydrate saturation from two-dimensional X-ray CT images of the subsamples.

  7. Estimation of the variance of noise in digital imaging for quality control; Estimacion de la varianza del ruido en imagen digital para control de calidad

    Energy Technology Data Exchange (ETDEWEB)

    Soro Bua, M.; Otero Martinez, C.; Vazquez Vazquez, R.; Santamarina Vazquez, F.; Lobato Busto, R.; Luna Vega, V.; Mosquera Sueiro, J.; Sanchez Garcia, M.; Pombar Camean, M.

    2011-07-01

    In this work, the variance of pixel values as a function of kerma is estimated for the real, nonlinear response curve of a digital imaging system, without resorting to any approximation of the detector's behavior. This result is compared with that obtained for the linearized version of the response curve.

  8. Reporting explained variance

    Science.gov (United States)

    Good, Ron; Fletcher, Harold J.

    The importance of reporting explained variance (sometimes referred to as magnitude of effects) in ANOVA designs is discussed in this paper. Explained variance is an estimate of the strength of the relationship between treatment (or other factors such as sex, grade level, etc.) and dependent variables of interest to the researcher(s). Three methods that can be used to obtain estimates of explained variance in ANOVA designs are described and applied to 16 studies that were reported in recent volumes of this journal. The results show that, while in most studies the treatment accounts for a relatively small proportion of the variance in dependent variable scores, in some studies the magnitude of the treatment effect is respectable. The authors recommend that researchers in science education report explained variance in addition to the commonly reported tests of significance, since the latter are inadequate as the sole basis for making decisions about the practical importance of factors of interest to science education researchers.
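
    Since explained variance is rarely shown alongside its definition, a minimal computation may be useful: eta squared is simply the between-groups sum of squares as a fraction of the total sum of squares. The two groups below are made-up numbers for illustration.

```python
import numpy as np

def eta_squared(groups):
    """Explained variance (eta squared) for a one-way ANOVA design:
    the between-groups sum of squares divided by the total."""
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

# Example with two small hypothetical groups of scores
treated = np.array([5.1, 6.0, 5.8, 6.3])
control = np.array([4.2, 4.8, 4.5, 5.0])
print(eta_squared([treated, control]))
```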

  9. A combination of parabolic and grid slope interpolation for 2D tissue displacement estimations.

    Science.gov (United States)

    Albinsson, John; Ahlgren, Åsa Rydén; Jansson, Tomas; Cinthio, Magnus

    2017-08-01

    Parabolic sub-sample interpolation for 2D block-matching motion estimation is computationally efficient. However, it is well known that the parabolic interpolation gives a biased motion estimate for displacements greater than |y.2| samples (y = 0, 1, …). Grid slope sub-sample interpolation is less biased, but it shows large variability for displacements close to y.0. We therefore propose to combine these sub-sample methods into one method (GS15PI) using a threshold to determine when to use which method. The proposed method was evaluated on simulated, phantom, and in vivo ultrasound cine loops and was compared to three sub-sample interpolation methods. On average, GS15PI reduced the absolute sub-sample estimation errors in the simulated and phantom cine loops by 14, 8, and 24% compared to sub-sample interpolation of the image, parabolic sub-sample interpolation, and grid slope sub-sample interpolation, respectively. The limited in vivo evaluation of estimations of the longitudinal movement of the common carotid artery using parabolic and grid slope sub-sample interpolation and GS15PI resulted in coefficient of variation (CV) values of 6.9, 7.5, and 6.8%, respectively. The proposed method is computationally efficient and has low bias and variance. The method is another step toward a fast and reliable method for clinical investigations of longitudinal movement of the arterial wall.
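
    Of the two components combined in GS15PI, the parabolic step is the textbook one and easy to sketch; the grid-slope formula and the switching threshold are the paper's specific contributions and are not reproduced here. The function below is the standard three-point parabolic refinement of a block-matching peak.

```python
import numpy as np

def parabolic_subsample_offset(c):
    """Classic three-point parabolic interpolation around the peak of a
    1-D block-matching similarity curve c; returns the peak location
    with sub-sample precision."""
    k = int(np.argmax(c))
    if k == 0 or k == len(c) - 1:
        return float(k)  # peak on the border: no sub-sample refinement
    num = c[k - 1] - c[k + 1]
    den = 2.0 * (c[k - 1] - 2.0 * c[k] + c[k + 1])
    return k + num / den  # vertex of the parabola through the 3 points
```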

  10. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference Genetics Selection Evolution 2010, 42:29

    DEFF Research Database (Denmark)

    Ødegård, Jørgen; Meuwissen, Theo HE; Heringstad, Bjørg

    2010-01-01

    Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where...... individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative...... data sets, the standard animal threshold model failed to produce useful results since samples of genetic variance always drifted towards infinity, while the new algorithm produced proper parameter estimates essentially identical to the results from a sire-dam model (given the fact that no individual...

  11. Estimation of Genetic Variance Components Including Mutation and Epistasis using Bayesian Approach in a Selection Experiment on Body Weight in Mice

    DEFF Research Database (Denmark)

    Widyas, Nuzul; Jensen, Just; Nielsen, Vivi Hunnicke

    A selection experiment was performed for weight gain over 13 generations of outbred mice. A total of 18 lines were included in the experiment. Nine lines were allotted to each of the two treatment diets (19.3 and 5.1% protein). Within each diet, three lines were selected upwards, three lines were selected downwards and three lines were kept as controls. Bayesian statistical methods were used to estimate the genetic variance components. The mixed model analysis was modified to include a mutation effect following the methods of Wray (1990). DIC was used to compare the models. Models including a mutation effect had a better fit than the model with only an additive effect. Mutation as a direct effect contributed 3.18% of the total phenotypic variance, while in the model with interactions between additive and mutation effects it contributed 1.43% as a direct effect and 1.36% as an interaction effect of the total variance...

  12. Branching into the Unknown: Inferring collective dynamical states from subsampled systems

    CERN Document Server

    Wilting, Jens

    2016-01-01

    When studying the dynamics of complex systems, one can rarely sample the state of all components. We show that this spatial subsampling typically leads to severe underestimation of the risk of instability in systems with propagation of events. We analytically derived a subsampling-invariant estimator and applied it to non-linear network simulations and case reports of various diseases, recovering a close relation between vaccination rate and spreading behavior. The estimator can be particularly useful in countries with unreliable case reports, and promises early warning if e.g. antibiotic resistant bacteria increase their infectiousness. In neuroscience, subsampling has led to contradictory hypotheses about the collective spiking dynamics: asynchronous-irregular or critical. With the novel estimator, we demonstrated for rat, cat and monkey that collective dynamics lives in a narrow subspace between the two. Functionally, this subspace can combine the different computational properties associated with the two ...
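
    The flavor of such a subsampling-invariant estimator can be sketched, under the assumption (made here only for illustration) that it follows a multistep-regression idea: lagged regression slopes of subsampled activity are biased by a common factor but still scale geometrically with the branching parameter m, so an exponential fit across lags recovers m even though each single-lag slope is biased.

```python
import numpy as np

def branching_parameter(activity, max_lag=20):
    """Illustrative multistep-regression sketch of a subsampling-invariant
    estimator of the branching parameter m: for each lag k, regress
    a[t+k] on a[t]; under subsampling the slopes still scale as
    r_k = b * m**k, so a log-linear fit across lags yields m.
    Assumes the fitted slopes are positive."""
    a = np.asarray(activity, dtype=float)
    lags = np.arange(1, max_lag + 1)
    slopes = np.array([np.polyfit(a[:-k], a[k:], 1)[0] for k in lags])
    log_slope, _ = np.polyfit(lags, np.log(slopes), 1)  # slope = log(m)
    return float(np.exp(log_slope))
```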

  13. Mixture-model based estimation of gene expression variance from public database improves identification of differentially expressed genes in small sized microarray data

    Science.gov (United States)

    Kim, Mingoo; Cho, Sung Bum; Kim, Ju Han

    2010-01-01

    Motivation: The small number of samples in many microarray experiments is a challenge for the correct identification of differentially expressed genes (DEGs) by conventional statistical means. Information from public microarray databases can help identify DEGs more efficiently. To model the various experimental conditions of a public microarray database, we applied a Gaussian mixture model and extracted bi- or tri-modal distributions of gene expression. The prior variance of Baldi's Bayesian framework was estimated for the analysis of the small sample-sized datasets. Results: First, we estimated the prior variance of a gene's expression by pooling variances obtained from mixture modeling of large samples in the public microarray database. Then, using the prior variance, we identified DEGs in small sample-sized test datasets using Baldi's framework. For the benchmark study, we generated test datasets having several samples from relatively large datasets. Our proposed method outperformed other benchmark methods in terms of detecting gold-standard DEGs from the test datasets. The results offer encouraging evidence for the use of public microarray databases in microarray data analysis. Availability: Supplementary data are available at http://www.snubi.org/publication/MixBayes Contact: juhan@snu.ac.kr PMID:20015947

  14. RF low power subsampling architecture for wireless communication applications

    National Research Council Canada - National Science Library

    Meng, Fanzhen; Liu, Hong; Wang, Mingliang; Zhang, Xiaolin; Tian, Tong

    2016-01-01

    ...) transmission devices, especially the RF receiver. In order to alleviate this problem, an RF low power subsampling architecture for wireless communication applications is proposed in this paper...

  15. Asymptotic Approximations to the Bias and Variance of a Kernel-Type Estimator of the Intensity of the Cyclic Poisson Process with the Linear Trend

    Directory of Open Access Journals (Sweden)

    I Wayan Mangku

    2012-02-01

    Full Text Available From previous research, a kernel-type estimator of the intensity of the cyclic Poisson process with a linear trend has been constructed using a single realization of the Poisson process observed in a bounded interval. This proposed estimator has been proved to be consistent as the size of the observation interval tends to infinity. In this paper, asymptotic approximations to its bias, variance and MSE (Mean-Squared-Error) are computed. The asymptotically optimal bandwidth is also derived.

  16. Analytic quantification of bias and variance of coil sensitivity profile estimators for improved image reconstruction in MRI.

    Science.gov (United States)

    Stamm, Aymeric; Singh, Jolene; Afacan, Onur; Warfield, Simon K

    2015-10-01

    Magnetic resonance (MR) imaging provides a unique in-vivo capability of visualizing tissue in the human brain non-invasively, which has tremendously improved patient care over the past decades. However, there are still prominent artifacts, such as intensity inhomogeneities due to the use of an array of receiving coils (RC) to measure the MR signal or noise amplification due to accelerated imaging strategies. It is critical to mitigate these artifacts for both visual inspection and quantitative analysis. The cornerstone to address this issue pertains to the knowledge of coil sensitivity profiles (CSP) of the RCs, which describe how the measured complex signal decays with the distance to the RC. Existing methods for CSP estimation share a number of limitations: (i) they primarily focus on CSP magnitude, while it is known that the solution to the MR image reconstruction problem involves complex CSPs and (ii) they only provide point estimates of the CSPs, which makes the task of optimizing the parameters and acquisition protocol for their estimation difficult. In this paper, we propose a novel statistical framework for estimating complex-valued CSPs. We define a CSP estimator that uses spatial smoothing and additional body coil data for phase normalization. The main contribution is to provide detailed information on the statistical distribution of the CSP estimator, which yields automatic determination of the optimal degree of smoothing for ensuring minimal bias and provides guidelines to the optimal acquisition strategy.

  17. Approximating the variance of estimated means for systematic random sampling, illustrated with data of the French Soil Monitoring Network

    NARCIS (Netherlands)

    Brus, D.J.; Saby, N.P.A.

    2016-01-01

    In France like in many other countries, the soil is monitored at the locations of a regular, square grid thus forming a systematic sample (SY). This sampling design leads to good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based

  18. Small-Sample Adjustments for Tests of Moderators and Model Fit Using Robust Variance Estimation in Meta-Regression

    Science.gov (United States)

    Tipton, Elizabeth; Pustejovsky, James E.

    2015-01-01

    Meta-analyses often include studies that report multiple effect sizes based on a common pool of subjects or that report effect sizes from several samples that were treated with very similar research protocols. The inclusion of such studies introduces dependence among the effect size estimates. When the number of studies is large, robust variance…

  19. Estimation of quantal size and number of functional active zones at the calyx of Held synapse by nonstationary EPSC variance analysis.

    Science.gov (United States)

    Meyer, A C; Neher, E; Schneggenburger, R

    2001-10-15

    At the large excitatory calyx of Held synapse, the quantal size during an evoked EPSC and the number of active zones contributing to transmission are not known. We developed a nonstationary variant of EPSC fluctuation analysis to determine these quantal parameters. AMPA receptor-mediated EPSCs were recorded in slices of young (postnatal 8-10 d) rats after afferent fiber stimulation, delivered in trains to induce synaptic depression. The means and the variances of EPSC amplitudes were calculated across trains for each stimulus number. During 10 Hz trains at 2 mm Ca(2+) concentration ([Ca(2+)]), we found linear EPSC variance-mean relationships, with a slope that was in good agreement with the quantal size obtained from amplitude distributions of spontaneous miniature EPSCs. At high release probability with 10 or 15 mm [Ca(2+)], competitive antagonists were used to partially block EPSCs. Under these conditions, the EPSC variance-mean plots could be fitted with parabolas, giving estimates of quantal size and of the binomial parameter N. With the rapidly dissociating antagonist kynurenic acid, quantal sizes were larger than with a slowly dissociating antagonist, suggesting that the effective glutamate concentration was increased at high release probability. Considering the possibility of multivesicular release and moderate saturation of postsynaptic AMPA receptors, we conclude that the binomial parameter N (637 +/- 117; mean +/- SEM) represents an upper limit estimate of the number of functional active zones. We estimate that during normal synaptic transmission, the probability of vesicle fusion at single active zones is in the range of 0.25-0.4.
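
    Under a simple binomial release model the variance-mean relation is the parabola Var(I) = qI - I^2/N, so q and N fall out of a least-squares fit. A sketch on synthetic data (all constants are illustrative, not the paper's measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def parabola(I, q, N):
            # Binomial model: Var(I) = q*I - I**2/N, with quantal size q and N sites.
            return q * I - I ** 2 / N

        q_true, N_true = 0.03, 600                   # nA, release sites (synthetic)
        p = np.linspace(0.1, 0.8, 8)                 # release probabilities per point
        mean_epsc = N_true * p * q_true
        var_epsc = N_true * p * (1 - p) * q_true ** 2

        (q_hat, N_hat), _ = curve_fit(parabola, mean_epsc, var_epsc, p0=(0.01, 100))
        print(f"quantal size ~ {q_hat:.3f} nA, functional sites ~ {N_hat:.0f}")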

  20. ESTIMATION OF GENETIC ADDITIVE VARIANCE OF MILK PRODUCTION IN DAIRY ROMANIAN BLACK SPOTTED BREED FROM PESTRESTI-ALBA FARM

    Directory of Open Access Journals (Sweden)

    GH. NISTOR

    2008-10-01

    Full Text Available The objective of this study was to estimate the heritability of milk, milk fat and milk protein productions in Romanian Black Spotted breed cattle from the SC Dorin&Sanda SRL private farm, Petresti-Alba. A total of 101 cow lactations were used to estimate heritabilities for milk yield (kilograms), fat content and protein content. The data were collected over a period of two years (2006-2007). Heritabilities were 0.31, 0.68 and 0.73 for milk yield, fat content and protein content, respectively, within the limits reported in the literature. The data can be used as a guide for selection to improve milk yield while maintaining fat and protein contents.

  1. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    Science.gov (United States)

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only

  2. Subsampling effects in neuronal avalanche distributions recorded in vivo

    Directory of Open Access Journals (Sweden)

    Munk Matthias HJ

    2009-04-01

    Full Text Available Abstract Background Many systems in nature are characterized by complex behaviour where large cascades of events, or avalanches, unpredictably alternate with periods of little activity. Snow avalanches are an example. Often the size distribution f(s) of a system's avalanches follows a power law, and the branching parameter sigma, the average number of events triggered by a single preceding event, is unity. A power law for f(s), and sigma = 1, are hallmark features of self-organized critical (SOC) systems, and both have been found for neuronal activity in vitro. Therefore, and since SOC systems and neuronal activity both show large variability, long-term stability and memory capabilities, SOC has been proposed to govern neuronal dynamics in vivo. Testing this hypothesis is difficult because neuronal activity is spatially or temporally subsampled, while theories of SOC systems assume full sampling. To close this gap, we investigated how subsampling affects f(s) and sigma by imposing subsampling on three different SOC models. We then compared f(s) and sigma of the subsampled models with those of multielectrode local field potential (LFP) activity recorded in three macaque monkeys performing a short term memory task. Results Neither the LFP nor the subsampled SOC models showed a power law for f(s). Both f(s) and sigma depended sensitively on the subsampling geometry and the dynamics of the model. Only one of the SOC models, the Abelian Sandpile Model, exhibited f(s) and sigma similar to those calculated from LFP activity. Conclusion Since subsampling can prevent the observation of the characteristic power law and sigma in SOC systems, misclassifications of critical systems as sub- or supercritical are possible. Nevertheless, the system specific scaling of f(s) and sigma under subsampling conditions may prove useful to select physiologically motivated models of brain function. Models that better reproduce f(s) and sigma calculated from the physiological
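
    As a sketch of the quantities involved: with activity discretized into time bins, an avalanche is a maximal run of non-empty bins, its size s is the event count in the run, and sigma can be estimated from ratios of successive bin counts. The convention below is one common choice, not necessarily the paper's.

        import numpy as np

        def avalanche_stats(counts):
            # Extract avalanche sizes and a branching-parameter estimate from
            # binned event counts: size = events per run of non-empty bins,
            # sigma ~ mean ratio of events in bin t+1 to events in bin t.
            counts = np.asarray(counts)
            sizes, ratios, run = [], [], 0
            for i, c in enumerate(counts):
                if c > 0:
                    run += c
                    if i + 1 < len(counts) and counts[i + 1] > 0:
                        ratios.append(counts[i + 1] / c)
                elif run > 0:
                    sizes.append(run)
                    run = 0
            if run > 0:
                sizes.append(run)
            return np.array(sizes), np.mean(ratios) if ratios else np.nan

        # Toy usage on Poisson activity; real LFP analyses bin the recordings first.
        rng = np.random.default_rng(9)
        sizes, sigma = avalanche_stats(rng.poisson(0.7, 10_000))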

  3. The Relative Impacts of Design Effects and Multiple Imputation on Variance Estimates: A Case Study with the 2008 National Ambulatory Medical Care Survey

    Directory of Open Access Journals (Sweden)

    Lewis Taylor

    2014-03-01

    Full Text Available The National Ambulatory Medical Care Survey collects data on office-based physician care from a nationally representative, multistage sampling scheme where the ultimate unit of analysis is a patient-doctor encounter. Patient race, a commonly analyzed demographic, has been subject to a steadily increasing item nonresponse rate. In 1999, race was missing for 17 percent of cases; by 2008, that figure had risen to 33 percent. Over this entire period, single imputation has been the compensation method employed. Recent research at the National Center for Health Statistics evaluated multiply imputing race to better represent the missing-data uncertainty. Given item nonresponse rates of 30 percent or greater, we were surprised to find many estimates’ ratios of multiple-imputation to single-imputation estimated standard errors close to 1. A likely explanation is that the design effects attributable to the complex sample design largely outweigh any increase in variance attributable to missing-data uncertainty.
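
    The comparison turns on Rubin's rules: the multiple-imputation variance is T = W + (1 + 1/m)B, with W the mean within-imputation (design-based) variance and B the between-imputation variance, so when design effects make W large, sqrt(T)/sqrt(W) stays near 1. A sketch with hypothetical numbers:

        import numpy as np

        def rubin_variance(estimates, within_variances):
            # Combine m multiply-imputed estimates with Rubin's rules:
            # T = W + (1 + 1/m) * B.
            m = len(estimates)
            qbar = np.mean(estimates)
            W = np.mean(within_variances)
            B = np.var(estimates, ddof=1)
            return qbar, W + (1 + 1 / m) * B

        # When the design-based variance W dominates B, the ratio of the
        # multiple-imputation SE to the single-imputation SE stays close to 1.
        est = [0.412, 0.409, 0.415, 0.411, 0.413]   # hypothetical proportions
        wvar = [4.0e-4] * 5                          # hypothetical design variances
        qbar, T = rubin_variance(est, wvar)
        print(np.sqrt(T) / np.sqrt(wvar[0]))         # ~1.0 here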

  4. Fixed effects analysis of variance

    CERN Document Server

    Fisher, Lloyd; Birnbaum, Z W; Lukacs, E

    1978-01-01

    Fixed Effects Analysis of Variance covers the mathematical theory of the fixed effects analysis of variance. The book discusses the theoretical ideas and some applications of the analysis of variance. The text then describes topics such as the t-test; two-sample t-test; the k-sample comparison of means (one-way analysis of variance); the balanced two-way factorial design without interaction; estimation and factorial designs; and the Latin square. Confidence sets, simultaneous confidence intervals, and multiple comparisons; orthogonal and nonorthologonal designs; and multiple regression analysi

  5. Kernel-based variance component estimation and whole-genome prediction of pre-corrected phenotypes and progeny tests for dairy cow health traits

    Science.gov (United States)

    Morota, Gota; Boddhireddy, Prashanth; Vukasinovic, Natascha; Gianola, Daniel; DeNise, Sue

    2014-01-01

    Prediction of complex trait phenotypes in the presence of unknown gene action is an ongoing challenge in animals, plants, and humans. Development of flexible predictive models that perform well irrespective of genetic and environmental architectures is desirable. Methods that can address non-additive variation in a non-explicit manner are gaining attention for this purpose and, in particular, semi-parametric kernel-based methods have been applied to diverse datasets, mostly providing encouraging results. On the other hand, the gains obtained from these methods have been smaller when smoothed values such as estimated breeding value (EBV) have been used as response variables. However, less emphasis has been placed on the choice of phenotypes to be used in kernel-based whole-genome prediction. This study aimed to evaluate differences between semi-parametric and parametric approaches using two types of response variables and molecular markers as inputs. Pre-corrected phenotypes (PCP) and EBV obtained for dairy cow health traits were used for this comparison. We observed that non-additive genetic variances were major contributors to total genetic variances in PCP, whereas additivity was the largest contributor to variability of EBV, as expected. Within the kernels evaluated, non-parametric methods yielded slightly better predictive performance across traits relative to their additive counterparts regardless of the type of response variable used. This reinforces the view that non-parametric kernels aiming to capture non-linear relationships between a panel of SNPs and phenotypes are appealing for complex trait prediction. However, like past studies, the gain in predictive correlation was not large for either PCP or EBV. We conclude that capturing non-additive genetic variation, especially epistatic variation, in a cross-validation framework remains a significant challenge even when it is important, as seems to be the case for health traits in dairy cows. PMID:24715901

  6. Kernel-based variance component estimation and whole-genome prediction of pre-corrected phenotypes and progeny tests for dairy cow health traits

    Directory of Open Access Journals (Sweden)

    Gota eMorota

    2014-03-01

    Full Text Available Prediction of complex trait phenotypes in the presence of unknown gene action is an ongoing challenge in animals, plants, and humans. Development of flexible predictive models that perform well irrespective of genetic and environmental architectures is desirable. Methods that can address non-additive variation in a non-explicit manner are gaining attention for this purpose and, in particular, semi-parametric kernel-based methods have been applied to diverse datasets, mostly providing encouraging results. On the other hand, the gains obtained from these methods have been smaller when smoothed values such as estimated breeding value (EBV have been used as response variables. However, less emphasis has been placed on the choice of phenotypes to be used in kernel-based whole-genome prediction. This study aimed to evaluate differences between semi-parametric and parametric approaches using two types of response variables and molecular markers as inputs. Pre-corrected phenotypes (PCP and EBV obtained for dairy cow health traits were used for this comparison. We observed that non-additive genetic variances were major contributors to total genetic variances in PCP, whereas additivity was the largest contributor to variability of EBV, as expected. Within the kernels evaluated, non-parametric methods yielded slightly better predictive performance across traits relative to their additive counterparts regardless of the type of response variable used. This reinforces the view that non-parametric kernels aiming to capture non-linear relationships between a panel of SNPs and phenotypes are appealing for complex trait prediction. However, like past studies, the gain in predictive correlation was not large for either PCP or EBV. We conclude that capturing non-additive genetic variation, especially epistatic variation, in a cross-validation framework remains a significant challenge even when it is important, as seems to be the case for health traits in dairy cows.

  7. A Wavelet Perspective on the Allan Variance.

    Science.gov (United States)

    Percival, Donald B

    2016-04-01

    The origins of the Allan variance trace back 50 years ago to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance-the maximal overlap estimator-can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients-the wavelet variance-is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance.
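
    The stated identity is easy to check numerically: for the overlapped estimators below, the Haar MODWT coefficient at scale m is half the difference of adjacent m-sample averages, so the wavelet variance is exactly half the Allan variance. Normalization conventions vary across references; this sketch uses unit-gain averages.

        import numpy as np

        rng = np.random.default_rng(2)
        y = rng.normal(size=2**14)   # fractional-frequency deviates (white FM noise)

        def allan_var(y, m):
            # Overlapped Allan variance at averaging factor m.
            ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # running m-averages
            d = ybar[m:] - ybar[:-m]            # differences of adjacent averages
            return 0.5 * np.mean(d ** 2)

        def haar_wavelet_var(y, m):
            # Haar MODWT wavelet variance at scale m.
            ybar = np.convolve(y, np.ones(m) / m, mode="valid")
            w = 0.5 * (ybar[m:] - ybar[:-m])    # Haar MODWT coefficients
            return np.mean(w ** 2)

        for m in (1, 4, 16):
            print(m, allan_var(y, m) / haar_wavelet_var(y, m))  # ratio = 2 each scale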

  8. Clustering with position-specific constraints on variance: applying redescending M-estimators to label-free LC-MS data analysis.

    Science.gov (United States)

    Frühwirth, Rudolf; Mani, D R; Pyne, Saumyadipta

    2011-08-31

    Clustering is a widely applicable pattern recognition method for discovering groups of similar observations in data. While there are a large variety of clustering algorithms, very few of these can enforce constraints on the variation of attributes for data points included in a given cluster. In particular, a clustering algorithm that can limit variation within a cluster according to that cluster's position (centroid location) can produce effective and optimal results in many important applications ranging from clustering of silicon pixels or calorimeter cells in high-energy physics to label-free liquid chromatography based mass spectrometry (LC-MS) data analysis in proteomics and metabolomics. We present MEDEA (M-Estimator with DEterministic Annealing), an M-estimator based, new unsupervised algorithm that is designed to enforce position-specific constraints on variance during the clustering process. The utility of MEDEA is demonstrated by applying it to the problem of "peak matching"--identifying the common LC-MS peaks across multiple samples--in proteomic biomarker discovery. Using real-life datasets, we show that MEDEA not only outperforms current state-of-the-art model-based clustering methods, but also results in an implementation that is significantly more efficient, and hence applicable to much larger LC-MS data sets. MEDEA is an effective and efficient solution to the problem of peak matching in label-free LC-MS data. The program implementing the MEDEA algorithm, including datasets, clustering results, and supplementary information is available from the author website at http://www.hephy.at/user/fru/medea/.

  9. Clustering with position-specific constraints on variance: Applying redescending M-estimators to label-free LC-MS data analysis

    Directory of Open Access Journals (Sweden)

    Mani D R

    2011-08-01

    Full Text Available Abstract Background Clustering is a widely applicable pattern recognition method for discovering groups of similar observations in data. While there are a large variety of clustering algorithms, very few of these can enforce constraints on the variation of attributes for data points included in a given cluster. In particular, a clustering algorithm that can limit variation within a cluster according to that cluster's position (centroid location can produce effective and optimal results in many important applications ranging from clustering of silicon pixels or calorimeter cells in high-energy physics to label-free liquid chromatography based mass spectrometry (LC-MS data analysis in proteomics and metabolomics. Results We present MEDEA (M-Estimator with DEterministic Annealing, an M-estimator based, new unsupervised algorithm that is designed to enforce position-specific constraints on variance during the clustering process. The utility of MEDEA is demonstrated by applying it to the problem of "peak matching"--identifying the common LC-MS peaks across multiple samples--in proteomic biomarker discovery. Using real-life datasets, we show that MEDEA not only outperforms current state-of-the-art model-based clustering methods, but also results in an implementation that is significantly more efficient, and hence applicable to much larger LC-MS data sets. Conclusions MEDEA is an effective and efficient solution to the problem of peak matching in label-free LC-MS data. The program implementing the MEDEA algorithm, including datasets, clustering results, and supplementary information is available from the author website at http://www.hephy.at/user/fru/medea/.

  10. Blind Recovery of Sparse Signals from Subsampled Convolution

    OpenAIRE

    Lee, Kiryung; Li, Yanjun; Junge, Marius; Bresler, Yoram

    2015-01-01

    Subsampled blind deconvolution is the recovery of two unknown signals from samples of their convolution. To overcome the ill-posedness of this problem, solutions based on priors tailored to specific applications have been developed. In particular, sparsity models have provided promising priors. However, in spite of the empirical success of these methods in many applications, existing analyses are rather limited in two main ways: by disparity between the theoretical assump...

  11. A Sub-Sampling Approach for Data Acquisition in Gamma Ray Emission Tomography

    Science.gov (United States)

    Fysikopoulos, Eleftherios; Kopsinis, Yannis; Georgiou, Maria; Loudos, George

    2016-06-01

    State of the art data acquisition systems for small animal imaging gamma ray detectors often rely on free running Analog to Digital Converters (ADCs) and high density Field Programmable Gate Array (FPGA) devices for digital signal processing. In this work, a sub-sampling acquisition approach is proposed that exploits a priori information regarding the shape of the obtained detector pulses. Output pulse shape depends on the response of the scintillation crystal, the photodetector's properties and the amplifier/shaper operation. Using these known characteristics of the detector pulses prior to digitization, one can model the voltage pulse derived from the shaper (a low-pass filter, last in the front-end electronics chain) in order to reduce the required sampling rate of the ADCs. Pulse shape estimation is then feasible by fitting a small number of measurements. In particular, the proposed sub-sampling acquisition approach relies on a bi-exponential modeling of the pulse shape. We show that the properties of the pulse that are relevant for Single Photon Emission Computed Tomography (SPECT) event detection (i.e., position and energy) can be calculated by collecting just a small fraction of the number of samples usually collected in data acquisition systems used so far. Compared to the standard digitization process, the proposed sub-sampling approach allows the use of free running ADCs with the sampling rate reduced by a factor of 5. Two small detectors consisting of Cerium doped Gadolinium Aluminum Gallium Garnet (Gd3Al2Ga3O12 : Ce or GAGG:Ce) pixelated arrays (array elements: 2 × 2 × 5 mm3 and 1 × 1 × 10 mm3 respectively) coupled to a Position Sensitive Photomultiplier Tube (PSPMT) were used for experimental evaluation. The two detectors were used to obtain raw images and energy histograms under 140 keV and 661.7 keV irradiation respectively. The sub-sampling acquisition technique (10 MHz sampling rate) was compared with a standard acquisition method (52 MHz sampling
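
    A sketch of the bi-exponential idea with scipy, with all constants illustrative rather than the paper's detector values: a handful of sub-sampled points suffices to recover the amplitude and time constants, and the pulse area A*(tau_d - tau_r) then serves as an energy estimate up to a gain factor.

        import numpy as np
        from scipy.optimize import curve_fit

        def pulse(t, A, tau_r, tau_d):
            # Shaper output modeled as a bi-exponential: rise tau_r, decay tau_d (ns).
            return A * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

        t_true = np.linspace(0, 2000.0, 1000)          # dense "true" pulse, in ns
        v_true = pulse(t_true, 1.0, 50.0, 400.0)

        # Sub-sampled acquisition: a 5x slower ADC sees only every 5th sample.
        rng = np.random.default_rng(3)
        t_sub, v_sub = t_true[::5], v_true[::5] + rng.normal(0, 0.005, 200)

        (A, tr, td), _ = curve_fit(pulse, t_sub, v_sub, p0=(0.5, 20.0, 200.0))
        energy = A * (td - tr)   # pulse area tracks deposited energy up to a gain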

  12. The effects of subsampling and sampling frequency on the use of surface-floating pupal exuviae to measure Chironomidae (Diptera) communities in wadeable temperate streams.

    Science.gov (United States)

    Bouchard, Raymond William; Ferrington, Leonard C

    2011-10-01

    Community, diversity, and biological index metrics for chironomid surface-floating pupal exuviae (SFPE) were assessed at different subsample sizes and sampling frequencies in wadeable streams in Minnesota (USA). Timed collections of SFPE were made at a biweekly sampling interval in groundwater-dominated (GWD) and surface-water-dominated (SWD) streams. These two types of stream were sampled because they support different Chironomidae communities with different phenologies, which could necessitate sampling methodologies specific to each stream type. A subsample size of 300 individuals was sufficient to collect on average 85% of total taxa richness and to estimate most metrics with an error of about 1% relative to 1,000-count samples. SWD streams required larger subsample sizes than GWD streams to achieve similar estimates of taxa richness and metric error, but these differences were not large enough to recommend different subsampling methods for the two stream types. Analysis of sample timing determined that 97% of emergence occurred from April through September. We recommend, in studies where estimation of winter emergence is not important, that sampling be limited to this period. Sampling frequency also affected the proportion of the community collected. To maximize the portion of the community collected, samples should be taken across seasons, although no specific sampling interval is recommended. Subsampling and sampling frequency were also assessed simultaneously. When using a 300-count subsample, a 4-week sampling interval from April through September was required to collect on average 71% of the community. Due to differences in the elements of the chironomid community evaluated by different studies (e.g., biological condition, phenology, and taxonomic composition), richness estimates are documented for six sampling intervals (2, 4, 6, 8, 10, and 12 weeks) and five subsample sizes (100, 200, 300, 500, and 1,000 counts). This research will enhance future

  13. Subsampling phase retrieval for rapid thermal measurements of heated microstructures.

    Science.gov (United States)

    Taylor, Lucas N; Talghader, Joseph J

    2016-07-15

    A subsampling technique for real-time phase retrieval of high-speed thermal signals is demonstrated with heated metal lines such as those found in microelectronic interconnects. The thermal signals were produced by applying a current through aluminum resistors deposited on soda-lime-silica glass, and the resulting refractive index changes were measured using a Mach-Zehnder interferometer with a microscope objective and high-speed camera. The temperatures of the resistors were measured both by the phase-retrieval method and by monitoring the resistance of the aluminum lines. The method used to analyze the phase is at least 60× faster than the state of the art but it maintains a small spatial phase noise of 16 nm, remaining comparable to the state of the art. For slowly varying signals, the system is able to perform absolute phase measurements over time, distinguishing temperature changes as small as 2 K. With angular scanning or structured illumination improvements, the system could also perform fast thermal tomography.

  14. Comparison of large networks with sub-sampling strategies

    Science.gov (United States)

    Ali, Waqar; Wegner, Anatol E.; Gaunt, Robert E.; Deane, Charlotte M.; Reinert, Gesine

    2016-07-01

    Networks are routinely used to represent large data sets, making the comparison of networks a tantalizing research question in many areas. Techniques for such analysis vary from simply comparing network summary statistics to sophisticated but computationally expensive alignment-based approaches. Most existing methods either do not generalize well to different types of networks or do not provide a quantitative similarity score between networks. In contrast, alignment-free topology based network similarity scores empower us to analyse large sets of networks containing different types and sizes of data. Netdis is such a score that defines network similarity through the counts of small sub-graphs in the local neighbourhood of all nodes. Here, we introduce a sub-sampling procedure based on neighbourhoods which links naturally with the framework of network comparisons through local neighbourhood comparisons. Our theoretical arguments justify basing the Netdis statistic on a sample of similar-sized neighbourhoods. Our tests on empirical and synthetic datasets indicate that often only 10% of the neighbourhoods of a network suffice for optimal performance, leading to a drastic reduction in computational requirements. The sampling procedure is applicable even when only a small sample of the network is known, and thus provides a novel tool for network comparison of very large and potentially incomplete datasets.
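
    A sketch of neighbourhood sub-sampling with networkx; Netdis proper counts all small subgraphs and applies a centering and normalization omitted here, so the count vector below (edges and triangles only) is a simplified stand-in: sample a fraction of the nodes, extract their two-step ego networks, and summarize each network by average subgraph counts.

        import networkx as nx
        import numpy as np

        def sampled_neighbourhood_counts(G, frac=0.1, radius=2, seed=0):
            # Average small-subgraph counts over a random sample of node
            # neighbourhoods (ego networks) instead of over every node.
            rng = np.random.default_rng(seed)
            nodes = rng.choice(list(G.nodes),
                               size=max(1, int(frac * G.number_of_nodes())),
                               replace=False)
            counts = np.zeros(2)
            for v in nodes:
                ego = nx.ego_graph(G, v, radius=radius)
                counts[0] += ego.number_of_edges()
                counts[1] += sum(nx.triangles(ego).values()) / 3
            return counts / len(nodes)

        G1 = nx.erdos_renyi_graph(500, 0.02, seed=1)
        G2 = nx.barabasi_albert_graph(500, 5, seed=1)
        print(sampled_neighbourhood_counts(G1), sampled_neighbourhood_counts(G2))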

  15. Computer simulations of the ROUSE model: an analytic simulation technique and a comparison between the error variance-covariance and bootstrap methods for estimating parameter confidence.

    Science.gov (United States)

    Huber, David E

    2006-11-01

    This article provides important mathematical descriptions and computer algorithms in relation to the responding optimally with unknown sources of evidence (ROUSE) model of Huber, Shiffrin, Lyle, and Ruys (2001), which has been applied to short-term priming phenomena. In the first section, techniques for obtaining parameter confidence intervals and parameter correlations are described, which are generally applicable to any mathematical model. In the second section, a technique for producing analytic ROUSE predictions is described. Huber et al. (2001) averaged many stochastic trials to obtain stable behavior. By appropriately weighting all possible combinations of feature states, an alternative analytic version is developed, yielding asymptotic model behavior with fewer computations. The third section ties together these separate techniques, obtaining parameter confidence and correlations for the analytic version of the ROUSE model. In doing so, previously unreported behaviors of the model are revealed. In particular, complications due to local minima are discussed, in terms of both variance-covariance analyses and bootstrap sampling analyses.

  16. Choosing the filter for catenary image enhancement method based on the non-subsampled contourlet transform

    Science.gov (United States)

    Wu, Changdong; Liu, Zhigang; Jiang, Hua

    2017-05-01

    The quality of image enhancement plays an important role in catenary fault diagnosis systems based on image processing. It is necessary to enhance low contrast images of the catenary to better detect the state of catenary parts. The Non-subsampled Contourlet transform (NSCT) is an improved Contourlet transform (CT) that effectively eliminates the artifact phenomenon in the enhanced catenary image. The choice of the enhancement function and of the NSCT filter directly influences the enhancement effect. In this paper, the proposed method combines the NSCT with a nonlinear enhancement function to enhance the catenary image. First, how to choose the filter of the NSCT is discussed. Second, the NSCT is used to decompose the image. Then, the chosen nonlinear enhancement function is used to process the decomposed NSCT coefficients. Finally, the NSCT is inverted to obtain the enhanced image. We evaluate our algorithm against the lifting wavelet transform, the retinex enhancement method, the dark channel enhancement method, the curvelet transform, and the CT method on a group of randomly selected low contrast catenary images. The comparative experiments show that the proposed method effectively enhances the catenary image: the contrast is improved, the catenary parts are distinct, the artifact phenomenon is effectively eliminated, and image details (edges, textures, and smooth areas) are well preserved. Moreover, the metrics of enhancement capacity (detail variance-background variance, signal-to-noise ratio, and edge preservation index) improve, while the mean squared error decreases, compared to the CT method. These results indicate that the proposed method is an excellent catenary image enhancement approach.

  18. Subsampled open-reference clustering creates consistent, comprehensive OTU definitions and scales to billions of sequences

    Directory of Open Access Journals (Sweden)

    Jai Ram Rideout

    2014-08-01

    Full Text Available We present a performance-optimized algorithm, subsampled open-reference OTU picking, for assigning marker gene (e.g., 16S rRNA) sequences generated on next-generation sequencing platforms to operational taxonomic units (OTUs) for microbial community analysis. This algorithm provides benefits over de novo OTU picking (clustering can be performed largely in parallel, reducing runtime) and over closed-reference OTU picking (all reads are clustered, not only those that match a reference database sequence with high similarity). Because more of our algorithm can be run in parallel relative to “classic” open-reference OTU picking, it makes open-reference OTU picking tractable on massive amplicon sequence data sets (though on smaller data sets, “classic” open-reference OTU clustering is often faster). We illustrate that here by applying it to the first 15,000 samples sequenced for the Earth Microbiome Project (1.3 billion V4 16S rRNA amplicons). To the best of our knowledge, this is the largest OTU picking run ever performed, and we estimate that our new algorithm runs in less than 1/5 the time that would be required by “classic” open-reference OTU picking. We show that subsampled open-reference OTU picking yields results that are highly correlated with those generated by “classic” open-reference OTU picking through comparisons on three well-studied datasets. An implementation of this algorithm is provided in the popular QIIME software package, which uses uclust for read clustering. All analyses were performed using QIIME’s uclust wrappers, though we provide details (aided by the open-source code in our GitHub repository) that will allow implementation of subsampled open-reference OTU picking independently of QIIME (e.g., in a compiled programming language, where runtimes should be further reduced). Our analyses should generalize to other implementations of these OTU picking algorithms. Finally, we present a comparison of parameter settings in

  19. Parametric, bootstrap, and jackknife variance estimators for the k-Nearest Neighbors technique with illustrations using forest inventory and satellite image data

    Science.gov (United States)

    Ronald E. McRoberts; Steen Magnussen; Erkki O. Tomppo; Gherardo. Chirici

    2011-01-01

    Nearest neighbors techniques have been shown to be useful for estimating forest attributes, particularly when used with forest inventory and satellite image data. Published reports of positive results have been truly international in scope. However, for these techniques to be more useful, they must be able to contribute to scientific inference which, for sample-based...

  20. Denoising of high resolution small animal 3D PET data using the non-subsampled Haar wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Ochoa Domínguez, Humberto de Jesús, E-mail: hochoa@uacj.mx [Departamento de Ingeniería Eléctrica y computación, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, Chih. (Mexico); Máynez, Leticia O. [Departamento de Ingeniería Eléctrica y computación, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, Chih. (Mexico); Vergara Villegas, Osslan O. [Departamento de Ingeniería Industrial, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, Chih. (Mexico); Mederos, Boris; Mejía, José M.; Cruz Sánchez, Vianey G. [Departamento de Ingeniería Eléctrica y computación, Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, Chih. (Mexico)

    2015-06-01

    PET allows functional imaging of living tissue. However, one of the most serious technical problems affecting the reconstructed data is noise, particularly in images of small animals. In this paper, a method for denoising high-resolution small animal 3D PET data is proposed, with the aim of reducing noise while preserving details. The method is based on the estimation of the non-subsampled Haar wavelet coefficients using a linear estimator. The procedure is applied to the volumetric images, reconstructed without correction factors (plane reconstruction). Results show that the method preserves structures and drastically reduces the noise that contaminates the image.
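
    A rough 2-D stand-in using PyWavelets' stationary (undecimated, i.e. non-subsampled) Haar transform with a simple Wiener-style linear shrinkage of the detail subbands; the paper's linear estimator for 3-D volumes differs, and the norm flag assumes a recent pywt version.

        import numpy as np
        import pywt

        def nsht_denoise(img, noise_sigma, levels=2):
            # Undecimated 2-D Haar transform, then scale each detail subband by
            # var_signal / (var_signal + var_noise) -- a linear (Wiener-like)
            # estimator standing in for the paper's exact 3-D estimator.
            coeffs = pywt.swt2(img, "haar", level=levels, norm=True)
            shrunk = []
            for cA, (cH, cV, cD) in coeffs:
                bands = []
                for c in (cH, cV, cD):
                    sig = max(c.var() - noise_sigma**2, 0.0)   # signal power estimate
                    bands.append(c * sig / (sig + noise_sigma**2))
                shrunk.append((cA, tuple(bands)))
            return pywt.iswt2(shrunk, "haar", norm=True)

        rng = np.random.default_rng(4)
        clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0   # toy "hot" region
        noisy = clean + rng.normal(0, 0.3, clean.shape)
        denoised = nsht_denoise(noisy, noise_sigma=0.3)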

  1. Nyström type subsampling analyzed as a regularized projection

    Science.gov (United States)

    Kriukova, Galyna; Pereverzyev, Sergiy, Jr.; Tkachenko, Pavlo

    2017-07-01

    In statistical learning theory, Nyström-type subsampling methods are considered as tools for dealing with big data. In this paper we consider Nyström subsampling as a special form of the projected Lavrentiev regularization, and study it using the approaches developed in regularization theory. As a result, we prove that the same capacity-independent learning rates that are guaranteed for standard algorithms running with quadratic computational complexity can be obtained with subquadratic complexity by the Nyström subsampling approach, provided that the subsampling size is chosen properly. We propose an a priori rule for choosing the subsampling size and an a posteriori strategy for dealing with uncertainty in the choice of it. The theoretical results are illustrated by numerical experiments.
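
    A minimal sketch of plain Nyström sub-sampling for kernel ridge regression; the paper's projected Lavrentiev variant and its rules for choosing the subsampling size are simplified here to a fixed m. The point is that m landmark columns replace the full n-by-n kernel matrix, cutting the cost from O(n^3) to O(n m^2).

        import numpy as np

        def nystroem_krr(X, y, m, lam, gamma, rng):
            # Gaussian-kernel ridge regression restricted to m random landmarks:
            # solve (K_nm^T K_nm + lam * K_mm) alpha = K_nm^T y  (projected problem).
            landmarks = X[rng.choice(len(X), m, replace=False)]
            K_nm = np.exp(-gamma * ((X[:, None] - landmarks[None]) ** 2).sum(-1))
            K_mm = np.exp(-gamma * ((landmarks[:, None] - landmarks[None]) ** 2).sum(-1))
            alpha = np.linalg.solve(K_nm.T @ K_nm + lam * K_mm, K_nm.T @ y)
            return lambda Z: np.exp(
                -gamma * ((Z[:, None] - landmarks[None]) ** 2).sum(-1)) @ alpha

        rng = np.random.default_rng(5)
        X = rng.uniform(-3, 3, (2000, 1))
        y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 2000)
        f = nystroem_krr(X, y, m=50, lam=1e-3, gamma=1.0, rng=rng)
        print(f(np.array([[0.5]])))   # ~ sin(0.5)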

  2. Heterogeneity of variance and its implications on dairy cattle breeding

    African Journals Online (AJOL)

    ... and evaluated for within herd variation using univariate animal model procedures. Variance components were estimated by derivative free REML algorithm, and significance tests done using the Fmax procedure. Phenotypic, additive genetic and residual variances were heterogeneous across production environments.

  3. A General Model for Repeated Audit Controls Using Monotone Subsampling

    NARCIS (Netherlands)

    Raats, V.M.; van der Genugten, B.B.; Moors, J.J.A.

    2002-01-01

    In categorical repeated audit controls, fallible auditors classify sample elements in order to estimate the population fraction of elements in certain categories. To take possible misclassifications into account, subsequent checks are performed with a decreasing number of observations. In this paper a

  4. Chroma Subsampling Influence on the Perceived Video Quality for Compressed Sequences in High Resolutions

    Directory of Open Access Journals (Sweden)

    Miroslav Uhrina

    2017-01-01

    Full Text Available This paper deals with the influence of chroma subsampling on perceived video quality measured by subjective metrics. The evaluation was done for the two most used video codecs, H.264/AVC and H.265/HEVC. Eight types of video sequences, differing in content, were tested at Full HD and Ultra HD resolutions. The experimental results showed that observers did not see a difference between unsubsampled and subsampled sequences, so using subsampled videos is preferable, as up to 50% of the data can be saved. The minimum bitrates to achieve good and fair quality for each codec and resolution were also determined.
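
    For intuition, 4:2:0 subsampling keeps full-resolution luma and averages each 2x2 chroma block, halving the chroma resolution in each dimension. A sketch using the BT.601 full-range conversion (a random image is a worst case for chroma error, so the PSNR here is a lower bound for natural content):

        import numpy as np

        def to_ycbcr(rgb):
            # BT.601 full-range RGB -> YCbCr (floats).
            m = np.array([[0.299, 0.587, 0.114],
                          [-0.168736, -0.331264, 0.5],
                          [0.5, -0.418688, -0.081312]])
            ycc = rgb @ m.T
            ycc[..., 1:] += 128.0
            return ycc

        def subsample_420(ycc):
            # 4:2:0: average each 2x2 chroma block, then replicate back up.
            out = ycc.copy()
            for c in (1, 2):
                ch = ycc[..., c]
                small = ch.reshape(ch.shape[0] // 2, 2,
                                   ch.shape[1] // 2, 2).mean(axis=(1, 3))
                out[..., c] = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
            return out

        rng = np.random.default_rng(6)
        img = rng.integers(0, 256, (64, 64, 3)).astype(float)
        ycc = to_ycbcr(img)
        err = ycc - subsample_420(ycc)
        psnr = 10 * np.log10(255**2 / np.mean(err**2))  # distortion from chroma alone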

  5. Variance in the chemical composition of dry beans determined from UV spectral fingerprints

    Science.gov (United States)

    Nine varieties of dry beans representing 5 market classes were grown in 3 states (Maryland, Michigan, and Nebraska) and sub-samples were collected for each variety (row composites from each plot). Aqueous methanol extracts were analyzed in triplicate by UV spectrophotometry. Analysis of variance-p...

  6. Spectral Ambiguity of Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.

  7. APOLLO 15 HEAT FLOW THERMAL CONDUCTIVITY RDR SUBSAMPLED V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set comprises a reduced, subsampled set of the data returned from the Apollo 15 Heat Flow Experiment from 31 July 1971 through 31 December 1974. The...

  8. A sub-sampled approach to extremely low-dose STEM

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, A. [OptimalSensing, Southlake, Texas 76092, USA; Duke University, ECE, Durham, North Carolina 27708, USA; Luzi, L. [Rice University, ECE, Houston, Texas 77005, USA; Yang, H. [Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA; Kovarik, L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Mehdi, B. L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom; Liyu, A. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Gehm, M. E. [Duke University, ECE, Durham, North Carolina 27708, USA; Browning, N. D. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom

    2018-01-22

    The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e-/Å2) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam sensitive materials and in-situ dynamic processes at the resolution limit of the aberration corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
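
    The acquisition side is easy to emulate; a sketch with a cheap interpolation stand-in for the paper's dictionary-learning inpainting (the test image, noise level, and 20% sampling fraction are all illustrative):

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(8)
        img = np.fromfunction(lambda i, j: np.sin(i / 6) * np.cos(j / 9), (128, 128))

        # Random (non-adaptive) sub-sampling: visit only 20% of the probe
        # positions, cutting the total electron dose by the same factor.
        mask = rng.uniform(size=img.shape) < 0.2
        coords = np.argwhere(mask)
        values = img[mask] + rng.normal(0, 0.05, mask.sum())  # noisy measurements

        # Linear interpolation on the scattered measurements; the actual work
        # uses dictionary-learning inpainting. NaNs remain outside the convex
        # hull of the sampled points.
        grid_i, grid_j = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        recon = griddata(coords, values, (grid_i, grid_j), method="linear")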

  9. A novel case-control subsampling approach for rapid model exploration of large clustered binary data.

    Science.gov (United States)

    Wright, Stephen T; Ryan, Louise M; Pham, Tung

    2017-12-11

    In many settings, an analysis goal is the identification of a factor, or set of factors associated with an event or outcome. Often, these associations are then used for inference and prediction. Unfortunately, in the big data era, the model building and exploration phases of analysis can be time-consuming, especially if constrained by computing power (ie, a typical corporate workstation). To speed up this model development, we propose a novel subsampling scheme to enable rapid model exploration of clustered binary data using flexible yet complex model set-ups (GLMMs with additive smoothing splines). By reframing the binary response prospective cohort study into a case-control-type design, and using our knowledge of sampling fractions, we show one can approximate the model estimates as would be calculated from a full cohort analysis. This idea is extended to derive cluster-specific sampling fractions and thereby incorporate cluster variation into an analysis. Importantly, we demonstrate that previously computationally prohibitive analyses can be conducted in a timely manner on a typical workstation. The approach is applied to analysing risk factors associated with adverse reactions relating to blood donation. Copyright © 2017 John Wiley & Sons, Ltd.
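
    The intercept-correction trick behind this idea can be shown with plain logistic regression (the paper works with GLMMs and smoothing splines, which this sketch does not attempt): keep every case, sample controls at a known fraction f, fit, then subtract log(1/f) from the fitted intercept.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 200_000
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-(-4.0 + 0.8 * x)))   # rare outcome; true betas -4, 0.8
        y = rng.binomial(1, p)

        # Keep every case, but only a fraction f of the controls.
        f = 0.02
        keep = (y == 1) | (rng.uniform(size=n) < f)
        xs, ys = x[keep], y[keep]

        fit = sm.GLM(ys, sm.add_constant(xs), family=sm.families.Binomial()).fit()
        beta0_corrected = fit.params[0] + np.log(f)  # undo case-control enrichment
        print(beta0_corrected, fit.params[1])        # ~ -4.0 and ~0.8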

  10. Cross-sectional study of HPV-16 infection in a population-based subsample of Hispanic adults

    Science.gov (United States)

    Ortiz, A P; Unger, E R; Muñoz, C; Panicker, G; Tortolero-Luna, G; Soto-Salgado, M; Otero, Y; Suárez, E; Pérez, C M

    2014-01-01

    Objective This study aimed to estimate the prevalence and correlates of seropositivity to human papillomavirus (HPV)-16 in a subsample of adults who participated in the parent study Epidemiology of Hepatitis C in the adult population of Puerto Rico (PR). Setting The parent study was a population-based household survey aimed to estimate the seroprevalence of hepatitis C and other viral infections (hepatitis A, hepatitis B, HIV, and herpes simplex type 2) in PR (n=1654) between 2005 and 2008. Participants A subsample of the last 450 consecutive adults aged 21–64 years, recruited between February 2007 and January 2008, who participated in the parent study and agreed to participate in HPV testing. Primary and secondary outcome measures The samples were tested by ELISA for HPV-16 viral-like particle-specific immunoglobulin G. Information on sociodemographic, health, and lifestyle characteristics was collected. Logistic regression modelling was used to estimate the prevalence odds ratio (POR) to assess factors associated with HPV-16 seropositivity. Results Prevalence of seropositivity to HPV-16 was 11.3%. Seroprevalence was higher in women (15.8%) than men (5.6%; p=0.001). After adjusting for age and sex, ever smokers (POR 2.06, 95% CI 1.08 to 3.92) and participants with at least five lifetime sexual partners (POR 2.91, 95% CI 1.24 to 6.81) were more likely to be HPV-16 seropositive. Conclusions HPV-16 seropositivity is similar to that reported in the USA (10.4%) for NHANES 2003–2004 participants, although different assays were used in these studies. While future studies should evaluate HPV seroprevalence using a larger population-based sample, our results highlight the need to further understand the burden of HPV infection and HPV-related malignancies in PR, a population with low vaccine uptake. PMID:24496698

  11. Freezing-thawing and sub-sampling influence the marination performance of chicken breast meat

    Science.gov (United States)

    Vacuum-tumbling marination is often used to improve the yield and quality of whole or portioned boneless broiler breast fillets. The relationship between the marination performance of whole Pectoralis major muscles and breast fillet sub-samples is not well understood. The objective of this study wa...

  12. MODIS/Terra Level 1B Subsampled Calibrated Radiance 5Km - NRT

    Data.gov (United States)

    National Aeronautics and Space Administration — This Near Real Time (NRT) data type (MOD02SSH) is a subsample from the MODIS Level 1B 1-km data. Every fifth pixel is taken from the MOD021KM product and written out...

  13. MODIS/Terra Near Real Time (NRT) Level 1B Subsampled Calibrated Radiance 5Km

    Data.gov (United States)

    National Aeronautics and Space Administration — This Near Real Time (NRT) data type (MOD02SSH) is a subsample from the MODIS Level 1B 1-km data. Every fifth pixel is taken from the MOD021KM product and written out...

  14. MODIS/Aqua Near Real Time (NRT) Level 1B Subsampled Calibrated Radiance 5Km

    Data.gov (United States)

    National Aeronautics and Space Administration — This Near Real Time (NRT) data type (MYD02SSH) is a subsample from the MODIS Level 1B 1-km data. Every fifth pixel is taken from the MYD021KM product and written out...

  15. MODIS/Aqua Level 1B Subsampled Calibrated Radiance 5Km - NRT

    Data.gov (United States)

    National Aeronautics and Space Administration — This Near Real Time (NRT) data type (MYD02SSH) is a subsample from the MODIS Level 1B 1-km data. Every fifth pixel is taken from the MYD021KM product and written out...

  16. Directional variance analysis of annual rings

    Science.gov (United States)

    Kumpulainen, P.; Marjanen, K.

    2010-07-01

    Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products with higher market value than today. One of the key factors for increasing market value is to provide better measurements, yielding more information to support the decisions made later in the product chain. Strength and stiffness are important properties of wood. They are related to the mean annual ring width and its deviation. These indicators can be estimated from images taken of the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log end variance analysis based on the Radon transform is proposed. The directions and positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log end analysis; it is usable in other two-dimensional random signal and texture analysis tasks.
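
    A sketch of the directional-variance idea with scikit-image, using a synthetic striped patch as a stand-in for a log-end image: project the patch at each angle with the Radon transform and locate the angle at which the projection variance peaks.

        import numpy as np
        from skimage.transform import radon

        def ring_orientation(patch):
            # The Radon transform integrates the patch along each angle; the
            # projection taken across the rings oscillates most, so the variance
            # of the projections over angle peaks near the ring orientation.
            theta = np.arange(0.0, 180.0)
            sinogram = radon(patch, theta=theta, circle=False)  # (bins, angles)
            variances = sinogram.var(axis=0)
            return theta[np.argmax(variances)], variances

        # Toy patch: vertical stripes mimicking rings about 8 pixels wide.
        xx, yy = np.meshgrid(np.arange(128), np.arange(128))
        patch = np.sin(2 * np.pi * xx / 8.0)
        angle, _ = ring_orientation(patch - patch.mean())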

  17. Realized Variance and Market Microstructure Noise

    DEFF Research Database (Denmark)

    Hansen, Peter R.; Lunde, Asger

    2006-01-01

    We study market microstructure noise in high-frequency data and analyze its implications for the realized variance (RV) under a general specification for the noise. We show that kernel-based estimators can unearth important characteristics of market microstructure noise and that a simple kernel-based estimator dominates the RV for estimation of integrated variance based on high-frequency data. Finally, we apply cointegration techniques to decompose transaction prices and bid-ask quotes into an estimate of the efficient price and noise. This framework enables us to study the dynamic effects on transaction prices and quotes caused by changes in the efficient price.

  18. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding to these two steps.

  19. Compact multipurpose sub-sampling and processing of in-situ cores with press (pressurized core sub-sampling and extrusion system)

    Energy Technology Data Exchange (ETDEWEB)

    Anders, E.; Muller, W.H. [Technical Univ. of Berlin, Berlin (Germany). Chair of Continuum Mechanics and Material Theory

    2008-07-01

    Climate change, declining resources and over-consumption result in a need for sustainable resource allocation and habitat conservation, and call for new technologies and prospects for damage containment. In order to increase knowledge of the environment and to define potential hazards, it is necessary to gain an understanding of the deep biosphere. In addition, the benthic conditions of sediment structure and gas hydrates, temperature, pressure and bio-geochemistry must be maintained during the sequences of sampling, retrieval, transfer, storage and downstream analysis. In order to investigate highly unstable gas hydrates, which decompose under pressure and temperature changes, a suite of research technologies has been developed by the Technische Universität Berlin (TUB), Germany. This includes the pressurized core sub-sampling and extrusion system (PRESS), developed in the European Union project HYACE/HYACINTH. The project enabled well-defined sectioning and transfer of drilled pressure cores obtained by a rotary corer and the Fugro pressure corer into transportation and investigation chambers. This paper described HYACINTH pressure coring and the HYACINTH core transfer. Autoclave coring tools and HYACINTH core logging, coring tools, and sub-sampling were also discussed. It was concluded that possible future applications include, but are not limited to, research in shales and other tight formations, carbon dioxide sequestration, oil and gas exploration, coalbed methane, and microbiology of the deep biosphere. To meet the corresponding requirements and to incorporate the experiences from previous expeditions, the pressure coring system would need to be redesigned to adapt it to the new applications. 3 refs., 5 figs.

  20. Generalized analysis of molecular variance.

    Directory of Open Access Journals (Sweden)

    Caroline M Nievergelt

    2007-04-01

    Full Text Available Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used to either estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. We examine features, advantages, and power of the proposed procedure and showcase its flexibility by

  1. Small Drinking Water System Variances

    Science.gov (United States)

    Small system variances allow a small system to install and maintain technology that can remove a contaminant to the maximum extent that is affordable and protective of public health in lieu of technology that can achieve compliance with the regulation.

  2. Estimators of variance components in the augmented block design with new treatments from one or more populations

    Directory of Open Access Journals (Sweden)

    João Batista Duarte

    2001-09-01

    This work compares, by simulation, estimates of variance components produced by the ANOVA (analysis of variance), ML (maximum likelihood), REML (restricted maximum likelihood) and MIVQUE(0) (minimum variance quadratic unbiased estimator) methods for the augmented block design with additional treatments (progenies) stemming from one or more origins (crosses). Results showed the superiority of the MIVQUE(0) estimation. The ANOVA method, although unbiased, produced the estimates with the lowest precision. The ML and REML methods produced downward-biased estimates of the error variance and upward-biased estimates of the genotypic variances, particularly the ML method. Biases for the REML estimation became negligible when progenies were derived from a single cross and the experiments were of larger size, with variance ratios above 0.5. This method, however, provided the worst estimates of the genotypic variances when progenies were derived from several crosses and the experiments were of small size (n < 120 observations).
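
    For illustration of the simplest of the four methods compared above, the one-way ANOVA estimators of the genotypic and error variance components can be written in a few lines of Python; the progeny and replicate numbers below are hypothetical and the sketch ignores the augmented-block structure of the actual study:

        import numpy as np

        rng = np.random.default_rng(1)
        n_prog, n_rep = 50, 4                     # hypothetical progenies, replicates
        sigma2_g, sigma2_e = 0.5, 1.0             # true components
        g = rng.normal(0.0, np.sqrt(sigma2_g), n_prog)
        y = g[:, None] + rng.normal(0.0, np.sqrt(sigma2_e), (n_prog, n_rep))

        msw = y.var(axis=1, ddof=1).mean()        # within-progeny mean square
        msb = n_rep * y.mean(axis=1).var(ddof=1)  # between-progeny mean square
        print("ANOVA estimates:", (msb - msw) / n_rep, msw)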

  3. Estimates of Variances and Covariances and Genetic Trends for Body Weights in a Canchim Herd

    Directory of Open Access Journals (Sweden)

    Silvio de Paula Mello

    2002-07-01

    The objectives of this study were to estimate (co)variance components and genetic trends for body weight at birth (BW), at weaning (WW) and at twelve months of age (YW) in a Canchim (5/8 Charolais + 3/8 Zebu) herd. Data on 6,517 animals, born from 1953 through 1996, were used to estimate breeding values by the derivative-free restricted maximum likelihood method, with a model that included the fixed effects of contemporary group (year/season of birth/sex of calf) and the covariate age of dam at calving (linear and quadratic effects), and the random direct and maternal genetic and permanent environmental effects. Genetic trends for the direct, maternal and total maternal effects were estimated by weighted regression of the annual (or generation) means of the direct, maternal and total maternal breeding values on year of birth (or generation). Direct heritability estimates were 0.39, 0.48 and 0.63 for BW, WW and YW, respectively, while the corresponding maternal heritability estimates were 0.03, 0.04 and 0.05. Annual direct genetic trends were 0.046, 1.336 and 1.619 kg for BW, WW and YW, respectively, about 0.13, 0.66 and 0.75% of the herd means. Per generation, the trends were, in the same order, 0.269, 7.715 and 9.599 kg. Maternal and total maternal genetic trends were, in general, linear and positive. The results showed that the selection criteria used resulted in genetic progress for BW, WW and YW; the progress obtained, however, was well below what was possible.
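
    The genetic-trend computation described above amounts to a weighted regression of annual means of breeding values on year of birth. A minimal sketch with simulated annual means (the EBVs, counts and trend below are invented, not the Canchim data):

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1953, 1997)
        # invented annual means of direct breeding values (kg) and progeny counts
        ebv = 0.05 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)
        n = rng.integers(20, 200, years.size).astype(float)

        xm = np.average(years, weights=n)
        ym = np.average(ebv, weights=n)
        trend = np.sum(n * (years - xm) * (ebv - ym)) / np.sum(n * (years - xm) ** 2)
        print(f"weighted genetic trend: {trend:.3f} kg/year")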

  4. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...

  5. Multivariate Variance Targeting in the BEKK-GARCH Model

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard; Rahbek, Anders

    2014-01-01

    This paper considers asymptotic inference in the multivariate BEKK model based on (co-)variance targeting (VT). By definition the VT estimator is a two-step estimator and the theory presented is based on expansions of the modified likelihood function, or estimating function, corresponding...
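
    A minimal sketch of variance targeting in a drastically simplified scalar BEKK(1,1), where step one fixes the unconditional covariance at the sample covariance S and step two maximizes the Gaussian likelihood over the two dynamics parameters; the data are simulated and this is not the estimator studied in the paper:

        import numpy as np
        from scipy.optimize import minimize

        def nll_scalar_bekk_vt(params, returns, S):
            # Gaussian negative log-likelihood of a scalar BEKK(1,1) with
            # variance targeting: H_t = (1-a-b)*S + a*r_{t-1}r_{t-1}' + b*H_{t-1}
            a, b = params
            if a < 0 or b < 0 or a + b >= 1:
                return np.inf
            H, nll = S.copy(), 0.0
            for r in returns:
                sign, logdet = np.linalg.slogdet(H)
                nll += 0.5 * (logdet + r @ np.linalg.solve(H, r))
                H = (1 - a - b) * S + a * np.outer(r, r) + b * H
            return nll

        rng = np.random.default_rng(0)
        returns = rng.normal(size=(500, 2))       # placeholder return series
        S = np.cov(returns, rowvar=False)         # step 1: targeting
        res = minimize(nll_scalar_bekk_vt, x0=[0.05, 0.90],
                       args=(returns, S), method="Nelder-Mead")
        print("estimated (a, b):", res.x)         # step 2: dynamics parameters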

  6. Revision: Variance Inflation in Regression

    Directory of Open Access Journals (Sweden)

    D. R. Jensen

    2013-01-01

    the intercept; and (iv) variance deflation may occur, where ill-conditioned data yield smaller variances than their orthogonal surrogates. Conventional VIFs have all regressors linked, or none, often untenable in practice. Beyond these, our models enable the unlinking of regressors that can be unlinked, while preserving dependence among those intrinsically linked. Moreover, known collinearity indices are extended to encompass angles between subspaces of regressors. To reassess ill-conditioned data, we consider case studies ranging from elementary examples to data from the literature.
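
    For reference, the conventional VIF that the paper critiques is the reciprocal of 1 - R-squared from regressing each (standardized) regressor on the others; a small self-contained computation, with made-up data:

        import numpy as np

        def vifs(X):
            # conventional variance inflation factors, one per column of X
            X = (X - X.mean(0)) / X.std(0)
            out = []
            for j in range(X.shape[1]):
                others = np.delete(X, j, axis=1)
                beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
                resid = X[:, j] - others @ beta
                r2 = 1.0 - (resid @ resid) / (X[:, j] @ X[:, j])
                out.append(1.0 / (1.0 - r2))
            return np.array(out)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=200)  # near-collinear pair
        print(vifs(X).round(1))                         # large VIFs for columns 0 and 2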

  7. A special core liner for sub-sampling of aqueous sediments

    Digital Repository Service at National Institute of Oceanography (India)

    Valsangkar, A.B.

    relatively short (<60 cm) samples of the sea floor [1]. The modern gravity corers are 6 m long against 4.6 m in the initial stages [2], and the piston corers are 10–15 m long against 6.1 m earlier [2]. Considering the changes in temperature and pressure... in use for sub-sampling the cored sediments. Generally, the long (6 m or more) core liner(s) from the gravity or piston corer are cut into 1 m sections, capped and stored in a refrigerated room at 1–4°C for handling and preservation purposes [3]...

  8. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance

  9. Sub-Sampling Framework Comparison for Low-Power Data Gathering: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Bojan Milosevic

    2015-03-01

    A key design challenge for successful wireless sensor network (WSN) deployment is a good balance between the collected data resolution and the overall energy consumption. In this paper, we present a WSN solution developed to efficiently satisfy the requirements for long-term monitoring of a historical building. The hardware of the sensor nodes and the network deployment are described and used to collect the data. To improve the network's energy efficiency, we developed and compared two approaches, sharing similar sub-sampling strategies and data reconstruction assumptions: one is based on compressive sensing (CS) and the second is a custom data-driven latent variable-based statistical model (LV). Both approaches take advantage of the multivariate nature of the data collected by a heterogeneous sensor network and reduce the sampling frequency at sub-Nyquist levels. Our comparative analysis highlights the advantages and limitations: signal reconstruction performance is assessed jointly with network-level energy reduction. The performed experiments include detailed performance and energy measurements on the deployed network and explore how the different parameters can affect the overall data accuracy and the energy consumption. The results show how the CS approach achieves better reconstruction accuracy and overall efficiency, with the exception of cases with really aggressive sub-sampling policies.

  10. Sub-sampling framework comparison for low-power data gathering: a comparative analysis.

    Science.gov (United States)

    Milosevic, Bojan; Caione, Carlo; Farella, Elisabetta; Brunelli, Davide; Benini, Luca

    2015-03-02

    A key design challenge for successful wireless sensor network (WSN) deployment is a good balance between the collected data resolution and the overall energy consumption. In this paper, we present a WSN solution developed to efficiently satisfy the requirements for long-term monitoring of a historical building. The hardware of the sensor nodes and the network deployment are described and used to collect the data. To improve the network's energy efficiency, we developed and compared two approaches, sharing similar sub-sampling strategies and data reconstruction assumptions: one is based on compressive sensing (CS) and the second is a custom data-driven latent variable-based statistical model (LV). Both approaches take advantage of the multivariate nature of the data collected by a heterogeneous sensor network and reduce the sampling frequency at sub-Nyquist levels. Our comparative analysis highlights the advantages and limitations: signal reconstruction performance is assessed jointly with network-level energy reduction. The performed experiments include detailed performance and energy measurements on the deployed network and explore how the different parameters can affect the overall data accuracy and the energy consumption. The results show how the CS approach achieves better reconstruction accuracy and overall efficiency, with the exception of cases with really aggressive sub-sampling policies.
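
    A generic sketch of the CS-style reconstruction idea used in studies of this kind (random sub-Nyquist sampling plus sparse recovery); the basis, penalty and iteration counts are arbitrary choices for illustration and have nothing to do with the deployed system:

        import numpy as np
        from scipy.fft import dct, idct

        rng = np.random.default_rng(0)
        n, m = 256, 64                               # signal length, samples kept
        t = np.arange(n)
        x = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.cos(2 * np.pi * 12 * t / n)
        idx = np.sort(rng.choice(n, size=m, replace=False))
        y = x[idx]                                   # sub-Nyquist measurements

        # ISTA for min_c 0.5*||idct(c)[idx] - y||^2 + lam*||c||_1
        c, lam = np.zeros(n), 0.01
        for _ in range(1000):
            resid = idct(c, norm="ortho")[idx] - y
            grad = np.zeros(n)
            grad[idx] = resid
            c = c - dct(grad, norm="ortho")          # step 1 (operator norm <= 1)
            c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold

        x_hat = idct(c, norm="ortho")
        print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))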

  11. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
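
    The unconstrained minimum variance weights that such studies start from have the closed form w = Σ⁻¹1 / (1'Σ⁻¹1); a sketch with a sample covariance matrix and simulated returns (short positions allowed, none of the 130/30 or GARCH refinements):

        import numpy as np

        def min_variance_weights(cov):
            # closed-form unconstrained minimum variance portfolio: w ~ inv(Cov) @ 1
            ones = np.ones(cov.shape[0])
            w = np.linalg.solve(cov, ones)
            return w / w.sum()

        rng = np.random.default_rng(0)
        returns = rng.normal(0.0005, 0.02, size=(1000, 10))  # placeholder daily returns
        w = min_variance_weights(np.cov(returns, rowvar=False))
        print(w.round(3), "sum:", w.sum())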

  12. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
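
    The Sobol-Hoeffding machinery referred to above can be illustrated on a deterministic toy function with the standard Saltelli pick-freeze estimator of first-order indices; this generic sketch does not reproduce the paper's Poisson-process reformulation:

        import numpy as np

        def sobol_first_order(f, d, n=20000, rng=None):
            # Saltelli 'pick-freeze' Monte Carlo estimate of first-order indices
            rng = rng or np.random.default_rng(0)
            A, B = rng.random((n, d)), rng.random((n, d))
            fA, fB = f(A), f(B)
            total_var = np.var(np.concatenate([fA, fB]))
            S = []
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]            # freeze input i from the B sample
                S.append(np.mean(fB * (f(ABi) - fA)) / total_var)
            return np.array(S)

        # toy model standing in for a reaction-network output
        print(sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1] * X[:, 2], d=3))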

  13. Variance based OFDM frame synchronization

    Directory of Open Access Journals (Sweden)

    Z. Fedra

    2012-04-01

    The paper deals with a new frame synchronization scheme for OFDM systems and calculates the complexity of this scheme. The scheme is based on computing the variance within a detection window. The variance is computed at two delayed time instants, so a modified Early-Late loop is used for the frame position detection. The proposed algorithm handles different variants of OFDM parameters, including the guard interval and cyclic prefix, and has good properties regarding the choice of the algorithm's parameters, since the parameters may be chosen within a wide range without having a high influence on system performance. The functionality of the proposed algorithm has been verified on a development environment using universal software radio peripheral (USRP) hardware.
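
    The core operation, a variance computed over a sliding detection window, can be written with cumulative sums; the threshold decision below is a stand-in for the modified Early-Late loop of the paper and all signal parameters are invented:

        import numpy as np

        def sliding_variance(x, w):
            # variance of x over each length-w window, via cumulative moments
            c1 = np.cumsum(np.insert(x, 0, 0.0))
            c2 = np.cumsum(np.insert(x ** 2, 0, 0.0))
            mean = (c1[w:] - c1[:-w]) / w
            return (c2[w:] - c2[:-w]) / w - mean ** 2

        rng = np.random.default_rng(0)
        # toy burst: low-power noise, then a frame with much larger variance
        sig = np.concatenate([0.1 * rng.normal(size=300), rng.normal(size=700)])
        v = sliding_variance(np.abs(sig), w=64)
        start = int(np.argmax(v > 0.5 * v.max()))   # crude threshold detector
        print("frame start detected near sample", start)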

  14. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances

    OpenAIRE

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2013-01-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analys...
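
    A minimal sketch of the overall idea, running Levene's test per cohort and combining the resulting p-values with a weighted Stouffer method; the cohort sizes, effect and weights are hypothetical, and the actual procedure in the paper meta-analyzes the test without exchanging individual-level data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        pvals, weights = [], []
        for n in (150, 200, 250, 300, 400):      # five hypothetical cohorts
            g0 = rng.normal(0, 1.0, n)           # one genotype group
            g1 = rng.normal(0, 1.3, n)           # other group, inflated variance
            _, p = stats.levene(g0, g1)          # per-cohort Levene's test
            pvals.append(p)
            weights.append(np.sqrt(2 * n))       # size-based weights
        stat, p = stats.combine_pvalues(pvals, method="stouffer", weights=weights)
        print("meta-analytic p-value:", p)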

  15. Monitoring structural change in variance

    OpenAIRE

    Carsoule, F.; Franses, Ph.H.B.F.

    1999-01-01

    In this paper we propose a sequential testing approach for a structural change in the variance of a time series, which amounts to a procedure with a controlled asymptotic size as we repeat the test. Our approach builds on that taken in Chu, Stinchcombe and White (1996) for structural change in the parameters of a linear regression model. We provide simulation evidence to examine the empirical size and power of our procedure. We apply our approach to 14 weekly observed European exchange rates ...

  16. Weighted Quantile Regression for AR model with Infinite Variance Errors.

    Science.gov (United States)

    Chen, Zhao; Li, Runze; Wu, Yaohua

    2012-09-01

    Autoregressive (AR) models with finite variance errors have been well studied. This paper is concerned with AR models with heavy-tailed errors, which are useful in various scientific research areas. Statistical estimation for AR models with infinite variance errors is very different from that for AR models with finite variance errors. In this paper, we consider a weighted quantile regression for AR models to deal with infinite variance errors. We further propose an induced smoothing method to deal with computational challenges in weighted quantile regression. We show that the difference between the weighted quantile regression estimate and its smoothed version is negligible. We further propose a test for linear hypotheses on the regression coefficients. We conduct a Monte Carlo simulation study to assess the finite sample performance of the proposed procedures. We illustrate the proposed methodology by an empirical analysis of a real-life data set.
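
    Ordinary (unweighted) median regression for an AR(1) with heavy-tailed errors can be run with statsmodels as a baseline; the weighting and induced smoothing proposed in the paper are not reproduced here:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 1000
        e = rng.standard_t(df=1.5, size=n)      # heavy tails: variance does not exist
        y = np.zeros(n)
        for t in range(1, n):
            y[t] = 0.5 * y[t - 1] + e[t]        # AR(1) with phi = 0.5

        X = sm.add_constant(y[:-1])
        fit = sm.QuantReg(y[1:], X).fit(q=0.5)  # median regression of y_t on y_{t-1}
        print(fit.params)                       # intercept and AR coefficient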

  17. A multi-variance analysis in the time domain

    Science.gov (United States)

    Walter, Todd

    1993-01-01

    Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.

  18. Variance inflation in high dimensional Support Vector Machines

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2013-01-01

    is not the full input space. Hence, when applying the model to future data the model is effectively blind to the missed orthogonal subspace. This can lead to an inflated variance of hidden variables estimated in the training set, and when the model is applied to test data we may find that the hidden variables ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including ...

  19. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    Energy Technology Data Exchange (ETDEWEB)

    Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou [Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah 84108 and Department of Radiology, University of Utah, Salt Lake City, Utah 84108 (United States); Todd, Nick; Minalga, Emilee; Payne, Allison; Parker, Dennis L. [Department of Radiology, University of Utah, Salt Lake City, Utah 84108 (United States)

    2014-09-15

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes

  20. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    Science.gov (United States)

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes
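
    A generic recipe for the "variable density in two dimensions" idea is to draw each phase encode with a probability that decays with k-space radius; the fall-off, normalization and matrix sizes below are arbitrary assumptions and do not correspond to any of the five schemes tested in the paper:

        import numpy as np

        def vd_mask(ny, nz, frac, decay=2.0, seed=0):
            # 2D variable-density random mask: sampling probability falls off
            # with k-space radius, so the center is acquired most densely
            rng = np.random.default_rng(seed)
            ky = np.linspace(-1, 1, ny)[:, None]
            kz = np.linspace(-1, 1, nz)[None, :]
            p = (1 - np.sqrt(ky ** 2 + kz ** 2) / np.sqrt(2)) ** decay
            p *= frac * ny * nz / p.sum()        # aim for the target fraction
            return rng.random((ny, nz)) < np.clip(p, 0.0, 1.0)

        mask = vd_mask(128, 128, frac=0.25)
        print("sampling fraction:", mask.mean())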

  1. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.

    Science.gov (United States)

    Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico

    2016-04-01

    This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ^3 and 1/τ^2, the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition (or corner) where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift.
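
    For orientation, the classic overlapping AVAR that PVAR is compared against can be computed from phase data in a few lines (second differences of phase, averaged and scaled); this is plain AVAR, not the PVAR of the paper, and the noise level is invented:

        import numpy as np

        def avar(x, tau0, m):
            # overlapping Allan variance at tau = m*tau0 from phase samples x:
            # AVAR = < (x_{i+2m} - 2 x_{i+m} + x_i)^2 > / (2 tau^2)
            d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
            return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

        rng = np.random.default_rng(0)
        x = np.cumsum(rng.normal(0.0, 1e-9, 100_000))   # white FM noise in phase
        for m in (1, 10, 100):
            print(m, avar(x, tau0=1.0, m=m))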

  2. Improvement of Breast Cancer Detection Using Non-subsampled Contourlet Transform and Super-Resolution Technique in Mammographic Images

    Directory of Open Access Journals (Sweden)

    Fatemeh Pak

    2015-05-01

    Introduction: Breast cancer is one of the most life-threatening conditions among women, and early detection of this disease is the only way to reduce the associated mortality rate. Mammography is a standard method for the early detection of breast cancer. Given the importance of breast cancer detection, computer-aided detection techniques have been employed to increase the quality of mammographic images and help physicians reduce the false positive rate (FPR). Materials and Methods: In this study, a method was proposed for improving the quality of mammographic images to help radiologists establish a prompt and accurate diagnosis. The proposed approach included three major parts: pre-processing, feature extraction, and classification. In the pre-processing stage, the region of interest was determined and the image quality was improved by the non-subsampled contourlet transform and a super-resolution algorithm. In the feature extraction stage, some features of the image components were extracted and the skewness of each feature was calculated. Finally, a support vector machine was utilized to classify the features and determine the probability of benignity or malignancy of the disease. Results: Based on the results obtained using the Mammographic Image Analysis Society (MIAS) database, the mean accuracy was estimated at 87.26% and the maximum accuracy at 96.29%. The mean and minimum FPRs were estimated at 9.55% and 2.87%, respectively. Conclusion: The results obtained using the MIAS database indicated the superiority of the proposed method over other techniques. The reduced FPR of the proposed method was a significant finding of the present article.

  3. Genetic heterogeneity of residual variance in broiler chickens

    Directory of Open Access Journals (Sweden)

    Hill William G

    2006-11-01

    The aims were to estimate the extent of genetic heterogeneity in environmental variance. Data comprised 99,535 records of 35-day body weights from broiler chickens reared in a controlled environment. Residual variance within dam families was estimated using ASREML, after fitting fixed effects such as genetic groups and hatches, for each of 377 genetically contemporary sires with a large number of progeny (>100 males or females each). Residual variance was computed separately for male and female offspring, and after correction for sampling, strong evidence for heterogeneity was found, the standard deviation among sires in within-family variance amounting to 15–18% of its mean. Reanalysis using log-transformed data gave similar results, and elimination of 2–3% of outlier data reduced the heterogeneity, but it was still over 10%. The correlation between estimates for males and females was low, however. The correlation between sire effects on progeny mean and residual variance for body weight was small and negative (-0.1). Using a data set bigger than any yet presented and on a trait measurable in both sexes, this study has shown evidence for heterogeneity in the residual variance, which could not be explained by segregation of major genes unless very few determined the trait.

  4. FRACTIONAL TREND OF THE VARIANCE IN CAVALIERI SAMPLING

    Directory of Open Access Journals (Sweden)

    Marta García-Fiñana

    2011-05-01

    Cavalieri sampling is often used to estimate the volume of an object with systematic sections a constant distance T apart. The variance of the corresponding estimator can be expressed as the sum of the extension term (which gives the overall trend of the variance and is used to estimate it), the 'Zitterbewegung' (which oscillates about zero), and higher-order terms. The extension term is of order T^(2m+2) for small T, where m is the order of the first non-continuous derivative of the measurement function f (namely of the area function if the target is the volume). A key condition is that the jumps of the m-th derivative f^(m) of f are finite. When this is not the case, the variance exhibits a fractional trend, and the current theory fails. Indeed, in practice the mentioned trend is often of order T^(2q+2), typically with 0 < q < 1. We obtain the general form of the variance, and thereby of the extension term, by means of a new Euler-MacLaurin formula involving fractional derivatives of f. We also present a new and general estimator of the variance (see Eq. 26a, b) and apply it to real data (white matter of a human brain).
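
    The underlying estimator is simple: with sections a distance T apart and a uniform random start, the volume estimate is T times the sum of section areas. A toy check on a ball, where the area function is known exactly (illustrative values only):

        import numpy as np

        def cavalieri_volume(areas, T):
            # Cavalieri estimator: volume ~ T * (sum of section areas)
            return T * np.sum(areas)

        R, T = 1.0, 0.1                                 # ball radius, section spacing
        u = np.random.default_rng(0).uniform(0.0, T)    # uniform random start
        z = np.arange(-R + u, R, T)                     # section positions
        areas = np.pi * np.clip(R ** 2 - z ** 2, 0.0, None)
        print(cavalieri_volume(areas, T), "vs true", 4.0 / 3.0 * np.pi * R ** 3)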

  5. Associated factors of estimated desaturase activity in the EPIC-Potsdam study.

    Science.gov (United States)

    Schiller, K; Jacobs, S; Jansen, E; Weikert, C; di Giuseppe, R; Boeing, H; Schulze, M B; Kröger, J

    2014-05-01

    Altered activity of desaturase enzymes may be involved in the development of metabolic diseases like type 2 diabetes. Desaturase activities might be modifiable by diet and lifestyle-related factors, but no study has systematically investigated such factors so far. We aimed to evaluate the association of demographic, anthropometric, dietary and lifestyle characteristics with estimated Δ5-, Δ6- and Δ9-desaturase activity. A subsample (n = 1782) of the EPIC-Potsdam study was used for a cross-sectional analysis, involving men and women, mainly aged 35-65 years. Fatty acid (FA) product-to-precursor ratios, derived from the FA composition of erythrocyte membrane phospholipids, were used to estimate desaturase activities. Multiple linear regression models were used with estimated Δ5-, Δ6- and Δ9-desaturase activity as outcome and demographic (age, sex), anthropometric (BMI, WHR), dietary intake (FAs, carbohydrates) and lifestyle (physical activity, smoking, alcohol consumption) factors as exposure variables. Alcohol intake was positively associated with estimated Δ6- (explained variance in desaturase activity: 1.52%) and estimated Δ9-desaturase activity (explained variance: 5.53%). BMI and WHR showed a weak inverse association with estimated Δ5-desaturase activity (explained variance: BMI: 1.07%; WHR: 1.02%) and weak positive associations with estimated Δ6- (explained variance: BMI: 1.17%; WHR: 1.19%) and estimated Δ9-desaturase activities (explained variance: BMI: 0.70%; WHR: 0.96%). Age, sex, physical activity, smoking and dietary factors were only weakly associated with the estimated desaturase activities. Our findings suggest that alcohol intake as well as obesity measures are associated with the FA ratios reflecting desaturase activity. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. The Third-Difference Approach to Modified Allan Variance

    Science.gov (United States)

    Greenhall, C. A.

    1995-01-01

    This study gives strategies for estimating the modified Allan variance (mvar) and formulas for computing the equivalent degrees of freedom (edf) of the estimators. A third-difference formulation of mvar leads to a tractable formula for edf in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. First-degree rational-function approximations for edf are derived.

  7. Confidence Levels in Statistical Analyses. Analysis of Variances. Case Study.

    Directory of Open Access Journals (Sweden)

    Ileana Brudiu

    2010-05-01

    Applying a statistical test to check statistical assumptions offers a positive or negative response regarding the veracity of the issued hypothesis. In the case of analysis of variance, it is necessary to apply a post hoc test to determine the differences within the group. Statistical estimation using confidence levels provides more information than a statistical test: it shows the high degree of uncertainty resulting from small samples and frames conclusions in terms of "marginally significant" or "almost significant" (p being close to 0.05). The case study shows how statistical estimation complements the application of the analysis of variance test and the Tukey test.

  8. Multi-focus image fusion algorithm based on non-subsampled shearlet transform and focus measure

    Science.gov (United States)

    Wang, Hongmei; Ahmed, Mir Soban

    2017-12-01

    A novel multi-focus image fusion algorithm is proposed in the Shearlet domain. The core idea of this paper is to utilize a focus measure to detect the focused regions in the multi-focus images. The proposed algorithm can be divided into three procedures: image decomposition, sub-band coefficient selection and image reconstruction. At first, the multi-focus images are decomposed by the non-subsampled Shearlet transform (NSST), and the low frequency sub-bands and high frequency sub-bands are obtained. For the low frequency sub-bands, saliency detection and an improved sum-modified-Laplacian are combined to detect the focused regions. A modified edge measure algorithm is utilized to guide the coefficient combination for the high frequency sub-bands at different levels. Moreover, in order to avoid erroneous results introduced by the above procedures, a mathematical morphology technique is used to revise the decision maps of the low frequency and high frequency sub-bands. The final fused image is obtained by taking the inverse NSST. The performance of the proposed method is tested extensively on a series of multi-focus images. Experimental results indicate that the proposed method outperforms some state-of-the-art fusion methods, in terms of both subjective observation and objective evaluations.

  9. Phase unwrapping in digital holography based on non-subsampled contourlet transform

    Science.gov (United States)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-01-01

    In the digital holographic measurement of complex surfaces, phase unwrapping is a critical step for accurate reconstruction. The phases of the complex amplitudes calculated from interferometric holograms are disturbed by speckle noise, thus reliable unwrapping results are difficult to obtain. Most existing unwrapping algorithms implement denoising operations first to obtain noise-free phases and then conduct phase unwrapping pixel by pixel. This approach is sensitive to spikes and prone to unreliable results in practice. In this paper, a robust unwrapping algorithm based on the non-subsampled contourlet transform (NSCT) is developed. The multiscale and directional decomposition of the NSCT enhances the boundary between adjacent phase levels, and hence the influence of local noise can be eliminated in the transform domain. The wrapped phase map is segmented into several regions corresponding to different phase levels. Finally, an unwrapped phase map is obtained by elevating the phases of a whole segment, instead of individual pixels, to avoid unwrapping errors caused by local spikes. This algorithm is suitable for dealing with complex and noisy wavefronts. Its universality and superiority in digital holographic interferometry have been demonstrated by both numerical analysis and practical experiments.

  10. Variance swap payoffs, risk premia and extreme market conditions

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    This paper estimates the Variance Risk Premium (VRP) directly from synthetic variance swap payoffs. Since variance swap payoffs are highly volatile, we extract the VRP by using signal extraction techniques based on a state-space representation of our model in combination with a simple economic constraint. Our approach, only requiring option implied volatilities and daily returns for the underlying, provides measurement-error-free estimates of the part of the VRP related to normal market conditions, and allows constructing variables indicating agents' expectations under extreme market conditions. The latter variables and the VRP generate different return predictability on the major US indices. A factor model is proposed to extract a market VRP, which turns out to be priced when considering Fama and French portfolios.

  11. Large sample neutron activation analysis avoids representative sub-sampling and sample preparation difficulties : An added value for forensic analysis

    NARCIS (Netherlands)

    Bode, P.; Romanò, Sabrina; Romolo, Francesco Saverio

    2017-01-01

    A crucial part of any chemical analysis is the degree of representativeness of the measurand(s) in the test portion for the same measurands in the object originally collected for investigation. Such an object usually may either have to be homogenized and sub-sampled, or digested/dissolved. Any

  12. Bias-correcting the realized range-based variance in the presence of market microstructure noise

    OpenAIRE

    Christensen, Kim; Podolskij, Mark; Vetter, Mathias

    2007-01-01

    Market microstructure noise is a challenge to high-frequency based estimation of the integrated variance, because the noise accumulates with the sampling frequency. In this paper, we analyze the impact of microstructure noise on the realized range-based variance and propose a bias-correction to the range-statistic. The new estimator is shown to be consistent for the integrated variance and asymptotically mixed Gaussian under simple forms of microstructure noise, and we can select ...
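
    In the noise-free case the two estimators in question are easy to state: realized variance sums squared returns, while the range-based version sums squared high-low ranges per block, scaled by λ₂ = 4 log 2 (the second moment of the range of a Brownian motion over the unit interval). A simulated sketch without the paper's bias correction, with invented numbers:

        import numpy as np

        rng = np.random.default_rng(0)
        n, k = 390, 30                         # 1-min returns, 30-min blocks
        r = rng.normal(0.0, 0.0005, n)         # noise-free intraday returns
        p = np.cumsum(r)                       # log-price path

        rv = np.sum(r ** 2)                    # realized variance
        blocks = p.reshape(-1, k)
        ranges = blocks.max(axis=1) - blocks.min(axis=1)
        rrv = np.sum(ranges ** 2) / (4.0 * np.log(2.0))   # range-based variance
        print(rv, rrv)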

  13. Variance sources and ratios to estimate energy and nutrient intakes in a sample of adolescents from public schools, Natal, Brazil

    Directory of Open Access Journals (Sweden)

    Severina Carla Vieira Cunha Lima

    2013-04-01

    OBJECTIVE: The aim of this study was to describe the sources of dietary variance and to determine the variance ratios and the number of days needed for estimating the habitual diet of adolescents. METHODS: Two 24-hour food recalls were used for estimating the energy, macronutrient, fatty acid, fiber and cholesterol intakes of 366 adolescents attending public schools in Natal, Rio Grande do Norte, Brazil. The variance ratio between the intrapersonal and interpersonal variance components, determined by analysis of variance, was calculated. The number of days needed for estimating the habitual intake of each nutrient was given by the hypothetical correlation (r > 0.9) between the actual and observed nutrient intakes. RESULTS: Sources of interpersonal variation were higher for all nutrients and in both genders. Variance ratios were < 1 for all nutrients and higher in females. Two days of 24-hour recalls would be sufficient to accurately estimate the intake of energy, carbohydrates, fiber, and saturated and monounsaturated fatty acids ...
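
    The "number of days" computation referred to above is commonly based on the classical formula D = (r² / (1 - r²)) × (within-person variance / between-person variance); a one-line Python version with hypothetical variance components:

        def days_needed(var_within, var_between, r=0.9):
            # D = (r^2 / (1 - r^2)) * (within-person / between-person variance)
            return (r ** 2 / (1.0 - r ** 2)) * (var_within / var_between)

        print(days_needed(var_within=0.8, var_between=1.2))  # hypothetical values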

  14. Residual variance structures to estimate covariance functions for weight of Canchim beef cattle

    Directory of Open Access Journals (Sweden)

    Fábio Luiz Buranelo Toral

    2009-11-01

    This study was carried out to evaluate different residual variance structures for the estimation of covariance functions for weight of Canchim beef cattle. Covariance functions were estimated by the restricted maximum likelihood method in an animal model with the fixed effects of contemporary group (year and month of birth, and sex), age of dam at calving as a covariate (linear and quadratic effects) and the mean growth trajectory, while the random effects were the direct additive and maternal genetic effects, the animal and maternal permanent environmental effects, and the residual. Several structures for the residual variance were considered: variance functions of linear up to quintic order, and 1, 5, 10, 15 or 20 age classes. A homogeneous residual variance was not adequate. A quartic residual variance function and the division of the residual variance into 20 classes provided the best fits, and the division into classes was more efficient than the use of functions. Direct heritability estimates were between 0.16 and 0.25 at most of the ages considered, and the highest estimates were obtained near 360 days of age and at the end of the period studied. In general, direct heritability estimates were similar for the models with homogeneous residual variance, a quartic residual variance function or 20 age classes. The best description of the residual variances for weight at various ages in Canchim cattle was the one with 20 heterogeneous classes; however, since some classes have similar variances, it is possible to group some of them and reduce the number of estimated parameters.

  15. Variance components and genetic parameters for body weight and ...

    African Journals Online (AJOL)

    Variance components resulting from direct additive genetic effects, maternal additive genetic effects, maternal permanent environmental effects, as well as the relationship between direct and maternal genetic effects for several body weight and fleece traits, were estimated by DFREML procedures. Traits analysed included ...

  16. Genetic variance components for residual feed intake and feed ...

    African Journals Online (AJOL)

    Feeding costs of animals is a major determinant of profitability in livestock production enterprises. Genetic selection to improve feed efficiency aims to reduce feeding cost in beef cattle and thereby improve profitability. This study estimated genetic (co)variances between weaning weight and other production, reproduction ...

  17. Properties of realized variance under alternative sampling schemes

    NARCIS (Netherlands)

    Oomen, R.C.A.

    2006-01-01

    This paper investigates the statistical properties of the realized variance estimator in the presence of market microstructure noise. Different from the existing literature, the analysis relies on a pure jump process for high frequency security prices and explicitly distinguishes among alternative

  18. Testing for causality in variance using multivariate GARCH models

    NARCIS (Netherlands)

    C.M. Hafner (Christian); H. Herwartz

    2004-01-01

    Tests of causality in variance in multiple time series have been proposed recently, based on residuals of estimated univariate models. Although such tests are applied frequently, little is known about their power properties. In this paper we show that a convenient alternative to residual

  19. (co) variance components and genetic parameters for live weight ...

    African Journals Online (AJOL)

    admin

    Against this background the present study estimated the (co)variance components as well as the genetic, phenotypic, environmental and maternal correlations for 16-months live weight and objectively measured wool traits in a South African Merino flock. Materials and Methods. The experimental flock is maintained on the ...

  20. Heterogeneity of variance components in milk production and their effects on estimates of heritability and repeatability

    Directory of Open Access Journals (Sweden)

    Elmer Francisco Valencia Tapia

    2011-06-01

    The heterogeneity of variance components and its effect on heritability and repeatability estimates for milk yield of Holstein cattle were evaluated. Herds were grouped according to production level (low, medium and high) and evaluated on the untransformed, square-root and logarithmic scales. Variance components were estimated by the restricted maximum likelihood method, using an animal model that included the fixed effects of herd-year-season, the covariates lactation length (linear effect) and age of cow at calving (linear and quadratic effects), and the random direct additive genetic, permanent environmental and residual effects. On the untransformed scale, all variance components were heterogeneous across the three production levels. On this scale, the residual and phenotypic variances were positively associated with production level, whereas on the logarithmic scale the association was negative. The heterogeneity of the phenotypic variance and of its components affected the heritability estimates more than the repeatability estimates. The efficiency of selection for milk yield may be affected by the production level at which the genetic parameters are estimated.

  1. Variance components for body weight in Japanese quails (Coturnix japonica)

    Directory of Open Access Journals (Sweden)

    RO Resende

    2005-03-01

    The objective of this study was to estimate the variance components for body weight in Japanese quails by Bayesian procedures. The body weight at hatch (BWH) and at 7 (BW07), 14 (BW14), 21 (BW21) and 28 (BW28) days of age of 3,520 quails was recorded from August 2001 to June 2002. A multiple-trait animal model with additive genetic, maternal environment and residual effects was implemented by Gibbs sampling methodology. A single Gibbs sampling chain with 80,000 rounds was generated by the program MTGSAM (Multiple Trait Gibbs Sampling in Animal Model). Normal and inverted Wishart distributions were used as prior distributions for the random effects and the variance components, respectively. Variance components were estimated based on the 500 samples that were left after elimination of 30,000 rounds in the burn-in period and 100 rounds of each thinning interval. The posterior means of the additive genetic variance components were 0.15, 4.18, 14.62, 27.18 and 32.68; the posterior means of the maternal environment variance components were 0.23, 1.29, 2.76, 4.12 and 5.16; and the posterior means of the residual variance components were 0.084, 6.43, 22.66, 31.21 and 30.85, at hatch, 7, 14, 21 and 28 days old, respectively. The posterior means of heritability were 0.33, 0.35, 0.36, 0.43 and 0.47 at hatch, 7, 14, 21 and 28 days old, respectively. These results indicate that heritability increased with age. On the other hand, after hatch there was a marked reduction in the maternal environment variance proportion of the phenotypic variance, whose estimates were 0.50, 0.11, 0.07, 0.07 and 0.08 for BWH, BW07, BW14, BW21 and BW28, respectively. The genetic correlation between weights at different ages was high, except for the estimates between BWH and weight at other ages. Changes in body weight of quails can be efficiently achieved by selection.
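
    A toy Gibbs sampler for a one-way version of such a variance-component model (one random effect plus residual, flat-ish priors) conveys the mechanics; it is a stand-in, not the multiple-trait MTGSAM analysis of the paper, and all sizes and variances below are invented:

        import numpy as np

        rng = np.random.default_rng(0)
        q, n = 40, 10                               # hypothetical sires x progeny
        a_true = rng.normal(0, 1.0, q)              # true random effects (var 1.0)
        y = 5.0 + a_true[:, None] + rng.normal(0, 2.0, (q, n))  # residual var 4.0

        mu, a, s2a, s2e = y.mean(), np.zeros(q), 1.0, 1.0
        keep = []
        for it in range(3000):
            prec = n / s2e + 1.0 / s2a              # conditional precision of a_i
            a = rng.normal((y - mu).sum(1) / s2e / prec, np.sqrt(1.0 / prec))
            mu = rng.normal((y - a[:, None]).mean(), np.sqrt(s2e / y.size))
            s2a = np.sum(a ** 2) / rng.chisquare(q - 2)        # scaled inv-chi2 draw
            resid = y - mu - a[:, None]
            s2e = np.sum(resid ** 2) / rng.chisquare(y.size - 2)
            if it >= 500:                           # discard burn-in, no thinning
                keep.append((s2a, s2e))
        print("posterior means (s2a, s2e):", np.mean(keep, axis=0))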

  2. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained genetic variance ... capturing pure noise. Therefore it is necessary to use both criteria, a high likelihood ratio in favor of a more complex genetic model and the proportion of genetic variance explained, to identify biologically important gene groups.

  3. Assessment of heterogeneity of residual variances using changepoint techniques

    Directory of Open Access Journals (Sweden)

    Toro Miguel A

    2000-07-01

    Several studies using test-day models show clear heterogeneity of residual variance along lactation. A changepoint technique to account for this heterogeneity is proposed. The data set included 100,744 test-day records of 10,869 Holstein-Friesian cows from northern Spain. A three-stage hierarchical model using the Wood lactation function was employed. Two unknown changepoints at times T1 and T2 (0 < T1 < T2 < tmax), with continuity of residual variance at these points, were assumed. Also, a nonlinear relationship between the residual variance and the number of days of milking t was postulated; the residual variance at time t in lactation phase i was modeled through a phase-specific parameter λi (i = 1, 2, 3). A Bayesian analysis using Gibbs sampling and the Metropolis-Hastings algorithm for marginalization was implemented. After a burn-in of 20,000 iterations, 40,000 samples were drawn to estimate posterior features. The posterior modes of T1 and T2 were 53.2 and 248.2 days; those of λ1, λ2 and λ3 were 0.575, -0.406 and 0.797; and the posterior modes of the remaining dispersion parameters were 0.702, 34.63 and 0.0455 kg². The residual variances predicted using these point estimates were 2.64, 6.88, 3.59 and 4.35 kg² at days of milking 10, 53, 248 and 305, respectively. This technique requires less restrictive assumptions, and the model has fewer parameters than other methods proposed to account for the heterogeneity of residual variance during lactation.

  4. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    Science.gov (United States)

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1,329 peer raters completed a seven-item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  5. Stratospheric Air Sub-sampler (SAS) and its application to analysis of Delta O-17(CO2) from small air samples collected with an AirCore

    NARCIS (Netherlands)

    Mrozek, Dorota Janina; van der Veen, Carina; Hofmann, Magdalena E. G.; Chen, Huilin; Kivi, Rigel; Heikkinen, Pauli; Rockmann, Thomas

    2016-01-01

    We present the set-up and a scientific application of the Stratospheric Air Sub-sampler (SAS), a device to collect and to store the vertical profile of air collected with an AirCore (Karion et al., 2010) in numerous sub-samples for later analysis in the laboratory. The SAS described here is a 20m

  6. 40 CFR 142.41 - Variance request.

    Science.gov (United States)

    2010-07-01

    ... Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of a... enforcement responsibility by submitting a request for a variance in writing to the Administrator. Suppliers... and evidence of the best available treatment technology and techniques. (2) Economic and legal factors...

  7. Nonlinear Epigenetic Variance: Review and Simulations

    Science.gov (United States)

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  8. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained...

  9. Application of Test-day Models for Variance Components Estimation ...

    African Journals Online (AJOL)

    Julio Carvalheira

    the random effect of the animal, LTE is the random effect of the long-term environmental effects accounting for the autocorrelations generated by the cow across repeated lactations, STE is the random effect of short term environmental effects accounting for the autocorrelations due to cow within each lactation, and e is the.

  10. minimum variance estimation of yield parameters of rubber tree

    African Journals Online (AJOL)

    2013-03-01

    Mar 1, 2013 ... prove it in some way - filtering out noise. The task of filtering is to eliminate by some means as much of the noise as possible through processing of the measurements. This task is achieved by using measured ... and planting success, girth, resistance to disease, evolution of tapping density and yield per ...

  11. General score tests for regression models incorporating 'robust' variance estimates

    OpenAIRE

    David Clayton; Joanna Howson

    2002-01-01

    Stata incorporates commands for carrying out two of the three general approaches to asymptotic significance testing in regression models, namely likelihood ratio (lrtest) and Wald tests (testparms). However, the third approach, using "score" tests, has no such general implementation. This omission is particularly serious when dealing with "clustered" data using the Huber-White approach. Here the likelihood ratio test is lost, leaving only the Wald test. This has relatively poor asymptotic pro...

  12. Estimates of variance components for postweaning feed intake and ...

    African Journals Online (AJOL)

    Mike

    2013-03-09

    Mar 9, 2013 ... examined, contained either EBV for: 1) ADG and DFI assuming relative economic values of 7 and 1, respectively; 2) MBW, ADG, and RFI; or 3) RDG and DFI. Results & Discussion. Means and phenotypic standard deviations (in parentheses) of MBW, ADG, and DFI were 79.7 (4.19) kg0.75, 1.74 (0.22) kg/d, ...

  13. Variance component and heritability estimates for growth traits in the ...

    African Journals Online (AJOL)

    Limousin; Shh = Shorthorn; Sim = Simmental; b = a vector of fixed effects consisting of year of birth, sex and the linear and quadratic regression of age of dam on year of birth; Z1, Z2 = known incidence matrices relating elements of a and m to y.

  14. (Co) variance Components and Genetic Parameter Estimates for Re

    African Journals Online (AJOL)

    Mapula

    generation interval which in turn hinders genetic improvement. Live animal ultrasound measures of carcass traits were recently introduced to supplement progeny testing programmes or for usage as sole source of carcass data in beef cattle breeding programmes (Crews et al., 2004; MacNeil & Northcutt, 2008). Ultrasonic.

  15. Estimating Additive and Dominance Variance for Linear Traits in ...

    African Journals Online (AJOL)

    permanent environmental effects. The second ... Mackay (1996). The prediction of additive and dominance genetic effects concurrently should ... morning and forage (Panicum maximum ... mating pair or parental dominance class. LSB. LSIV ...

  16. Variance Risk Premia on Stocks and Bonds

    DEFF Research Database (Denmark)

    Mueller, Philippe; Sabtchevsky, Petar; Vedolin, Andrea

    Investors in fixed income markets are willing to pay a very large premium to be hedged against shocks in expected volatility and the size of this premium can be studied through variance swaps. Using thirty years of option and high-frequency data, we document the following novel stylized facts: First, exposure to bond market volatility is strongly priced with a Sharpe ratio of -1.8, 20% higher than what is observed in the equity market. Second, while there is strong co-movement between equity and bond market variance risk, there are distinct periods when the bond variance risk premium is different from the equity variance risk premium. Third, the conditional correlation between stock and bond market variance risk premium switches sign often and ranges between -60% and +90%. We then show that these stylized facts pose a challenge to standard consumption-based asset pricing models.

  17. Portfolio optimization with mean-variance model

    Science.gov (United States)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and achieve the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at the minimum level of risk with the constructed optimal mean-variance portfolio.
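
    As a rough illustration of the mean-variance model described above, the following Python sketch (simulated returns stand in for the FBMKLCI stocks; the function name and target value are hypothetical) minimises portfolio variance subject to a target mean return and a full-investment constraint:

        import numpy as np
        from scipy.optimize import minimize

        def mean_variance_weights(returns, target):
            # returns: (T, n) array of asset returns; target: desired mean return
            mu = returns.mean(axis=0)
            cov = np.cov(returns, rowvar=False)
            n = len(mu)
            constraints = (
                {"type": "eq", "fun": lambda w: w.sum() - 1.0},    # fully invested
                {"type": "eq", "fun": lambda w: w @ mu - target},  # hit target return
            )
            res = minimize(lambda w: w @ cov @ w,                  # portfolio variance
                           np.full(n, 1.0 / n), method="SLSQP",
                           constraints=constraints)
            return res.x

        rng = np.random.default_rng(0)
        rets = rng.normal(0.002, 0.02, size=(200, 5))   # toy weekly returns, 5 assets
        w = mean_variance_weights(rets, target=0.002)
        print(w.round(3), w @ np.cov(rets, rowvar=False) @ w)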

  18. A full Bayesian hierarchical mixture model for the variance of gene differential expression

    Directory of Open Access Journals (Sweden)

    Walls Rebecca E

    2007-04-01

    Full Text Available Abstract Background In many laboratory-based high throughput microarray experiments, there are very few replicates of gene expression levels. Thus, estimates of gene variances are inaccurate. Visual inspection of graphical summaries of these data usually reveals that heteroscedasticity is present, and the standard approach to address this is to take a log2 transformation. In such circumstances, it is then common to assume that gene variability is constant when an analysis of these data is undertaken. However, this is perhaps too stringent an assumption. More careful inspection reveals that the simple log2 transformation does not remove the problem of heteroscedasticity. An alternative strategy is to assume independent gene-specific variances; although again this is problematic as variance estimates based on few replications are highly unstable. More meaningful and reliable comparisons of gene expression might be achieved, for different conditions or different tissue samples, where the test statistics are based on accurate estimates of gene variability; a crucial step in the identification of differentially expressed genes. Results We propose a Bayesian mixture model, which classifies genes according to similarity in their variance. The result is that genes in the same latent class share the similar variance, estimated from a larger number of replicates than purely those per gene, i.e. the total of all replicates of all genes in the same latent class. An example dataset, consisting of 9216 genes with four replicates per condition, resulted in four latent classes based on their similarity of the variance. Conclusion The mixture variance model provides a realistic and flexible estimate for the variance of gene expression data under limited replicates. We believe that in using the latent class variances, estimated from a larger number of genes in each derived latent group, the p-values obtained are more robust than either using a constant gene or

  19. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  20. A proxy for variance in dense matching over homogeneous terrain

    Science.gov (United States)

    Altena, Bas; Cockx, Liesbet; Goedemé, Toon

    2014-05-01

    Automation in photogrammetry and avionics have brought highly autonomous UAV mapping solutions on the market. These systems have great potential for geophysical research, due to their mobility and simplicity of work. Flight planning can be done on site and orientation parameters are estimated automatically. However, one major drawback is still present: if contrast is lacking, stereoscopy fails. Consequently, topographic information cannot be obtained precisely through photogrammetry for areas with low contrast. Even though more robustness is added in the estimation through multi-view geometry, a precise product is still lacking. For the greater part, interpolation is applied over these regions, where the estimation is constrained by uniqueness, its epipolar line and smoothness. Consequently, digital surface models are generated with an estimate of the topography, without holes but also without an indication of its variance. Every dense matching algorithm is based on a similarity measure. Our methodology uses this property to support the idea that if only noise is present, no correspondence can be detected. Therefore, the noise level is estimated in respect to the intensity signal of the topography (SNR) and this ratio serves as a quality indicator for the automatically generated product. To demonstrate this variance indicator, two different case studies were elaborated. The first study is situated at an open sand mine near the village of Kiezegem, Belgium. Two different UAV systems flew over the site. One system had automatic intensity regulation, and resulted in low contrast over the sandy interior of the mine. That dataset was used to identify the weak estimations of the topography and was compared with the data from the other UAV flight. In the second study a flight campaign with the X100 system was conducted along the coast near Wenduine, Belgium. The obtained images were processed through structure-from-motion software. Although the beach had a very low

  1. Portfolio optimization using median-variance approach

    Science.gov (United States)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach successfully caters for both normal and non-normal distributions of data. With this actual representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.

  2. Prediction of Breeding Values and Selection Responses With Genetic Heterogeneity of Environmental Variance

    Science.gov (United States)

    Mulder, H. A.; Bijma, P.; Hill, W. G.

    2007-01-01

    There is empirical evidence that genotypes differ not only in mean, but also in environmental variance of the traits they affect. Genetic heterogeneity of environmental variance may indicate genetic differences in environmental sensitivity. The aim of this study was to develop a general framework for prediction of breeding values and selection responses in mean and environmental variance with genetic heterogeneity of environmental variance. Both means and environmental variances were treated as heritable traits. Breeding values and selection responses were predicted with little bias using linear, quadratic, and cubic regression on individual phenotype or using linear regression on the mean and within-family variance of a group of relatives. A measure of heritability was proposed for environmental variance to standardize results in the literature and to facilitate comparisons to “conventional” traits. Genetic heterogeneity of environmental variance can be considered as a trait with a low heritability. Although a large amount of information is necessary to accurately estimate breeding values for environmental variance, response in environmental variance can be substantial, even with mass selection. The methods developed allow use of the well-known selection index framework to evaluate breeding strategies and effects of natural selection that simultaneously change the mean and the variance. PMID:17277375

  3. Improving Efficiency of Model Based Estimation in Longitudinal Surveys Through the Use of Historical Data

    Directory of Open Access Journals (Sweden)

    Roberto Gismondi

    2014-01-01

    Full Text Available In this context, supposing a sampling survey framework and a model-based approach, the attention has been focused on the main features of the optimal prediction strategy for a population mean, which implies knowledge of some model parameters and functions, normally unknown. In particular, a wrong specification of the model individual variances may lead to a serious loss of efficiency of estimates. For this reason, we have proposed some techniques for the estimation of model variances, which instead of being put equal to given a priori functions, can be estimated through historical data concerning past survey occasions. A time series of past observations is almost always available, especially in a longitudinal survey context. Usefulness of the technique proposed has been tested through an empirical attempt, concerning the quarterly wholesale trade survey carried out by ISTAT (Italian National Statistical Institute in the period 2005-2010. In this framework, the problem consists in minimising the magnitude of revisions, given by the differences between preliminary estimates (based on the sub-sample of quick respondents and final estimates (which take into account late respondents as well. Main results show that model variance estimation through historical data leads to efficiency gains which cannot be neglected. This outcome was confirmed by a further exercise, based on 1000 random replications of late responses.

  4. Analytic solution to variance optimization with no short positions

    Science.gov (United States)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer ...

  5. Cross-cultural adaptation of the short-form condom attitude scale: validity assessment in a sub-sample of rural-to-urban migrant workers in Bangladesh

    Science.gov (United States)

    2013-01-01

    Background The reliable and valid measurement of attitudes towards condom use is essential to assist efforts to design population-specific interventions aimed at promoting positive attitudes towards, and increased use of, condoms. Although several studies, mostly in the English-speaking Western world, have demonstrated the utility of condom attitude scales, very few culturally relevant condom attitude measures have been developed to date. We have developed a scale and evaluated its psychometric properties in a sub-sample of rural-to-urban migrant workers in Bangladesh. Methods This paper reports mostly on the cross-sectional survey components of a mixed-methods sexual health research project in Bangladesh. The survey sample (n = 878) comprised rural-to-urban migrant taxi drivers (n = 437) and restaurant workers (n = 441) in Dhaka (aged 18–35 years). The study also involved focus group sessions with the same populations to establish the content validity and cultural equivalency of the scale. The current scale was administered with a large sexual health survey questionnaire and consisted of 10 items. Quantitative and qualitative data were assessed with statistical and thematic analysis, respectively, and then presented. Results The participants found the scale simple and easy to understand and use. The internal consistency (α) of the scale was 0.89 with high construct validity (the first component accounted for about 52% and the second component for about 20% of the total variance, with an eigenvalue greater than one for both factors). The test-retest reliability (repeatability) was also found satisfactory, with high inter-item correlations (the majority of the intra-class correlation coefficient values were above 2 and were significant for all items on the scale, p < …). Conclusions The Bengali version of the scale has good metric properties for assessing attitudes toward condom use. The validated scale is a short, simple and reliable instrument for measuring attitudes towards condom use.

  6. On the variance of the number of real roots of a random trigonometric polynomial

    Directory of Open Access Journals (Sweden)

    K. Farahmand

    1990-01-01

    Full Text Available This paper provides an upper estimate for the variance of the number of real zeros of the random trigonometric polynomial g1 cos θ + g2 cos 2θ + … + gn cos nθ. The coefficients gi (i = 1, 2, …, n) are assumed independent and normally distributed with mean zero and variance one.
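
    The quantity being bounded can be explored numerically. A minimal Monte Carlo sketch (grid resolution and sample sizes are arbitrary choices) counts sign changes of the polynomial on a fine grid and reports the empirical mean and variance of the zero counts:

        import numpy as np

        def count_real_zeros(g, n_grid=20000):
            # Zeros of sum_k g_k cos(k*theta) on [0, 2*pi), via sign changes;
            # tangential zeros can be missed, so this is an approximate count.
            theta = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
            k = np.arange(1, len(g) + 1)
            vals = np.cos(np.outer(theta, k)) @ g
            return int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))

        rng = np.random.default_rng(1)
        n, reps = 25, 500
        counts = [count_real_zeros(rng.standard_normal(n)) for _ in range(reps)]
        # Classical result: the expected number of zeros grows like 2n/sqrt(3)
        print(np.mean(counts), 2 * n / np.sqrt(3), np.var(counts))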

  7. Genetic and environmental heterogeneity of residual variance of weight traits in Nellore beef cattle

    Directory of Open Access Journals (Sweden)

    Neves Haroldo HR

    2012-07-01

    Full Text Available Abstract Background Many studies have provided evidence of the existence of genetic heterogeneity of environmental variance, suggesting that it could be exploited to improve robustness and uniformity of livestock by selection. However, little is known about the perspectives of such a selection strategy in beef cattle. Methods A two-step approach was applied to study the genetic heterogeneity of residual variance of weight gain from birth to weaning and long-yearling weight in a Nellore beef cattle population. First, an animal model was fitted to the data and second, the influence of additive and environmental effects on the residual variance of these traits was investigated with different models, in which the log squared estimated residuals for each phenotypic record were analyzed using the restricted maximum likelihood method. Monte Carlo simulation was performed to assess the reliability of variance component estimates from the second step and the accuracy of estimated breeding values for residual variation. Results The results suggest that both genetic and environmental factors have an effect on the residual variance of weight gain from birth to weaning and long-yearling in Nellore beef cattle and that uniformity of these traits could be improved by selecting for lower residual variance, when considering a large amount of information to predict genetic merit for this criterion. Simulations suggested that using the two-step approach would lead to biased estimates of variance components, such that more adequate methods are needed to study the genetic heterogeneity of residual variance in beef cattle.
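
    The two-step idea has a simple generic analogue: fit a mean model first, then regress the log squared residuals on candidate effects. A hedged Python sketch (plain OLS stands in for the animal model and REML machinery of the study; the data and the environmental factor are simulated):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 2000
        group = rng.integers(0, 5, n).astype(float)   # stand-in environmental factor
        x = rng.standard_normal(n)
        sd = np.exp(0.1 * group)                      # residual SD rises with the factor
        y = 2.0 + 1.5 * x + sd * rng.standard_normal(n)

        # Step 1: mean model; Step 2: model the log squared residuals
        resid = sm.OLS(y, sm.add_constant(x)).fit().resid
        step2 = sm.OLS(np.log(resid ** 2), sm.add_constant(group)).fit()
        print(step2.params)   # slope should recover roughly 2 * 0.1 = 0.2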

  8. Grammatical and lexical variance in English

    CERN Document Server

    Quirk, Randolph

    2014-01-01

    Written by one of Britain's most distinguished linguists, this book is concerned with the phenomenon of variance in English grammar and vocabulary across regional, social, stylistic and temporal space.

  9. Bias/Variance Analysis for Relational Domains

    National Research Council Canada - National Science Library

    Neville, Jennifer; Jensen, David

    2007-01-01

    ... To date, the impact of inference error on model performance has not been investigated. In this paper, we propose a new bias/variance framework that decomposes loss into errors due to both the learning and inference process ...

  10. Importance Sampling Variance Reduction in GRESS ATMOSIM

    Energy Technology Data Exchange (ETDEWEB)

    Wakeford, Daniel Tyler [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-26

    This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.
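
    Outside of Geant4, the underlying idea is easy to demonstrate. A minimal Python sketch (a textbook tail-probability example, unrelated to GRESS ATMOSIM itself) compares naive Monte Carlo with importance sampling under a shifted proposal:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n = 100_000

        # Target: p = P(X > 4) for X ~ N(0, 1); exact value ~ 3.17e-5
        x = rng.standard_normal(n)
        naive = np.mean(x > 4.0)

        # Importance sampling: draw from N(4, 1), reweight by f/g
        y = rng.normal(4.0, 1.0, n)
        w = stats.norm.pdf(y) / stats.norm.pdf(y, loc=4.0)
        is_est = np.mean((y > 4.0) * w)

        print(naive, is_est, stats.norm.sf(4.0))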

  11. The Variance Composition of Firm Growth Rates

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2009-04-01

    Full Text Available Firms exhibit a wide variability in growth rates. This can be seen as another manifestation of the fact that firms are different from one another in several respects. This study investigated this variability using the variance components technique previously used to decompose the variance of financial performance. The main source of variation in growth rates, responsible for more than 40% of total variance, corresponds to individual, idiosyncratic firm aspects and not to industry, country, or macroeconomic conditions prevailing in specific years. Firm growth, similar to financial performance, is mostly unique to specific firms and not an industry or country related phenomenon. This finding also justifies using growth as an alternative outcome of superior firm resources and as a complementary dimension of competitive advantage. This also links this research with the resource-based view of strategy. Country was the second source of variation with around 10% of total variance. The analysis was done using the Compustat Global database with 80,320 observations, comprising 13,221 companies in 47 countries, covering the years of 1994 to 2002. It also compared the variance structure of growth to the variance structure of financial performance in the same sample.
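
    A crude flavour of such a decomposition can be sketched in a few lines. The snippet below (synthetic panel; shares computed from the between-group variance of one-way group means, a simple stand-in for the variance components estimators used in this literature) reports the fraction of growth-rate variance aligned with each factor:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(4)
        n = 5000
        df = pd.DataFrame({
            "firm": rng.integers(0, 500, n),
            "industry": rng.integers(0, 20, n),
            "year": rng.integers(1994, 2003, n),
        })
        # Simulated growth: strong firm effect, weaker industry effect, noise
        df["growth"] = (0.8 * rng.standard_normal(500)[df["firm"]]
                        + 0.3 * rng.standard_normal(20)[df["industry"]]
                        + rng.standard_normal(n))

        total = df["growth"].var()
        for factor in ("firm", "industry", "year"):
            between = df.groupby(factor)["growth"].transform("mean").var()
            print(factor, round(between / total, 3))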

  12. Impact of time-inhomogeneous jumps and leverage type effects on returns and realised variances

    DEFF Research Database (Denmark)

    Veraart, Almut

    This paper studies the effect of time-inhomogeneous jumps and leverage type effects on realised variance calculations when the logarithmic asset price is given by a Lévy-driven stochastic volatility model. In such a model, the realised variance is an inconsistent estimator of the integrated variance. Nevertheless it can be used within a quasi-maximum-likelihood setup to draw inference on the model parameters. In order to do that, this paper introduces a new methodology for deriving all cumulants of the returns and realised variance in explicit form by solving a recursive system of inhomogeneous ordinary differential equations.

  13. Infinite variance in fermion quantum Monte Carlo calculations.

    Science.gov (United States)

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  14. Depression in postmenopause: a study on a subsample of the Acupuncture on Hot Flushes Among Menopausal Women (ACUFLASH) study.

    Science.gov (United States)

    Dørmænen, Annbjørg; Heimdal, Marte Rye; Wang, Catharina Elisabeth Arfwedson; Grimsgaard, Anne Sameline

    2011-05-01

    The current study was conducted on a subsample of postmenopausal women with a high frequency of hot flashes who participated in the Norwegian Acupuncture on Hot Flushes Among Menopausal Women study. The purpose of this study was to examine the prevalence of depressive symptoms, as measured by the Beck Depression Inventory; the effect of acupuncture therapy for menopausal hot flashes on depressive symptoms; and the associations between depressive symptoms and hot flashes, sleep disturbances, and self-reported health. The Acupuncture on Hot Flushes Among Menopausal Women study was a multicenter, pragmatic, randomized controlled trial. The present subsample consisted of 72 women who were randomized to two groups: self-care only and acupuncture in addition to self-care for a period of 12 weeks. The prevalence of depressive symptoms was 30.6% at baseline, decreased similarly in both study groups during the study period, and was 14.1% at the end of the intervention. Depressive symptoms were significantly associated with sleep disturbances and self-reported health, but not with frequency of hot flashes. Postmenopausal women experiencing a high frequency of hot flashes reported a high prevalence of depressive symptoms compared with the general female population. Study results lend support to previous findings of an increased risk for depression during menopause, at least in women with severe hot flashes. Results further indicate that symptoms of depression in postmenopausal women may be alleviated with limited resources.

  15. Modelling heterogeneity variances in multiple treatment comparison meta-analysis - Are informative priors the better solution?

    Directory of Open Access Journals (Sweden)

    Thorlund Kristian

    2013-01-01

    Full Text Available Abstract Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between

  16. A Bayesian compressed-sensing approach for reconstructing neural connectivity from subsampled anatomical data.

    Science.gov (United States)

    Mishchenko, Yuriy; Paninski, Liam

    2012-10-01

    In recent years, the problem of reconstructing the connectivity in large neural circuits ("connectomics") has re-emerged as one of the main objectives of neuroscience. Classically, reconstructions of neural connectivity have been approached anatomically, using electron or light microscopy and histological tracing methods. This paper describes a statistical approach for connectivity reconstruction that relies on relatively easy-to-obtain measurements using fluorescent probes such as synaptic markers, cytoplasmic dyes, transsynaptic tracers, or activity-dependent dyes. We describe the possible design of these experiments and develop a Bayesian framework for extracting synaptic neural connectivity from such data. We show that the statistical reconstruction problem can be formulated naturally as a tractable L₁-regularized quadratic optimization. As a concrete example, we consider a realistic hypothetical connectivity reconstruction experiment in C. elegans, a popular neuroscience model where a complete wiring diagram has been previously obtained based on long-term electron microscopy work. We show that the new statistical approach could lead to an orders of magnitude reduction in experimental effort in reconstructing the connectivity in this circuit. We further demonstrate that the spatial heterogeneity and biological variability in the connectivity matrix--not just the "average" connectivity--can also be estimated using the same method.
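
    The record's key computational claim is that reconstruction reduces to an L1-regularised quadratic programme. A generic sparse-recovery stand-in (not the authors' Bayesian formulation; the design matrix and dimensions are invented) can be written with scikit-learn's Lasso:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(5)
        n_neurons, n_obs = 200, 60

        # Sparse "connectivity" vector: 10 nonzero weights out of 200
        w_true = np.zeros(n_neurons)
        w_true[rng.choice(n_neurons, 10, replace=False)] = rng.normal(0, 1, 10)

        A = rng.standard_normal((n_obs, n_neurons))   # stand-in measurement design
        y = A @ w_true + 0.01 * rng.standard_normal(n_obs)

        w_hat = Lasso(alpha=0.01).fit(A, y).coef_
        print(np.count_nonzero(np.abs(w_hat) > 1e-3))  # close to the 10 true links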

  17. Maximum Variance Hashing via Column Generation

    Directory of Open Access Journals (Sweden)

    Lei Luo

    2013-01-01

    ... item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
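
    The column generation algorithm itself is beyond a snippet, but the objective is easy to illustrate with a simpler variance-maximising baseline, PCA hashing, where bits are the signs of projections onto the directions of largest variance (this is a baseline for intuition, not the paper's method):

        import numpy as np

        def pca_hash(X, n_bits):
            # Hash by the sign of projections onto the top principal components,
            # i.e. the directions that maximise the variance of the codes.
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            return (Xc @ Vt[:n_bits].T > 0).astype(np.uint8)

        rng = np.random.default_rng(6)
        X = rng.standard_normal((1000, 64))
        codes = pca_hash(X, n_bits=16)
        print(codes.shape)   # (1000, 16) binary codes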

  18. Broadband minimum variance beamforming for ultrasound imaging.

    Science.gov (United States)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2009-02-01

    A minimum variance (MV) approach for near-field beamforming of broadband data is proposed. The approach is implemented in the frequency domain, and it provides a set of adapted, complex apodization weights for each frequency subband. The performance of the proposed MV beamformer is tested on simulated data obtained using Field II. The method is validated using synthetic aperture data and data obtained from a plane wave emission. Data for 13 point targets and a circular cyst with a radius of 5 mm are simulated. The performance of the MV beamformer is compared with delay-and-sum (DS) using boxcar weights and Hanning weights and is quantified by the full width at half maximum (FWHM) and the peak-side-lobe level (PSL). Single emission {DS boxcar, DS Hanning, MV} provide a PSL of {-16, -36, -49} dB and a FWHM of {0.79, 1.33, 0.08} mm. Using all 128 emissions, {DS boxcar, DS Hanning, MV} provides a PSL of {-32, -49, -65} dB, and a FWHM of {0.63, 0.97, 0.08} mm. The contrast of the beamformed single emission responses of the circular cyst was calculated as {-18, -37, -40} dB. The simulations have shown that the frequency subband MV beamformer provides a significant increase in lateral resolution compared with DS, even when using considerably fewer emissions. An increase in resolution is seen when using only one single emission. Furthermore, the effect of steering vector errors is investigated. The steering vector errors are investigated by applying an error of the sound speed estimate to the ultrasound data. As the error increases, it is seen that the MV beamformer is not as robust compared with the DS beamformer with boxcar and Hanning weights. Nevertheless, it is noted that the DS does not outperform the MV beamformer. For errors of 2% and 4% of the correct value, the FWHM are {0.81, 1.25, 0.34} mm and {0.89, 1.44, 0.46} mm, respectively.
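
    At the core of each frequency subband sits the classic narrowband minimum variance (Capon) weight computation, w = R^-1 a / (a^H R^-1 a). A hedged numpy sketch (the diagonal loading and the toy array are our additions; the frequency-subband machinery of the paper is omitted):

        import numpy as np

        def mv_weights(R, a, loading=1e-3):
            # Minimum variance weights: minimise w^H R w subject to w^H a = 1.
            M = len(a)
            Rl = R + loading * np.trace(R).real / M * np.eye(M)  # diagonal loading
            Ri_a = np.linalg.solve(Rl, a)
            return Ri_a / (a.conj() @ Ri_a)

        M = 16
        a = np.ones(M, dtype=complex)              # broadside steering vector
        rng = np.random.default_rng(7)
        snap = rng.standard_normal((M, 200)) + a[:, None].real * rng.standard_normal(200)
        R = (snap @ snap.T / 200).astype(complex)  # sample covariance of snapshots
        w = mv_weights(R, a)
        print(abs(w.conj() @ a))                   # distortionless response: ~1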

  19. Integrating Variances into an Analytical Database

    Science.gov (United States)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. This project did not only restrict itself to the design and development of the database system, but also worked on exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  20. Helping financial analysts communicate variance analysis.

    Science.gov (United States)

    Dove, H G; Forthman, T

    1995-04-01

    Healthcare organizations often use variance analysis to explain variation between planned and actual costs and charges. This type of analysis is becoming even more common as healthcare executives work to improve efficiency, to set priorities for organizational improvement as part of strategic planning, and to explain costs and charges to interested groups such as purchasers and payers. Variance analysis produces data that must be presented in a format useful to senior executives. An effective format would express the data in a visual summary that is simple enough to be readily understood and detailed enough to provide valuable information.

  1. Meta-analysis of SNPs involved in variance heterogeneity using Levene's test for equal variances.

    Science.gov (United States)

    Deng, Wei Q; Asma, Senay; Paré, Guillaume

    2014-03-01

    Meta-analysis is a commonly used approach to increase the sample size for genome-wide association searches when individual studies are otherwise underpowered. Here, we present a meta-analysis procedure to estimate the heterogeneity of the quantitative trait variance attributable to genetic variants using Levene's test without needing to exchange individual-level data. The meta-analysis of Levene's test offers the opportunity to combine the considerable sample size of a genome-wide meta-analysis to identify the genetic basis of phenotypic variability and to prioritize single-nucleotide polymorphisms (SNPs) for gene-gene and gene-environment interactions. The use of Levene's test has several advantages, including robustness to departure from the normality assumption, freedom from the influence of the main effects of SNPs, and no assumption of an additive genetic model. We conducted a meta-analysis of the log-transformed body mass index of 5892 individuals and identified a variant with a highly suggestive Levene's test P-value of 4.28E-06 near the NEGR1 locus known to be associated with extreme obesity.
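
    A minimal Python sketch of the idea (scipy's Levene test per cohort, with Fisher's method standing in for the paper's actual combination rule; cohort data are simulated):

        import numpy as np
        from scipy import stats

        def levene_meta(studies):
            # One Levene test of variance heterogeneity per study, then a
            # generic p-value combination (Fisher's method) across studies.
            pvals = [stats.levene(*groups).pvalue for groups in studies]
            return stats.combine_pvalues(pvals, method="fisher")

        rng = np.random.default_rng(8)
        def cohort(extra_sd):
            # Genotypes 0/1 have unit variance; genotype 2 has inflated variance
            return [rng.normal(0, 1, 300), rng.normal(0, 1, 300),
                    rng.normal(0, 1 + extra_sd, 100)]

        stat, p = levene_meta([cohort(0.5), cohort(0.5)])
        print(p)   # small p flags the variance-heterogeneous SNP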

  2. A fusion algorithm for infrared and visible images based on saliency analysis and non-subsampled Shearlet transform

    Science.gov (United States)

    Zhang, Baohua; Lu, Xiaoqi; Pei, Haiquan; Zhao, Ying

    2015-11-01

    This paper proposes a novel fusion method for infrared and visible images based on accurate extraction of the target region. Firstly, the super-pixel-based saliency analysis method is used to extract the salient regions of the infrared image and obtain the coarse contour of the infrared target. Then the multi-directional detection operators and the adaptive threshold algorithm are used to refine the boundary of the target region and obtain the fusion decision map. In order to capture the details of the visible image, the non-subsampled Shearlet transform (NSST) is used to select the fusion coefficients of the background. Experimental results indicate that the proposed method is superior to other state-of-the-art methods in subjective visual and objective performance.

  3. Evaluating optical properties of real photonic crystal fibers with compressed sensing based on non-subsampled contourlet transform

    Science.gov (United States)

    Shen, Yan; Liu, Jing; Lou, Shuqin; Hou, Ya-Li; Chen, Houjin

    2017-09-01

    An evaluation approach for real photonic crystal fibers (PCFs) based on compressed sensing with the non-subsampled contourlet transform (NSCT) and the total variation model is proposed for modeling the optical properties of real PCFs. The classical images of a commercial large-mode-area PCF and a polarization-maintaining PCF are used to verify the effectiveness of the proposed method. Experimental results demonstrate that the cross-section images of real PCFs are rebuilt effectively by using only 36% of the image data, evaluating the optical properties with the same accuracy as with 100% of the data. To the best of our knowledge, this is the first instance of applying compressed sensing with the NSCT and total variation to reconstruct the cross-section images of PCFs for quickly evaluating the optical properties of real PCFs without the requirement of long fiber samples and expensive measurement apparatuses.

  4. 21 CFR 1010.4 - Variances.

    Science.gov (United States)

    2010-04-01

    ... PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for... provisions of any performance standard under subchapter J of this chapter for an electronic product subject... purposes of Subchapter C—Electronic Product Radiation Control of the Federal Food, Drug, and Cosmetic Act...

  5. 7 CFR 205.290 - Temporary variances.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM Organic...

  6. Expected Stock Returns and Variance Risk Premia

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Zhou, Hao

    We find that the difference between implied and realized variation, or the variance risk premium, is able to explain more than fifteen percent of the ex-post time series variation in quarterly excess returns on the market portfolio over the 1990 to 2005 sample period, with high (low) premia predicting high (low) future returns. The magnitude of the return predictability of the variance risk premium easily dominates that afforded by standard predictor variables like the P/E ratio, the dividend yield, the default spread, and the consumption-wealth ratio (CAY). Moreover, combining the variance ... to daily, data. Our findings suggest that temporal variation in both risk-aversion and volatility-risk play an important role in determining stock market returns.

  7. Broadband Minimum Variance Beamforming for Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2009-01-01

    A minimum variance (MV) approach for near-field beamforming of broadband data is proposed. The approach is implemented in the frequency domain, and it provides a set of adapted, complex apodization weights for each frequency subband. The performance of the proposed MV beamformer is tested on simulated data ...

  8. Variance decomposition using an IRT measurement model

    NARCIS (Netherlands)

    van den Berg, Stéphanie Martine; van den Berg, Stephanie M.; Glas, Cornelis A.W.; Boomsma, Dorret I.

    2007-01-01

    Large scale research projects in behaviour genetics and genetic epidemiology are often based on questionnaire or interview data. Typically, a number of items is presented to a number of subjects, the subjects’ sum scores on the items are computed, and the variance of sum scores is decomposed into a

  9. Data Sparseness and Variance in Accounting Profitability

    NARCIS (Netherlands)

    S. Stavropoulos (Spyridon); M.J. Burger (Martijn); D. Skuras (Dimitris)

    2015-01-01

    Abstract: A central question in strategic management is why some firms perform better than others. One approach to addressing this question empirically is to decompose the variance in firm-level profitability into firm, industry, location, and year components. Although it is

  10. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

    Spatial Cox point processes is a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with addit...

  11. (co)variances for growth and efficiency

    African Journals Online (AJOL)

    42, 295. CANTET, R.J.C., KRESS, D.D., ANDERSON, D.C., DOORNBOS, D.E., BURFENING, P.J. & BLACKWELL, R.L., 1988. Direct and maternal variances and covariances and maternal phenotypic effects on preweaning growth of beef cattle. J. Anim. Sci. 66, 648. CUNNINGHAM, E.P., MOON, R.A. & GJEDREM, T., 1970.

  12. Biological Variance in Agricultural Products. Theoretical Considerations

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Konopacki, P.

    2003-01-01

    The food that we eat is uniform neither in shape or appearance nor in internal composition or content. Since technology became increasingly important, the presence of biological variance in our food became more and more of a nuisance. Techniques and procedures (statistical, technical) were

  13. Relationship between Allan variances and Kalman Filter parameters

    Science.gov (United States)

    Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.

    1984-01-01

    A relationship was constructed between the Allan variance parameters (h2, h1, h0, h-1 and h-2) and a Kalman filter model that would be used to estimate and predict clock phase, frequency and frequency drift. To start, the meaning of those Allan variance parameters, and how they are arrived at for a given frequency source, is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model which can then be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two-state Kalman filter model is then derived and the significance of each state is explained.
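
    For reference, the Allan variance that underlies those parameters is straightforward to compute from phase data. A sketch using the standard second-difference estimator (tau0 and the noise model are arbitrary test choices):

        import numpy as np

        def allan_variance(phase, tau0, m):
            # AVAR(m*tau0) = <(x[i+2m] - 2*x[i+m] + x[i])^2> / (2*(m*tau0)^2)
            x = np.asarray(phase)
            d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
            return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

        rng = np.random.default_rng(9)
        freq = rng.standard_normal(100_000)                # white frequency noise
        phase = np.concatenate(([0.0], np.cumsum(freq)))   # phase = integrated frequency
        # White FM should give AVAR proportional to 1/tau
        print([allan_variance(phase, 1.0, m) for m in (1, 10, 100)])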

  14. Mean and Variance Modeling of Under- and Overdispersed Count Data

    Directory of Open Access Journals (Sweden)

    David M. Smith

    2016-03-01

    Full Text Available This article describes the R package CountsEPPM and its use in determining maximum likelihood estimates of the parameters of extended Poisson process models. These provide a Poisson-process-based family of flexible models that can handle both underdispersion and overdispersion in observed count data, with the negative binomial and Poisson distributions being special cases. Within CountsEPPM, models with mean and variance related to covariates are constructed to match a generalized linear model formulation. Use of the package is illustrated by application to several published datasets.
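
    CountsEPPM itself is an R package; in Python, the basic dispersion comparison it generalises can be sketched with statsmodels (simulated overdispersed counts; the negative binomial is the special case noted in the record):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        x = rng.uniform(0, 1, 500)
        lam = np.exp(0.5 + 1.0 * x)
        # Overdispersed counts with mean lam and variance lam + lam**2 / 2
        y = rng.negative_binomial(n=2, p=2.0 / (2.0 + lam))

        X = sm.add_constant(x)
        pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        nb = sm.NegativeBinomial(y, X).fit(disp=0)
        print(pois.aic, nb.aic)   # the NB fit should win under overdispersion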

  15. Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental.

    Science.gov (United States)

    Lidauer, M H; Emmerling, R; Mäntysaari, E A

    2008-06-01

    A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region x year x month x parity effect and a random herd x test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of variance model solutions after each multiplicative model cycle enabled fast convergence of adjustment factors and reduced total computing time significantly. Maximum Likelihood estimation of within-strata residual variances was enhanced by inclusion of approximated information on loss in degrees of freedom due to estimation of location parameters. This improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had large effect on cow ranking but moderate effect on bull ranking.

  16. Source Characterization by the Allan Variance

    Science.gov (United States)

    Gattano, C.; Lambert, S.

    2016-12-01

    Until now, the main criteria for selecting geodetic sources were based on astrometric stability and structure at 8 GHz [Fey2015]. But with more observations and the increase of accuracy, the statistical tools used to determine this stability become inappropriate with regard to sudden motions of the radiocenter. In this work, we propose to replace these tools with the Allan variance [Allan1966], first used on VLBI sources by M. Feissel-Vernier [Feissel2003], leading to a new classification of sources into three groups according to the shape of the Allan variance. In parallel, we combine two catalogs, the Large Quasar Astrometric Catalogue [Souchay2015] and the Optical Characteristics of Astrometric Radio Sources [Malkin2013], in order to gather most physical characteristics known about these VLBI targets. By doing so, we may reveal physical criteria that may be useful in the selection of new targets for future VLBI observations.

  17. Fundamentals of exploratory analysis of variance

    CERN Document Server

    Hoaglin, David C; Tukey, John W

    2009-01-01

    The analysis of variance is presented as an exploratory component of data analysis, while retaining the customary least squares fitting methods. Balanced data layouts are used to reveal key ideas and techniques for exploration. The approach emphasizes both the individual observations and the separate parts that the analysis produces. Most chapters include exercises, and the appendices give selected percentage points of the Gaussian, t, F, chi-squared and studentized range distributions.

  18. The value of travel time variance

    OpenAIRE

    Fosgerau, Mogens; Engelson, Leonid

    2010-01-01

    This paper considers the value of travel time variability under scheduling preferences that are defined in terms of linearly time-varying utility rates associated with being at the origin and at the destination. The main result is a simple expression for the value of travel time variability that does not depend on the shape of the travel time distribution. The related measure of travel time variability is the variance of travel time. These conclusions apply equally to travellers ...

  19. The Genealogical Consequences of Fecundity Variance Polymorphism

    Science.gov (United States)

    Taylor, Jesse E.

    2009-01-01

    The genealogical consequences of within-generation fecundity variance polymorphism are studied using coalescent processes structured by genetic backgrounds. I show that these processes have three distinctive features. The first is that the coalescent rates within backgrounds are not jointly proportional to the infinitesimal variance, but instead depend only on the frequencies and traits of genotypes containing each allele. Second, the coalescent processes at unlinked loci are correlated with the genealogy at the selected locus; i.e., fecundity variance polymorphism has a genomewide impact on genealogies. Third, in diploid models, there are infinitely many combinations of fecundity distributions that have the same diffusion approximation but distinct coalescent processes; i.e., in this class of models, ancestral processes and allele frequency dynamics are not in one-to-one correspondence. Similar properties are expected to hold in models that allow for heritable variation in other traits that affect the coalescent effective population size, such as sex ratio or fecundity and survival schedules. PMID:19433628

  20. Discussion on variance reduction technique for shielding

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    As a task of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on 316-type stainless steel (SS316) and the compound system of SS316 and water has been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. However, in these analyses, enormous working time and computing time were required for determining the Weight Window parameters, and limitations and complications were encountered when carrying out variance reduction with the Weight Window method of the MCNP code. For the purpose of avoiding this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray flux in the direction of shield depth is reported. There is an optimal importance change: when importance was increased at the same rate as the attenuation of the neutron or gamma-ray flux, the optimal variance reduction could be achieved. (K.I.)

  1. Variance heterogeneity analysis for detection of potentially interacting genetic loci: method and its limitations

    Directory of Open Access Journals (Sweden)

    van Duijn Cornelia

    2010-10-01

    Full Text Available Abstract Background In the presence of an interaction between a genotype and a certain factor in the determination of a trait's value, it is expected that the trait's variance is increased in the group of subjects having this genotype. Thus, a test of heterogeneity of variances can be used as a test to screen for potentially interacting single-nucleotide polymorphisms (SNPs). In this work, we evaluated statistical properties of variance heterogeneity analysis with respect to the detection of potentially interacting SNPs in the case when the interaction variable is unknown. Results Through simulations, we investigated the type I error for Bartlett's test, Bartlett's test with prior rank transformation of a trait to normality, and Levene's test for different genetic models. Additionally, we derived an analytical expression for power estimation. We showed that Bartlett's test has acceptable type I error in the case of a trait following a normal distribution, whereas Levene's test kept the nominal type I error under all scenarios investigated. For the power of the variance homogeneity test, we showed (as opposed to the power of the direct test, which uses information about the known interacting factor) that, given the same interaction effect, the power can vary widely depending on the non-estimable direct effect of the unobserved interacting variable. Thus, for a given interaction effect, only very wide limits of the power of the variance homogeneity test can be estimated. We also applied Levene's approach to test genome-wide homogeneity of variances of C-reactive protein in the Rotterdam Study population (n = 5959). In this analysis, we replicate previous results of Pare and colleagues (2010) for the SNP rs12753193 (n = 21,799). Conclusions Screening for differences in variances among genotypes of a SNP is a promising approach, as a number of biologically interesting models may lead to heterogeneity of variances. However, it should be kept in mind that the absence of variance heterogeneity for
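
    The record's central contrast, Bartlett's sensitivity to non-normality versus Levene's robustness, is easy to reproduce in a small simulation (group sizes and the skewed distribution are arbitrary choices):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        reps, hits_bartlett, hits_levene = 2000, 0, 0
        for _ in range(reps):
            # Skewed trait with equal variances across three genotype groups
            groups = [rng.exponential(1.0, n) for n in (500, 300, 80)]
            hits_bartlett += stats.bartlett(*groups).pvalue < 0.05
            hits_levene += stats.levene(*groups).pvalue < 0.05

        # Bartlett's type I error is inflated; Levene's stays near 0.05
        print(hits_bartlett / reps, hits_levene / reps)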

  2. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    Science.gov (United States)

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.

  3. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    Directory of Open Access Journals (Sweden)

    Pingsha Hu

    Full Text Available Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.

  4. DETERMINING THE OPTIMAL PORTFOLIO USING THE CONDITIONAL MEAN VARIANCE MODEL

    Directory of Open Access Journals (Sweden)

    I GEDE ERY NISCAHYANA

    2016-08-01

    Full Text Available When the returns of stock prices show the existence of autocorrelation and heteroscedasticity, conditional mean-variance models are a suitable method for modeling the behavior of the stocks. In this thesis, the implementation of the conditional mean-variance model for autocorrelated and heteroscedastic returns is discussed. The aim of this thesis was to assess the effect of autocorrelated and heteroscedastic returns on the optimal solution of a portfolio. The returns of four stocks, Fortune Mate Indonesia Tbk (FMII.JK), Bank Permata Tbk (BNLI.JK), Suryamas Dutamakmur Tbk (SMDM.JK) and Semen Gresik Indonesia Tbk (SMGR.JK), were estimated by a GARCH(1,1) model with standard innovations following the standard normal distribution and the t-distribution. The estimates were used to construct a portfolio. The optimal portfolio was found when the standard innovation used was the t-distribution with a standard deviation of 1.4532 and a mean of 0.8023, consisting of 0.9429 (94%) of FMII stock, 0.0473 (5%) of BNLI stock, 0% of SMDM stock and 1% of SMGR stock.
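
    The estimation step can be sketched with the third-party Python `arch` package; the return series below is simulated rather than the four Indonesian stocks, but the model matches the abstract's GARCH(1,1) with normal and Student-t innovations:

        # Fit GARCH(1,1) under two innovation distributions and compare fits.
        import numpy as np
        from arch import arch_model

        rng = np.random.default_rng(42)
        returns = rng.standard_t(df=5, size=1000)      # heavy-tailed toy returns

        for dist in ("normal", "t"):
            res = arch_model(returns, vol="Garch", p=1, q=1, dist=dist).fit(disp="off")
            print(dist, "log-likelihood:", round(res.loglikelihood, 2))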

  5. Analysis of variance frameworks in clinical child and adolescent psychology: advanced issues and recommendations.

    Science.gov (United States)

    Jaccard, James; Guilamo-Ramos, Vincent

    2002-06-01

    Explores more advanced issues that researchers need to consider when using analysis of variance frameworks, building on basic issues for analysis of variance discussed in Jaccard and Guilamo-Ramos (2002). These include (a) using confidence intervals, (b) asserting group equivalence after a nonsignificant result, (c) use of magnitude estimation approaches, (d) sample size and power considerations, (e) outlier analysis, (f) violations of assumptions, and (g) missing data. Suggestions are offered for analytic practices in each of these domains.

  6. On the variance of the number of real zeros of a random trigonometric polynomial

    Directory of Open Access Journals (Sweden)

    K. Farahmand

    1997-01-01

    is a sequence of independent normally distributed random variables is known. The present paper provides an upper estimate for the variance of such a number. To achieve this result we first present a general formula for the covariance of the number of real zeros of any normal process, ξ(t), occurring in any two disjoint intervals. A formula for the variance of the number of real zeros of ξ(t) follows from this result.

  7. SDR: A Better Trigger for Adaptive Variance Scaling in Normal EDAs

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); J. Grahl; F. Rothlauf; D. Thierens (Dirk)

    2007-01-01

    Recently, advances have been made in continuous, normal-distribution-based Estimation-of-Distribution Algorithms (EDAs) by scaling the variance up from the maximum-likelihood estimate. When done properly, such scaling has been shown to prevent premature convergence on slope-like regions

  8. A simulation study of how simple mark-recapture methods can be combined with destructive subsampling to facilitate surveys of flying insects

    DEFF Research Database (Denmark)

    Nachman, G; Skovgård, H

    2012-01-01

    Mark-recapture techniques are used for studies of animal populations. With only three sampling occasions, both Bailey's triple-catch (BTC) and Jolly—Seber's (J—S) stochastic method can be applied. As marking and handling of fragile organisms may harm them, and thereby affect their chances of being...... recaptured, handling should be minimized. This can be achieved by taking a subsample before the main sample at the second sampling occasion. Individuals in the main sample are marked and released, whereas those in the subsample are only used for identifying recaptures. Monte-Carlo simulation was used...

  9. Jackknife variance of the partial area under the empirical receiver operating characteristic curve.

    Science.gov (United States)

    Bandos, Andriy I; Guo, Ben; Gur, David

    2017-04-01

    Receiver operating characteristic analysis provides an important methodology for assessing traditional (e.g., imaging technologies and clinical practices) and new (e.g., genomic studies, biomarker development) diagnostic problems. The area under the clinically/practically relevant part of the receiver operating characteristic curve (partial area or partial area under the receiver operating characteristic curve) is an important performance index summarizing diagnostic accuracy at multiple operating points (decision thresholds) that are relevant to actual clinical practice. A robust estimate of the partial area under the receiver operating characteristic curve is provided by the area under the corresponding part of the empirical receiver operating characteristic curve. We derive a closed-form expression for the jackknife variance of the partial area under the empirical receiver operating characteristic curve. Using the derived analytical expression, we investigate the differences between the jackknife variance and a conventional variance estimator. The relative properties in finite samples are demonstrated in a simulation study. The developed formula enables an easy way to estimate the variance of the empirical partial area under the receiver operating characteristic curve, thereby substantially reducing the computation burden, and provides important insight into the structure of the variability. We demonstrate that when compared with the conventional approach, the jackknife variance has substantially smaller bias, and leads to a more appropriate type I error rate of the Wald-type test. The use of the jackknife variance is illustrated in the analysis of a data set from a diagnostic imaging study.
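
    A brute-force numerical sketch of the jackknife idea (not the paper's closed-form expression; note that scikit-learn's `max_fpr` option returns the McClish-standardized partial AUC rather than the raw one):

        # Leave-one-out jackknife variance of the empirical partial AUC.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        def pauc(y, s):
            return roc_auc_score(y, s, max_fpr=0.2)    # standardized partial AUC

        rng = np.random.default_rng(3)
        y = np.r_[np.zeros(60), np.ones(40)].astype(int)
        s = np.r_[rng.normal(0, 1, 60), rng.normal(1, 1, 40)]

        n = len(y)
        loo = np.array([pauc(np.delete(y, i), np.delete(s, i)) for i in range(n)])
        jk_var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
        print(pauc(y, s), jk_var)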

  10. Breast cancer detection and classification in digital mammography based on Non-Subsampled Contourlet Transform (NSCT) and Super Resolution.

    Science.gov (United States)

    Pak, Fatemeh; Kanan, Hamidreza Rashidy; Alikhassi, Afsaneh

    2015-11-01

    Breast cancer is one of the most perilous diseases among women. Breast screening is a method of detecting breast cancer at a very early stage, which can reduce the mortality rate. Mammography is a standard method for the early diagnosis of breast cancer. In this paper, a new algorithm is proposed for breast cancer detection and classification in digital mammography based on the Non-Subsampled Contourlet Transform (NSCT) and Super Resolution (SR). The presented algorithm includes three main parts: pre-processing, feature extraction and classification. In the pre-processing stage, after determining the region of interest (ROI) by an automatic technique, the quality of the image is improved using NSCT and an SR algorithm. In the feature extraction part, several features of the image components are extracted and the skewness of each feature is calculated. Finally, the AdaBoost algorithm is used to classify and determine the probability of benign and malignant disease. The results obtained on the Mammographic Image Analysis Society (MIAS) database indicate the significant performance and superiority of the proposed method in comparison with state-of-the-art approaches. According to the obtained results, the proposed technique achieves a mean accuracy of 91.43% and an FPR of 6.42%. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Variance components and genetic parameter estimates for growth traits of Simmental cattle in Brazil

    Directory of Open Access Journals (Sweden)

    L.F.A. Marques

    1999-08-01

    Full Text Available Pedigree and performance records provided by the Brazilian Simmental Breeders Association (ABCRS), comprising weights from birth to one year of age, were used to estimate (co)variance components and genetic parameters under alternative models for Simmental cattle in Brazil. Birth, 100-day, weaning and yearling weights of 7587 animals and 25,812 pedigree records were analyzed by restricted maximum likelihood under different animal models. The simplest model (model 1) included additive genetic and residual random effects. Models 2 and 3 were the same as model 1 but included, respectively, maternal permanent environmental and maternal genetic effects. Model 4 added the covariance between direct and maternal genetic effects to model 3 but did not include the permanent environmental effect. The most complete model (model 6) included direct and maternal additive as well as maternal permanent environmental effects, assuming covariance between the direct and maternal genetic effects. Model 5 was the same as model 6 but did not include that covariance. Contemporary groups comprised animals born in the same herd, year and season, of the same sex and raised under the same nutritional system. The models were compared using likelihood ratio tests. The (co)variance components and genetic parameters decreased from the simplest (model 1) to the most complete model (model 6). One-hundred-day weight showed no maternal genetic variance (0.00 ± 0.00) but a moderate maternal environmental permanent effect (0.17 ± 0.07).

  12. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.

  13. Genetic heterogeneity of within-family variance of body weight in Atlantic salmon (Salmo salar).

    Science.gov (United States)

    Sonesson, Anna K; Odegård, Jørgen; Rönnegård, Lars

    2013-10-17

    Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in the degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) to quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) to test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg², implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than that of the most extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and from a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. Furthermore, potential macro

  14. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.

  15. Non-exercise estimation of VO2max using the International Physical Activity Questionnaire

    Science.gov (United States)

    Schembre, Susan M.; Riebe, Deborah A.

    2011-01-01

    Non-exercise equations developed from self-reported physical activity can estimate maximal oxygen uptake (VO2max) as well as submaximal exercise testing. The International Physical Activity Questionnaire (IPAQ) is the most widely used and validated self-report measure of physical activity. This study aimed to develop and test a VO2max estimation equation derived from the IPAQ-Short Form (IPAQ-S). College-aged males and females (n = 80) completed the IPAQ-S and performed a maximal exercise test. The estimation equation was created with multivariate regression in a gender-balanced subsample of participants, equally representing five levels of fitness (n = 50), and validated in the remaining participants (n = 30). The resulting equation explained 43% of the variance in measured VO2max (SEE = 5.45 ml·kg⁻¹·min⁻¹). Estimated VO2max for 87% of individuals fell within acceptable limits of error observed with submaximal exercise testing (20% error). The IPAQ-S can be used to successfully estimate VO2max as well as submaximal exercise tests. Development of other population-specific estimation equations is warranted. PMID:21927551

  16. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  17. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy.

    Science.gov (United States)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-09-01

    The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in the setup error of radiotherapy. Balanced data according to the one-factor random effect model were assumed. Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for the systematic and random errors and the population mean of setup errors. The conventional method overestimates the systematic error, especially in hypofractionated settings. The CI for the systematic error becomes much wider than that for the random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
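
    For balanced data under the one-factor random effect model, the ANOVA estimators have a simple closed form: with p patients and n fractions each, the random variance is MSW and the systematic variance is (MSB - MSW)/n. A small sketch on simulated setup errors (all numbers illustrative):

        # ANOVA-based variance components for simulated setup errors (mm).
        import numpy as np

        rng = np.random.default_rng(7)
        p, n = 20, 5                                    # patients, fractions each
        systematic = rng.normal(0.0, 1.0, size=(p, 1))  # per-patient offsets
        errors = systematic + rng.normal(0.0, 2.0, size=(p, n))

        grand = errors.mean()
        msb = n * np.sum((errors.mean(axis=1) - grand) ** 2) / (p - 1)
        msw = np.sum((errors - errors.mean(axis=1, keepdims=True)) ** 2) / (p * (n - 1))

        sigma2_random = msw
        sigma2_systematic = max((msb - msw) / n, 0.0)   # truncate at zero
        print(sigma2_systematic, sigma2_random)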

  18. Partitioning Phenotypic Variance Due to Parent-of-Origin Effects Using Genomic Relatedness Matrices.

    Science.gov (United States)

    Laurin, Charles; Cuellar-Partida, Gabriel; Hemani, Gibran; Smith, George Davey; Yang, Jian; Evans, David M

    2018-01-01

    We propose a new method, G-REMLadp, to estimate the phenotypic variance explained by parent-of-origin effects (POEs) across the genome. Our method uses restricted maximum likelihood analysis of genome-wide genetic relatedness matrices based on individuals' phased genotypes. Genome-wide SNP data from parent-child duos or trios are required to obtain relatedness matrices indexing the parental origin of offspring alleles, as well as offspring phenotype data to partition the trait variation into variance components. To calibrate the power of G-REMLadp to detect non-null POEs when they are present, we provide an analytic approximation derived from Haseman-Elston regression. We also used simulated data to quantify the power and Type I error rates of G-REMLadp, as well as the sensitivity of its variance component estimates to violations of underlying assumptions. We subsequently applied G-REMLadp to 36 phenotypes in a sample of individuals from the Avon Longitudinal Study of Parents and Children (ALSPAC). We found that the method does not seem to be inherently biased in estimating variance due to POEs, and that substantial correlation between parental genotypes is necessary to generate biased estimates. Our empirical results, power calculations and simulations indicate that sample sizes over 10,000 unrelated parent-offspring duos will be necessary to detect POEs explaining small proportions of phenotypic variance, and that the POEs tagged by our genetic relationship matrices are unlikely to explain large proportions of the phenotypic variance (i.e., > 15%) for the 36 traits that we have examined.

  19. A 12GHz 210fs 6mW digital PLL with sub-sampling binary phase detector and voltage-time modulated DCO

    NARCIS (Netherlands)

    Ru, Z.; Geraedts, P.F.J.; Klumperink, Eric A.M.; He, X.; Nauta, Bram

    2013-01-01

    An integer-N digital PLL architecture is presented that simplifies the critical phase path using a sub-sampling binary (bang-bang) phase detector. Two power-efficient techniques are presented that can reduce DCO frequency tuning step by voltage-domain and time-domain (pulse-width) modulating the DCO

  20. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2017-02-01

    Full Text Available Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are known only up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and with polynomial interpolations using spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach are in good agreement with the other methods, ranging from 10 to 80 TEC Units (TECU; 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the others. The residual results show that the interpolation precision of the
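
    A compact ordinary-kriging sketch under an assumed exponential semivariogram (the paper's variance component estimation for signal and noise is omitted, and the coordinates and TEC values are toy numbers):

        # Ordinary kriging of TEC at one target location.
        import numpy as np

        def gamma(h, sill=25.0, rang=5.0, nugget=1.0):  # exponential semivariogram
            return nugget + sill * (1.0 - np.exp(-h / rang))

        def ordinary_kriging(xy, z, x0):
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = gamma(d)
            A[-1, -1] = 0.0                             # Lagrange multiplier entry
            b = np.ones(n + 1)
            b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
            w = np.linalg.solve(A, b)
            return w[:n] @ z

        xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])  # stations
        z = np.array([30.0, 28.0, 31.0, 25.0])                           # TECU
        print(ordinary_kriging(xy, z, np.array([0.5, 0.5])))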

  1. Variance in faking across noncognitive measures.

    Science.gov (United States)

    McFarland, L A; Ryan, A M

    2000-10-01

    There are discrepant findings in the literature regarding the effects of applicant faking on the validity of noncognitive measures. One explanation for these mixed results may be the failure of some studies to consider individual differences in faking. This study demonstrates that there is considerable variance across individuals in the extent of faking 3 types of noncognitive measures (i.e., personality test, biodata inventory, and integrity test). Participants completed measures honestly and with instructions to fake. Results indicated some measures were more difficult to fake than others. The authors found that integrity, conscientiousness, and neuroticism were related to faking. In addition, individuals faked fairly consistently across the measures. Implications of these results and a model of faking that includes factors that may influence faking behavior are provided.

  2. Dynamics of Variance Risk Premia, Investors' Sentiment and Return Predictability

    DEFF Research Database (Denmark)

    Rombouts, Jerome V.K.; Stentoft, Lars; Violante, Francesco

    We develop a joint framework linking the physical variance and its risk neutral expectation implying variance risk premia that are persistent, appropriately reacting to changes in level and variability of the variance and naturally satisfying the sign constraint. Using option market data and real...

  3. A Monte Carlo Study of Seven Homogeneity of Variance Tests

    OpenAIRE

    Howard B. Lee; Gary S. Katz; Alberto F. Restori

    2010-01-01

    Problem statement: The decision by SPSS (now PASW) to use the unmodified Levene test to test homogeneity of variance was questioned. It was compared to six other tests. In total, seven homogeneity of variance tests used in Analysis Of Variance (ANOVA) were compared on robustness and power using Monte Carlo studies. The homogeneity of variance tests were (1) Levene, (2) modified Levene, (3) Z-variance, (4) Overall-Woodward Modified Z-variance, (5) O'Brien, (6) Samiuddin Cube Root and (7) F-Max....

  4. A Realized Variance for the Whole Day Based on Intermittent High-Frequency Data

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2005-01-01

    We consider the problem of deriving an empirical measure of daily integrated variance (IV) in the situation where high-frequency price data are unavailable for part of the day. We study three estimators in this context and characterize the assumptions that justify their use. We show that the opti...

  5. Inferring the trajectory of genetic variance in the course of artificial selection.

    Science.gov (United States)

    Sorensen, D; Fernando, R; Gianola, D

    2001-02-01

    A method is proposed to infer genetic parameters within a cohort, using data from all individuals in an experiment. An application is the study of changes in additive genetic variance over generations, employing data from all generations. Inferences about the genetic variance in a given generation are based on its marginal posterior distribution, estimated via Markov chain Monte Carlo methods. As defined, the additive genetic variance within the group is directly related to the amount of selection response to be expected if parents are chosen within the group. Results from a simulated selection experiment are used to illustrate properties of the method. Four sets of data are analysed: directional selection with and without environmental trend, and random selection, with and without environmental trend. In all cases, posterior credibility intervals of size 95% assign relatively high density to values of the additive genetic variance and heritability in the neighbourhood of the true values. Properties and generalizations of the method are discussed.

  6. Interdependence of NAFTA capital markets: A minimum variance portfolio approach

    Directory of Open Access Journals (Sweden)

    López-Herrera Francisco

    2014-01-01

    Full Text Available We estimate the long-run relationships among NAFTA capital market returns and then calculate the weights of a “time-varying minimum variance portfolio” that includes the Canadian, Mexican, and USA capital markets between March 2007 and March 2009, a period of intense turbulence in international markets. Our results suggest that the behavior of NAFTA market investors is not consistent with that of a theoretical “risk-averse” agent during periods of high uncertainty and may be either considered as irrational or attributed to a possible “home country bias”. This finding represents valuable information for portfolio managers and contributes to a better understanding of the nature of the markets in which they invest. It also has practical implications in the design of international portfolio investment policies.
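
    The minimum variance weights behind such a portfolio have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1); a small sketch with an assumed covariance matrix for three markets (not the NAFTA estimates from the paper):

        # Global minimum variance portfolio from a covariance matrix.
        import numpy as np

        cov = np.array([[0.04, 0.01, 0.02],
                        [0.01, 0.09, 0.03],
                        [0.02, 0.03, 0.16]])
        ones = np.ones(3)
        w = np.linalg.solve(cov, ones)
        w /= ones @ w
        print(w, w @ cov @ w)          # weights and portfolio variance

    In a time-varying version, cov would be re-estimated over a rolling window and the weights recomputed each period.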

  7. Estimation for Domains in Double Sampling with Probabilities ...

    African Journals Online (AJOL)

    Available publications show that the variance of an estimator of a domain parameter depends on the variance of the study variable for the domain elements and on the variance of the mean of that variable for elements of the domain in each constituent stratum. In this article, we show that the variance of an estimator of a domain total ...

  8. Selection for growth traits in Gyr cattle. 2. Estimates of variances and genetic parameters due to direct and maternal effects

    Directory of Open Access Journals (Sweden)

    Fabiana Batalha Knackfuss

    2006-06-01

    Full Text Available Variance components and genetic parameters for growth traits were estimated under different models in a Gyr herd, using restricted maximum likelihood and univariate animal models. The analysis models included the fixed effects of month of birth, contemporary group and age of dam. Five models differing in their random effects were tested. For all preweaning traits, the likelihood ratio test (LRT) indicated the model with the direct additive genetic effect and maternal effects (genetic and permanent environmental) as the best fitting. Direct heritability estimates for birth weight (PN), weight at four months adjusted to 120 days (P120), weaning weight adjusted to 210 days (P210) and preweaning daily gain (GPRE) were, respectively, 0.31 ± 0.07, 0.14 ± 0.06, 0.23 ± 0.07 and 0.22 ± 0.07. For postweaning traits, the model providing the best fit to the data included only the direct additive genetic effect. Direct heritability estimates for male weight at the end of the weight-gain test (P378), female weight adjusted to 550 days (P550), daily gain during the weight-gain test (G112), height at 378 days in males (AM) and height at 550 days in females (AF) were, respectively, 0.45 ± 0.11, 0.29 ± 0.11, 0.37 ± 0.11, 0.79 ± 0.13 and 0.36 ± 0.0. Maternal effects, both genetic and permanent environmental, were important sources of variation for preweaning traits, whereas no influence of these effects was observed on postweaning traits.

  9. Adjustment for heterogeneous variances due to days in milk and ...

    African Journals Online (AJOL)

    ARC-IRENE

    models for national genetic evaluation of dairy cattle in South Africa ... Test-Day Model (FRTDM), which assumes equal variances of the response variable at different .... adjusted for heterogeneous variances, BLUEs are the best linear unbiased .... This makes sense as part of the residual variance has already been taken ...

  10. Hidden Item Variance in Multiple Mini-Interview Scores

    Science.gov (United States)

    Zaidi, Nikki L.; Swoboda, Christopher M.; Kelcey, Benjamin M.; Manuel, R. Stephen

    2017-01-01

    The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation…

  11. A New Nonparametric Levene Test for Equal Variances

    Science.gov (United States)

    Nordstokke, David W.; Zumbo, Bruno D.

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…

  12. The Operating Characteristics of the Nonparametric Levene Test for Equal Variances with Assessment and Evaluation Data

    Directory of Open Access Journals (Sweden)

    David W. Nordstokke

    2011-02-01

    Full Text Available Many assessment and evaluation studies use statistical hypothesis tests, such as the independent samples t test or analysis of variance, to test the equality of two or more means for gender, age groups, cultures or language group comparisons. In addition, some, but far fewer, studies compare variability across these same groups or research conditions. Tests of the equality of variances can therefore be used on their own for this purpose but they are most often used alongside other methods to support assumptions made about variances. This is often done so that variances can be pooled across groups to yield an estimate of variance that is used in the standard error of the statistic in question. The purposes of this paper are twofold. The first purpose is to describe a new nonparametric Levene test for equal variances that can be used with widely available statistical software such as SPSS or SAS, and the second purpose is to investigate this test's operating characteristics, Type I error and statistical power, with real assessment and evaluation data. To date, the operating characteristics of the nonparametric Levene test have been studied with mathematical distributions in computer experiments and, although that information is valuable, this study will be an important next step in documenting both the level of non-normality (skewness and kurtosis of real assessment and evaluation data, and how this new statistical test operates in these conditions.
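
    The rank-based idea can be sketched in a few lines: pool the groups, replace scores by their ranks, and run the mean-based Levene test on the ranks (details of the published procedure may differ from this sketch):

        # Nonparametric (rank-based) Levene-type test on skewed toy data.
        import numpy as np
        from scipy.stats import rankdata, levene

        rng = np.random.default_rng(5)
        g1 = rng.exponential(1.0, 40)          # same shape, smaller spread
        g2 = rng.exponential(2.0, 40)          # larger spread

        ranks = rankdata(np.concatenate([g1, g2]))
        stat, p = levene(ranks[:40], ranks[40:], center="mean")
        print(stat, p)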

  13. The Evolution of Human Intelligence and the Coefficient of Additive Genetic Variance in Human Brain Size

    Science.gov (United States)

    Miller, Geoffrey F.; Penke, Lars

    2007-01-01

    Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…

  14. A New Approach for Predicting the Variance of Random Decrement Functions

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach

  15. Genetic variances, trends and mode of inheritance for hip and elbow dysplasia in Finnish dog populations

    NARCIS (Netherlands)

    Mäki, K.; Groen, A.F.; Liinamo, A.E.; Ojala, M.

    2002-01-01

    The aims of this study were to assess genetic variances, trends and mode of inheritance for hip and elbow dysplasia in Finnish dog populations. The influence of time-dependent fixed effects in the model when estimating the genetic trends was also studied. Official hip and elbow dysplasia screening

  16. Firm Size and Growth Rate Variance: the Effects of Data Truncation

    NARCIS (Netherlands)

    Capasso, M.; Cefis, E.

    2010-01-01

    This paper discusses the effects of the existence of natural and/or exogenously imposed thresholds in firm size distributions, on estimations of the relation between firm size and variance in firm growth rates. We explain why the results in the literature on this relationship are not consistent. We

  17. How to assess intra- and inter-observer agreement with quantitative PET using variance component analysis

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie Holm; Segtnan, Eivind Antonsen

    2016-01-01

    BACKGROUND: Quantitative measurement procedures need to be accurate and precise to justify their clinical use. Precision reflects deviation of groups of measurements from one another, often expressed as proportions of agreement, standard errors of measurement, coefficients of variation, or Bland-Altman plots. ... The involved linear mixed effects models require carefully considered sample sizes to account for the challenge of sufficiently accurately estimating variance components.

  18. Estimating constituent loads

    Science.gov (United States)

    Cohn, T.A.; DeLong, L.L.; Gilroy, E.J.; Hirsch, R.M.; Wells, D.K.

    1989-01-01

    This paper compares the bias and variance of three procedures that can be used with log linear regression models: the traditional rating curve estimator, a modified rating curve method, and a minimum variance unbiased estimator (MVUE). Analytical derivations of the bias and efficiency of all three estimators are presented. It is shown that for many conditions the traditional and the modified estimator can provide satisfactory estimates. However, other conditions exist where they have substantial bias and a large mean square error. These conditions commonly occur when sample sizes are small, or when loads are estimated during high-flow conditions. The MVUE, however, is unbiased and always performs nearly as well or better than the rating curve estimator or the modified estimator provided that the hypothesis of the log linear model is correct. Since an efficient unbiased estimator is available, there seems to be no reason to employ biased estimators. -from Authors
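
    A numerical illustration of the bias issue: naive back-transformation of a log-linear rating curve underestimates loads, and the lognormal correction factor exp(s²/2) approximates the adjustment (the exact MVUE uses Finney's g_m function, omitted here):

        # Rating-curve load estimate, naive vs. bias-corrected back-transform.
        import numpy as np

        rng = np.random.default_rng(11)
        n = 60
        logQ = rng.normal(3.0, 0.8, n)                  # log discharge
        logL = 1.0 + 1.2 * logQ + rng.normal(0, 0.5, n) # log load

        X = np.column_stack([np.ones(n), logQ])
        beta, *_ = np.linalg.lstsq(X, logL, rcond=None)
        s2 = np.sum((logL - X @ beta) ** 2) / (n - 2)   # residual variance

        logQ_new = 4.5                                  # high-flow prediction point
        naive = np.exp(beta[0] + beta[1] * logQ_new)    # traditional estimator
        corrected = naive * np.exp(s2 / 2)              # approximate correction
        print(naive, corrected)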

  19. Genetic variance components and heritability of multiallelic heterozygosity under inbreeding

    Science.gov (United States)

    Nietlisbach, P; Keller, L F; Postma, E

    2016-01-01

    The maintenance of genetic diversity in fitness-related traits remains a central topic in evolutionary biology, for example, in the context of sexual selection for genetic benefits. Among the solutions that have been proposed is directional sexual selection for heterozygosity. The importance of such selection is highly debated. However, a critical evaluation requires knowledge of the heritability of heterozygosity, a quantity that is rarely estimated in this context, and often assumed to be zero. This is at least partly the result of the lack of a general framework that allows for its quantitative prediction in small and inbred populations, which are the focus of most empirical studies. Moreover, while current predictors are applicable only to biallelic loci, fitness-relevant loci are often multiallelic, as are the neutral markers typically used to estimate genome-wide heterozygosity. To this end, we first review previous, but little-known, work showing that under most circumstances, heterozygosity at biallelic loci and in the absence of inbreeding is heritable. We then derive the heritability of heterozygosity and the underlying variances for multiple alleles and any inbreeding level. We also show that heterozygosity at multiallelic loci can be highly heritable when allele frequencies are unequal, and that this heritability is reduced by inbreeding. Our quantitative genetic framework can provide new insights into the evolutionary dynamics of heterozygosity in inbred and outbred populations. PMID:26174022

  20. ANOVA and the variance homogeneity assumption: Exploring a better gatekeeper.

    Science.gov (United States)

    Kim, Yoosun Jamie; Cribbie, Robert A

    2018-02-01

    Valid use of the traditional independent samples ANOVA procedure requires that the population variances are equal. Previous research has investigated whether variance homogeneity tests, such as Levene's test, are satisfactory as gatekeepers for identifying when to use or not to use the ANOVA procedure. This research focuses on a novel homogeneity of variance test that incorporates an equivalence testing approach. Instead of testing the null hypothesis that the variances are equal against an alternative hypothesis that the variances are not equal, the equivalence-based test evaluates the null hypothesis that the difference in the variances falls outside or on the border of a predetermined interval against an alternative hypothesis that the difference in the variances falls within the predetermined interval. Thus, with the equivalence-based procedure, the alternative hypothesis is aligned with the research hypothesis (variance equality). A simulation study demonstrated that the equivalence-based test of population variance homogeneity is a better gatekeeper for the ANOVA than traditional homogeneity of variance tests. © 2017 The British Psychological Society.

  1. Image registration error variance as a measure of overlay quality. [satellite data processing

    Science.gov (United States)

    Mcgillem, C. D.; Svedlow, M.

    1976-01-01

    When one image (the signal) is to be registered with a second image (the signal plus noise) of the same scene, one would like to know the accuracy possible for this registration. This paper derives an estimate of the variance of the registration error that can be expected via two approaches. The solution in each instance is found to be a function of the effective bandwidth of the signal and the noise, and the signal-to-noise ratio. Application of these results to LANDSAT-1 data indicates that for most cases, registration variances will be significantly less than the diameter of one picture element.

  2. Feasibility study: protein denaturation and coagulation monitoring with speckle variance optical coherence tomography

    Science.gov (United States)

    Lee, Changho; Cheon, Gyeongwoo; Kim, Do-Hyun; Kang, Jin U.

    2016-12-01

    We performed a feasibility study using speckle variance optical coherence tomography (SvOCT) to monitor the thermally induced protein denaturation and coagulation process as a function of temperature and depth. SvOCT provided depth-resolved images of protein denaturation and coagulation with microscale resolution. The study was conducted using egg white. During the heating process, as the temperature increased, an increase in the speckle variance signal was observed as the egg white proteins coagulated. Additionally, by calculating the cross-correlation coefficient in specific areas, denatured egg white conditions were successfully estimated. These results indicate that SvOCT could be used to monitor the denaturation process of various proteins.

  3. First-order variance of travel time in nonstationary formations

    National Research Council Canada - National Science Library

    Olaf A. Cirpka; Wolfgang Nowak

    2004-01-01

    ... is the variance of travel time, i.e., the time it takes for a solute particle to be transported from the release point to an observation plane [Shapiro and Cvetkovic, 1988; Dagan et al., 1992]. The travel time is also given by the first temporal moment of a concentration breakthrough curve normalized by its zeroth moment [Harvey and Gorelick, 1995]. Together with the variance of lateral displacement, the travel time variance has been used in solute-flux approaches to macrod...

  4. A New Nonparametric Levene Test for Equal Variances

    OpenAIRE

    Bruno D. Zumbo; David W. Nordstokke

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current 'gold standard' method, the median-based Levene test, in a computer simulation study. The simulation results show that when sampling from either symmetric or ske...

  5. Statistical power to detect genetic (co)variance of complex traits using SNP data in unrelated samples.

    Science.gov (United States)

    Visscher, Peter M; Hemani, Gibran; Vinkhuyzen, Anna A E; Chen, Guo-Bo; Lee, Sang Hong; Wray, Naomi R; Goddard, Michael E; Yang, Jian

    2014-04-01

    We have recently developed analysis methods (GREML) to estimate the genetic variance of a complex trait/disease and the genetic correlation between two complex traits/diseases using genome-wide single nucleotide polymorphism (SNP) data in unrelated individuals. Here we use analytical derivations and simulations to quantify the sampling variance of the estimate of the proportion of phenotypic variance captured by all SNPs for quantitative traits and case-control studies. We also derive the approximate sampling variance of the estimate of a genetic correlation in a bivariate analysis, when two complex traits are either measured on the same or different individuals. We show that the sampling variance is inversely proportional to the number of pairwise contrasts in the analysis and to the variance in SNP-derived genetic relationships. For bivariate analysis, the sampling variance of the genetic correlation additionally depends on the harmonic mean of the proportion of variance explained by the SNPs for the two traits and the genetic correlation between the traits, and depends on the phenotypic correlation when the traits are measured on the same individuals. We provide an online tool for calculating the power of detecting genetic (co)variation using genome-wide SNP data. The new theory and online tool will be helpful to plan experimental designs to estimate the missing heritability that has not yet been fully revealed through genome-wide association studies, and to estimate the genetic overlap between complex traits (diseases) in particular when the traits (diseases) are not measured on the same samples.
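
    The paper's key approximation, Var(ĥ²) ≈ 2 / (N² · var(A_ij)), gives a one-line calculator; var(A_ij) ≈ 2 × 10⁻⁵ is the value typically quoted for SNP-derived relationships among unrelated individuals:

        # Approximate standard error of a GREML SNP-heritability estimate.
        def greml_se(n, var_rel=2e-5):
            return (2.0 / (n ** 2 * var_rel)) ** 0.5

        for n in (5_000, 10_000, 50_000):
            print(n, round(greml_se(n), 4))             # SE is roughly 316 / N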

  6. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    Science.gov (United States)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

    Atrial fibrillation is a serious heart problem originating from the upper chambers of the heart. A common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, called the RR interval for short. The irregularity can be represented by the variance, or spread, of the RR interval. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we obtain good atrial fibrillation detection performance.
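
    A toy sketch of the implied detection rule: compute the variance of RR intervals in a sliding window and flag windows exceeding a threshold (window length and threshold are illustrative, not taken from the paper):

        # Sliding-window RR-interval variance as an AF flag.
        import numpy as np

        def flag_af(rr, win=20, thresh=0.02):
            """rr: RR intervals in seconds; returns one boolean per window."""
            return np.array([np.var(rr[i:i + win]) > thresh
                             for i in range(len(rr) - win + 1)])

        rng = np.random.default_rng(2)
        rr = np.r_[rng.normal(0.8, 0.02, 100),   # regular rhythm
                   rng.normal(0.7, 0.18, 100)]   # irregular, AF-like
        print(flag_af(rr).mean())                # fraction of flagged windows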

  7. Using variance structure to quantify responses to perturbation in fish catches

    Science.gov (United States)

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.

  8. A general approach to mixed effects modeling of residual variances in generalized linear mixed models

    Directory of Open Access Journals (Sweden)

    Kizilkaya Kadir

    2005-01-01

    Full Text Available Abstract We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which linked functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data were generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first-parity dams, in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values.

  9. Minimum variance beamformers for coherent plane-wave compounding

    Science.gov (United States)

    Nguyen, Nghia Q.; Prager, Richard W.

    2017-03-01

    In this paper we present and analyse a technique for applying minimum variance distortionless response (MVDR) beamforming to a coherent plane-wave compounding (CPWC) acquisition system. In the past, this has been done using a spatial smoothing approach that reduces the effective size of the receive aperture and degrades the image resolution. In this paper, we apply the MVDR algorithms in a novel way to the acquired data from the individual transducer elements, before any summation or other compounding. This enables us to propose a new approach for estimation of the covariance matrix that decorrelates the coherence among the components at all the different acquisition angles. This results in a new approach to receive beamforming for CPWC acquisition. The new beamformer is demonstrated on imaging data acquired with a research scanner. We find the new beamformer offers substantial improvements over the DAS method. It also significantly outperforms the previously published MVDR/CPWC beamformer on phantom studies where the signal from the main target is dominated by noise and interference. These improvements motivate further study in this new approach for enhancing image quality.
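
    The core MVDR computation has the standard closed form w = R⁻¹a / (aᴴR⁻¹a); below is a minimal sketch on simulated per-element data with diagonal loading for a stable inverse (not the authors' CPWC covariance estimator):

        # MVDR weights from simulated element data.
        import numpy as np

        rng = np.random.default_rng(9)
        m, snaps = 16, 200
        a = np.ones(m, dtype=complex)                # steering vector (focused data)
        x = a[:, None] * rng.normal(size=snaps) + 0.5 * (
            rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps)))

        R = x @ x.conj().T / snaps                   # sample covariance
        R += 1e-2 * np.trace(R).real / m * np.eye(m) # diagonal loading
        Ri_a = np.linalg.solve(R, a)
        w = Ri_a / (a.conj() @ Ri_a)                 # unit gain toward a
        print(np.abs(w.conj() @ x).mean())           # beamformed output level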

  10. Cosmic variance and the measurement of the local Hubble parameter.

    Science.gov (United States)

    Marra, Valerio; Amendola, Luca; Sawicki, Ignacy; Valkenburg, Wessel

    2013-06-14

    There is an approximately 9% discrepancy, corresponding to 2.4 σ, between two independent constraints on the expansion rate of the Universe: one indirectly arising from the cosmic microwave background and baryon acoustic oscillations and one more directly obtained from local measurements of the relation between redshifts and distances to sources. We argue that by taking into account the local gravitational potential at the position of the observer this tension--strengthened by the recent Planck results--is partially relieved and the concordance of the Standard Model of cosmology increased. We estimate that measurements of the local Hubble constant are subject to a cosmic variance of about 2.4% (limiting the local sample to redshifts z > 0.010) or 1.3% (limiting it to z > 0.023), a more significant correction than that taken into account already. Nonetheless, we show that one would need a very rare fluctuation to fully explain the offset in the Hubble rates. If this tension is further strengthened, a cosmology beyond the Standard Model may prove necessary.

  11. Variation of the ulnar variance with powerful grip.

    Science.gov (United States)

    Sönmez, M; Turaçlar, U T; Taş, F; Sabancioğullari, V

    2002-01-01

    Causal relationships between ulnar variance and wrist disorders are known. Gripping and pronation cause proximal translation of the radius with respect to the ulna, leading to a statistically significant increase in ulnar variance. The purpose of this study was to investigate variation of the ulnar variance with powerful grip. A total of 41 male volunteers aged between 19 and 25 years (mean, 21.2+/-1.7 years) were studied. Posteroanterior X-ray films of all wrists were taken in the standardized position. After neutral posteroanterior X-ray films had been taken, subjects were asked to grip a Takei hand dynamometer with maximum force while repeated standardized posteroanterior X-ray films were obtained. Ulnar variance values were measured using the perpendicular method. Mean maximum grip force was 38.1 kg (range, 26.6-47.9 kg). Mean values of force-free (neutral) and forced ulnar variances were 0.06+/-0.21 mm and 1.87+/-0.23 mm, respectively. The difference in ulnar variance between the two groups was statistically significant ( P<0.001). The increase in ulnar variance with grip observed varied between 0.00 mm (minimum) and 3.97 mm (maximum), with a mean of 1.81 mm. Gaining an understanding of normal limits of ulnar variance modification with grip may be helpful in planning surgical treatment.

  12. Capturing option anomalies with a variance-dependent pricing kernel

    NARCIS (Netherlands)

    Christoffersen, P.; Heston, S.; Jacobs, K.

    2013-01-01

    We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic.

  13. Gender Variance and Educational Psychology: Implications for Practice

    Science.gov (United States)

    Yavuz, Carrie

    2016-01-01

    The area of gender variance appears to be more visible in both the media and everyday life. Within educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  14. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    Science.gov (United States)

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  15. Evidence of Heterogeneity of Variance in Milk Yield among Holstein ...

    African Journals Online (AJOL)

    Three thousand, nine hundred and seventy five lactation records of Holstein- Friesian cows between 1968 and 1984 were used to investigate the existence of heterogeneity of variance in milk yield in Kenya. Coefficient of variation and standard deviations across herds were used to test heterogeneity of variance. Average ...

  16. Conceptual Complexity and the Bias/Variance Tradeoff

    Science.gov (United States)

    Briscoe, Erica; Feldman, Jacob

    2011-01-01

    In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
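
    The tradeoff invoked here is the standard decomposition of expected squared prediction error, stated in the usual regression form (a textbook identity, not a result of this paper):

        \mathbb{E}\big[(y - \hat{f}(x))^2\big]
          = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
          + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
          + \underbrace{\sigma^2}_{\text{irreducible noise}}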

  17. A Cautionary Tale about Levene's Tests for Equal Variances

    Science.gov (United States)

    Nordstokke, David W.; Zumbo, Bruno D.

    2007-01-01

    The central messages of this paper are that (a) unequal variances may be more prevalent than typically imagined in educational and policy research, and (b) when considering tests of equal variances one needs to be cautious about what is being referred to as "Levene's test" because Levene's test is actually a family of techniques. Depending on…

  18. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    Science.gov (United States)

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, often residual variances are disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted on the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.

  19. Methodology of the fasting sub-sample from the Mexican Health Survey, 2000 Metodología de la submuestra de suero de la Encuesta Nacional de Salud 2000

    OpenAIRE

    Simón Barquera; Citlalli Carrión; Ismael Campos; Juan Espinosa; Juan Rivera; Gustavo Olaiz-Fernández

    2007-01-01

    OBJECTIVE: To report the comparative results of the sub-sample of fasting adults selected for the biochemical measurement of cardiovascular risk factors and the rest of the Mexican Health Survey (MHS) (2000) participants. MATERIAL AND METHODS: The nationally representative, cross-sectional Mexican Health Survey (2000) was analyzed. Survey participants reporting a fasting state period of 9- to 12-h were included in a sub-sample (n= 2 535) and compared with all other participants (n= 41 126). P...

  20. Performance of optimal registration estimators

    NARCIS (Netherlands)

    Pham, T.Q.; Bezuijen, M.; Van Vliet, L.J.; Schutte, K.; Luengo Hendriks, C.L.

    2005-01-01

    This paper derives a theoretical limit for image registration and presents an iterative estimator that achieves the limit. The variance of any parametric registration is bounded by the Cramer-Rao bound (CRB). This bound is signal-dependent and is proportional to the variance of input noise. Since
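
    For intuition, the simplest instance of the bound mentioned above is the CRB for estimating a pure 1-D translation t of a known signal s sampled at points x_i in additive white Gaussian noise of variance σ². This standard form shows both the signal dependence (through the gradient energy) and the proportionality to the input-noise variance; it is offered as an illustration, not the paper's exact derivation:

        \operatorname{var}(\hat{t}) \;\ge\; \mathrm{CRB}(t) \;=\; \frac{\sigma^2}{\sum_i \big(s'(x_i)\big)^2}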

  1. Male size composition affects male reproductive variance in Atlantic cod Gadus morhua L. spawning aggregations

    DEFF Research Database (Denmark)

    Bekkevold, Dorte

    2006-01-01

    Estimates of Atlantic cod Gadus morhua reproductive success, determined using experimental spawning groups and genetic paternity assignment of offspring, showed that within-group variance in male size correlated positively with the degree of male mating skew, predicting a decrease in male...... reproductive skew with decreasing size variation among males under natural conditions. (c) 2006 The Author Journal compilation (c) 2006 The Fisheries Society of the British Isles...

  2. Analysis of experiments in square lattice with emphasis on variance components. i. Individual analysis

    OpenAIRE

    Silva,Heyder Diniz; Regazzi,Adair José; Cruz,Cosme Damião; Viana,José Marcelo Soriano

    1999-01-01

    This paper focused on four alternatives of analysis of experiments in square lattice as far as the estimation of variance components and some genetic parameters are concerned: 1) intra-block analysis with adjusted treatment and blocks within unadjusted repetitions; 2) lattice analysis as complete randomized blocks; 3) intrablock analysis with unadjusted treatment and blocks within adjusted repetitions; 4) lattice analysis as complete randomized blocks, by utilizing the adjusted means of treat...

  3. On the relationship between epistasis and genetic variance heterogeneity.

    Science.gov (United States)

    Forsberg, Simon K G; Carlborg, Örjan

    2017-11-28

    Epistasis and genetic variance heterogeneity are two non-additive genetic inheritance patterns that are often, but not always, related. Here we use theoretical examples and empirical results from earlier analyses of experimental data to illustrate the connection between the two. This includes an introduction to the relationship between epistatic gene action, statistical epistasis, and genetic variance heterogeneity, and a brief discussion about how genetic processes other than epistasis can also give rise to genetic variance heterogeneity. © The Author 2017. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  4. Accuracy and precision of variance components in occupational posture recordings: a simulation study of different data collection strategies

    Directory of Open Access Journals (Sweden)

    Liv Per

    2012-06-01

    Full Text Available Abstract Background Information on exposure variability, expressed as exposure variance components, is of vital use in occupational epidemiology, including informed risk control and efficient study design. While accurate and precise estimates of the variance components are desirable in such cases, very little research has been devoted to understanding the performance of data sampling strategies designed specifically to determine the size and structure of exposure variability. The aim of this study was to investigate the accuracy and precision of estimators of between-subjects, between-days and within-day variance components obtained by sampling strategies differing with respect to number of subjects, total sampling time per subject, number of days per subject and the size of individual sampling periods. Methods Minute-by-minute values of average elevation, percentage time above 90° and percentage time below 15° were calculated in a data set consisting of measurements of right upper arm elevation during four full shifts from each of 23 car mechanics. Based on this parent data, bootstrapping was used to simulate sampling with 80 different combinations of the number of subjects (10, 20), total sampling time per subject (60, 120, 240, 480 minutes), number of days per subject (2, 4), and size of sampling periods (blocks) within days (1, 15, 60, 240 minutes). Accuracy (absence of bias) and precision (prediction intervals) of the variance component estimators were assessed for each simulated sampling strategy. Results Sampling in small blocks within days resulted in essentially unbiased variance components. For a specific total sampling time per subject, and in particular if this time was small, increasing the block size resulted in an increasing bias, primarily of the between-days and the within-days variance components. Prediction intervals were in general wide, and even more so at larger block sizes. Distributing sampling time across more days gave in
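
    A minimal sketch of the kind of nested variance-component estimation evaluated in the study: simulate a balanced subjects/days/minutes hierarchy and recover the three components with method-of-moments (nested ANOVA) estimators. The design sizes and component values are illustrative assumptions, not the study's data.

        import numpy as np

        rng = np.random.default_rng(0)
        n_subj, n_days, n_min = 20, 4, 60            # illustrative balanced design
        sd_subj, sd_day, sd_within = 3.0, 2.0, 5.0   # assumed true component SDs

        # Exposure = grand mean + subject effect + day-within-subject + residual.
        subj = rng.normal(0, sd_subj, n_subj)[:, None, None]
        day = rng.normal(0, sd_day, (n_subj, n_days))[:, :, None]
        y = 30.0 + subj + day + rng.normal(0, sd_within, (n_subj, n_days, n_min))

        # Method-of-moments (nested ANOVA) estimators for the balanced case.
        ms_within = np.mean(np.var(y, axis=2, ddof=1))        # E = within-day variance
        day_means = y.mean(axis=2)
        ms_day = n_min * np.mean(np.var(day_means, axis=1, ddof=1))
        subj_means = day_means.mean(axis=1)
        ms_subj = n_days * n_min * np.var(subj_means, ddof=1)

        var_within = ms_within                                # within-day
        var_day = (ms_day - ms_within) / n_min                # between-days
        var_subj = (ms_subj - ms_day) / (n_days * n_min)      # between-subjects
        print(var_subj, var_day, var_within)                  # approx. 9, 4, 25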

  5. Zero-variance zero-bias principle for observables in quantum Monte Carlo: Application to forces

    Science.gov (United States)

    Assaraf, Roland; Caffarel, Michel

    2003-11-01

    A simple and stable method for computing accurate expectation values of observables with variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC) algorithms is presented. The basic idea consists in replacing the usual "bare" estimator associated with the observable by an improved or "renormalized" estimator. Using this estimator more accurate averages are obtained: not only are the statistical fluctuations reduced, but also the systematic error (bias) associated with the approximate VMC or (fixed-node) DMC probability densities. It is shown that improved estimators obey a zero-variance zero-bias property similar to the usual zero-variance zero-bias property of the energy with the local energy as improved estimator. Using this property improved estimators can be optimized and the resulting accuracy on expectation values may reach the remarkable accuracy obtained for total energies. As an important example, we present the application of our formalism to the computation of forces in molecular systems. Calculations of the entire force curve of the H2, LiH, and Li2 molecules are presented. Spectroscopic constants Re (equilibrium distance) and ωe (harmonic frequency) are also computed. The equilibrium distances are obtained with a relative error smaller than 1%, while the harmonic frequencies are computed with an error of about 10%.

  6. Additive genetic variance in polyandry enables its evolution, but polyandry is unlikely to evolve through sexy or good sperm processes.

    Science.gov (United States)

    Travers, L M; Simmons, L W; Garcia-Gonzalez, F

    2016-05-01

    Polyandry is widespread despite its costs. The sexually selected sperm hypotheses ('sexy' and 'good' sperm) posit that sperm competition plays a role in the evolution of polyandry. Two poorly studied assumptions of these hypotheses are the presence of additive genetic variance in polyandry and sperm competitiveness. Using a quantitative genetic breeding design in a natural population of Drosophila melanogaster, we first established the potential for polyandry to respond to selection. We then investigated whether polyandry can evolve through sexually selected sperm processes. We measured lifetime polyandry and offensive sperm competitiveness (P2 ) while controlling for sampling variance due to male × male × female interactions. We also measured additive genetic variance in egg-to-adult viability and controlled for its effect on P2 estimates. Female lifetime polyandry showed significant and substantial additive genetic variance and evolvability. In contrast, we found little genetic variance or evolvability in P2 or egg-to-adult viability. Additive genetic variance in polyandry highlights its potential to respond to selection. However, the low levels of genetic variance in sperm competitiveness suggest that the evolution of polyandry may not be driven by sexy sperm or good sperm processes. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.

  7. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

    Science.gov (United States)

    Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...

  8. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric...... evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation...... for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution....

  9. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
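
    The surveyed techniques are tailored to corrector problems, but the basic mechanism can be shown with the simplest generic device, antithetic variates, on a scalar Monte Carlo average. The integrand and the pairing of U with 1−U below are a generic illustration under that assumption, not the paper's homogenization setting.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 10_000
        f = lambda u: np.exp(u)              # toy stand-in for a costly observable

        u = rng.random(n)
        plain = f(u)                         # standard Monte Carlo samples
        anti = 0.5 * (f(u) + f(1.0 - u))     # antithetic pairs (same n random draws)

        print(plain.mean(), plain.var(ddof=1) / n)   # estimate and its estimated variance
        print(anti.mean(), anti.var(ddof=1) / n)     # same target, markedly smaller variance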

  10. Potential Coefficient and Anomaly Degree Variance Modelling Revisited,

    Science.gov (United States)

    1979-09-01

    …using least squares collocation. Moritz (1977) suggested an improved model for the anomaly degree variances that was investigated by Jekeli (1978)… Keywords: geodesy, gravity, collocation, covariances. …provided one is willing to accept a high gradient variance. Excellent fit to all data types is obtained with the two-component model suggested by Moritz.

  11. Bias-variance tradeoff of soft decision trees

    OpenAIRE

    Olaru, Cristina; Wehenkel, Louis

    2004-01-01

    This paper focuses on the study of the error composition of a fuzzy decision tree induction method recently proposed by the authors, called soft decision trees. This error may be expressed as a sum of three types of error: residual error, bias and variance. The paper studies empirically the tradeoff between bias and variance in a soft decision tree method and compares it with the tradeoff of classical crisp regression and classification trees. The m...

  12. Global Variance Risk Premium and Forex Return Predictability

    OpenAIRE

    Aloosh, Arash

    2014-01-01

    In a long-run risk model with stochastic volatility and frictionless markets, I express expected forex returns as a function of consumption growth variances and stock variance risk premiums (VRPs)—the difference between the risk-neutral and statistical expectations of market return variation. This provides a motivation for using the forward-looking information available in stock market volatility indices to predict forex returns. Empirically, I find that stock VRPs predict forex returns at a ...

  13. The Durbin-Watson Ratio Under Infinite Variance Errors

    OpenAIRE

    Phillips, Peter C.B.; Mico Loretan

    1989-01-01

    This paper studies the properties of the von Neumann ratio for time series with infinite variance. The asymptotic theory is developed using recent results on the weak convergence of partial sums of time series with infinite variance to stable processes and of sample serial correlations to functions of stable variables. Our asymptotics cover the null of iid variates and general moving average (MA) alternatives. Regression residuals are also considered. In the static regression model the Durbin...

  14. Two-Variance-Component Model Improves Genetic Prediction in Family Datasets.

    Science.gov (United States)

    Tucker, George; Loh, Po-Ru; MacLeod, Iona M; Hayes, Ben J; Goddard, Michael E; Berger, Bonnie; Price, Alkes L

    2015-11-05

    Genetic prediction based on either identity by state (IBS) sharing or pedigree information has been investigated extensively with best linear unbiased prediction (BLUP) methods. Such methods were pioneered in plant and animal-breeding literature and have since been applied to predict human traits, with the aim of eventual clinical utility. However, methods to combine IBS sharing and pedigree information for genetic prediction in humans have not been explored. We introduce a two-variance-component model for genetic prediction: one component for IBS sharing and one for approximate pedigree structure, both estimated with genetic markers. In simulations using real genotypes from the Candidate-gene Association Resource (CARe) and Framingham Heart Study (FHS) family cohorts, we demonstrate that the two-variance-component model achieves gains in prediction r² over standard BLUP at current sample sizes, and we project, based on simulations, that these gains will continue to hold at larger sample sizes. Accordingly, in analyses of four quantitative phenotypes from CARe and two quantitative phenotypes from FHS, the two-variance-component model significantly improves prediction r² in each case, with up to a 20% relative improvement. We also find that standard mixed-model association tests can produce inflated test statistics in datasets with related individuals, whereas the two-variance-component model corrects for inflation. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
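
    In generic mixed-model notation, a two-variance-component predictor of this kind can be written as below; the kinship symbols K_IBS and K_ped are illustrative names for the two marker-estimated relationship matrices described in the abstract:

        y = X\beta + g_1 + g_2 + \varepsilon,\qquad
        g_1 \sim \mathcal{N}(0,\ \sigma_1^2 K_{\mathrm{IBS}}),\quad
        g_2 \sim \mathcal{N}(0,\ \sigma_2^2 K_{\mathrm{ped}}),\quad
        \varepsilon \sim \mathcal{N}(0,\ \sigma_e^2 I)

        \hat{g}_1 + \hat{g}_2 = \left(\sigma_1^2 K_{\mathrm{IBS}} + \sigma_2^2 K_{\mathrm{ped}}\right) V^{-1}\left(y - X\hat{\beta}\right),\qquad
        V = \sigma_1^2 K_{\mathrm{IBS}} + \sigma_2^2 K_{\mathrm{ped}} + \sigma_e^2 I

    The second line is the standard BLUP of the total genetic value for the training individuals.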

  15. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    Science.gov (United States)

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
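
    A minimal sketch contrasting the two base strategies reviewed above on a toy AR(1) series, with ordinary least-squares autoregressions standing in for the one-step and per-horizon models; the lag order, horizon, and data generator are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n, H, p = 500, 4, 2                  # series length, forecast horizon, lag order
        y = np.zeros(n)
        for t in range(1, n):                # toy AR(1) data
            y[t] = 0.8 * y[t - 1] + rng.normal()

        def lagged(y, p, lead):
            """Design matrix of p lags paired with the target 'lead' steps ahead."""
            X = np.column_stack([y[p - k - 1:len(y) - k - lead] for k in range(p)])
            return X, y[p + lead - 1:]

        # Recursive strategy: fit a one-step model, then iterate it H times.
        X1, t1 = lagged(y, p, 1)
        b1 = np.linalg.lstsq(X1, t1, rcond=None)[0]
        window = list(y[-p:])
        for _ in range(H):
            window.append(float(np.dot(b1, window[::-1][:p])))
        rec_forecast = window[-1]

        # Direct strategy: fit a separate model for horizon H.
        XH, tH = lagged(y, p, H)
        bH = np.linalg.lstsq(XH, tH, rcond=None)[0]
        dir_forecast = float(np.dot(bH, y[-1:-p - 1:-1]))

        print(rec_forecast, dir_forecast)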

  16. A comparison between temporal and subband minimum variance adaptive beamforming

    Science.gov (United States)

    Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis

    2014-03-01

    This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions with one transducer element transmitting and all 128 elements receiving each time, provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm and those of PSL are -42 dB and -46 dB for temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low resolution image formed by a single emission, 0.44 × 10⁹ calculations per second are required for the temporal approach. The same numbers for the subband approach are 0.62 × 10⁹ for the point and 1.33 × 10⁹ for the cyst phantom. The comparison demonstrates similar

  17. A novel approach to the bias-variance problem in bump hunting

    Science.gov (United States)

    Williams, M.

    2017-09-01

    This study explores various data-driven methods for performing background-model selection, and for assigning uncertainty on the signal-strength estimator that arises due to the choice of background model. The performance of these methods is evaluated in the context of several realistic example problems. Furthermore, a novel strategy is proposed that greatly simplifies the process of performing a bump hunt when little is assumed to be known about the background. This new approach is shown to greatly reduce the potential bias in the signal-strength estimator, without degrading the sensitivity by increasing the variance, and to produce confidence intervals with valid coverage properties.

  18. Double decomposition: decomposing the variance in subcomponents of male extra-pair reproductive success.

    Science.gov (United States)

    Losdat, Sylvain; Arcese, Peter; Reid, Jane M

    2015-09-01

    1. Extra-pair reproductive success (EPRS) is a key component of male fitness in socially monogamous systems and could cause selection on female extra-pair reproduction if extra-pair offspring (EPO) inherit high value for EPRS from their successful extra-pair fathers. However, EPRS is itself a composite trait that can be fully decomposed into subcomponents of variation, each of which can be further decomposed into genetic and environmental variances. However, such decompositions have not been implemented in wild populations, impeding evolutionary inference. 2. We first show that EPRS can be decomposed into the product of three life-history subcomponents: the number of broods available to a focal male to sire EPO, the male's probability of siring an EPO in an available brood and the number of offspring in available broods. This decomposition of EPRS facilitates estimation from field data because all subcomponents can be quantified from paternity data without need to quantify extra-pair matings. Our decomposition also highlights that the number of available broods, and hence population structure and demography, might contribute substantially to variance in male EPRS and fitness. 3. We then used 20 years of complete genetic paternity and pedigree data from wild song sparrows (Melospiza melodia) to partition variance in each of the three subcomponents of EPRS, and thereby estimate their additive genetic variance and heritability conditioned on effects of male coefficient of inbreeding, age and social status. 4. All three subcomponents of EPRS showed some degree of within-male repeatability, reflecting combined permanent environmental and genetic effects. Number of available broods and offspring per brood showed low additive genetic variances. The estimated additive genetic variance in extra-pair siring probability was larger, although the 95% credible interval still converged towards zero. Siring probability also showed inbreeding depression and increased with male age
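
    In symbols, the decomposition in point 2 is the product of the three subcomponents (the notation here is illustrative):

        \mathrm{EPRS} \;=\; B \times p \times n

    where B is the number of broods available to the male, p his probability of siring an extra-pair offspring in an available brood, and n the mean number of offspring per available brood.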

  19. New insights into the correlation structure of DSM-IV depression symptoms in the general population v. subsamples of depressed individuals.

    Science.gov (United States)

    Foster, S; Mohler-Kuo, M

    2017-01-09

    Previous research failed to uncover a replicable dimensional structure underlying the symptoms of depression. We aimed to examine two neglected methodological issues in this research: (a) adjusting symptom correlations for overall depression severity; and (b) analysing general population samples v. subsamples of currently depressed individuals. Using population-based cross-sectional and longitudinal data from two nations (Switzerland, 5883 young men; USA, 2174 young men and 2244 young women) we assessed the dimensions of the nine DSM-IV depression symptoms in young adults. In each general-population sample and each subsample of currently depressed participants, we conducted a standardised process of three analytical steps, based on exploratory and confirmatory factor and bifactor analysis, to reveal any replicable dimensional structure underlying symptom correlations while controlling for overall depression severity. We found no evidence of a replicable dimensional structure across samples when adjusting symptom correlations for overall depression severity. In the general-population samples, symptoms correlated strongly and a single dimension of depression severity was revealed. Among depressed participants, symptom correlations were surprisingly weak and no replicable dimensions were identified, regardless of severity-adjustment. First, caution is warranted when considering studies assessing dimensions of depression because general population-based studies and studies of depressed individuals generate different data that can lead to different conclusions. This problem likely generalises to other models based on the symptoms' inter-relationships such as network models. Second, whereas the overall severity aligns individuals on a continuum of disorder intensity that allows non-affected individuals to be distinguished from affected individuals, the clinical evaluation and treatment of depressed individuals should focus directly on each individual's symptom profile.

  20. Neuroticism explains unwanted variance in Implicit Association Tests of personality: possible evidence for an affective valence confound

    Science.gov (United States)

    Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja

    2013-01-01

    Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.

  1. Neuroticism explains unwanted variance in Implicit Association Tests of personality: Possible evidence for an affective valence confound

    Directory of Open Access Journals (Sweden)

    Monika eFleischhauer

    2013-09-01

    Full Text Available Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling, latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to

  2. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model.

    Science.gov (United States)

    Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M

    2015-11-01

    This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46) which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL which can be downloaded from www.epigear.com. Copyright © 2015 Elsevier Inc. All rights reserved.
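
    A minimal sketch of the IVhet computation as described above: a fixed-effect inverse-variance pooled estimate, paired with a variance that inflates each study's contribution by a heterogeneity estimate (the DerSimonian-Laird τ² is assumed here as the plug-in). The exact formulas should be checked against the paper or the MetaXL implementation.

        import numpy as np

        def ivhet(theta, v):
            """Sketch of the IVhet pooled estimate and its variance.

            theta: per-study effect estimates (e.g., log odds ratios).
            v: per-study sampling variances.
            """
            theta, v = np.asarray(theta, float), np.asarray(v, float)
            wi = 1.0 / v
            w = wi / wi.sum()                     # fixed-effect weights
            pooled = np.sum(w * theta)

            # DerSimonian-Laird tau^2 as the heterogeneity plug-in (assumed).
            q = np.sum(wi * (theta - np.sum(wi * theta) / wi.sum()) ** 2)
            c = wi.sum() - np.sum(wi ** 2) / wi.sum()
            tau2 = max(0.0, (q - (len(theta) - 1)) / c)

            var = np.sum(w ** 2 * (v + tau2))     # quasi-likelihood-type variance
            return pooled, var

        est, var = ivhet([0.10, -0.05, 0.30], [0.04, 0.02, 0.09])
        print(est, est - 1.96 * var ** 0.5, est + 1.96 * var ** 0.5)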

  3. CMB-S4 and the hemispherical variance anomaly

    Science.gov (United States)

    O'Dwyer, Márcio; Copi, Craig J.; Knox, Lloyd; Starkman, Glenn D.

    2017-09-01

    Cosmic microwave background (CMB) full-sky temperature data show a hemispherical asymmetry in power nearly aligned with the Ecliptic. In real space, this anomaly can be quantified by the temperature variance in the Northern and Southern Ecliptic hemispheres, with the Northern hemisphere displaying an anomalously low variance while the Southern hemisphere appears unremarkable [consistent with expectations from the best-fitting theory, Lambda Cold Dark Matter (ΛCDM)]. While this is a well-established result in temperature, the low signal-to-noise ratio in current polarization data prevents a similar comparison. This will change with a proposed ground-based CMB experiment, CMB-S4. With that in mind, we generate realizations of polarization maps constrained by the temperature data and predict the distribution of the hemispherical variance in polarization considering two different sky coverage scenarios possible in CMB-S4: full Ecliptic north coverage and just the portion of the North that can be observed from a ground-based telescope at the high Chilean Atacama plateau. We find that even in the set of realizations constrained by the temperature data, the low Northern hemisphere variance observed in temperature is not expected in polarization. Therefore, observing an anomalously low variance in polarization would make the hypothesis that the temperature anomaly is simply a statistical fluke more unlikely and thus increase the motivation for physical explanations. We show, within ΛCDM, how variance measurements in both sky coverage scenarios are related. We find that the variance makes for a good statistic in cases where the sky coverage is limited; however, full northern coverage is still preferable.

  4. The scope and control of attention: Sources of variance in working memory capacity.

    Science.gov (United States)

    Chow, Michael; Conway, Andrew R A

    2015-04-01

    Working memory capacity is a strong positive predictor of many cognitive abilities, across various domains. The pattern of positive correlations across domains has been interpreted as evidence for a unitary source of inter-individual differences in behavior. However, recent work suggests that there are multiple sources of variance contributing to working memory capacity. The current study (N = 71) investigates individual differences in the scope and control of attention, in addition to the number and resolution of items maintained in working memory. Latent variable analyses indicate that the scope and control of attention reflect independent sources of variance and each account for unique variance in general intelligence. Also, estimates of the number of items maintained in working memory are consistent across tasks and related to general intelligence whereas estimates of resolution are task-dependent and not predictive of intelligence. These results provide insight into the structure of working memory, as well as intelligence, and raise new questions about the distinction between number and resolution in visual short-term memory.

  5. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    Directory of Open Access Journals (Sweden)

    Lee Yi-Kang

    2017-01-01

    Full Text Available Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimation of the neutron activation products distributed in the reactor structure materials has an obvious impact on the decommissioning planning and the low-level radioactive waste management. Continuous energy Monte-Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be investigated. To perform this type of neutron deep penetration calculations with the Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimation. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.

  6. The relationship between observational scale and explained variance in benthic communities.

    Directory of Open Access Journals (Sweden)

    Alison M Flanagan

    Full Text Available This study addresses the impact of spatial scale on explaining variance in benthic communities. In particular, the analysis estimated the fraction of community variation that occurred at a spatial scale smaller than the sampling interval (i.e., the geographic distance between samples). This estimate is important because it sets a limit on the amount of community variation that can be explained based on the spatial configuration of a study area and sampling design. Six benthic data sets were examined that consisted of faunal abundances, common environmental variables (water depth, grain size, and surficial percent cover), and sonar backscatter treated as a habitat proxy (categorical acoustic provinces). Redundancy analysis was coupled with spatial variograms generated by multiscale ordination to quantify the explained and residual variance at different spatial scales and within and between acoustic provinces. The amount of community variation below the sampling interval of the surveys ( 71% of the remaining variance was explained by the environmental and province variables. Furthermore, these variables effectively explained the spatial structure present in the infaunal community. Overall, no scale problems remained to compromise inferences, and unexplained infaunal community variation had no apparent spatial structure within the observational scale of the surveys (> 100 m), although small-scale gradients (< 100 m) below the observational scale may be present.

  7. The relationship between observational scale and explained variance in benthic communities.

    Science.gov (United States)

    Flanagan, Alison M; Flood, Roger D; Frisk, Michael G; Garza, Corey D; Lopez, Glenn R; Maher, Nicole P; Cerrato, Robert M

    2018-01-01

    This study addresses the impact of spatial scale on explaining variance in benthic communities. In particular, the analysis estimated the fraction of community variation that occurred at a spatial scale smaller than the sampling interval (i.e., the geographic distance between samples). This estimate is important because it sets a limit on the amount of community variation that can be explained based on the spatial configuration of a study area and sampling design. Six benthic data sets were examined that consisted of faunal abundances, common environmental variables (water depth, grain size, and surficial percent cover), and sonar backscatter treated as a habitat proxy (categorical acoustic provinces). Redundancy analysis was coupled with spatial variograms generated by multiscale ordination to quantify the explained and residual variance at different spatial scales and within and between acoustic provinces. The amount of community variation below the sampling interval of the surveys ( 71% of the remaining variance was explained by the environmental and province variables. Furthermore, these variables effectively explained the spatial structure present in the infaunal community. Overall, no scale problems remained to compromise inferences, and unexplained infaunal community variation had no apparent spatial structure within the observational scale of the surveys (> 100 m), although small-scale gradients (< 100 m) below the observational scale may be present.

  8. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    Science.gov (United States)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimation of the neutron activation products distributed in the reactor structure materials has an obvious impact on the decommissioning planning and the low-level radioactive waste management. Continuous energy Monte-Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be investigated. To perform this type of neutron deep penetration calculations with the Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimation. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.

  9. On the estimation of the volatility-growth link

    DEFF Research Database (Denmark)

    Launov, Andrey; Posch, Olaf; Wälde, Klaus

    It is common practice to estimate the volatility-growth link by specifying a standard growth equation such that the variance of the error term appears as an explanatory variable in this growth equation. The variance in turn is modelled by a second equation. Hardly any of existing applications...... the appropriate controls are included in the variance equation consistency is restored. In short, we suggest that the variance equation must include relevant control variables to estimate the volatility-growth link....
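
    Schematically, the two-equation setup discussed above can be written as follows, where z_t are the controls the authors argue must enter the variance equation (symbols are illustrative):

        g_t = x_t^{\top}\gamma + \beta\,\sigma_t^2 + \varepsilon_t,\qquad
        \varepsilon_t \sim \left(0,\ \sigma_t^2\right),\qquad
        \sigma_t^2 = h\!\left(z_t^{\top}\delta\right)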

  10. Genetic and Environmental Variance Among F2 Families in a Commercial Breeding Program for Perennial Ryegrass (Lolium perenne L.)

    DEFF Research Database (Denmark)

    Fé, Dario; Greve-Pedersen, Morten; Jensen, Christian Sig

    2013-01-01

    of this study was to estimate the genetic and environmental variance in the training set composed of F2 families selected from a ten year breeding period. Variance components were estimated on 1193 of those families, sown in 2001, 2003 and 2005 in five locations around Europe. Families were tested together...... programs based on GWS. Future work will focus on developing association models based on tailored phenotype data and genotype-by-sequencing-derived allele frequencies....

  12. Measuring past changes in ENSO variance using Mg/Ca measurements on individual planktic foraminifera

    Science.gov (United States)

    Marchitto, T. M.; Grist, H. R.; van Geen, A.

    2013-12-01

    Previous work in Soledad Basin, located off Baja California Sur in the eastern subtropical Pacific, supports a La Niña-like mean-state response to enhanced radiative forcing at both orbital and millennial (solar) timescales during the Holocene. Mg/Ca measurements on the planktic foraminifer Globigerina bulloides indicate cooling when insolation is higher, consistent with an 'ocean dynamical thermostat' response that shoals the thermocline and cools the surface in the eastern tropical Pacific. Some, but not all, numerical models simulate reduced ENSO variance (less frequent and/or less intense events) when the Pacific is driven into a La Niña-like mean state by radiative forcing. Hypothetically the question of ENSO variance can be examined by measuring individual planktic foraminiferal tests from within a sample interval. Koutavas et al. (2006) used δ18O on single specimens of Globigerinoides ruber from the eastern equatorial Pacific to demonstrate a 50% reduction in variance at ~6 ka compared to ~2 ka, consistent with the sense of the model predictions at the orbital scale. Here we adapt this approach to Mg/Ca and apply it to the millennial-scale question. We present Mg/Ca measured on single specimens of G. bulloides (cold season) and G. ruber (warm season) from three time slices in Soledad Basin: the 20th century, the warm interval (and solar low) at 9.3 ka, and the cold interval (and solar high) at 9.8 ka. Each interval is uniformly sampled over a ~100-yr (~10-cm or more) window to ensure that our variance estimate is not biased by decadal-scale stochastic variability. Theoretically we can distinguish between changing ENSO variability and changing seasonality: a reduction in ENSO variance would result in narrowing of both the G. bulloides and G. ruber temperature distributions without necessarily changing the distance between their two medians; while a reduction in seasonality would cause the two species' distributions to move closer together.

  13. Extinction risk, coloured noise and the scaling of variance.

    Science.gov (United States)

    Wichmann, Matthias C; Johst, Karin; Schwager, Monika; Blasius, Bernd; Jeltsch, Florian

    2005-07-01

    The impact of temporally correlated fluctuating environments (coloured noise) on the extinction risk of populations has become a main focus in theoretical population ecology. In this study we particularly focus on the extinction risk in strongly correlated environments. Here, we found that, in contrast to moderate auto-correlation, the extinction risk was highly dependent on the process of noise generation, in particular on the method of variance scaling. Such scaling is commonly applied to avoid variance-driven biases when comparing the extinction risk under white and coloured noise. We show that for strong auto-correlation often-used scaling techniques lead to a high variability in the variances of the resulting time series and thus to deviations in the subsequent extinction risk. Therefore, we present an alternative scaling method that always delivers the target variance, even in the case of strong auto-correlation. In contrast to earlier techniques, our very intuitive method is not bound to auto-regressive processes but can be applied to all types of coloured noises. We strongly recommend our method to generate time series when the target of interest is the effect of noise colour on extinction risk not obscured by any variance effects.
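
    A minimal sketch of the scaling idea advocated above, assuming an AR(1) generator: rather than trusting the theoretical stationary variance, each realized series is rescaled so that it carries exactly the target variance, which remains valid under strong autocorrelation.

        import numpy as np

        def coloured_noise(n, kappa, target_var, rng):
            """AR(1) noise x[t] = kappa*x[t-1] + e[t], rescaled so that the
            realized series has exactly the requested variance."""
            x = np.empty(n)
            x[0] = rng.normal()
            for t in range(1, n):
                x[t] = kappa * x[t - 1] + rng.normal()
            x -= x.mean()
            return x * np.sqrt(target_var / x.var())   # exact empirical variance

        rng = np.random.default_rng(42)
        x = coloured_noise(1000, kappa=0.9, target_var=0.25, rng=rng)
        print(x.var())  # 0.25 by construction, even with strong autocorrelation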

  14. A New Nonparametric Levene Test for Equal Variances

    Directory of Open Access Journals (Sweden)

    Bruno D. Zumbo

    2010-01-01

    Full Text Available Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to the current 'gold standard' method, the median-based Levene test, in a computer simulation study. The simulation results show that when sampling from either symmetric or skewed population distributions both the median-based and nonparametric Levene tests maintain their nominal Type I error rate; however, when one is sampling from skewed population distributions the nonparametric test has more statistical power.
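
    A minimal sketch of the rank-based idea, as the test is usually summarized: pool and rank all observations, then run the mean-based Levene procedure on the ranks. This is an illustration of the principle, not a validated implementation of the authors' test.

        import numpy as np
        from scipy import stats

        def nonparametric_levene(*groups):
            """Rank-transform the pooled data, then apply mean-based Levene."""
            pooled = np.concatenate(groups)
            ranks = stats.rankdata(pooled)
            sizes = np.cumsum([len(g) for g in groups])[:-1]
            rank_groups = np.split(ranks, sizes)
            return stats.levene(*rank_groups, center='mean')

        rng = np.random.default_rng(7)
        a = rng.exponential(1.0, 40)        # skewed, unit scale
        b = rng.exponential(3.0, 40)        # skewed, larger scale
        print(nonparametric_levene(a, b))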

  15. ESTIMATING NUMBER DENSITY NV – A COMPARISON OF AN IMPROVED SALTYKOV ESTIMATOR AND THE DISECTOR METHOD

    Directory of Open Access Journals (Sweden)

    Ashot Davtian

    2011-05-01

    Full Text Available Two methods for the estimation of number per unit volume NV of spherical particles are discussed: the (physical) disector (Sterio, 1984) and Saltykov's estimator (Saltykov, 1950; Fullman, 1953). A modification of Saltykov's estimator is proposed which reduces the variance. Formulae for bias and variance are given for both disector and improved Saltykov estimator for the case of randomly positioned particles. They enable the comparison of the two estimators with respect to their precision in terms of mean squared error.

  16. Stable limits for sums of dependent infinite variance random variables

    DEFF Research Database (Denmark)

    Bartkiewicz, Katarzyna; Jakubowski, Adam; Mikosch, Thomas

    2011-01-01

    The aim of this paper is to provide conditions which ensure that the affinely transformed partial sums of a strictly stationary process converge in distribution to an infinite variance stable distribution. Conditions for this convergence to hold are known in the literature. However, most of these......

  17. A Broadband Beamformer Using Controllable Constraints and Minimum Variance

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Benesty, Jacob; Jensen, Jesper Rindom

    2014-01-01

    The minimum variance distortionless response (MVDR) and the linearly constrained minimum variance (LCMV) beamformers are two optimal approaches in the sense of noise reduction. The LCMV beamformer can also reject interferers using linear constraints at the expense of reducing the degree of freedom...... in a limited number of microphones. However, it may magnify noise that causes a lower output signal-to-noise ratio (SNR) than the MVDR beamformer. Contrarily, the MVDR beamformer suffers from interference in output. In this paper, we propose a controllable LCMV (C-LCMV) beamformer based on the principles...

  18. Why analysis of variance is inappropriate for multiclinic trials.

    Science.gov (United States)

    Salsburg, D

    1999-10-01

    Violations of the assumptions behind analysis of variance (ANOVA) models do not tend to affect the alpha-level but can greatly decrease the power of a clinical trial to detect treatment effects. The very nature of multiclinic studies guarantees the violation of some of these assumptions. In this article, I explore the reduction in power that results from two of these violations--heterogeneity of variance across sites and the existence of "floor" and "ceiling" effects. I propose other methods of statistical analysis that avoid this loss of power.

  19. Levine's guide to SPSS for analysis of variance

    CERN Document Server

    Braver, Sanford L; Page, Melanie

    2003-01-01

    A greatly expanded and heavily revised second edition, this popular guide provides instructions and clear examples for running analyses of variance (ANOVA) and several other related statistical tests of significance with SPSS. No other guide offers the program statements required for the more advanced tests in analysis of variance. All of the programs in the book can be run using any version of SPSS, including versions 11 and 11.5. A table at the end of the preface indicates where each type of analysis (e.g., simple comparisons) can be found for each type of design (e.g., mixed two-factor desi

  20. A Monte Carlo Study of Levene's Test of Homogeneity of Variance: Empirical Frequencies of Type I Error in Normal Distributions.

    Science.gov (United States)

    Neel, John H.; Stallings, William M.

    An influential statistics text recommends a Levene test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…

  1. Yield response of winter wheat cultivars to environments modeled by different variance-covariance structures in linear mixed models

    Energy Technology Data Exchange (ETDEWEB)

    Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.

    2016-11-01

    The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means resulting in incorrect conclusions. In this work we focused on adaptive response of cultivars on the environments modeled by the LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations, during three growing seasons (2008/2009-2010/2011) from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars adaptive patterns modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars reaction on environments, and it can be successfully used in METs data after determining the optimal component number for each dataset. (Author)

  3. Stratospheric Air Sub-sampler (SAS) and its application to analysis of Δ17O(CO2) from small air samples collected with an AirCore

    Science.gov (United States)

    Janina Mrozek, Dorota; van der Veen, Carina; Hofmann, Magdalena E. G.; Chen, Huilin; Kivi, Rigel; Heikkinen, Pauli; Röckmann, Thomas

    2016-11-01

    We present the set-up and a scientific application of the Stratospheric Air Sub-sampler (SAS), a device to collect and to store the vertical profile of air collected with an AirCore (Karion et al., 2010) in numerous sub-samples for later analysis in the laboratory. The SAS described here is a 20 m long 1/4 inch stainless steel tubing that is separated by eleven valves to divide the tubing into 10 identical segments, but it can be easily adapted to collect smaller or larger samples. In the collection phase the SAS is directly connected to the outlet of an optical analyzer that measures the mole fractions of CO2, CH4 and CO from an AirCore sampler. The stratospheric part (or if desired any part of the AirCore air) is then directed through the SAS. When the SAS is filled with the selected air, the valves are closed and the vertical profile is maintained in the different segments of the SAS. The segments can later be analysed to retrieve vertical profiles of other trace gas signatures that require slower instrumentation. As an application, we describe the coupling of the SAS to an analytical system to determine the 17O excess of CO2, which is a tracer for photochemical processing of stratospheric air. For this purpose the analytical system described by Mrozek et al. (2015) was adapted for analysis of air directly from the SAS. The performance of the coupled system is demonstrated for a set of air samples from an AirCore flight in November 2014 near Sodankylä, Finland. The standard error for a 25 mL air sample at stratospheric CO2 mole fraction is 0.56 ‰ (1σ) for δ17O and 0.03 ‰ (1σ) for both δ18O and δ13C. Measured Δ17O(CO2) values show a clear correlation with N2O in agreement with already published data.

  4. An observation on the variance of a predicted response in ...

    African Journals Online (AJOL)

    In studying individual parameters and the predicted response in regression analysis, three important properties are usually distinguished. These are bias, variance and mean-square error. The choice of a predicted response has to be made on a balance of these properties and computational simplicity. To avoid over fitting, ...

  5. Variance decomposition of apolipoproteins and lipids in Danish twins

    DEFF Research Database (Denmark)

    Fenger, Mogens; Schousboe, Karoline; Sørensen, Thorkild I A

    2007-01-01

    been used in bivariate or multivariate analysis to elucidate common genetic factors to two or more traits. METHODS AND RESULTS: In the present study the variances of traits related to lipid metabolism are decomposed in a relatively large Danish twin population, including bivariate analysis to detect

  6. Age Differences in the Variance of Personality Characteristics

    Czech Academy of Sciences Publication Activity Database

    Mottus, R.; Allik, J.; Hřebíčková, Martina; Kööts-Ausmees, L.; Realo, A.

    2016-01-01

    Roč. 30, č. 1 (2016), s. 4-11 ISSN 0890-2070 R&D Projects: GA ČR GA13-25656S Institutional support: RVO:68081740 Keywords : variance * individual differences * personality * five-factor model Subject RIV: AN - Psychology Impact factor: 3.707, year: 2016

  7. 41 CFR 50-204.1a - Variances.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances. 50-204.1a Section 50-204.1a Public Contracts and Property Management Other Provisions Relating to Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 204-SAFETY AND HEALTH STANDARDS FOR FEDERAL SUPPLY CONTRACTS Scope...

  8. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  9. Variance-optimal hedging for processes with stationary independent increments

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Kallsen, J.; Krawczyk, L.

    We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known in terms of a backward recursion or a backward stochastic differential equation, we

  10. Similarities Derived from 3-D Nonlinear Psychophysics: Variance Distributions.

    Science.gov (United States)

    Gregson, Robert A. M.

    1994-01-01

    The derivation of the variance of similarity judgments is made from the 3-D process in nonlinear psychophysics. The idea of separability of dimensions in metric space theories of similarity is replaced by one parameter that represents the degree of a form of interdimensional cross-sampling. (SLD)

  11. 40 CFR 268.44 - Variance from a treatment standard.

    Science.gov (United States)

    2010-07-01

    ...-Chambers Works must dispose of this waste in their on-site Subtitle C hazardous waste landfill. (9)—This... RCRA permitted mixed waste landfill without further treatment. This treatment variance is conditioned on EnergySolutions complying with a Waste Family Demonstration Testing Plan specifically addressing...

  12. Starting design for use in variance exchange algorithms | Iwundu ...

    African Journals Online (AJOL)

    A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...

  13. A variance-minimizing filter for large-scale applications

    NARCIS (Netherlands)

    Leeuwen, P.J. van

    A data-assimilation method is introduced for large-scale applications in the ocean and the atmosphere that does not rely on Gaussian assumptions, i.e. it is completely general following Bayes theorem. It is a so-called particle filter. A truly variance minimizing filter is introduced and its

  14. Adjustment of heterogenous variances and a calving year effect in ...

    African Journals Online (AJOL)

    Adjustment of heterogenous variances and a calving year effect in test-day models for national genetic evaluation of dairy cattle in South Africa. ... Although cow and bull rankings were not influenced much, significant changes in breeding values for individual animals and genetic trends of especially young animals, were ...

  15. Genetic variance components for residual feed intake and feed ...

    African Journals Online (AJOL)

    admin

    Genetic variance components for residual feed intake and feed conversion ratio and their correlations with other production traits in beef bulls. R.R. van der Westhuizen, J. van der Westhuizen and S.J. Schoeman. ARC-Animal Improvement Institute, Private Bag X2, Irene ...

  16. Heritability, variance components and genetic advance of some ...

    African Journals Online (AJOL)

    Eighty-eight (88) finger millet (Eleusine coracana (L.) Gaertn.) germplasm collections were tested using augmented randomized complete block design at Adet Agricultural Research Station in 2008 cropping season. The objective of this study was to find out heritability, variance components, variability and genetic advance ...

  17. An entropy approach to size and variance heterogeneity

    NARCIS (Netherlands)

    Balasubramanyan, L.; Stefanou, S.E.; Stokes, J.R.

    2012-01-01

    In this paper, we investigate the effect of bank size differences on cost efficiency heterogeneity using a heteroskedastic stochastic frontier model. This model is implemented by using an information theoretic maximum entropy approach. We explicitly model both bank size and variance heterogeneity

  18. 10 CFR 52.93 - Exemptions and variances.

    Science.gov (United States)

    2010-01-01

    ... application referencing an early site permit issued under subpart A of this part may include in the... conditions of the permit, or from the site safety analysis report. In determining whether to grant the... referencing an early site permit is issued, variances from the early site permit will not be granted for that...

  19. Automatic IMU sensor characterization using Allan variance plots

    Science.gov (United States)

    Skurowski, Przemysław; Paszkuta, Marcin

    2017-07-01

    We present an automatic method for the evaluation of the noise parameters of IMU devices. The method poses a two-stage optimization problem for polyline regression of the Allan variance in the log-log domain. We address the initialization and segmentation issues needed to identify the noises present, which makes the results obtained with the numerical solver robust.
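
    The core of such a method, computing the Allan variance and reading noise types off log-log slopes, is easy to reproduce. Below is a minimal, illustrative sketch (non-overlapping Allan variance of simulated white frequency noise, whose AVAR should fall off roughly as tau^-1); it is not the authors' two-stage polyline solver.

```python
import numpy as np

def allan_variance(y, rate=1.0):
    """Non-overlapping Allan variance of fractional-frequency data y."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    taus_m = [2 ** k for k in range(int(np.log2(n // 3)))]  # averaging factors
    out = []
    for m in taus_m:
        k = n // m
        means = y[: k * m].reshape(k, m).mean(axis=1)  # tau-averages
        avar = 0.5 * np.mean(np.diff(means) ** 2)      # AVAR(tau)
        out.append((m / rate, avar))
    return np.array(out)

# White frequency noise: AVAR(tau) ~ tau^-1, i.e. a log-log slope near -1.
rng = np.random.default_rng(2)
av = allan_variance(rng.normal(size=100_000))
slope = np.polyfit(np.log10(av[:, 0]), np.log10(av[:, 1]), 1)[0]
print(f"fitted log-log slope ~ {slope:.2f}")  # expect about -1
```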

  20. Molecular variance of the Tunisian almond germplasm assessed by ...

    African Journals Online (AJOL)

    The genetic variance analysis of 82 almond (Prunus dulcis Mill.) genotypes was performed using ten genomic simple sequence repeats (SSRs). A total of 50 genotypes from Tunisia including local landraces identified while prospecting the different sites of Bizerte and Sidi Bouzid (Northern and central parts) which are the ...

  1. Hydrograph variances over different timescales in hydropower production networks

    Science.gov (United States)

    Zmijewski, Nicholas; Wörman, Anders

    2016-08-01

    The operation of water reservoirs involves a spectrum of timescales based on the distribution of stream flow travel times between reservoirs, as well as the technical, environmental, and social constraints imposed on the operation. In this research, a hydrodynamically based description of the flow between hydropower stations was implemented to study the relative importance of wave diffusion on the spectrum of hydrograph variance in a regulated watershed. Using spectral decomposition of the effluence hydrograph of a watershed, an exact expression of the variance in the outflow response was derived, as a function of the trends of hydraulic and geomorphologic dispersion and management of production and reservoirs. We show that the power spectra of the involved time-series follow nearly fractal patterns, which facilitates examination of the relative importance of wave diffusion and possible changes in production demand on the outflow spectrum. The exact spectral solution can also identify statistical bounds of future demand patterns due to limitations in storage capacity. The impact of the hydraulic description of the stream flow on the reservoir discharge was examined for a given power demand in the River Dalälven, Sweden, as a function of a stream flow Peclet number. The regulation of hydropower production on the River Dalälven generally increased the short-term variance in the effluence hydrograph, whereas wave diffusion decreased the short-term variance over periods of <1 week, depending on the Peclet number (Pe) of the stream reach. This implies that flow variance becomes more erratic (closer to white noise) as a result of current production objectives.
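
    The spectral decomposition step can be illustrated with standard tools: estimate the power spectrum of a discharge-like series and check for a near power-law (fractal) shape by fitting a slope in log-log space. A minimal sketch with SciPy, on synthetic data rather than River Dalälven records:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
q = np.cumsum(rng.normal(size=8192))  # toy discharge anomaly: a random walk

# Welch estimate of the power spectral density.
f, pxx = signal.welch(q, fs=1.0, nperseg=1024)

# Slope of the spectrum in log-log space; a random walk has S(f) ~ f^-2.
mask = f > 0
beta = -np.polyfit(np.log10(f[mask]), np.log10(pxx[mask]), 1)[0]
print(f"spectral exponent beta ~ {beta:.2f}")
```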

  2. Measurement of the $M_A^{QE}$ parameter using multiple quasi-elastic dominated sub-samples in the minos near detector

    Energy Technology Data Exchange (ETDEWEB)

    Mayer, Nathan Samuel [Indiana Univ., Bloomington, IN (United States)

    2011-12-05

    The Main Injector Neutrino Oscillation Search (MINOS) is a two detector, long baseline neutrino oscillation experiment. The MINOS near detector is an iron-scintillator tracking/sampling calorimeter and has recorded the world's largest data set of neutrino interactions in the 0-5 GeV region. This high statistics data set is used to make precision measurements of neutrino interaction cross-sections on iron. The $Q^2$ dependence in charged current quasi-elastic (CCQE) scattering probes the axial and vector structure (form factor) of the nucleon/nuclear target, and nuclear effects in neutrino scattering. Presented here is a study of the MINOS data that introduces a method to improve the existing MINOS CCQE analysis. This analysis uses an additional CCQE-dominated sub-sample from a different kinematic region to reduce correlations between fit parameters in the existing MINOS CCQE analysis. The measured value of the axial-vector mass is $M_A^{QE} = 1.312^{+0.037}_{-0.038}(\mathrm{fit})\,^{+0.123}_{-0.265}(\mathrm{syst.})$ GeV.

  3. A marginalized two-part model with heterogeneous variance for semicontinuous data.

    Science.gov (United States)

    Smith, Valerie A; Preisser, John S

    2018-01-01

    Semicontinuous data, characterized by a point mass at zero followed by a positive, continuous distribution, arise frequently in medical research. These data are typically analyzed using two-part mixtures that separately model the probability of incurring a positive outcome and the distribution of positive values among those who incur them. In such a conditional specification, however, standard two-part models do not provide a marginal interpretation of covariate effects on the overall population. We have previously proposed a marginalized two-part model that yields more interpretable effect estimates by parameterizing the model in terms of the marginal mean. In the original formulation, a constant variance was assumed for the positive values. We now extend this model to a more general framework by allowing non-constant variance to be explicitly modeled as a function of covariates, and incorporate this variance into two flexible distributional assumptions, log-skew-normal and generalized gamma, both of which take the log-normal distribution as a special case. Using simulation studies, we compare the performance of each of these models with respect to bias, coverage, and efficiency. We illustrate the proposed modeling framework by evaluating the effect of a behavioral weight loss intervention on health care expenditures in the Veterans Affairs health system.

  4. Using a bootstrap method to choose the sample fraction in tail index estimation

    NARCIS (Netherlands)

    J. Daníelsson (Jón); L.F.M. de Haan (Laurens); L. Peng (Liang); C.G. de Vries (Casper)

    2000-01-01

    Tail index estimation depends for its accuracy on a precise choice of the sample fraction, i.e. the number of extreme order statistics on which the estimation is based. A complete solution to the sample fraction selection is given by means of a two step subsample bootstrap method. This
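
    For context, the estimator whose sample fraction k is being tuned here is typically the Hill estimator. A minimal sketch of the Hill estimator on Pareto data follows (the bootstrap selection of k itself is omitted); the sample size and values of k are arbitrary illustrative choices.

```python
import numpy as np

def hill(x, k):
    """Hill estimator of the tail index from the k largest order statistics."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    gamma = np.mean(np.log(xs[:k]) - np.log(xs[k]))  # mean log-excess over X_(n-k)
    return 1.0 / gamma  # tail index alpha = 1 / gamma

rng = np.random.default_rng(4)
x = rng.pareto(2.0, size=10_000) + 1.0  # Pareto with true tail index alpha = 2
for k in (50, 200, 1000):
    print(f"k={k:4d}  alpha_hat={hill(x, k):.2f}")  # accuracy depends on k
```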

  5. Estimating the weight of Douglas-fir tree boles and logs with an iterative computer model.

    Science.gov (United States)

    Dale R. Waddell; Dale L Weyermann; Michael B. Lambert

    1987-01-01

    A computer model that estimates the green weights of standing trees was developed and validated for old-growth Douglas-fir. The model calculates the green weight for the entire bole, for the bole to any merchantable top, and for any log length within the bole. The model was validated by estimating the bias and accuracy of an independent subsample selected from the...

  6. Minimum variance system identification with application to digital adaptive flight control

    Science.gov (United States)

    Kotob, S.; Kaufman, H.

    1975-01-01

    A new on-line minimum variance filter for the identification of systems with additive and multiplicative noise is described which embodies both accuracy and computational efficiency. The resulting filter is shown to use both the covariance of the parameter vector itself and the covariance of the error in identification. A bias reduction scheme can be used to yield asymptotically unbiased estimates. Experimental results for simulated linearized lateral aircraft motion in a digital closed loop mode are presented, showing the utility of the identification schemes.

  7. The variance of identity-by-descent sharing in the Wright-Fisher model.

    Science.gov (United States)

    Carmi, Shai; Palamara, Pier Francesco; Vacic, Vladimir; Lencz, Todd; Darvasi, Ariel; Pe'er, Itsik

    2013-03-01

    Widespread sharing of long, identical-by-descent (IBD) genetic segments is a hallmark of populations that have experienced recent genetic drift. Detection of these IBD segments has recently become feasible, enabling a wide range of applications from phasing and imputation to demographic inference. Here, we study the distribution of IBD sharing in the Wright-Fisher model. Specifically, using coalescent theory, we calculate the variance of the total sharing between random pairs of individuals. We then investigate the cohort-averaged sharing: the average total sharing between one individual and the rest of the cohort. We find that for large cohorts, the cohort-averaged sharing is distributed approximately normally. Surprisingly, the variance of this distribution does not vanish even for large cohorts, implying the existence of "hypersharing" individuals. The presence of such individuals has consequences for the design of sequencing studies, since, if they are selected for whole-genome sequencing, a larger fraction of the cohort can be subsequently imputed. We calculate the expected gain in power of imputation by IBD and subsequently in power to detect an association, when individuals are either randomly selected or specifically chosen to be the hypersharing individuals. Using our framework, we also compute the variance of an estimator of the population size that is based on the mean IBD sharing and the variance in the sharing between inbred siblings. Finally, we study IBD sharing in an admixture pulse model and show that in the Ashkenazi Jewish population the admixture fraction is correlated with the cohort-averaged sharing.

  8. An empirical evaluation of five small area estimators

    OpenAIRE

    Costa, Alex

    2003-01-01

    This paper compares five small area estimators. We use Monte Carlo simulation in the context of both artificial and real populations. In addition to the direct and indirect estimators, we consider the optimal composite estimator with population weights, and two composite estimators with estimated weights: one that assumes homogeneity of within area variance and squared bias and one that uses area-specific estimates of variance and squared bias. In the study with real population, we found that...

  9. [Determining the most unfavourable variance to calculate the Measurement Scale Imprecision Factor, and extension to other types of sampling methods].

    Science.gov (United States)

    Martínez García, José Antonio; Martínez Caro, Laura

    2008-05-01

    The precision of estimates must be adequately reported in survey research, where ordinal and interval measurement scales are commonly used. Regarding the mean estimate, absolute and relative errors exist as a function of the measurement scale. This manuscript discusses some assumptions underlying the development of the Measurement Scale Imprecision Factor--MSIF--, a tool to assess the degree of imprecision of estimates, regardless of the scale rank considered. Specifically, we propose a new method for determining the most unfavourable variance, which is consistent with the normal distribution assumption, unlike the original assumption based on the bimodal distribution. This method reduces the value of the most unfavourable variance, which is easily computed using the cumulative standard normal distribution function. In addition, we show the relationship between MSIF and other types of probabilistic sampling methods, such as stratified and cluster sampling.

  10. Allan Variance Computed in Space Domain: Definition and Application to InSAR Data to Characterize Noise and Geophysical Signal.

    Science.gov (United States)

    Cavalié, Olivier; Vernotte, François

    2016-04-01

    The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may also be considered an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now, it has been exclusively applied to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior at different space-scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that the radial Allan variance is the more appropriate way to obtain an estimator insensitive to the spatial axes, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. Spatial Allan variance allowed us to characterize noise features classically found in InSAR, such as phase decorrelation (producing white noise) or atmospheric delays (behaving like a random-walk signal). We finally applied the spatial Allan variance to an InSAR time

  11. Variance estimates and confidence intervals for the Kappa measure of classification accuracy

    Science.gov (United States)

    M. A. Kalkhan; R. M. Reich; R. L. Czaplewski

    1997-01-01

    The Kappa statistic is frequently used to characterize the results of an accuracy assessment used to evaluate land use and land cover classifications obtained by remotely sensed data. This statistic allows comparisons of alternative sampling designs, classification algorithms, photo-interpreters, and so forth. In order to make these comparisons, it is...

  12. A note on the variance of the estimate of the fixation index F

    Indian Academy of Sciences (India)

    Author Affiliations. Paulo A. Otto1 Renan B. Lemes1. Departamento de Genética e Biologia Evolutiva, Instituto de Biociências, Universidade de São Paulo,Caixa Postal (P.O. Box) 11.461, 05422-970 São Paulo, SP, Brazil ...

  13. Phenotypic variance, plasticity and heritability estimates of critical thermal limits depend on methodological context

    DEFF Research Database (Denmark)

    Chown, Steven L.; Jumbam, Keafon R.; Sørensen, Jesper Givskov

    2009-01-01

    1.  Biologists have long been concerned with measuring thermal performance curves and limits because of their significance to fitness. Basic experimental design may have a marked effect on the outcome of such measurements, and this is true especially of the experimental rates of temperature chang...

  14. Explaining the Prevalence, Scaling and Variance of Urban Phenomena

    CERN Document Server

    Gomez-Lievano, Andres; Hausmann, Ricardo

    2016-01-01

    The prevalence of many urban phenomena changes systematically with population size. We propose a theory that unifies models of economic complexity and cultural evolution to derive urban scaling. The theory accounts for the difference in scaling exponents and average prevalence across phenomena, as well as the difference in the variance within phenomena across cities of similar size. The central ideas are that a number of necessary complementary factors must be simultaneously present for a phenomenon to occur, and that the diversity of factors is logarithmically related to population size. The model reveals that phenomena that require more factors will be less prevalent, scale more superlinearly and show larger variance across cities of similar size. The theory applies to data on education, employment, innovation, disease and crime, and it entails the ability to predict the prevalence of a phenomenon across cities, given information about the prevalence in a single city.

  16. Response variance in functional maps: neural darwinism revisited.

    Science.gov (United States)

    Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei

    2013-01-01

    The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.

  17. Automated variance reduction for MCNP using deterministic methods.

    Science.gov (United States)

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.

  18. Climate variance influence on the non-stationary plankton dynamics.

    Science.gov (United States)

    Molinero, Juan Carlos; Reygondeau, Gabriel; Bonnet, Delphine

    2013-08-01

    We examined plankton responses to climate variance by using high temporal resolution data from 1988 to 2007 in the Western English Channel. Climate variability modified both the magnitude and length of the seasonal signal of sea surface temperature, as well as the timing and depth of the thermocline. These changes permeated the pelagic system yielding conspicuous modifications in the phenology of autotroph communities and zooplankton. The climate variance envelope, thus far little considered in climate-plankton studies, is closely coupled with the non-stationary dynamics of plankton, and sheds light on impending ecological shifts and plankton structural changes. Our study calls for the integration of the non-stationary relationship between climate and plankton in prognostic models on the productivity of marine ecosystems. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. What Do We Know About Variance in Accounting Profitability?

    OpenAIRE

    Anita M McGahan; Porter, Michael E.

    2002-01-01

    In this paper, we analyze the variance of accounting profitability among a broad cross-section of firms in the American economy from 1981 to 1994. The purpose of the analysis is to identify the importance of year, industry, corporate-parent, and business-specific effects on accounting profitability among operating businesses across sectors. The findings indicate that industry and corporate-parent effects are important and related to one another. As expected, business-specific effects, which a...

  20. Mean-Variance Cointegration and the Expectations Hypothesis

    OpenAIRE

    Strohsal, Till; Weber, Enzo

    2010-01-01

    The present work provides an economic explanation of a well-known (seeming) violation of the expectations hypothesis of the term structure (EHT) - the frequent finding of unit roots in interest rate spreads. We derive from EHT that the nonstationarity stems from the holding premium, which is hence cointegrated with the spread. We model the premium as being proportional to the integrated variance of excess returns and further propose a cointegration test. Simulating the distribution of the tes...

  1. The subgrid-scale scalar variance under supercritical pressure conditions

    Science.gov (United States)

    Masi, Enrica; Bellan, Josette

    2011-08-01

    To model the subgrid-scale (SGS) scalar variance under supercritical-pressure conditions, an equation is first derived for it. This equation is considerably more complex than its equivalent for atmospheric-pressure conditions. Using a previously created direct numerical simulation (DNS) database of transitional states obtained for binary-species systems in the context of temporal mixing layers, the activity of terms in this equation is evaluated, and it is found that some of these new terms have magnitude comparable to that of governing terms in the classical equation. Most prominent among these new terms are those expressing the variation of diffusivity with thermodynamic variables and Soret terms having dissipative effects. Since models are not available for these new terms that would enable solving the SGS scalar variance equation, the adopted strategy is to directly model the SGS scalar variance. Two models are investigated for this quantity, both developed in the context of compressible flows. The first one is based on an approximate deconvolution approach and the second one is a gradient-like model which relies on a dynamic procedure using the Leonard term expansion. Both models are successful in reproducing the SGS scalar variance extracted from the filtered DNS database, and moreover, when used in the framework of a probability density function (PDF) approach in conjunction with the β-PDF, they excellently reproduce a filtered quantity which is a function of the scalar. For the dynamic model, the proportionality coefficient spans a small range of values through the layer cross-stream coordinate, boding well for the stability of large eddy simulations using this model.

  2. Sample variance in the local measurements of the Hubble constant

    Science.gov (United States)

    Wu, Hao-Yi; Huterer, Dragan

    2017-11-01

    The current >3σ tension between the Hubble constant H0 measured from local distance indicators and from the cosmic microwave background is one of the most highly debated issues in cosmology, as it possibly indicates new physics or unknown systematics. In this work, we explore whether this tension can be alleviated by the sample variance in the local measurements, which use a small fraction of the Hubble volume. We use a large-volume cosmological N-body simulation to model the local measurements and to quantify the variance due to local density fluctuations and sample selection. We explicitly take into account the inhomogeneous spatial distribution of type Ia supernovae. Despite the faithful modelling of the observations, our results confirm previous findings that sample variance in the local Hubble constant (H0^loc) measurements is small; we find σ(H0^loc) = 0.31 km s^-1 Mpc^-1, a nearly negligible fraction of the ~6 km s^-1 Mpc^-1 necessary to explain the difference between the local and global H0 measurements. While the H0 tension could in principle be explained by our local neighbourhood being an underdense region of radius ~150 Mpc, the extreme required underdensity of such a void (δ ≃ -0.8) makes it very unlikely in a ΛCDM universe, and it also violates existing observational constraints. Therefore, sample variance in a ΛCDM universe cannot appreciably alleviate the tension in H0 measurements even after taking into account the inhomogeneous selection of type Ia supernovae.

  3. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.

  4. Performance of a procedure for yield estimation in fruit orchards

    DEFF Research Database (Denmark)

    Aravena Zamora, Felipe; Potin, Camila; Wulfsohn, Dvora-Laio

    Early estimation of expected fruit tree yield is important for market planning and for growers and exporters to plan for labour and boxes. Large variations in tree yield may be found, posing a challenge for accurate yield estimation. We evaluated a multilevel systematic sampling procedure...... errors of about 20%. An analysis based on systematic sub-sampling of sample data across each sampling stage was used to determine how to distribute sampling effort to achieve the desired precision.

  5. Vasculopathy related to manic/hypomanic symptom burden and first-generation antipsychotics in a sub-sample from the collaborative depression study.

    Science.gov (United States)

    Fiedorowicz, Jess G; Coryell, William H; Rice, John P; Warren, Lois L; Haynes, William G

    2012-01-01

    Mood disorders substantially increase the risk of cardiovascular disease, though the mechanisms are unclear. We assessed for a dose-dependent relationship between course of illness or treatment with vasculopathy in a well-characterized cohort. Participants with mood disorders were recruited for the National Institute of Mental Health Collaborative Depression Study (CDS) and followed prospectively. A cross-sectional metabolic and vascular function evaluation was performed on a sub-sample near completion after a mean follow-up of 27 years. A total of 35 participants from the University of Iowa (33) and Washington University (2) sites of the CDS consented to a metabolic and vascular function assessment at the Iowa site. In multivariate linear regression, controlling for age, gender, and smoking, manic/hypomanic, but not depressive, symptom burden was associated with lower flow-mediated dilation. Cumulative exposure to antipsychotics and mood stabilizers was associated with elevated augmentation pressure and mean aortic systolic blood pressure. This appeared specifically related to first-generation antipsychotic exposure and mediated by increases in brachial systolic pressure. Although second-generation antipsychotics were associated with dyslipidemia and insulin resistance, they were not associated with vasculopathy. These results provide evidence that chronicity of mood symptoms contribute to vasculopathy in a dose-dependent fashion. Patients with more manic/hypomanic symptoms had poorer endothelial function. First-generation antipsychotic exposure was associated with arterial stiffness, evidenced by higher augmentation pressure, perhaps secondary to elevated blood pressure. Vascular phenotyping methods may provide a promising means of elucidating the mechanisms linking mood disorders to vascular disease. Copyright © 2012 S. Karger AG, Basel.

  6. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Goutsias, John

    2010-05-12

    Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the four approximation techniques considered in
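
    The Monte Carlo baseline these approximations are compared against can be sketched with the standard pick-freeze estimator of first-order variance-based (Sobol') indices. A minimal illustration on the Ishigami test function, not on the MAPK cascade model used in the paper:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function; analytic first-order indices ~ (0.31, 0.44, 0.0)."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(5)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # resample input i, freeze the rest
    # Saltelli-style estimator of the first-order index S_i.
    Si = np.mean(fB * (ishigami(ABi) - fA)) / var
    print(f"S_{i + 1} ~ {Si:.3f}")
```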

  8. Empirical methods in the evaluation of estimators

    Science.gov (United States)

    Gerald S. Walton; C.J. DeMars; C.J. DeMars

    1973-01-01

    The authors discuss the problem of selecting estimators of density and survival by making use of data on a forest-defoliating larva, the spruce budworm. Various estimators are compared. The results show that, among the estimators considered, ratio-type estimators are superior in terms of bias and variance. The methods used in making comparisons, particularly simulation...
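
    A toy simulation makes the bias and variance comparison concrete for a ratio-type estimator; the data below are synthetic, not the budworm data used by the authors.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 2000
x = rng.gamma(4.0, 5.0, N)           # auxiliary variable known for the population
y = 3.0 * x + rng.normal(0, 10, N)   # response (e.g., density), correlated with x

def ratio_estimate(idx):
    # Ratio estimator of the population mean of y, using the known mean of x.
    return y[idx].mean() / x[idx].mean() * x.mean()

reps = np.array([ratio_estimate(rng.choice(N, 50, replace=False))
                 for _ in range(2000)])
print(f"bias ~ {reps.mean() - y.mean():.3f}, variance ~ {reps.var():.3f}")
```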

  9. Nonsymbolic number and cumulative area representations contribute shared and unique variance to symbolic math competence.

    Science.gov (United States)

    Lourenco, Stella F; Bonny, Justin W; Fernandez, Edmund P; Rao, Sonia

    2012-11-13

    Humans and nonhuman animals share the capacity to estimate, without counting, the number of objects in a set by relying on an approximate number system (ANS). Only humans, however, learn the concepts and operations of symbolic mathematics. Despite vast differences between these two systems of quantification, neural and behavioral findings suggest functional connections. Another line of research suggests that the ANS is part of a larger, more general system of magnitude representation. Reports of cognitive interactions and common neural coding for number and other magnitudes such as spatial extent led us to ask whether, and how, nonnumerical magnitude interfaces with mathematical competence. On two magnitude comparison tasks, college students estimated (without counting or explicit calculation) which of two arrays was greater in number or cumulative area. They also completed a battery of standardized math tests. Individual differences in both number and cumulative area precision (measured by accuracy on the magnitude comparison tasks) correlated with interindividual variability in math competence, particularly advanced arithmetic and geometry, even after accounting for general aspects of intelligence. Moreover, analyses revealed that whereas number precision contributed unique variance to advanced arithmetic, cumulative area precision contributed unique variance to geometry. Taken together, these results provide evidence for shared and unique contributions of nonsymbolic number and cumulative area representations to formally taught mathematics. More broadly, they suggest that uniquely human branches of mathematics interface with an evolutionarily primitive general magnitude system, which includes partially overlapping representations of numerical and nonnumerical magnitude.

  10. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data

    DEFF Research Database (Denmark)

    Greve, Douglas N; Svarer, Claus; Fisher, Patrick M

    2014-01-01

    -based smoothing, level of smoothing, use of voxelwise partial volume correction (PVC), and PVC masking threshold. PVC was implemented using the Muller-Gartner method with the masking out of voxels with low gray matter (GM) partial volume fraction. Dynamic PET scans of an antagonist serotonin-4 receptor...... radioligand ([(11)C]SB2307145) were collected on sixteen healthy subjects using a Siemens HRRT PET scanner. Kinetic modeling was used to compute maps of non-displaceable binding potential (BPND) after preprocessing. The results showed a complicated interaction between smoothing, PVC, and masking on BPND...... estimates. Volume-based smoothing resulted in large bias and intersubject variance because it smears signal across tissue types. In some cases, PVC with volume smoothing paradoxically caused the estimated BPND to be less than when no PVC was used at all. When applied in the absence of PVC, cortical surface...

  11. Bias-Variance Tradeoff of Graph Laplacian Regularizer

    Science.gov (United States)

    Chen, Pin-Yu; Liu, Sijia

    2017-08-01

    This paper presents a bias-variance tradeoff of graph Laplacian regularizer, which is widely used in graph signal processing and semi-supervised learning tasks. The scaling law of the optimal regularization parameter is specified in terms of the spectral graph properties and a novel signal-to-noise ratio parameter, which suggests selecting a mediocre regularization parameter is often suboptimal. The analysis is applied to three applications, including random, band-limited, and multiple-sampled graph signals. Experiments on synthetic and real-world graphs demonstrate near-optimal performance of the established analysis.
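
    The tradeoff itself is easy to observe numerically: solving x_hat = argmin ||y - x||^2 + lam * x^T L x on a path graph, the squared bias grows with the regularization parameter while the noise-driven variance shrinks, so the MSE is minimized at an intermediate value. A minimal illustrative sketch, not the authors' analysis:

```python
import numpy as np

n = 200
# Laplacian of a path graph: L = D - W.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

rng = np.random.default_rng(7)
x = np.sin(np.linspace(0, 4 * np.pi, n))  # smooth ground-truth graph signal
y = x + rng.normal(0, 0.5, n)             # noisy observation

for lam in (0.1, 1.0, 10.0, 100.0):
    A = np.eye(n) + lam * L
    x_hat = np.linalg.solve(A, y)                      # regularized estimate
    bias2 = np.mean((np.linalg.solve(A, x) - x) ** 2)  # squared bias (noise-free)
    mse = np.mean((x_hat - x) ** 2)
    print(f"lam={lam:6.1f}  bias^2={bias2:.4f}  mse={mse:.4f}")
```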

  12. A guide to SPSS for analysis of variance

    CERN Document Server

    Levine, Gustav

    2013-01-01

    This book offers examples of programs designed for analysis of variance and related statistical tests of significance that can be run with SPSS. The reader may copy these programs directly, changing only the names or numbers of levels of factors according to individual needs. Ways of altering command specifications to fit situations with larger numbers of factors are discussed and illustrated, as are ways of combining program statements to request a variety of analyses in the same program. The first two chapters provide an introduction to the use of SPSS, Versions 3 and 4. General rules conce

  13. Adding bias to reduce variance in psychological results: A tutorial on penalized regression

    Directory of Open Access Journals (Sweden)

    Helwig, Nathaniel E.

    2017-01-01

    Regression models are commonly used in psychological research. In most studies, regression coefficients are estimated via maximum likelihood (ML) estimation. It is well-known that ML estimates have desirable large sample properties, but are prone to overfitting in small to moderate sized samples. In this paper, we discuss the benefits of using penalized regression, which is a form of penalized likelihood (PL) estimation. Informally, PL estimation can be understood as introducing bias to estimators for the purpose of reducing their variance, with the ultimate goal of providing better solutions. We focus on the Gaussian regression model, where ML and PL estimation reduce to ordinary least squares (OLS) and penalized least squares (PLS) estimation, respectively. We cover classic OLS and stepwise regression, as well as three popular penalized regression approaches: ridge regression, the lasso, and the elastic net. We compare the different penalties (or biases) imposed by each method, and discuss the resulting features each penalty encourages in the solution. To demonstrate the methods, we use an example where the goal is to predict a student's math exam performance from 30 potential predictors. Using a step-by-step tutorial with R code, we demonstrate how to (i) load and prepare the data for analysis, (ii) fit the OLS, stepwise, ridge, lasso, and elastic net models, (iii) extract and compare the model fitting results, and (iv) evaluate the performance of each method. Our example reveals that penalized regression methods can produce more accurate and more interpretable results than the classic OLS and stepwise regression solutions.
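
    The tutorial's code is in R; an analogous sketch in Python with scikit-learn, with synthetic data standing in for the 30-predictor exam example, looks like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV, LassoCV, ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n, p = 100, 30                          # modest n relative to p, where OLS overfits
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 1.0, 2.0]   # only a few truly active predictors
y = X @ beta + rng.normal(0, 2, n)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = [("ols", LinearRegression()),
          ("ridge", RidgeCV(alphas=np.logspace(-3, 3, 25))),
          ("lasso", LassoCV(cv=5, random_state=0)),
          ("elastic net", ElasticNetCV(cv=5, random_state=0))]
for name, model in models:
    model.fit(Xtr, ytr)
    print(f"{name:12s} test R^2 = {model.score(Xte, yte):.3f}")
```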

  14. On the range of future Sahel precipitation projections and the selection of a sub-sample of CMIP5 models for impact studies

    Science.gov (United States)

    Monerie, Paul-Arthur; Sanchez-Gomez, Emilia; Boé, Julien

    2017-04-01

    The future evolution of the West African Monsoon is studied by analyzing 32 CMIP5 models under the rcp8.5 emission scenario. A hierarchical clustering method based on the simulated pattern of precipitation changes is used to classify the models. Four groups, which do not agree on even the sign of future Sahel precipitation change, are obtained. We find that the inter-group differences are mainly associated with the large spread in (1) temperature increase over the Sahara and North Atlantic and in (2) the strengthening of low and mid-level winds. A wetter Sahel is associated with a strong increase in temperature over the Sahara (>6 °C), a northward shift of the monsoon system and a weakening of the African Easterly jet. A dryer Sahel is associated with subsidence anomalies, a strengthening of the 600 hPa wind speed, and a weaker warming over the Northern Hemisphere. Moreover, the western (central) Sahel is projected to become dryer (wetter) during the first months (last months) of the rainy season in a majority of models. We propose several methods to select a sub-sample of models that captures both the ensemble mean pattern and/or the spread of precipitation changes from the full ensemble. This methodology is useful in all situations in which it is not possible to deal with a large ensemble of models, and in particular most impact studies. We show that no relationship exists between the climatological mean biases in precipitation and temperature and the future changes in the monsoon intensity. This indicates that the mean bias is therefore not a reliable metric for the model selection. For this reason, we propose several methodologies, based on the projected precipitation changes: The "diversity" method, which consists in the selection of one model from each group, is the most appropriate to capture the spread in precipitation change. The "pattern selection" method, which consists in the selection of models in a single group, allows selecting models for the
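
    The clustering-and-selection machinery is standard and can be sketched as follows; the arrays here are random stand-ins for the per-model precipitation-change patterns, and the group count of four matches the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(9)
n_models, n_grid = 32, 500
dpr = rng.normal(size=(n_models, n_grid))  # stand-in precipitation-change maps

Z = linkage(dpr, method="ward")                  # hierarchical clustering
groups = fcluster(Z, t=4, criterion="maxclust")  # cut the tree into 4 groups

# "Diversity" selection: one representative per group (closest to its centroid).
for g in np.unique(groups):
    idx = np.flatnonzero(groups == g)
    centroid = dpr[idx].mean(axis=0)
    rep = idx[np.argmin(np.linalg.norm(dpr[idx] - centroid, axis=1))]
    print(f"group {g}: models {idx.tolist()}, representative {rep}")
```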

  15. Genetic and environmental variance components for physical activity patterns of twins. Exploring the possibilities of approximate entropy

    Directory of Open Access Journals (Sweden)

    José António Ribeiro Maia

    2010-06-01

    The main objective of this study was to estimate the contribution of genetic factors to physical activity patterns (PAPs) in twins using approximate entropy (ApEn) statistics. The sample consisted of 162 monozygotic and dizygotic twins from Portugal aged 6 to 18 years. Physical activity was measured with a Tritrac-RT3 triaxial accelerometer over 5 days of a usual week. PAPs were described by ApEn using the Cine Wizard software. Zygosity was assessed by direct DNA analysis. Data were analyzed using the SYSTAT 10, STATA 10 and Twinan92 software packages. PAPs were estimated for 5, 3 and 2 days. In addition, structural equation modeling was used to compute different sources of variance (genetic, common environmental and unique environmental variance). The level of significance was set at 5%. Sibling aggregation was identified by ApEn analysis, with monozygotic twins showing greater homogeneity. In conclusion, genetic factors accounted for 44 to 89% of the total variation in PAPs.
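
    Approximate entropy itself is straightforward to compute; a compact reference implementation following Pincus' definition, with the common default tolerance r = 0.2·SD, is sketched below.

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus, 1991)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def phi(m):
        emb = np.lib.stride_tricks.sliding_window_view(x, m)  # template vectors
        # Chebyshev distances between all template pairs (self-matches included).
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)  # fraction of templates matching each one
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(10)
print(apen(rng.normal(size=300)))              # irregular series: higher ApEn
print(apen(np.sin(np.linspace(0, 30, 300))))   # regular series: lower ApEn
```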

  16. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  17. Argentine Population Genetic Structure: Large Variance in Amerindian Contribution

    Science.gov (United States)

    Seldin, Michael F.; Tian, Chao; Shigeta, Russell; Scherbarth, Hugo R.; Silva, Gabriel; Belmont, John W.; Kittles, Rick; Gamron, Susana; Allevi, Alberto; Palatnik, Simon A.; Alvarellos, Alejandro; Paira, Sergio; Caprarulo, Cesar; Guillerón, Carolina; Catoggio, Luis J.; Prigione, Cristina; Berbotto, Guillermo A.; García, Mercedes A.; Perandones, Carlos E.; Pons-Estel, Bernardo A.; Alarcon-Riquelme, Marta E.

    2011-01-01

    Argentine population genetic structure was examined using a set of 78 ancestry informative markers (AIMs) to assess the contributions of European, Amerindian, and African ancestry in 94 members of this population. Using the Bayesian clustering algorithm STRUCTURE, the mean European contribution was 78%, the Amerindian contribution was 19.4%, and the African contribution was 2.5%. Similar results were found using the weighted least mean square method: European, 80.2%; Amerindian, 18.1%; and African, 1.7%. Consistent with previous studies, the current results showed very few individuals (four of 94) with greater than 10% African admixture. Notably, when individual admixture was examined, the Amerindian and European admixture showed a very large variance, and the individual Amerindian contribution ranged from 1.5 to 84.5% in the 94 Argentine subjects. These results indicate that admixture must be considered when clinical epidemiology or case-control genetic analyses are conducted in this population. Moreover, the current study provides a set of informative SNPs that can be used to ascertain or control for this potentially hidden stratification. In addition, the large variance in admixture proportions in individual Argentine subjects shown by this study suggests that this population is appropriate for future admixture mapping studies. PMID:17177183

  18. Variance of indoor radon concentration: Major influencing factors

    Energy Technology Data Exchange (ETDEWEB)

    Yarmoshenko, I., E-mail: ivy@ecko.uran.ru [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Vasilyev, A.; Malinovsky, G. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation); Bossew, P. [German Federal Office for Radiation Protection (BfS), Berlin (Germany); Žunić, Z.S. [Institute of Nuclear Sciences “Vinca”, University of Belgrade (Serbia); Onischenko, A.; Zhukovsky, M. [Institute of Industrial Ecology UB RAS, Sophy Kovalevskoy, 20, Ekaterinburg (Russian Federation)

    2016-01-15

    Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are: the area of the territory, the sample size, the characteristics of the measurement technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of the controlling factors. Application of the developed approach to characterization of the radon exposure of the world population is discussed. - Highlights: • Influence of the lithosphere and anthroposphere on the variance of indoor radon is found. • Level-by-level analysis reduces the GSD by a factor of 1.9. • The worldwide GSD is underestimated.
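
    Since the analysis centres on the geometric standard deviation, a minimal sketch of how the geometric mean and GSD are obtained from a set of concentrations may help; the radon values below are invented purely for illustration.

        import numpy as np

        # Hypothetical indoor radon concentrations in Bq/m^3 (illustrative values only).
        radon = np.array([25.0, 40.0, 55.0, 80.0, 120.0, 210.0, 350.0])

        log_c = np.log(radon)
        gm = np.exp(log_c.mean())          # geometric mean
        gsd = np.exp(log_c.std(ddof=1))    # geometric standard deviation

        print(f"GM = {gm:.1f} Bq/m^3, GSD = {gsd:.2f}")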

  19. Variance of the Quantum Dwell Time for a Nonrelativistic Particle

    Science.gov (United States)

    Hahne, Gerhard

    2012-01-01

    Munoz, Seidel, and Muga [Phys. Rev. A 79, 012108 (2009)], following an earlier proposal by Pollak and Miller [Phys. Rev. Lett. 53, 115 (1984)] in the context of a theory of a collinear chemical reaction, showed that suitable moments of a two-flux correlation function could be manipulated to yield expressions for the mean quantum dwell time and the mean square quantum dwell time for a structureless particle scattering from a time-independent potential energy field between two parallel lines in a two-dimensional spacetime. The present work proposes a generalization to a charged, nonrelativistic particle scattering from a transient, spatially confined electromagnetic vector potential in four-dimensional spacetime. The geometry of the spacetime domain is that of the slab between a pair of parallel planes, in particular those defined by constant values of the third (z) spatial coordinate. The mean Nth power, N = 1, 2, 3, ..., of the quantum dwell time in the slab is given by an expression involving an N-flux-correlation function. All these means are shown to be nonnegative. The N = 1 formula reduces to an S-matrix result published previously [G. E. Hahne, J. Phys. A 36, 7149 (2003)]; explicit formulas for N = 2 and for the variance of the dwell time in terms of the S-matrix are worked out. A formula representing an incommensurability principle between variances of the output-minus-input flux of a pair of dynamical variables (such as the particle's time flux and others) is derived.

  20. Risk Management - Variance Minimization or Lower Tail Outcome Elimination

    DEFF Research Database (Denmark)

    Aabo, Tom

    2002-01-01

    This paper illustrates the profound difference between a risk management strategy of variance minimization and a risk management strategy of lower tail outcome elimination. Risk managers concerned about the variability of cash flows will tend to center their hedge decisions on their best guess on future cash flows (the budget), while risk managers concerned about costly lower tail outcomes will hedge (considerably) less depending on the level of uncertainty. A risk management strategy of lower tail outcome elimination is in line with theoretical recommendations in a corporate value-adding perspective. A cross-case study of blue-chip industrial companies partly supports the empirical use of a risk management strategy of lower tail outcome elimination but does not exclude other factors from (co-)driving the observations.

  1. Hidden temporal order unveiled in stock market volatility variance

    Directory of Open Access Journals (Sweden)

    Y. Shapira

    2011-06-01

    Full Text Available When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behavior. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances of the daily returns and the means of segments of these time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not show up in the series of daily returns itself, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements taken from algebraic distributions of three different slopes.
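
    A toy version of the shuffle comparison described above: compare the variance of segment means of a volatility proxy for a synthetic clustered-volatility series against its shuffled counterpart. The series, segment length and AR(1) parameters are illustrative assumptions, not the paper's market data.

        import numpy as np

        rng = np.random.default_rng(0)

        def segment_mean_variance(series, seg_len=50):
            """Variance of the means of consecutive, non-overlapping segments."""
            n_seg = len(series) // seg_len
            segs = series[:n_seg * seg_len].reshape(n_seg, seg_len)
            return segs.mean(axis=1).var()

        # Volatility proxy: absolute returns with volatility clustering
        # (an AR(1) in log-volatility; purely synthetic).
        n = 5000
        logvol = np.zeros(n)
        for t in range(1, n):
            logvol[t] = 0.98 * logvol[t - 1] + 0.1 * rng.standard_normal()
        abs_ret = np.exp(logvol) * np.abs(rng.standard_normal(n))

        print("original:", segment_mean_variance(abs_ret))
        print("shuffled:", segment_mean_variance(rng.permutation(abs_ret)))
        # Temporal order inflates the variance of segment means relative to the shuffle.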

  2. On an Allan variance approach to classify VLBI radio-sources on the basis of their astrometric stability

    Science.gov (United States)

    Gattano, C.; Lambert, S.; Bizouard, C.

    2017-12-01

    In the context of selecting sources defining the celestial reference frame, we compute astrometric time series of all VLBI radio sources from observations in the International VLBI Service database. The time series are then analyzed with the Allan variance in order to estimate their astrometric stability. From the results, we establish a new classification that takes into account the information across all time scales. The algorithm remains flexible on the definition of "stable source" through an adjustable threshold.
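
    As a reference point for the statistic used here, a minimal non-overlapping Allan variance sketch follows. The synthetic white-noise-plus-random-walk series and the averaging lengths are illustrative only, not VLBI data.

        import numpy as np

        def allan_variance(y, tau):
            """Non-overlapping Allan variance of series y at averaging length tau (samples)."""
            n = len(y) // tau
            means = y[:n * tau].reshape(n, tau).mean(axis=1)   # tau-averages
            return 0.5 * np.mean(np.diff(means) ** 2)          # half mean squared successive difference

        rng = np.random.default_rng(1)
        white = rng.standard_normal(4096)
        walk = np.cumsum(rng.standard_normal(4096)) * 0.01     # random-walk component

        for tau in (1, 4, 16, 64):
            print(tau, allan_variance(white + walk, tau))
        # White noise decays as 1/tau; the random walk makes the Allan variance grow again at long tau.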

  3. Variance of phase fluctuations of waves propagating through a random medium

    Science.gov (United States)

    Chu, Nelson C.; Kong, Jin Au; Yueh, Simon H.; Nghiem, Son V.; Fleischman, Jack G.; Ayasli, Serpil; Shin, Robert T.

    1992-01-01

    As an electromagnetic wave propagates through a random scattering medium, such as a forest, its energy is attenuated and random phase fluctuations are induced. The magnitude of the random phase fluctuations induced is important in estimating how well a Synthetic Aperture Radar (SAR) can image objects within the scattering medium. The two-layer random medium model, consisting of a scattering layer between free space and ground, is used to calculate the variance of the phase fluctuations induced between a transmitter located above the random medium and a receiver located below the random medium. The scattering properties of the random medium are characterized by a correlation function of the random permittivity fluctuations. The effective permittivity of the random medium is first calculated using the strong fluctuation theory, which accounts for large permittivity fluctuations of the scatterers. The distorted Born approximation is used to calculate the first-order scattered field. A perturbation series for the phase of the received field in the Rytov approximation is then introduced and the variance of the phase fluctuations is also calculated assuming that the transmitter and receiver are in the paraxial limit of the random medium, which allows an analytic solution to be obtained. Results are compared using the paraxial approximation, scalar Green's function formulation, and dyadic Green's function formulation. The effects studied are the dependence of the variance of the phase fluctuations on receiver location in lossy and lossless regions, medium thickness, correlation length and fractional volume of scatterers, depolarization of the incident wave, ground layer permittivity, angle of incidence, and polarization.

  4. Development and validation of a variance model for dynamic PET: uses in fitting kinetic data and optimizing the injected activity

    Energy Technology Data Exchange (ETDEWEB)

    Walker, M D; Matthews, J C; Asselin, M-C; Julyan, P J [School of Cancer and Enabling Sciences, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, M20 3LJ (United Kingdom); Watson, C C [Siemens Medical Solutions Molecular Imaging, Knoxville, TN 37932 (United States); Saleem, A; Dickinson, C; Charnley, N; Price, P M; Jones, T, E-mail: matthew.walker@manchester.ac.u [Academic Department of Radiation Oncology, Christie NHS Foundation Trust, University of Manchester, M20 4BX (United Kingdom)

    2010-11-21

    The precision of biological parameter estimates derived from dynamic PET data can be limited by the number of acquired coincidence events (prompts and randoms). These numbers are affected by the injected activity (A₀). The benefits of optimizing A₀ were assessed using a new model of data variance which is formulated as a function of A₀. Seven cancer patients underwent dynamic [¹⁵O]H₂O PET scans (32 scans) using a Biograph PET-CT scanner (Siemens), with A₀ varied (142-839 MBq). These data were combined with simulations to (1) determine the accuracy of the new variance model, (2) estimate the improvements in parameter estimate precision gained by optimizing A₀, and (3) examine changes in precision for different size regions of interest (ROIs). The new variance model provided a good estimate of the relative variance in dynamic PET data across a wide range of A₀ values and time frames for FBP reconstruction. Patient data showed that relative changes in estimate precision with A₀ were in reasonable agreement with the changes predicted by the model: Pearson's correlation coefficients were 0.73 and 0.62 for perfusion (F) and the volume of distribution (V_T), respectively. The between-scan variability in the parameter estimates agreed with the estimated precision for small ROIs (<5 mL). An A₀ of 500-700 MBq was near optimal for estimating F and V_T from abdominal [¹⁵O]H₂O scans on this scanner. This optimization improved the precision of parameter estimates for small ROIs (<5 mL), with an injection of 600 MBq reducing the standard error on F by a factor of 1.13 as compared to an injection of 250 MBq, but by the more modest factor of 1.03 as compared to A₀ = 400 MBq.

  5. Genetic co-variance functions for live weight, feed intake, and efficiency measures in growing pigs.

    Science.gov (United States)

    Coyne, J M; Berry, D P; Matilainen, K; Sevon-Aimonen, M-L; Mantysaari, E A; Juga, J; Serenius, T; McHugh, N

    2017-09-01

    The objective of the present study was to estimate genetic co-variance parameters pertaining to live weight, feed intake, and 2 efficiency traits (i.e., residual feed intake and residual daily gain) in a population of pigs over a defined growing phase using Legendre polynomial equations. The data set used consisted of 51,893 live weight records and 903,436 feed intake, residual feed intake (defined as the difference between an animal's actual feed intake and its expected feed intake), and residual daily gain (defined as the difference between an animal's actual growth rate and its expected growth rate) records from 10,201 growing pigs. Genetic co-variance parameters for all traits were estimated using random regression Legendre polynomials. Daily heritability estimates for live weight ranged from 0.25 ± 0.04 (d 73) to 0.50 ± 0.03 (d 122). Low to moderate heritability estimates were evident for feed intake, ranging from 0.07 ± 0.03 (d 66) to 0.25 ± 0.02 (d 170). The estimated heritability for residual feed intake was generally lower than those of both live weight and feed intake and ranged from 0.04 ± 0.01 (d 96) to 0.17 ± 0.02 (d 159). The heritability for feed intake and residual feed intake increased in the early stages of the test period and subsequently sharply declined, coinciding with older ages. Heritability estimates for residual daily gain ranged from 0.26 ± 0.03 (d 188) to 0.42 ± 0.03 (d 101). Genetic correlations within trait were strongest between adjacent ages but weakened as the interval between ages increased; however, the genetic correlations within all traits tended to strengthen between the extremes of the trajectory. Moderate to strong genetic correlations were evident among live weight, feed intake, and the efficiency traits, particularly in the early stage of the trial period (d 66 to 86), but weakened with age. Results from this study could be implemented into the national genetic evaluation for pigs, providing comprehensive

  6. Reducing experimental variability in variance-based sensitivity analysis of biochemical reaction systems.

    Science.gov (United States)

    Zhang, Hong-Xuan; Goutsias, John

    2011-03-21

    Sensitivity analysis is a valuable task for assessing the effects of biological variability on cellular behavior. Available techniques require knowledge of nominal parameter values, which cannot be determined accurately due to experimental uncertainty typical to problems of systems biology. As a consequence, the practical use of existing sensitivity analysis techniques may be seriously hampered by the effects of unpredictable experimental variability. To address this problem, we propose here a probabilistic approach to sensitivity analysis of biochemical reaction systems that explicitly models experimental variability and effectively reduces the impact of this type of uncertainty on the results. The proposed approach employs a recently introduced variance-based method to sensitivity analysis of biochemical reaction systems [Zhang et al., J. Chem. Phys. 134, 094101 (2009)] and leads to a technique that can be effectively used to accommodate appreciable levels of experimental variability. We discuss three numerical techniques for evaluating the sensitivity indices associated with the new method, which include Monte Carlo estimation, derivative approximation, and dimensionality reduction based on orthonormal Hermite approximation. By employing a computational model of the epidermal growth factor receptor signaling pathway, we demonstrate that the proposed technique can greatly reduce the effect of experimental variability on variance-based sensitivity analysis results. We expect that, in cases of appreciable experimental variability, the new method can lead to substantial improvements over existing sensitivity analysis techniques.
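
    A compact sketch of a first-order variance-based sensitivity index estimated by Monte Carlo, using the Saltelli pick-freeze arrangement on a toy three-parameter model. The model, distributions and sample sizes are assumptions for illustration, not the paper's biochemical reaction system.

        import numpy as np

        rng = np.random.default_rng(2)

        def model(x):
            # Toy stand-in for a biochemical response surface; 3 uncertain parameters.
            return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

        n, d = 100_000, 3
        A = rng.uniform(0.0, 1.0, (n, d))
        B = rng.uniform(0.0, 1.0, (n, d))
        fA, fB = model(A), model(B)
        var_y = np.concatenate([fA, fB]).var()

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                       # resample only the i-th input
            S_i = np.mean(fB * (model(ABi) - fA)) / var_y   # Saltelli first-order estimator
            print(f"S_{i} ~ {S_i:.3f}")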

  7. Linear mean-variance negative binomial models for analysis of orange tissue-culture data

    Directory of Open Access Journals (Sweden)

    Naratip Jansakul

    2004-09-01

    Full Text Available Negative binomial maximum likelihood regression models are commonly used to analyze overdispersed Poisson data. There are various forms of the negative binomial model with different mean-variance relationships; the most generally used are those with a linear relationship, denoted NB1, and a quadratic relationship, denoted NB2. In the literature, the NB1 model is commonly approximated by a quasi-likelihood approach. This paper discusses the possible use of the Newton-Raphson algorithm to obtain maximum likelihood estimates of the linear mean-variance negative binomial (NB1) regression model and of the overdispersion parameter. The construction of a half-normal plot with a simulated envelope for checking the adequacy of a selected NB1 model is also discussed. These procedures are applied to analyze counts of embryos from an orange tissue-culture experiment. The experimental design is a completely randomized block design with 3 sugars (maltose, lactose and galactose) at dose levels of 18, 37, 75, 110 and 150 µM. The analysis shows that the NB1 regression model with a cubic response function over the dose levels is consistent with the data.
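
    For a quick way to try the linear mean-variance form in practice, statsmodels exposes it via loglike_method='nb1'. The sketch below fits such a model to synthetic overdispersed counts with a log-linear dose effect; the data and coefficients are invented and merely stand in for the tissue-culture experiment.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)

        # Synthetic overdispersed counts with a log-linear dose effect
        # (illustrative; not the orange tissue-culture data).
        dose = np.repeat([18, 37, 75, 110, 150], 40).astype(float)
        mu = np.exp(1.0 + 0.004 * dose)
        counts = rng.negative_binomial(n=5, p=5 / (5 + mu))   # overdispersed counts

        X = sm.add_constant(dose)
        # loglike_method='nb1' gives the linear mean-variance form Var(Y) = (1 + alpha) * mu.
        nb1 = sm.NegativeBinomial(counts, X, loglike_method='nb1').fit(disp=False)
        print(nb1.params)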

  8. Probability variance CHI feature selection method for unbalanced data

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Feature selection on unbalanced text data is a difficult problem. This paper analyzes the distribution of feature items within and between classes, and the differences among documents, under an unbalanced data set. Based on measurements of word-frequency probability and document probability in unbalanced data, the paper proposes a CHI feature selection method based on probability variance, which improves the traditional chi-square statistical model by introducing an intra-class word-frequency probability factor, an inter-class document-probability concentration factor and an intra-class uniformity factor. Experiments demonstrate the effectiveness and feasibility of the method.

  9. A comparison between temporal and subband minimum variance adaptive beamforming

    DEFF Research Database (Denmark)

    Diamantis, Konstantinos; Voxen, Iben Holfort; Greenaway, Alan H.

    2014-01-01

    This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions with one transducer element transmitting and all 128 elements receiving each time ... for the temporal approach. The same numbers for the subband approach are 0.62 × 10⁹ for the point phantom and 1.33 × 10⁹ for the cyst phantom. The comparison demonstrates similar resolution but slightly lower side-lobes and higher contrast for the subband approach, at the expense of increased computation time.
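
    The core computation shared by such beamformers is the minimum variance weight w = R⁻¹a / (aᴴR⁻¹a). A bare-bones sketch with diagonal loading follows; the array size, loading factor and random channel data are illustrative assumptions, and neither the temporal nor the subband variant of the paper is reproduced here.

        import numpy as np

        def mv_weights(snapshots, steering, loading=1e-2):
            """Minimum variance apodization weights w = R^-1 a / (a^H R^-1 a).

            snapshots: (n_elements, n_obs) delay-aligned channel data
            steering:  (n_elements,) steering vector (all ones after alignment)
            """
            n_el = snapshots.shape[0]
            R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
            R += loading * np.trace(R).real / n_el * np.eye(n_el)     # diagonal loading
            Ri_a = np.linalg.solve(R, steering)
            return Ri_a / (steering.conj() @ Ri_a)

        # After delay-and-sum alignment the steering vector is all ones.
        rng = np.random.default_rng(4)
        data = rng.standard_normal((64, 200)) + 1j * rng.standard_normal((64, 200))
        w = mv_weights(data, np.ones(64, dtype=complex))
        output = w.conj() @ data            # beamformed samples
        print(output.shape)                 # (200,)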

  10. Batch variation between branchial cell cultures: An analysis of variance

    DEFF Research Database (Denmark)

    Hansen, Heinz Johs. Max; Grosell, M.; Kristensen, L.

    2003-01-01

    We present in detail how a statistical analysis of variance (ANOVA) is used to sort out the effect of an unexpected batch-to-batch variation between cell cultures. Two separate cultures of rainbow trout branchial cells were grown on permeable filter supports ("inserts"). They were supposed ... By introducing the observed difference between batches as one of the factors in an expanded three-dimensional ANOVA, we were able to overcome an otherwise crucial lack of sufficiently reproducible duplicate values. We could thereby show that the effect of changing the apical medium was much more marked when the radioactive lipid precursors were added on the apical, rather than on the basolateral, side. The insert cell cultures were obviously polarized. We argue that it is not reasonable to reject troublesome experimental results when we do not know a priori that something went wrong. The ANOVA is a very useful...

  11. Hodological resonance, hodological variance, psychosis and schizophrenia: A hypothetical model

    Directory of Open Access Journals (Sweden)

    Paul Brian eLawrie Birkett

    2011-07-01

    Full Text Available Schizophrenia is a disorder with a large number of clinical, neurobiological, and cognitive manifestations, none of which is invariably present. However, it appears to be a single nosological entity. This article considers the likely characteristics of a pathology capable of such diverse consequences. It is argued that both deficit and psychotic symptoms can be manifestations of a single pathology. A general model of psychosis is proposed in which the informational sensitivity or responsivity of a network ("hodological resonance") becomes so high that it activates spontaneously, to produce a hallucination if it is in sensory cortex, or another psychotic symptom if it is elsewhere. It is argued that this can come about because of high levels of modulation, such as those assumed present in affective psychosis, or because of high levels of baseline resonance, such as those expected in deafferentation syndromes associated with hallucinations, for example Charles Bonnet syndrome. It is further proposed that schizophrenia results from a process (probably neurodevelopmental) causing widespread increases of variance in baseline resonance; consequently some networks possess high baseline resonance and become susceptible to spontaneous activation. Deficit symptoms might result from the presence of networks with increased activation thresholds. This hodological variance model is explored in terms of schizoaffective disorder, transient psychotic symptoms, diathesis-stress models, mechanisms of antipsychotic pharmacotherapy, and the persistence of genes predisposing to schizophrenia. Predictions and implications of the model are discussed. In particular, it suggests a need for more research into psychotic states and for more single-case-based studies in schizophrenia.

  12. The Efficiency of Variance Reduction in Manufacturing and Service Systems: The Comparison of the Control Variates and Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Ergün Eraslan

    2009-01-01

    Full Text Available There has been great interest in the use of variance reduction techniques (VRTs) in simulation output analysis for the purpose of improving accuracy when the performance measurements of complex production and service systems are estimated. Therefore, a simulation output analysis that improves the accuracy and reliability of the output is required. The performance measurements are required to have a narrow confidence interval; for a given confidence level, a smaller confidence interval is better than a larger one. The width of the confidence interval, determined by its half-length, depends on the variance. Generally, increased replication of the simulation model is the easiest way to reduce variance, but this increases the simulation cost in complex-structured and large-sized manufacturing and service systems. Thus, VRTs are used in experiments to avoid the computational cost of decision-making processes while obtaining more precise results. In this study, the effect of the Control Variates (CV) and Stratified Sampling (SS) techniques in reducing the variance of the performance measurements of M/M/1 and GI/G/1 queue models is investigated for four probability distributions, utilizing randomly generated parameters for the arrival and service processes.
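
    A minimal control-variates illustration: estimating E[e^U] for U ~ U(0, 1), with U itself as the control variate (known mean 0.5). The toy integrand is an assumption chosen for brevity, not one of the queue models studied in the paper.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000
        u = rng.uniform(0.0, 1.0, n)

        y = np.exp(u)                  # target: E[e^U] = e - 1
        c = u                          # control variate with known mean E[U] = 0.5

        cov = np.cov(y, c)
        beta = cov[0, 1] / cov[1, 1]   # optimal coefficient
        y_cv = y - beta * (c - 0.5)    # adjusted estimator: same mean, smaller variance

        print("crude:", y.mean(), "var:", y.var() / n)
        print("cv   :", y_cv.mean(), "var:", y_cv.var() / n)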

  13. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Directory of Open Access Journals (Sweden)

    Daniel Bartz

    Full Text Available Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock markets we show that our proposed method leads to improved portfolio allocation.

  14. Semi-empirical prediction of moisture build-up in an electronic enclosure using analysis of variance (ANOVA)

    DEFF Research Database (Denmark)

    Shojaee Nasirabadi, Parizad; Conseil, Helene; Mohanty, Sankhya

    2016-01-01

    ... and temperature are studied. A set of experiments is done based on a fractional factorial design in order to estimate the time constant for moisture transfer into the enclosure, by fitting the experimental data to an analytical quasi-steady-state model. According to the statistical analysis, temperature and the opening length are found to be the most significant factors. Based on an analysis of variance of the derived time constants, a semi-empirical regression model is proposed to predict the moisture transfer time constant, with an adjusted R² of 0.98, which demonstrates that the model can be used for estimation...

  15. An exact upper limit for the variance bias in the carry-over model with correlated errors

    OpenAIRE

    Sailer, Oliver

    2009-01-01

    The analysis of crossover designs assuming i.i.d. errors leads to biased variance estimates whenever the true covariance structure is not spherical. As a result, the OLS F-Test for treatment differences is not valid. Bellavance et al. (Biometrics 52:607-612, 1996) use simulations to show that a modified F-Test based on an estimate of the within subjects covariance matrix allows for nearly unbiased tests. Kunert and Utzig (JRSS B 55:919-927, 1993) propose an alternative test that does not need...

  16. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  17. Estimating the variation, autocorrelation, and environmental sensitivity of phenotypic selection

    NARCIS (Netherlands)

    Chevin, Luis-Miguel; Visser, Marcel E.; Tufto, Jarle

    Despite considerable interest in temporal and spatial variation of phenotypic selection, very few methods allow quantifying this variation while correctly accounting for the error variance of each individual estimate. Furthermore, the available methods do not estimate the autocorrelation of

  18. Multilevel models for multiple-baseline data: modeling across-participant variation in autocorrelation and residual variance.

    Science.gov (United States)

    Baek, Eun Kyeng; Ferron, John M

    2013-03-01

    Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.

  19. Genetic variance and covariance components related to intra- and interpopulation recurrent selection in maize (Zea mays L.)

    Directory of Open Access Journals (Sweden)

    Arias Carlos Alberto Arrabal

    1998-01-01

    Full Text Available New genetic variance and covariance components related to intra- and interpopulation recurrent selection methods have been theoretically developed by Souza Jr. (Rev. Bras. Genet. 16: 91-105, 1993) to explain the failure of these methods to concomitantly develop hybrid and per se populations. Intra- and interpopulation half-sib progenies of 100 genotypes were sampled from the maize (Zea mays L.) populations BR-106 and BR-105 to estimate variance and covariance components and to compare the expected responses to reciprocal (RRS), intrapopulational (HSS), and modified (MRS) recurrent selection in the interpopulation hybrid and the populations per se, and to determine heterosis. Four sets of 100 progenies, two intra- and two interpopulational, were evaluated in partially balanced 10 x 10 lattices arranged in split-blocks with two replications in two years (1991/92 and 1992/93) and two locations in Piracicaba, SP. Data for ear weight, plant and ear height, and ear by plant height ratio were recorded. Populations and interpopulation crosses were high yielding and showed high breeding potential for the production of hybrids from inbred lines. Mid-parent and highest-parent heterosis were relatively high, but lower than values reported for these populations under other environmental conditions. Additive variance estimates of the populations per se and the interpopulation crosses confirmed the high potential of these materials. The magnitudes of the variance estimates for the deviations from intra- and interpopulation additive effects (… for BR-106 and … for BR-105) and of the covariance between the additive effects and these deviations (… for BR-106 and … for BR-105) indicated that these new components can significantly influence the effectiveness of breeding methods. Genetic component estimates for BR-105 had relatively small errors, with … negative for all traits; the corresponding estimates had relatively larger errors for BR-106. The MRS method was more effective than the RRS and HSS methods in producing...

  20. Two-microphone separation of speech mixtures based on interclass variance maximization.

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J

    2010-03-01

    Sparse methods for speech separation have become a widely discussed topic in acoustic signal processing. These sparse methods provide a powerful approach to the separation of several signals in the underdetermined case, i.e., when there are more sources than sensors. In this paper, a two-microphone separation method is presented. The proposed algorithm is based on grouping time-frequency points with similar directions of arrival (DOA) using a multi-level thresholding approach. The thresholds are calculated via the maximization of the interclass variance between DOA estimates and make it possible to identify angular sections in which the speakers are located with high likelihood. These sections define a set of time-frequency masks that are able to separate several sound sources in realistic scenarios and at little computational cost. Several experiments carried out under different mixing situations are discussed, showing the validity of the proposed approach.
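
    Maximizing the interclass variance of a histogram is the idea behind Otsu's method, which the thresholding described above generalizes to multiple levels. The single-threshold sketch below applies it to synthetic DOA estimates from two hypothetical speakers; the angles, spreads and bin count are invented for illustration.

        import numpy as np

        def otsu_threshold(values, bins=128):
            """Single threshold maximizing the between-class (interclass) variance."""
            hist, edges = np.histogram(values, bins=bins)
            p = hist / hist.sum()
            centers = 0.5 * (edges[:-1] + edges[1:])

            w0 = np.cumsum(p)                  # class-0 probability up to each bin
            m = np.cumsum(p * centers)         # cumulative first moment
            mu_total = m[-1]
            w1 = 1.0 - w0

            # Between-class variance sigma_b^2 = (mu_total*w0 - m)^2 / (w0*w1).
            denom = w0 * w1
            sigma_b2 = np.zeros_like(denom)
            np.divide((mu_total * w0 - m) ** 2, denom, out=sigma_b2, where=denom > 0)
            return centers[np.argmax(sigma_b2)]

        # Two hypothetical speakers at -30 and +20 degrees with estimation noise.
        rng = np.random.default_rng(6)
        doa = np.concatenate([rng.normal(-30, 5, 2000), rng.normal(20, 5, 2000)])
        print(otsu_threshold(doa))   # lands between the two angular clusters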

  1. A flexible model for the mean and variance functions, with application to medical cost data.

    Science.gov (United States)

    Chen, Jinsong; Liu, Lei; Zhang, Daowen; Shih, Ya-Chen T

    2013-10-30

    Medical cost data are often skewed to the right and heteroscedastic, having a nonlinear relation with covariates. To tackle these issues, we consider an extension to generalized linear models by assuming nonlinear associations of covariates in the mean function and allowing the variance to be an unknown but smooth function of the mean. We make no further assumption on the distributional form. The unknown functions are described by penalized splines, and the estimation is carried out using nonparametric quasi-likelihood. Simulation studies show the flexibility and advantages of our approach. We apply the model to the annual medical costs of heart failure patients in the clinical data repository at the University of Virginia Hospital System. Copyright © 2013 John Wiley & Sons, Ltd.

  2. WHOLE BRAIN GROUP NETWORK ANALYSIS USING NETWORK BIAS AND VARIANCE PARAMETERS.

    Science.gov (United States)

    Akhondi-Asl, Alireza; Hans, Arne; Scherrer, Benoit; Peters, Jurriaan M; Warfield, Simon K

    2012-05-01

    The disruption of normal function and connectivity of neural circuits is common across many diseases and disorders of the brain. This disruptive effect can be studied and analyzed using the brain's complex functional and structural connectivity network. Complex network measures from the field of graph theory have been used for this purpose in the literature. In this paper we have introduced a new approach for analyzing the brain connectivity network. In our approach the true connectivity network and each subject's bias and variance are estimated using a population of patients and healthy controls. These parameters can then be used to compare two groups of brain networks. We have used this approach for the comparison of the resting state functional MRI network of pediatric Tuberous Sclerosis Complex (TSC) patients and healthy subjects. We have shown that a significant difference between the two groups can be found. For validation, we have compared our findings with three well known complex network measures.

  3. Detection of Outliers in Panel Data of Intervention Effects Model Based on Variance of Remainder Disturbance

    Directory of Open Access Journals (Sweden)

    Yanfang Lyu

    2015-01-01

    Full Text Available The presence of outliers can result in seriously biased parameter estimates. In order to detect outliers in panel data models, this paper presents a modeling method to assess the intervention effects based on the variance of remainder disturbance using an arbitrary strictly positive twice continuously differentiable function. This paper also provides a Lagrange Multiplier (LM approach to detect and identify a general type of outlier. Furthermore, fixed effects models and random effects models are discussed to identify outliers and the corresponding LM test statistics are given. The LM test statistics for an individual-based model to detect outliers are given as a particular case. Finally, this paper performs an application using panel data and explains the advantages of the proposed method.

  4. Covariance estimators for generalized estimating equations (GEE) in longitudinal analysis with small samples.

    Science.gov (United States)

    Wang, Ming; Kong, Lan; Li, Zheng; Zhang, Lijun

    2016-05-10

    Generalized estimating equations (GEE) is a general statistical method to fit marginal models for longitudinal data in biomedical studies. The variance-covariance matrix of the regression parameter coefficients is usually estimated by a robust "sandwich" variance estimator, which does not perform satisfactorily when the sample size is small. To reduce the downward bias and improve the efficiency, several modified variance estimators have been proposed for bias-correction or efficiency improvement. In this paper, we provide a comprehensive review on recent developments of modified variance estimators and compare their small-sample performance theoretically and numerically through simulation and real data examples. In particular, Wald tests and t-tests based on different variance estimators are used for hypothesis testing, and the guideline on appropriate sample sizes for each estimator is provided for preserving type I error in general cases based on numerical results. Moreover, we develop a user-friendly R package "geesmv" incorporating all of these variance estimators for public usage in practice. Copyright © 2015 John Wiley & Sons, Ltd.

  5. The variance of sodium current fluctuations at the node of Ranvier.

    Science.gov (United States)

    Sigworth, F J

    1980-10-01

    1. Single myelinated nerve fibres 12-17 μm in diameter from Rana temporaria and Rana pipiens were voltage clamped at 2-5 °C. Potassium currents were blocked by internal Cs⁺ and external tetraethylammonium ion. Series resistance compensation was employed. 2. Sets of 80-512 identical, 20 ms depolarizations were applied, with the pulses repeated at intervals of 300-600 ms. The resulting membrane current records, filtered at 5 kHz, showed record-to-record variations of the current on the order of 1%. From each set of records the time course of the mean current and the time course of the variance were calculated. 3. The variance was assumed to arise primarily from two independent sources of current fluctuations: the stochastic gating of sodium channels and the thermal noise background in the voltage clamp. Measurement of the passive properties of the nerve preparation allowed the thermal noise variance to be estimated, and these estimates accounted for the variance observed in the presence of tetrodotoxin and at the reversal potential. 4. After the variance σ² was corrected for the contribution from the background, its relationship to the mean current I could be fitted by the function σ² = iI − I²/N, expected for N independent channels having one non-zero conductance level. The single-channel currents i corresponded to a single-channel chord conductance γ = 6.4 ± 0.9 pS (S.D.; n = 14). No significant difference in γ was observed between the two species of frogs. The size of the total population of channels ranged from 20,000 to 46,000. 5. The voltage dependence of i corresponded closely to the form of the instantaneous current-voltage relationship of the sodium conductance, except at the smallest depolarizations. The small values of i at small depolarizations may have resulted from the filtering of high-frequency components of the fluctuations. 6. It is concluded that sodium channels have only two primary levels of conductance, corresponding to...
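
    The variance-mean relation σ² = iI − I²/N is linear in the parameters i and 1/N, so it can be fitted directly by least squares. The sketch below simulates an ensemble of sweeps from binomially gated channels and recovers both parameters; all numbers are illustrative, not the frog-node values.

        import numpy as np

        rng = np.random.default_rng(7)

        # Simulate an ensemble of sweeps from N two-state channels (illustrative values).
        N, i_true = 30_000, 0.1                        # channel count and unit current (pA)
        p_open = np.linspace(0.01, 0.5, 200)           # open probability over the sweep
        sweeps = rng.binomial(N, p_open, size=(300, p_open.size)) * i_true

        mean_I = sweeps.mean(axis=0)
        var_I = sweeps.var(axis=0, ddof=1)

        # Fit var = i*I - I^2/N  (linear in i and 1/N).
        A = np.column_stack([mean_I, -mean_I ** 2])
        i_hat, invN_hat = np.linalg.lstsq(A, var_I, rcond=None)[0]
        print(f"i ~ {i_hat:.3f} pA, N ~ {1 / invN_hat:.0f} channels")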

  6. VIVA (from virus variance), a library to reconstruct icosahedral viruses based on the variance of structural models.

    Science.gov (United States)

    Cantele, Francesca; Lanzavecchia, Salvatore; Bellon, Pier Luigi

    2004-11-01

    VIVA is a software library that obtains low-resolution models of icosahedral viruses from projections observed at the electron microscope. VIVA works in a fully automatic way without any initial model. This feature eliminates the possibility of bias that could originate from the alignment of the projections to an external preliminary model. VIVA determines the viewing direction of the virus images by computation of sets of single particle reconstructions (SPR) followed by a variance analysis and classification of the 3D models. All structures are reduced in size to speed up computation; this limits the resolution of a VIVA reconstruction. The models obtained can subsequently be refined with standard libraries. To date, VIVA has successfully solved the structure of all viruses tested, some of which were considered refractory particles. The VIVA library is written in the 'C' language and is designed to run on widespread Linux computers.

  7. Variance results for the second and third reduced sample moments in neutron multiplicity counting for randomly triggered or signal-triggered counting gates

    Energy Technology Data Exchange (ETDEWEB)

    Burr, T. [Statistical Sciences Group, Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States)], E-mail: tburr@lanl.gov; Butterfield, K. [Advanced Nuclear Technology Group, Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States)

    2008-09-01

    Neutron multiplicity counting is an established method to estimate the spontaneous fission rate, and therefore also the plutonium mass for example, in a sample that includes other neutron sources. The extent to which the sample and detector obey the 'point model' assumptions impacts the estimate's total measurement error, but, in nearly all cases, for the random error contribution, it is useful to evaluate the variances of the second and third reduced sample moments of the neutron source strength. Therefore, this paper derives exact expressions for the variances and covariances of the second and third reduced sample moments for either randomly triggered or signal-triggered non-overlapping counting gates, and compares them to the corresponding variances in simulated data. Approximate expressions are also provided for the case of overlapping counting gates. These variances and covariances are useful in figure of merit calculations to predict assay performance prior to data collection. In addition, whenever real data are available, a bootstrap method is presented as an alternate but effective way to estimate these variances.
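
    As a sketch of the bootstrap alternative mentioned at the end of the abstract, the snippet below resamples gate counts to estimate the variance of a second reduced sample moment. The Poisson counts and the moment definition (E[n(n−1)]/2) are simplifying assumptions, not the paper's derivations.

        import numpy as np

        rng = np.random.default_rng(8)

        # Hypothetical multiplicity counts per gate (neutrons recorded in each gate).
        counts = rng.poisson(2.0, size=5000)

        def second_reduced_moment(x):
            # Second reduced (factorial) sample moment: mean of n(n-1)/2.
            return np.mean(x * (x - 1)) / 2.0

        boot = np.array([
            second_reduced_moment(rng.choice(counts, size=counts.size, replace=True))
            for _ in range(2000)
        ])
        print("estimate:", second_reduced_moment(counts), "bootstrap variance:", boot.var())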

  8. An Improved Fst Estimator

    OpenAIRE

    Chen, Guanjie; Yuan, Ao; Shriner, Daniel; Tekola-Ayele, Fasil; Zhou, Jie; Amy R Bentley; Zhou, Yanxun; Wang, Chuntao; Newport, Melanie J; Adeyemo, Adebowale; Charles N Rotimi

    2015-01-01

    The fixation index Fst plays a central role in ecological and evolutionary genetic studies. The estimators of Wright (F̂st1), Weir and Cockerham (F̂st2), and Hudson et al. (F̂st3) are widely used to measure genetic differences among different populations, but all have limitations. We propose a minimum variance estimator F̂stm using F̂st1 and F̂st2. We tested F̂stm in simulations and applied it to 120 unrelated East African individuals from Ethiopia and 11 s...

  9. The pricing of long and short run variance and correlation risk in stock returns

    NARCIS (Netherlands)

    Cosemans, M.

    2011-01-01

    This paper studies the pricing of long and short run variance and correlation risk. The predictive power of the market variance risk premium for returns is driven by the correlation risk premium and the systematic part of individual variance premia. Furthermore, I find that aggregate volatility risk

  10. Bayes factors for testing equality and inequality constrained hypotheses on variances

    NARCIS (Netherlands)

    Böing-Messing, Florian

    2017-01-01

    Lay Summary There are often reasons to expect certain relations between the variances of multiple populations. For example, in an educational study one might expect that the variance of students’ performances increases or decreases across grades. Alternatively, it might be expected that the variance

  11. An investigation into heterogeneity of variance for milk and fat yields of Holstein cows in Brazilian herd environments

    Directory of Open Access Journals (Sweden)

    Costa Claudio Napolis

    1999-01-01

    Full Text Available Heterogeneity of variance in Brazilian herd environments was studied using first-lactation 305-day mature equivalent (ME) milk and fat records of Holstein cows. Herds were divided into two categories, according to low or high herd-year phenotypic standard deviation for ME milk (HYSD). There were 330 sires with daughter records in both HYSD categories. Components of (co)variance, heritability, and genetic correlations for milk and fat yields were estimated using a sire model from bivariate analyses with a restricted maximum likelihood (REML) derivative-free algorithm. Sire and residual variances for milk yield in low HYSD herds were 79 and 57% of those obtained in high HYSD herds; for fat yield they were 67 and 60%, respectively. Heritabilities for milk and fat yields in low HYSD herds were larger (0.30 and 0.22) than in high HYSD herds (0.23 and 0.20). The genetic correlation between expression in low and high HYSD herds was 0.997 for milk yield and 0.985 for fat yield. The expected correlated response in low HYSD herds, based on sires selected on half-sister information from high HYSD herds, was 0.89 kg/kg for milk and 0.80 kg/kg for fat yield. Genetic evaluations in Brazil need to account for heterogeneity of variances to increase the accuracy of evaluations and the efficiency of selection for milk and fat yields of Holstein cows. Selection response will be lower in low-variance herds than in high-variance herds because of reduced differences in daughter response and among breeding values of sires in low HYSD herds. Genetic investments in sire selection to improve production are more likely to be successful in high HYSD herds than in low HYSD Brazilian herds.

  12. Population dynamics of stable flies Stomoxys calcitrans (Diptera: Muscidae) at an organic dairy farm in Denmark based on mark-recapture with destructive sub-sampling

    DEFF Research Database (Denmark)

    Pedersen, Henrik Skovgård; Nachman, Gøsta Støger

    2012-01-01

    A population of stable flies, Stomoxys calcitrans (L.), was studied on a Danish cattle farm in two successive years. Flies were captured monthly by sweep nettings and marked with fluorescent dust. Absolute population size, dilution rate, loss rate, and adult longevity were estimated by means of mark-recapture with destructive sub-sampling ... The per capita dilution rate increased with temperature and decreased with population size, whereas no effect of these factors on the per capita loss rate could be shown. Mean adult survival time was estimated to be 6.3 d, with 95% CL ranging from 4.3 to 11.1 d. The study points at the possibility...

  13. Neutrality and the response of rare species to environmental variance.

    Directory of Open Access Journals (Sweden)

    Lisandro Benedetti-Cecchi

    Full Text Available Neutral models and differential responses of species to environmental heterogeneity offer complementary explanations of species abundance distribution and dynamics. Under what circumstances one model prevails over the other is still a matter of debate. We show that the decay of similarity over time in rocky seashore assemblages of algae and invertebrates sampled over a period of 16 years was consistent with the predictions of a stochastic model of ecological drift at time scales larger than 2 years, but not at time scales between 3 and 24 months when similarity was quantified with an index that reflected changes in abundance of rare species. A field experiment was performed to examine whether assemblages responded neutrally or non-neutrally to changes in temporal variance of disturbance. The experimental results did not reject neutrality, but identified a positive effect of intermediate levels of environmental heterogeneity on the abundance of rare species. This effect translated into a marked decrease in the characteristic time scale of species turnover, highlighting the role of rare species in driving assemblage dynamics in fluctuating environments.

  14. A sparse embedding and least variance encoding approach to hashing.

    Science.gov (United States)

    Zhu, Xiaofeng; Zhang, Lei; Huang, Zi

    2014-09-01

    Hashing is becoming increasingly important in large-scale image retrieval for fast approximate similarity search and efficient data storage. Many popular hashing methods aim to preserve the kNN graph of high dimensional data points in the low dimensional manifold space, which is, however, difficult to achieve when the number of samples is big. In this paper, we propose an effective and efficient hashing approach by sparsely embedding a sample in the training sample space and encoding the sparse embedding vector over a learned dictionary. To this end, we partition the sample space into clusters via a linear spectral clustering method, and then represent each sample as a sparse vector of normalized probabilities that it falls into its several closest clusters. This actually embeds each sample sparsely in the sample space. The sparse embedding vector is employed as the feature of each sample for hashing. We then propose a least variance encoding model, which learns a dictionary to encode the sparse embedding feature, and consequently binarize the coding coefficients as the hash codes. The dictionary and the binarization threshold are jointly optimized in our model. Experimental results on benchmark data sets demonstrated the effectiveness of the proposed approach in comparison with state-of-the-art methods.

  15. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    Science.gov (United States)

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

    Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.

  16. Analysis of variance (ANOVA) models in lower extremity wounds.

    Science.gov (United States)

    Reed, James F

    2003-06-01

    Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare each of the 2 treatments with the control and the 2 treatments against each other, using 3 Student t tests. If we were to compare 4 treatment groups, we would need 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so does the likelihood of finding a difference between some pair of groups simply by chance when no real difference exists, which is by definition a Type I error. If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14, and it continues to increase rather rapidly with the number of t tests. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
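
    The inflation of the experiment-wise Type I error rate is easy to demonstrate by simulation: with three groups drawn from the same distribution, pairwise t-tests reject far more often than a single one-way ANOVA. A sketch with SciPy follows; the group sizes and simulation count are arbitrary choices.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        n_sim, alpha = 5000, 0.05
        t_false, f_false = 0, 0

        for _ in range(n_sim):
            # Three groups drawn from the SAME distribution (no real difference).
            g = [rng.normal(0, 1, 20) for _ in range(3)]
            pairs = [(0, 1), (0, 2), (1, 2)]
            if any(stats.ttest_ind(g[a], g[b]).pvalue < alpha for a, b in pairs):
                t_false += 1
            if stats.f_oneway(*g).pvalue < alpha:
                f_false += 1

        print("three t-tests:", t_false / n_sim)   # roughly .12-.14
        print("one-way ANOVA:", f_false / n_sim)   # roughly .05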

  17. Improving Signal Detection using Allan and Theo Variances

    Science.gov (United States)

    Hardy, Andrew; Broering, Mark; Korsch, Wolfgang

    2017-09-01

    Precision measurements often deal with small signals buried in electronic noise. Extraction of these signals can be enhanced through digital signal processing, and improving these techniques increases the attainable signal-to-noise ratio. Studies presently performed at the University of Kentucky utilize the electro-optic Kerr effect to understand cell-charging effects within ultra-cold neutron storage cells. This work is relevant for the neutron electric dipole moment (nEDM) experiment at Oak Ridge National Laboratory. These investigations, and future investigations in general, will benefit from the improved analysis techniques illustrated here. This project showcases various methods for determining the optimum duration over which data should be gathered. Typically, extending the measuring time of an experimental run reduces the averaged noise; however, experiments also encounter drift due to fluctuations, which mitigates the benefits of extended data gathering. By comparing FFT averaging techniques with Allan and Theo variance measurements, quantifiable differences in signal detection are presented. This research is supported by DOE Grants DE-FG02-99ER411001 and DE-AC05-00OR22725.

  18. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

    Directory of Open Access Journals (Sweden)

    Renard Jean-Paul

    2005-06-01

    Full Text Available Abstract Background Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility of evaluating multiple sources of variation in an experiment. Results AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, and (3) to propose two models for a gene's variance estimation and to perform a correction for multiple comparisons. Conclusion AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.

  19. Advanced Variance Reduction for Global k-Eigenvalue Simulations in MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Edward W. Larsen

    2008-06-01

    The "criticality" or k-eigenvalue of a nuclear system determines whether the system is critical (k=1), or the extent to which it is subcritical (k<1) or supercritical (k>1). Calculations of k are frequently performed at nuclear facilities to determine the criticality of nuclear reactor cores, spent nuclear fuel storage casks, and other fissile systems. These calculations can be expensive, and current Monte Carlo methods have certain well-known deficiencies. In this project, we have developed and tested a new "functional Monte Carlo" (FMC) method that overcomes several of these deficiencies. The current state-of-the-art Monte Carlo k-eigenvalue method estimates the fission source for a sequence of fission generations (cycles), during each of which M particles per cycle are processed. After a series of "inactive" cycles during which the fission source "converges," a series of "active" cycles are performed. For each active cycle, the eigenvalue and eigenfunction are estimated; after N >> 1 active cycles are performed, the results are averaged to obtain estimates of the eigenvalue and eigenfunction and their standard deviations. This method has several disadvantages: (i) the estimate of k depends on the number M of particles per cycle, (iii) for optically thick systems, the eigenfunction estimate may not converge due to undersampling of the fission source, and (iii) since the fission source in any cycle depends on the estimated fission source from the previous cycle (the fission sources in different cycles are correlated), the estimated variance in k is smaller than the real variance. For an acceptably large number M of particles per cycle, the estimate of k is nearly independent of M; this essentially takes care of item (i). Item (ii) can be addressed by taking M sufficiently large, but for optically thick systems a sufficiently large M can easily be unrealistic. Item (iii) cannot be accounted for by taking M or N sufficiently large; it is an inherent deficiency due

  20. Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using ... a weighted least-squares (WLS) DOA estimator.